<p><br>
On Jan 14, 2013 8:49 PM, "Joe Julian" <<a href="mailto:joe@julianfamily.org">joe@julianfamily.org</a>> wrote:<br>
><br>
> That's impressive, thanks. <br>
><br>
> To be clear, that follows the second suggestion, which requires the library in the 3.4 qa release, right? </p>
<p>Yes, this uses libgfapi from 3.4. So essentially it is better to use libgfapi than a FUSE mount from the guest.</p>
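<p>For anyone curious what "using libgfapi" looks like at the API level, here is a minimal, untested sketch of opening and reading a file on a gluster volume directly through libgfapi, with no FUSE mount involved. The volume name (test), host (bharata) and file path are placeholders taken from the examples quoted below, not an excerpt of the QEMU driver; build with something like gcc -o gfapi-read gfapi-read.c -lgfapi.<br>
<br>
#include &lt;glusterfs/api/glfs.h&gt;<br>
#include &lt;fcntl.h&gt;<br>
#include &lt;stdio.h&gt;<br>
<br>
int main(void)<br>
{<br>
    /* Attach to the "test" volume via the management host "bharata"<br>
       (placeholder names), bypassing the kernel/FUSE path entirely. */<br>
    glfs_t *fs = glfs_new("test");<br>
    glfs_set_volfile_server(fs, "tcp", "bharata", 24007);<br>
    if (glfs_init(fs) != 0) {<br>
        perror("glfs_init");<br>
        return 1;<br>
    }<br>
<br>
    /* Read one block straight off the volume. */<br>
    glfs_fd_t *fd = glfs_open(fs, "/data1/file1", O_RDONLY);<br>
    if (fd) {<br>
        char buf[4096];<br>
        ssize_t ret = glfs_read(fd, buf, sizeof(buf), 0);<br>
        printf("read %zd bytes\n", ret);<br>
        glfs_close(fd);<br>
    }<br>
<br>
    glfs_fini(fs);<br>
    return 0;<br>
}</p>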
<p>><br>
> Bharata B Rao <<a href="mailto:bharata.rao@gmail.com">bharata.rao@gmail.com</a>> wrote:<br>
>><br>
>> Joe,<br>
>><br>
>> On Sun, Jan 13, 2013 at 8:41 PM, Joe Julian <<a href="mailto:joe@julianfamily.org">joe@julianfamily.org</a>> wrote:<br>
>><br>
>>> You have two options:<br>
>>> 1. Mount the GlusterFS volume from within the VM and host the data you're<br>
>>> operating on there. This avoids all the additional overhead of managing a<br>
>>> filesystem on top of FUSE.<br>
>><br>
>><br>
>> In my very limited testing, I have found that passing the gluster data<br>
>> volume to QEMU as a 2nd gluster drive (the 1st being the VM image<br>
>> itself) gives better performance than mounting the gluster volume<br>
>> directly from the guest.<br>
>><br>
>> Here are some numbers from FIO read and write:<br>
>> Env: Dual core x86_64 system with F17 running the 3.6.10-2.fc17.x86_64<br>
>> kernel for the host and F18 running 3.6.6-3.fc18.x86_64 for the guest.<br>
>><br>
>> Case 1: Mount the gluster volume (test) from inside the guest and run<br>
>> FIO reads and writes on the mounted gluster drive.<br>
>> [host]# qemu -drive file=gluster://bharata/rep/F18,if=virtio,cache=none<br>
>> [guest]# glusterfs -s bharata --volfile-id=test /mnt<br>
>><br>
>> Case 2: Specify the gluster volume (test) as a drive to QEMU itself.<br>
>> [host]# qemu -drive<br>
>> file=gluster://bharata/rep/F18,if=virtio,cache=none -drive<br>
>> file=gluster://bharata/test/F17,if=virtio,cache=none<br>
>> [guest]# mount /dev/vdb3 /mnt<br>
>><br>
>> In both of the above cases, the VM image (F18) resides on the GlusterFS<br>
>> volume (rep), and the FIO reads and writes are performed on /mnt/data1.<br>
>><br>
>> FIO aggregated bandwidth (kB/s) (avg of 5 runs):<br>
>>        Case 1    Case 2<br>
>> Read   28740     52309<br>
>> Write  27578     48765<br>
>><br>
>> FIO load file is as follows:<br>
>> [global]<br>
>> ioengine=libaio<br>
>> direct=1<br>
>> rw=read # rw=write for write test<br>
>> bs=128k<br>
>> size=512m<br>
>> directory=/mnt/data1<br>
>> [file1]<br>
>> iodepth=4<br>
>> [file2]<br>
>> iodepth=32<br>
>> [file3]<br>
>> iodepth=8<br>
>> [file4]<br>
>> iodepth=16<br>
>><br>
>> Of course this is just one case; I wonder if you have seen better<br>
>> numbers for the guest FUSE mount case with any of the benchmarks you use?<br>
>><br>
>>> 2. Try the 3.4 qa release and native GlusterFS support in the latest<br>
>>> qemu-kvm.<br>
>><br>
>><br>
>> Regards,<br>
>> Bharata.</p>