Bharata B Rao:<div> Thanks!! This is what I wanted! </div><div> I'll try your patch and run some tests.</div><div> BTW: why does the gluster-qemu <span style="background-color:rgb(255,255,255);font-family:verdana,sans-serif;line-height:13.333333015441895px">integration have so much higher </span><span style="background-color:rgb(255,255,255);color:rgb(34,34,34);font-size:14px;line-height:23px">performance than the FUSE mount?</span></div>
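<div>As a quick back-of-the-envelope on the aggrb figures Bharata reports below, the three scenarios compare like this (a throwaway Python sketch; the three bandwidth numbers are copied verbatim from his results, the labels are my own shorthand):</div>

```python
# Aggregate read bandwidth (KB/s) from the FIO results quoted below.
aggrb = {
    "qemu gluster:// data disk (scenario 1)": 47836,
    "host FUSE mount for data disk (scenario 2)": 20894,
    "FUSE mount inside the guest (scenario 3)": 36936,
}

# Use the slowest configuration (host-side FUSE mount) as the baseline
# and print each scenario's speedup relative to it.
baseline = aggrb["host FUSE mount for data disk (scenario 2)"]
for name, kbps in aggrb.items():
    print(f"{name}: {kbps} KB/s ({kbps / baseline:.2f}x vs. host FUSE)")
```

<div>So the gluster:// block backend comes out roughly 2.3x faster than routing the data disk through a host-side FUSE mount, with the in-guest FUSE mount in between at about 1.8x.</div>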
<div><br></div><div>Best Regards,</div><div>yinyin<br><br><div class="gmail_quote">On Thu, Aug 9, 2012 at 10:02 PM, John Mark Walker <span dir="ltr"><<a href="mailto:johnmark@redhat.com" target="_blank">johnmark@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Bharata:<br>
<br>
Thanks for writing this up. I bet someone could take this information and flesh out more scenarios + tests, posting the results on <a href="http://gluster.org" target="_blank">gluster.org</a>. Any takers?<br>
<span class="HOEnZb"><font color="#888888"><br>
-JM<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
----- Original Message -----<br>
> On Wed, Aug 8, 2012 at 11:50 PM, John Mark Walker<br>
> <<a href="mailto:johnmark@redhat.com">johnmark@redhat.com</a>> wrote:<br>
> ><br>
> > ----- Original Message -----<br>
> >><br>
> >> Or change your perspective. Do you NEED to write to the VM image?<br>
> >><br>
> >> I write to fuse mounted GlusterFS volumes from within my VMs. The<br>
> >> VM<br>
> >> image is just for the OS and application. With the data on a<br>
> >> GlusterFS<br>
> >> volume, I get the normal fuse client performance from within my<br>
> >> VM.<br>
><br>
> I ran FIO on 3 scenarios and here are the comparison numbers from<br>
> them:<br>
><br>
> Scenario 1: GlusterFS block backend of QEMU is used for root and data<br>
> partition (a gluster volume)<br>
> ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm --nographic -m 1024<br>
> -smp 4 -drive file=gluster://bharata/rep/F16,if=virtio,cache=none<br>
> -drive file=gluster://bharata/test/F17,if=virtio,cache=none<br>
><br>
> Scenario 2: GlusterFS block backend of QEMU for root and GlusterFS<br>
> FUSE mount for data partition<br>
> ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm --nographic -m 1024<br>
> -smp 4 -drive file=gluster://bharata/rep/F16,if=virtio,cache=none<br>
> -drive file=/mnt/F17,if=virtio,cache=none<br>
> (Here data partition is FUSE mounted on host at /mnt)<br>
><br>
> Scenario 3: GlusterFS block backend of QEMU for root and FUSE<br>
> mounting gluster data partition from inside VM<br>
> ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm --nographic -m 1024<br>
> -smp 4 -drive file=gluster://bharata/rep/F16,if=virtio,cache=none<br>
><br>
> FIO exercises the data partition in each case.<br>
><br>
> Here are the numbers:<br>
><br>
> Scenario 1: aggrb=47836KB/s<br>
> Scenario 2: aggrb=20894KB/s<br>
> Scenario 3: aggrb=36936KB/s<br>
><br>
> FIO load file I used is this:<br>
> ; Read 4 files with aio at different depths<br>
> [global]<br>
> ioengine=libaio<br>
> direct=1<br>
> rw=read<br>
> bs=128k<br>
> size=512m<br>
> directory=/data1<br>
> [file1]<br>
> iodepth=4<br>
> [file2]<br>
> iodepth=32<br>
> [file3]<br>
> iodepth=8<br>
> [file4]<br>
> iodepth=16<br>
><br>
> Regards,<br>
> Bharata.<br>
><br>
> _______________________________________________<br>
> Gluster-devel mailing list<br>
> <a href="mailto:Gluster-devel@nongnu.org">Gluster-devel@nongnu.org</a><br>
> <a href="https://lists.nongnu.org/mailman/listinfo/gluster-devel" target="_blank">https://lists.nongnu.org/mailman/listinfo/gluster-devel</a><br>
><br>
<br>
</div></div></blockquote></div><br></div>