<div dir="ltr">Can you please post the backtrace and logs from the crash?<div><br></div><div>Thanks,</div><div>Avati</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Jul 26, 2013 at 2:42 AM, Guido De Rosa <span dir="ltr">&lt;<a href="mailto:guido.derosa@vemarsas.it" target="_blank">guido.derosa@vemarsas.it</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br>
<br>
I&#39;m using QEMU (1.5.1) compiled with GlusterFS  native support, i.e.<br>
linked to libgfapi (v3.4.0) to bypass FUSE overhead and improve<br>
performance.<br>
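>
> For reference, I start the VM with the disk image addressed through a
> gluster:// URI, roughly as below; the host, volume, and image names
> are just placeholders for my actual setup:
>
>     qemu-system-x86_64 -enable-kvm -m 1024 \
>         -drive file=gluster://gluster-host/testvol/vmdisk.qcow2,if=virtio,cache=none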
>
> All of this is explained in Bharata Rao's blog:
>
> http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/
>
> The problem I encounter is described in my comment there:
>
> http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/#comment-520
>
> As the author suggests, I'm asking here what the current state of
> libgfapi is when a client application runs against a replicated
> volume that undergoes add-brick or replace-brick operations. In my
> tests with QEMU it simply crashes whenever bricks are added or
> replaced, which never happens on FUSE mounts.
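>
> To reproduce, I run volume operations along these lines while the
> guest is running (the volume name, replica count, and brick paths are
> placeholders; this is a sketch of my procedure, not an exact
> transcript):
>
>     # expand the replica set
>     gluster volume add-brick testvol replica 3 server3:/export/brick1
>
>     # or swap one brick for another
>     gluster volume replace-brick testvol server1:/export/brick1 \
>         server3:/export/brick1 commit force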
>
> Thanks very much.
>
> Guido
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel