<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Apr 17, 2014 at 6:58 PM, Bharata B Rao <span dir="ltr"><<a href="mailto:bharata.rao@gmail.com" target="_blank">bharata.rao@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div><div><div><div><div>Hi,<br><br></div>In QEMU, we initialize gfapi in the following manner:<br>
<br>********************<br></div>glfs = glfs_new();<br></div><div>if (!glfs)<br>
</div><div>Â Â goto out;<br></div>if (glfs_set_volfile_server() < 0)<br></div>Â Â goto out;<br></div>if (glfs_set_logging() < 0)<br></div>Â Â goto out;<br></div>if (glfs_init(glfs))<br></div>Â Â goto out;<br><br>...<br>
<br></div>out:<br></div>if (glfs)<br></div>Â Â glfs_fini(glfs)<br>*********************<br><br clear="all"><div><div><div><div><div><div><div><div><div><div><div><div><div>Now if either glfs_set_volfile_server() or glfs_set_logging() fails, we end up doing glfs_fini() which eventually hangs in glfs_lock().<br>
>
> #0  0x00007ffff554a595 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
> #1  0x00007ffff79d312e in glfs_lock (fs=0x555556331310) at glfs-internal.h:176
> #2  0x00007ffff79d5291 in glfs_active_subvol (fs=0x555556331310) at glfs-resolve.c:811
> #3  0x00007ffff79c9f23 in glfs_fini (fs=0x555556331310) at glfs.c:753
>
> Note that we haven't called glfs_init() in this failure case.
>
> - Is this failure expected? If so, what is the recommended way of
>   releasing the glfs object?
> - Does glfs_fini() depend on glfs_init() having worked successfully?

This is the lock the backtrace ends up in, from glfs-internal.h:

170 static inline int
171 glfs_lock (struct glfs *fs)
172 {
173         pthread_mutex_lock (&fs->mutex);
174
175         while (!fs->init)
176                 pthread_cond_wait (&fs->cond, &fs->mutex);

glfs_lock() indeed works only once glfs_init() has succeeded: in your
failure case fs->init is never set and nothing ever signals fs->cond, so
the pthread_cond_wait() at line 176 blocks forever.
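A minimal caller-side workaround, as a sketch: remember whether
glfs_init() succeeded and call glfs_fini() only in that case. The
function name and the transport/host/port and logging arguments below
are illustrative assumptions, not QEMU's actual code:

********************
#include <stddef.h>
#include <glusterfs/api/glfs.h>

static struct glfs *
qemu_gfapi_init_sketch (const char *volname, const char *host)
{
        struct glfs *glfs = NULL;
        int initialized = 0;

        glfs = glfs_new (volname);
        if (!glfs)
                goto out;
        /* illustrative transport/port; QEMU derives these from the URI */
        if (glfs_set_volfile_server (glfs, "tcp", host, 24007) < 0)
                goto out;
        if (glfs_set_logging (glfs, "-", 4) < 0)
                goto out;
        if (glfs_init (glfs))
                goto out;
        initialized = 1;

        /* ... any later setup step may still 'goto out' ... */

        return glfs;

out:
        if (glfs && initialized)
                glfs_fini (glfs);  /* safe: fs->init is set by now */
        /* Before glfs_init() succeeds there is no safe teardown today,
           so the handle has to be leaked until a counterpart to
           glfs_new() (see the sketch further down) exists. */
        return NULL;
}
********************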
We can call glfs_unset_volfile_server() in the error case of
glfs_set_volfile_server() as good practice. But it does look like we
need an opposite of glfs_new() (maybe glfs_destroy) for cases like
these, to clean up the stuff that glfs_new() allocated; something along
the lines of the sketch below.
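Purely to illustrate the idea, a hypothetical glfs_destroy() might look
like this. It would have to live inside libgfapi next to glfs_new(),
since it needs glfs-internal.h; fs->mutex, fs->cond and fs->init are
known from the excerpt above, but the rest of the real struct glfs
layout (and whatever else glfs_new() allocates) is an assumption here:

********************
/* Hypothetical counterpart to glfs_new(), for handles on which
 * glfs_init() has never been run or never succeeded. After a
 * successful glfs_init(), glfs_fini() remains the teardown path. */
int
glfs_destroy (struct glfs *fs)
{
        if (!fs)
                return -1;

        /* Release whatever glfs_new() set up. fs->mutex and fs->cond
         * are visible in glfs-internal.h; any other members that
         * glfs_new() allocates would need to be freed here as well. */
        pthread_mutex_destroy (&fs->mutex);
        pthread_cond_destroy (&fs->cond);
        free (fs);

        return 0;
}
********************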
That's my 2 cents... hope to hear from other Gluster core folks on this.

thanx,
deepak

<div dir="ltr"><div><div><div><div><div><div><div><div><div><div><div><div><div></div><div>- Since QEMU-GlusterFS driver was developed when libgfapi was very new, can gluster developers take a look at the order of the glfs_* calls we are making in QEMU and suggest any changes, improvements or additions now given that libgfapi has seen a lot of development ?<br>
>
> Regards,
> Bharata.
> --
> http://raobharata.wordpress.com/