<div dir="ltr">On Tue, Jul 30, 2013 at 7:47 AM, Roberto De Ioris <span dir="ltr"><<a href="mailto:roberto@unbit.it" target="_blank">roberto@unbit.it</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im"><br>
> On Mon, Jul 29, 2013 at 10:55 PM, Anand Avati <<a href="mailto:anand.avati@gmail.com">anand.avati@gmail.com</a>><br>
> wrote:<br>
><br>
><br>
</div><div class="im">> I am assuming the module in question is this -<br>
> <a href="https://github.com/unbit/uwsgi/blob/master/plugins/glusterfs/glusterfs.c" target="_blank">https://github.com/unbit/uwsgi/blob/master/plugins/glusterfs/glusterfs.c</a>.<br>
> I<br>
> see that you are not using the async variants of any of the glfs calls so<br>
> far. I also believe you would like these "synchronous" calls to play<br>
> nicely<br>
> with Coro:: by yielding in a compatible way (and getting woken up when<br>
> response arrives in a compatible way) - rather than implementing an<br>
> explicit glfs_stat_async(). The ->request() method does not seem to<br>
> naturally allow the use of "explicitly asynchronous" calls within.<br>
><br>
> Can you provide some details of the event/request management in use? If<br>
> possible, I would like to provide hooks for yield and wakeup primitives in<br>
> gfapi (which you can wire with Coro:: or anything else) such that these<br>
> seemingly synchronous calls (glfs_open, glfs_stat etc.) don't starve the<br>
> app thread without yielding.<br>
><br>
> I can see those hooks having a benefit in the qemu gfapi driver too,<br>
> removing a bit of code there which integrates callbacks into the event<br>
> loop<br>
> using pipes.<br>
><br>
> Avati<br>
><br>
><br>
<br>
</div>This is a prototype of the async way:<br>
<br>
<a href="https://github.com/unbit/uwsgi/blob/master/plugins/glusterfs/glusterfs.c#L43" target="_blank">https://github.com/unbit/uwsgi/blob/master/plugins/glusterfs/glusterfs.c#L43</a><br>
<br>
Basically, once the async request is sent, the uWSGI core (it can be a<br>
coroutine, a green thread, or another callback) waits for a signal (via a<br>
pipe [could be eventfd() on Linux]) of the callback's completion:<br>
<br>
<a href="https://github.com/unbit/uwsgi/blob/master/plugins/glusterfs/glusterfs.c#L78" target="_blank">https://github.com/unbit/uwsgi/blob/master/plugins/glusterfs/glusterfs.c#L78</a><br>
<br>
the problem is that this approach is racy with respect to the<br>
uwsgi_glusterfs_async_io structure. </blockquote><div><br></div><div>It is probably OK, since you are waiting for the completion of each AIO request before issuing the next. One question I have about your usage: who drains the "\1" written to the pipe in uwsgi_glusterfs_read_async_cb()? Since the same pipe is re-used for the next read chunk, won't you get an immediate wake-up if you poll on the pipe without draining it?</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Can I assume that after glfs_close() all of<br>
the pending callbacks are cleared?</blockquote><div><br></div><div>With the way you are using the _async() calls, you do have that guarantee, because you are waiting for the completion of each AIO request right after issuing it.</div>
<div><br></div><div>The enhancement to gfapi I was proposing is to expose hooks at the yield() and wake() points, so that external consumers can wire in their own ways of switching out of the stack. This is still a half-baked idea, but it would let you use only glfs_read(), glfs_stat(), etc. (and NOT the explicit async variants), and the hooks would let you do wait_read_hook() and write(pipefd, '\1') respectively, in a generic way independent of the actual call.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"> That way I could simply<br>
deallocate it (right now it is on the stack) at the end of the request.<br></blockquote><div><br></div><div>You would only need to do all of that if you wanted multiple outstanding AIOs at the same time. From what I see, you just need co-operative waiting until call completion.</div>
<div><br></div><div>Also note that the ideal block size for performing I/O is 128KB; 8KB is too small for a distributed filesystem.</div><div><br></div><div>Avati</div></div></div></div>