<div dir="ltr"><div><div><div><div><div>We've noticed that gfapi threads won't die until process exit, they aren't joined to in glfs_fini(). Is that expected? The following will create 4*N threads:<br><br></div>
for( idx=0; idx<N; ++idx) {<br></div> glfs_new<br></div> glfs_set_volfile_server<br></div> glfs_init<br></div><div> // pause a bit here<br></div> glfs_fini<br>}<br><div><br><div><div><div><div><div>-K<br>
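A minimal sketch for watching the count grow (an illustration, not from the
original mail; assumes Linux and reads the Threads: field from
/proc/self/status):

    #include <stdio.h>

    /* Return the current number of threads in this process, or -1 on error. */
    static int thread_count(void)
    {
        char line[256];
        int n = -1;
        FILE *f = fopen("/proc/self/status", "r");
        if (!f)
            return -1;
        while (fgets(line, sizeof(line), f))
            if (sscanf(line, "Threads: %d", &n) == 1)
                break;
        fclose(f);
        return n;
    }

Printing thread_count() before glfs_new() and again after glfs_fini() in the
loop above should show the total climbing by roughly four per iteration if
the child threads are never joined.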
On Fri, Jan 31, 2014 at 9:07 AM, Kelly Burkhart <kelly.burkhart@gmail.com> wrote:
> Thanks Anand,
>
> I notice three different kinds of threads: gf_timer_proc and
> syncenv_processor in libglusterfs, and glfs_poller in the api. Right off
> the bat, two syncenv threads are created plus one each of the other two.
> In my limited testing, it doesn't seem to take much for more threads to
> be created.
>
> The reason I'm concerned is that we intend to run our gluster client on a
> machine with all but one core dedicated to latency-critical apps. The
> remaining core will handle everything else. In this scenario, creating
> scads of threads seems likely to be a pessimization compared to a single
> thread handling everything in an epoll loop. Would any of you familiar
> with the guts of gluster predict a problem with pegging a gfapi client
> and all of its child threads to a single core?
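For reference, one way to do that pegging is to set the affinity mask before
glfs_new(), since threads spawned afterwards inherit it (a sketch, assuming
Linux and core 0; equivalent to launching the process under taskset -c 0):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Pin the calling thread to CPU 0; threads created later inherit the
     * mask, so call this before glfs_new(). */
    static void pin_to_core0(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0)
            perror("sched_setaffinity");
    }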
>
> BTW, attached is a simple patch to help me track which threads are
> created. It's Linux-specific, but I think it's useful: it adds an
> identifier and instance count to each kind of child thread (see the
> naming sketch after the output below), so I see this in top:
>
>     top - 08:35:47 up 48 min,  3 users,  load average: 0.12, 0.07, 0.05
>     Tasks:   9 total,   0 running,   9 sleeping,   0 stopped,   0 zombie
>     Cpu(s):  0.2%us,  0.1%sy,  0.0%ni, 98.9%id,  0.0%wa,  0.0%hi,  0.7%si,  0.0%st
>     Mem:     16007M total,   1372M used,  14634M free,    96M buffers
>     Swap:     2067M total,      0M used,   2067M free,   683M cached
>
>       PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>     22979 kelly     20   0  971m 133m  16m S    0  0.8  0:00.06 tst
>     22987 kelly     20   0  971m 133m  16m S    0  0.8  0:00.00 tst/sp:0
>     22988 kelly     20   0  971m 133m  16m S    0  0.8  0:00.00 tst/sp:1
>     22989 kelly     20   0  971m 133m  16m S    0  0.8  0:00.03 tst/gp:0
>     22990 kelly     20   0  971m 133m  16m S    0  0.8  0:00.00 tst/tm:0
>     22991 kelly     20   0  971m 133m  16m S    0  0.8  0:00.00 tst/sp:2
>     22992 kelly     20   0  971m 133m  16m S    0  0.8  0:00.00 tst/sp:3
>     22993 kelly     20   0  971m 133m  16m S    0  0.8  0:01.98 tst/gp:1
>     22994 kelly     20   0  971m 133m  16m S    0  0.8  0:00.00 tst/tm:1
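The sp/gp/tm tags presumably map to syncenv_processor, glfs_poller, and
gf_timer_proc. The patch itself is attached to the original mail; the naming
is presumably done with something like the following in each thread's start
routine (a sketch, not the actual patch; comm names set this way are
truncated to 15 characters by the kernel):

    #include <stdio.h>
    #include <sys/prctl.h>

    /* Tag the current thread so it shows up as e.g. "tst/sp:0" in top.
     * "tst" stands in for the process name here. */
    static void name_thread(const char *tag, int instance)
    {
        char name[16];
        snprintf(name, sizeof(name), "tst/%s:%d", tag, instance);
        prctl(PR_SET_NAME, (unsigned long)name, 0, 0, 0);
    }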
>
> Thanks,
>
> -K
>
> On Thu, Jan 30, 2014 at 4:38 PM, Anand Avati <avati@gluster.org> wrote:
>> Thread count is independent of the number of servers. The number of
>> sockets/connections is a function of the number of servers/bricks. There
>> is a minimum set of threads (the timer thread, syncop exec threads,
>> io-threads, the epoll thread, and, depending on the interconnect, RDMA
>> event-reaping threads), and the count of some of them (syncop and
>> io-threads) depends on the workload. All communication with servers is
>> completely asynchronous, and we do not spawn a new thread per server.
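The "asynchronous, no thread per server" model described here is the usual
single-threaded event loop multiplexing many sockets; a minimal epoll sketch
of the idea (illustrative only, not GlusterFS's actual event code):

    #include <stdio.h>
    #include <sys/epoll.h>

    /* One thread servicing many server connections: wait for readiness
     * events and dispatch, rather than dedicating a thread per socket. */
    static void event_loop(int epfd)
    {
        struct epoll_event ev[64];
        for (;;) {
            int n = epoll_wait(epfd, ev, 64, -1);
            if (n < 0) {
                perror("epoll_wait");
                break;
            }
            for (int i = 0; i < n; i++) {
                /* ev[i].data.ptr identifies the connection; read the
                 * RPC reply and invoke its completion callback here. */
            }
        }
    }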
>>
>> HTH,
>> Avati
>>
>> On Thu, Jan 30, 2014 at 1:17 PM, James <purpleidea@gmail.com> wrote:
>>> On Thu, Jan 30, 2014 at 4:15 PM, Paul Cuzner <pcuzner@redhat.com> wrote:
>>> > Wouldn't the thread count relate to the number of bricks in the
>>> > volume, rather than peers in the cluster?
>>>
>>> My naive understanding is:
>>>
>>> 1) Yes, you should expect to see one connection to each brick.
>>>
>>> 2) Some of the "scaling gluster to 1000 nodes" work might address the
>>> issue, so as to avoid 1000 * (brick count per server) connections.
>>>
>>> But yeah, Kelly: I think you're seeing the right number of threads.
>>> This is outside my expertise, though.
>>>
>>> James
>>>
>>> _______________________________________________
>>> Gluster-devel mailing list
>>> Gluster-devel@nongnu.org
>>> https://lists.nongnu.org/mailman/listinfo/gluster-devel