<div dir="ltr">Please provide the full client and server logs (in a bug report). The snippets give some hints, but are not very meaningful without the full context/history since mount time (they have after-the-fact symptoms, but not the part which show the reason why disconnects happened).<div>
Even before looking into the full logs, here are some quick observations:

- write-behind-window-size = 1024MB seems *excessively* high. Please set it back to the 1MB default and check whether stability improves (a CLI sketch follows this list).
- I see RDMA is enabled on the volume. Are you mounting clients through RDMA? If so, for diagnostic purposes, can you mount through TCP and check whether stability improves? If you are using RDMA with such a high write-behind-window-size, spurious ping-timeouts during heavy writes are almost a certainty: the RDMA driver has limited flow control, and such a large window can easily congest all the RDMA buffers, resulting in spurious ping-timeouts and disconnections.
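Both experiments can be run from the CLI; a minimal sketch follows. The volume name 'gl' comes from your 'gluster volume info' below, the mount point is hypothetical, and on a tcp,rdma volume the transport=tcp mount option should force the TCP path:

  # revert write-behind to its 1MB default
  gluster volume set gl performance.write-behind-window-size 1MB

  # for the diagnostic run, remount a client explicitly over TCP
  mount -t glusterfs -o transport=tcp bs1:/gl /mnt/gl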
Avati

On Thu, Dec 12, 2013 at 5:03 PM, harry mangalam <harry.mangalam@uci.edu> wrote:
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Hi All,</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">(Gluster Volume Details at bottom)</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">I&#39;ve posted some of this previously, but even after various upgrades, attempted fixes, etc, it remains a problem.</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Short version:  Our gluster fs (~340TB) provides scratch space for a ~5000core academic compute cluster.  </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Much of our load is streaming IO, doing a lot of genomics work, and that is the load under which we saw this latest failure.</p>

<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Under heavy batch load, especially array jobs, where there might be several 64core nodes doing I/O on the 4servers/8bricks, we often get job failures that have the following profile:</p>

<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Client POV:</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Here is a sampling of the client logs (/var/log/glusterfs/gl.log) for all compute nodes that indicated interaction with the user&#39;s files</p>

<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">&lt;<a href="http://pastie.org/8548781" target="_blank">http://pastie.org/8548781</a>&gt;</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Here are some client Info logs that seem fairly serious:</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">&lt;<a href="http://pastie.org/8548785" target="_blank">http://pastie.org/8548785</a>&gt;</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">The errors that referenced this user were gathered from all the nodes that were running his code (in compute*) and agglomerated with:</p>

<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">cut -f2,3 -d&#39;]&#39; compute* |cut -f1 -dP | sort | uniq -c | sort -gr </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">and placed here to show the profile of errors that his run generated.</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">&lt;<a href="http://pastie.org/8548796" target="_blank">http://pastie.org/8548796</a>&gt;</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">so 71 of them were:</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">  W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-gl-client-7: remote operation failed: Transport endpoint is not connected. </p>

<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">etc</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">We&#39;ve seen this before and previously discounted it bc it seems to have been related to the problem of spurious NFS-related bugs, but now I&#39;m wondering whether it&#39;s a real problem. </p>

<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Also the &#39;remote operation failed: Stale file handle. &#39; warnings.</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">There were no Errors logged per se, tho some of the W&#39;s looked fairly nasty, like the &#39;dht_layout_dir_mismatch&#39;</p>

<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">From the server side, however, during the same period, there were:</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">0 Warnings about this user&#39;s files</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">0 Errors </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">458 Info lines</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">of which only 1 line was not a &#39;cleanup&#39; line like this:</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">---</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">10.2.7.11:[2013-12-12 21:22:01.064289] I [server-helpers.c:460:do_fd_cleanup] 0-gl-server: fd cleanup on /path/to/file</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">---</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">it was:</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">---</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">10.2.7.14:[2013-12-12 21:00:35.209015] I [server-rpc-fops.c:898:_gf_server_log_setxattr_failure] 0-gl-server: 113697332: SETXATTR /bio/tdlong/RNAseqIII/ckpt.1084030 (c9488341-c063-4175-8492-75e2e282f690) ==&gt; trusted.glusterfs.dht</p>

<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">---</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">We&#39;re losing about 10% of these kinds of array jobs bc of this, which is just not supportable.</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Gluster details</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">servers and clients running gluster 3.4.0-8.el6 over QDR IB, IPoIB, thru 2 Mellanox, 1 Voltaire switches, Mellanox cards, CentOS 6.4</p>

<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">$ gluster volume info</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Volume Name: gl</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Type: Distribute</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Volume ID: 21f480f7-fc5a-4fd8-a084-3964634a9332</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Status: Started</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Number of Bricks: 8</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Transport-type: tcp,rdma</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Bricks:</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Brick1: bs2:/raid1</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Brick2: bs2:/raid2</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Brick3: bs3:/raid1</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Brick4: bs3:/raid2</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Brick5: bs4:/raid1</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Brick6: bs4:/raid2</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Brick7: bs1:/raid1</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Brick8: bs1:/raid2</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Options Reconfigured:</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">performance.write-behind-window-size: 1024MB</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">performance.flush-behind: on</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">performance.cache-size: 268435456</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">nfs.disable: on</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">performance.io-cache: on</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">performance.quick-read: on</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">performance.io-thread-count: 64</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">auth.allow: 10.2.*.*,10.1.*.*</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">&#39;gluster volume status gl detail&#39;: </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">&lt;<a href="http://pastie.org/8548826" target="_blank">http://pastie.org/8548826</a>&gt;</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">---</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">[m/c 2225] / 92697 Google Voice Multiplexer: <a href="tel:%28949%29%20478-4487" value="+19494784487" target="_blank">(949) 478-4487</a></p>

<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">415 South Circle View Dr, Irvine, CA, 92697 [shipping]</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px">---</p>
<p style="margin-top:0px;margin-bottom:0px;margin-left:0px;margin-right:0px;text-indent:0px"> </p></div><br>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br></blockquote></div><br></div>