<div dir="ltr">How are you doing the read/write tests on the fuse/glusterfs mountpoint? Many small files will be slow because all the time is spent coordinating locks.</div><div class="gmail_extra"><br><br><div class="gmail_quote">
On Wed, Feb 27, 2013 at 9:31 AM, Thomas Wakefield <twake@cola.iges.org> wrote:
<div style="word-wrap:break-word">Help please-<div><br></div><div><br></div><div>I am running 3.3.1 on Centos using a 10GB network.  I get reasonable write speeds, although I think they could be faster.  But my read speeds are REALLY slow.</div>
<div><br></div><div>Executive summary:</div><div><br></div><div>On gluster client-</div><div>Writes average about 700-800MB/s</div><div>Reads average about 70-80MB/s</div><div><br></div><div>On server-</div><div>Writes average about 1-1.5GB/s</div>
<div>Reads average about 2-3GB/s</div><div><br></div><div>Any thoughts?</div><div><br></div><div><br></div><div><br></div><div>Here are some additional details:</div><div><br></div><div>Nothing interesting in any of the log files, everything is very quite.</div>
> All servers had no other load, and all clients are performing the same way.
>
> Volume Name: shared
> Type: Distribute
> Volume ID: de11cc19-0085-41c3-881e-995cca244620
> Status: Started
> Number of Bricks: 26
> Transport-type: tcp
> Bricks:
> Brick1: fs-disk2:/storage/disk2a
> Brick2: fs-disk2:/storage/disk2b
> Brick3: fs-disk2:/storage/disk2d
> Brick4: fs-disk2:/storage/disk2e
> Brick5: fs-disk2:/storage/disk2f
> Brick6: fs-disk2:/storage/disk2g
> Brick7: fs-disk2:/storage/disk2h
> Brick8: fs-disk2:/storage/disk2i
> Brick9: fs-disk2:/storage/disk2j
> Brick10: fs-disk2:/storage/disk2k
> Brick11: fs-disk2:/storage/disk2l
> Brick12: fs-disk2:/storage/disk2m
> Brick13: fs-disk2:/storage/disk2n
> Brick14: fs-disk2:/storage/disk2o
> Brick15: fs-disk2:/storage/disk2p
> Brick16: fs-disk2:/storage/disk2q
> Brick17: fs-disk2:/storage/disk2r
> Brick18: fs-disk2:/storage/disk2s
> Brick19: fs-disk2:/storage/disk2t
> Brick20: fs-disk2:/storage/disk2u
> Brick21: fs-disk2:/storage/disk2v
> Brick22: fs-disk2:/storage/disk2w
> Brick23: fs-disk2:/storage/disk2x
> Brick24: fs-disk3:/storage/disk3a
> Brick25: fs-disk3:/storage/disk3b
> Brick26: fs-disk3:/storage/disk3c
> Options Reconfigured:
> performance.write-behind: on
> performance.read-ahead: on
> performance.io-cache: on
> performance.stat-prefetch: on
> performance.quick-read: on
> cluster.min-free-disk: 500GB
> nfs.disable: off
>
> sysctl.conf settings for 10GbE:
> # increase TCP max buffer size settable using setsockopt()
> net.core.rmem_max = 67108864
> net.core.wmem_max = 67108864
> # increase Linux autotuning TCP buffer limit
> net.ipv4.tcp_rmem = 4096 87380 67108864
> net.ipv4.tcp_wmem = 4096 65536 67108864
> # increase the length of the processor input queue
> net.core.netdev_max_backlog = 250000
> # recommended default congestion control is htcp
> net.ipv4.tcp_congestion_control = htcp
> # recommended for hosts with jumbo frames enabled
> net.ipv4.tcp_mtu_probing = 1
<span style="text-indent:0px;letter-spacing:normal;font-variant:normal;text-align:-webkit-auto;font-style:normal;font-weight:normal;line-height:normal;border-collapse:separate;text-transform:none;font-size:medium;white-space:normal;font-family:Helvetica;word-spacing:0px"><div>
<span style="text-indent:0px;letter-spacing:normal;font-variant:normal;font-style:normal;font-weight:normal;line-height:normal;border-collapse:separate;text-transform:none;font-size:medium;white-space:normal;font-family:Helvetica;word-spacing:0px"><span style="text-indent:0px;letter-spacing:normal;font-variant:normal;font-style:normal;font-weight:normal;line-height:normal;border-collapse:separate;text-transform:none;font-size:12px;white-space:normal;font-family:Helvetica;word-spacing:0px"><div style="word-wrap:break-word">
<div>Thomas W.<br>Sr.  Systems Administrator COLA/IGES<br><a href="mailto:twake@cola.iges.org" target="_blank">twake@cola.iges.org</a><br></div></div></span></span></div>Affiliate <span style="font-size:12px">Computer Scientist GMU</span></span>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users