<div dir="ltr">Yeah, only write to the glusterfs mountpoint. Writing directly to the bricks is bad and shouldn't be done.</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Feb 14, 2013 at 11:58 AM, Michael Colonno <span dir="ltr"><<a href="mailto:mcolonno@stanford.edu" target="_blank">mcolonno@stanford.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">A good place to start: do the bricks have to be clients as well? In other words, if I copy a file to a Gluster brick without going through a glusterfs or NFS mount, will that disrupt the parallel file system? I assumed files need to be routed through a glusterfs mount point for Gluster to be able to track them. What's recommended for bricks that also need I/O to the entire volume?<br>
<br>
Thanks,<br>
Mike C.<br>
<div class="HOEnZb"><div class="h5"><br>
On Feb 14, 2013, at 10:28 AM, harry mangalam <<a href="mailto:harry.mangalam@uci.edu">harry.mangalam@uci.edu</a>> wrote:<br>
<br>
> While I don't understand your 'each brick system also being a client' setup -<br>
> you mean that each gluster brick is a native gluster client as well? And that<br>
> is where much of your gluster access is coming from? That seems suboptimal,<br>
> if that's the setup. Is there a reason for it?<br>
><br>
> We have a distributed-only glusterfs feeding a medium cluster over a similar<br>
> QDR IPoIB setup, with 4 servers of 2 bricks each. On a fairly busy<br>
> system (~80MB/s background), I can get about 100-300MB/s writes to the gluster<br>
> fs on a large 1.7GB file. (With tiny writes, the perf decreases<br>
> dramatically).<br>
><br>
> Here is my config: (if anyone spies something that I should change to increase<br>
> my perf, please feel free to point out my mistake)<br>
><br>
> gluster:<br>
> Volume Name: gl<br>
> Type: Distribute<br>
> Volume ID: 21f480f7-fc5a-4fd8-a084-3964634a9332<br>
> Status: Started<br>
> Number of Bricks: 8<br>
> Transport-type: tcp,rdma<br>
> Bricks:<br>
> Brick1: bs2:/raid1<br>
> Brick2: bs2:/raid2<br>
> Brick3: bs3:/raid1<br>
> Brick4: bs3:/raid2<br>
> Brick5: bs4:/raid1<br>
> Brick6: bs4:/raid2<br>
> Brick7: bs1:/raid1<br>
> Brick8: bs1:/raid2<br>
> Options Reconfigured:<br>
> performance.write-behind-window-size: 1024MB<br>
> performance.flush-behind: on<br>
> performance.cache-size: 268435456<br>
> nfs.disable: on<br>
> performance.io-cache: on<br>
> performance.quick-read: on<br>
> performance.io-thread-count: 64<br>
> auth.allow: 10.2.*.*,10.1.*.*<br>
><br>
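Options like those in the listing above are applied one at a time with the standard `gluster volume set` CLI; a sketch for the `gl` volume shown, with values copied from the listing (run on any node in the trusted pool):<br>

```shell
# Apply the tuning options shown above to the volume named "gl"
gluster volume set gl performance.write-behind-window-size 1024MB
gluster volume set gl performance.flush-behind on
gluster volume set gl performance.cache-size 268435456
gluster volume set gl performance.io-thread-count 64
gluster volume set gl nfs.disable on

# Confirm the options took effect
gluster volume info gl
```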
> my RAID6s (via 3ware 9750s) are mounted with the following options<br>
><br>
> /dev/sdc /raid1 xfs rw,noatime,sunit=512,swidth=8192,allocsize=32m 0 0<br>
> /dev/sdd /raid2 xfs rw,noatime,sunit=512,swidth=7680,allocsize=32m 0 0<br>
> (and should probably be using 'nobarrier,inode64' as well. - testing this now)<br>
><br>
> There are some good refs on prepping XFS fs for max perf here:<br>
> <<a href="http://www.mythtv.org/wiki/Optimizing_Performance#XFS-Specific_Tips" target="_blank">http://www.mythtv.org/wiki/Optimizing_Performance#XFS-Specific_Tips</a>><br>
> The script at:<br>
> <<a href="http://www.mythtv.org/wiki/Optimizing_Performance#Further_Information" target="_blank">http://www.mythtv.org/wiki/Optimizing_Performance#Further_Information</a>><br>
> can help set up the sunit/swidth options.<br>
> <<a href="http://www.mysqlperformanceblog.com/2011/12/16/setting-up-xfs-the-simple-edition/" target="_blank">http://www.mysqlperformanceblog.com/2011/12/16/setting-up-xfs-the-simple-edition/</a>><br>
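For what it's worth, the sunit/swidth values follow directly from the RAID geometry: sunit is the per-disk chunk size expressed in 512-byte sectors, and swidth is sunit multiplied by the number of data disks. A small sketch; the 256 KB chunk size and 16 data disks are assumed here, chosen so the result matches the sunit=512,swidth=8192 mount line above:<br>

```shell
# Hypothetical RAID6 geometry: 256 KB chunk, 16 data disks
chunk_kb=256
data_disks=16

# sunit: chunk size in 512-byte sectors
sunit=$(( chunk_kb * 1024 / 512 ))
# swidth: one full stripe across all data disks
swidth=$(( sunit * data_disks ))

echo "sunit=$sunit swidth=$swidth"
```

With these assumed inputs this prints `sunit=512 swidth=8192`, matching the /raid1 line above.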
> Your IB interfaces should be using a large MTU (65536).<br>
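Concretely, large IPoIB MTUs require the interface to be in connected mode (datagram mode caps out around 2K/4K). A sketch, assuming the interface is named ib0 and the driver exposes the usual sysfs mode file; the exact maximum MTU depends on the mode and driver:<br>

```shell
# Assumes the IPoIB interface is ib0; adjust for your system.
# Connected mode is required for MTUs beyond the datagram-mode limits.
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520

# Verify the mode and new MTU
cat /sys/class/net/ib0/mode
ip link show ib0
```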
><br>
> hjm<br>
><br>
> On Wednesday, February 13, 2013 10:35:12 PM Michael Colonno wrote:<br>
>> More data: I got the InfiniBand network (QDR) working well and<br>
>> switched my gluster volume to the InfiniBand fabric (IPoIB, not RDMA, since<br>
>> it doesn't seem to be supported yet for 3.x). The filesystem was slightly<br>
>> faster but still fell well short of what I would expect. Via an<br>
>> informal test (timing the movement of a large file) I'm getting several MB/s<br>
>> - well short of even a standard Gb network copy. With the faster network<br>
>> the CPU load on the brick systems increased dramatically: now I'm seeing<br>
>> 200%-250% usage by glusterfsd and glusterfs.<br>
>><br>
>> This leads me to believe that gluster is really not enjoying my<br>
>> eight-brick, 2x replication volume with each brick system also being a<br>
>> client. I tried a rebalance but saw no measurable effect. Any suggestions for<br>
>> improving the performance? Having each brick be a client of itself seemed<br>
>> the most logical choice to remove interdependencies but now I'm doubting the<br>
>> setup.<br>
>><br>
>><br>
>><br>
>> Thanks,<br>
>><br>
>> ~Mike C.<br>
>><br>
>><br>
>><br>
>> From: <a href="mailto:gluster-users-bounces@gluster.org">gluster-users-bounces@gluster.org</a><br>
>> [mailto:<a href="mailto:gluster-users-bounces@gluster.org">gluster-users-bounces@gluster.org</a>] On Behalf Of Joe Julian<br>
>> Sent: Sunday, February 03, 2013 11:47 AM<br>
>> To: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>> Subject: Re: [Gluster-users] high CPU load on all bricks<br>
>><br>
>><br>
>><br>
>> On 02/03/2013 11:22 AM, Michael Colonno wrote:<br>
>><br>
>><br>
>><br>
>> Having taken a lot more data it does seem the glusterfsd and<br>
>> glusterd processes (along with several ksoftirqd) spike up to near 100% on<br>
>> both client and brick servers during any file transport across the mount.<br>
>> Thankfully this is short-lived for the most part, but I'm wondering if this<br>
>> is expected behavior and what others have experienced. I'm a little<br>
>> surprised such a large CPU load would be required to move small files and /<br>
>> or use an application within a Gluster mount point.<br>
>><br>
>><br>
>> If you're getting ksoftirqd spikes, that sounds like a hardware issue to me.<br>
>> I never see huge spikes like that on my servers nor clients.<br>
>><br>
>> I wanted to test this against an NFS mount of the same Gluster<br>
>> volume. I managed to get rstatd installed and running but my attempts to<br>
>> mount the volume via NFS are met with:<br>
>><br>
>><br>
>><br>
>> mount.nfs: requested NFS version or transport protocol is not<br>
>> supported<br>
>><br>
>><br>
>><br>
>> Relevant line in /etc/fstab:<br>
>><br>
>><br>
>><br>
>> node1:/volume /volume nfs<br>
>> defaults,_netdev,vers=3,mountproto=tcp 0 0<br>
>><br>
>><br>
>><br>
>> It looks like CentOS 6.x defaults to NFS version 4 throughout. So a few<br>
>> questions:<br>
>><br>
>><br>
>><br>
>> - Has anyone else noted significant performance differences between a<br>
>> glusterfs mount and NFS mount for volumes of 8+ bricks?<br>
>><br>
>> - Is there a straightforward way to make the newer versions of CentOS<br>
>> play nice with NFS version 3 + Gluster?<br>
>><br>
>> - Are there any general performance tuning guidelines I can follow to<br>
>> improve CPU performance? I found a few references to the cache settings but<br>
>> nothing solid.<br>
>><br>
>><br>
>><br>
>> If the consensus is that NFS will not gain anything then I won't waste the<br>
>> time setting it all up.<br>
>><br>
>><br>
>> NFS gains you the use of FSCache to cache directories and file stats making<br>
>> directory listings faster, but it adds overhead decreasing the overall<br>
>> throughput (from all the reports I've seen).<br>
>><br>
>> I would suspect that you have the kernel nfs server running on your servers.<br>
>> Make sure it's disabled.<br>
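One way to check for (and clear) a competing kernel NFS server on CentOS 6, then retry an NFSv3-over-TCP mount of the Gluster volume, might look like the following. `node1:/volume` comes from the fstab line above; the service names are what a stock CentOS 6 install uses, so treat them as assumptions:<br>

```shell
# Gluster's built-in NFS server speaks only NFSv3 over TCP, and it can't
# register its services if the kernel NFS server already holds them.
service nfs status
service nfs stop
chkconfig nfs off           # keep it from coming back at boot

# After restarting glusterd, NFS v3 should appear in the portmap listing
rpcinfo -p

# Retry the mount, forcing v3 over TCP for both NFS and the mount protocol
mount -t nfs -o vers=3,proto=tcp,mountproto=tcp node1:/volume /volume
```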
>><br>
>> Thanks,<br>
>><br>
>> ~Mike C.<br>
>><br>
>><br>
>><br>
>><br>
>><br>
>> From: <a href="mailto:gluster-users-bounces@gluster.org">gluster-users-bounces@gluster.org</a><br>
>> [mailto:<a href="mailto:gluster-users-bounces@gluster.org">gluster-users-bounces@gluster.org</a>] On Behalf Of Michael Colonno<br>
>> Sent: Friday, February 01, 2013 4:46 PM<br>
>> To: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>> Subject: Re: [Gluster-users] high CPU load on all bricks<br>
>><br>
>><br>
>><br>
>> Update: after a few hours the CPU usage seems to have dropped<br>
>> down to a small value. I did not change anything with respect to the<br>
>> configuration or unmount / stop anything as I wanted to see if this would<br>
>> persist for a long period of time. Both the client and the self-mounted<br>
>> bricks are now showing CPU < 1% (as reported by top). Prior to the larger<br>
>> CPU loads I installed a bunch of software into the volume (~ 5 GB total). Is<br>
>> this kind of transient behavior, by which I mean larger CPU loads after a<br>
>> lot of filesystem activity in a short time, typical? This is not a problem<br>
>> in my deployment; I just want to know what to expect in the future and to<br>
>> complete this thread for future users. If this is expected behavior we can<br>
>> wrap up this thread. If not then I'll do more digging into the logs on the<br>
>> client and brick sides.<br>
>><br>
>><br>
>><br>
>> Thanks,<br>
>><br>
>> ~Mike C.<br>
>><br>
>><br>
>><br>
>> From: Joe Julian [mailto:<a href="mailto:joe@julianfamily.org">joe@julianfamily.org</a>]<br>
>> Sent: Friday, February 01, 2013 2:08 PM<br>
>> To: Michael Colonno; <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>> Subject: Re: [Gluster-users] high CPU load on all bricks<br>
>><br>
>><br>
>><br>
>> Check the client log(s).<br>
>><br>
>> Michael Colonno <<a href="mailto:mcolonno@stanford.edu">mcolonno@stanford.edu</a>> wrote:<br>
>><br>
>> Forgot to mention: on a client system (not a brick) the<br>
>> glusterfs process is consuming ~ 68% CPU continuously. This is a much less<br>
>> powerful desktop system, so the CPU load can't be compared 1:1 with the<br>
>> systems comprising the bricks, but it's still very high. So the issue seems to<br>
>> exist with both glusterfsd and glusterfs processes.<br>
>><br>
>><br>
>><br>
>> Thanks,<br>
>><br>
>> ~Mike C.<br>
>><br>
>><br>
>><br>
>> From: <a href="mailto:gluster-users-bounces@gluster.org">gluster-users-bounces@gluster.org</a><br>
>> [mailto:<a href="mailto:gluster-users-bounces@gluster.org">gluster-users-bounces@gluster.org</a>] On Behalf Of Michael Colonno<br>
>> Sent: Friday, February 01, 2013 12:46 PM<br>
>> To: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>> Subject: [Gluster-users] high CPU load on all bricks<br>
>><br>
>><br>
>><br>
>> Gluster gurus ~<br>
>><br>
>><br>
>><br>
>> I've deployed an 8-brick (2x replicate) Gluster 3.3.1 volume on<br>
>> CentOS 6.3 with tcp transport. I was able to build, start, mount, and use<br>
>> the volume. On each system contributing a brick, however, my CPU usage<br>
>> (glusterfsd) is hovering around 20% (virtually zero memory usage<br>
>> thankfully). These are brand new, fairly beefy servers so 20% CPU load is<br>
>> quite a bit. The deployment is pretty plain with each brick mounting the<br>
>> volume to itself via a glusterfs mount. I assume this type of CPU usage is<br>
>> atypically high; is there anything I can do to investigate what's soaking up<br>
>> CPU and minimize it? Total usable volume size is only about 22 TB (about 45<br>
>> TB total with 2x replicate).<br>
>><br>
>><br>
>><br>
>> Thanks,<br>
>><br>
>> ~Mike C.<br>
>><br>
>><br>
>><br>
>><br>
>> _____<br>
>><br>
>><br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> <a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br>
>><br>
>><br>
><br>
> ---<br>
> Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine<br>
> [m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487<br>
> 415 South Circle View Dr, Irvine, CA, 92697 [shipping]<br>
> MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)<br>
> ---<br>
> "Something must be done. [X] is something. Therefore, we must do it."<br>
> Bruce Schneier, on American response to just about anything.<br>
</div></div></blockquote></div><br></div>