<html><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">Hello,<div><br></div><div>I'm running GlusterFS v3.0.2 with the native FUSE client on two Rackspace VMs (GFS1 and GFS2), each with 4 GB RAM &amp; a 160 GB disk. Roughly 57% of disk space remains free.</div><div><br></div><div>glusterfsd and postfix are the only processes running on these two servers. Six external clients are connected, and each server also mounts the volume from the other, for eight clients in total.</div><div><br></div><div>On a fresh boot of the servers and their processes, total RAM usage is minimal. After a few hours of uptime, however, free RAM drops to &lt; 100 MB on GFS2 and &lt; 20 MB on GFS1.</div><div><br></div><div>"lsof | grep gfs" shows 53 connections on GFS1 and 45 on GFS2 from the various clients.</div><div><br></div><div>This doesn't appear to be client-related, since resource usage is minimal at boot time even with all connections active. However, I'm not completely familiar with the configuration options.</div><div><br></div><div>I've just pushed these servers into production, and the websites they serve receive roughly 50k hits a day in total. Yet this RAM issue was present before any real traffic existed. Do I have a config error, or am I missing any major performance-tuning options?</div><div><br></div><div>Any help would be very much appreciated. 
Thanks,</div><div>Chris</div><div><br></div><div>TOP:</div><div><br></div><div><div> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND </div><div> 2421 root 20 0 612m 419m 1092 S 0 10.4 1715:36 glusterfsd </div></div><div><br></div><div><b>Here is my server config:</b></div><div> </div><div><div>root@lmdc3gfs02:~# cat /etc/glusterfs/glusterfs-server.vol </div><div>volume posix</div><div> type storage/posix</div><div> option directory /data/export</div><div>end-volume</div><div> </div><div>volume locks</div><div> type features/locks</div><div> subvolumes posix</div><div>end-volume</div><div> </div><div>volume brick</div><div> type performance/io-threads</div><div> option thread-count 8</div><div> subvolumes locks</div><div>end-volume</div><div> </div><div>volume posix-ns</div><div> type storage/posix</div><div> option directory /data/export-ns</div><div>end-volume</div><div> </div><div>volume locks-ns</div><div> type features/locks</div><div> subvolumes posix-ns</div><div>end-volume</div><div> </div><div>volume brick-ns</div><div> type performance/io-threads</div><div> option thread-count 8</div><div> subvolumes locks-ns</div><div>end-volume</div><div> </div><div>volume server</div><div> type protocol/server</div><div> option transport-type tcp</div><div> option auth.addr.brick.allow *</div><div> option auth.addr.brick-ns.allow *</div><div> subvolumes brick brick-ns</div><div>end-volume</div></div><div><br></div><div><b>Client Config:</b></div><div><br></div><div><div>root@lmdc3gfs02:~# cat /etc/glusterfs/glusterfs-client.vol </div><div>volume brick1</div><div> type protocol/client</div><div> option transport-type tcp/client</div><div> option remote-host 10.179.122.66 # IP address of the remote brick</div><div> option remote-subvolume brick # name of the remote volume</div><div> option ping-timeout 2</div><div>end-volume</div><div><br></div><div>volume brick2</div><div> type protocol/client</div><div> option transport-type tcp/client</div><div> option 
remote-host 10.179.122.69 # IP address of the remote brick</div><div> option remote-subvolume brick # name of the remote volume</div><div> option ping-timeout 2</div><div>end-volume</div><div><br></div><div>volume brick1-ns</div><div> type protocol/client</div><div> option transport-type tcp/client</div><div> option remote-host 10.179.122.66 # IP address of the remote brick</div><div> option remote-subvolume brick-ns # name of the remote volume</div><div> option ping-timeout 2</div><div>end-volume</div><div><br></div><div>volume brick2-ns</div><div> type protocol/client</div><div> option transport-type tcp/client</div><div> option remote-host 10.179.122.69 # IP address of the remote brick</div><div> option remote-subvolume brick-ns # name of the remote volume</div><div> option ping-timeout 2</div><div>end-volume</div><div><br></div><div>volume afr1</div><div> type cluster/afr</div><div> subvolumes brick1 brick2</div><div>end-volume</div><div><br></div><div>volume afr-ns</div><div> type cluster/afr</div><div> subvolumes brick1-ns brick2-ns</div><div>end-volume</div></div><div><br></div><div><br></div></body></html>
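P.S. While watching this, it may help to separate glusterfsd's own resident memory from the kernel page cache, which Linux will happily grow until "free" RAM looks depleted even though that memory is reclaimable. Here is a minimal sketch of what I mean; it only assumes a Linux /proc filesystem, and the `sample_mem` helper and its field choices are my own, not GlusterFS tooling:

```shell
#!/bin/sh
# Print one sample of a process's resident memory next to the kernel's
# reclaimable memory.  On Linux, MemFree + Buffers + Cached is a rough
# estimate of what the kernel can hand back on demand, so a shrinking
# MemFree on its own does not necessarily mean a leak.
sample_mem() {
    pid=${1:-self}    # falls back to /proc/self if no PID is given
    rss_kb=$(awk '/^VmRSS:/ {print $2}' "/proc/$pid/status")
    avail_kb=$(awk '/^(MemFree|Buffers|Cached):/ {sum += $2} END {print sum}' /proc/meminfo)
    echo "pid=$pid rss_kb=$rss_kb reclaimable_kb=$avail_kb"
}

# Example: sample glusterfsd once a minute and watch whether rss_kb
# itself grows, or only reclaimable_kb shrinks:
#   while sleep 60; do sample_mem "$(pidof glusterfsd)"; done
sample_mem "$@"
```

If glusterfsd's rss_kb stays near the ~419 MB that top reports while reclaimable_kb stays healthy, the "missing" RAM is just cache rather than a daemon leak.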