On 08/05/2014 09:16 PM, Ryan Clough wrote:

I spoke too soon. Running "ls -lR" on one of our largest directory structures
overnight has caused glusterfs to use a lot of memory, and the glusterfs
process appears to be gradually consuming more and more. I tried to release
the memory forcefully by issuing this command:

sync; echo 3 > /proc/sys/vm/drop_caches

but glusterfs holds on to its memory. The high memory usage shows up on the
client side as well as on the server side.

Right now both of my brick servers are using about 7GB of RAM for the
glusterfs process, and the client that is running the "ls -lR" is using about
8GB of RAM. Below are some basic specifications of my hardware. Both server
and client are running version 3.5.2. I have attached a statedump of the
client glusterfs.

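For what it's worth, drop_caches only releases the kernel's page cache and
dentry/inode caches, not the glusterfs processes' own heap, so the command
above is not expected to shrink their RSS. A rough sketch for watching the
resident size of the gluster processes while the crawl runs (assuming a
standard procps ps; on 3.5 the brick processes run as glusterfsd and the FUSE
client as glusterfs):

$ watch -n 60 'ps -C glusterfs,glusterfsd -o pid,rss,vsz,args --sort=-rss'
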
Could you please send the statedumps? That should help us figure out what the
problem is.

Pranith

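In case it is useful, a quick sketch of how the dumps can be generated on 3.5
(the output directory is whatever "gluster --print-statedumpdir" reports;
create it if it does not exist, and <volname>/<pid> below are placeholders):

$ gluster volume statedump <volname>       # dumps the brick processes
$ kill -USR1 <pid-of-client-glusterfs>     # dumps the FUSE client process
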
Brick server hardware:
Dual 6-core Intel Xeon CPU E5-2620 0 @ 2.00GHz (HT is on)
32GB SDRAM
2 x 500GB SATA drives in RAID1 for OS
12 x 3TB SATA drives in RAID6 with LVM and XFS for data

Client hardware:
Dual 8-core AMD Opteron Processor 6128
32GB SDRAM

Ryan Clough
Information Systems
Decision Sciences International Corporation
http://www.decisionsciencescorp.com/

On Mon, Aug 4, 2014 at 12:07 PM, Ryan Clough <ryan.clough@dsic.com> wrote:

<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div>Hi,<br>
</div>
I too was experiencing this issue on my bricks. I am using
version 3.5.2 and after setting io-cache and quick-read to
"off", as Poornima suggested, I am no longer seeing
glusterfs gobbling memory. I noticed it first when I
enabled quotas and during the quota-crawl glusterfs
process would be OOM killed by the kernel. Before, my
bricks would consume all available memory until swap was
exhausted and the kernel OOMs the glusterfs process. There
is a rebalance running right now and glusterfs is
behaving. Here is some output of my current config. Let me
know if I can provide anything else to help.<br>
[root@tgluster01 ~]# gluster volume status all detail
Status of volume: tgluster_volume
------------------------------------------------------------------------------
Brick                : Brick tgluster01:/gluster_data
Port                 : 49153
Online               : Y
Pid                  : 2407
File System          : xfs
Device               : /dev/mapper/vg_data-lv_data
Mount Options        : rw,noatime,nodiratime,logbufs=8,logbsize=256k,inode64,nobarrier
Inode Size           : 512
Disk Space Free      : 3.5TB
Total Disk Space     : 27.3TB
Inode Count          : 2929685696
Free Inodes          : 2863589912
------------------------------------------------------------------------------
Brick                : Brick tgluster02:/gluster_data
Port                 : 49152
Online               : Y
Pid                  : 2402
File System          : xfs
Device               : /dev/mapper/vg_data-lv_data
Mount Options        : rw,noatime,nodiratime,logbufs=8,logbsize=256k,inode64,nobarrier
Inode Size           : 512
Disk Space Free      : 5.4TB
Total Disk Space     : 27.3TB
Inode Count          : 2929685696
Free Inodes          : 2864874648

[root@tgluster01 ~]# gluster volume status
Status of volume: tgluster_volume
Gluster process                               Port    Online  Pid
------------------------------------------------------------------------------
Brick tgluster01:/gluster_data                49153   Y       2407
Brick tgluster02:/gluster_data                49152   Y       2402
Quota Daemon on localhost                     N/A     Y       2415
Quota Daemon on tgluster02                    N/A     Y       2565

Task Status of Volume tgluster_volume
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 31fd1edb-dd6d-4c25-b4b5-1ce7bc0670f3
Status               : in progress

[root@tgluster01 ~]# gluster volume info
Volume Name: tgluster_volume
Type: Distribute
Volume ID: 796774f8-f9ec-476c-9d08-0f5f937d5ad9
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: tgluster01:/gluster_data
Brick2: tgluster02:/gluster_data
Options Reconfigured:
features.quota-deem-statfs: on
performance.client-io-threads: on
performance.md-cache-timeout: 1
performance.cache-max-file-size: 10MB
network.ping-timeout: 60
performance.write-behind-window-size: 4MB
performance.read-ahead: on
performance.cache-refresh-timeout: 1
performance.cache-size: 10GB
performance.quick-read: off
nfs.disable: on
features.quota: on
performance.io-thread-count: 24
cluster.eager-lock: on
server.statedump-path: /var/log/glusterfs/
performance.flush-behind: on
performance.write-behind: on
performance.stat-prefetch: on
performance.io-cache: off

[root@tgluster01 ~]# gluster volume status all mem
Memory status for volume : tgluster_volume
----------------------------------------------
Brick : tgluster01:/gluster_data
Mallinfo
--------
Arena    : 25788416
Ordblks  : 7222
Smblks   : 1
Hblks    : 12
Hblkhd   : 16060416
Usmblks  : 0
Fsmblks  : 80
Uordblks : 25037744
Fordblks : 750672
Keepcost : 132816

Mempool Stats
-------------
Name                                          HotCount  ColdCount  PaddedSizeof  AllocCount  MaxAlloc   Misses  Max-StdAlloc
----                                          --------  ---------  ------------  ----------  --------  -------  ------------
tgluster_volume-server:fd_t                         11       1013           108      194246        22        0             0
tgluster_volume-server:dentry_t                  16384          0            84     1280505     16384   481095         32968
tgluster_volume-server:inode_t                   16383          1           156    13974240     16384  7625153         39688
tgluster_volume-changelog:changelog_local_t          0         64           108           0         0        0             0
tgluster_volume-locks:pl_local_t                     0         32           148     3922857         4        0             0
tgluster_volume-marker:marker_local_t                0        128           332     6163938         8        0             0
tgluster_volume-quota:struct saved_frame             0         16           124       65000         6        0             0
tgluster_volume-quota:struct rpc_req                 0         16           588       65000         6        0             0
tgluster_volume-quota:quota_local_t                  0         64           404     4476051         8        0             0
tgluster_volume-server:rpcsvc_request_t              0        512          2828     6694494         8        0             0
glusterfs:struct saved_frame                         0          8           124           2         2        0             0
glusterfs:struct rpc_req                             0          8           588           2         2        0             0
glusterfs:rpcsvc_request_t                           1          7          2828           2         1        0             0
glusterfs:data_t                                   164      16219            52    60680465      2012        0             0
glusterfs:data_pair_t                              159      16224            68    34718980      1348        0             0
glusterfs:dict_t                                    15       4081           140    24689263       714        0             0
glusterfs:call_stub_t                                0       1024          3756     8263013         9        0             0
glusterfs:call_stack_t                               1       1023          1836     6675669         8        0             0
glusterfs:call_frame_t                               0       4096           172    55532603       251        0             0
----------------------------------------------
Brick : tgluster02:/gluster_data
Mallinfo
--------
Arena    : 18714624
Ordblks  : 4211
Smblks   : 1
Hblks    : 12
Hblkhd   : 16060416
Usmblks  : 0
Fsmblks  : 80
Uordblks : 18250608
Fordblks : 464016
Keepcost : 131360

Mempool Stats
-------------
Name                                          HotCount  ColdCount  PaddedSizeof  AllocCount  MaxAlloc   Misses  Max-StdAlloc
----                                          --------  ---------  ------------  ----------  --------  -------  ------------
tgluster_volume-server:fd_t                         11       1013           108      155373        22        0             0
tgluster_volume-server:dentry_t                  16383          1            84     1297732     16384   396012         21124
tgluster_volume-server:inode_t                   16384          0           156    13896002     16384  7434842         24494
tgluster_volume-changelog:changelog_local_t          0         64           108           0         0        0             0
tgluster_volume-locks:pl_local_t                     2         30           148     5578625        17        0             0
tgluster_volume-marker:marker_local_t                3        125           332     6834019        68        0             0
tgluster_volume-quota:struct saved_frame             0         16           124       64922        10        0             0
tgluster_volume-quota:struct rpc_req                 0         16           588       65000        10        0             0
tgluster_volume-quota:quota_local_t                  3         61           404     4216852        64        0             0
tgluster_volume-server:rpcsvc_request_t              3        509          2828     6406870        64        0             0
glusterfs:struct saved_frame                         0          8           124           2         2        0             0
glusterfs:struct rpc_req                             0          8           588           2         2        0             0
glusterfs:rpcsvc_request_t                           1          7          2828           2         1        0             0
glusterfs:data_t                                   185      16198            52    80402618      1427        0             0
glusterfs:data_pair_t                              177      16206            68    40014499       737        0             0
glusterfs:dict_t                                    18       4078           140    35345779       729        0             0
glusterfs:call_stub_t                                3       1021          3756    21374090        68        0             0
glusterfs:call_stack_t                               4       1020          1836     6824400        68        0             0
glusterfs:call_frame_t                              20       4076           172    97255627       388        0             0
----------------------------------------------

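If it is useful to watch whether the per-pool counts above keep climbing, a
simple periodic sample might be (a sketch; the pool names are taken from the
output above):

$ gluster volume status tgluster_volume mem | grep -E 'dentry_t|inode_t|data_t'
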
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra">
<div>
<div dir="ltr"><font size="1">Ryan Clough<br>
Information Systems<br>
<a moz-do-not-send="true"
href="http://www.decisionsciencescorp.com/"
target="_blank">Decision Sciences International
Corporation</a></font><span
style="font-family:"Calibri","sans-serif";color:#1f497d"><a
moz-do-not-send="true"
href="http://www.decisionsciencescorp.com/"
target="_blank"><span style="color:blue"></span></a></span></div>
</div>
On Sun, Aug 3, 2014 at 11:36 PM, Poornima Gurusiddaiah <pgurusid@redhat.com> wrote:

<blockquote class="gmail_quote" style="margin:0 0
0 .8ex;border-left:1px #ccc
solid;padding-left:1ex">
Hi,<br>
<br>
From the statedump it is evident that the iobufs
are leaking.<br>
Also the hot count of the
pool-name=w-vol-io-cache:rbthash_entry_t is
10053, implies io-cache xlator could be the
cause of the leak.<br>
From the logs, it looks like, quick-read
performance xlator is calling iobuf_free with
NULL pointers, implies quick-read could be
leaking iobufs as well.<br>
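A rough way to see whether the iobufs keep growing is to take two statedumps
some time apart and compare their iobuf entries; treat this as a sketch (the
dump files follow the glusterdump.<pid>.dump.<timestamp> naming, in the
directory reported by "gluster --print-statedumpdir"):

$ cd "$(gluster --print-statedumpdir)"
$ grep -c 'iobuf' glusterdump.<pid>.dump.<timestamp-1> glusterdump.<pid>.dump.<timestamp-2>
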
As a temporary solution, could you disable io-cache and/or quick-read and see
if the leak still persists?

$ gluster volume set <volname> performance.io-cache off
$ gluster volume set <volname> performance.quick-read off

This may reduce performance to a certain extent.

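One way to confirm the options took effect is to check the reconfigured
options in the volume info output (<volname> is a placeholder):

$ gluster volume info <volname> | grep -E 'quick-read|io-cache'
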
For further debugging, could you provide the core dump or steps to reproduce
if available?

Regards,
Poornima

----- Original Message -----
From: "Tamas Papp" <tompos@martos.bme.hu>
To: "Poornima Gurusiddaiah" <pgurusid@redhat.com>
Cc: Gluster-users@gluster.org
Sent: Sunday, August 3, 2014 10:33:17 PM
Subject: Re: [Gluster-users] high memory usage of mount

On 07/31/2014 09:17 AM, Tamas Papp wrote:
>
> On 07/31/2014 09:02 AM, Poornima Gurusiddaiah wrote:
>> Hi,
>
> hi,
>
>> Can you provide the statedump of the process, it can be obtained as
>> follows:
>> $ gluster --print-statedumpdir #create this directory if it doesn't
>> exist.
>> $ kill -USR1 <pid-of-glusterfs-process> #generates state dump.
>
> http://rtfm.co.hu/glusterdump.2464.dump.1406790562.zip
>
>> Also, exporting Gluster via the Samba-VFS-plugin method is preferred over
>> a FUSE mount export. For more details refer to:
>> http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
>>
>
> When I tried it about half a year ago it didn't work properly. Clients
> lost mounts, access errors, etc.
>
> But I will give it a try, though it's not included in Ubuntu's samba
> AFAIK.
>
>
> Thank you,
> tamas
>
> ps. I forgot to mention, I can see this issue on only one node. The rest
> of the nodes are fine.

hi Poornima,

Do you have an idea what's going on here?

Thanks,
tamas

This communication is intended only for the person(s) to whom it is addressed
and may contain confidential and/or privileged information. Any review,
re-transmission, dissemination, copying or other use of, or taking of any
action in reliance upon, this information by persons or entities other than
the intended recipient(s) is prohibited. If you received this communication in
error, please report the error to the sender by return email and delete this
communication from your records.

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users