<div dir="ltr"><div>It seems this has come up before. Is there a bug associated with NFS OOM errors?<br><br></div><div>If not - Jens, could you file one?<br></div><div><br></div>-JM<br><br></div><div class="gmail_extra"><br>
<br><div class="gmail_quote">On Fri, Mar 21, 2014 at 7:46 AM, Jens Laas <span dir="ltr"><<a href="mailto:jens.laas@uadm.uu.se" target="_blank">jens.laas@uadm.uu.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
(14.03.20 at 14:42) Paul Robert Marino wrote the following to Jens Laas:<br>
<br>
> You ran out of RAM.<br>
> Tune your box, or take things off of it if it is running anything else.<br>
> When you run out of memory, the kernel just kills whatever process it<br>
> happens to find first; it may or may not be the actual process at<br>
> fault.<br>
<br>
True.<br>
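(For reference, the kernel picks its OOM victim by a per-process "badness" score, which can be inspected and biased through /proc. A minimal sketch, assuming a Linux host; the pid placeholder and the adjustment value are illustrative:)

```shell
# Inspect the OOM badness score of the current process
# (higher score = more likely to be chosen by the OOM killer).
cat /proc/self/oom_score

# Bias a daemon away from the OOM killer (root required);
# -1000 exempts it entirely. <pid> is a placeholder.
# echo -500 > /proc/<pid>/oom_score_adj
```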
<br>
But the memory consumption seems excessive.<br>
<br>
While copying 4 GB of data to the volume, the glusterfs/nfs process grows by 6 GB.<br>
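(A simple way to watch this growth while the copy runs is to sample the resident set size from /proc; a sketch, with the pgrep pattern being an assumption about how the gluster NFS process is named:)

```shell
# Sample the resident set size of the gluster NFS server once per second.
# The process name pattern is an assumption; adjust for your setup.
pid=$(pgrep -f 'glusterfs.*nfs' | head -n1)
while kill -0 "$pid" 2>/dev/null; do
    grep VmRSS "/proc/$pid/status"
    sleep 1
done
```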
<br>
To work around this, we now do a native glusterfs mount and export it via<br>
"standard" Linux NFS, which does not exhibit this behaviour.<br>
<br>
I just wanted to point out that a memory leak might be present.<br>
<br>
The data copied consists of a lot of small files and directories.<br>
<br>
Best regards,<br>
Jens<br>
<br>
><br>
> On Thu, Mar 20, 2014 at 9:42 AM, Jens Laas <<a href="mailto:jens.laas@uadm.uu.se">jens.laas@uadm.uu.se</a>> wrote:<br>
> ><br>
> > 4GB server (RHEL6).<br>
> > glusterfs-3.4.2-1.el6.x86_64 etc from gluster site.<br>
> ><br>
> > Copying files via NFS to gluster.<br>
> ><br>
> > Out of memory: Kill process 18225 (glusterfs) score 660 or sacrifice child<br>
> > Killed process 18225, UID 0, (glusterfs) total-vm:3675904kB, anon-rss:3422940kB,<br>
> > file-rss:2072kB<br>
> ><br>
> > [2014-03-20 13:19:24.951428] D [nfs3-helpers.c:1618:nfs3_log_common_call]<br>
> > 0-nfs-nfsv3: XID: 9fd2b6b3, ACCESS: args: FH: exportid<br>
> > c172abbc-6cc8-4b65-ad35-c34f10b53869, gfid cc40b980-8a77-4cbb-ba23-147e37059a2d<br>
> > [2014-03-20 13:19:24.951550] D [mem-pool.c:422:mem_get]<br>
> > (-->/usr/lib64/glusterfs/3.4.2/xlator/nfs/server.so(nfs3svc_access+0x7c)<br>
> > [0x7ffff3110b8c]<br>
> > (-->/usr/lib64/glusterfs/3.4.2/xlator/nfs/server.so(nfs3_access+0xfd)<br>
> > [0x7ffff311082d]<br>
> > (-->/usr/lib64/glusterfs/3.4.2/xlator/nfs/server.so(nfs3_call_state_init+0x45)<br>
> > [0x7ffff310b535]))) 0-mem-pool: Mem pool is full. Callocing mem<br>
> > [2014-03-20 13:19:24.951684] D [afr-common.c:745:afr_get_call_child]<br>
> > 0-gv0-replicate-0: Returning 0, call_child: 1, last_index: -1<br>
> > [2014-03-20 13:19:24.952050] D [nfs3-helpers.c:3380:nfs3_log_common_res]<br>
> > 0-nfs-nfsv3: XID: 9fd2b6b3, ACCESS: NFS: 0(Call completed successfully.), POSIX:<br>
> > 7(Argument list too long)<br>
> > [2014-03-20 13:19:24.952458] D [nfs3-helpers.c:1675:nfs3_log_create_call]<br>
> > 0-nfs-nfsv3: XID: a0d2b6b3, CREATE: args: FH: exportid<br>
> > c172abbc-6cc8-4b65-ad35-c34f10b53869, gfid cc40b980-8a77-4cbb-ba23-147e37059a2d,<br>
> > name: 1.0, mode: EXCLUSIVE<br>
> > [2014-03-20 13:19:24.952522] D [mem-pool.c:422:mem_get]<br>
> > (-->/usr/lib64/glusterfs/3.4.2/xlator/nfs/server.so(nfs3svc_create+0xa7)<br>
> > [0x7ffff3115e27]<br>
> > (-->/usr/lib64/glusterfs/3.4.2/xlator/nfs/server.so(nfs3_create+0x38b)<br>
> > [0x7ffff3115c5b]<br>
> > (-->/usr/lib64/glusterfs/3.4.2/xlator/nfs/server.so(nfs3_call_state_init+0x45)<br>
> > [0x7ffff310b535]))) 0-mem-pool: Mem pool is full. Callocing mem<br>
> > [2014-03-20 13:19:24.954278] D<br>
> > [afr-transaction.c:1144:afr_post_nonblocking_entrylk_cbk] 0-gv0-replicate-0: Non<br>
> > blocking entrylks done. Proceeding to FOP<br>
> > [2014-03-20 13:19:24.966742] D [afr-lk-common.c:447:transaction_lk_op]<br>
> > 0-gv0-replicate-0: lk op is for a transaction<br>
> > [2014-03-20 13:19:24.967190] D<br>
> > [afr-transaction.c:1094:afr_post_nonblocking_inodelk_cbk] 0-gv0-replicate-0: Non<br>
> > blocking inodelks done. Proceeding to FOP<br>
> > [2014-03-20 13:19:24.967324] D [client-rpc-fops.c:2789:client_fdctx_destroy]<br>
> > 0-gv0-client-0: sending release on fd<br>
> > [2014-03-20 13:19:24.967362] D [client-rpc-fops.c:2789:client_fdctx_destroy]<br>
> > 0-gv0-client-1: sending release on fd<br>
> > [2014-03-20 13:19:24.967485] D [nfs3-helpers.c:3449:nfs3_log_newfh_res]<br>
> > 0-nfs-nfsv3: XID: a0d2b6b3, CREATE: NFS: 0(Call completed successfully.), POSIX:<br>
> > 0(Success), FH: exportid c172abbc-6cc8-4b65-ad35-c34f10b53869, gfid<br>
> > be79d2ff-0339-435f-ac36-b313e089e245<br>
> > [Thread 0x7ffff0e27700 (LWP 18236) exited]<br>
> > [Thread 0x7ffff4852700 (LWP 18231) exited]<br>
> > [Thread 0x7ffff568a700 (LWP 18230) exited]<br>
> > [Thread 0x7ffff608b700 (LWP 18229) exited]<br>
> > [Thread 0x7ffff6a8c700 (LWP 18228) exited]<br>
> ><br>
> > Program terminated with signal SIGKILL, Killed.<br>
> > The program no longer exists.<br>
> > (gdb)<br>
> ><br>
> > Regards,<br>
> > Jens<br>
> > _______________________________________________<br>
> > Gluster-users mailing list<br>
> > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> > <a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br>
><br>
<br>
-----------------------------------------------------------------------<br>
'In theory, there is no difference between theory and practice.<br>
But, in practice, there is.'<br>
-----------------------------------------------------------------------<br>
Jens Låås Email: <a href="mailto:jens.laas@its.uu.se">jens.laas@its.uu.se</a><br>
ITS Phone: <a href="tel:%2B46%2018%20471%2077%2003" value="+46184717703">+46 18 471 77 03</a><br>
SWEDEN<br>
-----------------------------------------------------------------------<br>
</blockquote></div><br></div>