<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px;
}
body.hmmessage
{
font-size:12pt;
font-family:Calibri;
}
--></style></head>
<body class='hmmessage'><div dir='ltr'>Hi,<br>I can confirm that memory usage is perfectly normal now (about 100 MiB), having simply disabled DRC.<br><br>Many thanks,<br>Giuseppe<br><br><div><hr id="stopSpelling">From: giuseppe.ragusa@hotmail.com<br>To: vbellur@redhat.com; gluster-devel@nongnu.org<br>Date: Fri, 28 Mar 2014 00:27:07 +0100<br>Subject: Re: [Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak<br><br>
<style><!--
.ExternalClass .ecxhmmessage P {
padding:0px;
}
.ExternalClass body.ecxhmmessage {
font-size:12pt;
font-family:Calibri;
}
--></style>
<div dir="ltr">Hi,<br><br><div>> Date: Thu, 27 Mar 2014 09:26:10 +0530<br>> From: vbellur@redhat.com<br>> To: giuseppe.ragusa@hotmail.com; gluster-devel@nongnu.org<br>> Subject: Re: [Gluster-devel] Gluster 3.5 (latest nightly) NFS memleak<br>> <br>> On 03/27/2014 03:29 AM, Giuseppe Ragusa wrote:<br>> > Hi all,<br>> > I'm running glusterfs-3.5.20140324.4465475-1.autobuild (from published<br>> > nightly rpm packages) on CentOS 6.5 as storage solution for oVirt 3.4.0<br>> > (latest snapshot too) on 2 physical nodes (12 GiB RAM) with<br>> > self-hosted-engine.<br>> ><br>> > I suppose this should be a good "selling point" for Gluster/oVirt and I<br>> > have solved almost all my oVirt problems but one remains:<br>> > Gluster-provided NFS (as a storage domain for oVirt self-hosted-engine)<br>> > grows (from reboot) to about 8 GiB RAM usage (I even had it die before,<br>> > when put under cgroup memory restrictions) in about one day of no actual<br>> > usage (only the oVirt Engine VM is running on one node with no other<br>> > operations done on it or the whole cluster).<br>> ><br>> > I have seen similar reports on users and devel mailing lists and I'm<br>> > wondering how I can help in diagnosing this and/or if it would be better<br>> > to rely on latest 3.4.x Gluster (but it seems that the stable line has<br>> > had its share of memleaks too...).<br>> ><br>> <br>> Can you please check if turning off drc through:<br>> <br>> volume set <volname> nfs.drc off<br>> <br>> helps?<br>> <br>> -Vijay<br><br>I'm reinstalling just now to start from scratch with clean logs, configuration etc.<br>I will report after one day of activity, but from the old system I can already confirm that I had plenty of logs containing:<br><br><pre class="ecxbz_comment_text ecxbz_wrap_comment_text">0-rpc-service: DRC failed to detect duplicates<br><br><br>like in BZ#1008301<br></pre>Many thanks for your suggestion.<br><br>Regards,<br>Giuseppe<br><br></div>                                            
</div>
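For anyone landing on this thread with the same symptom, the workaround above boils down to one volume option. A minimal sketch follows; the volume name <code>gv0</code> is a placeholder, substitute your own:<br>

```shell
# Disable the NFS duplicate request cache (DRC), the component
# implicated in the memory growth discussed in this thread.
# "gv0" is a placeholder; use your actual Gluster volume name.
gluster volume set gv0 nfs.drc off

# Verify the setting: options changed via "volume set" are listed
# under the "Options Reconfigured" section of volume info.
gluster volume info gv0 | grep -i nfs.drc
```

After setting the option, watching the resident memory of the Gluster NFS process (e.g. with <code>top</code>) over a day of activity is a reasonable way to confirm the leak is gone, as reported above.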
<br>_______________________________________________
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel</div>                                            </div></body>
</html>