<p dir="ltr">No firewalls in this case... </p>
<p dir="ltr">--<br>
Gene Liverman<br>
Systems Administrator<br>
Information Technology Services<br>
University of West Georgia<br>
<a href="mailto:gliverma@westga.edu">gliverma@westga.edu</a></p>
<p dir="ltr"> </p>
<div class="gmail_quote">On Jun 10, 2014 12:57 PM, "Paul Robert Marino" <<a href="mailto:prmarino1@gmail.com">prmarino1@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I've also seen this happen when there was a firewall in the middle and<br>
nfslockd malfunctioned because of it.<br>
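<br>
(For reference, a rough way to rule a firewall out in that scenario — the hostname below is just a placeholder — is something like:)<br>
<br>
# rpcinfo -p $NFS_SERVER      # list the ports the nfs, mountd and nlockmgr services registered<br>
# showmount -e $NFS_SERVER    # confirm the MOUNT service answers from the client side<br>
# iptables -L -n              # on the server, check those ports are actually allowed through<br>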
<br>
<br>
On Tue, Jun 10, 2014 at 12:20 PM, Gene Liverman <<a href="mailto:gliverma@westga.edu">gliverma@westga.edu</a>> wrote:<br>
> Thanks! I turned off drc as suggested and will have to wait and see how that<br>
> works. Here are the packages I have installed via yum:<br>
> # rpm -qa |grep -i gluster<br>
> glusterfs-cli-3.5.0-2.el6.x86_64<br>
> glusterfs-libs-3.5.0-2.el6.x86_64<br>
> glusterfs-fuse-3.5.0-2.el6.x86_64<br>
> glusterfs-server-3.5.0-2.el6.x86_64<br>
> glusterfs-3.5.0-2.el6.x86_64<br>
> glusterfs-geo-replication-3.5.0-2.el6.x86_64<br>
><br>
> The NFS server service was showing as running even when stuff wasn't<br>
> working. This is from while it was broken:<br>
><br>
> # gluster volume status<br>
> Status of volume: gv0<br>
> Gluster process                                      Port   Online  Pid<br>
> ------------------------------------------------------------------------------<br>
> Brick eapps-gluster01.my.domain:/export/sdb1/gv0     49152  Y       39593<br>
> Brick eapps-gluster02.my.domain:/export/sdb1/gv0     49152  Y       2472<br>
> Brick eapps-gluster03.my.domain:/export/sdb1/gv0     49152  Y       1866<br>
> NFS Server on localhost                              2049   Y       39603<br>
> Self-heal Daemon on localhost                        N/A    Y       39610<br>
> NFS Server on eapps-gluster03.my.domain              2049   Y       35125<br>
> Self-heal Daemon on eapps-gluster03.my.domain        N/A    Y       35132<br>
> NFS Server on eapps-gluster02.my.domain              2049   Y       37103<br>
> Self-heal Daemon on eapps-gluster02.my.domain        N/A    Y       37110<br>
><br>
> Task Status of Volume gv0<br>
> ---------------------------------------------------------------------------------------------------------------<br>
><br>
><br>
> Running 'service glusterd restart' on the NFS server made things start<br>
> working again after this.<br>
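><br>
> (A rough recovery/verification sequence along those lines, using my volume name gv0 as an example:)<br>
><br>
> # service glusterd restart      # restarting glusterd respawns the gluster NFS process<br>
> # gluster volume status gv0     # the "NFS Server on localhost" line should show Online = Y again<br>
> # showmount -e localhost        # confirm the volume is exported over NFS once more<br>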
><br>
><br>
> -- Gene<br>
><br>
><br>
><br>
><br>
> On Tue, Jun 10, 2014 at 12:10 PM, Niels de Vos <<a href="mailto:ndevos@redhat.com">ndevos@redhat.com</a>> wrote:<br>
>><br>
>> On Tue, Jun 10, 2014 at 11:32:50AM -0400, Gene Liverman wrote:<br>
>> > Twice now I have had my nfs connection to a replicated gluster volume<br>
>> > stop<br>
>> > responding. On both servers that connect to the system I have the<br>
>> > following<br>
>> > symptoms:<br>
>> ><br>
>> > 1. Accessing the mount with the native client is still working fine<br>
>> > (the volume is mounted both that way and via NFS; one app requires<br>
>> > the NFS version).<br>
>> > 2. The logs have messages stating the following: "kernel: nfs: server<br>
>> > my-servers-name not responding, still trying"<br>
>> ><br>
>> > How can I fix this?<br>
>><br>
>> You should check if the NFS-server (a glusterfs process) is still<br>
>> running:<br>
>><br>
>> # gluster volume status<br>
>><br>
>> If the NFS-server is not running anymore, you can start it with:<br>
>><br>
>> # gluster volume start $VOLUME force<br>
>> (you only need to do that for one volume)<br>
>><br>
>><br>
>> In case this is with GlusterFS 3.5, you may be hitting a memory leak in<br>
>> the DRC (Duplicate Request Cache) implementation of the NFS-server. You<br>
>> can disable DRC with this:<br>
>><br>
>> # gluster volume set $VOLUME nfs.drc off<br>
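>><br>
>> To confirm the option took effect, 'gluster volume info $VOLUME' should<br>
>> then list nfs.drc under "Options Reconfigured":<br>
>><br>
>> # gluster volume info $VOLUME<br>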
>><br>
>> In glusterfs-3.5.1, DRC will be disabled by default; there have been too<br>
>> many issues with DRC to enable it for everyone. We need to do more testing<br>
>> and fix DRC in the current development (master) branch.<br>
>><br>
>> HTH,<br>
>> Niels<br>
><br>
><br>
><br>
> _______________________________________________<br>
> Gluster-users mailing list<br>
> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> <a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div>