<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
BTW, my bug entry to change that glibc and kernel error string (<a
href="https://bugzilla.redhat.com/show_bug.cgi?id=832694">https://bugzilla.redhat.com/show_bug.cgi?id=832694</a>)
has been accepted and patched. Just waiting for the fix to make its way
downstream.<br>
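For context, the "Stale NFS file handle" text in those warnings is simply the strerror(3) string for ESTALE, which GlusterFS passes through even on FUSE mounts where no NFS is involved. A minimal sketch (assuming a Linux host with python3 on the PATH) that prints whichever wording your installed glibc ships:

```shell
# Print the glibc error string for ESTALE (errno 116 on Linux).
# Older glibc prints "Stale NFS file handle"; once the patch from the
# bug report above lands downstream, it prints "Stale file handle".
python3 -c 'import os, errno; print(os.strerror(errno.ESTALE))'
```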
<br>
<div class="moz-cite-prefix">On 10/15/2013 2:54 PM, Justin Dossey
wrote:<br>
</div>
<blockquote
cite="mid:CAPMPShxkwLNJM738OQMfv9s5ia=99rpDUq0mM17tmOK3=+H8GQ@mail.gmail.com"
type="cite">
<div dir="ltr">I've seen these errors too on GlusterFS 3.3.1 nodes
with glusterfs-fuse mounts. It's particularly strange because
we're not using NFS to mount the volumes.</div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">On Tue, Oct 15, 2013 at 1:44 PM, Neil
Van Lysel <span dir="ltr">&lt;<a moz-do-not-send="true"
href="mailto:van-lyse@cs.wisc.edu" target="_blank">van-lyse@cs.wisc.edu</a>&gt;</span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
Hello!<br>
<br>
Many of our Gluster client nodes are seeing a lot of these
errors in their log files:<br>
<br>
[2013-10-15 06:48:59.467263] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-home-client-6: remote operation failed: Stale NFS file
handle. Path: /path (3cfbebf4-40e4-4300-aa6e-bd43b4310b94)<br>
[2013-10-15 06:48:59.467331] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-home-client-7: remote operation failed: Stale NFS file
handle. Path: /path (3cfbebf4-40e4-4300-aa6e-bd43b4310b94)<br>
[2013-10-15 06:48:59.470554] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-home-client-0: remote operation failed: Stale NFS file
handle. Path: /path (d662e7db-7864-4b18-b587-bdc5e8756076)<br>
[2013-10-15 06:48:59.470624] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-home-client-1: remote operation failed: Stale NFS file
handle. Path: /path (d662e7db-7864-4b18-b587-bdc5e8756076)<br>
[2013-10-15 06:49:04.537548] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-home-client-3: remote operation failed: Stale NFS file
handle. Path: /path (a4ea32e0-25f8-440d-b258-23430490624d)<br>
[2013-10-15 06:49:04.537651] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-home-client-2: remote operation failed: Stale NFS file
handle. Path: /path (a4ea32e0-25f8-440d-b258-23430490624d)<br>
[2013-10-15 06:49:14.380551] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-home-client-0: remote operation failed: Stale NFS file
handle. Path: /path (669a2d6b-2998-48b2-8f3f-93d5f65cdd87)<br>
[2013-10-15 06:49:14.380663] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-home-client-1: remote operation failed: Stale NFS file
handle. Path: /path (669a2d6b-2998-48b2-8f3f-93d5f65cdd87)<br>
[2013-10-15 06:49:14.386390] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-home-client-4: remote operation failed: Stale NFS file
handle. Path: /path (016aafa9-35ac-4f6f-90bd-b4ac5d435ad0)<br>
[2013-10-15 06:49:14.386471] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-home-client-5: remote operation failed: Stale NFS file
handle. Path: /path (016aafa9-35ac-4f6f-90bd-b4ac5d435ad0)<br>
[2013-10-15 18:28:10.630357] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-scratch-client-2: remote operation failed: Stale NFS file
handle. Path: /path (5d6153cc-64b3-4151-85cd-2646c33c6918)<br>
[2013-10-15 18:28:10.630425] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-scratch-client-3: remote operation failed: Stale NFS file
handle. Path: /path (5d6153cc-64b3-4151-85cd-2646c33c6918)<br>
[2013-10-15 18:28:10.636301] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-scratch-client-4: remote operation failed: Stale NFS file
handle. Path: /path (2f64b9fe-02a0-408b-9edb-0c5e5bf0ed0e)<br>
[2013-10-15 18:28:10.636377] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-scratch-client-5: remote operation failed: Stale NFS file
handle. Path: /path (2f64b9fe-02a0-408b-9edb-0c5e5bf0ed0e)<br>
[2013-10-15 18:28:10.638574] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-scratch-client-5: remote operation failed: Stale NFS file
handle. Path: /path (990de721-1fc9-461d-8412-8c17c23ebbbd)<br>
[2013-10-15 18:28:10.638647] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-scratch-client-4: remote operation failed: Stale NFS file
handle. Path: /path (990de721-1fc9-461d-8412-8c17c23ebbbd)<br>
[2013-10-15 18:28:10.645043] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-scratch-client-7: remote operation failed: Stale NFS file
handle. Path: /path (0d8d3c5a-d26e-4c15-a8d5-987a4033a6d0)<br>
[2013-10-15 18:28:10.645157] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-scratch-client-6: remote operation failed: Stale NFS file
handle. Path: /path (0d8d3c5a-d26e-4c15-a8d5-987a4033a6d0)<br>
[2013-10-15 18:28:10.648126] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-scratch-client-6: remote operation failed: Stale NFS file
handle. Path: /path (c1c84d57-f54d-4dc1-a5df-9be563da78fb)<br>
[2013-10-15 18:28:10.648276] W [client-rpc-fops.c:2624:client3_3_lookup_cbk]
0-scratch-client-7: remote operation failed: Stale NFS file
handle. Path: /path (c1c84d57-f54d-4dc1-a5df-9be563da78fb)<br>
<br>
<br>
How can I resolve these errors?<br>
<br>
<br>
*gluster --version:<br>
glusterfs 3.4.0 built on Jul 25 2013 04:12:27<br>
<br>
<br>
*gluster volume info:<br>
Volume Name: scratch<br>
Type: Distributed-Replicate<br>
Volume ID: 198b9d77-96e6-4c7f-9f0c-3618cbcaa940<br>
Status: Started<br>
Number of Bricks: 4 x 2 = 8<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 10.129.40.21:/data/glusterfs/brick1/scratch<br>
Brick2: 10.129.40.22:/data/glusterfs/brick1/scratch<br>
Brick3: 10.129.40.23:/data/glusterfs/brick1/scratch<br>
Brick4: 10.129.40.24:/data/glusterfs/brick1/scratch<br>
Brick5: 10.129.40.21:/data/glusterfs/brick2/scratch<br>
Brick6: 10.129.40.22:/data/glusterfs/brick2/scratch<br>
Brick7: 10.129.40.23:/data/glusterfs/brick2/scratch<br>
Brick8: 10.129.40.24:/data/glusterfs/brick2/scratch<br>
Options Reconfigured:<br>
features.quota: off<br>
<br>
Volume Name: home<br>
Type: Distributed-Replicate<br>
Volume ID: 0d8ebafc-471e-4b16-a4a9-787ce8616225<br>
Status: Started<br>
Number of Bricks: 4 x 2 = 8<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 10.129.40.21:/data/glusterfs/brick1/home<br>
Brick2: 10.129.40.22:/data/glusterfs/brick1/home<br>
Brick3: 10.129.40.23:/data/glusterfs/brick1/home<br>
Brick4: 10.129.40.24:/data/glusterfs/brick1/home<br>
Brick5: 10.129.40.21:/data/glusterfs/brick2/home<br>
Brick6: 10.129.40.22:/data/glusterfs/brick2/home<br>
Brick7: 10.129.40.23:/data/glusterfs/brick2/home<br>
Brick8: 10.129.40.24:/data/glusterfs/brick2/home<br>
Options Reconfigured:<br>
features.quota: off<br>
<br>
<br>
*gluster volume status:<br>
Status of volume: scratch<br>
Gluster process                                      Port   Online  Pid<br>
------------------------------------------------------------------------------<br>
Brick 10.129.40.21:/data/glusterfs/brick1/scratch    49154  Y       7536<br>
Brick 10.129.40.22:/data/glusterfs/brick1/scratch    49154  Y       27976<br>
Brick 10.129.40.23:/data/glusterfs/brick1/scratch    49154  Y       7436<br>
Brick 10.129.40.24:/data/glusterfs/brick1/scratch    49154  Y       19773<br>
Brick 10.129.40.21:/data/glusterfs/brick2/scratch    49155  Y       7543<br>
Brick 10.129.40.22:/data/glusterfs/brick2/scratch    49155  Y       27982<br>
Brick 10.129.40.23:/data/glusterfs/brick2/scratch    49155  Y       7442<br>
Brick 10.129.40.24:/data/glusterfs/brick2/scratch    49155  Y       19778<br>
NFS Server on localhost                              2049   Y       7564<br>
Self-heal Daemon on localhost                        N/A    Y       7569<br>
NFS Server on 10.129.40.24                           2049   Y       19788<br>
Self-heal Daemon on 10.129.40.24                     N/A    Y       19792<br>
NFS Server on 10.129.40.23                           2049   Y       7464<br>
Self-heal Daemon on 10.129.40.23                     N/A    Y       7468<br>
NFS Server on 10.129.40.22                           2049   Y       28004<br>
Self-heal Daemon on 10.129.40.22                     N/A    Y       28008<br>
<br>
There are no active volume tasks<br>
Status of volume: home<br>
Gluster process                                      Port   Online  Pid<br>
------------------------------------------------------------------------------<br>
Brick 10.129.40.21:/data/glusterfs/brick1/home       49152  Y       7549<br>
Brick 10.129.40.22:/data/glusterfs/brick1/home       49152  Y       27989<br>
Brick 10.129.40.23:/data/glusterfs/brick1/home       49152  Y       7449<br>
Brick 10.129.40.24:/data/glusterfs/brick1/home       49152  Y       19760<br>
Brick 10.129.40.21:/data/glusterfs/brick2/home       49153  Y       7554<br>
Brick 10.129.40.22:/data/glusterfs/brick2/home       49153  Y       27994<br>
Brick 10.129.40.23:/data/glusterfs/brick2/home       49153  Y       7454<br>
Brick 10.129.40.24:/data/glusterfs/brick2/home       49153  Y       19766<br>
NFS Server on localhost                              2049   Y       7564<br>
Self-heal Daemon on localhost                        N/A    Y       7569<br>
NFS Server on 10.129.40.24                           2049   Y       19788<br>
Self-heal Daemon on 10.129.40.24                     N/A    Y       19792<br>
NFS Server on 10.129.40.22                           2049   Y       28004<br>
Self-heal Daemon on 10.129.40.22                     N/A    Y       28008<br>
NFS Server on 10.129.40.23                           2049   Y       7464<br>
Self-heal Daemon on 10.129.40.23                     N/A    Y       7468<br>
<br>
There are no active volume tasks<br>
<br>
<br>
*The gluster volumes are mounted using the glusterfs-fuse
package (glusterfs-fuse-3.4.0-3.el6.x86_64) on the clients
like so:<br>
/sbin/mount.glusterfs 10.129.40.21:home /home<br>
/sbin/mount.glusterfs 10.129.40.21:scratch /scratch<br>
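(For anyone reading later: the same mounts can be made persistent via /etc/fstab. A sketch, assuming the stock mount.glusterfs helper from the glusterfs-fuse package; the _netdev option is the usual way on EL6 to delay the mount until networking is up.)

```shell
# Hypothetical /etc/fstab entries equivalent to the mount commands above:
#   10.129.40.21:home     /home     glusterfs  defaults,_netdev  0 0
#   10.129.40.21:scratch  /scratch  glusterfs  defaults,_netdev  0 0
# Or, interactively, via mount -t instead of calling the helper directly:
mount -t glusterfs 10.129.40.21:home /home
mount -t glusterfs 10.129.40.21:scratch /scratch
```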
<br>
<br>
*Gluster packages on Gluster servers:<br>
glusterfs-server-3.4.0-3.el6.x86_64<br>
glusterfs-libs-3.4.0-8.el6.x86_64<br>
glusterfs-3.4.0-3.el6.x86_64<br>
glusterfs-geo-replication-3.4.0-3.el6.x86_64<br>
glusterfs-fuse-3.4.0-3.el6.x86_64<br>
glusterfs-rdma-3.4.0-3.el6.x86_64<br>
<br>
<br>
*Gluster packages on clients:<br>
glusterfs-fuse-3.4.0-3.el6.x86_64<br>
glusterfs-3.4.0-3.el6.x86_64<br>
<br>
<br>
All clients and servers are running the same OS and kernel:<br>
<br>
*uname -a:<br>
Linux &lt;hostname&gt; 2.6.32-358.6.1.el6.x86_64 #1 SMP Tue
Apr 23 16:15:13 CDT 2013 x86_64 x86_64 x86_64 GNU/Linux<br>
<br>
*cat /etc/redhat-release:<br>
Scientific Linux release 6.3 (Carbon)<br>
<br>
<br>
Thanks for your help,<br>
<br>
Neil Van Lysel<br>
UNIX Systems Administrator<br>
Center for High Throughput Computing<br>
University of Wisconsin - Madison<br>
<br>
<br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a moz-do-not-send="true"
href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a moz-do-not-send="true"
href="http://supercolony.gluster.org/mailman/listinfo/gluster-users"
target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote>
</div>
<br>
<br clear="all">
<div><br>
</div>
-- <br>
Justin Dossey<br>
CTO, PodOmatic
<div><br>
</div>
</div>
<br>
</blockquote>
<br>
</body>
</html>