Hi,

I have the same problem as Juergen. My volume is a simple replicated volume across two hosts, running GlusterFS 3.2.0:

Volume Name: poolsave
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/soft/gluster-data
Brick2: ylal2960:/soft/gluster-data
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
network.ping-timeout: 20
performance.cache-size: 512MB
nfs.port: 2049

I'm running this command:

I get these errors:

tar: ./uvs00: owner not changed
tar: could not stat ./uvs00/log/0906uvsGESEC.log
tar: ./uvs00: group not changed
tar: could not stat ./uvs00/log/0306uvsGESEC.log
tar: ./uvs00/log: Input/output error
cannot change back?: Unknown error 526
tar: ./uvs00/log: owner not changed
tar: ./uvs00/log: group not changed
tar: tape blocksize error

Then I tried to "ls" in the gluster mount:

/bin/ls: .: Input/output error

The only way to recover is to restart the volume.
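For reference, restarting goes roughly like this (it takes the volume offline briefly, so it is a workaround rather than a fix):

    gluster volume stop poolsave
    gluster volume start poolsave

    # afterwards the existing NFS mount on the client works again
    # without a remount; /mnt/poolsave is just a placeholder for the
    # client-side mount point
    ls /mnt/poolsave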
Here is the log file in DEBUG mode:

Given volfile:
+------------------------------------------------------------------------------+
  1: volume poolsave-client-0
  2:     type protocol/client
  3:     option remote-host ylal2950
  4:     option remote-subvolume /soft/gluster-data
  5:     option transport-type tcp
  6:     option ping-timeout 20
  7: end-volume
  8: 
  9: volume poolsave-client-1
 10:     type protocol/client
 11:     option remote-host ylal2960
 12:     option remote-subvolume /soft/gluster-data
 13:     option transport-type tcp
 14:     option ping-timeout 20
 15: end-volume
 16: 
 17: volume poolsave-replicate-0
 18:     type cluster/replicate
 19:     subvolumes poolsave-client-0 poolsave-client-1
 20: end-volume
 21: 
 22: volume poolsave-write-behind
 23:     type performance/write-behind
 24:     subvolumes poolsave-replicate-0
 25: end-volume
 26: 
 27: volume poolsave-read-ahead
 28:     type performance/read-ahead
 29:     subvolumes poolsave-write-behind
 30: end-volume
 31: 
 32: volume poolsave-io-cache
 33:     type performance/io-cache
 34:     option cache-size 512MB
 35:     subvolumes poolsave-read-ahead
 36: end-volume
 37: 
 38: volume poolsave-quick-read
 39:     type performance/quick-read
 40:     option cache-size 512MB
 41:     subvolumes poolsave-io-cache
 42: end-volume
 43: 
 44: volume poolsave-stat-prefetch
 45:     type performance/stat-prefetch
 46:     subvolumes poolsave-quick-read
 47: end-volume
 48: 
 49: volume poolsave
 50:     type debug/io-stats
 51:     option latency-measurement off
 52:     option count-fop-hits off
 53:     subvolumes poolsave-stat-prefetch
 54: end-volume
 55: 
 56: volume nfs-server
 57:     type nfs/server
 58:     option nfs.dynamic-volumes on
 59:     option rpc-auth.addr.poolsave.allow *
 60:     option nfs3.poolsave.volume-id 71e0dabf-4620-4b6d-b138-3266096b93b6
 61:     option nfs.port 2049
 62:     subvolumes poolsave
 63: end-volume
+------------------------------------------------------------------------------+
[2011-06-09 16:52:23.709018] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 0-poolsave-client-0: changing port to 24014 (from 0)
[2011-06-09 16:52:23.709211] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 0-poolsave-client-1: changing port to 24011 (from 0)
[2011-06-09 16:52:27.716417] I [client-handshake.c:1080:select_server_supported_programs] 0-poolsave-client-0: Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2011-06-09 16:52:27.716650] I [client-handshake.c:913:client_setvolume_cbk] 0-poolsave-client-0: Connected to 10.68.217.85:24014, attached to remote volume '/soft/gluster-data'.
[2011-06-09 16:52:27.716679] I [afr-common.c:2514:afr_notify] 0-poolsave-replicate-0: Subvolume 'poolsave-client-0' came back up; going online.
[2011-06-09 16:52:27.717020] I [afr-common.c:836:afr_fresh_lookup_cbk] 0-poolsave-replicate-0: added root inode
[2011-06-09 16:52:27.729719] I [client-handshake.c:1080:select_server_supported_programs] 0-poolsave-client-1: Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2011-06-09 16:52:27.730014] I [client-handshake.c:913:client_setvolume_cbk] 0-poolsave-client-1: Connected to 10.68.217.86:24011, attached to remote volume '/soft/gluster-data'.
[2011-06-09 17:01:35.537084] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_mkdir+0x1cc) [0x2aaaab3b88fc] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_mkdir+0x151) [0x2aaaab2948e1] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_mkdir+0xd2) [0x2aaaab1856c2]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.546601] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.569755] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Directory not empty
[2011-06-09 17:01:35.569881] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Directory not empty
[2011-06-09 17:01:35.579674] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_mkdir+0x1cc) [0x2aaaab3b88fc] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_mkdir+0x151) [0x2aaaab2948e1] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_mkdir+0xd2) [0x2aaaab1856c2]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.587907] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.612918] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.645357] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.660873] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Directory not empty
[2011-06-09 17:01:35.660955] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Directory not empty
[2011-06-09 17:01:35.665933] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Directory not empty
[2011-06-09 17:01:35.666057] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Directory not empty
[2011-06-09 17:01:35.671199] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Directory not empty
[2011-06-09 17:01:35.671241] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Directory not empty
[2011-06-09 17:01:35.680959] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.715633] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.732798] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Permission denied
[2011-06-09 17:01:35.733044] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Permission denied
[2011-06-09 17:01:35.750009] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.784610] W [socket.c:1494:__socket_proto_state_machine] 0-poolsave-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.68.217.85:24014)
[2011-06-09 17:01:35.784745] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-0: forced unwinding frame type(GlusterFS 3.1) op(SETATTR(38)) called at 2011-06-09 17:01:35.752080
[2011-06-09 17:01:35.784770] I [client3_1-fops.c:1640:client3_1_setattr_cbk] 0-poolsave-client-0: remote operation failed: Transport endpoint is not connected
[2011-06-09 17:01:35.784811] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-0: forced unwinding frame type(GlusterFS 3.1) op(STAT(1)) called at 2011-06-09 17:01:35.752414
[2011-06-09 17:01:35.784828] I [client3_1-fops.c:411:client3_1_stat_cbk] 0-poolsave-client-0: remote operation failed: Transport endpoint is not connected
[2011-06-09 17:01:35.784875] I [client.c:1883:client_rpc_notify] 0-poolsave-client-0: disconnected
[2011-06-09 17:01:35.785400] W [socket.c:204:__socket_rwv] 0-poolsave-client-1: readv failed (Connection reset by peer)
[2011-06-09 17:01:35.785435] W [socket.c:1494:__socket_proto_state_machine] 0-poolsave-client-1: reading from socket failed. Error (Connection reset by peer), peer (10.68.217.86:24011)
[2011-06-09 17:01:35.785496] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-1: forced unwinding frame type(GlusterFS 3.1) op(SETATTR(38)) called at 2011-06-09 17:01:35.752089
[2011-06-09 17:01:35.785516] I [client3_1-fops.c:1640:client3_1_setattr_cbk] 0-poolsave-client-1: remote operation failed: Transport endpoint is not connected
[2011-06-09 17:01:35.785542] W [client3_1-fops.c:4379:client3_1_xattrop] 0-poolsave-client-0: failed to send the fop: Transport endpoint is not connected
[2011-06-09 17:01:35.817662] I [socket.c:2272:socket_submit_request] 0-poolsave-client-1: not connected (priv->connected = 0)
[2011-06-09 17:01:35.817698] W [rpc-clnt.c:1411:rpc_clnt_submit] 0-poolsave-client-1: failed to submit rpc-request (XID: 0x576x Program: GlusterFS 3.1, ProgVers: 310, Proc: 33) to rpc-transport (poolsave-client-1)
[2011-06-09 17:01:35.817721] W [client3_1-fops.c:4735:client3_1_inodelk] 0-poolsave-client-0: failed to send the fop: Transport endpoint is not connected
[2011-06-09 17:01:35.817744] W [rpc-clnt.c:1411:rpc_clnt_submit] 0-poolsave-client-1: failed to submit rpc-request (XID: 0x577x Program: GlusterFS 3.1, ProgVers: 310, Proc: 29) to rpc-transport (poolsave-client-1)
[2011-06-09 17:01:35.817780] I [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-poolsave-client-1: remote operation failed: Transport endpoint is not connected
[2011-06-09 17:01:35.817897] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-1: forced unwinding frame type(GlusterFS 3.1) op(STAT(1)) called at 2011-06-09 17:01:35.784870
[2011-06-09 17:01:35.817918] I [client3_1-fops.c:411:client3_1_stat_cbk] 0-poolsave-client-1: remote operation failed: Transport endpoint is not connected
[2011-06-09 17:01:35.817969] I [client.c:1883:client_rpc_notify] 0-poolsave-client-1: disconnected
[2011-06-09 17:01:35.817988] E [afr-common.c:2546:afr_notify] 0-poolsave-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2011-06-09 17:01:35.818007] E [socket.c:1685:socket_connect_finish] 0-poolsave-client-1: connection to 10.68.217.86:24011 failed (Connection refused)
[2011-06-09 17:01:35.818606] I [afr.h:838:AFR_LOCAL_INIT] 0-poolsave-replicate-0: no subvolumes up
[2011-06-09 17:01:35.819129] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00/log: no child is up
[2011-06-09 17:01:35.819354] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00/log: no child is up
[2011-06-09 17:01:35.820090] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00: no child is up
[2011-06-09 17:01:35.820760] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.821212] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.821600] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.822123] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.822511] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.822975] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.823286] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.823583] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.823857] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:47.518006] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:49.39204] E [socket.c:1685:socket_connect_finish] 0-poolsave-client-0: connection to 10.68.217.85:24014 failed (Connection refused)
[2011-06-09 17:01:49.136932] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up

> Date: Thu, 9 Jun 2011 12:56:39 +0530
> From: Shehjar Tikoo <shehjart@gluster.com>
> Subject: Re: [Gluster-users] Glusterfs 3.2.0 NFS Problem
> To: Jürgen Winkler <juergen.winkler@xidras.com>
> Cc: gluster-users@gluster.org
> 
> This can happen if all your servers were unreachable for a few seconds.
> The situation must have rectified itself during the restart. We could
> confirm this if you change the log level on NFS to DEBUG and send us
> the log.
> 
> Thanks
> -Shehjar
> 
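For anyone hitting the same issue, the log levels are ordinary volume options. A minimal sketch, assuming diagnostics.client-log-level also governs the NFS server process (it runs on the client-side translator stack); the brick-side variant is the one already set in the volume info above:

    gluster volume set poolsave diagnostics.brick-log-level DEBUG
    gluster volume set poolsave diagnostics.client-log-level DEBUG

With a source install under /usr/local, the NFS server log should then land in /usr/local/var/log/glusterfs/nfs.log.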
> Jürgen Winkler wrote:
> > Hi,
> > 
> > I noticed a strange behavior with NFS and GlusterFS 3.2.0: 3 of our
> > servers are losing the mount, but when you restart the volume on the
> > server it works again without a remount.
> > 
> > On the server I noticed these entries in the GlusterFS/NFS log file
> > when the mount on the client becomes unavailable:
> > 
> > [2011-06-08 14:37:02.568693] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:02.569212] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:02.611910] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:02.624477] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.288272] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.296150] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.309247] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.320939] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.321786] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.333609] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.334089] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.344662] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.352666] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.354195] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.360446] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.369331] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.471556] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:04.480013] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:05.639700] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:05.652535] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.578469] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.588949] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.590395] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.591414] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.591932] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.592596] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.639317] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:07.652919] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.332435] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.340622] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.349360] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.349550] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.360445] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.369497] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.369752] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.382097] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > [2011-06-08 14:37:09.382387] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
> > 
> > Thx for the help
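Since both reports end in the same "no child is up" state together with connection-refused errors, it may be worth checking whether the brick and NFS server processes are still alive on the servers when the hang occurs. A minimal sketch, to be run on each server; the grep patterns and the port number (24014, taken from the log above) are only illustrative:

    # peer membership as seen by glusterd
    gluster peer status

    # brick processes (one glusterfsd per brick) plus the glusterfs NFS server
    ps ax | grep gluster

    # is the brick port from the log still listening?
    netstat -tln | grep 24014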