<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 10pt;
font-family:Tahoma
}
--></style>
</head>
<body class='hmmessage'>
Hi,<br><br>I have the same problem as Juergen.<br>My volume is a simple replicated volume with 2 hosts, running GlusterFS 3.2.0:<br><br>Volume Name: poolsave<br>Type: Replicate<br>Status: Started<br>Number of Bricks: 2<br>Transport-type: tcp<br>Bricks:<br>Brick1: ylal2950:/soft/gluster-data<br>Brick2: ylal2960:/soft/gluster-data<br>Options Reconfigured:<br>diagnostics.brick-log-level: DEBUG<br>network.ping-timeout: 20<br>performance.cache-size: 512MB<br>nfs.port: 2049<br><br>I'm running this command:<br><br>I get these errors:<br>tar: ./uvs00: owner not changed<br>tar: could not stat ./uvs00/log/0906uvsGESEC.log<br>tar: ./uvs00: group not changed<br>tar: could not stat ./uvs00/log/0306uvsGESEC.log<br>tar: ./uvs00/log: Input/output error<br>cannot change back?: Unknown error 526<br>tar: ./uvs00/log: owner not changed<br>tar: ./uvs00/log: group not changed<br>tar: tape blocksize error<br><br>Then I tried an "ls" on the gluster mount:<br>/bin/ls: .: Input/output error<br><br>The only way to recover is to restart the volume.<br><br><br>Here is the log file in debug mode:<br><br><br>Given volfile:<br>+------------------------------------------------------------------------------+<br> 1: volume poolsave-client-0<br> 2: type protocol/client<br> 3: option remote-host ylal2950<br> 4: option remote-subvolume /soft/gluster-data<br> 5: option transport-type tcp<br> 6: option ping-timeout 20<br> 7: end-volume<br> 8: <br> 9: volume poolsave-client-1<br> 10: type protocol/client<br> 11: option remote-host ylal2960<br> 12: option remote-subvolume /soft/gluster-data<br> 13: option transport-type tcp<br> 14: option ping-timeout 20<br> 15: end-volume<br> 16: <br> 17: volume poolsave-replicate-0<br> 18: type cluster/replicate<br> 19: subvolumes poolsave-client-0 poolsave-client-1<br> 20: end-volume<br> 21: <br> 22: volume poolsave-write-behind<br> 23: type performance/write-behind<br> 24: subvolumes poolsave-replicate-0<br> 25: end-volume<br> 26: <br> 27: volume poolsave-read-ahead<br> 28: type 
performance/read-ahead<br> 29: subvolumes poolsave-write-behind<br> 30: end-volume<br> 31: <br> 32: volume poolsave-io-cache<br> 33: type performance/io-cache<br> 34: option cache-size 512MB<br> 35: subvolumes poolsave-read-ahead<br> 36: end-volume<br> 37: <br> 38: volume poolsave-quick-read<br> 39: type performance/quick-read<br> 40: option cache-size 512MB<br> 41: subvolumes poolsave-io-cache<br> 42: end-volume<br> 43: <br> 44: volume poolsave-stat-prefetch<br> 45: type performance/stat-prefetch<br> 46: subvolumes poolsave-quick-read<br> 47: end-volume<br> 48: <br> 49: volume poolsave<br> 50: type debug/io-stats<br> 51: option latency-measurement off<br> 52: option count-fop-hits off<br> 53: subvolumes poolsave-stat-prefetch<br> 54: end-volume<br> 55: <br> 56: volume nfs-server<br> 57: type nfs/server<br> 58: option nfs.dynamic-volumes on<br> 59: option rpc-auth.addr.poolsave.allow *<br> 60: option nfs3.poolsave.volume-id 71e0dabf-4620-4b6d-b138-3266096b93b6<br> 61: option nfs.port 2049<br> 62: subvolumes poolsave<br> 63: end-volume<br><br>+------------------------------------------------------------------------------+<br>[2011-06-09 16:52:23.709018] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 0-poolsave-client-0: changing port to 24014 (from 0)<br>[2011-06-09 16:52:23.709211] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 0-poolsave-client-1: changing port to 24011 (from 0)<br>[2011-06-09 16:52:27.716417] I [client-handshake.c:1080:select_server_supported_programs] 0-poolsave-client-0: Using Program GlusterFS-3.1.0, Num (1298437), Version (310)<br>[2011-06-09 16:52:27.716650] I [client-handshake.c:913:client_setvolume_cbk] 0-poolsave-client-0: Connected to 10.68.217.85:24014, attached to remote volume '/soft/gluster-data'.<br>[2011-06-09 16:52:27.716679] I [afr-common.c:2514:afr_notify] 0-poolsave-replicate-0: Subvolume 'poolsave-client-0' came back up; going online.<br>[2011-06-09 16:52:27.717020] I [afr-common.c:836:afr_fresh_lookup_cbk] 0-poolsave-replicate-0: added root 
inode<br>[2011-06-09 16:52:27.729719] I [client-handshake.c:1080:select_server_supported_programs] 0-poolsave-client-1: Using Program GlusterFS-3.1.0, Num (1298437), Version (310)<br>[2011-06-09 16:52:27.730014] I [client-handshake.c:913:client_setvolume_cbk] 0-poolsave-client-1: Connected to 10.68.217.86:24011, attached to remote volume '/soft/gluster-data'.<br>[2011-06-09 17:01:35.537084] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_mkdir+0x1cc) [0x2aaaab3b88fc] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_mkdir+0x151) [0x2aaaab2948e1] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_mkdir+0xd2) [0x2aaaab1856c2]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present<br>[2011-06-09 17:01:35.546601] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present<br>[2011-06-09 17:01:35.569755] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Directory not empty<br>[2011-06-09 17:01:35.569881] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Directory not empty<br>[2011-06-09 17:01:35.579674] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_mkdir+0x1cc) [0x2aaaab3b88fc] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_mkdir+0x151) 
[0x2aaaab2948e1] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_mkdir+0xd2) [0x2aaaab1856c2]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present<br>[2011-06-09 17:01:35.587907] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present<br>[2011-06-09 17:01:35.612918] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present<br>[2011-06-09 17:01:35.645357] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present<br>[2011-06-09 17:01:35.660873] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 
0-poolsave-client-0: remote operation failed: Directory not empty<br>[2011-06-09 17:01:35.660955] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Directory not empty<br>[2011-06-09 17:01:35.665933] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Directory not empty<br>[2011-06-09 17:01:35.666057] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Directory not empty<br>[2011-06-09 17:01:35.671199] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: Directory not empty<br>[2011-06-09 17:01:35.671241] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Directory not empty<br>[2011-06-09 17:01:35.680959] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present<br>[2011-06-09 17:01:35.715633] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present<br>[2011-06-09 17:01:35.732798] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0: remote operation failed: 
Permission denied<br>[2011-06-09 17:01:35.733044] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1: remote operation failed: Permission denied<br>[2011-06-09 17:01:35.750009] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx] (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb] (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5] (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc) [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0 gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present<br>[2011-06-09 17:01:35.784610] W [socket.c:1494:__socket_proto_state_machine] 0-poolsave-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.68.217.85:24014)<br>[2011-06-09 17:01:35.784745] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-0: forced unwinding frame type(GlusterFS 3.1) op(SETATTR(38)) called at 2011-06-09 17:01:35.752080<br>[2011-06-09 17:01:35.784770] I [client3_1-fops.c:1640:client3_1_setattr_cbk] 0-poolsave-client-0: remote operation failed: Transport endpoint is not connected<br>[2011-06-09 17:01:35.784811] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-0: forced unwinding frame type(GlusterFS 3.1) op(STAT(1)) called at 2011-06-09 17:01:35.752414<br>[2011-06-09 17:01:35.784828] I [client3_1-fops.c:411:client3_1_stat_cbk] 
0-poolsave-client-0: remote operation failed: Transport endpoint is not connected<br>[2011-06-09 17:01:35.784875] I [client.c:1883:client_rpc_notify] 0-poolsave-client-0: disconnected<br>[2011-06-09 17:01:35.785400] W [socket.c:204:__socket_rwv] 0-poolsave-client-1: readv failed (Connection reset by peer)<br>[2011-06-09 17:01:35.785435] W [socket.c:1494:__socket_proto_state_machine] 0-poolsave-client-1: reading from socket failed. Error (Connection reset by peer), peer (10.68.217.86:24011)<br>[2011-06-09 17:01:35.785496] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-1: forced unwinding frame type(GlusterFS 3.1) op(SETATTR(38)) called at 2011-06-09 17:01:35.752089<br>[2011-06-09 17:01:35.785516] I [client3_1-fops.c:1640:client3_1_setattr_cbk] 0-poolsave-client-1: remote operation failed: Transport endpoint is not connected<br>[2011-06-09 17:01:35.785542] W [client3_1-fops.c:4379:client3_1_xattrop] 0-poolsave-client-0: failed to send the fop: Transport endpoint is not connected<br>[2011-06-09 17:01:35.817662] I [socket.c:2272:socket_submit_request] 0-poolsave-client-1: not connected (priv->connected = 0)<br>[2011-06-09 17:01:35.817698] W [rpc-clnt.c:1411:rpc_clnt_submit] 0-poolsave-client-1: failed to submit rpc-request (XID: 0x576x Program: GlusterFS 3.1, ProgVers: 310, Proc: 33) to rpc-transport (poolsave-client-1)<br>[2011-06-09 17:01:35.817721] W [client3_1-fops.c:4735:client3_1_inodelk] 0-poolsave-client-0: failed to send the fop: Transport endpoint is not connected<br>[2011-06-09 17:01:35.817744] W [rpc-clnt.c:1411:rpc_clnt_submit] 0-poolsave-client-1: failed to submit rpc-request (XID: 0x577x Program: GlusterFS 3.1, ProgVers: 310, Proc: 29) to rpc-transport (poolsave-client-1)<br>[2011-06-09 17:01:35.817780] I 
[client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-poolsave-client-1: remote operation failed: Transport endpoint is not connected<br>[2011-06-09 17:01:35.817897] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-1: forced unwinding frame type(GlusterFS 3.1) op(STAT(1)) called at 2011-06-09 17:01:35.784870<br>[2011-06-09 17:01:35.817918] I [client3_1-fops.c:411:client3_1_stat_cbk] 0-poolsave-client-1: remote operation failed: Transport endpoint is not connected<br>[2011-06-09 17:01:35.817969] I [client.c:1883:client_rpc_notify] 0-poolsave-client-1: disconnected<br>[2011-06-09 17:01:35.817988] E [afr-common.c:2546:afr_notify] 0-poolsave-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.<br>[2011-06-09 17:01:35.818007] E [socket.c:1685:socket_connect_finish] 0-poolsave-client-1: connection to 10.68.217.86:24011 failed (Connection refused)<br>[2011-06-09 17:01:35.818606] I [afr.h:838:AFR_LOCAL_INIT] 0-poolsave-replicate-0: no subvolumes up<br>[2011-06-09 17:01:35.819129] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00/log: no child is up<br>[2011-06-09 17:01:35.819354] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00/log: no child is up<br>[2011-06-09 17:01:35.820090] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00: no child is up<br>[2011-06-09 17:01:35.820760] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up<br>[2011-06-09 17:01:35.821212] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up<br>[2011-06-09 17:01:35.821600] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up<br>[2011-06-09 17:01:35.822123] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no 
child is up<br>[2011-06-09 17:01:35.822511] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up<br>[2011-06-09 17:01:35.822975] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up<br>[2011-06-09 17:01:35.823286] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up<br>[2011-06-09 17:01:35.823583] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up<br>[2011-06-09 17:01:35.823857] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up<br>[2011-06-09 17:01:47.518006] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up<br>[2011-06-09 17:01:49.39204] E [socket.c:1685:socket_connect_finish] 0-poolsave-client-0: connection to 10.68.217.85:24014 failed (Connection refused)<br>[2011-06-09 17:01:49.136932] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up<br><br><br><br>> Message: 7<br>> Date: Thu, 9 Jun 2011 12:56:39 +0530<br>> From: Shehjar Tikoo <shehjart@gluster.com><br>> Subject: Re: [Gluster-users] Glusterfs 3.2.0 NFS Problem<br>> To: Jürgen Winkler <juergen.winkler@xidras.com><br>> Cc: gluster-users@gluster.org<br>> Message-ID: <4DF075AF.3040509@gluster.com><br>> Content-Type: text/plain; charset="us-ascii"; format=flowed<br>> <br>> This can happen if all your servers were unreachable for a few seconds. The <br>> situation must have rectified during the restart. 
We could confirm if you <br>> change the log level on nfs to DEBUG and send us the log.<br>> <br>> Thanks<br>> -Shehjar<br>> <br>> Jürgen Winkler wrote:<br>> > Hi,<br>> > <br>> > I noticed a strange behavior with NFS and Glusterfs 3.2.0: 3 of our <br>> > servers are losing the mount, but when you restart the volume on the <br>> > server it works again without a remount.<br>> > <br>> > On the server I noticed these entries in the Glusterfs/NFS log file when <br>> > the mount on the client becomes unavailable:<br>> > <br>> > [2011-06-08 14:37:02.568693] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:02.569212] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:02.611910] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:02.624477] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.288272] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.296150] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.309247] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.320939] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.321786] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.333609] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.334089] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.344662] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.352666] I [afr-inode-read.c:270:afr_stat] <br>> > 
0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.354195] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.360446] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.369331] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.471556] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:04.480013] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:05.639700] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:05.652535] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:07.578469] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:07.588949] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:07.590395] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:07.591414] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:07.591932] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:07.592596] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:07.639317] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:07.652919] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:09.332435] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:09.340622] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > 
[2011-06-08 14:37:09.349360] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:09.349550] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:09.360445] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:09.369497] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:09.369752] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:09.382097] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > [2011-06-08 14:37:09.382387] I [afr-inode-read.c:270:afr_stat] <br>> > 0-ksc-replicate-0: /: no child is up<br>> > <br>> > <br>> > Thx for the help<br>> > <br>> > _______________________________________________<br>> > Gluster-users mailing list<br>> > Gluster-users@gluster.org<br>> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users<br>> <br>> <br>> <br>> ------------------------------<br>> <br>> _______________________________________________<br>> Gluster-users mailing list<br>> Gluster-users@gluster.org<br>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users<br>> <br>> <br>> End of Gluster-users Digest, Vol 38, Issue 14<br>> *********************************************<br>                                            </body>
</html>