<div dir="ltr">mount<div><br></div><div><div>[2014-10-13 17:36:56.758654] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2 (/usr/sbin/glusterfs --direct-io-mode=enable --fuse-mountopts=default_permissions,allow_other,max_read=131072 --volfile-server=stor1 --volfile-server=stor2 --volfile-id=HA-WIN-TT-1T --fuse-mountopts=default_permissions,allow_other,max_read=131072 /srv/nfs/HA-WIN-TT-1T)</div><div>[2014-10-13 17:36:56.762162] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled</div><div>[2014-10-13 17:36:56.762223] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread</div><div>[2014-10-13 17:36:56.766686] I [dht-shared.c:311:dht_init_regex] 0-HA-WIN-TT-1T-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$</div><div>[2014-10-13 17:36:56.768887] I [socket.c:3561:socket_init] 0-HA-WIN-TT-1T-client-1: SSL support is NOT enabled</div><div>[2014-10-13 17:36:56.768939] I [socket.c:3576:socket_init] 0-HA-WIN-TT-1T-client-1: using system polling thread</div><div>[2014-10-13 17:36:56.769280] I [socket.c:3561:socket_init] 0-HA-WIN-TT-1T-client-0: SSL support is NOT enabled</div><div>[2014-10-13 17:36:56.769294] I [socket.c:3576:socket_init] 0-HA-WIN-TT-1T-client-0: using system polling thread</div><div>[2014-10-13 17:36:56.769336] I [client.c:2294:notify] 0-HA-WIN-TT-1T-client-0: parent translators are ready, attempting connect on transport</div><div>[2014-10-13 17:36:56.769829] I [client.c:2294:notify] 0-HA-WIN-TT-1T-client-1: parent translators are ready, attempting connect on transport</div><div>Final graph:</div><div>+------------------------------------------------------------------------------+</div><div>  1: volume HA-WIN-TT-1T-client-0</div><div>  2:     type protocol/client</div><div>  3:     option remote-host stor1</div><div>  4:     option remote-subvolume /exports/NFS-WIN/1T</div><div>  5:     option transport-type socket</div><div>  6:     option ping-timeout 10</div><div>  
7:     option send-gids true</div><div>  8: end-volume</div><div>  9:</div><div> 10: volume HA-WIN-TT-1T-client-1</div><div> 11:     type protocol/client</div><div> 12:     option remote-host stor2</div><div> 13:     option remote-subvolume /exports/NFS-WIN/1T</div><div> 14:     option transport-type socket</div><div> 15:     option ping-timeout 10</div><div> 16:     option send-gids true</div><div> 17: end-volume</div><div> 18:</div><div> 19: volume HA-WIN-TT-1T-replicate-0</div><div> 20:     type cluster/replicate</div><div> 21:     subvolumes HA-WIN-TT-1T-client-0 HA-WIN-TT-1T-client-1</div><div> 22: end-volume</div><div> 23:</div><div> 24: volume HA-WIN-TT-1T-dht</div><div> 25:     type cluster/distribute</div><div> 26:     subvolumes HA-WIN-TT-1T-replicate-0</div><div> 27: end-volume</div><div> 28:</div><div> 29: volume HA-WIN-TT-1T-write-behind</div><div> 30:     type performance/write-behind</div><div> 31:     subvolumes HA-WIN-TT-1T-dht</div><div> 32: end-volume</div><div> 33:</div><div> 34: volume HA-WIN-TT-1T-read-ahead</div><div> 35:     type performance/read-ahead</div><div> 36:     subvolumes HA-WIN-TT-1T-write-behind</div><div> 37: end-volume</div><div> 38:</div><div> 39: volume HA-WIN-TT-1T-io-cache</div><div> 40:     type performance/io-cache</div><div> 41:     subvolumes HA-WIN-TT-1T-read-ahead</div><div> 42: end-volume</div><div> 43:</div><div> 44: volume HA-WIN-TT-1T-quick-read</div><div> 45:     type performance/quick-read</div><div> 46:     subvolumes HA-WIN-TT-1T-io-cache</div><div> 47: end-volume</div><div> 48:</div><div> 49: volume HA-WIN-TT-1T-open-behind</div><div> 50:     type performance/open-behind</div><div> 51:     subvolumes HA-WIN-TT-1T-quick-read</div><div> 52: end-volume</div><div> 53:</div><div> 54: volume HA-WIN-TT-1T-md-cache</div><div> 55:     type performance/md-cache</div><div> 56:     subvolumes HA-WIN-TT-1T-open-behind</div><div> 57: end-volume</div><div> 58:</div><div> 59: volume HA-WIN-TT-1T</div><div> 60:     type 
debug/io-stats</div><div> 61:     option latency-measurement off</div><div> 62:     option count-fop-hits off</div><div> 63:     subvolumes HA-WIN-TT-1T-md-cache</div><div> 64: end-volume</div><div> 65:</div><div>+------------------------------------------------------------------------------+</div><div>[2014-10-13 17:36:56.770718] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-WIN-TT-1T-client-1: changing port to 49160 (from 0)</div><div>[2014-10-13 17:36:56.771378] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-WIN-TT-1T-client-0: changing port to 49160 (from 0)</div><div>[2014-10-13 17:36:56.772008] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-WIN-TT-1T-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:36:56.772083] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-WIN-TT-1T-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:36:56.772338] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-1: Connected to <a href="http://10.250.0.2:49160">10.250.0.2:49160</a>, attached to remote volume &#39;/exports/NFS-WIN/1T&#39;.</div><div>[2014-10-13 17:36:56.772361] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-1: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:36:56.772424] I [afr-common.c:4131:afr_notify] 0-HA-WIN-TT-1T-replicate-0: Subvolume &#39;HA-WIN-TT-1T-client-1&#39; came back up; going online.</div><div>[2014-10-13 17:36:56.772463] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-0: Connected to <a href="http://10.250.0.1:49160">10.250.0.1:49160</a>, attached to remote volume &#39;/exports/NFS-WIN/1T&#39;.</div><div>[2014-10-13 17:36:56.772477] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-0: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:36:56.779099] I 
[fuse-bridge.c:4977:fuse_graph_setup] 0-fuse: switched to graph 0</div><div>[2014-10-13 17:36:56.779338] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-WIN-TT-1T-client-0: Server lk version = 1</div><div>[2014-10-13 17:36:56.779367] I [fuse-bridge.c:3914:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.17</div><div>[2014-10-13 17:36:56.779438] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-WIN-TT-1T-client-1: Server lk version = 1</div><div>[2014-10-13 17:37:02.010942] I [fuse-bridge.c:4818:fuse_thread_proc] 0-fuse: unmounting /srv/nfs/HA-WIN-TT-1T</div><div>[2014-10-13 17:37:02.011296] W [glusterfsd.c:1095:cleanup_and_exit] (--&gt;/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fc7b7672e6d] (--&gt;/lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7fc7b7d20b50] (--&gt;/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) [0x7fc7b95add55]))) 0-: received signum (15), shutting down</div><div>[2014-10-13 17:37:02.011316] I [fuse-bridge.c:5475:fini] 0-fuse: Unmounting &#39;/srv/nfs/HA-WIN-TT-1T&#39;.</div><div>[2014-10-13 17:37:31.133036] W [socket.c:522:__socket_rwv] 0-HA-WIN-TT-1T-client-0: readv on <a href="http://10.250.0.1:49160">10.250.0.1:49160</a> failed (No data available)</div><div>[2014-10-13 17:37:31.133110] I [client.c:2229:client_rpc_notify] 0-HA-WIN-TT-1T-client-0: disconnected from <a href="http://10.250.0.1:49160">10.250.0.1:49160</a>. Client process will keep trying to connect to glusterd until brick&#39;s port is available</div><div>[2014-10-13 17:37:33.317437] W [socket.c:522:__socket_rwv] 0-HA-WIN-TT-1T-client-1: readv on <a href="http://10.250.0.2:49160">10.250.0.2:49160</a> failed (No data available)</div><div>[2014-10-13 17:37:33.317478] I [client.c:2229:client_rpc_notify] 0-HA-WIN-TT-1T-client-1: disconnected from <a href="http://10.250.0.2:49160">10.250.0.2:49160</a>. 
Client process will keep trying to connect to glusterd until brick&#39;s port is available</div><div>[2014-10-13 17:37:33.317496] E [afr-common.c:4168:afr_notify] 0-HA-WIN-TT-1T-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.</div><div>[2014-10-13 17:37:42.045604] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-WIN-TT-1T-client-0: changing port to 49160 (from 0)</div><div>[2014-10-13 17:37:42.046177] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-WIN-TT-1T-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:37:42.048863] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-0: Connected to <a href="http://10.250.0.1:49160">10.250.0.1:49160</a>, attached to remote volume &#39;/exports/NFS-WIN/1T&#39;.</div><div>[2014-10-13 17:37:42.048883] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-0: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:37:42.048897] I [client-handshake.c:1314:client_post_handshake] 0-HA-WIN-TT-1T-client-0: 1 fds open - Delaying child_up until they are re-opened</div><div>[2014-10-13 17:37:42.049299] W [client-handshake.c:980:client3_3_reopen_cbk] 0-HA-WIN-TT-1T-client-0: reopen on &lt;gfid:b00e322a-7bae-479f-91e0-1fd77c73692b&gt; failed (Stale NFS file handle)</div><div>[2014-10-13 17:37:42.049328] I [client-handshake.c:936:client_child_up_reopen_done] 0-HA-WIN-TT-1T-client-0: last fd open&#39;d/lock-self-heal&#39;d - notifying CHILD-UP</div><div>[2014-10-13 17:37:42.049360] I [afr-common.c:4131:afr_notify] 0-HA-WIN-TT-1T-replicate-0: Subvolume &#39;HA-WIN-TT-1T-client-0&#39; came back up; going online.</div><div>[2014-10-13 17:37:42.049446] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-WIN-TT-1T-client-0: Server lk version = 1</div><div>[2014-10-13 17:37:45.087592] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-WIN-TT-1T-client-1: changing port to 49160 
(from 0)</div><div>[2014-10-13 17:37:45.088132] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-WIN-TT-1T-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:37:45.088343] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-1: Connected to <a href="http://10.250.0.2:49160">10.250.0.2:49160</a>, attached to remote volume &#39;/exports/NFS-WIN/1T&#39;.</div><div>[2014-10-13 17:37:45.088360] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-1: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:37:45.088373] I [client-handshake.c:1314:client_post_handshake] 0-HA-WIN-TT-1T-client-1: 1 fds open - Delaying child_up until they are re-opened</div><div>[2014-10-13 17:37:45.088681] W [client-handshake.c:980:client3_3_reopen_cbk] 0-HA-WIN-TT-1T-client-1: reopen on &lt;gfid:b00e322a-7bae-479f-91e0-1fd77c73692b&gt; failed (Stale NFS file handle)</div><div>[2014-10-13 17:37:45.088697] I [client-handshake.c:936:client_child_up_reopen_done] 0-HA-WIN-TT-1T-client-1: last fd open&#39;d/lock-self-heal&#39;d - notifying CHILD-UP</div><div>[2014-10-13 17:37:45.088819] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-WIN-TT-1T-client-1: Server lk version = 1</div><div>[2014-10-13 17:37:54.601822] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2 (/usr/sbin/glusterfs --direct-io-mode=enable --fuse-mountopts=default_permissions,allow_other,max_read=131072 --volfile-server=stor1 --volfile-server=stor2 --volfile-id=HA-WIN-TT-1T --fuse-mountopts=default_permissions,allow_other,max_read=131072 /srv/nfs/HA-WIN-TT-1T)</div><div>[2014-10-13 17:37:54.604972] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled</div><div>[2014-10-13 17:37:54.605034] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread</div><div>[2014-10-13 17:37:54.609219] I 
[dht-shared.c:311:dht_init_regex] 0-HA-WIN-TT-1T-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$</div><div>[2014-10-13 17:37:54.611421] I [socket.c:3561:socket_init] 0-HA-WIN-TT-1T-client-1: SSL support is NOT enabled</div><div>[2014-10-13 17:37:54.611466] I [socket.c:3576:socket_init] 0-HA-WIN-TT-1T-client-1: using system polling thread</div><div>[2014-10-13 17:37:54.611808] I [socket.c:3561:socket_init] 0-HA-WIN-TT-1T-client-0: SSL support is NOT enabled</div><div>[2014-10-13 17:37:54.611821] I [socket.c:3576:socket_init] 0-HA-WIN-TT-1T-client-0: using system polling thread</div><div>[2014-10-13 17:37:54.611862] I [client.c:2294:notify] 0-HA-WIN-TT-1T-client-0: parent translators are ready, attempting connect on transport</div><div>[2014-10-13 17:37:54.612354] I [client.c:2294:notify] 0-HA-WIN-TT-1T-client-1: parent translators are ready, attempting connect on transport</div><div>Final graph:</div><div>+------------------------------------------------------------------------------+</div><div>  1: volume HA-WIN-TT-1T-client-0</div><div>  2:     type protocol/client</div><div>  3:     option remote-host stor1</div><div>  4:     option remote-subvolume /exports/NFS-WIN/1T</div><div>  5:     option transport-type socket</div><div>  6:     option ping-timeout 10</div><div>  7:     option send-gids true</div><div>  8: end-volume</div><div>  9:</div><div> 10: volume HA-WIN-TT-1T-client-1</div><div> 11:     type protocol/client</div><div> 12:     option remote-host stor2</div><div> 13:     option remote-subvolume /exports/NFS-WIN/1T</div><div> 14:     option transport-type socket</div><div> 15:     option ping-timeout 10</div><div> 16:     option send-gids true</div><div> 17: end-volume</div><div> 18:</div><div> 19: volume HA-WIN-TT-1T-replicate-0</div><div> 20:     type cluster/replicate</div><div> 21:     subvolumes HA-WIN-TT-1T-client-0 HA-WIN-TT-1T-client-1</div><div> 22: end-volume</div><div> 23:</div><div> 24: volume HA-WIN-TT-1T-dht</div><div> 25:     type 
cluster/distribute</div><div> 26:     subvolumes HA-WIN-TT-1T-replicate-0</div><div> 27: end-volume</div><div> 28:</div><div> 29: volume HA-WIN-TT-1T-write-behind</div><div> 30:     type performance/write-behind</div><div> 31:     subvolumes HA-WIN-TT-1T-dht</div><div> 32: end-volume</div><div> 33:</div><div> 34: volume HA-WIN-TT-1T-read-ahead</div><div> 35:     type performance/read-ahead</div><div> 36:     subvolumes HA-WIN-TT-1T-write-behind</div><div> 37: end-volume</div><div> 38:</div><div> 39: volume HA-WIN-TT-1T-io-cache</div><div> 40:     type performance/io-cache</div><div> 41:     subvolumes HA-WIN-TT-1T-read-ahead</div><div> 42: end-volume</div><div> 43:</div><div> 44: volume HA-WIN-TT-1T-quick-read</div><div> 45:     type performance/quick-read</div><div> 46:     subvolumes HA-WIN-TT-1T-io-cache</div><div> 47: end-volume</div><div> 48:</div><div> 49: volume HA-WIN-TT-1T-open-behind</div><div> 50:     type performance/open-behind</div><div> 51:     subvolumes HA-WIN-TT-1T-quick-read</div><div> 52: end-volume</div><div> 53:</div><div> 54: volume HA-WIN-TT-1T-md-cache</div><div> 55:     type performance/md-cache</div><div> 56:     subvolumes HA-WIN-TT-1T-open-behind</div><div> 57: end-volume</div><div> 58:</div><div> 59: volume HA-WIN-TT-1T</div><div> 60:     type debug/io-stats</div><div> 61:     option latency-measurement off</div><div> 62:     option count-fop-hits off</div><div> 63:     subvolumes HA-WIN-TT-1T-md-cache</div><div> 64: end-volume</div><div> 65:</div><div>+------------------------------------------------------------------------------+</div><div>[2014-10-13 17:37:54.613137] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-WIN-TT-1T-client-0: changing port to 49160 (from 0)</div><div>[2014-10-13 17:37:54.613521] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-WIN-TT-1T-client-1: changing port to 49160 (from 0)</div><div>[2014-10-13 17:37:54.614228] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-WIN-TT-1T-client-0: Using Program 
GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:37:54.614399] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-WIN-TT-1T-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:37:54.614483] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-0: Connected to <a href="http://10.250.0.1:49160">10.250.0.1:49160</a>, attached to remote volume &#39;/exports/NFS-WIN/1T&#39;.</div><div>[2014-10-13 17:37:54.614499] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-0: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:37:54.614557] I [afr-common.c:4131:afr_notify] 0-HA-WIN-TT-1T-replicate-0: Subvolume &#39;HA-WIN-TT-1T-client-0&#39; came back up; going online.</div><div>[2014-10-13 17:37:54.614625] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-WIN-TT-1T-client-0: Server lk version = 1</div><div>[2014-10-13 17:37:54.614709] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-1: Connected to <a href="http://10.250.0.2:49160">10.250.0.2:49160</a>, attached to remote volume &#39;/exports/NFS-WIN/1T&#39;.</div><div>[2014-10-13 17:37:54.614724] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-1: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:37:54.621318] I [fuse-bridge.c:4977:fuse_graph_setup] 0-fuse: switched to graph 0</div><div>[2014-10-13 17:37:54.621545] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-WIN-TT-1T-client-1: Server lk version = 1</div><div>[2014-10-13 17:37:54.621617] I [fuse-bridge.c:3914:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.17</div><div>[2014-10-13 17:38:25.951778] W [client-rpc-fops.c:4235:client3_3_flush] 0-HA-WIN-TT-1T-client-0:  (b00e322a-7bae-479f-91e0-1fd77c73692b) remote_fd is -1. 
EBADFD</div><div>[2014-10-13 17:38:25.951827] W [client-rpc-fops.c:4235:client3_3_flush] 0-HA-WIN-TT-1T-client-1:  (b00e322a-7bae-479f-91e0-1fd77c73692b) remote_fd is -1. EBADFD</div><div>[2014-10-13 17:38:25.966963] I [fuse-bridge.c:4818:fuse_thread_proc] 0-fuse: unmounting /srv/nfs/HA-WIN-TT-1T</div><div>[2014-10-13 17:38:25.967174] W [glusterfsd.c:1095:cleanup_and_exit] (--&gt;/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7ffec893de6d] (--&gt;/lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7ffec8febb50] (--&gt;/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) [0x7ffeca878d55]))) 0-: received signum (15), shutting down</div><div>[2014-10-13 17:38:25.967194] I [fuse-bridge.c:5475:fini] 0-fuse: Unmounting &#39;/srv/nfs/HA-WIN-TT-1T&#39;.</div><div>[2014-10-13 17:40:21.500514] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed</div><div>[2014-10-13 17:40:21.517782] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed</div><div>[2014-10-13 17:40:21.524056] I [dht-shared.c:311:dht_init_regex] 0-HA-WIN-TT-1T-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$</div><div>[2014-10-13 17:40:21.528430] I [glusterfsd-mgmt.c:1307:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing</div></div><div><br></div><div>glustershd stor1</div><div><br></div><div><div>[2014-10-13 17:38:17.203360] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2 (/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/75bbc77a676bde0d0afe20f40dc9e3e1.socket --xlator-option *replicate*.node-uuid=e09cbbc2-08a3-4e5b-83b8-48eb11a1c7b3)</div><div>[2014-10-13 17:38:17.204958] I [socket.c:3561:socket_init] 0-socket.glusterfsd: SSL support is NOT enabled</div><div>[2014-10-13 17:38:17.205016] I [socket.c:3576:socket_init] 0-socket.glusterfsd: using system polling thread</div><div>[2014-10-13 17:38:17.205188] I 
[socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled</div><div>[2014-10-13 17:38:17.205209] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread</div><div>[2014-10-13 17:38:17.207840] I [graph.c:254:gf_add_cmdline_options] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: adding option &#39;node-uuid&#39; for volume &#39;HA-2TB-TT-Proxmox-cluster-replicate-0&#39; with value &#39;e09cbbc2-08a3-4e5b-83b8-48eb11a1c7b3&#39;</div><div>[2014-10-13 17:38:17.209433] I [socket.c:3561:socket_init] 0-HA-2TB-TT-Proxmox-cluster-client-1: SSL support is NOT enabled</div><div>[2014-10-13 17:38:17.209448] I [socket.c:3576:socket_init] 0-HA-2TB-TT-Proxmox-cluster-client-1: using system polling thread</div><div>[2014-10-13 17:38:17.209625] I [socket.c:3561:socket_init] 0-HA-2TB-TT-Proxmox-cluster-client-0: SSL support is NOT enabled</div><div>[2014-10-13 17:38:17.209634] I [socket.c:3576:socket_init] 0-HA-2TB-TT-Proxmox-cluster-client-0: using system polling thread</div><div>[2014-10-13 17:38:17.209652] I [client.c:2294:notify] 0-HA-2TB-TT-Proxmox-cluster-client-0: parent translators are ready, attempting connect on transport</div><div>[2014-10-13 17:38:17.210241] I [client.c:2294:notify] 0-HA-2TB-TT-Proxmox-cluster-client-1: parent translators are ready, attempting connect on transport</div><div>Final graph:</div><div>+------------------------------------------------------------------------------+</div><div>  1: volume HA-2TB-TT-Proxmox-cluster-client-0</div><div>  2:     type protocol/client</div><div>  3:     option remote-host stor1</div><div>  4:     option remote-subvolume /exports/HA-2TB-TT-Proxmox-cluster/2TB</div><div>  5:     option transport-type socket</div><div>  6:     option username 59c66122-55c1-4c28-956e-6189fcb1aff5</div><div>  7:     option password 34b79afb-a93c-431b-900a-b688e67cdbc9</div><div>  8:     option ping-timeout 10</div><div>  9: end-volume</div><div> 10:</div><div> 11: volume HA-2TB-TT-Proxmox-cluster-client-1</div><div> 12:   
  type protocol/client</div><div> 13:     option remote-host stor2</div><div> 14:     option remote-subvolume /exports/HA-2TB-TT-Proxmox-cluster/2TB</div><div> 15:     option transport-type socket</div><div> 16:     option username 59c66122-55c1-4c28-956e-6189fcb1aff5</div><div> 17:     option password 34b79afb-a93c-431b-900a-b688e67cdbc9</div><div> 18:     option ping-timeout 10</div><div> 19: end-volume</div><div> 20:</div><div> 21: volume HA-2TB-TT-Proxmox-cluster-replicate-0</div><div> 22:     type cluster/replicate</div><div> 23:     option node-uuid e09cbbc2-08a3-4e5b-83b8-48eb11a1c7b3</div><div> 24:     option background-self-heal-count 0</div><div> 25:     option metadata-self-heal on</div><div> 26:     option data-self-heal on</div><div> 27:     option entry-self-heal on</div><div> 28:     option self-heal-daemon on</div><div> 29:     option iam-self-heal-daemon yes</div><div> 30:     subvolumes HA-2TB-TT-Proxmox-cluster-client-0 HA-2TB-TT-Proxmox-cluster-client-1</div><div> 31: end-volume</div><div> 32:</div><div> 33: volume glustershd</div><div> 34:     type debug/io-stats</div><div> 35:     subvolumes HA-2TB-TT-Proxmox-cluster-replicate-0</div><div> 36: end-volume</div><div> 37:</div><div>+------------------------------------------------------------------------------+</div><div>[2014-10-13 17:38:17.210709] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-2TB-TT-Proxmox-cluster-client-0: changing port to 49159 (from 0)</div><div>[2014-10-13 17:38:17.211008] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-2TB-TT-Proxmox-cluster-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:38:17.211170] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-0: Connected to <a href="http://10.250.0.1:49159">10.250.0.1:49159</a>, attached to remote volume &#39;/exports/HA-2TB-TT-Proxmox-cluster/2TB&#39;.</div><div>[2014-10-13 17:38:17.211195] I 
[client-handshake.c:1474:client_setvolume_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-0: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:38:17.211250] I [afr-common.c:4131:afr_notify] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: Subvolume &#39;HA-2TB-TT-Proxmox-cluster-client-0&#39; came back up; going online.</div><div>[2014-10-13 17:38:17.211297] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-0: Server lk version = 1</div><div>[2014-10-13 17:38:17.211656] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: Another crawl is in progress for HA-2TB-TT-Proxmox-cluster-client-0</div><div>[2014-10-13 17:38:17.211661] E [afr-self-heald.c:1479:afr_find_child_position] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: getxattr failed on HA-2TB-TT-Proxmox-cluster-client-1 - (Transport endpoint is not connected)</div><div>[2014-10-13 17:38:17.216327] E [afr-self-heal-data.c:1611:afr_sh_data_open_cbk] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: open of &lt;gfid:65381af4-8e0b-4721-8214-71d29dcf5237&gt; failed on child HA-2TB-TT-Proxmox-cluster-client-1 (Transport endpoint is not connected)</div><div>[2014-10-13 17:38:17.217372] E [afr-self-heal-data.c:1611:afr_sh_data_open_cbk] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: open of &lt;gfid:65381af4-8e0b-4721-8214-71d29dcf5237&gt; failed on child HA-2TB-TT-Proxmox-cluster-client-1 (Transport endpoint is not connected)</div><div>[2014-10-13 17:38:19.226057] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-2TB-TT-Proxmox-cluster-client-1: changing port to 49159 (from 0)</div><div>[2014-10-13 17:38:19.226704] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-2TB-TT-Proxmox-cluster-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:38:19.226896] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-1: Connected to <a 
href="http://10.250.0.2:49159">10.250.0.2:49159</a>, attached to remote volume &#39;/exports/HA-2TB-TT-Proxmox-cluster/2TB&#39;.</div><div>[2014-10-13 17:38:19.226916] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-1: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:38:19.227031] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-1: Server lk version = 1</div><div>[2014-10-13 17:38:25.933950] W [glusterfsd.c:1095:cleanup_and_exit] (--&gt;/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f1a7c03ce6d] (--&gt;/lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7f1a7c6eab50] (--&gt;/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) [0x7f1a7df77d55]))) 0-: received signum (15), shutting down</div><div>[2014-10-13 17:38:26.942918] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2 (/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/75bbc77a676bde0d0afe20f40dc9e3e1.socket --xlator-option *replicate*.node-uuid=e09cbbc2-08a3-4e5b-83b8-48eb11a1c7b3)</div><div>[2014-10-13 17:38:26.944548] I [socket.c:3561:socket_init] 0-socket.glusterfsd: SSL support is NOT enabled</div><div>[2014-10-13 17:38:26.944584] I [socket.c:3576:socket_init] 0-socket.glusterfsd: using system polling thread</div><div>[2014-10-13 17:38:26.944689] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled</div><div>[2014-10-13 17:38:26.944701] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread</div><div>[2014-10-13 17:38:26.946667] I [graph.c:254:gf_add_cmdline_options] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: adding option &#39;node-uuid&#39; for volume &#39;HA-2TB-TT-Proxmox-cluster-replicate-0&#39; with value &#39;e09cbbc2-08a3-4e5b-83b8-48eb11a1c7b3&#39;</div><div>[2014-10-13 17:38:26.946684] I 
[graph.c:254:gf_add_cmdline_options] 0-HA-WIN-TT-1T-replicate-0: adding option &#39;node-uuid&#39; for volume &#39;HA-WIN-TT-1T-replicate-0&#39; with value &#39;e09cbbc2-08a3-4e5b-83b8-48eb11a1c7b3&#39;</div><div>[2014-10-13 17:38:26.948783] I [socket.c:3561:socket_init] 0-HA-2TB-TT-Proxmox-cluster-client-1: SSL support is NOT enabled</div><div>[2014-10-13 17:38:26.948809] I [socket.c:3576:socket_init] 0-HA-2TB-TT-Proxmox-cluster-client-1: using system polling thread</div><div>[2014-10-13 17:38:26.949118] I [socket.c:3561:socket_init] 0-HA-2TB-TT-Proxmox-cluster-client-0: SSL support is NOT enabled</div><div>[2014-10-13 17:38:26.949134] I [socket.c:3576:socket_init] 0-HA-2TB-TT-Proxmox-cluster-client-0: using system polling thread</div><div>[2014-10-13 17:38:26.951698] I [socket.c:3561:socket_init] 0-HA-WIN-TT-1T-client-1: SSL support is NOT enabled</div><div>[2014-10-13 17:38:26.951715] I [socket.c:3576:socket_init] 0-HA-WIN-TT-1T-client-1: using system polling thread</div><div>[2014-10-13 17:38:26.951921] I [socket.c:3561:socket_init] 0-HA-WIN-TT-1T-client-0: SSL support is NOT enabled</div><div>[2014-10-13 17:38:26.951932] I [socket.c:3576:socket_init] 0-HA-WIN-TT-1T-client-0: using system polling thread</div><div>[2014-10-13 17:38:26.951959] I [client.c:2294:notify] 0-HA-2TB-TT-Proxmox-cluster-client-0: parent translators are ready, attempting connect on transport</div><div>[2014-10-13 17:38:26.952612] I [client.c:2294:notify] 0-HA-2TB-TT-Proxmox-cluster-client-1: parent translators are ready, attempting connect on transport</div><div>[2014-10-13 17:38:26.952862] I [client.c:2294:notify] 0-HA-WIN-TT-1T-client-0: parent translators are ready, attempting connect on transport</div><div>[2014-10-13 17:38:26.953447] I [client.c:2294:notify] 0-HA-WIN-TT-1T-client-1: parent translators are ready, attempting connect on transport</div><div>Final graph:</div><div>+------------------------------------------------------------------------------+</div><div>  1: volume 
HA-2TB-TT-Proxmox-cluster-client-0</div><div>  2:     type protocol/client</div><div>  3:     option remote-host stor1</div><div>  4:     option remote-subvolume /exports/HA-2TB-TT-Proxmox-cluster/2TB</div><div>  5:     option transport-type socket</div><div>  6:     option username 59c66122-55c1-4c28-956e-6189fcb1aff5</div><div>  7:     option password 34b79afb-a93c-431b-900a-b688e67cdbc9</div><div>  8:     option ping-timeout 10</div><div>  9: end-volume</div><div> 10:</div><div> 11: volume HA-2TB-TT-Proxmox-cluster-client-1</div><div> 12:     type protocol/client</div><div> 13:     option remote-host stor2</div><div> 14:     option remote-subvolume /exports/HA-2TB-TT-Proxmox-cluster/2TB</div><div> 15:     option transport-type socket</div><div> 16:     option username 59c66122-55c1-4c28-956e-6189fcb1aff5</div><div> 17:     option password 34b79afb-a93c-431b-900a-b688e67cdbc9</div><div> 18:     option ping-timeout 10</div><div> 19: end-volume</div><div> 20:</div><div> 21: volume HA-2TB-TT-Proxmox-cluster-replicate-0</div><div> 22:     type cluster/replicate</div><div> 23:     option node-uuid e09cbbc2-08a3-4e5b-83b8-48eb11a1c7b3</div><div> 24:     option background-self-heal-count 0</div><div> 25:     option metadata-self-heal on</div><div> 26:     option data-self-heal on</div><div> 27:     option entry-self-heal on</div><div> 28:     option self-heal-daemon on</div><div> 29:     option iam-self-heal-daemon yes</div><div> 30:     subvolumes HA-2TB-TT-Proxmox-cluster-client-0 HA-2TB-TT-Proxmox-cluster-client-1</div><div> 31: end-volume</div><div> 32:</div><div> 33: volume HA-WIN-TT-1T-client-0</div><div> 34:     type protocol/client</div><div> 35:     option remote-host stor1</div><div> 36:     option remote-subvolume /exports/NFS-WIN/1T</div><div> 37:     option transport-type socket</div><div> 38:     option username 101b907c-ff21-47da-8ba6-37e2920691ce</div><div> 39:     option password f4f29094-891f-4241-8736-5e3302ed8bc8</div><div> 40:     option 
ping-timeout 10</div><div> 41: end-volume</div><div> 42:</div><div> 43: volume HA-WIN-TT-1T-client-1</div><div> 44:     type protocol/client</div><div> 45:     option remote-host stor2</div><div> 46:     option remote-subvolume /exports/NFS-WIN/1T</div><div> 47:     option transport-type socket</div><div> 48:     option username 101b907c-ff21-47da-8ba6-37e2920691ce</div><div> 49:     option password f4f29094-891f-4241-8736-5e3302ed8bc8</div><div> 50:     option ping-timeout 10</div><div> 51: end-volume</div><div> 52:</div><div> 53: volume HA-WIN-TT-1T-replicate-0</div><div> 54:     type cluster/replicate</div><div> 55:     option node-uuid e09cbbc2-08a3-4e5b-83b8-48eb11a1c7b3</div><div> 56:     option background-self-heal-count 0</div><div> 57:     option metadata-self-heal on</div><div> 58:     option data-self-heal on</div><div> 59:     option entry-self-heal on</div><div> 60:     option self-heal-daemon on</div><div> 61:     option iam-self-heal-daemon yes</div><div> 62:     subvolumes HA-WIN-TT-1T-client-0 HA-WIN-TT-1T-client-1</div><div> 63: end-volume</div><div> 64:</div><div> 65: volume glustershd</div><div> 66:     type debug/io-stats</div><div> 67:     subvolumes HA-2TB-TT-Proxmox-cluster-replicate-0 HA-WIN-TT-1T-replicate-0</div><div> 68: end-volume</div><div> 69:</div><div>+------------------------------------------------------------------------------+</div><div>[2014-10-13 17:38:26.954036] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-2TB-TT-Proxmox-cluster-client-0: changing port to 49159 (from 0)</div><div>[2014-10-13 17:38:26.954308] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-WIN-TT-1T-client-0: changing port to 49160 (from 0)</div><div>[2014-10-13 17:38:26.954741] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-2TB-TT-Proxmox-cluster-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:38:26.954815] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-WIN-TT-1T-client-0: Using 
Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:38:26.954999] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-0: Connected to <a href="http://10.250.0.1:49159">10.250.0.1:49159</a>, attached to remote volume &#39;/exports/HA-2TB-TT-Proxmox-cluster/2TB&#39;.</div><div>[2014-10-13 17:38:26.955017] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-0: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:38:26.955073] I [afr-common.c:4131:afr_notify] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: Subvolume &#39;HA-2TB-TT-Proxmox-cluster-client-0&#39; came back up; going online.</div><div>[2014-10-13 17:38:26.955127] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-0: Server lk version = 1</div><div>[2014-10-13 17:38:26.955151] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-0: Connected to <a href="http://10.250.0.1:49160">10.250.0.1:49160</a>, attached to remote volume &#39;/exports/NFS-WIN/1T&#39;.</div><div>[2014-10-13 17:38:26.955161] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-0: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:38:26.955226] I [afr-common.c:4131:afr_notify] 0-HA-WIN-TT-1T-replicate-0: Subvolume &#39;HA-WIN-TT-1T-client-0&#39; came back up; going online.</div><div>[2014-10-13 17:38:26.955297] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-WIN-TT-1T-client-0: Server lk version = 1</div><div>[2014-10-13 17:38:26.955583] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: Another crawl is in progress for HA-2TB-TT-Proxmox-cluster-client-0</div><div>[2014-10-13 17:38:26.955589] E [afr-self-heald.c:1479:afr_find_child_position] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: getxattr failed on HA-2TB-TT-Proxmox-cluster-client-1 - (Transport endpoint 
is not connected)</div><div>[2014-10-13 17:38:26.955832] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-HA-WIN-TT-1T-replicate-0: Another crawl is in progress for HA-WIN-TT-1T-client-0</div><div>[2014-10-13 17:38:26.955858] E [afr-self-heald.c:1479:afr_find_child_position] 0-HA-WIN-TT-1T-replicate-0: getxattr failed on HA-WIN-TT-1T-client-1 - (Transport endpoint is not connected)</div><div>[2014-10-13 17:38:26.964913] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-2TB-TT-Proxmox-cluster-client-1: changing port to 49159 (from 0)</div><div>[2014-10-13 17:38:26.965553] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-2TB-TT-Proxmox-cluster-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:38:26.965794] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-1: Connected to <a href="http://10.250.0.2:49159">10.250.0.2:49159</a>, attached to remote volume &#39;/exports/HA-2TB-TT-Proxmox-cluster/2TB&#39;.</div><div>[2014-10-13 17:38:26.965815] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-1: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:38:26.965968] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-1: Server lk version = 1</div><div>[2014-10-13 17:38:26.967510] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: Another crawl is in progress for HA-2TB-TT-Proxmox-cluster-client-0</div><div>[2014-10-13 17:38:27.971374] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-WIN-TT-1T-client-1: changing port to 49160 (from 0)</div><div>[2014-10-13 17:38:27.971940] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-WIN-TT-1T-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:38:27.975460] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-1: Connected to <a 
href="http://10.250.0.2:49160">10.250.0.2:49160</a>, attached to remote volume &#39;/exports/NFS-WIN/1T&#39;.</div><div>[2014-10-13 17:38:27.975481] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-1: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:38:27.976656] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-WIN-TT-1T-client-1: Server lk version = 1</div><div>[2014-10-13 17:41:05.390992] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed</div><div>[2014-10-13 17:41:05.408292] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed</div><div>[2014-10-13 17:41:05.412221] I [glusterfsd-mgmt.c:1307:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing</div><div>[2014-10-13 17:41:05.417388] I [glusterfsd-mgmt.c:1307:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing</div><div>root@stor1:~#</div></div><div><br></div><div>glusterfshd stor2</div><div><br></div><div><div>[2014-10-13 17:38:28.992891] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2 (/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/b1494ca4d047df6e8590d7080131908f.socket --xlator-option *replicate*.node-uuid=abf9e3a7-eb91-4273-acdf-876cd6ba1fe3)</div><div>[2014-10-13 17:38:28.994439] I [socket.c:3561:socket_init] 0-socket.glusterfsd: SSL support is NOT enabled</div><div>[2014-10-13 17:38:28.994476] I [socket.c:3576:socket_init] 0-socket.glusterfsd: using system polling thread</div><div>[2014-10-13 17:38:28.994581] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled</div><div>[2014-10-13 17:38:28.994594] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread</div><div>[2014-10-13 17:38:28.996569] I [graph.c:254:gf_add_cmdline_options] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: adding 
option &#39;node-uuid&#39; for volume &#39;HA-2TB-TT-Proxmox-cluster-replicate-0&#39; with value &#39;abf9e3a7-eb91-4273-acdf-876cd6ba1fe3&#39;</div><div>[2014-10-13 17:38:28.996585] I [graph.c:254:gf_add_cmdline_options] 0-HA-WIN-TT-1T-replicate-0: adding option &#39;node-uuid&#39; for volume &#39;HA-WIN-TT-1T-replicate-0&#39; with value &#39;abf9e3a7-eb91-4273-acdf-876cd6ba1fe3&#39;</div><div>[2014-10-13 17:38:28.998463] I [socket.c:3561:socket_init] 0-HA-2TB-TT-Proxmox-cluster-client-1: SSL support is NOT enabled</div><div>[2014-10-13 17:38:28.998483] I [socket.c:3576:socket_init] 0-HA-2TB-TT-Proxmox-cluster-client-1: using system polling thread</div><div>[2014-10-13 17:38:28.998695] I [socket.c:3561:socket_init] 0-HA-2TB-TT-Proxmox-cluster-client-0: SSL support is NOT enabled</div><div>[2014-10-13 17:38:28.998707] I [socket.c:3576:socket_init] 0-HA-2TB-TT-Proxmox-cluster-client-0: using system polling thread</div><div>[2014-10-13 17:38:29.000506] I [socket.c:3561:socket_init] 0-HA-WIN-TT-1T-client-1: SSL support is NOT enabled</div><div>[2014-10-13 17:38:29.000520] I [socket.c:3576:socket_init] 0-HA-WIN-TT-1T-client-1: using system polling thread</div><div>[2014-10-13 17:38:29.000723] I [socket.c:3561:socket_init] 0-HA-WIN-TT-1T-client-0: SSL support is NOT enabled</div><div>[2014-10-13 17:38:29.000734] I [socket.c:3576:socket_init] 0-HA-WIN-TT-1T-client-0: using system polling thread</div><div>[2014-10-13 17:38:29.000762] I [client.c:2294:notify] 0-HA-2TB-TT-Proxmox-cluster-client-0: parent translators are ready, attempting connect on transport</div><div>[2014-10-13 17:38:29.001064] I [client.c:2294:notify] 0-HA-2TB-TT-Proxmox-cluster-client-1: parent translators are ready, attempting connect on transport</div><div>[2014-10-13 17:38:29.001639] I [client.c:2294:notify] 0-HA-WIN-TT-1T-client-0: parent translators are ready, attempting connect on transport</div><div>[2014-10-13 17:38:29.001877] I [client.c:2294:notify] 0-HA-WIN-TT-1T-client-1: parent translators 
are ready, attempting connect on transport</div><div>Final graph:</div><div>+------------------------------------------------------------------------------+</div><div>  1: volume HA-2TB-TT-Proxmox-cluster-client-0</div><div>  2:     type protocol/client</div><div>  3:     option remote-host stor1</div><div>  4:     option remote-subvolume /exports/HA-2TB-TT-Proxmox-cluster/2TB</div><div>  5:     option transport-type socket</div><div>  6:     option username 59c66122-55c1-4c28-956e-6189fcb1aff5</div><div>  7:     option password 34b79afb-a93c-431b-900a-b688e67cdbc9</div><div>  8:     option ping-timeout 10</div><div>  9: end-volume</div><div> 10:</div><div> 11: volume HA-2TB-TT-Proxmox-cluster-client-1</div><div> 12:     type protocol/client</div><div> 13:     option remote-host stor2</div><div> 14:     option remote-subvolume /exports/HA-2TB-TT-Proxmox-cluster/2TB</div><div> 15:     option transport-type socket</div><div> 16:     option username 59c66122-55c1-4c28-956e-6189fcb1aff5</div><div> 17:     option password 34b79afb-a93c-431b-900a-b688e67cdbc9</div><div> 18:     option ping-timeout 10</div><div> 19: end-volume</div><div> 20:</div><div> 21: volume HA-2TB-TT-Proxmox-cluster-replicate-0</div><div> 22:     type cluster/replicate</div><div> 23:     option node-uuid abf9e3a7-eb91-4273-acdf-876cd6ba1fe3</div><div> 24:     option background-self-heal-count 0</div><div> 25:     option metadata-self-heal on</div><div> 26:     option data-self-heal on</div><div> 27:     option entry-self-heal on</div><div> 28:     option self-heal-daemon on</div><div> 29:     option iam-self-heal-daemon yes</div><div> 30:     subvolumes HA-2TB-TT-Proxmox-cluster-client-0 HA-2TB-TT-Proxmox-cluster-client-1</div><div> 31: end-volume</div><div> 32:</div><div> 33: volume HA-WIN-TT-1T-client-0</div><div> 34:     type protocol/client</div><div> 35:     option remote-host stor1</div><div> 36:     option remote-subvolume /exports/NFS-WIN/1T</div><div> 37:     option transport-type 
socket</div><div> 38:     option username 101b907c-ff21-47da-8ba6-37e2920691ce</div><div> 39:     option password f4f29094-891f-4241-8736-5e3302ed8bc8</div><div> 40:     option ping-timeout 10</div><div> 41: end-volume</div><div> 42:</div><div> 43: volume HA-WIN-TT-1T-client-1</div><div> 44:     type protocol/client</div><div> 45:     option remote-host stor2</div><div> 46:     option remote-subvolume /exports/NFS-WIN/1T</div><div> 47:     option transport-type socket</div><div> 48:     option username 101b907c-ff21-47da-8ba6-37e2920691ce</div><div> 49:     option password f4f29094-891f-4241-8736-5e3302ed8bc8</div><div> 50:     option ping-timeout 10</div><div> 51: end-volume</div><div> 52:</div><div> 53: volume HA-WIN-TT-1T-replicate-0</div><div> 54:     type cluster/replicate</div><div> 55:     option node-uuid abf9e3a7-eb91-4273-acdf-876cd6ba1fe3</div><div> 56:     option background-self-heal-count 0</div><div> 57:     option metadata-self-heal on</div><div> 58:     option data-self-heal on</div><div> 59:     option entry-self-heal on</div><div> 60:     option self-heal-daemon on</div><div> 61:     option iam-self-heal-daemon yes</div><div> 62:     subvolumes HA-WIN-TT-1T-client-0 HA-WIN-TT-1T-client-1</div><div> 63: end-volume</div><div> 64:</div><div> 65: volume glustershd</div><div> 66:     type debug/io-stats</div><div> 67:     subvolumes HA-2TB-TT-Proxmox-cluster-replicate-0 HA-WIN-TT-1T-replicate-0</div><div> 68: end-volume</div><div> 69:</div><div>+------------------------------------------------------------------------------+</div><div>[2014-10-13 17:38:29.002743] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-2TB-TT-Proxmox-cluster-client-1: changing port to 49159 (from 0)</div><div>[2014-10-13 17:38:29.003027] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-WIN-TT-1T-client-1: changing port to 49160 (from 0)</div><div>[2014-10-13 17:38:29.003290] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-2TB-TT-Proxmox-cluster-client-0: changing port to 49159 (from 
0)</div><div>[2014-10-13 17:38:29.003334] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-HA-WIN-TT-1T-client-0: changing port to 49160 (from 0)</div><div>[2014-10-13 17:38:29.003922] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-2TB-TT-Proxmox-cluster-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:38:29.004023] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-WIN-TT-1T-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:38:29.004139] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-2TB-TT-Proxmox-cluster-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:38:29.004202] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-1: Connected to <a href="http://10.250.0.2:49159">10.250.0.2:49159</a>, attached to remote volume &#39;/exports/HA-2TB-TT-Proxmox-cluster/2TB&#39;.</div><div>[2014-10-13 17:38:29.004217] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-1: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:38:29.004266] I [afr-common.c:4131:afr_notify] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: Subvolume &#39;HA-2TB-TT-Proxmox-cluster-client-1&#39; came back up; going online.</div><div>[2014-10-13 17:38:29.004318] I [client-handshake.c:1677:select_server_supported_programs] 0-HA-WIN-TT-1T-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)</div><div>[2014-10-13 17:38:29.004368] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-1: Connected to <a href="http://10.250.0.2:49160">10.250.0.2:49160</a>, attached to remote volume &#39;/exports/NFS-WIN/1T&#39;.</div><div>[2014-10-13 17:38:29.004383] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-1: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 
17:38:29.004429] I [afr-common.c:4131:afr_notify] 0-HA-WIN-TT-1T-replicate-0: Subvolume &#39;HA-WIN-TT-1T-client-1&#39; came back up; going online.</div><div>[2014-10-13 17:38:29.004483] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-1: Server lk version = 1</div><div>[2014-10-13 17:38:29.004506] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-WIN-TT-1T-client-1: Server lk version = 1</div><div>[2014-10-13 17:38:29.004526] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-0: Connected to <a href="http://10.250.0.1:49159">10.250.0.1:49159</a>, attached to remote volume &#39;/exports/HA-2TB-TT-Proxmox-cluster/2TB&#39;.</div><div>[2014-10-13 17:38:29.004535] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-0: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:38:29.004613] I [client-handshake.c:1462:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-0: Connected to <a href="http://10.250.0.1:49160">10.250.0.1:49160</a>, attached to remote volume &#39;/exports/NFS-WIN/1T&#39;.</div><div>[2014-10-13 17:38:29.004626] I [client-handshake.c:1474:client_setvolume_cbk] 0-HA-WIN-TT-1T-client-0: Server and Client lk-version numbers are not same, reopening the fds</div><div>[2014-10-13 17:38:29.004731] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-0: Server lk version = 1</div><div>[2014-10-13 17:38:29.004796] I [client-handshake.c:450:client_set_lk_version_cbk] 0-HA-WIN-TT-1T-client-0: Server lk version = 1</div><div>[2014-10-13 17:38:29.005291] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-HA-WIN-TT-1T-replicate-0: Another crawl is in progress for HA-WIN-TT-1T-client-1</div><div>[2014-10-13 17:38:29.005303] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: Another crawl is in progress for 
HA-2TB-TT-Proxmox-cluster-client-1</div><div>[2014-10-13 17:38:29.005443] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-HA-2TB-TT-Proxmox-cluster-replicate-0: Another crawl is in progress for HA-2TB-TT-Proxmox-cluster-client-1</div><div>[2014-10-13 17:41:05.427867] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed</div><div>[2014-10-13 17:41:05.443271] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed</div><div>[2014-10-13 17:41:05.444111] I [glusterfsd-mgmt.c:1307:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing</div><div>[2014-10-13 17:41:05.444807] I [glusterfsd-mgmt.c:1307:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing</div></div><div><br></div><div>brick stor2</div><div><br></div><div><div>[2014-10-13 17:38:17.213386] W [glusterfsd.c:1095:cleanup_and_exit] (--&gt;/lib/x86_64-linux-gnu/libc.so.6(+0x462a0) [0x7f343271f2a0] (--&gt;/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(synctask_wrap+0x12) [0x7f343371db12] (--&gt;/usr/sbin/glusterfsd(glusterfs_handle_terminate+0x15) [0x7f3434790dd5]))) 0-: received signum (15), shutting down</div><div>[2014-10-13 17:38:26.957312] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.5.2 (/usr/sbin/glusterfsd -s stor2 --volfile-id HA-WIN-TT-1T.stor2.exports-NFS-WIN-1T -p /var/lib/glusterd/vols/HA-WIN-TT-1T/run/stor2-exports-NFS-WIN-1T.pid -S /var/run/91514691033d00e666bb151f9c771a26.socket --brick-name /exports/NFS-WIN/1T -l /var/log/glusterfs/bricks/exports-NFS-WIN-1T.log --xlator-option *-posix.glusterd-uuid=abf9e3a7-eb91-4273-acdf-876cd6ba1fe3 --brick-port 49160 --xlator-option HA-WIN-TT-1T-server.listen-port=49160)</div><div>[2014-10-13 17:38:26.958864] I [socket.c:3561:socket_init] 0-socket.glusterfsd: SSL support is NOT enabled</div><div>[2014-10-13 17:38:26.958899] I [socket.c:3576:socket_init] 0-socket.glusterfsd: using system polling thread</div><div>[2014-10-13 17:38:26.959003] I 
[socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled</div><div>[2014-10-13 17:38:26.959015] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread</div><div>[2014-10-13 17:38:26.961860] I [graph.c:254:gf_add_cmdline_options] 0-HA-WIN-TT-1T-server: adding option &#39;listen-port&#39; for volume &#39;HA-WIN-TT-1T-server&#39; with value &#39;49160&#39;</div><div>[2014-10-13 17:38:26.961878] I [graph.c:254:gf_add_cmdline_options] 0-HA-WIN-TT-1T-posix: adding option &#39;glusterd-uuid&#39; for volume &#39;HA-WIN-TT-1T-posix&#39; with value &#39;abf9e3a7-eb91-4273-acdf-876cd6ba1fe3&#39;</div><div>[2014-10-13 17:38:26.965032] I [rpcsvc.c:2127:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64</div><div>[2014-10-13 17:38:26.965075] W [options.c:888:xl_opt_validate] 0-HA-WIN-TT-1T-server: option &#39;listen-port&#39; is deprecated, preferred is &#39;transport.socket.listen-port&#39;, continuing with correction</div><div>[2014-10-13 17:38:26.965097] I [socket.c:3561:socket_init] 0-tcp.HA-WIN-TT-1T-server: SSL support is NOT enabled</div><div>[2014-10-13 17:38:26.965105] I [socket.c:3576:socket_init] 0-tcp.HA-WIN-TT-1T-server: using system polling thread</div><div>[2014-10-13 17:38:26.965602] W [graph.c:329:_log_if_unknown_option] 0-HA-WIN-TT-1T-quota: option &#39;timeout&#39; is not recognized</div><div>Final graph:</div><div>+------------------------------------------------------------------------------+</div><div>  1: volume HA-WIN-TT-1T-posix</div><div>  2:     type storage/posix</div><div>  3:     option glusterd-uuid abf9e3a7-eb91-4273-acdf-876cd6ba1fe3</div><div>  4:     option directory /exports/NFS-WIN/1T</div><div>  5:     option volume-id 2937ac01-4cba-44a8-8ff8-0161b67f8ee4</div><div>  6: end-volume</div><div>  7:</div><div>  8: volume HA-WIN-TT-1T-changelog</div><div>  9:     type features/changelog</div><div> 10:     option changelog-brick /exports/NFS-WIN/1T</div><div> 11:     
option changelog-dir /exports/NFS-WIN/1T/.glusterfs/changelogs</div><div> 12:     subvolumes HA-WIN-TT-1T-posix</div><div> 13: end-volume</div><div> 14:</div><div> 15: volume HA-WIN-TT-1T-access-control</div><div> 16:     type features/access-control</div><div> 17:     subvolumes HA-WIN-TT-1T-changelog</div><div> 18: end-volume</div><div> 19:</div><div> 20: volume HA-WIN-TT-1T-locks</div><div> 21:     type features/locks</div><div> 22:     subvolumes HA-WIN-TT-1T-access-control</div><div> 23: end-volume</div><div> 24:</div><div> 25: volume HA-WIN-TT-1T-io-threads</div><div> 26:     type performance/io-threads</div><div> 27:     subvolumes HA-WIN-TT-1T-locks</div><div> 28: end-volume</div><div> 29:</div><div> 30: volume HA-WIN-TT-1T-index</div><div> 31:     type features/index</div><div> 32:     option index-base /exports/NFS-WIN/1T/.glusterfs/indices</div><div> 33:     subvolumes HA-WIN-TT-1T-io-threads</div><div> 34: end-volume</div><div> 35:</div><div> 36: volume HA-WIN-TT-1T-marker</div><div> 37:     type features/marker</div><div> 38:     option volume-uuid 2937ac01-4cba-44a8-8ff8-0161b67f8ee4</div><div> 39:     option timestamp-file /var/lib/glusterd/vols/HA-WIN-TT-1T/marker.tstamp</div><div> 40:     option xtime off</div><div> 41:     option gsync-force-xtime off</div><div> 42:     option quota off</div><div> 43:     subvolumes HA-WIN-TT-1T-index</div><div> 44: end-volume</div><div> 45:</div><div> 46: volume HA-WIN-TT-1T-quota</div><div> 47:     type features/quota</div><div> 48:     option volume-uuid HA-WIN-TT-1T</div><div> 49:     option server-quota off</div><div> 50:     option timeout 0</div><div> 51:     option deem-statfs off</div><div> 52:     subvolumes HA-WIN-TT-1T-marker</div><div> 53: end-volume</div><div> 54:</div><div> 55: volume /exports/NFS-WIN/1T</div><div> 56:     type debug/io-stats</div><div> 57:     option latency-measurement off</div><div> 58:     option count-fop-hits off</div><div> 59:     subvolumes HA-WIN-TT-1T-quota</div><div> 60: 
end-volume</div><div> 61:</div><div> 62: volume HA-WIN-TT-1T-server</div><div> 63:     type protocol/server</div><div> 64:     option transport.socket.listen-port 49160</div><div> 65:     option rpc-auth.auth-glusterfs on</div><div> 66:     option rpc-auth.auth-unix on</div><div> 67:     option rpc-auth.auth-null on</div><div> 68:     option transport-type tcp</div><div> 69:     option auth.login./exports/NFS-WIN/1T.allow 101b907c-ff21-47da-8ba6-37e2920691ce</div><div> 70:     option auth.login.101b907c-ff21-47da-8ba6-37e2920691ce.password f4f29094-891f-4241-8736-5e3302ed8bc8</div><div> 71:     option auth.addr./exports/NFS-WIN/1T.allow *</div><div> 72:     subvolumes /exports/NFS-WIN/1T</div><div> 73: end-volume</div><div> 74:</div><div>+------------------------------------------------------------------------------+</div><div>[2014-10-13 17:38:27.985048] I [server-handshake.c:575:server_setvolume] 0-HA-WIN-TT-1T-server: accepted client from stor1-14362-2014/10/13-17:38:26:938194-HA-WIN-TT-1T-client-1-0-0 (version: 3.5.2)</div><div>[2014-10-13 17:38:28.988700] I [server-handshake.c:575:server_setvolume] 0-HA-WIN-TT-1T-server: accepted client from glstor-cli-20753-2014/10/13-11:50:40:959211-HA-WIN-TT-1T-client-1-0-1 (version: 3.5.2)</div><div>[2014-10-13 17:38:29.004121] I [server-handshake.c:575:server_setvolume] 0-HA-WIN-TT-1T-server: accepted client from stor2-15494-2014/10/13-17:38:28:989227-HA-WIN-TT-1T-client-1-0-0 (version: 3.5.2)</div><div>[2014-10-13 17:38:38.515315] I [server-handshake.c:575:server_setvolume] 0-HA-WIN-TT-1T-server: accepted client from glstor-cli-23823-2014/10/13-17:37:54:595571-HA-WIN-TT-1T-client-1-0-0 (version: 3.5.2)</div><div>[2014-10-13 17:39:09.872223] I [server.c:520:server_rpc_notify] 0-HA-WIN-TT-1T-server: disconnecting connectionfrom glstor-cli-20753-2014/10/13-11:50:40:959211-HA-WIN-TT-1T-client-1-0-1</div><div>[2014-10-13 17:39:09.872299] I [client_t.c:417:gf_client_unref] 0-HA-WIN-TT-1T-server: Shutting down connection 
glstor-cli-20753-2014/10/13-11:50:40:959211-HA-WIN-TT-1T-client-1-0-1</div><div>[2014-10-13 17:41:05.427810] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed</div><div>[2014-10-13 17:41:05.443234] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed</div><div>[2014-10-13 17:41:05.445049] I [glusterfsd-mgmt.c:1307:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing</div><div>root@stor2:~#</div></div><div><br></div><div>brick stor1</div><div><br></div><div><div>[2014-10-13 17:38:24.900066] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.5.2 (/usr/sbin/glusterfsd -s stor1 --volfile-id HA-WIN-TT-1T.stor1.exports-NFS-WIN-1T -p /var/lib/glusterd/vols/HA-WIN-TT-1T/run/stor1-exports-NFS-WIN-1T.pid -S /var/run/02580c93278849804f3f34f7ed8314b2.socket --brick-name /exports/NFS-WIN/1T -l /var/log/glusterfs/bricks/exports-NFS-WIN-1T.log --xlator-option *-posix.glusterd-uuid=e09cbbc2-08a3-4e5b-83b8-48eb11a1c7b3 --brick-port 49160 --xlator-option HA-WIN-TT-1T-server.listen-port=49160)</div><div>[2014-10-13 17:38:24.902022] I [socket.c:3561:socket_init] 0-socket.glusterfsd: SSL support is NOT enabled</div><div>[2014-10-13 17:38:24.902077] I [socket.c:3576:socket_init] 0-socket.glusterfsd: using system polling thread</div><div>[2014-10-13 17:38:24.902214] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled</div><div>[2014-10-13 17:38:24.902239] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread</div><div>[2014-10-13 17:38:24.906698] I [graph.c:254:gf_add_cmdline_options] 0-HA-WIN-TT-1T-server: adding option &#39;listen-port&#39; for volume &#39;HA-WIN-TT-1T-server&#39; with value &#39;49160&#39;</div><div>[2014-10-13 17:38:24.906731] I [graph.c:254:gf_add_cmdline_options] 0-HA-WIN-TT-1T-posix: adding option &#39;glusterd-uuid&#39; for volume &#39;HA-WIN-TT-1T-posix&#39; with value &#39;e09cbbc2-08a3-4e5b-83b8-48eb11a1c7b3&#39;</div><div>[2014-10-13 
17:38:24.908378] I [rpcsvc.c:2127:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64</div><div>[2014-10-13 17:38:24.908435] W [options.c:888:xl_opt_validate] 0-HA-WIN-TT-1T-server: option &#39;listen-port&#39; is deprecated, preferred is &#39;transport.socket.listen-port&#39;, continuing with correction</div><div>[2014-10-13 17:38:24.908472] I [socket.c:3561:socket_init] 0-tcp.HA-WIN-TT-1T-server: SSL support is NOT enabled</div><div>[2014-10-13 17:38:24.908485] I [socket.c:3576:socket_init] 0-tcp.HA-WIN-TT-1T-server: using system polling thread</div><div>[2014-10-13 17:38:24.909105] W [graph.c:329:_log_if_unknown_option] 0-HA-WIN-TT-1T-quota: option &#39;timeout&#39; is not recognized</div><div>Final graph:</div><div>+------------------------------------------------------------------------------+</div><div>  1: volume HA-WIN-TT-1T-posix</div><div>  2:     type storage/posix</div><div>  3:     option glusterd-uuid e09cbbc2-08a3-4e5b-83b8-48eb11a1c7b3</div><div>  4:     option directory /exports/NFS-WIN/1T</div><div>  5:     option volume-id 2937ac01-4cba-44a8-8ff8-0161b67f8ee4</div><div>  6: end-volume</div><div>  7:</div><div>  8: volume HA-WIN-TT-1T-changelog</div><div>  9:     type features/changelog</div><div> 10:     option changelog-brick /exports/NFS-WIN/1T</div><div> 11:     option changelog-dir /exports/NFS-WIN/1T/.glusterfs/changelogs</div><div> 12:     subvolumes HA-WIN-TT-1T-posix</div><div> 13: end-volume</div><div> 14:</div><div> 15: volume HA-WIN-TT-1T-access-control</div><div> 16:     type features/access-control</div><div> 17:     subvolumes HA-WIN-TT-1T-changelog</div><div> 18: end-volume</div><div> 19:</div><div> 20: volume HA-WIN-TT-1T-locks</div><div> 21:     type features/locks</div><div> 22:     subvolumes HA-WIN-TT-1T-access-control</div><div> 23: end-volume</div><div> 24:</div><div> 25: volume HA-WIN-TT-1T-io-threads</div><div> 26:     type performance/io-threads</div><div> 27:     subvolumes 
HA-WIN-TT-1T-locks</div><div> 28: end-volume</div><div> 29:</div><div> 30: volume HA-WIN-TT-1T-index</div><div> 31:     type features/index</div><div> 32:     option index-base /exports/NFS-WIN/1T/.glusterfs/indices</div><div> 33:     subvolumes HA-WIN-TT-1T-io-threads</div><div> 34: end-volume</div><div> 35:</div><div> 36: volume HA-WIN-TT-1T-marker</div><div> 37:     type features/marker</div><div> 38:     option volume-uuid 2937ac01-4cba-44a8-8ff8-0161b67f8ee4</div><div> 39:     option timestamp-file /var/lib/glusterd/vols/HA-WIN-TT-1T/marker.tstamp</div><div> 40:     option xtime off</div><div> 41:     option gsync-force-xtime off</div><div> 42:     option quota off</div><div> 43:     subvolumes HA-WIN-TT-1T-index</div><div> 44: end-volume</div><div> 45:</div><div> 46: volume HA-WIN-TT-1T-quota</div><div> 47:     type features/quota</div><div> 48:     option volume-uuid HA-WIN-TT-1T</div><div> 49:     option server-quota off</div><div> 50:     option timeout 0</div><div> 51:     option deem-statfs off</div><div> 52:     subvolumes HA-WIN-TT-1T-marker</div><div> 53: end-volume</div><div> 54:</div><div> 55: volume /exports/NFS-WIN/1T</div><div> 56:     type debug/io-stats</div><div> 57:     option latency-measurement off</div><div> 58:     option count-fop-hits off</div><div> 59:     subvolumes HA-WIN-TT-1T-quota</div><div> 60: end-volume</div><div> 61:</div><div> 62: volume HA-WIN-TT-1T-server</div><div> 63:     type protocol/server</div><div> 64:     option transport.socket.listen-port 49160</div><div> 65:     option rpc-auth.auth-glusterfs on</div><div> 66:     option rpc-auth.auth-unix on</div><div> 67:     option rpc-auth.auth-null on</div><div> 68:     option transport-type tcp</div><div> 69:     option auth.login./exports/NFS-WIN/1T.allow 101b907c-ff21-47da-8ba6-37e2920691ce</div><div> 70:     option auth.login.101b907c-ff21-47da-8ba6-37e2920691ce.password f4f29094-891f-4241-8736-5e3302ed8bc8</div><div> 71:     option auth.addr./exports/NFS-WIN/1T.allow 
*</div><div> 72:     subvolumes /exports/NFS-WIN/1T</div><div> 73: end-volume</div><div> 74:</div><div>+------------------------------------------------------------------------------+</div><div>[2014-10-13 17:38:25.933796] I [server-handshake.c:575:server_setvolume] 0-HA-WIN-TT-1T-server: accepted client from glstor-cli-20753-2014/10/13-11:50:40:959211-HA-WIN-TT-1T-client-0-0-1 (version: 3.5.2)</div><div>[2014-10-13 17:38:26.954924] I [server-handshake.c:575:server_setvolume] 0-HA-WIN-TT-1T-server: accepted client from stor1-14362-2014/10/13-17:38:26:938194-HA-WIN-TT-1T-client-0-0-0 (version: 3.5.2)</div><div>[2014-10-13 17:38:28.991488] I [server-handshake.c:575:server_setvolume] 0-HA-WIN-TT-1T-server: accepted client from stor2-15494-2014/10/13-17:38:28:989227-HA-WIN-TT-1T-client-0-0-0 (version: 3.5.2)</div><div>[2014-10-13 17:38:38.502056] I [server-handshake.c:575:server_setvolume] 0-HA-WIN-TT-1T-server: accepted client from glstor-cli-23823-2014/10/13-17:37:54:595571-HA-WIN-TT-1T-client-0-0-0 (version: 3.5.2)</div><div>[2014-10-13 17:39:09.858784] I [server.c:520:server_rpc_notify] 0-HA-WIN-TT-1T-server: disconnecting connectionfrom glstor-cli-20753-2014/10/13-11:50:40:959211-HA-WIN-TT-1T-client-0-0-1</div><div>[2014-10-13 17:39:09.858863] I [client_t.c:417:gf_client_unref] 0-HA-WIN-TT-1T-server: Shutting down connection glstor-cli-20753-2014/10/13-11:50:40:959211-HA-WIN-TT-1T-client-0-0-1</div><div>[2014-10-13 17:41:05.390918] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed</div><div>[2014-10-13 17:41:05.408236] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed</div><div>[2014-10-13 17:41:05.414813] I [glusterfsd-mgmt.c:1307:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing</div></div><div><br></div><div><br></div><div>seems to be the right part of logs :)</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">2014-10-15 18:24 GMT+03:00 Pranith Kumar Karampuri <span dir="ltr">&lt;<a 
href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div text="#000000" bgcolor="#FFFFFF"><span class="">
    <br>
    <div>On 10/14/2014 01:20 AM, Roman wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr">OK, done.
        <div>This time there were no disconnects (at least all of the
          VMs are working), but I got some mails from a VM about IO
          waits again.</div>
        <div><span style="font-size:11pt;font-family:Calibri,sans-serif"><br>
          </span></div>
        <div><span style="font-size:11pt;font-family:Calibri,sans-serif">WARNINGs:
            Read IO Wait time is 1.45 (outside
            range [0:1]).</span><br>
        </div>
      </div>
    </blockquote></span>
    This warning says &#39;Read IO wait&#39;, yet not a single READ
    operation came to gluster. Wondering why that is :-/. Any clue?
    There is at least one write which took 3 seconds according to the
    stats, and at least one synchronization operation (FINODELK) took
    23 seconds. Could you give the logs of this run for the mount,
    glustershd and bricks?<span class="HOEnZb"><font color="#888888"><br>
    <br>
    Pranith</font></span><div><div class="h5"><br>
    <blockquote type="cite">
      <div dir="ltr">
        <div><span style="font-size:11pt;font-family:Calibri,sans-serif"><br>
          </span></div>
        <div>here is the output</div>
        <div><br>
        </div>
        <div>
          <div>root@stor1:~# gluster volume profile HA-WIN-TT-1T info</div>
          <div>Brick: stor1:/exports/NFS-WIN/1T</div>
          <div>--------------------------------</div>
          <div>Cumulative Stats:</div>
          <div>   Block Size:             131072b+              262144b+</div>
          <div> No. of Reads:                    0                     0</div>
          <div>No. of Writes:              7372798                     1</div>
          <div> %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop</div>
          <div> ---------   -----------   -----------   -----------   ------------        ----</div>
          <div>      0.00       0.00 us       0.00 us       0.00 us             25     RELEASE</div>
          <div>      0.00       0.00 us       0.00 us       0.00 us             16  RELEASEDIR</div>
          <div>      0.00      64.00 us      52.00 us      76.00 us              2     ENTRYLK</div>
          <div>      0.00      73.50 us      51.00 us      96.00 us              2       FLUSH</div>
          <div>      0.00      68.43 us      30.00 us     135.00 us              7      STATFS</div>
          <div>      0.00      54.31 us      44.00 us     109.00 us             16     OPENDIR</div>
          <div>      0.00      50.75 us      16.00 us      74.00 us             24       FSTAT</div>
          <div>      0.00      47.77 us      19.00 us     119.00 us             26    GETXATTR</div>
          <div>      0.00      59.21 us      21.00 us      89.00 us             24        OPEN</div>
          <div>      0.00      59.39 us      22.00 us     296.00 us             28     READDIR</div>
          <div>      0.00    4972.00 us    4972.00 us    4972.00 us              1      CREATE</div>
          <div>      0.00      97.42 us      19.00 us     184.00 us             62      LOOKUP</div>
          <div>      0.00      89.49 us      20.00 us     656.00 us            324    FXATTROP</div>
          <div>      3.91 1255944.81 us     127.00 us 23397532.00 us            189       FSYNC</div>
          <div>      7.40 3406275.50 us      17.00 us 23398013.00 us            132     INODELK</div>
          <div>     34.96   94598.02 us       8.00 us 23398705.00 us          22445    FINODELK</div>
          <div>     53.73     442.66 us      79.00 us 3116494.00 us        7372799       WRITE</div>
          <div><br>
          </div>
          <div>    Duration: 7813 seconds</div>
          <div>   Data Read: 0 bytes</div>
          <div>Data Written: 966367641600 bytes</div>
          <div><br>
          </div>
          <div>Interval 0 Stats:</div>
          <div>   Block Size:             131072b+              262144b+</div>
          <div> No. of Reads:                    0                     0</div>
          <div>No. of Writes:              7372798                     1</div>
          <div> %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop</div>
          <div> ---------   -----------   -----------   -----------   ------------        ----</div>
          <div>      0.00       0.00 us       0.00 us       0.00 us             25     RELEASE</div>
          <div>      0.00       0.00 us       0.00 us       0.00 us             16  RELEASEDIR</div>
          <div>      0.00      64.00 us      52.00 us      76.00 us              2     ENTRYLK</div>
          <div>      0.00      73.50 us      51.00 us      96.00 us              2       FLUSH</div>
          <div>      0.00      68.43 us      30.00 us     135.00 us              7      STATFS</div>
          <div>      0.00      54.31 us      44.00 us     109.00 us             16     OPENDIR</div>
          <div>      0.00      50.75 us      16.00 us      74.00 us             24       FSTAT</div>
          <div>      0.00      47.77 us      19.00 us     119.00 us             26    GETXATTR</div>
          <div>      0.00      59.21 us      21.00 us      89.00 us             24        OPEN</div>
          <div>      0.00      59.39 us      22.00 us     296.00 us             28     READDIR</div>
          <div>      0.00    4972.00 us    4972.00 us    4972.00 us              1      CREATE</div>
          <div>      0.00      97.42 us      19.00 us     184.00 us             62      LOOKUP</div>
          <div>      0.00      89.49 us      20.00 us     656.00 us            324    FXATTROP</div>
          <div>      3.91 1255944.81 us     127.00 us 23397532.00 us            189       FSYNC</div>
          <div>      7.40 3406275.50 us      17.00 us 23398013.00 us            132     INODELK</div>
          <div>     34.96   94598.02 us       8.00 us 23398705.00 us          22445    FINODELK</div>
          <div>     53.73     442.66 us      79.00 us 3116494.00 us        7372799       WRITE</div>
          <div><br>
          </div>
          <div>    Duration: 7813 seconds</div>
          <div>   Data Read: 0 bytes</div>
          <div>Data Written: 966367641600 bytes</div>
          <div><br>
          </div>
          <div>Brick: stor2:/exports/NFS-WIN/1T</div>
          <div>--------------------------------</div>
          <div>Cumulative Stats:</div>
          <div>   Block Size:             131072b+              262144b+</div>
          <div> No. of Reads:                    0                     0</div>
          <div>No. of Writes:              7372798                     1</div>
          <div> %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop</div>
          <div> ---------   -----------   -----------   -----------   ------------        ----</div>
          <div>      0.00       0.00 us       0.00 us       0.00 us             25     RELEASE</div>
          <div>      0.00       0.00 us       0.00 us       0.00 us             16  RELEASEDIR</div>
          <div>      0.00      61.50 us      46.00 us      77.00 us              2     ENTRYLK</div>
          <div>      0.00      82.00 us      67.00 us      97.00 us              2       FLUSH</div>
          <div>      0.00     265.00 us     265.00 us     265.00 us              1      CREATE</div>
          <div>      0.00      57.43 us      30.00 us      85.00 us              7      STATFS</div>
          <div>      0.00      61.12 us      37.00 us     107.00 us             16     OPENDIR</div>
          <div>      0.00      44.04 us      12.00 us      86.00 us             24       FSTAT</div>
          <div>      0.00      41.42 us      24.00 us      96.00 us             26    GETXATTR</div>
          <div>      0.00      45.93 us      24.00 us     133.00 us             28     READDIR</div>
          <div>      0.00      57.17 us      25.00 us     147.00 us             24        OPEN</div>
          <div>      0.00     145.28 us      31.00 us     288.00 us             32    READDIRP</div>
          <div>      0.00      39.50 us      10.00 us     152.00 us            132     INODELK</div>
          <div>      0.00     330.97 us      20.00 us   14280.00 us             62      LOOKUP</div>
          <div>      0.00      79.06 us      19.00 us     851.00 us            430    FXATTROP</div>
          <div>      0.02      29.32 us       7.00 us   28154.00 us          22568    FINODELK</div>
          <div>      7.80 1313096.68 us     125.00 us 23281862.00 us            189       FSYNC</div>
          <div>     92.18     397.92 us      76.00 us 1838343.00 us        7372799       WRITE</div>
          <div><br>
          </div>
          <div>    Duration: 7811 seconds</div>
          <div>   Data Read: 0 bytes</div>
          <div>Data Written: 966367641600 bytes</div>
          <div><br>
          </div>
          <div>Interval 0 Stats:</div>
          <div>   Block Size:             131072b+              262144b+</div>
          <div> No. of Reads:                    0                     0</div>
          <div>No. of Writes:              7372798                     1</div>
          <div> %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop</div>
          <div> ---------   -----------   -----------   -----------   ------------        ----</div>
          <div>      0.00       0.00 us       0.00 us       0.00 us             25     RELEASE</div>
          <div>      0.00       0.00 us       0.00 us       0.00 us             16  RELEASEDIR</div>
          <div>      0.00      61.50 us      46.00 us      77.00 us              2     ENTRYLK</div>
          <div>      0.00      82.00 us      67.00 us      97.00 us              2       FLUSH</div>
          <div>      0.00     265.00 us     265.00 us     265.00 us              1      CREATE</div>
          <div>      0.00      57.43 us      30.00 us      85.00 us              7      STATFS</div>
          <div>      0.00      61.12 us      37.00 us     107.00 us             16     OPENDIR</div>
          <div>      0.00      44.04 us      12.00 us      86.00 us             24       FSTAT</div>
          <div>      0.00      41.42 us      24.00 us      96.00 us             26    GETXATTR</div>
          <div>      0.00      45.93 us      24.00 us     133.00 us             28     READDIR</div>
          <div>      0.00      57.17 us      25.00 us     147.00 us             24        OPEN</div>
          <div>      0.00     145.28 us      31.00 us     288.00 us             32    READDIRP</div>
          <div>      0.00      39.50 us      10.00 us     152.00 us            132     INODELK</div>
          <div>      0.00     330.97 us      20.00 us   14280.00 us             62      LOOKUP</div>
          <div>      0.00      79.06 us      19.00 us     851.00 us            430    FXATTROP</div>
          <div>      0.02      29.32 us       7.00 us   28154.00 us          22568    FINODELK</div>
          <div>      7.80 1313096.68 us     125.00 us 23281862.00 us            189       FSYNC</div>
          <div>     92.18     397.92 us      76.00 us 1838343.00 us        7372799       WRITE</div>
          <div><br>
          </div>
          <div>    Duration: 7811 seconds</div>
          <div>   Data Read: 0 bytes</div>
          <div>Data Written: 966367641600 bytes</div>
          <div><br>
          </div>
        </div>
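One way to read the tables above: Max-Latency is reported in microseconds, so FSYNC, INODELK and FINODELK all peaked above 23 seconds here, well past the volume's 10-second network.ping-timeout. A minimal awk sketch of that check (the 9-field row layout is assumed from the output above, and profile.txt is a hypothetical file holding saved 'gluster volume profile ... info' output):

```shell
# Flag any fop whose Max-Latency exceeded the volume's 10-second
# network.ping-timeout. The 9-field row layout is assumed from the
# profile tables above; profile.txt is a hypothetical saved copy of
# the 'gluster volume profile ... info' output.
awk 'NF == 9 && $8 + 0 > 0 {
    max_s = $6 / 1000000            # Max-Latency column, us -> seconds
    if (max_s > 10)
        printf "%s: max %.1fs over %d calls\n", $9, max_s, $8
}' profile.txt
```

On the stor1 table this singles out FSYNC, INODELK and FINODELK, which matches the 23-second stalls mentioned earlier in the thread.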
        <div>Does that make anything clearer?</div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">2014-10-13 20:40 GMT+03:00 Roman <span dir="ltr">&lt;<a href="mailto:romeo.r@gmail.com" target="_blank">romeo.r@gmail.com</a>&gt;</span>:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div dir="ltr">I think I may know what the issue was. There
              was an iscsitarget service running that was exporting this
              generated block device, so maybe my colleague&#39;s Windows
              server picked it up and mounted it :) I&#39;ll see if it
              happens again.</div>
            <div class="gmail_extra">
              <div>
                <div><br>
                  <div class="gmail_quote">2014-10-13 20:27 GMT+03:00
                    Roman <span dir="ltr">&lt;<a href="mailto:romeo.r@gmail.com" target="_blank">romeo.r@gmail.com</a>&gt;</span>:<br>
                    <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                      <div dir="ltr">So may I restart the volume and
                        start the test, or do you need something else
                        from this issue?</div>
                      <div class="gmail_extra">
                        <div>
                          <div><br>
                            <div class="gmail_quote">2014-10-13 19:49
                              GMT+03:00 Pranith Kumar Karampuri <span dir="ltr">&lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;</span>:<br>
                              <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                                <div text="#000000" bgcolor="#FFFFFF"><span>
                                    <br>
                                    <div>On 10/13/2014 10:03 PM, Roman
                                      wrote:<br>
                                    </div>
                                    <blockquote type="cite">
                                      <div dir="ltr">Hmm,
                                        <div>seems like another strange
                                          issue? I&#39;ve seen this before
                                          and had to restart the volume
                                          to get my empty space back.</div>
                                        <div>
                                          <div>root@glstor-cli:/srv/nfs/HA-WIN-TT-1T# ls -l</div>
                                          <div>total 943718400</div>
                                          <div>-rw-r--r-- 1 root root 966367641600 Oct 13 16:55 disk</div>
                                          <div>root@glstor-cli:/srv/nfs/HA-WIN-TT-1T# rm disk</div>
                                          <div>root@glstor-cli:/srv/nfs/HA-WIN-TT-1T# df -h</div>
                                          <div>Filesystem                                              Size  Used Avail Use% Mounted on</div>
                                          <div>rootfs                                                  282G  1.1G  266G   1% /</div>
                                          <div>udev                                                     10M     0   10M   0% /dev</div>
                                          <div>tmpfs                                                   1.4G  228K  1.4G   1% /run</div>
                                          <div>/dev/disk/by-uuid/c62ee3c0-c0e5-44af-b0cd-7cb3fbcc0fba  282G  1.1G  266G   1% /</div>
                                          <div>tmpfs                                                   5.0M     0  5.0M   0% /run/lock</div>
                                          <div>tmpfs                                                   5.2G     0  5.2G   0% /run/shm</div>
                                          <div>stor1:HA-WIN-TT-1T                                     1008G  901G   57G  95% /srv/nfs/HA-WIN-TT-1T</div>
                                        </div>
                                        <div><br>
                                        </div>
                                        <div>No file, but the reported
                                          usage is still 901G.</div>
                                        <div>Both servers show the same.</div>
                                        <div>Do I really have to restart
                                          the volume to fix that?</div>
                                      </div>
                                    </blockquote>
                                  </span> IMO this can happen if there
                                  is an fd leak. open-fd is the only
                                  variable that can change with volume
                                  restart. How do you re-create the bug?<span><font color="#888888"><br>
                                      <br>
                                      Pranith</font></span>
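An fd leak like this can usually be confirmed directly on a brick server: disk space from a deleted file is only reclaimed once every descriptor held on it is closed. A minimal sketch (the pgrep pattern matching the brick path is an assumption from this thread; adjust it for your volume, and run as root):

```shell
# List descriptors the brick process still holds on deleted files.
# The pgrep pattern matching the brick path is an assumption from
# this thread; adjust it for your volume. Run as root on the brick.
BRICK_PID=$(pgrep -f 'glusterfsd.*NFS-WIN' | head -n 1)
ls -l "/proc/$BRICK_PID/fd" 2>/dev/null | grep '(deleted)'
```

If this prints anything, the "missing" 901G is held by open descriptors, and closing them (e.g. via a volume restart) releases the space.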
                                  <div>
                                    <div><br>
                                      <blockquote type="cite">
                                        <div class="gmail_extra"><br>
                                          <div class="gmail_quote">2014-10-13
                                            19:30 GMT+03:00 Roman <span dir="ltr">&lt;<a href="mailto:romeo.r@gmail.com" target="_blank">romeo.r@gmail.com</a>&gt;</span>:<br>
                                            <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                                              <div dir="ltr">Sure.
                                                <div>I&#39;ll let it to run
                                                  for this night .</div>
                                              </div>
                                              <div class="gmail_extra">
                                                <div>
                                                  <div><br>
                                                    <div class="gmail_quote">2014-10-13
                                                      19:19 GMT+03:00
                                                      Pranith Kumar
                                                      Karampuri <span dir="ltr">&lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;</span>:<br>
                                                      <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                                                        <div text="#000000" bgcolor="#FFFFFF"> hi Roman,<br>
                                                               Do you
                                                          think we can run
                                                          this test again?
                                                          This time, could
                                                          you enable
                                                          profiling with
                                                          &#39;gluster
                                                          volume profile
                                                          &lt;volname&gt;
                                                          start&#39;, do the
                                                          same test, and
                                                          then provide the
                                                          output of &#39;gluster
                                                          volume profile
                                                          &lt;volname&gt;
                                                          info&#39; plus the
                                                          logs after the
                                                          test?<span><font color="#888888"><br>
                                                          <br>
                                                          Pranith</font></span>
                                                          <div>
                                                          <div><br>
                                                          <div>On
                                                          10/13/2014
                                                          09:45 PM,
                                                          Roman wrote:<br>
                                                          </div>
                                                          <blockquote type="cite">
                                                          <div dir="ltr">Sure
                                                          !
                                                          <div><br>
                                                          </div>
                                                          <div>
                                                          <div>root@stor1:~#
                                                          gluster volume
                                                          info</div>
                                                          <div><br>
                                                          </div>
                                                          <div>Volume
                                                          Name:
                                                          HA-2TB-TT-Proxmox-cluster</div>
                                                          <div>Type:
                                                          Replicate</div>
                                                          <div>Volume
                                                          ID:
                                                          66e38bde-c5fa-4ce2-be6e-6b2adeaa16c2</div>
                                                          <div>Status:
                                                          Started</div>
                                                          <div>Number of
                                                          Bricks: 1 x 2
                                                          = 2</div>
                                                          <div>Transport-type:
                                                          tcp</div>
                                                          <div>Bricks:</div>
                                                          <div>Brick1:
                                                          stor1:/exports/HA-2TB-TT-Proxmox-cluster/2TB</div>
                                                          <div>Brick2:
                                                          stor2:/exports/HA-2TB-TT-Proxmox-cluster/2TB</div>
                                                          <div>Options
                                                          Reconfigured:</div>
                                                          <div>nfs.disable:
                                                          0</div>
                                                          <div>network.ping-timeout:
                                                          10</div>
                                                          <div><br>
                                                          </div>
                                                          <div>Volume
                                                          Name:
                                                          HA-WIN-TT-1T</div>
                                                          <div>Type:
                                                          Replicate</div>
                                                          <div>Volume
                                                          ID:
                                                          2937ac01-4cba-44a8-8ff8-0161b67f8ee4</div>
                                                          <div>Status:
                                                          Started</div>
                                                          <div>Number of
                                                          Bricks: 1 x 2
                                                          = 2</div>
                                                          <div>Transport-type:
                                                          tcp</div>
                                                          <div>Bricks:</div>
                                                          <div>Brick1:
                                                          stor1:/exports/NFS-WIN/1T</div>
                                                          <div>Brick2:
                                                          stor2:/exports/NFS-WIN/1T</div>
                                                          <div>Options
                                                          Reconfigured:</div>
                                                          <div>nfs.disable:
                                                          1</div>
                                                          <div>network.ping-timeout:
                                                          10</div>
                                                          <div><br>
                                                          </div>
                                                          <div><br>
                                                          </div>
                                                          </div>
                                                          </div>
                                                          <div class="gmail_extra"><br>
                                                          <div class="gmail_quote">2014-10-13

                                                          19:09
                                                          GMT+03:00
                                                          Pranith Kumar
                                                          Karampuri <span dir="ltr">&lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;</span>:<br>
                                                          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                                                          <div text="#000000" bgcolor="#FFFFFF"> Could you give your &#39;gluster volume info&#39; output?<br>
                                                          <br>
                                                          Pranith
                                                          <div>
                                                          <div><br>
                                                          <div>On
                                                          10/13/2014
                                                          09:36 PM,
                                                          Roman wrote:<br>
                                                          </div>
                                                          </div>
                                                          </div>
                                                          <blockquote type="cite">
                                                          <div>
                                                          <div>
                                                          <div dir="ltr">Hi,

                                                          <div><br>
                                                          </div>
                                                          <div>I&#39;ve got
                                                          this kind of
                                                          setup (servers
                                                          run replica)</div>
                                                          <div><br>
                                                          </div>
                                                          <div><br>
                                                          </div>
                                                          <div>@ 10G
                                                          backend</div>
                                                          <div>gluster
                                                          storage1</div>
                                                          <div>gluster
                                                          storage2</div>
                                                          <div>gluster
                                                          client1</div>
                                                          <div><br>
                                                          </div>
                                                          <div>@1g
                                                          backend</div>
                                                          <div>other
                                                          gluster
                                                          clients</div>
                                                          <div><br>
                                                          </div>
                                                          <div>Servers
                                                          got HW RAID5
                                                          with SAS
                                                          disks.</div>
                                                          <div><br>
                                                          </div>
<div>So today I decided to create a 900 GB file for an iSCSI target, located on a separate GlusterFS volume, using dd (just a dummy file filled with zeros: bs=1G count=900).</div>
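For reference, the dd invocation described above would look roughly like this; the output path and filename are my assumption, since only bs=1G count=900 is stated:

```shell
# Hypothetical reconstruction -- the target path/filename is assumed,
# only "bs=1G count=900" is given above:
#   dd if=/dev/zero of=/srv/nfs/HA-WIN-TT-1T/iscsi-900g.img bs=1G count=900
# Scaled-down, safe-to-run version (10 MiB instead of 900 GiB):
dd if=/dev/zero of=/tmp/dd-demo.img bs=1M count=10
stat -c%s /tmp/dd-demo.img   # 10485760 (10 MiB)
```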
<div>First of all, the process took quite a long time: the write speed was 130 MB/s (the client port was 2 Gbps; the server ports were running at 1 Gbps).</div>
<div>Then it reported something like &quot;endpoint is not connected&quot;, and all of my VMs on the other volume started giving me IO errors.</div>
<div>Server load was around 4.6 (12 cores total).</div>
                                                          <div><br>
                                                          </div>
<div>Maybe it was due to the 2-second ping timeout, so I raised it to 10 seconds.</div>
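In case it helps anyone else reading: the ping timeout is a per-volume option, so raising it should look something like the sketch below. The volume name is inferred from the log prefix "0-HA-2TB-TT-Proxmox-cluster-client-0"; treat this as a sketch, not a verified transcript.

```shell
# Hedged sketch: raise the client ping timeout to 10 s for one volume
# (volume name is an assumption based on the log prefix):
gluster volume set HA-2TB-TT-Proxmox-cluster network.ping-timeout 10
# Check it took effect:
gluster volume info HA-2TB-TT-Proxmox-cluster
```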
                                                          <div><br>
                                                          </div>
<div>Also, while dd was creating the image, the VMs frequently reported that their disks were slow, e.g.:</div>
<div>
<p>WARNINGs: Read IO Wait time is -0.02 (outside range [0:1]).</p>
<p>Is 130 MB/s the maximum bandwidth for all of the volumes in total? Then why would we need 10G backends?</p>
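A back-of-envelope check (assuming this is a replica-2 volume, which the two client-N translators in the mount log suggest): a FUSE client writes each block to both servers in parallel, and each server sits on a 1 Gbps link, so per-volume write throughput is capped near what a single 1 Gbps link carries:

```shell
# Payload ceiling of a 1 Gbps link, ignoring protocol overhead:
awk 'BEGIN { printf "%.0f MB/s\n", 1e9 / 8 / 1e6 }'   # prints "125 MB/s"
```

That is close to the observed 130 MB/s, so under these assumptions the 1 Gbps server ports, not GlusterFS itself, would be the limit; the 10G backend only pays off once the servers are on it too.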
<p>The local HW RAID speed is 300 MB/s, so that should not be the bottleneck. Any ideas or advice?</p>
                                                          <p><br>
                                                          </p>
<p>Maybe someone has an optimized sysctl.conf for a 10G backend?</p>
<p>Mine is pretty basic, just what you find by googling.</p>
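For comparison, the kind of sysctl.conf additions commonly circulated for 10GbE hosts look like the following. Every value here is illustrative, not something recommended in this thread, and should be benchmarked on your own hardware before adopting:

```ini
# Illustrative 10GbE tuning block (assumed values -- benchmark before adopting)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 30000
```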
                                                          <p><br>
                                                          </p>
<p>Just to mention: those VMs were connected via a separate 1 Gbps interface, so they should not have been affected by the client on the 10G backend.</p>
                                                          <p><br>
                                                          </p>
<p>The logs are pretty unhelpful; they just say this during the outage:</p>
                                                          <p><br>
                                                          </p>
<p>[2014-10-13 12:09:18.392910] W [client-handshake.c:276:client_ping_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-0: timer must have expired</p>
<p>[2014-10-13 12:10:08.389708] C [client-handshake.c:127:rpc_client_ping_timer_expired] 0-HA-2TB-TT-Proxmox-cluster-client-0: server <a href="http://10.250.0.1:49159" target="_blank">10.250.0.1:49159</a> has not responded in the last 2 seconds, disconnecting.</p>
<p>[2014-10-13 12:10:08.390312] W [client-handshake.c:276:client_ping_cbk] 0-HA-2TB-TT-Proxmox-cluster-client-0: timer must have expired</p>
                                                          </div>
<div>So I decided to set the timeout a bit higher.</div>
                                                          <div>
                                                          <div><br>
                                                          </div>
<div>So it seems to me that under high load GlusterFS is not usable? 130 MB/s is not that much load to be getting timeouts, or to have the system become so slow that the VMs suffer.</div>
                                                          <div><br>
                                                          </div>
<div>Of course, after the disconnection the healing process started, but since the VMs had lost the connection to both servers it was pretty useless; they could not run anymore. And by the way, when you load the server with such a huge job (a 900 GB dd), the healing process goes very slowly :)</div>
                                                          <div><br>
                                                          </div>
                                                          <div><br>
                                                          </div>
                                                          <div><br>
                                                          </div>
                                                          -- <br>
                                                          Best regards,<br>
                                                          Roman. </div>
                                                          </div>
                                                          <br>
                                                          <fieldset></fieldset>
                                                          <br>
                                                          </div>
                                                          </div>
                                                          <pre>_______________________________________________
Gluster-users mailing list
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a></pre>
                                                          </blockquote>
                                                          <br>
                                                          </div>
                                                          </blockquote>
                                                          </div>
                                                          <br>
                                                          <br clear="all">
                                                          <div><br>
                                                          </div>
</div>
                                                          </blockquote>
                                                          <br>
                                                          </div>
                                                          </div>
                                                        </div>
                                                      </blockquote>
                                                    </div>
                                                    <br>
                                                    <br clear="all">
                                                    <div><br>
                                                    </div>
                                                  </div>
                                                </div>
</div>
                                            </blockquote>
                                          </div>
                                          <br>
                                          <br clear="all">
                                          <div><br>
                                          </div>
</div>
                                      </blockquote>
                                      <br>
                                    </div>
                                  </div>
                                </div>
                              </blockquote>
                            </div>
                            <br>
                            <br clear="all">
                            <div><br>
                            </div>
                          </div>
                        </div>
</div>
                    </blockquote>
                  </div>
                  <br>
                  <br clear="all">
                  <div><br>
                  </div>
                </div>
              </div>
</div>
          </blockquote>
        </div>
        <br>
        <br clear="all">
        <div><br>
        </div>
      </div>
    </blockquote>
    <br>
  </div></div></div>

</blockquote></div><br><br clear="all"><div><br></div>-- <br>Best regards,<br>Roman.
</div>