<div dir="ltr">furthermore, when I stop gluster, and restartd glusterfs, in the log, I have <div><br></div><div><div>==&gt; etc-glusterfs-glusterd.vol.log &lt;==</div><div>[2013-01-11 16:39:55.438506] I [glusterfsd.c:1666:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.3.1</div>
[2013-01-11 16:39:55.440098] I [glusterd.c:807:init] 0-management: Using /var/lib/glusterd as working directory
[2013-01-11 16:39:55.440797] C [rdma.c:4102:gf_rdma_init] 0-rpc-transport/rdma: Failed to get IB devices
[2013-01-11 16:39:55.440859] E [rdma.c:4993:init] 0-rdma.management: Failed to initialize IB Device
[2013-01-11 16:39:55.440881] E [rpc-transport.c:316:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2013-01-11 16:39:55.440901] W [rpcsvc.c:1356:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2013-01-11 16:39:55.440992] I [glusterd.c:95:glusterd_uuid_init] 0-glusterd: retrieved UUID: eece061b-1cd0-4f30-ad17-61809297aba9
[2013-01-11 16:39:56.050996] E [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2013-01-11 16:39:56.051041] E [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
[2013-01-11 16:39:56.235444] E [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2013-01-11 16:39:56.235482] E [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
[2013-01-11 16:39:56.235810] E [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2013-01-11 16:39:56.235831] E [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
[2013-01-11 16:39:56.236277] I [rpc-clnt.c:968:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2013-01-11 16:39:56.236843] I [glusterd-handler.c:2227:glusterd_friend_add] 0-management: connect returned 0
[2013-01-11 16:39:56.241266] E [glusterd-store.c:2586:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
[2013-01-11 16:39:56.243958] E [glusterd-utils.c:3418:glusterd_brick_start] 0-glusterd: cannot resolve brick: irene.mdc:/opt/gluster-data/puppet/ssl
[2013-01-11 16:39:56.247827] E [glusterd-utils.c:3418:glusterd_brick_start] 0-glusterd: cannot resolve brick: irene.mdc:/opt/gluster-data/puppet/dist
[2013-01-11 16:39:56.251832] E [glusterd-utils.c:3418:glusterd_brick_start] 0-glusterd: cannot resolve brick: irene.mdc:/opt/gluster-data/puppet/bucket
[2013-01-11 16:39:56.258909] I [rpc-clnt.c:968:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600

==> nfs.log.1 <==
[2013-01-11 16:39:56.259055] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 7
[2013-01-11 16:39:56.259108] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
[2013-01-11 16:39:56.259154] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
[2013-01-11 16:39:56.259172] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported

==> etc-glusterfs-glusterd.vol.log <==
[2013-01-11 16:39:56.266390] I [rpc-clnt.c:968:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600

==> glustershd.log.1 <==
[2013-01-11 16:39:56.266520] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 7
[2013-01-11 16:39:56.266562] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported

==> etc-glusterfs-glusterd.vol.log <==
Given volfile:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option working-directory /var/lib/glusterd
  4:     option transport-type socket,rdma
  5:     option transport.socket.keepalive-time 10
  6:     option transport.socket.keepalive-interval 2
  7:     option transport.socket.read-fail-log off
  8: end-volume

+------------------------------------------------------------------------------+

==> glustershd.log.1 <==
[2013-01-11 16:39:56.266610] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
[2013-01-11 16:39:56.266624] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported

==> etc-glusterfs-glusterd.vol.log <==
[2013-01-11 16:39:56.267030] I [glusterd-handshake.c:397:glusterd_set_clnt_mgmt_program] 0-: Using Program glusterd mgmt, Num (1238433), Version (2)
[2013-01-11 16:39:56.267053] I [glusterd-handshake.c:403:glusterd_set_clnt_mgmt_program] 0-: Using Program Peer mgmt, Num (1238437), Version (2)

==> nfs.log.1 <==
[2013-01-11 16:39:58.148908] W [nfs.c:735:nfs_init_state] 1-nfs: /sbin/rpc.statd not found. Disabling NLM

==> etc-glusterfs-glusterd.vol.log <==
[2013-01-11 16:39:58.149702] I [glusterd-handler.c:1486:glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 184a81f4-ff0f-48d6-adb8-798b98957b1a
[2013-01-11 16:39:58.149818] E [glusterd-utils.c:1926:glusterd_compare_friend_volume] 0-: Cksums of volume puppet-bucket differ. local cksum = 1273524870, remote cksum = 1932840611
[2013-01-11 16:39:58.149858] I [glusterd-handler.c:2395:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to bastille.mdc (0), ret: 0

==> nfs.log.1 <==
[2013-01-11 16:39:58.179450] E [socket.c:333:__socket_server_bind] 1-socket.nfs-server: binding to  failed: Address already in use
[2013-01-11 16:39:58.179512] E [socket.c:336:__socket_server_bind] 1-socket.nfs-server: Port is already in use
[2013-01-11 16:39:58.179535] W [rpcsvc.c:1363:rpcsvc_transport_create] 1-rpc-service: listening on transport failed
[2013-01-11 16:39:58.179663] E [rpcsvc.c:1135:rpcsvc_program_register_portmap] 1-rpc-service: Could not register with portmap
[2013-01-11 16:39:58.179710] E [socket.c:333:__socket_server_bind] 1-socket.nfs-server: binding to  failed: Address already in use
[2013-01-11 16:39:58.179727] E [socket.c:336:__socket_server_bind] 1-socket.nfs-server: Port is already in use
[2013-01-11 16:39:58.179743] W [rpcsvc.c:1363:rpcsvc_transport_create] 1-rpc-service: listening on transport failed
[2013-01-11 16:39:58.179815] E [rpcsvc.c:1135:rpcsvc_program_register_portmap] 1-rpc-service: Could not register with portmap
[2013-01-11 16:39:58.180193] E [socket.c:333:__socket_server_bind] 1-socket.nfs-server: binding to  failed: Address already in use
[2013-01-11 16:39:58.180214] E [socket.c:336:__socket_server_bind] 1-socket.nfs-server: Port is already in use
[2013-01-11 16:39:58.180230] W [rpcsvc.c:1363:rpcsvc_transport_create] 1-rpc-service: listening on transport failed
[2013-01-11 16:39:58.180300] E [rpcsvc.c:1135:rpcsvc_program_register_portmap] 1-rpc-service: Could not register with portmap
[2013-01-11 16:39:58.180319] I [nfs.c:821:init] 1-nfs: NFS service started
[2013-01-11 16:39:58.186245] W [graph.c:316:_log_if_unknown_option] 1-nfs-server: option 'rpc-auth.auth-glusterfs' is not recognized
[2013-01-11 16:39:58.186346] W [graph.c:316:_log_if_unknown_option] 1-nfs-server: option 'rpc-auth-allow-insecure' is not recognized
[2013-01-11 16:39:58.186366] W [graph.c:316:_log_if_unknown_option] 1-nfs-server: option 'transport-type' is not recognized
[2013-01-11 16:39:58.186400] I [client.c:2142:notify] 1-puppet-ssl-client-0: parent translators are ready, attempting connect on transport
[2013-01-11 16:39:58.187286] I [client.c:2142:notify] 1-puppet-ssl-client-1: parent translators are ready, attempting connect on transport
[2013-01-11 16:39:58.188173] I [client.c:2142:notify] 1-puppet-dist-client-0: parent translators are ready, attempting connect on transport
[2013-01-11 16:39:58.189031] I [client.c:2142:notify] 1-puppet-dist-client-1: parent translators are ready, attempting connect on transport
[2013-01-11 16:39:58.189703] I [client.c:2142:notify] 1-puppet-bucket-client-0: parent translators are ready, attempting connect on transport
[2013-01-11 16:39:58.190559] I [client.c:2142:notify] 1-puppet-bucket-client-1: parent translators are ready, attempting connect on transport
Given volfile:
+------------------------------------------------------------------------------+
  1: volume puppet-bucket-client-0
  2:     type protocol/client
  3:     option remote-host sandy.mdc
  4:     option remote-subvolume /opt/gluster-data/snake-puppet/bucket
  5:     option transport-type tcp
  6: end-volume
  7: 
  8: volume puppet-bucket-client-1
  9:     type protocol/client
 10:     option remote-host irene.mdc
 11:     option remote-subvolume /opt/gluster-data/puppet/bucket
 12:     option transport-type tcp
 13: end-volume
 14: 
 15: volume puppet-bucket-replicate-0
 16:     type cluster/replicate
 17:     subvolumes puppet-bucket-client-0 puppet-bucket-client-1
 18: end-volume
 19: 
 20: volume puppet-bucket
 21:     type debug/io-stats
 22:     option latency-measurement off
 23:     option count-fop-hits off
 24:     subvolumes puppet-bucket-replicate-0
 25: end-volume
 26: 
 27: volume puppet-dist-client-0
 28:     type protocol/client
 29:     option remote-host sandy.mdc
 30:     option remote-subvolume /opt/gluster-data/snake-puppet/dist
 31:     option transport-type tcp
 32: end-volume
 33: 
 34: volume puppet-dist-client-1
 35:     type protocol/client
 36:     option remote-host irene.mdc
 37:     option remote-subvolume /opt/gluster-data/puppet/dist
 38:     option transport-type tcp
 39: end-volume
 40: 
 41: volume puppet-dist-replicate-0
 42:     type cluster/replicate
 43:     option data-self-heal-algorithm full
 44:     subvolumes puppet-dist-client-0 puppet-dist-client-1
 45: end-volume
 46: 
 47: volume puppet-dist
 48:     type debug/io-stats
 49:     option latency-measurement off
 50:     option count-fop-hits off
 51:     subvolumes puppet-dist-replicate-0
 52: end-volume
 53: 
 54: volume puppet-ssl-client-0
 55:     type protocol/client
 56:     option remote-host sandy.mdc
 57:     option remote-subvolume /opt/gluster-data/snake-puppet/ssl
 58:     option transport-type tcp
 59: end-volume
 60: 
 61: volume puppet-ssl-client-1
 62:     type protocol/client
 63:     option remote-host irene.mdc
 64:     option remote-subvolume /opt/gluster-data/puppet/ssl
 65:     option transport-type tcp
 66: end-volume
 67: 
 68: volume puppet-ssl-replicate-0
 69:     type cluster/replicate
 70:     option metadata-change-log on
 71:     option data-self-heal-algorithm full
 72:     subvolumes puppet-ssl-client-0 puppet-ssl-client-1
 73: end-volume
 74: 
 75: volume puppet-ssl
 76:     type debug/io-stats
 77:     option latency-measurement off
 78:     option count-fop-hits off
 79:     subvolumes puppet-ssl-replicate-0
 80: end-volume
 81: 
 82: volume nfs-server
 83:     type nfs/server
 84:     option nfs.dynamic-volumes on
 85:     option nfs.nlm on
 86:     option rpc-auth.addr.puppet-ssl.allow *
 87:     option nfs3.puppet-ssl.volume-id bb2ffdd5-f00c-4016-ab07-301a6ede3042
 88:     option rpc-auth.addr.puppet-dist.allow *
 89:     option nfs3.puppet-dist.volume-id 376220d6-dcdd-4f3f-9809-397046a78f5a
 90:     option rpc-auth.addr.puppet-bucket.allow *
 91:     option nfs3.puppet-bucket.volume-id 3a7e146c-7c37-41ea-baa5-5262c79b1232
 92:     subvolumes puppet-ssl puppet-dist puppet-bucket
 93: end-volume

+------------------------------------------------------------------------------+
[2013-01-11 16:39:58.191727] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 1-puppet-ssl-client-1: changing port to 24010 (from 0)
[2013-01-11 16:39:58.191806] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 1-puppet-dist-client-1: changing port to 24012 (from 0)
[2013-01-11 16:39:58.191844] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 1-puppet-bucket-client-1: changing port to 24014 (from 0)
[2013-01-11 16:39:58.191881] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 1-puppet-ssl-client-0: changing port to 24012 (from 0)
[2013-01-11 16:39:58.191974] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 1-puppet-dist-client-0: changing port to 24010 (from 0)
[2013-01-11 16:39:58.192024] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 1-puppet-bucket-client-0: changing port to 24014 (from 0)

==> glustershd.log.1 <==
[2013-01-11 16:39:58.381647] I [graph.c:241:gf_add_cmdline_options] 0-puppet-ssl-replicate-0: adding option 'node-uuid' for volume 'puppet-ssl-replicate-0' with value 'eece061b-1cd0-4f30-ad17-61809297aba9'
[2013-01-11 16:39:58.381673] I [graph.c:241:gf_add_cmdline_options] 0-puppet-dist-replicate-0: adding option 'node-uuid' for volume 'puppet-dist-replicate-0' with value 'eece061b-1cd0-4f30-ad17-61809297aba9'
[2013-01-11 16:39:58.381686] I [graph.c:241:gf_add_cmdline_options] 0-puppet-bucket-replicate-0: adding option 'node-uuid' for volume 'puppet-bucket-replicate-0' with value 'eece061b-1cd0-4f30-ad17-61809297aba9'
[2013-01-11 16:39:58.390396] I [client.c:2142:notify] 1-puppet-ssl-client-0: parent translators are ready, attempting connect on transport
[2013-01-11 16:39:58.391487] I [client.c:2142:notify] 1-puppet-ssl-client-1: parent translators are ready, attempting connect on transport
[2013-01-11 16:39:58.392209] I [client.c:2142:notify] 1-puppet-dist-client-0: parent translators are ready, attempting connect on transport
[2013-01-11 16:39:58.392995] I [client.c:2142:notify] 1-puppet-dist-client-1: parent translators are ready, attempting connect on transport
[2013-01-11 16:39:58.393804] I [client.c:2142:notify] 1-puppet-bucket-client-0: parent translators are ready, attempting connect on transport
[2013-01-11 16:39:58.394598] I [client.c:2142:notify] 1-puppet-bucket-client-1: parent translators are ready, attempting connect on transport
Given volfile:
+------------------------------------------------------------------------------+
  1: volume puppet-bucket-client-0
  2:     type protocol/client
  3:     option remote-host sandy.mdc
  4:     option remote-subvolume /opt/gluster-data/snake-puppet/bucket
  5:     option transport-type tcp
  6: end-volume
  7: 
  8: volume puppet-bucket-client-1
  9:     type protocol/client
 10:     option remote-host irene.mdc
 11:     option remote-subvolume /opt/gluster-data/puppet/bucket
 12:     option transport-type tcp
 13: end-volume
 14: 
 15: volume puppet-bucket-replicate-0
 16:     type cluster/replicate
 17:     option background-self-heal-count 0
 18:     option metadata-self-heal on
 19:     option data-self-heal on
 20:     option entry-self-heal on
 21:     option self-heal-daemon on
 22:     option iam-self-heal-daemon yes
 23:     subvolumes puppet-bucket-client-0 puppet-bucket-client-1
 24: end-volume
 25: 
 26: volume puppet-dist-client-0
 27:     type protocol/client
 28:     option remote-host sandy.mdc
 29:     option remote-subvolume /opt/gluster-data/snake-puppet/dist
 30:     option transport-type tcp
 31: end-volume
 32: 
 33: volume puppet-dist-client-1
 34:     type protocol/client
 35:     option remote-host irene.mdc
 36:     option remote-subvolume /opt/gluster-data/puppet/dist
 37:     option transport-type tcp
 38: end-volume
 39: 
 40: volume puppet-dist-replicate-0
 41:     type cluster/replicate
 42:     option background-self-heal-count 0
 43:     option metadata-self-heal on
 44:     option data-self-heal on
 45:     option entry-self-heal on
 46:     option self-heal-daemon on
 47:     option data-self-heal-algorithm full
 48:     option iam-self-heal-daemon yes
 49:     subvolumes puppet-dist-client-0 puppet-dist-client-1
 50: end-volume
 51: 
 52: volume puppet-ssl-client-0
 53:     type protocol/client
 54:     option remote-host sandy.mdc
 55:     option remote-subvolume /opt/gluster-data/snake-puppet/ssl
 56:     option transport-type tcp
 57: end-volume
 58: 
 59: volume puppet-ssl-client-1
 60:     type protocol/client
 61:     option remote-host irene.mdc
 62:     option remote-subvolume /opt/gluster-data/puppet/ssl
 63:     option transport-type tcp
 64: end-volume
 65: 
 66: volume puppet-ssl-replicate-0
 67:     type cluster/replicate
 68:     option background-self-heal-count 0
 69:     option metadata-self-heal on
 70:     option data-self-heal on
 71:     option entry-self-heal on
 72:     option self-heal-daemon on
 73:     option metadata-change-log on
 74:     option data-self-heal-algorithm full
 75:     option iam-self-heal-daemon yes
 76:     subvolumes puppet-ssl-client-0 puppet-ssl-client-1
 77: end-volume
 78: 
 79: volume glustershd
 80:     type debug/io-stats
 81:     subvolumes puppet-ssl-replicate-0 puppet-dist-replicate-0 puppet-bucket-replicate-0
 82: end-volume

+------------------------------------------------------------------------------+
[2013-01-11 16:39:58.395877] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 1-puppet-ssl-client-1: changing port to 24010 (from 0)
[2013-01-11 16:39:58.395978] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 1-puppet-bucket-client-0: changing port to 24014 (from 0)
[2013-01-11 16:39:58.396048] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 1-puppet-dist-client-1: changing port to 24012 (from 0)
[2013-01-11 16:39:58.396106] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 1-puppet-bucket-client-1: changing port to 24014 (from 0)
[2013-01-11 16:39:58.396161] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 1-puppet-ssl-client-0: changing port to 24012 (from 0)
[2013-01-11 16:39:58.396223] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 1-puppet-dist-client-0: changing port to 24010 (from 0)

==> nfs.log.1 <==
[2013-01-11 16:40:02.148931] I [client-handshake.c:1636:select_server_supported_programs] 1-puppet-ssl-client-1: Using Program GlusterFS 3.3.1, Num (1298437), Version (330)
[2013-01-11 16:40:02.149212] I [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-ssl-client-1: Connected to 10.136.200.16:24010, attached to remote volume '/opt/gluster-data/puppet/ssl'.
[2013-01-11 16:40:02.149238] I [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-ssl-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2013-01-11 16:40:02.149289] I [afr-common.c:3628:afr_notify] 1-puppet-ssl-replicate-0: Subvolume 'puppet-ssl-client-1' came back up; going online.
[2013-01-11 16:40:02.149382] I [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-ssl-client-1: Server lk version = 1
[2013-01-11 16:40:02.149711] I [client-handshake.c:1636:select_server_supported_programs] 1-puppet-dist-client-1: Using Program GlusterFS 3.3.1, Num (1298437), Version (330)
[2013-01-11 16:40:02.149931] I [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-dist-client-1: Connected to 10.136.200.16:24012, attached to remote volume '/opt/gluster-data/puppet/dist'.
[2013-01-11 16:40:02.149951] I [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-dist-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2013-01-11 16:40:02.149995] I [afr-common.c:3628:afr_notify] 1-puppet-dist-replicate-0: Subvolume 'puppet-dist-client-1' came back up; going online.
[2013-01-11 16:40:02.150086] I [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-dist-client-1: Server lk version = 1
[2013-01-11 16:40:02.150727] I [client-handshake.c:1636:select_server_supported_programs] 1-puppet-bucket-client-1: Using Program GlusterFS 3.3.1, Num (1298437), Version (330)
[2013-01-11 16:40:02.151013] I [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-bucket-client-1: Connected to 10.136.200.16:24014, attached to remote volume '/opt/gluster-data/puppet/bucket'.
[2013-01-11 16:40:02.151042] I [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-bucket-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2013-01-11 16:40:02.151091] I [afr-common.c:3628:afr_notify] 1-puppet-bucket-replicate-0: Subvolume 'puppet-bucket-client-1' came back up; going online.
[2013-01-11 16:40:02.151187] I [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-bucket-client-1: Server lk version = 1
[2013-01-11 16:40:02.151623] I [client-handshake.c:1636:select_server_supported_programs] 1-puppet-ssl-client-0: Using Program GlusterFS 3.3.1, Num (1298437), Version (330)
[2013-01-11 16:40:02.151924] I [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-ssl-client-0: Connected to 10.136.200.27:24012, attached to remote volume '/opt/gluster-data/snake-puppet/ssl'.
[2013-01-11 16:40:02.151950] I [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-ssl-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2013-01-11 16:40:02.152166] I [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-ssl-client-0: Server lk version = 1
[2013-01-11 16:40:02.152472] I [afr-common.c:1965:afr_set_root_inode_on_first_lookup] 1-puppet-ssl-replicate-0: added root inode
[2013-01-11 16:40:02.152566] I [client-handshake.c:1636:select_server_supported_programs] 1-puppet-dist-client-0: Using Program GlusterFS 3.3.1, Num (1298437), Version (330)
[2013-01-11 16:40:02.152807] I [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-dist-client-0: Connected to 10.136.200.27:24010, attached to remote volume '/opt/gluster-data/snake-puppet/dist'.
[2013-01-11 16:40:02.152827] I [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-dist-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2013-01-11 16:40:02.152991] I [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-dist-client-0: Server lk version = 1
[2013-01-11 16:40:02.153187] I [afr-common.c:1965:afr_set_root_inode_on_first_lookup] 1-puppet-dist-replicate-0: added root inode
[2013-01-11 16:40:02.153403] I [client-handshake.c:1636:select_server_supported_programs] 1-puppet-bucket-client-0: Using Program GlusterFS 3.3.1, Num (1298437), Version (330)
[2013-01-11 16:40:02.153644] I [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-bucket-client-0: Connected to 10.136.200.27:24014, attached to remote volume '/opt/gluster-data/snake-puppet/bucket'.
[2013-01-11 16:40:02.153665] I [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-bucket-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2013-01-11 16:40:02.153797] I [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-bucket-client-0: Server lk version = 1
[2013-01-11 16:40:02.154054] I [afr-common.c:1965:afr_set_root_inode_on_first_lookup] 1-puppet-bucket-replicate-0: added root inode

==> glustershd.log.1 <==
[2013-01-11 16:40:02.381825] I [client-handshake.c:1636:select_server_supported_programs] 1-puppet-ssl-client-1: Using Program GlusterFS 3.3.1, Num (1298437), Version (330)
[2013-01-11 16:40:02.382098] I [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-ssl-client-1: Connected to 10.136.200.16:24010, attached to remote volume '/opt/gluster-data/puppet/ssl'.
[2013-01-11 16:40:02.382119] I [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-ssl-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2013-01-11 16:40:02.382203] I [afr-common.c:3628:afr_notify] 1-puppet-ssl-replicate-0: Subvolume 'puppet-ssl-client-1' came back up; going online.
[2013-01-11 16:40:02.382321] I [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-ssl-client-1: Server lk version = 1
[2013-01-11 16:40:02.382889] I [client-handshake.c:1636:select_server_supported_programs] 1-puppet-bucket-client-0: Using Program GlusterFS 3.3.1, Num (1298437), Version (330)
[2013-01-11 16:40:02.383190] I [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-bucket-client-0: Connected to 10.136.200.27:24014, attached to remote volume '/opt/gluster-data/snake-puppet/bucket'.
[2013-01-11 16:40:02.383213] I [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-bucket-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2013-01-11 16:40:02.383284] I [afr-common.c:3628:afr_notify] 1-puppet-bucket-replicate-0: Subvolume 'puppet-bucket-client-0' came back up; going online.
[2013-01-11 16:40:02.384825] I [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-bucket-client-0: Server lk version = 1
[2013-01-11 16:40:02.384999] I [client-handshake.c:1636:select_server_supported_programs] 1-puppet-dist-client-1: Using Program GlusterFS 3.3.1, Num (1298437), Version (330)
[2013-01-11 16:40:02.385614] I [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-dist-client-1: Connected to 10.136.200.16:24012, attached to remote volume '/opt/gluster-data/puppet/dist'.
[2013-01-11 16:40:02.385646] I [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-dist-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2013-01-11 16:40:02.385725] I [afr-common.c:3628:afr_notify] 1-puppet-dist-replicate-0: Subvolume 'puppet-dist-client-1' came back up; going online.
[2013-01-11 16:40:02.386268] I [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-dist-client-1: Server lk version = 1
[2013-01-11 16:40:02.386381] I [client-handshake.c:1636:select_server_supported_programs] 1-puppet-bucket-client-1: Using Program GlusterFS 3.3.1, Num (1298437), Version (330)
[2013-01-11 16:40:02.386710] I [client-handshake.c:1636:select_server_supported_programs] 1-puppet-ssl-client-0: Using Program GlusterFS 3.3.1, Num (1298437), Version (330)
[2013-01-11 16:40:02.386817] I [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-bucket-client-1: Connected to 10.136.200.16:24014, attached to remote volume '/opt/gluster-data/puppet/bucket'.
[2013-01-11 16:40:02.386842] I [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-bucket-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2013-01-11 16:40:02.387051] I [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-ssl-client-0: Connected to 10.136.200.27:24012, attached to remote volume '/opt/gluster-data/snake-puppet/ssl'.
[2013-01-11 16:40:02.387087] I [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-ssl-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2013-01-11 16:40:02.387222] I [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-bucket-client-1: Server lk version = 1
[2013-01-11 16:40:02.387345] I [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-ssl-client-0: Server lk version = 1
[2013-01-11 16:40:02.387427] I [client-handshake.c:1636:select_server_supported_programs] 1-puppet-dist-client-0: Using Program GlusterFS 3.3.1, Num (1298437), Version (330)
[2013-01-11 16:40:02.388029] I [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-dist-client-0: Connected to 10.136.200.27:24010, attached to remote volume '/opt/gluster-data/snake-puppet/dist'.
[2013-01-11 16:40:02.388058] I [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-dist-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2013-01-11 16:40:02.389682] I [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-dist-client-0: Server lk version = 1
^C
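
One line in the glusterd log above stands out to me: "Cksums of volume puppet-bucket differ. local cksum = 1273524870, remote cksum = 1932840611". I am only guessing, but if that checksum is the per-volume one stored under /var/lib/glusterd/vols/, then comparing it on both servers should show whether the volume configuration really is out of sync:

# my assumption: glusterd keeps the per-volume config checksum in this file;
# run on both servers (old host and new host) and compare the values
cat /var/lib/glusterd/vols/puppet-bucket/cksum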
<br><br><div class="gmail_quote">On Fri, Jan 11, 2013 at 11:00 AM, YANG ChengFu <span dir="ltr">&lt;<a href="mailto:youngseph@gmail.com" target="_blank">youngseph@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Hello Fu Yong Tao,<div><br></div><div>thanks for your suggest, after I did your steps, I got the following:</div><div><br></div><div><div>gluster&gt; volume sync new-host</div><div class="im"><div>please delete all the volumes before full sync</div>

</div><div>gluster&gt; peer status </div><div>Number of Peers: 1</div><div><br></div><div>Hostname: 10.136.200.27</div><div>Uuid: 184a81f4-ff0f-48d6-adb8-798b98957b1a</div><div>State: Accepted peer request (Connected)</div>
<div>
I still can not put the server in the trusted pool!
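
Also, just to confirm I am checking the right things for the two files you mentioned: I am assuming the peer entry on this host is the file named after the uuid shown in the peer status above, so I would look at the following. Is that right?

# this server's own UUID
cat /var/lib/glusterd/glusterd.info
# assumed file name: one entry per peer, named after the peer's uuid
cat /var/lib/glusterd/peers/184a81f4-ff0f-48d6-adb8-798b98957b1a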
<br><br></div><div><div class="h5"><div class="gmail_quote">On Fri, Jan 11, 2013 at 5:22 AM, 符永涛 <span dir="ltr">&lt;<a href="mailto:yongtaofu@gmail.com" target="_blank">yongtaofu@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

Reinstalling or upgrading a gluster server is a risky operation; before doing it, it is better to back up /etc/glusterfs and /var/lib/glusterd.

/var/lib/glusterd/glusterd.info contains the UUID of the current server, and /var/lib/glusterd/peers contains its peers; make sure both of them are correct.

If the other servers are in a healthy state, then with only those configuration files in place you can start the current host, and the gluster volume files will be synced to it automatically.

Always remember to back up first.
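
For example, something along these lines before touching anything (the backup destination under /root is just an example; put the copies wherever you like):

# keep a copy of both configuration trees before any reinstall/upgrade
cp -a /etc/glusterfs /root/glusterfs-etc-backup
cp -a /var/lib/glusterd /root/glusterd-backup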

2013/1/11, YANG ChengFu <youngseph@gmail.com>:
> Hello,
>
> I upgraded glusterfs from 3.0.5 to 3.3.1. Before I did that, I already had
> two other 3.3.1 hosts (the new hosts) ready and formed into a cluster.
>
> After I upgraded the old hosts, I tried to add them to the cluster, but I
> got "State: Peer Rejected (Connected)", presumably because the same
> volumes still exist on the old hosts. However, I have tried stopping
> glusterd, removing everything from the old host (such as /etc/glusterd,
> /etc/glusterfs and /var/lib/glusterd/), starting glusterd and re-adding it
> to the cluster, and the problem is still there.
>
> I also tried 'volume sync', but it failed with the following error
> message:
>
> gluster> volume sync new-hosts
> please delete all the volumes before full sync
>
> I cannot do that, or I will lose all my data!
>
> The funniest thing I found is that even though the peer status is
> rejected, I can still mount the volume from the old host.
>
> Any ideas?
>
> --
> Yang
> Orange Key: 35745318S1
>

--
符永涛