<html><body><div style="color:#000; background-color:#fff; font-family:times new roman, new york, times, serif;font-size:12pt">Since the VM files (VHDs) are still available on the LVM volume, can a new Gluster volume be created over them and exported via NFS without risk of data loss?  <div style="font-family: times new roman, new york, times, serif; font-size: 12pt;"> <div style="font-family: times new roman, new york, times, serif; font-size: 12pt;"> <div dir="ltr"> <hr size="1">  <font face="Arial" size="2"> <b><span style="font-weight:bold;">From:</span></b> srinivas jonn &lt;jmsrinivas@yahoo.com&gt;<br> <b><span style="font-weight: bold;">To:</span></b> Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;; "gluster-users@gluster.org" &lt;gluster-users@gluster.org&gt; <br> <b><span style="font-weight: bold;">Sent:</span></b> Monday, 3 June 2013 11:42 PM<br> <b><span style="font-weight: bold;">Subject:</span></b> Re: [Gluster-users] gluster startup failure<br> </font> </div>
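Roughly the procedure I have in mind, for reference (an untested sketch; "gvol2" is a placeholder name, and the setfattr step reflects my understanding of how 3.3 marks a brick as belonging to a volume, please correct me if any of this risks the data):

```shell
# Untested sketch. Gluster 3.3 refuses to reuse a path that belonged to
# an earlier volume, so the old volume markers on the brick root would
# have to be cleared first. The VHD files themselves are not touched.
setfattr -x trusted.glusterfs.volume-id /export/brick1
setfattr -x trusted.gfid /export/brick1
rm -rf /export/brick1/.glusterfs      # per-volume metadata only, not data

# Then create and start a fresh single-brick volume and export it over NFS
# ("gvol2" is a placeholder volume name):
gluster volume create gvol2 10.0.0.30:/export/brick1
gluster volume set gvol2 nfs.port 2049
gluster volume start gvol2
```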
 <div class="y_msg_container"><br><div id="yiv1883263050"><div><div style="color:#000;background-color:#fff;font-family:times new roman, new york, times, serif;font-size:12pt;"><div style="" id="yiv1883263050">
<div style="">
<div style="BACKGROUND-COLOR:#fff;FONT-FAMILY:times new roman, new york, times, serif;COLOR:#000;FONT-SIZE:12pt;">
<div id="yiv1883263050yui_3_7_2_33_1370255828883_55"><span id="yiv1883263050yui_3_7_2_33_1370255828883_81">Hello Gluster users,</span></div>
<div style="BACKGROUND-COLOR:transparent;FONT-STYLE:normal;FONT-FAMILY:times new roman, new york, times, serif;COLOR:rgb(0,0,0);FONT-SIZE:16px;" id="yiv1883263050yui_3_7_2_33_1370255828883_103"><br><span id="yiv1883263050yui_3_7_2_33_1370255828883_81"></span></div>I thought I would pose a more refined question, thanks to the support of Krish.<br><br>Problem statement: "gluster volume start" fails.<br>
<div style="FONT-FAMILY:times new roman, new york, times, serif;FONT-SIZE:12pt;" id="yiv1883263050yui_3_7_2_33_1370255828883_60" class="yiv1883263050yui_3_7_2_33_1370255828883_59">
<div style="FONT-FAMILY:times new roman, new york, times, serif;FONT-SIZE:12pt;" id="yiv1883263050yui_3_7_2_33_1370255828883_91" class="yiv1883263050yui_3_7_2_33_1370255828883_63">
<div style="" id="yiv1883263050yui_3_7_2_33_1370255828883_90" class="yiv1883263050y_msg_container">RPM installation of GlusterFS 3.3.0 on CentOS 6.1; XFS is the filesystem layer.<br>NFS export, distributed volume on a single node, TCP transport.<br>
<div style="" id="yiv1883263050">
<div style="" id="yiv1883263050yui_3_7_2_33_1370255828883_89">
<div style="BACKGROUND-COLOR:#fff;FONT-FAMILY:times new roman, new york, times, serif;COLOR:#000;FONT-SIZE:12pt;" id="yiv1883263050yui_3_7_2_33_1370255828883_88" class="yiv1883263050yui_3_7_2_33_1370255828883_69">&nbsp;</div>
<div style="BACKGROUND-COLOR:#fff;FONT-FAMILY:times new roman, new york, times, serif;COLOR:#000;FONT-SIZE:12pt;" class="yiv1883263050yui_3_7_2_33_1370255828883_69 yiv1883263050ms__id3685">This server experienced an accidental power loss while in operation. Any help in resolving or debugging this issue is appreciated.</div>
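For what it's worth, checking the brick filesystem itself seems the natural first step after an unclean shutdown; a sketch (untested here; the LV path is the one from the earlier mail in this thread):

```shell
# Confirm the XFS brick is mounted and healthy before suspecting glusterd.
mount | grep /export/brick1
# If the LV fails to mount, check it first in report-only mode
# (xfs_repair must run on an unmounted filesystem; -n reports
# problems without changing anything):
umount /export/brick1
xfs_repair -n /dev/datastore1/mylv1
```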
<div style="BACKGROUND-COLOR:#fff;FONT-FAMILY:times new roman, new york, times, serif;COLOR:#000;FONT-SIZE:12pt;" class="yiv1883263050yui_3_7_2_33_1370255828883_69 yiv1883263050ms__id3685">&nbsp;</div>
<div style="BACKGROUND-COLOR:#fff;FONT-FAMILY:times new roman, new york, times, serif;COLOR:#000;FONT-SIZE:12pt;" class="yiv1883263050yui_3_7_2_33_1370255828883_69 yiv1883263050ms__id3685">The glusterd logs indicate a failure to resolve the brick:</div>
<div style="BACKGROUND-COLOR:#fff;FONT-FAMILY:times new roman, new york, times, serif;COLOR:#000;FONT-SIZE:12pt;" class="yiv1883263050yui_3_7_2_33_1370255828883_69 yiv1883263050ms__id3685">&nbsp;</div>
<div style="BACKGROUND-COLOR:#fff;FONT-FAMILY:times new roman, new york, times, serif;COLOR:#000;FONT-SIZE:12pt;" class="yiv1883263050yui_3_7_2_33_1370255828883_69 yiv1883263050ms__id3685"><span style="">[2013-06-03 12:03:24.660330] I [glusterd-volume-ops.c:290:glusterd_handle_cli_start_volume] 0-glusterd: Received start vol req for volume gvol1<br>[2013-06-03 12:03:24.660384] I [glusterd-utils.c:285:glusterd_lock] 0-glusterd: Cluster lock held by 16ee7a4e-ee9b-4543-bd61-9b444100693d<br>[2013-06-03 12:03:24.660398] I [glusterd-handler.c:463:glusterd_op_txn_begin] 0-management: Acquired local lock<br>[2013-06-03 12:03:24.842904] E [glusterd-volume-ops.c:842:glusterd_op_stage_start_volume] 0-: Unable to resolve brick 10.0.0.30:/export/brick1<br>[2013-06-03 12:03:24.842938] E [glusterd-op-sm.c:1999:glusterd_op_ac_send_stage_op] 0-: Staging failed<br>[2013-06-03 12:03:24.842959] I [glusterd-op-sm.c:2039:glusterd_op_ac_send_stage_op]
 0-glusterd: Sent op req to 0 peers<br>[2013-06-03 12:03:24.842982] I [glusterd-op-sm.c:2653:glusterd_op_txn_complete] 0-glusterd: Cleared local lock<br style=""></span></div>
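To make the failing step concrete, the offending brick can be pulled straight out of the log; a small sketch (the sample line is copied from the log above, and the closing comment is my understanding of how glusterd resolves a brick, please correct me if wrong):

```shell
# Extract the unresolved brick from a glusterd log line. The sample line
# is the one above; on the real system, grep
# /var/log/glusterfs/etc-glusterfs-glusterd.log instead.
line='[2013-06-03 12:03:24.842904] E [glusterd-volume-ops.c:842:glusterd_op_stage_start_volume] 0-: Unable to resolve brick 10.0.0.30:/export/brick1'
brick=$(printf '%s\n' "$line" | grep -o 'Unable to resolve brick .*' | awk '{print $5}')
host=${brick%%:*}   # 10.0.0.30
path=${brick#*:}    # /export/brick1
echo "$host $path"
# glusterd resolves a brick by matching the host part against the local
# peer's addresses and the UUID recorded under
# /var/lib/glusterd/vols/gvol1/bricks/, so an IP address that changed
# across the reboot, or a regenerated /var/lib/glusterd/glusterd.info,
# would fail in exactly this way.
```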
<div style="BACKGROUND-COLOR:#fff;FONT-FAMILY:times new roman, new york, times, serif;COLOR:#000;FONT-SIZE:12pt;" class="yiv1883263050yui_3_7_2_33_1370255828883_69 yiv1883263050ms__id3685"><br></div>
<div style="BACKGROUND-COLOR:#fff;FONT-FAMILY:times new roman, new york, times, serif;COLOR:#000;FONT-SIZE:12pt;" id="yiv1883263050yui_3_7_2_33_1370255828883_87" class="yiv1883263050yui_3_7_2_33_1370255828883_70">
<div style="FONT-FAMILY:times new roman, new york, times, serif;FONT-SIZE:12pt;" id="yiv1883263050yui_3_7_2_33_1370255828883_86" class="yiv1883263050yui_3_7_2_33_1370255828883_71">
<div dir="ltr">
<div style="BORDER-BOTTOM:#ccc 1px solid;BORDER-LEFT:#ccc 1px solid;PADDING-BOTTOM:0px;LINE-HEIGHT:0;MARGIN:5px 0px;PADDING-LEFT:0px;PADDING-RIGHT:0px;HEIGHT:0px;FONT-SIZE:0px;BORDER-TOP:#ccc 1px solid;BORDER-RIGHT:#ccc 1px solid;PADDING-TOP:0px;" class="yiv1883263050hr"></div><font face="Arial" size="2"><b><span style="FONT-WEIGHT:bold;">From:</span></b> Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;<br><b><span style="FONT-WEIGHT:bold;">To:</span></b> srinivas jonn &lt;jmsrinivas@yahoo.com&gt; <br><b><span style="FONT-WEIGHT:bold;">Sent:</span></b> Monday, 3 June 2013 4:24 PM<br><b><span style="FONT-WEIGHT:bold;">Subject:</span></b> Re: [Gluster-users] recovering gluster volume || startup failure<br></font></div>
<div id="yiv1883263050yui_3_7_2_33_1370255828883_85" class="yiv1883263050y_msg_container"><br>Is this a source install or an rpm install? If it is a source install,<br>the logs would be present under &lt;install-prefix&gt;/var/log/glusterfs<br><br>Having said that, could you attach etc-glusterfs-glusterd.log file?<br>Does the gluster CLI print any error messages to the terminal, when<br>volume-start fails?<br><br>thanks,<br>krish<br><br>----- Original Message -----<br>&gt; there is no /var/log/glusterfs/.cmd_log_history file .<br>&gt; <br>&gt; gluster volume start &lt;volume&gt; - volume start has been unsuccessful<br>&gt; <br>&gt; <br>&gt; let me know for any specific log, I am trying to debug why volume is not<br>&gt; starting -<br>&gt; <br>&gt; feel free to copy the gluster-user DL if you think right<br>&gt; <br>&gt; <br>&gt; <br>&gt; ________________________________<br>&gt;&nbsp; From: Krishnan Parthasarathi &lt;<a rel="nofollow"
 ymailto="mailto:kparthas@redhat.com" target="_blank" href="mailto:kparthas@redhat.com">kparthas@redhat.com</a>&gt;<br>&gt; To: srinivas jonn &lt;<a rel="nofollow" ymailto="mailto:jmsrinivas@yahoo.com" target="_blank" href="mailto:jmsrinivas@yahoo.com">jmsrinivas@yahoo.com</a>&gt;<br>&gt; Cc: <a rel="nofollow" id="yiv1883263050yui_3_7_2_33_1370255828883_96" ymailto="mailto:gluster-users@gluster.org" target="_blank" href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>&gt; Sent: Monday, 3 June 2013 3:56 PM<br>&gt; Subject: Re: [Gluster-users] recovering gluster volume || startup failure<br>&gt;&nbsp; <br>&gt; <br>&gt; Did you run "gluster volume start gvol1"? Could you attach<br>&gt; /var/log/glusterfs/.cmd_log_history (log file)?<br>&gt; From the logs you have pasted, it looks like volume-stop is the last command<br>&gt; you executed.<br>&gt; <br>&gt; thanks,<br>&gt; krish<br>&gt; <br>&gt; ----- Original Message -----<br>&gt; &gt; the
volume is not starting - this was the issue. Please let me know the<br>&gt; &gt; diagnostic or debug procedures,<br>&gt; &gt; &nbsp;<br>&gt; &gt; &nbsp;<br>&gt; &gt; logs:<br>&gt; &gt; &nbsp;<br>&gt; &gt; usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x293) [0x30cac0a443]<br>&gt; &gt; &nbsp;/usr/sbin/glusterfsd(glusterfs_handle_terminate+0x15) [0x40a955]))) 0-:<br>&gt; &gt; &nbsp;received signum (15), shutting down<br>&gt; &gt; &nbsp;[2013-06-02 09:32:16.973895] W [glusterfsd.c:831:cleanup_and_exit]<br>&gt; &gt; &nbsp;(--&gt;/lib64/libc.so.6(clone+0x6d) [0x3ef56e68ed]<br>&gt; &gt; &nbsp;(--&gt;/lib64/libpthread.so.0()<br>&gt; &gt; &nbsp;[0x3ef5a077e1] (--&gt;/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xdd)<br>&gt; &gt; &nbsp;[0x405d4d]))) 0-: received signum (15), shutting down<br>&gt; &gt;&nbsp; <br>&gt; &gt; <br>&gt; &gt; ________________________________<br>&gt; &gt;&nbsp; From: Krishnan Parthasarathi &lt;<a rel="nofollow" ymailto="mailto:kparthas@redhat.com" target="_blank"
 href="mailto:kparthas@redhat.com">kparthas@redhat.com</a>&gt;<br>&gt; &gt; To: srinivas jonn &lt;<a rel="nofollow" ymailto="mailto:jmsrinivas@yahoo.com" target="_blank" href="mailto:jmsrinivas@yahoo.com">jmsrinivas@yahoo.com</a>&gt;<br>&gt; &gt; Cc: <a rel="nofollow" ymailto="mailto:gluster-users@gluster.org" target="_blank" href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>&gt; &gt; Sent: Monday, 3 June 2013 3:27 PM<br>&gt; &gt; Subject: Re: [Gluster-users] recovering gluster volume || startup failure<br>&gt; &gt;&nbsp; <br>&gt; &gt; <br>&gt; &gt; Srinivas,<br>&gt; &gt; <br>&gt; &gt; The volume is in stopped state. You could start the volume by running<br>&gt; &gt; "gluster volume start gvol1". This should make your attempts at mounting<br>&gt; &gt; the volume successful.<br>&gt; &gt; <br>&gt; &gt; thanks,<br>&gt; &gt; krish<br>&gt; &gt; <br>&gt; &gt; ----- Original Message -----<br>&gt; &gt; &gt; Krish,<br>&gt; &gt; &gt; this is
 giving general volume information , can the state of volume known<br>&gt; &gt; &gt; from any specific logs?<br>&gt; &gt; &gt; #gluster volume info gvol1<br>&gt; &gt; &gt; Volume Name: gvol1<br>&gt; &gt; &gt; Type: Distribute<br>&gt; &gt; &gt; Volume ID: aa25aa58-d191-432a-a84b-325051347af6<br>&gt; &gt; &gt; Status: Stopped<br>&gt; &gt; &gt; Number of Bricks: 1<br>&gt; &gt; &gt; Transport-type: tcp<br>&gt; &gt; &gt; Bricks:<br>&gt; &gt; &gt; Brick1: 10.0.0.30:/export/brick1<br>&gt; &gt; &gt; Options Reconfigured:<br>&gt; &gt; &gt; nfs.addr-namelookup: off<br>&gt; &gt; &gt; nfs.port: 2049<br>&gt; &gt; &gt; From: Krishnan Parthasarathi &lt;<a rel="nofollow" ymailto="mailto:kparthas@redhat.com" target="_blank" href="mailto:kparthas@redhat.com">kparthas@redhat.com</a>&gt;<br>&gt; &gt; &gt; To: srinivas jonn &lt;<a rel="nofollow" ymailto="mailto:jmsrinivas@yahoo.com" target="_blank" href="mailto:jmsrinivas@yahoo.com">jmsrinivas@yahoo.com</a>&gt;<br>&gt; &gt;
 &gt; Cc:
 <a rel="nofollow" ymailto="mailto:gluster-users@gluster.org" target="_blank" href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>&gt; &gt; &gt; Sent: Monday, 3 June 2013 3:14 PM<br>&gt; &gt; &gt; Subject: Re: [Gluster-users] recovering gluster volume || startup failure<br>&gt; &gt; &gt; <br>&gt; &gt; &gt; Srinivas,<br>&gt; &gt; &gt; <br>&gt; &gt; &gt; Could you paste the output of "gluster volume info gvol1"?<br>&gt; &gt; &gt; This should give us an idea as to what was the state of the volume<br>&gt; &gt; &gt; before the power loss.<br>&gt; &gt; &gt; <br>&gt; &gt; &gt; thanks,<br>&gt; &gt; &gt; krish<br>&gt; &gt; &gt; <br>&gt; &gt; &gt; ----- Original Message -----<br>&gt; &gt; &gt; &gt; Hello Gluster users:<br>&gt; &gt; &gt; &gt; sorry for long post, I have run out of ideas here, kindly let me know<br>&gt; &gt; &gt; &gt; if<br>&gt; &gt; &gt; &gt; i<br>&gt; &gt; &gt; &gt; am<br>&gt; &gt; &gt; &gt; looking at right places for logs and
any suggested actions.....thanks<br>&gt; &gt; &gt; &gt; a sudden power loss caused a hard reboot - now the volume does not start<br>&gt; &gt; &gt; &gt; Glusterfs- 3.3.1 on Centos 6.1 transport: TCP<br>&gt; &gt; &gt; &gt; sharing volume over NFS for VM storage - VHD Files<br>&gt; &gt; &gt; &gt; Type: distributed - only 1 node (brick)<br>&gt; &gt; &gt; &gt; XFS (LVM)<br>&gt; &gt; &gt; &gt; mount /dev/datastore1/mylv1 /export/brick1 - mounts VHD files.......is<br>&gt; &gt; &gt; &gt; there<br>&gt; &gt; &gt; &gt; a way to recover these files?<br>&gt; &gt; &gt; &gt; cat export-brick1.log<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:00.832914] I [glusterfsd.c:1666:main]<br>&gt; &gt; &gt; &gt; 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version<br>&gt; &gt; &gt; &gt; 3.3.1<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:00.845515] I [graph.c:241:gf_add_cmdline_options]<br>&gt; &gt; &gt; &gt; 0-gvol1-server: adding option 'listen-port' for volume
 'gvol1-server'<br>&gt; &gt; &gt; &gt; with<br>&gt; &gt; &gt; &gt; value '24009'<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:00.845558] I [graph.c:241:gf_add_cmdline_options]<br>&gt; &gt; &gt; &gt; 0-gvol1-posix: adding option 'glusterd-uuid' for volume 'gvol1-posix'<br>&gt; &gt; &gt; &gt; with<br>&gt; &gt; &gt; &gt; value '16ee7a4e-ee9b-4543-bd61-9b444100693d'<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:00.846654] W [options.c:782:xl_opt_validate]<br>&gt; &gt; &gt; &gt; 0-gvol1-server: option 'listen-port' is deprecated, preferred is<br>&gt; &gt; &gt; &gt; 'transport.socket.listen-port', continuing with correction<br>&gt; &gt; &gt; &gt; Given volfile:<br>&gt; &gt; &gt; &gt; +------------------------------------------------------------------------------+<br>&gt; &gt; &gt; &gt; 1: volume gvol1-posix<br>&gt; &gt; &gt; &gt; 2: type storage/posix<br>&gt; &gt; &gt; &gt; 3: option directory /export/brick1<br>&gt; &gt; &gt; &gt; 4: option volume-id
 aa25aa58-d191-432a-a84b-325051347af6<br>&gt; &gt; &gt; &gt; 5: end-volume<br>&gt; &gt; &gt; &gt; 6:<br>&gt; &gt; &gt; &gt; 7: volume gvol1-access-control<br>&gt; &gt; &gt; &gt; 8: type features/access-control<br>&gt; &gt; &gt; &gt; 9: subvolumes gvol1-posix<br>&gt; &gt; &gt; &gt; 10: end-volume<br>&gt; &gt; &gt; &gt; 11:<br>&gt; &gt; &gt; &gt; 12: volume gvol1-locks<br>&gt; &gt; &gt; &gt; 13: type features/locks<br>&gt; &gt; &gt; &gt; 14: subvolumes gvol1-access-control<br>&gt; &gt; &gt; &gt; ----------<br>&gt; &gt; &gt; &gt; -----------------<br>&gt; &gt; &gt; &gt; <br>&gt; &gt; &gt; &gt; 46: option transport-type tcp<br>&gt; &gt; &gt; &gt; 47: option auth.login./export/brick1.allow<br>&gt; &gt; &gt; &gt; 6c4653bb-b708-46e8-b3f9-177b4cdbbf28<br>&gt; &gt; &gt; &gt; 48: option auth.login.6c4653bb-b708-46e8-b3f9-177b4cdbbf28.password<br>&gt; &gt; &gt; &gt; 091ae3b1-40c2-4d48-8870-6ad7884457ac<br>&gt; &gt; &gt; &gt; 49: option
 auth.addr./export/brick1.allow *<br>&gt; &gt; &gt; &gt; 50: subvolumes /export/brick1<br>&gt; &gt; &gt; &gt; 51: end-volume<br>&gt; &gt; &gt; &gt; +------------------------------------------------------------------------------+<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:03.963001] W [socket.c:410:__socket_keepalive]<br>&gt; &gt; &gt; &gt; 0-socket:<br>&gt; &gt; &gt; &gt; failed to set keep idle on socket 8<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:03.963046] W<br>&gt; &gt; &gt; &gt; [socket.c:1876:socket_server_event_handler]<br>&gt; &gt; &gt; &gt; 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:04.850120] I<br>&gt; &gt; &gt; &gt; [server-handshake.c:571:server_setvolume]<br>&gt; &gt; &gt; &gt; 0-gvol1-server: accepted client from<br>&gt; &gt; &gt; &gt; iiclab-oel1-9347-2013/06/02-09:29:00:835397-gvol1-client-0-0 (version:<br>&gt; &gt; &gt; &gt; 3.3.1)<br>&gt; &gt; &gt; &gt; [2013-06-02
 09:32:16.973786] W [glusterfsd.c:831:cleanup_and_exit]<br>&gt; &gt; &gt; &gt; (--&gt;/usr/lib64/libgfrpc.so.0(rpcsvc_notify+0x93) [0x30cac0a5b3]<br>&gt; &gt; &gt; &gt; (--&gt;/usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x293)<br>&gt; &gt; &gt; &gt; [0x30cac0a443]<br>&gt; &gt; &gt; &gt; (--&gt;/usr/sbin/glusterfsd(glusterfs_handle_terminate+0x15) [0x40a955])))<br>&gt; &gt; &gt; &gt; 0-:<br>&gt; &gt; &gt; &gt; received signum (15), shutting down<br>&gt; &gt; &gt; &gt; [2013-06-02 09:32:16.973895] W [glusterfsd.c:831:cleanup_and_exit]<br>&gt; &gt; &gt; &gt; (--&gt;/lib64/libc.so.6(clone+0x6d) [0x3ef56e68ed]<br>&gt; &gt; &gt; &gt; (--&gt;/lib64/libpthread.so.0()<br>&gt; &gt; &gt; &gt; [0x3ef5a077e1] (--&gt;/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xdd)<br>&gt; &gt; &gt; &gt; [0x405d4d]))) 0-: received signum (15), shutting down<br>&gt; &gt; &gt; &gt; NFS LOG<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:00.918906] I
 [rpc-clnt.c:1657:rpc_clnt_reconfig]<br>&gt; &gt; &gt; &gt; 0-gvol1-client-0: changing port to 24009 (from 0)<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:03.963023] W [socket.c:410:__socket_keepalive]<br>&gt; &gt; &gt; &gt; 0-socket:<br>&gt; &gt; &gt; &gt; failed to set keep idle on socket 8<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:03.963062] W<br>&gt; &gt; &gt; &gt; [socket.c:1876:socket_server_event_handler]<br>&gt; &gt; &gt; &gt; 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:04.849941] I<br>&gt; &gt; &gt; &gt; [client-handshake.c:1636:select_server_supported_programs]<br>&gt; &gt; &gt; &gt; 0-gvol1-client-0:<br>&gt; &gt; &gt; &gt; Using Program GlusterFS 3.3.1, Num (1298437), Version (330)<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:04.853016] I<br>&gt; &gt; &gt; &gt; [client-handshake.c:1433:client_setvolume_cbk]<br>&gt; &gt; &gt; &gt; 0-gvol1-client-0: Connected to 10.0.0.30:24009, attached
 to remote<br>&gt; &gt; &gt; &gt; volume<br>&gt; &gt; &gt; &gt; '/export/brick1'.<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:04.853048] I<br>&gt; &gt; &gt; &gt; [client-handshake.c:1445:client_setvolume_cbk]<br>&gt; &gt; &gt; &gt; 0-gvol1-client-0: Server and Client lk-version numbers are not same,<br>&gt; &gt; &gt; &gt; reopening the fds<br>&gt; &gt; &gt; &gt; [2013-06-02 09:29:04.853262] I<br>&gt; &gt; &gt; &gt; [client-handshake.c:453:client_set_lk_version_cbk] 0-gvol1-client-0:<br>&gt; &gt; &gt; &gt; Server<br>&gt; &gt; &gt; &gt; lk version = 1<br>&gt; &gt; &gt; &gt; <br>&gt; &gt; &gt; &gt; _______________________________________________<br>&gt; &gt; &gt; &gt; Gluster-users mailing list<br>&gt; &gt; &gt; &gt; <a rel="nofollow" ymailto="mailto:Gluster-users@gluster.org" target="_blank" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>&gt; &gt; &gt; &gt; <a rel="nofollow" target="_blank"
 href="http://supercolony.gluster.org/mailman/listinfo/gluster-users">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br>&gt; &gt; &gt; <br>&gt; &gt; &gt; <br>&gt; &gt; &gt; <br>&gt; &gt; &gt; _______________________________________________<br>&gt; &gt; &gt; Gluster-users mailing list<br>&gt; &gt; &gt; <a rel="nofollow" ymailto="mailto:Gluster-users@gluster.org" target="_blank" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>&gt; &gt; &gt; <a rel="nofollow" target="_blank" href="http://supercolony.gluster.org/mailman/listinfo/gluster-users">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br><br></div></div></div></div></div><br><br></div></div></div></div></div></div></div></div></div><br>_______________________________________________<br>Gluster-users mailing list<br><a ymailto="mailto:Gluster-users@gluster.org" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br><a
 href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br><br></div> </div> </div>  </div></body></html>