<html><body><div style="color:#000; background-color:#fff; font-family:times new roman, new york, times, serif;font-size:12pt"><DIV style="RIGHT: auto" id=yiv4780567109>
<DIV style="RIGHT: auto">
<DIV style="BACKGROUND-COLOR: #fff; FONT-FAMILY: times new roman, new york, times, serif; COLOR: #000; FONT-SIZE: 12pt">
<DIV id=yiv4780567109yui_3_7_2_33_1370255828883_55><SPAN id=yiv4780567109yui_3_7_2_33_1370255828883_81>Hello Gluster users,</SPAN></DIV>
<DIV style="BACKGROUND-COLOR: transparent; FONT-STYLE: normal; FONT-FAMILY: times new roman, new york, times, serif; COLOR: rgb(0,0,0); FONT-SIZE: 16px" id=yiv4780567109yui_3_7_2_33_1370255828883_103><BR><SPAN id=yiv4780567109yui_3_7_2_33_1370255828883_81></SPAN></DIV>I thought I would pose a more refined question, thanks to the support of Krish.<BR><BR>Problem statement: "gluster volume start" fails.<BR>
<DIV style="FONT-FAMILY: times new roman, new york, times, serif; FONT-SIZE: 12pt" id=yiv4780567109yui_3_7_2_33_1370255828883_60 class=yiv4780567109yui_3_7_2_33_1370255828883_59>
<DIV style="FONT-FAMILY: times new roman, new york, times, serif; FONT-SIZE: 12pt" id=yiv4780567109yui_3_7_2_33_1370255828883_91 class=yiv4780567109yui_3_7_2_33_1370255828883_63>
<DIV style="RIGHT: auto" id=yiv4780567109yui_3_7_2_33_1370255828883_90 class=yiv4780567109y_msg_container>RPM installation of 3.3.0 on CentOS 6.1, with XFS as the filesystem layer.<BR>NFS export, distributed single-node volume, TCP transport.<BR>
<DIV style="RIGHT: auto" id=yiv4780567109>
<DIV style="RIGHT: auto" id=yiv4780567109yui_3_7_2_33_1370255828883_89>
<DIV style="BACKGROUND-COLOR: #fff; FONT-FAMILY: times new roman, new york, times, serif; COLOR: #000; FONT-SIZE: 12pt" id=yiv4780567109yui_3_7_2_33_1370255828883_88 class=yiv4780567109yui_3_7_2_33_1370255828883_69> </DIV>
<DIV style="BACKGROUND-COLOR: #fff; FONT-FAMILY: times new roman, new york, times, serif; COLOR: #000; FONT-SIZE: 12pt" class="yiv4780567109yui_3_7_2_33_1370255828883_69 ms__id3685">This server experienced an accidental power loss while in operation. Any help in resolving or debugging the issue is appreciated.</DIV>
<DIV style="BACKGROUND-COLOR: #fff; FONT-FAMILY: times new roman, new york, times, serif; COLOR: #000; FONT-SIZE: 12pt" class="yiv4780567109yui_3_7_2_33_1370255828883_69 ms__id3685"> </DIV>
<DIV style="BACKGROUND-COLOR: #fff; FONT-FAMILY: times new roman, new york, times, serif; COLOR: #000; FONT-SIZE: 12pt" class="yiv4780567109yui_3_7_2_33_1370255828883_69 ms__id3685">The glusterd logs indicate a failure to resolve the brick:</DIV>
<DIV style="BACKGROUND-COLOR: #fff; FONT-FAMILY: times new roman, new york, times, serif; COLOR: #000; FONT-SIZE: 12pt" class="yiv4780567109yui_3_7_2_33_1370255828883_69 ms__id3685"> </DIV>
<DIV style="BACKGROUND-COLOR: #fff; FONT-FAMILY: times new roman, new york, times, serif; COLOR: #000; FONT-SIZE: 12pt; RIGHT: auto" class="yiv4780567109yui_3_7_2_33_1370255828883_69 ms__id3685"><SPAN style="RIGHT: auto">[2013-06-03 12:03:24.660330] I [glusterd-volume-ops.c:290:glusterd_handle_cli_start_volume] 0-glusterd: Received start vol req for volume gvol1<BR>[2013-06-03 12:03:24.660384] I [glusterd-utils.c:285:glusterd_lock] 0-glusterd: Cluster lock held by 16ee7a4e-ee9b-4543-bd61-9b444100693d<BR>[2013-06-03 12:03:24.660398] I [glusterd-handler.c:463:glusterd_op_txn_begin] 0-management: Acquired local lock<BR>[2013-06-03 12:03:24.842904] E [glusterd-volume-ops.c:842:glusterd_op_stage_start_volume] 0-: Unable to resolve brick 10.0.0.30:/export/brick1<BR>[2013-06-03 12:03:24.842938] E [glusterd-op-sm.c:1999:glusterd_op_ac_send_stage_op] 0-: Staging failed<BR>[2013-06-03 12:03:24.842959] I [glusterd-op-sm.c:2039:glusterd_op_ac_send_stage_op]
0-glusterd: Sent op req to 0 peers<BR>[2013-06-03 12:03:24.842982] I [glusterd-op-sm.c:2653:glusterd_op_txn_complete] 0-glusterd: Cleared local lock<BR style="RIGHT: auto"></SPAN></DIV>
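For anyone following along, the failing step can be pulled apart with a few lines of shell. This is only a sketch, not a fix: the log line is copied from above, and the /var/lib/glusterd paths assume a default 3.3.x RPM layout. Since glusterd resolves a brick by matching its host against the local UUID and the peer list, those records are the first thing to check after a hard reboot.

```shell
# Extract the offending brick from the glusterd error line (copied from above).
line='[2013-06-03 12:03:24.842904] E [glusterd-volume-ops.c:842:glusterd_op_stage_start_volume] 0-: Unable to resolve brick 10.0.0.30:/export/brick1'

brick=${line##*Unable to resolve brick }   # 10.0.0.30:/export/brick1
host=${brick%%:*}                          # 10.0.0.30
path=${brick#*:}                           # /export/brick1
echo "host=$host path=$path"

# Follow-up checks (assumes the default glusterd working directory):
#   cat /var/lib/glusterd/glusterd.info         # this node's UUID
#   ls /var/lib/glusterd/peers/                 # UUIDs of known peers
#   cat /var/lib/glusterd/vols/gvol1/bricks/*   # recorded brick host/path
# If the host recorded for the brick no longer matches the local node's
# UUID/address (e.g. the IP changed across the reboot), brick resolution
# fails exactly like this.
```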
<DIV style="BACKGROUND-COLOR: #fff; FONT-FAMILY: times new roman, new york, times, serif; COLOR: #000; FONT-SIZE: 12pt" class="yiv4780567109yui_3_7_2_33_1370255828883_69 ms__id3685"><BR></DIV>
<DIV style="BACKGROUND-COLOR: #fff; FONT-FAMILY: times new roman, new york, times, serif; COLOR: #000; FONT-SIZE: 12pt" id=yiv4780567109yui_3_7_2_33_1370255828883_87 class=yiv4780567109yui_3_7_2_33_1370255828883_70>
<DIV style="FONT-FAMILY: times new roman, new york, times, serif; FONT-SIZE: 12pt" id=yiv4780567109yui_3_7_2_33_1370255828883_86 class=yiv4780567109yui_3_7_2_33_1370255828883_71>
<DIV dir=ltr>
<DIV style="BORDER-BOTTOM: #ccc 1px solid; BORDER-LEFT: #ccc 1px solid; PADDING-BOTTOM: 0px; LINE-HEIGHT: 0; MARGIN: 5px 0px; PADDING-LEFT: 0px; PADDING-RIGHT: 0px; HEIGHT: 0px; FONT-SIZE: 0px; BORDER-TOP: #ccc 1px solid; BORDER-RIGHT: #ccc 1px solid; PADDING-TOP: 0px" class=hr contentEditable=false readonly="true"></DIV><FONT size=2 face=Arial><B><SPAN style="FONT-WEIGHT: bold">From:</SPAN></B> Krishnan Parthasarathi <kparthas@redhat.com><BR><B><SPAN style="FONT-WEIGHT: bold">To:</SPAN></B> srinivas jonn <jmsrinivas@yahoo.com> <BR><B><SPAN style="FONT-WEIGHT: bold">Sent:</SPAN></B> Monday, 3 June 2013 4:24 PM<BR><B><SPAN style="FONT-WEIGHT: bold">Subject:</SPAN></B> Re: [Gluster-users] recovering gluster volume || startup failure<BR></FONT></DIV>
<DIV id=yiv4780567109yui_3_7_2_33_1370255828883_85 class=yiv4780567109y_msg_container><BR>Is this a source install or an rpm install? If it is a source install,<BR>the logs would be present under <install-prefix>/var/log/glusterfs<BR><BR>Having said that, could you attach etc-glusterfs-glusterd.log file?<BR>Does the gluster CLI print any error messages to the terminal, when<BR>volume-start fails?<BR><BR>thanks,<BR>krish<BR><BR>----- Original Message -----<BR>> there is no /var/log/glusterfs/.cmd_log_history file .<BR>> <BR>> gluster volume start <volume> - volume start has been unsuccessful<BR>> <BR>> <BR>> let me know for any specific log, I am trying to debug why volume is not<BR>> starting -<BR>> <BR>> feel free to copy the gluster-user DL if you think right<BR>> <BR>> <BR>> <BR>> ________________________________<BR>> From: Krishnan Parthasarathi <<A href="mailto:kparthas@redhat.com"
rel=nofollow target=_blank ymailto="mailto:kparthas@redhat.com">kparthas@redhat.com</A>><BR>> To: srinivas jonn <<A href="mailto:jmsrinivas@yahoo.com" rel=nofollow target=_blank ymailto="mailto:jmsrinivas@yahoo.com">jmsrinivas@yahoo.com</A>><BR>> Cc: <A id=yiv4780567109yui_3_7_2_33_1370255828883_96 href="mailto:gluster-users@gluster.org" rel=nofollow target=_blank ymailto="mailto:gluster-users@gluster.org">gluster-users@gluster.org</A><BR>> Sent: Monday, 3 June 2013 3:56 PM<BR>> Subject: Re: [Gluster-users] recovering gluster volume || startup failure<BR>> <BR>> <BR>> Did you run "gluster volume start gvol1"? Could you attach<BR>> /var/log/glusterfs/.cmd_log_history (log file)?<BR>> From the logs you have pasted, it looks like volume-stop is the last command<BR>> you executed.<BR>> <BR>> thanks,<BR>> krish<BR>> <BR>> ----- Original Message -----<BR>> > the volume is not starting - this
was the issue.. please let me know the<BR>> > diagnostic or debug procedures,<BR>> > <BR>> > <BR>> > logs:<BR>> > <BR>> > usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x293) [0x30cac0a443]<BR>> > /usr/sbin/glusterfsd(glusterfs_handle_terminate+0x15) [0x40a955]))) 0-:<BR>> > received signum (15), shutting down<BR>> > [2013-06-02 09:32:16.973895] W [glusterfsd.c:831:cleanup_and_exit]<BR>> > (-->/lib64/libc.so.6(clone+0x6d) [0x3ef56e68ed]<BR>> > (-->/lib64/libpthread.so.0()<BR>> > [0x3ef5a077e1] (-->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xdd)<BR>> > [0x405d4d]))) 0-: received signum (15), shutting down<BR>> > <BR>> > <BR>> > ________________________________<BR>> > From: Krishnan Parthasarathi <<A href="mailto:kparthas@redhat.com" rel=nofollow target=_blank
ymailto="mailto:kparthas@redhat.com">kparthas@redhat.com</A>><BR>> > To: srinivas jonn <<A href="mailto:jmsrinivas@yahoo.com" rel=nofollow target=_blank ymailto="mailto:jmsrinivas@yahoo.com">jmsrinivas@yahoo.com</A>><BR>> > Cc: <A href="mailto:gluster-users@gluster.org" rel=nofollow target=_blank ymailto="mailto:gluster-users@gluster.org">gluster-users@gluster.org</A><BR>> > Sent: Monday, 3 June 2013 3:27 PM<BR>> > Subject: Re: [Gluster-users] recovering gluster volume || startup failure<BR>> > <BR>> > <BR>> > Srinivas,<BR>> > <BR>> > The volume is in stopped state. You could start the volume by running<BR>> > "gluster volume start gvol1". This should make your attempts at mounting<BR>> > the volume successful.<BR>> > <BR>> > thanks,<BR>> > krish<BR>> > <BR>> > ----- Original Message -----<BR>> > > Krish,<BR>> > > this is
giving general volume information , can the state of volume known<BR>> > > from any specific logs?<BR>> > > #gluster volume info gvol1<BR>> > > Volume Name: gvol1<BR>> > > Type: Distribute<BR>> > > Volume ID: aa25aa58-d191-432a-a84b-325051347af6<BR>> > > Status: Stopped<BR>> > > Number of Bricks: 1<BR>> > > Transport-type: tcp<BR>> > > Bricks:<BR>> > > Brick1: 10.0.0.30:/export/brick1<BR>> > > Options Reconfigured:<BR>> > > nfs.addr-namelookup: off<BR>> > > nfs.port: 2049<BR>> > > From: Krishnan Parthasarathi <<A href="mailto:kparthas@redhat.com" rel=nofollow target=_blank ymailto="mailto:kparthas@redhat.com">kparthas@redhat.com</A>><BR>> > > To: srinivas jonn <<A href="mailto:jmsrinivas@yahoo.com" rel=nofollow target=_blank ymailto="mailto:jmsrinivas@yahoo.com">jmsrinivas@yahoo.com</A>><BR>> > > Cc:
<A href="mailto:gluster-users@gluster.org" rel=nofollow target=_blank ymailto="mailto:gluster-users@gluster.org">gluster-users@gluster.org</A><BR>> > > Sent: Monday, 3 June 2013 3:14 PM<BR>> > > Subject: Re: [Gluster-users] recovering gluster volume || startup failure<BR>> > > <BR>> > > Srinivas,<BR>> > > <BR>> > > Could you paste the output of "gluster volume info gvol1"?<BR>> > > This should give us an idea as to what was the state of the volume<BR>> > > before the power loss.<BR>> > > <BR>> > > thanks,<BR>> > > krish<BR>> > > <BR>> > > ----- Original Message -----<BR>> > > > Hello Gluster users:<BR>> > > > sorry for long post, I have run out of ideas here, kindly let me know<BR>> > > > if<BR>> > > > i<BR>> > > > am<BR>> > > > looking at right places for logs and
any suggested actions.....thanks<BR>> > > > a sudden power loss caused hard reboot - now the volume does not start<BR>> > > > Glusterfs- 3.3.1 on Centos 6.1 transport: TCP<BR>> > > > sharing volume over NFS for VM storage - VHD Files<BR>> > > > Type: distributed - only 1 node (brick)<BR>> > > > XFS (LVM)<BR>> > > > mount /dev/datastore1/mylv1 /export/brick1 - mounts VHD files.......is<BR>> > > > there<BR>> > > > a way to recover these files?<BR>> > > > cat export-brick1.log<BR>> > > > [2013-06-02 09:29:00.832914] I [glusterfsd.c:1666:main]<BR>> > > > 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version<BR>> > > > 3.3.1<BR>> > > > [2013-06-02 09:29:00.845515] I [graph.c:241:gf_add_cmdline_options]<BR>> > > > 0-gvol1-server: adding option 'listen-port' for volume
'gvol1-server'<BR>> > > > with<BR>> > > > value '24009'<BR>> > > > [2013-06-02 09:29:00.845558] I [graph.c:241:gf_add_cmdline_options]<BR>> > > > 0-gvol1-posix: adding option 'glusterd-uuid' for volume 'gvol1-posix'<BR>> > > > with<BR>> > > > value '16ee7a4e-ee9b-4543-bd61-9b444100693d'<BR>> > > > [2013-06-02 09:29:00.846654] W [options.c:782:xl_opt_validate]<BR>> > > > 0-gvol1-server: option 'listen-port' is deprecated, preferred is<BR>> > > > 'transport.socket.listen-port', continuing with correction<BR>> > > > Given volfile:<BR>> > > > +------------------------------------------------------------------------------+<BR>> > > > 1: volume gvol1-posix<BR>> > > > 2: type storage/posix<BR>> > > > 3: option directory /export/brick1<BR>> > > > 4: option volume-id
aa25aa58-d191-432a-a84b-325051347af6<BR>> > > > 5: end-volume<BR>> > > > 6:<BR>> > > > 7: volume gvol1-access-control<BR>> > > > 8: type features/access-control<BR>> > > > 9: subvolumes gvol1-posix<BR>> > > > 10: end-volume<BR>> > > > 11:<BR>> > > > 12: volume gvol1-locks<BR>> > > > 13: type features/locks<BR>> > > > 14: subvolumes gvol1-access-control<BR>> > > > ----------<BR>> > > > -----------------<BR>> > > > <BR>> > > > 46: option transport-type tcp<BR>> > > > 47: option auth.login./export/brick1.allow<BR>> > > > 6c4653bb-b708-46e8-b3f9-177b4cdbbf28<BR>> > > > 48: option auth.login.6c4653bb-b708-46e8-b3f9-177b4cdbbf28.password<BR>> > > > 091ae3b1-40c2-4d48-8870-6ad7884457ac<BR>> > > > 49: option
auth.addr./export/brick1.allow *<BR>> > > > 50: subvolumes /export/brick1<BR>> > > > 51: end-volume<BR>> > > > +------------------------------------------------------------------------------+<BR>> > > > [2013-06-02 09:29:03.963001] W [socket.c:410:__socket_keepalive]<BR>> > > > 0-socket:<BR>> > > > failed to set keep idle on socket 8<BR>> > > > [2013-06-02 09:29:03.963046] W<BR>> > > > [socket.c:1876:socket_server_event_handler]<BR>> > > > 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported<BR>> > > > [2013-06-02 09:29:04.850120] I<BR>> > > > [server-handshake.c:571:server_setvolume]<BR>> > > > 0-gvol1-server: accepted client from<BR>> > > > iiclab-oel1-9347-2013/06/02-09:29:00:835397-gvol1-client-0-0 (version:<BR>> > > > 3.3.1)<BR>> > > > [2013-06-02
09:32:16.973786] W [glusterfsd.c:831:cleanup_and_exit]<BR>> > > > (-->/usr/lib64/libgfrpc.so.0(rpcsvc_notify+0x93) [0x30cac0a5b3]<BR>> > > > (-->/usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x293)<BR>> > > > [0x30cac0a443]<BR>> > > > (-->/usr/sbin/glusterfsd(glusterfs_handle_terminate+0x15) [0x40a955])))<BR>> > > > 0-:<BR>> > > > received signum (15), shutting down<BR>> > > > [2013-06-02 09:32:16.973895] W [glusterfsd.c:831:cleanup_and_exit]<BR>> > > > (-->/lib64/libc.so.6(clone+0x6d) [0x3ef56e68ed]<BR>> > > > (-->/lib64/libpthread.so.0()<BR>> > > > [0x3ef5a077e1] (-->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xdd)<BR>> > > > [0x405d4d]))) 0-: received signum (15), shutting down<BR>> > > > NFS LOG<BR>> > > > [2013-06-02 09:29:00.918906] I
[rpc-clnt.c:1657:rpc_clnt_reconfig]<BR>> > > > 0-gvol1-client-0: changing port to 24009 (from 0)<BR>> > > > [2013-06-02 09:29:03.963023] W [socket.c:410:__socket_keepalive]<BR>> > > > 0-socket:<BR>> > > > failed to set keep idle on socket 8<BR>> > > > [2013-06-02 09:29:03.963062] W<BR>> > > > [socket.c:1876:socket_server_event_handler]<BR>> > > > 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported<BR>> > > > [2013-06-02 09:29:04.849941] I<BR>> > > > [client-handshake.c:1636:select_server_supported_programs]<BR>> > > > 0-gvol1-client-0:<BR>> > > > Using Program GlusterFS 3.3.1, Num (1298437), Version (330)<BR>> > > > [2013-06-02 09:29:04.853016] I<BR>> > > > [client-handshake.c:1433:client_setvolume_cbk]<BR>> > > > 0-gvol1-client-0: Connected to 10.0.0.30:24009, attached
to remote<BR>> > > > volume<BR>> > > > '/export/brick1'.<BR>> > > > [2013-06-02 09:29:04.853048] I<BR>> > > > [client-handshake.c:1445:client_setvolume_cbk]<BR>> > > > 0-gvol1-client-0: Server and Client lk-version numbers are not same,<BR>> > > > reopening the fds<BR>> > > > [2013-06-02 09:29:04.853262] I<BR>> > > > [client-handshake.c:453:client_set_lk_version_cbk] 0-gvol1-client-0:<BR>> > > > Server<BR>> > > > lk version = 1<BR>> > > > <BR>> > > > _______________________________________________<BR>> > > > Gluster-users mailing list<BR>> > > > <A href="mailto:Gluster-users@gluster.org" rel=nofollow target=_blank ymailto="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</A><BR>> > > > <A href="http://supercolony.gluster.org/mailman/listinfo/gluster-users"
rel=nofollow target=_blank>http://supercolony.gluster.org/mailman/listinfo/gluster-users</A><BR>> > > <BR>> > > <BR>> > > <BR>> > > _______________________________________________<BR>> > > Gluster-users mailing list<BR>> > > <A href="mailto:Gluster-users@gluster.org" rel=nofollow target=_blank ymailto="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</A><BR>> > > <A href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" rel=nofollow target=_blank>http://supercolony.gluster.org/mailman/listinfo/gluster-users</A><BR><BR></DIV></DIV></DIV></DIV></DIV><BR><BR></DIV></DIV></DIV></DIV></DIV></DIV></div></body></html>