<html><body><div style="color:#000; background-color:#fff; font-family:times new roman, new york, times, serif;font-size:12pt"><div><span>The volume is not starting - this was the issue. Please let me know the diagnostic or debug procedures.</span></div><div><span></span> </div><div><span></span> </div><div><span>Logs:</span></div><div><span></span> </div><div>usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x293) [0x30cac0a443]<br> /usr/sbin/glusterfsd(glusterfs_handle_terminate+0x15) [0x40a955]))) 0-:<br> received signum (15), shutting down<br> [2013-06-02 09:32:16.973895] W [glusterfsd.c:831:cleanup_and_exit]<br> (-->/lib64/libc.so.6(clone+0x6d) [0x3ef56e68ed]<br> (-->/lib64/libpthread.so.0()<br> [0x3ef5a077e1] (-->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xdd)<br> [0x405d4d]))) 0-: received signum (15), shutting down<br></div> <div style="font-family: times new roman, new york, times, serif;
font-size: 12pt;"> <div style="font-family: times new roman, new york, times, serif; font-size: 12pt;"> <div dir="ltr"> <div style="margin: 5px 0px; padding: 0px; border: 1px solid rgb(204, 204, 204); height: 0px; line-height: 0; font-size: 0px;" class="hr" contentEditable="false" readonly="true"></div> <font size="2" face="Arial"> <b><span style="font-weight: bold;">From:</span></b> Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;<br> <b><span style="font-weight: bold;">To:</span></b> srinivas jonn &lt;jmsrinivas@yahoo.com&gt; <br><b><span style="font-weight: bold;">Cc:</span></b> gluster-users@gluster.org <br> <b><span style="font-weight: bold;">Sent:</span></b> Monday, 3 June 2013 3:27 PM<br> <b><span style="font-weight: bold;">Subject:</span></b> Re: [Gluster-users] recovering gluster volume || startup failure<br> </font> </div> <div class="y_msg_container"><br>Srinivas,<br><br>The volume is in the stopped state. You could start the volume by
running<br>"gluster volume start gvol1". This should make your attempts at mounting<br>the volume successful. <br><br>thanks,<br>krish<br><br>----- Original Message -----<br>> Krish,<br>> This gives general volume information; can the state of the volume be<br>> determined from any specific logs?<br>> #gluster volume info gvol1<br>> Volume Name: gvol1<br>> Type: Distribute<br>> Volume ID: aa25aa58-d191-432a-a84b-325051347af6<br>> Status: Stopped<br>> Number of Bricks: 1<br>> Transport-type: tcp<br>> Bricks:<br>> Brick1: 10.0.0.30:/export/brick1<br>> Options Reconfigured:<br>> nfs.addr-namelookup: off<br>> nfs.port: 2049<br>> From: Krishnan Parthasarathi <<a href="mailto:kparthas@redhat.com" ymailto="mailto:kparthas@redhat.com">kparthas@redhat.com</a>><br>> To: srinivas jonn <<a href="mailto:jmsrinivas@yahoo.com" ymailto="mailto:jmsrinivas@yahoo.com">jmsrinivas@yahoo.com</a>><br>> Cc: <a
href="mailto:gluster-users@gluster.org" ymailto="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>> Sent: Monday, 3 June 2013 3:14 PM<br>> Subject: Re: [Gluster-users] recovering gluster volume || startup failure<br>> <br>> Srinivas,<br>> <br>> Could you paste the output of "gluster volume info gvol1"?<br>> This should give us an idea as to what was the state of the volume<br>> before the power loss.<br>> <br>> thanks,<br>> krish<br>> <br>> ----- Original Message -----<br>> > Hello Gluster users:<br>> > Sorry for the long post; I have run out of ideas here. Kindly let me know<br>> > if I am looking in the right places for logs, and any suggested actions. Thanks.<br>> > A sudden power loss caused a hard reboot - now the volume does not start.<br>> > GlusterFS 3.3.1 on CentOS 6.1, transport: TCP<br>> > sharing the volume over NFS for VM storage - VHD files<br>>
> Type: distributed - only 1 node (brick)<br>> > XFS (LVM)<br>> > mount /dev/datastore1/mylv1 /export/brick1 - mounts the VHD files. Is there<br>> > a way to recover these files?<br>> > cat export-brick1.log<br>> > [2013-06-02 09:29:00.832914] I [glusterfsd.c:1666:main]<br>> > 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.3.1<br>> > [2013-06-02 09:29:00.845515] I [graph.c:241:gf_add_cmdline_options]<br>> > 0-gvol1-server: adding option 'listen-port' for volume 'gvol1-server' with<br>> > value '24009'<br>> > [2013-06-02 09:29:00.845558] I [graph.c:241:gf_add_cmdline_options]<br>> > 0-gvol1-posix: adding option 'glusterd-uuid' for volume 'gvol1-posix' with<br>> > value '16ee7a4e-ee9b-4543-bd61-9b444100693d'<br>> > [2013-06-02 09:29:00.846654] W [options.c:782:xl_opt_validate]<br>> > 0-gvol1-server: option 'listen-port' is
deprecated, preferred is<br>> > 'transport.socket.listen-port', continuing with correction<br>> > Given volfile:<br>> > +------------------------------------------------------------------------------+<br>> > 1: volume gvol1-posix<br>> > 2: type storage/posix<br>> > 3: option directory /export/brick1<br>> > 4: option volume-id aa25aa58-d191-432a-a84b-325051347af6<br>> > 5: end-volume<br>> > 6:<br>> > 7: volume gvol1-access-control<br>> > 8: type features/access-control<br>> > 9: subvolumes gvol1-posix<br>> > 10: end-volume<br>> > 11:<br>> > 12: volume gvol1-locks<br>> > 13: type features/locks<br>> > 14: subvolumes gvol1-access-control<br>> > ----------<br>> > -----------------<br>> > <br>> > 46: option transport-type tcp<br>> > 47: option auth.login./export/brick1.allow<br>> >
6c4653bb-b708-46e8-b3f9-177b4cdbbf28<br>> > 48: option auth.login.6c4653bb-b708-46e8-b3f9-177b4cdbbf28.password<br>> > 091ae3b1-40c2-4d48-8870-6ad7884457ac<br>> > 49: option auth.addr./export/brick1.allow *<br>> > 50: subvolumes /export/brick1<br>> > 51: end-volume<br>> > +------------------------------------------------------------------------------+<br>> > [2013-06-02 09:29:03.963001] W [socket.c:410:__socket_keepalive] 0-socket:<br>> > failed to set keep idle on socket 8<br>> > [2013-06-02 09:29:03.963046] W [socket.c:1876:socket_server_event_handler]<br>> > 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported<br>> > [2013-06-02 09:29:04.850120] I [server-handshake.c:571:server_setvolume]<br>> > 0-gvol1-server: accepted client from<br>> > iiclab-oel1-9347-2013/06/02-09:29:00:835397-gvol1-client-0-0 (version:<br>> > 3.3.1)<br>> > [2013-06-02
09:32:16.973786] W [glusterfsd.c:831:cleanup_and_exit]<br>> > (-->/usr/lib64/libgfrpc.so.0(rpcsvc_notify+0x93) [0x30cac0a5b3]<br>> > (-->/usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x293) [0x30cac0a443]<br>> > (-->/usr/sbin/glusterfsd(glusterfs_handle_terminate+0x15) [0x40a955]))) 0-:<br>> > received signum (15), shutting down<br>> > [2013-06-02 09:32:16.973895] W [glusterfsd.c:831:cleanup_and_exit]<br>> > (-->/lib64/libc.so.6(clone+0x6d) [0x3ef56e68ed]<br>> > (-->/lib64/libpthread.so.0()<br>> > [0x3ef5a077e1] (-->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xdd)<br>> > [0x405d4d]))) 0-: received signum (15), shutting down<br>> > NFS LOG<br>> > [2013-06-02 09:29:00.918906] I [rpc-clnt.c:1657:rpc_clnt_reconfig]<br>> > 0-gvol1-client-0: changing port to 24009 (from 0)<br>> > [2013-06-02 09:29:03.963023] W [socket.c:410:__socket_keepalive] 0-socket:<br>>
> failed to set keep idle on socket 8<br>> > [2013-06-02 09:29:03.963062] W [socket.c:1876:socket_server_event_handler]<br>> > 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported<br>> > [2013-06-02 09:29:04.849941] I<br>> > [client-handshake.c:1636:select_server_supported_programs]<br>> > 0-gvol1-client-0:<br>> > Using Program GlusterFS 3.3.1, Num (1298437), Version (330)<br>> > [2013-06-02 09:29:04.853016] I<br>> > [client-handshake.c:1433:client_setvolume_cbk]<br>> > 0-gvol1-client-0: Connected to 10.0.0.30:24009, attached to remote volume<br>> > '/export/brick1'.<br>> > [2013-06-02 09:29:04.853048] I<br>> > [client-handshake.c:1445:client_setvolume_cbk]<br>> > 0-gvol1-client-0: Server and Client lk-version numbers are not same,<br>> > reopening the fds<br>> > [2013-06-02 09:29:04.853262] I<br>> >
[client-handshake.c:453:client_set_lk_version_cbk] 0-gvol1-client-0: Server<br>> > lk version = 1<br>> > <br>> > _______________________________________________<br>> > Gluster-users mailing list<br>> > <a href="mailto:Gluster-users@gluster.org" ymailto="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>> > <a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br><br><br></div> </div> </div>
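The diagnosis the thread converges on: a volume whose recorded status is "Stopped" must be started explicitly, and the "received signum (15)" lines in the brick log are SIGTERM, i.e. a clean shutdown rather than a crash. A minimal triage sketch follows, assuming the GlusterFS 3.3-era CLI and the volume name and paths from this thread; the gluster commands are shown as comments because they need a running glusterd, and the brick-log excerpt is recreated inline so the last step is self-contained.

```shell
# Recreate the relevant brick-log line from the thread; on a real server
# the file lives under /var/log/glusterfs/bricks/ (here: export-brick1.log).
cat > /tmp/export-brick1.log <<'EOF'
[2013-06-02 09:32:16.973786] W [glusterfsd.c:831:cleanup_and_exit] 0-: received signum (15), shutting down
EOF

# Commands to run on the server (comments only -- they require glusterd):
#   gluster volume info gvol1      # recorded state; in this thread it was "Stopped"
#   gluster volume start gvol1     # a Stopped volume needs an explicit start
#   gluster volume status gvol1    # confirm the brick process came back up
#   mount | grep /export/brick1    # the XFS brick filesystem must be mounted first

# "signum (15)" is SIGTERM: the brick was asked to shut down cleanly, it did
# not crash, so the VHD files on the brick should still be intact.
grep -c 'received signum (15)' /tmp/export-brick1.log   # counts clean-shutdown lines
```

If the start itself fails, the glusterd log (typically /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on 3.3) is the next place to look, followed by the brick log above.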
</div></body></html>