Missed the list :(<br><br><div class="gmail_quote">---------- Forwarded message ----------<br>From: <b class="gmail_sendername">Kaushal M</b> <span dir="ltr"><<a href="mailto:kshlmster@gmail.com">kshlmster@gmail.com</a>></span><br>
Date: Thu, Jul 5, 2012 at 12:46 PM<br>Subject: Re: [Gluster-devel] Fwd: Bug#679767: glusterfs-server: Crash when creating new volume with 'gluster volume create'<br>To: Louis Zuckerman <<a href="mailto:glusterdevel@louiszuckerman.com">glusterdevel@louiszuckerman.com</a>><br>
<br><br>Hi guys,<div>This looks like it's caused by the optimizations done by gcc 4.7. It occurs when gluster is compiled with the default -O2 optimization; -O0 doesn't cause it. Can you confirm?</div><span class="HOEnZb"><font color="#888888"><div>
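If anyone wants to double-check, something like this should rebuild the Debian package without optimization (just a sketch; it assumes the package's debian/rules honours dpkg-buildflags, otherwise export CFLAGS directly before building):<br>
<br>
```shell
# Hypothetical sketch: rebuild glusterfs 3.2.7-1 with -O0 to see
# whether the crash goes away. Package/version names are the ones
# from this thread; the build-flag override assumes dpkg >= 1.16.1.
apt-get source glusterfs
cd glusterfs-3.2.7
DEB_CFLAGS_SET="-g -O0" dpkg-buildpackage -us -uc -b
sudo dpkg -i ../glusterfs-common_3.2.7-1_amd64.deb \
             ../glusterfs-server_3.2.7-1_amd64.deb
```
<br>
If the rebuilt glusterd survives "gluster volume create", that would point at an optimization-sensitive bug (likely undefined behavior that -O2 exposes) rather than a plain logic error.<br>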
<br></div></font></span><div><span class="HOEnZb"><font color="#888888">
- Kaushal </font></span><div><div class="h5"><br><br><div class="gmail_quote">On Wed, Jul 4, 2012 at 6:31 PM, Louis Zuckerman <span dir="ltr"><<a href="mailto:glusterdevel@louiszuckerman.com" target="_blank">glusterdevel@louiszuckerman.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi J.J.B,<br>
<br>
Thanks for reporting the bug.<br>
<br>
I can confirm this is very easy to reproduce...<br>
<br>
Install the 3.2.7 package from Wheezy/Sid, try to create a volume<br>
(with a single brick on the local machine) and glusterd crashes.<br>
<br>
Restart glusterd and you can start the volume, but glusterd crashes<br>
again. Restart it once more and you can stop the volume, followed by<br>
another crash. Restarting yet again lets you delete the volume, with<br>
still another crash.<br>
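Spelled out as commands, the loop I'm seeing looks like this (hostname and brick path taken from the bug report; a sketch, not exact output):<br>
<br>
```shell
# Crash loop on a single server named "wheezy" (glusterfs 3.2.7-1).
# Each gluster command completes its operation, then glusterd segfaults.
service glusterfs-server start
gluster volume create wheezy wheezy:/tmp   # volume is created; glusterd crashes
service glusterfs-server restart
gluster volume start wheezy                # volume starts; glusterd crashes
service glusterfs-server restart
gluster volume stop wheezy                 # volume stops; glusterd crashes
service glusterfs-server restart
gluster volume delete wheezy               # volume is deleted; glusterd crashes
```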
<br>
I'll check the glusterfs bugzilla for related issues & open a bug if<br>
there's not one already. Will follow up later today with the link.<br>
<br>
Also, I want to clear this up:<br>
<br>
> > - if not, is the created volume working?<br>
> The volume is created and working, but I cannot stop its process with<br>
> the init-script (/etc/init.d/glusterfs-server stop). The init-script<br>
> will only stop the management-daemon and I have to kill the volume<br>
> manually.<br>
<br>
That is expected behavior. The glusterfs-server initscript only<br>
controls glusterd, the management daemon. Stopping and starting the<br>
glusterfsd brick export daemons for bricks in a volume is done with the<br>
"gluster volume stop/start" commands in the gluster CLI, which take<br>
effect on all bricks in the volume across all servers.<br>
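Roughly (a sketch; "myvol" is just a placeholder volume name):<br>
<br>
```shell
# The initscript manages only the management daemon:
/etc/init.d/glusterfs-server stop   # stops glusterd; glusterfsd brick
                                    # daemons keep running

# Brick export daemons are controlled cluster-wide via the gluster CLI:
gluster volume stop myvol           # stops glusterfsd for every brick of
                                    # myvol, on all servers in the pool
gluster volume start myvol          # starts them again everywhere
```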
<br>
HTH<br>
<br>
-louis<br>
<div><div><br>
On Tue, Jul 3, 2012 at 1:11 PM, Patrick Matthäi <<a href="mailto:pmatthaei@debian.org" target="_blank">pmatthaei@debian.org</a>> wrote:<br>
> Hello gluster guys,<br>
><br>
> we have found a bug where glusterd crashes every time a volume is<br>
> created or deleted. Full information and backtraces here:<br>
> <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=679767" target="_blank">http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=679767</a><br>
><br>
> Any idea?<br>
><br>
><br>
> Package: glusterfs-server<br>
> Version: 3.2.7-1<br>
> Severity: normal<br>
><br>
> Dear Maintainer,<br>
><br>
> After installing glusterfs-server, when I try to create a volume<br>
> "wheezy", the glusterd daemon crashes. It seems that something goes wrong<br>
> in the communication between "gluster" and "glusterd": a request is<br>
> sent, but no reply arrives (checked this with Wireshark).<br>
><br>
> Command executed:<br>
> # gluster volume create wheezy wheezy:/tmp<br>
><br>
> Trace of glusterd:<br>
> # gdb --args /usr/sbin/glusterd --debug -p /var/run/glusterd.pid<br>
> --volfile=/etc/glusterfs/glusterd.vol<br>
> GNU gdb (GDB) 7.4.1-debian<br>
> Copyright (C) 2012 Free Software Foundation, Inc.<br>
> License GPLv3+: GNU GPL version 3 or later<br>
> <<a href="http://gnu.org/licenses/gpl.html" target="_blank">http://gnu.org/licenses/gpl.html</a>><br>
> This is free software: you are free to change and redistribute it.<br>
> There is NO WARRANTY, to the extent permitted by law. Type "show copying"<br>
> and "show warranty" for details.<br>
> This GDB was configured as "x86_64-linux-gnu".<br>
> For bug reporting instructions, please see:<br>
> <<a href="http://www.gnu.org/software/gdb/bugs/" target="_blank">http://www.gnu.org/software/gdb/bugs/</a>>...<br>
> Reading symbols from /usr/sbin/glusterd...Reading symbols from<br>
> /usr/lib/debug/usr/sbin/glusterfsd...done.<br>
> done.<br>
> (gdb) run<br>
> Starting program: /usr/sbin/glusterd --debug -p /var/run/glusterd.pid<br>
> --volfile=/etc/glusterfs/glusterd.vol<br>
> [Thread debugging using libthread_db enabled]<br>
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".<br>
> [2012-07-01 14:47:13.420700] I [glusterfsd.c:1493:main]<br>
> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.2.7<br>
> [2012-07-01 14:47:13.420888] D<br>
> [glusterfsd.c:1235:glusterfs_pidfile_update] 0-glusterfsd: pidfile<br>
> /var/run/glusterd.pid updated with pid 2511<br>
> [New Thread 0x7ffff6194700 (LWP 2514)]<br>
> [2012-07-01 14:47:13.422436] D [glusterfsd.c:374:get_volfp]<br>
> 0-glusterfsd: loading volume file /etc/glusterfs/glusterd.vol<br>
> [2012-07-01 14:47:13.459256] D [xlator.c:1302:xlator_dynload] 0-xlator:<br>
> dlsym(reconfigure) on /usr/lib/glusterfs/3.2.7/xlator/mgmt/glusterd.so:<br>
> undefined symbol: reconfigure -- neglecting<br>
> [2012-07-01 14:47:13.459350] D [xlator.c:1308:xlator_dynload] 0-xlator:<br>
> dlsym(validate_options) on<br>
> /usr/lib/glusterfs/3.2.7/xlator/mgmt/glusterd.so: undefined symbol:<br>
> validate_options -- neglecting<br>
> [2012-07-01 14:47:13.459561] I [glusterd.c:550:init] 0-management: Using<br>
> /etc/glusterd as working directory<br>
> [2012-07-01 14:47:13.459641] D<br>
> [glusterd.c:242:glusterd_rpcsvc_options_build] 0-: listen-backlog value: 128<br>
> [2012-07-01 14:47:13.460080] D [rpcsvc.c:1771:rpcsvc_init]<br>
> 0-rpc-service: RPC service inited.<br>
> [2012-07-01 14:47:13.460136] D [rpcsvc.c:1568:rpcsvc_program_register]<br>
> 0-rpc-service: New program registered: GF-DUMP, Num: 123451501, Ver: 1,<br>
> Port: 0<br>
> [2012-07-01 14:47:13.460223] D [rpc-transport.c:673:rpc_transport_load]<br>
> 0-rpc-transport: attempt to load file<br>
> /usr/lib/glusterfs/3.2.7/rpc-transport/socket.so<br>
> [2012-07-01 14:47:13.466313] D<br>
> [rpc-transport.c:97:__volume_option_value_validate] 0-socket.management:<br>
> no range check required for 'option transport.socket.listen-backlog 128'<br>
> [2012-07-01 14:47:13.466582] D<br>
> [rpc-transport.c:97:__volume_option_value_validate] 0-socket.management:<br>
> no range check required for 'option transport.socket.keepalive-interval 2'<br>
> [2012-07-01 14:47:13.466777] D<br>
> [rpc-transport.c:97:__volume_option_value_validate] 0-socket.management:<br>
> no range check required for 'option transport.socket.keepalive-time 10'<br>
> [2012-07-01 14:47:13.466976] D [name.c:552:server_fill_address_family]<br>
> 0-socket.management: option address-family not specified, defaulting to<br>
> inet/inet6<br>
> [2012-07-01 14:47:13.467371] D [rpc-transport.c:673:rpc_transport_load]<br>
> 0-rpc-transport: attempt to load file<br>
> /usr/lib/glusterfs/3.2.7/rpc-transport/rdma.so<br>
> [2012-07-01 14:47:13.475817] C [rdma.c:3934:rdma_init]<br>
> 0-rpc-transport/rdma: Failed to get IB devices<br>
> [2012-07-01 14:47:13.476685] E [rdma.c:4813:init] 0-rdma.management:<br>
> Failed to initialize IB Device<br>
> [2012-07-01 14:47:13.477039] E [rpc-transport.c:742:rpc_transport_load]<br>
> 0-rpc-transport: 'rdma' initialization failed<br>
> [2012-07-01 14:47:13.477339] W [rpcsvc.c:1288:rpcsvc_transport_create]<br>
> 0-rpc-service: cannot create listener, initing the transport failed<br>
> [2012-07-01 14:47:13.477721] D [rpcsvc.c:1568:rpcsvc_program_register]<br>
> 0-rpc-service: New program registered: GlusterD0.0.1, Num: 1298433, Ver:<br>
> 1, Port: 0<br>
> [2012-07-01 14:47:13.478053] D [rpcsvc.c:1568:rpcsvc_program_register]<br>
> 0-rpc-service: New program registered: GlusterD svc cli, Num: 1238463,<br>
> Ver: 1, Port: 0<br>
> [2012-07-01 14:47:13.478378] D [rpcsvc.c:1568:rpcsvc_program_register]<br>
> 0-rpc-service: New program registered: GlusterD svc mgmt, Num: 1238433,<br>
> Ver: 1, Port: 0<br>
> [2012-07-01 14:47:13.478742] D [rpcsvc.c:1568:rpcsvc_program_register]<br>
> 0-rpc-service: New program registered: Gluster Portmap, Num: 34123456,<br>
> Ver: 1, Port: 0<br>
> [2012-07-01 14:47:13.479008] D [rpcsvc.c:1568:rpcsvc_program_register]<br>
> 0-rpc-service: New program registered: GlusterFS Handshake, Num:<br>
> 14398633, Ver: 1, Port: 0<br>
> [2012-07-01 14:47:13.479308] D<br>
> [glusterd-utils.c:3136:glusterd_sm_tr_log_init] 0-: returning 0<br>
> [2012-07-01 14:47:13.479654] D<br>
> [glusterd-store.c:1134:glusterd_store_handle_new] 0-: Returning 0<br>
> [2012-07-01 14:47:13.479960] D<br>
> [glusterd-store.c:1155:glusterd_store_handle_retrieve] 0-: Returning 0<br>
> [2012-07-01 14:47:13.480298] D<br>
> [glusterd-store.c:1038:glusterd_store_retrieve_value] 0-: key UUID read<br>
> [2012-07-01 14:47:13.480589] D<br>
> [glusterd-store.c:1041:glusterd_store_retrieve_value] 0-: key UUID found<br>
> [2012-07-01 14:47:13.480898] D<br>
> [glusterd-store.c:1272:glusterd_retrieve_uuid] 0-: Returning 0<br>
> [2012-07-01 14:47:13.481186] I [glusterd.c:88:glusterd_uuid_init]<br>
> 0-glusterd: retrieved UUID: 46aa3f36-9c98-4668-aff4-7234ef2b217e<br>
> [2012-07-01 14:47:13.534063] D<br>
> [glusterd.c:302:glusterd_check_gsync_present] 0-: Returning 0<br>
> [2012-07-01 14:47:13.534159] D<br>
> [glusterd.c:361:glusterd_crt_georep_folders] 0-: Returning 0<br>
> [2012-07-01 14:47:14.678812] D<br>
> [glusterd-store.c:1914:glusterd_store_retrieve_volumes] 0-: Returning with 0<br>
> [2012-07-01 14:47:14.678939] D<br>
> [glusterd-store.c:2262:glusterd_store_retrieve_peers] 0-: Returning with 0<br>
> [2012-07-01 14:47:14.678967] D<br>
> [glusterd-store.c:2292:glusterd_resolve_all_bricks] 0-: Returning with 0<br>
> [2012-07-01 14:47:14.678994] D [glusterd-store.c:2319:glusterd_restore]<br>
> 0-: Returning 0<br>
> Given volfile:<br>
> +------------------------------------------------------------------------------+<br>
> 1: volume management<br>
> 2: type mgmt/glusterd<br>
> 3: option working-directory /etc/glusterd<br>
> 4: option transport-type socket,rdma<br>
> 5: option transport.socket.keepalive-time 10<br>
> 6: option transport.socket.keepalive-interval 2<br>
> 7: end-volume<br>
> 8:<br>
><br>
> +------------------------------------------------------------------------------+<br>
> [2012-07-01 14:47:18.301475] D<br>
> [glusterd-op-sm.c:8544:glusterd_op_set_cli_op] 0-: Returning 0<br>
> [2012-07-01 14:47:18.301537] I<br>
> [glusterd-handler.c:900:glusterd_handle_create_volume] 0-glusterd:<br>
> Received create volume req<br>
> [2012-07-01 14:47:18.301607] D<br>
> [glusterd-utils.c:493:glusterd_check_volume_exists] 0-: Volume wheezy<br>
> does not exist.stat failed with errno : 2 on path: /etc/glusterd/vols/wheezy<br>
> [2012-07-01 14:47:18.301687] D<br>
> [glusterd-utils.c:630:glusterd_brickinfo_new] 0-: Returning 0<br>
> [2012-07-01 14:47:18.301700] D<br>
> [glusterd-utils.c:687:glusterd_brickinfo_from_brick] 0-: Returning 0<br>
> [2012-07-01 14:47:18.302509] D<br>
> [glusterd-utils.c:2755:glusterd_friend_find_by_hostname] 0-glusterd:<br>
> Unable to find friend: wheezy<br>
> [2012-07-01 14:47:18.302631] D<br>
> [glusterd-utils.c:211:glusterd_is_local_addr] 0-glusterd: wheezy is local<br>
> [2012-07-01 14:47:18.302652] D<br>
> [glusterd-utils.c:2789:glusterd_hostname_to_uuid] 0-: returning 0<br>
> [2012-07-01 14:47:18.302661] D<br>
> [glusterd-utils.c:642:glusterd_resolve_brick] 0-: Returning 0<br>
> [2012-07-01 14:47:18.302673] D<br>
> [glusterd-utils.c:2927:glusterd_new_brick_validate] 0-: returning 0<br>
> [2012-07-01 14:47:18.302682] D<br>
> [glusterd-utils.c:760:glusterd_volume_brickinfo_get] 0-: Returning -1<br>
> [2012-07-01 14:47:18.302699] I [glusterd-utils.c:243:glusterd_lock]<br>
> 0-glusterd: Cluster lock held by 46aa3f36-9c98-4668-aff4-7234ef2b217e<br>
> [2012-07-01 14:47:18.302709] I<br>
> [glusterd-handler.c:420:glusterd_op_txn_begin] 0-glusterd: Acquired<br>
> local lock<br>
> [2012-07-01 14:47:18.302722] D<br>
> [glusterd-op-sm.c:8393:glusterd_op_sm_inject_event] 0-glusterd:<br>
> Enqueuing event: 'GD_OP_EVENT_START_LOCK'<br>
> [2012-07-01 14:47:18.302731] D<br>
> [glusterd-handler.c:424:glusterd_op_txn_begin] 0-glusterd: Returning 0<br>
> [2012-07-01 14:47:18.302756] D<br>
> [glusterd-utils.c:577:glusterd_volume_brickinfos_delete] 0-: Returning 0<br>
> [2012-07-01 14:47:18.302769] D [glusterd-op-sm.c:8449:glusterd_op_sm]<br>
> 0-: Dequeued event of type: 'GD_OP_EVENT_START_LOCK'<br>
> [2012-07-01 14:47:18.302779] D<br>
> [glusterd-op-sm.c:8393:glusterd_op_sm_inject_event] 0-glusterd:<br>
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'<br>
> [2012-07-01 14:47:18.302787] D<br>
> [glusterd-op-sm.c:180:glusterd_op_sm_inject_all_acc] 0-: Returning 0<br>
> [2012-07-01 14:47:18.302797] D<br>
> [glusterd-op-sm.c:6462:glusterd_op_ac_send_lock] 0-: Returning with 0<br>
> [2012-07-01 14:47:18.302806] D<br>
> [glusterd-utils.c:3182:glusterd_sm_tr_log_transition_add] 0-glusterd:<br>
> Transitioning from 'Default' to 'Lock sent' due to event<br>
> 'GD_OP_EVENT_START_LOCK'<br>
> [2012-07-01 14:47:18.302815] D<br>
> [glusterd-utils.c:3184:glusterd_sm_tr_log_transition_add] 0-: returning 0<br>
> [2012-07-01 14:47:18.302823] D [glusterd-op-sm.c:8449:glusterd_op_sm]<br>
> 0-: Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'<br>
> [2012-07-01 14:47:18.302849] D<br>
> [glusterd-utils.c:493:glusterd_check_volume_exists] 0-: Volume wheezy<br>
> does not exist.stat failed with errno : 2 on path: /etc/glusterd/vols/wheezy<br>
> [2012-07-01 14:47:18.302867] D<br>
> [glusterd-utils.c:630:glusterd_brickinfo_new] 0-: Returning 0<br>
> [2012-07-01 14:47:18.302878] D<br>
> [glusterd-utils.c:687:glusterd_brickinfo_from_brick] 0-: Returning 0<br>
> [2012-07-01 14:47:18.302918] D<br>
> [glusterd-utils.c:2755:glusterd_friend_find_by_hostname] 0-glusterd:<br>
> Unable to find friend: wheezy<br>
> [2012-07-01 14:47:18.302955] D<br>
> [glusterd-utils.c:211:glusterd_is_local_addr] 0-glusterd: wheezy is local<br>
> [2012-07-01 14:47:18.302965] D<br>
> [glusterd-utils.c:2789:glusterd_hostname_to_uuid] 0-: returning 0<br>
> [2012-07-01 14:47:18.302973] D<br>
> [glusterd-utils.c:642:glusterd_resolve_brick] 0-: Returning 0<br>
> [2012-07-01 14:47:18.302984] D<br>
> [glusterd-utils.c:3013:glusterd_brick_create_path] 0-: returning 0<br>
> [2012-07-01 14:47:18.302993] D<br>
> [glusterd-op-sm.c:386:glusterd_op_stage_create_volume] 0-: Returning 0<br>
> [2012-07-01 14:47:18.303001] D<br>
> [glusterd-op-sm.c:7584:glusterd_op_stage_validate] 0-: Returning 0<br>
> [2012-07-01 14:47:18.303014] I<br>
> [glusterd-op-sm.c:6737:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op<br>
> req to 0 peers<br>
> [2012-07-01 14:47:18.303027] D<br>
> [glusterd-op-sm.c:8393:glusterd_op_sm_inject_event] 0-glusterd:<br>
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'<br>
> [2012-07-01 14:47:18.303039] D<br>
> [glusterd-op-sm.c:180:glusterd_op_sm_inject_all_acc] 0-: Returning 0<br>
> [2012-07-01 14:47:18.303047] D<br>
> [glusterd-op-sm.c:6742:glusterd_op_ac_send_stage_op] 0-: Returning with 0<br>
> [2012-07-01 14:47:18.303055] D<br>
> [glusterd-utils.c:3182:glusterd_sm_tr_log_transition_add] 0-glusterd:<br>
> Transitioning from 'Lock sent' to 'Stage op sent' due to event<br>
> 'GD_OP_EVENT_ALL_ACC'<br>
> [2012-07-01 14:47:18.303064] D<br>
> [glusterd-utils.c:3184:glusterd_sm_tr_log_transition_add] 0-: returning 0<br>
> [2012-07-01 14:47:18.303077] D [glusterd-op-sm.c:8449:glusterd_op_sm]<br>
> 0-: Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'<br>
> [2012-07-01 14:47:18.303099] D<br>
> [glusterd-op-sm.c:8092:glusterd_op_bricks_select] 0-: Returning 0<br>
> [2012-07-01 14:47:18.303112] D<br>
> [glusterd-rpc-ops.c:1903:glusterd3_1_brick_op] 0-glusterd: Sent op req<br>
> to 0 bricks<br>
> [2012-07-01 14:47:18.303120] D<br>
> [glusterd-rpc-ops.c:1911:glusterd3_1_brick_op] 0-glusterd: Returning 0<br>
> [2012-07-01 14:47:18.303129] D<br>
> [glusterd-op-sm.c:8393:glusterd_op_sm_inject_event] 0-glusterd:<br>
> Enqueuing event: 'GD_OP_EVENT_ALL_ACK'<br>
> [2012-07-01 14:47:18.303137] D<br>
> [glusterd-op-sm.c:8007:glusterd_op_ac_send_brick_op] 0-: Returning with 0<br>
> [2012-07-01 14:47:18.303145] D<br>
> [glusterd-utils.c:3182:glusterd_sm_tr_log_transition_add] 0-glusterd:<br>
> Transitioning from 'Stage op sent' to 'Brick op sent' due to event<br>
> 'GD_OP_EVENT_ALL_ACC'<br>
> [2012-07-01 14:47:18.303154] D<br>
> [glusterd-utils.c:3184:glusterd_sm_tr_log_transition_add] 0-: returning 0<br>
> [2012-07-01 14:47:18.303161] D [glusterd-op-sm.c:8449:glusterd_op_sm]<br>
> 0-: Dequeued event of type: 'GD_OP_EVENT_ALL_ACK'<br>
> [2012-07-01 14:47:18.303179] D<br>
> [glusterd-utils.c:538:glusterd_volinfo_new] 0-: Returning 0<br>
> [2012-07-01 14:47:18.303198] D<br>
> [glusterd-utils.c:630:glusterd_brickinfo_new] 0-: Returning 0<br>
> [2012-07-01 14:47:18.303208] D<br>
> [glusterd-utils.c:687:glusterd_brickinfo_from_brick] 0-: Returning 0<br>
> [2012-07-01 14:47:18.303244] D<br>
> [glusterd-utils.c:2755:glusterd_friend_find_by_hostname] 0-glusterd:<br>
> Unable to find friend: wheezy<br>
> [2012-07-01 14:47:18.303278] D<br>
> [glusterd-utils.c:211:glusterd_is_local_addr] 0-glusterd: wheezy is local<br>
> [2012-07-01 14:47:18.303289] D<br>
> [glusterd-utils.c:2789:glusterd_hostname_to_uuid] 0-: returning 0<br>
> [2012-07-01 14:47:18.303297] D<br>
> [glusterd-utils.c:642:glusterd_resolve_brick] 0-: Returning 0<br>
> [2012-07-01 14:47:18.303382] D<br>
> [glusterd-store.c:608:glusterd_store_create_volume_dir] 0-: Returning with 0<br>
> [2012-07-01 14:47:18.303424] D<br>
> [glusterd-store.c:1134:glusterd_store_handle_new] 0-: Returning 0<br>
> [2012-07-01 14:47:18.303448] D<br>
> [glusterd-store.c:1134:glusterd_store_handle_new] 0-: Returning 0<br>
> [2012-07-01 14:47:18.303492] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.303561] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.303581] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.303596] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.303610] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.303625] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.303643] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.303651] D<br>
> [glusterd-store.c:632:glusterd_store_volinfo_write] 0-: Returning 0<br>
> [2012-07-01 14:47:18.303674] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.303733] D<br>
> [glusterd-store.c:1134:glusterd_store_handle_new] 0-: Returning 0<br>
> [2012-07-01 14:47:18.303771] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.303788] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.303803] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.303817] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.303825] D<br>
> [glusterd-store.c:292:glusterd_store_brickinfo_write] 0-: Returning 0<br>
> [2012-07-01 14:47:18.304055] D<br>
> [glusterd-store.c:319:glusterd_store_perform_brick_store] 0-: Returning 0<br>
> [2012-07-01 14:47:18.304075] D<br>
> [glusterd-store.c:349:glusterd_store_brickinfo] 0-: Returning with 0<br>
> [2012-07-01 14:47:18.304086] D<br>
> [glusterd-store.c:710:glusterd_store_brickinfos] 0-: Returning 0<br>
> [2012-07-01 14:47:18.304153] D<br>
> [glusterd-store.c:808:glusterd_store_perform_volume_store] 0-: Returning 0<br>
> [2012-07-01 14:47:18.304201] D<br>
> [glusterd-store.c:1089:glusterd_store_save_value] 0-: returning: 0<br>
> [2012-07-01 14:47:18.304211] D<br>
> [glusterd-store.c:749:glusterd_store_rbstate_write] 0-management:<br>
> Returning 0<br>
> [2012-07-01 14:47:18.304254] D<br>
> [glusterd-store.c:777:glusterd_store_perform_rbstate_store] 0-: Returning 0<br>
> [2012-07-01 14:47:18.308029] D<br>
> [glusterd-utils.c:1348:glusterd_volume_compute_cksum] 0-: Returning with 0<br>
> [2012-07-01 14:47:18.308124] D<br>
> [glusterd-store.c:860:glusterd_store_volinfo] 0-: Returning 0<br>
> [2012-07-01 14:47:18.308180] D<br>
> [glusterd-volgen.c:2342:generate_brick_volfiles] 0-: Found a brick -<br>
> wheezy:/tmp<br>
> [2012-07-01 14:47:18.308263] D<br>
> [glusterd-volgen.c:1311:server_check_marker_off] 0-: Returning 0<br>
> [2012-07-01 14:47:18.308409] D<br>
> [glusterd-volgen.c:2353:generate_brick_volfiles] 0-: Returning 0<br>
> [2012-07-01 14:47:18.312437] D<br>
> [glusterd-utils.c:1348:glusterd_volume_compute_cksum] 0-: Returning with 0<br>
> [2012-07-01 14:47:18.312526] D<br>
> [glusterd-op-sm.c:7664:glusterd_op_commit_perform] 0-: Returning 0<br>
> [2012-07-01 14:47:18.312557] I<br>
> [glusterd-op-sm.c:6854:glusterd_op_ac_send_commit_op] 0-glusterd: Sent<br>
> op req to 0 peers<br>
> [2012-07-01 14:47:18.312589] D<br>
> [glusterd-op-sm.c:8393:glusterd_op_sm_inject_event] 0-glusterd:<br>
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'<br>
> [2012-07-01 14:47:18.312611] D<br>
> [glusterd-op-sm.c:180:glusterd_op_sm_inject_all_acc] 0-: Returning 0<br>
> [2012-07-01 14:47:18.312629] D<br>
> [glusterd-op-sm.c:6875:glusterd_op_ac_send_commit_op] 0-: Returning with 0<br>
> [2012-07-01 14:47:18.312655] D<br>
> [glusterd-utils.c:3182:glusterd_sm_tr_log_transition_add] 0-glusterd:<br>
> Transitioning from 'Brick op sent' to 'Commit op sent' due to event<br>
> 'GD_OP_EVENT_ALL_ACK'<br>
> [2012-07-01 14:47:18.312679] D<br>
> [glusterd-utils.c:3184:glusterd_sm_tr_log_transition_add] 0-: returning 0<br>
> [2012-07-01 14:47:18.312711] D [glusterd-op-sm.c:8449:glusterd_op_sm]<br>
> 0-: Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'<br>
> [2012-07-01 14:47:18.312733] D<br>
> [glusterd-op-sm.c:8393:glusterd_op_sm_inject_event] 0-glusterd:<br>
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'<br>
> [2012-07-01 14:47:18.312752] D<br>
> [glusterd-op-sm.c:180:glusterd_op_sm_inject_all_acc] 0-: Returning 0<br>
> [2012-07-01 14:47:18.312770] D<br>
> [glusterd-op-sm.c:6509:glusterd_op_ac_send_unlock] 0-: Returning with 0<br>
> [2012-07-01 14:47:18.312788] D<br>
> [glusterd-utils.c:3182:glusterd_sm_tr_log_transition_add] 0-glusterd:<br>
> Transitioning from 'Commit op sent' to 'Unlock sent' due to event<br>
> 'GD_OP_EVENT_ALL_ACC'<br>
> [2012-07-01 14:47:18.312809] D<br>
> [glusterd-utils.c:3184:glusterd_sm_tr_log_transition_add] 0-: returning 0<br>
> [2012-07-01 14:47:18.312827] D [glusterd-op-sm.c:8449:glusterd_op_sm]<br>
> 0-: Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'<br>
> [2012-07-01 14:47:18.312848] I<br>
> [glusterd-op-sm.c:7250:glusterd_op_txn_complete] 0-glusterd: Cleared<br>
> local lock<br>
><br>
> Program received signal SIGSEGV, Segmentation fault.<br>
> 0x00007ffff7021bf1 in ?? () from /lib/x86_64-linux-gnu/libc.so.6<br>
> (gdb) bt<br>
> #0 0x00007ffff7021bf1 in ?? () from /lib/x86_64-linux-gnu/libc.so.6<br>
> #1 0x00007ffff70a4357 in xdr_string () from /lib/x86_64-linux-gnu/libc.so.6<br>
> #2 0x00007ffff7756061 in xdr_gf1_cli_create_vol_rsp<br>
> (xdrs=xdrs@entry=0x7fffffffd150, objp=objp@entry=0x7fffffffd2b0) at<br>
> cli1-xdr.c:279<br>
> #3 0x00007ffff796ff11 in xdr_serialize_generic (outmsg=...,<br>
> res=0x7fffffffd2b0, proc=0x7ffff7756010 <xdr_gf1_cli_create_vol_rsp>) at<br>
> rpc-common.c:36<br>
> #4 0x00007ffff5751906 in glusterd_serialize_reply<br>
> (req=req@entry=0x7ffff7f37024, arg=0x7fffffffd2b0, sfunc=0x7ffff7757250<br>
> <gf_xdr_serialize_cli_create_vol_rsp>,<br>
> outmsg=outmsg@entry=0x7fffffffd1e0) at glusterd-utils.c:402<br>
> #5 0x00007ffff5751a25 in glusterd_submit_reply<br>
> (req=req@entry=0x7ffff7f37024, arg=<optimized out>,<br>
> payload=payload@entry=0x0, payloadcount=payloadcount@entry=0,<br>
> iobref=0x5555557916f0, iobref@entry=0x0, sfunc=<optimized out>)<br>
> at glusterd-utils.c:444<br>
> #6 0x00007ffff576027f in glusterd_op_send_cli_response<br>
> (op=op@entry=GD_OP_CREATE_VOLUME, op_ret=op_ret@entry=0,<br>
> op_errno=op_errno@entry=0, req=req@entry=0x7ffff7f37024,<br>
> op_ctx=op_ctx@entry=0x555555789f40, op_errstr=op_errstr@entry=0x0)<br>
> at glusterd-rpc-ops.c:414<br>
> #7 0x00007ffff574f84c in glusterd_op_txn_complete () at<br>
> glusterd-op-sm.c:7278<br>
> #8 0x00007ffff574fb39 in glusterd_op_ac_unlocked_all (event=<optimized<br>
> out>, ctx=<optimized out>) at glusterd-op-sm.c:7304<br>
> #9 0x00007ffff57489d2 in glusterd_op_sm () at glusterd-op-sm.c:8458<br>
> #10 0x00007ffff5730f60 in glusterd_handle_create_volume<br>
> (req=0x7ffff7f37024) at glusterd-handler.c:1047<br>
> #11 0x00007ffff79671ff in rpcsvc_handle_rpc_call (svc=0x555555789c50,<br>
> trans=trans@entry=0x55555578e370, msg=msg@entry=0x555555784270) at<br>
> rpcsvc.c:480<br>
> #12 0x00007ffff796778b in rpcsvc_notify (trans=0x55555578e370,<br>
> mydata=<optimized out>, event=<optimized out>, data=0x555555784270) at<br>
> rpcsvc.c:576<br>
> #13 0x00007ffff796af13 in rpc_transport_notify<br>
> (this=this@entry=0x55555578e370,<br>
> event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=<optimized out>) at<br>
> rpc-transport.c:919<br>
> #14 0x00007ffff54fb224 in socket_event_poll_in<br>
> (this=this@entry=0x55555578e370) at socket.c:1647<br>
> #15 0x00007ffff54fb565 in socket_event_handler (fd=<optimized out>,<br>
> idx=<optimized out>, data=0x55555578e370, poll_in=1, poll_out=0,<br>
> poll_err=0) at socket.c:1762<br>
> #16 0x00007ffff7bb64c8 in event_dispatch_epoll_handler (i=<optimized<br>
> out>, events=0x55555578d750, event_pool=0x5555557823a0) at event.c:794<br>
> #17 event_dispatch_epoll (event_pool=0x5555557823a0) at event.c:856<br>
> #18 0x0000555555558a6b in main (argc=5, argv=0x7fffffffe648) at<br>
> glusterfsd.c:1509<br>
> (gdb)<br>
><br>
> -- System Information:<br>
> Debian Release: wheezy/sid<br>
> APT prefers testing<br>
> APT policy: (500, 'testing'), (400, 'unstable')<br>
> Architecture: amd64 (x86_64)<br>
><br>
> Kernel: Linux 3.2.0-2-amd64 (SMP w/3 CPU cores)<br>
> Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)<br>
> Shell: /bin/sh linked to /bin/dash<br>
><br>
> Versions of packages glusterfs-server depends on:<br>
> ii glusterfs-client 3.2.7-1<br>
> ii glusterfs-common 3.2.7-1<br>
> ii libc6 2.13-33<br>
> ii libncurses5 5.9-9<br>
> ii libreadline6 6.2-8<br>
> ii libtinfo5 5.9-9<br>
> ii lsb-base 4.1+Debian7<br>
><br>
> glusterfs-server recommends no packages.<br>
><br>
> Versions of packages glusterfs-server suggests:<br>
> ii glusterfs-examples 3.2.7-1<br>
> ii nfs-common 1:1.2.6-2<br>
><br>
> -- Configuration Files:<br>
> /etc/glusterfs/glusterd.vol unchanged<br>
><br>
> -- no debconf information<br>
><br>
><br>
><br>
><br>
</div></div>> _______________________________________________<br>
> Gluster-devel mailing list<br>
> <a href="mailto:Gluster-devel@nongnu.org" target="_blank">Gluster-devel@nongnu.org</a><br>
> <a href="https://lists.nongnu.org/mailman/listinfo/gluster-devel" target="_blank">https://lists.nongnu.org/mailman/listinfo/gluster-devel</a><br>
><br>
<br>
</blockquote></div><br></div></div></div>
</div><br>