<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Tomoaki,<br>
<br>
You seem to have hit a bug in the peer probe operation. It will be
tracked as<br>
bug 3432: see <a class="moz-txt-link-freetext" href="http://bugs.gluster.com/show_bug.cgi?id=3432">http://bugs.gluster.com/show_bug.cgi?id=3432</a><br>
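<br>
In the meantime, a workaround that usually clears the 'Peer Rejected'
state (a sketch, assuming the 3.1.x layout under /etc/glusterd) is to
wipe the stale volume configuration on the rejected node and probe it
again:
<br>
<br>
# on the rejected peer (foo-3-private)
<br>
service glusterd stop
<br>
rm -rf /etc/glusterd/vols/*   # drop the volume config whose cksum mismatches
<br>
service glusterd start
<br>
<br>
# then, from foo-1-private
<br>
gluster peer probe foo-3-private
<br>
gluster peer status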
<br>
thanks,<br>
kp<br>
<br>
On 08/17/2011 12:28 PM, Tomoaki Sato wrote:
<blockquote cite="mid:4E4B66A8.1060707@valinux.co.jp" type="cite">kp,
<br>
<br>
Please find the attached tar files.
<br>
<br>
[root@vhead-010 ~]# ssh foo-1-private
<br>
[root@foo-1-private ~]# gluster volume create foo
foo-1-private:/mnt/brick
<br>
Creation of volume foo has been successful. Please start the
volume to access data.
<br>
[root@foo-1-private ~]# gluster volume start foo
<br>
Starting volume foo has been successful
<br>
[root@foo-1-private ~]# gluster peer probe foo-3-private
<br>
Probe successful
<br>
[root@foo-1-private ~]# gluster peer status
<br>
Number of Peers: 1
<br>
<br>
Hostname: foo-3-private
<br>
Uuid: 0c76c6ec-a7ea-405e-9d1d-8e100e155ec3
<br>
State: Peer Rejected (Connected)
<br>
[root@foo-1-private ~]# cd /etc/glusterd/vols/
<br>
[root@foo-1-private vols]# tar cf /tmp/foo-1-private.tar foo
<br>
[root@foo-1-private vols]# ssh foo-3-private
<br>
[root@foo-3-private ~]# cd /etc/glusterd/vols
<br>
[root@foo-3-private vols]# tar cf /tmp/foo-3-private.tar foo
<br>
[root@foo-3-private vols]#
<br>
<br>
Thanks,
<br>
Tomo
<br>
<br>
(2011/08/17 15:46), krish wrote:
<br>
<blockquote type="cite">Tomoaki,
<br>
<br>
Can you send the info file under
/etc/glusterd/vols/&lt;volname&gt;/ from the machine where the peer
probe
<br>
command was issued and from the 'peer' that is being rejected?
<br>
<br>
thanks,
<br>
kp
<br>
<br>
On 08/17/2011 12:10 PM, Tomoaki Sato wrote:
<br>
<blockquote type="cite">Mohit
<br>
<br>
(2011/08/17 13:53), Mohit Anchlia wrote:
<br>
<blockquote type="cite">Not sure. It could be because the new
node doesn't have the volume
<br>
configs. Can you try gluster volume sync to sync the configs
and see
<br>
if that helps?
<br>
</blockquote>
<br>
- at foo-1-private -
<br>
gluster> volume sync foo-3-private
<br>
please delete all the volumes before full sync
<br>
gluster>
<br>
--
<br>
<br>
- at foo-3-private -
<br>
gluster> volume sync foo foo-1-private
<br>
sync from localhost not allowed
<br>
gluster>
<br>
--
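<br>
<br>
(For reference: if I read the CLI right, the expected form is
'volume sync &lt;hostname&gt; [all|&lt;volname&gt;]', run on the node whose
config is stale and naming a peer that holds the good copy, e.g. on
foo-3-private:
<br>
<br>
gluster> volume sync foo-1-private foo
<br>
<br>
but it may still refuse while the peer is in the rejected state.)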
<br>
<br>
Question is "How should I add extra peers to existing file
systems ?".
<br>
extra peers = new nodes.
<br>
<br>
Could you tell me right instructions to gluster probe new
nodes after the volume starting ?
<br>
<br>
<blockquote type="cite">
<br>
Also, not sure why you are getting "Unable to find hostname:
foo-3-private"
<br>
<br>
</blockquote>
<br>
"Unable to find hostname: foo-3-private" was printed out on
both OK and NG cases.
<br>
"Cksums of volume foo differ. local cksum = 1403573944, remote
cksum = -1413994823" was printed out on NG case only.
<br>
<br>
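A quick way to see the mismatch directly (assuming 3.1.x keeps the
volume checksum beside the volume definition) is to compare these two
files on both nodes:
<br>
<br>
# run on foo-1-private and on foo-3-private, then compare
<br>
cat /etc/glusterd/vols/foo/cksum
<br>
md5sum /etc/glusterd/vols/foo/info   # any byte difference here changes the cksum
<br>
<br>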
'Peer in Cluster', OK case:
<br>
[2011-08-17 15:08:47.462646] I
[glusterd-handler.c:602:glusterd_handle_cli_probe] 0-glusterd:
Received CLI probe req foo-3-private 24007
<br>
[2011-08-17 15:08:47.466194] I
[glusterd-handler.c:391:glusterd_friend_find] 0-glusterd:
Unable to find hostname: foo-3-private
<br>
[2011-08-17 15:08:47.466224] I
[glusterd-handler.c:3120:glusterd_probe_begin] 0-glusterd:
Unable to find peerinfo for host: foo-3-private (24007)
<br>
[2011-08-17 15:08:47.469365] W
[rpc-transport.c:728:rpc_transport_load] 0-rpc-transport:
missing 'option transport-type'. defaulting to "socket"
<br>
[2011-08-17 15:08:47.473671] I
[glusterd-handler.c:3102:glusterd_friend_add] 0-glusterd:
connect returned 0
<br>
[2011-08-17 15:08:47.474167] I
[glusterd-handshake.c:317:glusterd_set_clnt_mgmt_program] 0-:
Using Program glusterd clnt mgmt, Num (1238433), Version (1)
<br>
[2011-08-17 15:08:47.474214] I
[glusterd-utils.c:2127:glusterd_friend_find_by_hostname]
0-glusterd: Friend foo-3-private found.. state: 0
<br>
[2011-08-17 15:08:47.483485] I
[glusterd-rpc-ops.c:364:glusterd3_1_probe_cbk] 0-glusterd:
Received probe resp from uuid:
464b3ea0-1b2b-4683-8209-72220dcb295f, host: foo-3-private
<br>
[2011-08-17 15:08:47.483516] I
[glusterd-handler.c:379:glusterd_friend_find] 0-glusterd:
Unable to find peer by uuid
<br>
[2011-08-17 15:08:47.483562] I
[glusterd-utils.c:2127:glusterd_friend_find_by_hostname]
0-glusterd: Friend foo-3-private found.. state: 0
<br>
[2011-08-17 15:08:47.483764] I
[glusterd-rpc-ops.c:409:glusterd3_1_probe_cbk] 0-glusterd:
Received resp to probe req
<br>
[2011-08-17 15:08:47.484040] I
[glusterd-rpc-ops.c:454:glusterd3_1_friend_add_cbk]
0-glusterd: Received ACC from uuid:
464b3ea0-1b2b-4683-8209-72220dcb295f, host: foo-3-private,
port: 0
<br>
[2011-08-17 15:08:47.484088] I
[glusterd-utils.c:2088:glusterd_friend_find_by_uuid]
0-glusterd: Friend found... state: Probe Sent to Peer
<br>
[2011-08-17 15:08:47.484153] I
[glusterd-handler.c:3293:glusterd_xfer_cli_probe_resp]
0-glusterd: Responded to CLI, ret: 0
<br>
[2011-08-17 15:08:47.484554] I
[glusterd-handler.c:2882:glusterd_handle_probe_query]
0-glusterd: Received probe from uuid:
464b3ea0-1b2b-4683-8209-72220dcb295f
<br>
[2011-08-17 15:08:47.484585] I
[glusterd-utils.c:2088:glusterd_friend_find_by_uuid]
0-glusterd: Friend found... state: Accepted peer request
<br>
[2011-08-17 15:08:47.484647] I
[glusterd-handler.c:2917:glusterd_handle_probe_query]
0-glusterd: Responded to 192.168.1.129, op_ret: 0, op_errno:
0, ret: 0
<br>
[2011-08-17 15:08:47.485499] I
[glusterd-handler.c:2614:glusterd_handle_incoming_friend_req]
0-glusterd: Received probe from uuid:
464b3ea0-1b2b-4683-8209-72220dcb295f
<br>
[2011-08-17 15:08:47.485536] I
[glusterd-utils.c:2088:glusterd_friend_find_by_uuid]
0-glusterd: Friend found... state: Accepted peer request
<br>
[2011-08-17 15:08:47.485590] I
[glusterd-handler.c:3270:glusterd_xfer_friend_add_resp]
0-glusterd: Responded to 192.168.1.129 (0), ret: 0
<br>
[2011-08-17 15:08:47.485713] I
[glusterd-sm.c:492:glusterd_ac_send_friend_update] 0-: Added
uuid: 464b3ea0-1b2b-4683-8209-72220dcb295f, host:
foo-3-private
<br>
[2011-08-17 15:08:47.486203] I
[glusterd-rpc-ops.c:636:glusterd3_1_friend_update_cbk]
0-glusterd: Received ACC from uuid:
<br>
[2011-08-17 15:08:47.486259] I
[glusterd-utils.c:2088:glusterd_friend_find_by_uuid]
0-glusterd: Friend found... state: Peer in Cluster
<br>
[2011-08-17 15:08:47.486284] I
[glusterd-handler.c:2761:glusterd_handle_friend_update]
0-glusterd: Received friend update from uuid:
464b3ea0-1b2b-4683-8209-72220dcb295f
<br>
[2011-08-17 15:08:47.486316] I
[glusterd-handler.c:2806:glusterd_handle_friend_update] 0-:
Received uuid: 4b5b0ecb-7d18-4ec4-90d9-0df2d392b63f,
hostname:192.168.1.129
<br>
[2011-08-17 15:08:47.486335] I
[glusterd-handler.c:2809:glusterd_handle_friend_update] 0-:
Received my uuid as Friend
<br>
<br>
<br>
'Peer Rejected', NG case:
<br>
[2011-08-17 15:10:19.21262] I
[glusterd-handler.c:602:glusterd_handle_cli_probe] 0-glusterd:
Received CLI probe req foo-3-private 24007
<br>
[2011-08-17 15:10:19.24605] I
[glusterd-handler.c:391:glusterd_friend_find] 0-glusterd:
Unable to find hostname: foo-3-private
<br>
[2011-08-17 15:10:19.24648] I
[glusterd-handler.c:3120:glusterd_probe_begin] 0-glusterd:
Unable to find peerinfo for host: foo-3-private (24007)
<br>
[2011-08-17 15:10:19.27736] W
[rpc-transport.c:728:rpc_transport_load] 0-rpc-transport:
missing 'option transport-type'. defaulting to "socket"
<br>
[2011-08-17 15:10:19.32034] I
[glusterd-handler.c:3102:glusterd_friend_add] 0-glusterd:
connect returned 0
<br>
[2011-08-17 15:10:19.32389] I
[glusterd-handshake.c:317:glusterd_set_clnt_mgmt_program] 0-:
Using Program glusterd clnt mgmt, Num (1238433), Version (1)
<br>
[2011-08-17 15:10:19.32426] I
[glusterd-utils.c:2127:glusterd_friend_find_by_hostname]
0-glusterd: Friend foo-3-private found.. state: 0
<br>
[2011-08-17 15:10:19.40671] I
[glusterd-rpc-ops.c:364:glusterd3_1_probe_cbk] 0-glusterd:
Received probe resp from uuid:
464b3ea0-1b2b-4683-8209-72220dcb295f, host: foo-3-private
<br>
[2011-08-17 15:10:19.40720] I
[glusterd-handler.c:379:glusterd_friend_find] 0-glusterd:
Unable to find peer by uuid
<br>
[2011-08-17 15:10:19.40748] I
[glusterd-utils.c:2127:glusterd_friend_find_by_hostname]
0-glusterd: Friend foo-3-private found.. state: 0
<br>
[2011-08-17 15:10:19.40983] I
[glusterd-rpc-ops.c:409:glusterd3_1_probe_cbk] 0-glusterd:
Received resp to probe req
<br>
[2011-08-17 15:10:19.42854] I
[rpc-clnt.c:696:rpc_clnt_handle_cbk] 0-rpc-clnt: recieved rpc
message (XID: 0x2a, Ver: 2, Program: 52743234, ProgVers: 1,
Proc: 1) from rpc-transport (management)
<br>
[2011-08-17 15:10:19.50762] I
[glusterd-rpc-ops.c:454:glusterd3_1_friend_add_cbk]
0-glusterd: Received ACC from uuid:
464b3ea0-1b2b-4683-8209-72220dcb295f, host: foo-3-private,
port: 0
<br>
[2011-08-17 15:10:19.50794] I
[glusterd-utils.c:2088:glusterd_friend_find_by_uuid]
0-glusterd: Friend found... state: Probe Sent to Peer
<br>
[2011-08-17 15:10:19.50851] I
[glusterd-handler.c:3293:glusterd_xfer_cli_probe_resp]
0-glusterd: Responded to CLI, ret: 0
<br>
[2011-08-17 15:10:19.51413] I
[glusterd-handler.c:2882:glusterd_handle_probe_query]
0-glusterd: Received probe from uuid:
464b3ea0-1b2b-4683-8209-72220dcb295f
<br>
[2011-08-17 15:10:19.51444] I
[glusterd-utils.c:2088:glusterd_friend_find_by_uuid]
0-glusterd: Friend found... state: Accepted peer request
<br>
[2011-08-17 15:10:19.51487] I
[glusterd-handler.c:2917:glusterd_handle_probe_query]
0-glusterd: Responded to 192.168.1.129, op_ret: 0, op_errno:
0, ret: 0
<br>
[2011-08-17 15:10:19.51853] I
[glusterd-handler.c:2614:glusterd_handle_incoming_friend_req]
0-glusterd: Received probe from uuid:
464b3ea0-1b2b-4683-8209-72220dcb295f
<br>
[2011-08-17 15:10:19.51885] I
[glusterd-utils.c:2088:glusterd_friend_find_by_uuid]
0-glusterd: Friend found... state: Accepted peer request
<br>
[2011-08-17 15:10:19.51930] E
[glusterd-utils.c:1407:glusterd_compare_friend_volume] 0-:
Cksums of volume foo differ. local cksum = 1403573944, remote
cksum = -1413994823
<br>
[2011-08-17 15:10:19.51975] I
[glusterd-handler.c:3270:glusterd_xfer_friend_add_resp]
0-glusterd: Responded to 192.168.1.129 (0), ret: 0
<br>
<br>
<br>
<blockquote type="cite">On Tue, Aug 16, 2011 at 8:18 PM,
Tomoaki Sato <a class="moz-txt-link-rfc2396E" href="mailto:tsato@valinux.co.jp">&lt;tsato@valinux.co.jp&gt;</a> wrote:
<br>
<blockquote type="cite">Mohit,
<br>
<br>
Let me say it again:
<br>
3.1.6-1 fails to 'peer probe' after 'start volume' in my
environment.
<br>
case-A) peer probe foo-3-private --> Peer in Cluster
<br>
<br>
&lt;delete all configuration files and reboot all
foo-X-private nodes&gt;
<br>
<br>
[root@foo-1-private ~]# gluster peer probe foo-3-private
<br>
Probe successful
<br>
[root@foo-1-private ~]# gluster peer status
<br>
Number of Peers: 1
<br>
<br>
Hostname: foo-3-private
<br>
Uuid: ef7d3c43-219a-4d13-a918-2639455cfbe7
<br>
State: Peer in Cluster (Connected)
<br>
<br>
case-B) create volume then peer probe foo-3-private -->
Peer in Cluster
<br>
<br>
&lt;delete all configuration files and reboot all
foo-X-private nodes&gt;
<br>
<br>
[root@foo-1-private ~]# gluster volume create foo
foo-1-private:/mnt/brick
<br>
Creation of volume foo has been successful. Please start the volume
to access data.
<br>
[root@foo-1-private ~]# gluster peer probe foo-3-private
<br>
Probe successful
<br>
[root@foo-1-private ~]# gluster peer status
<br>
Number of Peers: 1
<br>
<br>
Hostname: foo-3-private
<br>
Uuid: fe44c954-4679-4389-a0e6-4c1fd4569a02
<br>
State: Peer in Cluster (Connected)
<br>
<br>
case-C) start volume then peer probe foo-3-private -->
Peer Rejected
<br>
<br>
&lt;delete all configuration files and reboot all
foo-X-private nodes&gt;
<br>
<br>
[root@foo-1-private ~]# gluster volume create foo
foo-1-private:/mnt/brick
<br>
Creation of volume foo has been successful. Please start the volume
to access data.
<br>
[root@foo-1-private ~]# gluster volume start foo
<br>
Starting volume foo has been successful
<br>
[root@foo-1-private ~]# gluster peer probe foo-3-private
<br>
Probe successful
<br>
[root@foo-1-private ~]# gluster peer status
<br>
Number of Peers: 1
<br>
<br>
Hostname: foo-3-private
<br>
Uuid: bb6932e4-5bf0-4d34-872f-4a5fc1d0b6f8
<br>
State: Peer Rejected (Connected)
<br>
<br>
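So the pattern is: probing before 'volume start' (cases A and B)
works, and probing after it (case C) gets rejected. As a stopgap I
can probe every node first and only then start the volume:
<br>
<br>
gluster peer probe foo-2-private
<br>
gluster peer probe foo-3-private
<br>
gluster volume create foo foo-1-private:/mnt/brick
<br>
gluster volume start foo
<br>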
<br>
<blockquote type="cite">Can you for now put it in
/etc/hosts and test?
<br>
</blockquote>
<br>
None of the foo-X-private hosts has entries in /etc/hosts.
<br>
All of the nodes obtain IP addresses from a DHCP server and register
the IP address / hostname pairs with a DNS server.
<br>
<br>
<blockquote type="cite">
<br>
also, make sure you have same version of gluster running
on all the nodes.
<br>
</blockquote>
<br>
Since all three foo-X-private hosts are generated from a common VM
template, the same version of gluster runs on all the nodes.
<br>
<br>
<blockquote type="cite">
<br>
What's the result of gluster peer status on node 3?
<br>
</blockquote>
<br>
[root@foo-1-private ~]# ssh foo-3-private gluster peer
status
<br>
Number of Peers: 1
<br>
<br>
Hostname: 192.168.1.129
<br>
Uuid: 828bcc00-14d3-4505-8b35-d0ac6ca0730a
<br>
State: Peer Rejected (Connected)
<br>
[root@foo-1-private ~]#
<br>
<br>
<br>
Best,
<br>
<br>
<br>
(2011/08/17 0:53), Mohit Anchlia wrote:
<br>
<blockquote type="cite">
<br>
I see this in the logs:
<br>
<br>
[2011-08-16 11:57:05.642903] I
<br>
[glusterd-handler.c:391:glusterd_friend_find]
0-glusterd: Unable to
<br>
find hostname: foo-3-private
<br>
<br>
Can you for now put it in /etc/hosts and test?
<br>
<br>
also, make sure you have same version of gluster running
on all the nodes.
<br>
<br>
What's the result of gluster peer status on node 3?
<br>
<br>
On Mon, Aug 15, 2011 at 8:18 PM, Tomoaki
Sato <a class="moz-txt-link-rfc2396E" href="mailto:tsato@valinux.co.jp">&lt;tsato@valinux.co.jp&gt;</a> wrote:
<br>
<blockquote type="cite">
<br>
Mohit
<br>
<br>
I've tried the same test and reproduced the 'Peer Rejected'
status.
<br>
Please find the config files and log files in the attached
tarball.
<br>
<br>
<br>
[root@vhead-010 ~]# date
<br>
Tue Aug 16 11:55:15 JST 2011
<br>
[root@vhead-010 ~]# cat a.sh
<br>
#!/bin/bash
<br>
for i in foo-{1..3}-private
<br>
do
<br>
ssh ${i} service glusterd stop
<br>
ssh ${i} 'find /etc/glusterd -type f|xargs rm -f'
<br>
ssh ${i} rm -rf /etc/glusterd/vols/*
<br>
ssh ${i} service iptables stop
<br>
ssh ${i} cp /dev/null /var/log/glusterfs/nfs.log
<br>
ssh ${i} cp /dev/null
/var/log/glusterfs/bricks/mnt-brick.log
<br>
ssh ${i} cp /dev/null
/var/log/glusterfs/.cmd_log_history
<br>
ssh ${i} cp /dev/null
<br>
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log
<br>
ssh ${i} service glusterd start
<br>
ssh ${i} find /etc/glusterd
<br>
ssh ${i} service glusterd status
<br>
done
<br>
[root@vhead-010 ~]# bash a.sh
<br>
Stopping glusterd:[ OK ]
<br>
Flushing firewall rules: [ OK ]
<br>
Setting chains to policy ACCEPT: filter [ OK ]
<br>
Unloading iptables modules: [ OK ]
<br>
Starting glusterd:[ OK ]
<br>
/etc/glusterd
<br>
/etc/glusterd/glusterd.info
<br>
/etc/glusterd/nfs
<br>
/etc/glusterd/nfs/run
<br>
/etc/glusterd/peers
<br>
/etc/glusterd/vols
<br>
glusterd (pid 15617) is running...
<br>
Stopping glusterd:[ OK ]
<br>
Flushing firewall rules: [ OK ]
<br>
Setting chains to policy ACCEPT: filter [ OK ]
<br>
Unloading iptables modules: [ OK ]
<br>
Starting glusterd:[ OK ]
<br>
/etc/glusterd
<br>
/etc/glusterd/glusterd.info
<br>
/etc/glusterd/nfs
<br>
/etc/glusterd/nfs/run
<br>
/etc/glusterd/peers
<br>
/etc/glusterd/vols
<br>
glusterd (pid 15147) is running...
<br>
Stopping glusterd:[ OK ]
<br>
Flushing firewall rules: [ OK ]
<br>
Setting chains to policy ACCEPT: filter [ OK ]
<br>
Unloading iptables modules: [ OK ]
<br>
Starting glusterd:[ OK ]
<br>
/etc/glusterd
<br>
/etc/glusterd/glusterd.info
<br>
/etc/glusterd/nfs
<br>
/etc/glusterd/nfs/run
<br>
/etc/glusterd/peers
<br>
/etc/glusterd/vols
<br>
glusterd (pid 15177) is running...
<br>
[root@vhead-010 ~]# ssh foo-1-private
<br>
Last login: Tue Aug 16 09:51:49 2011 from
dlp.local.valinux.co.jp
<br>
[root@localhost ~]# gluster peer probe foo-2-private
<br>
Probe successful
<br>
[root@localhost ~]# gluster peer status
<br>
Number of Peers: 1
<br>
<br>
Hostname: foo-2-private
<br>
Uuid: 20b73d9a-ede0-454f-9fbb-b0eee9ce26a3
<br>
State: Peer in Cluster (Connected)
<br>
[root@localhost ~]# gluster volume create foo
foo-1-private:/mnt/brick
<br>
Creation of volume foo has been successful. Please start the volume
to access data.
<br>
[root@localhost ~]# gluster volume start foo
<br>
Starting volume foo has been successful
<br>
[root@localhost ~]# gluster volume add-brick foo
foo-2-private:/mnt/brick
<br>
Add Brick successful
<br>
[root@localhost ~]# gluster peer probe foo-3-private
<br>
Probe successful
<br>
[root@localhost ~]# gluster peer status
<br>
Number of Peers: 2
<br>
<br>
Hostname: foo-2-private
<br>
Uuid: 20b73d9a-ede0-454f-9fbb-b0eee9ce26a3
<br>
State: Peer in Cluster (Connected)
<br>
<br>
Hostname: foo-3-private
<br>
Uuid: 7587ae34-9209-484a-9576-3939e061720c
<br>
State: Peer Rejected (Connected)
<br>
[root@localhost ~]# exit
<br>
logout
<br>
Connection to foo-1-private closed.
<br>
[root@vhead-010 ~]# find foo_log_and_conf
<br>
foo_log_and_conf
<br>
foo_log_and_conf/foo-2-private
<br>
foo_log_and_conf/foo-2-private/glusterd
<br>
foo_log_and_conf/foo-2-private/glusterd/glusterd.info
<br>
foo_log_and_conf/foo-2-private/glusterd/nfs
<br>
foo_log_and_conf/foo-2-private/glusterd/nfs/nfs-server.vol
<br>
foo_log_and_conf/foo-2-private/glusterd/nfs/run
<br>
foo_log_and_conf/foo-2-private/glusterd/nfs/run/nfs.pid
<br>
foo_log_and_conf/foo-2-private/glusterd/peers
<br>
<br>
foo_log_and_conf/foo-2-private/glusterd/peers/461f6e21-90c4-4b6c-bda8-7b99bacb2722
<br>
foo_log_and_conf/foo-2-private/glusterd/vols
<br>
foo_log_and_conf/foo-2-private/glusterd/vols/foo
<br>
foo_log_and_conf/foo-2-private/glusterd/vols/foo/info
<br>
foo_log_and_conf/foo-2-private/glusterd/vols/foo/bricks
<br>
<br>
foo_log_and_conf/foo-2-private/glusterd/vols/foo/bricks/foo-2-private:-mnt-brick
<br>
<br>
foo_log_and_conf/foo-2-private/glusterd/vols/foo/bricks/foo-1-private:-mnt-brick
<br>
<br>
foo_log_and_conf/foo-2-private/glusterd/vols/foo/foo.foo-2-private.mnt-brick.vol
<br>
foo_log_and_conf/foo-2-private/glusterd/vols/foo/cksum
<br>
foo_log_and_conf/foo-2-private/glusterd/vols/foo/run
<br>
<br>
foo_log_and_conf/foo-2-private/glusterd/vols/foo/run/foo-2-private-mnt-brick.pid
<br>
foo_log_and_conf/foo-2-private/glusterd/vols/foo/foo-fuse.vol
<br>
<br>
foo_log_and_conf/foo-2-private/glusterd/vols/foo/foo.foo-1-private.mnt-brick.vol
<br>
foo_log_and_conf/foo-2-private/glusterfs
<br>
foo_log_and_conf/foo-2-private/glusterfs/nfs.log
<br>
foo_log_and_conf/foo-2-private/glusterfs/bricks
<br>
foo_log_and_conf/foo-2-private/glusterfs/bricks/mnt-brick.log
<br>
foo_log_and_conf/foo-2-private/glusterfs/.cmd_log_history
<br>
foo_log_and_conf/foo-2-private/glusterfs/etc-glusterfs-glusterd.vol.log
<br>
foo_log_and_conf/foo-1-private
<br>
foo_log_and_conf/foo-1-private/glusterd
<br>
foo_log_and_conf/foo-1-private/glusterd/glusterd.info
<br>
foo_log_and_conf/foo-1-private/glusterd/nfs
<br>
foo_log_and_conf/foo-1-private/glusterd/nfs/nfs-server.vol
<br>
foo_log_and_conf/foo-1-private/glusterd/nfs/run
<br>
foo_log_and_conf/foo-1-private/glusterd/nfs/run/nfs.pid
<br>
foo_log_and_conf/foo-1-private/glusterd/peers
<br>
<br>
foo_log_and_conf/foo-1-private/glusterd/peers/20b73d9a-ede0-454f-9fbb-b0eee9ce26a3
<br>
<br>
foo_log_and_conf/foo-1-private/glusterd/peers/7587ae34-9209-484a-9576-3939e061720c
<br>
foo_log_and_conf/foo-1-private/glusterd/vols
<br>
foo_log_and_conf/foo-1-private/glusterd/vols/foo
<br>
foo_log_and_conf/foo-1-private/glusterd/vols/foo/info
<br>
foo_log_and_conf/foo-1-private/glusterd/vols/foo/bricks
<br>
<br>
foo_log_and_conf/foo-1-private/glusterd/vols/foo/bricks/foo-2-private:-mnt-brick
<br>
<br>
foo_log_and_conf/foo-1-private/glusterd/vols/foo/bricks/foo-1-private:-mnt-brick
<br>
<br>
foo_log_and_conf/foo-1-private/glusterd/vols/foo/foo.foo-2-private.mnt-brick.vol
<br>
foo_log_and_conf/foo-1-private/glusterd/vols/foo/cksum
<br>
foo_log_and_conf/foo-1-private/glusterd/vols/foo/run
<br>
<br>
foo_log_and_conf/foo-1-private/glusterd/vols/foo/run/foo-1-private-mnt-brick.pid
<br>
foo_log_and_conf/foo-1-private/glusterd/vols/foo/foo-fuse.vol
<br>
<br>
foo_log_and_conf/foo-1-private/glusterd/vols/foo/foo.foo-1-private.mnt-brick.vol
<br>
foo_log_and_conf/foo-1-private/glusterfs
<br>
foo_log_and_conf/foo-1-private/glusterfs/nfs.log
<br>
foo_log_and_conf/foo-1-private/glusterfs/bricks
<br>
foo_log_and_conf/foo-1-private/glusterfs/bricks/mnt-brick.log
<br>
foo_log_and_conf/foo-1-private/glusterfs/.cmd_log_history
<br>
foo_log_and_conf/foo-1-private/glusterfs/etc-glusterfs-glusterd.vol.log
<br>
foo_log_and_conf/foo-3-private
<br>
foo_log_and_conf/foo-3-private/glusterd
<br>
foo_log_and_conf/foo-3-private/glusterd/glusterd.info
<br>
foo_log_and_conf/foo-3-private/glusterd/nfs
<br>
foo_log_and_conf/foo-3-private/glusterd/nfs/run
<br>
foo_log_and_conf/foo-3-private/glusterd/peers
<br>
<br>
foo_log_and_conf/foo-3-private/glusterd/peers/461f6e21-90c4-4b6c-bda8-7b99bacb2722
<br>
foo_log_and_conf/foo-3-private/glusterd/vols
<br>
foo_log_and_conf/foo-3-private/glusterd/vols/foo
<br>
foo_log_and_conf/foo-3-private/glusterd/vols/foo/info
<br>
foo_log_and_conf/foo-3-private/glusterd/vols/foo/bricks
<br>
<br>
foo_log_and_conf/foo-3-private/glusterd/vols/foo/bricks/foo-2-private:-mnt-brick
<br>
<br>
foo_log_and_conf/foo-3-private/glusterd/vols/foo/bricks/foo-1-private:-mnt-brick
<br>
<br>
foo_log_and_conf/foo-3-private/glusterd/vols/foo/foo.foo-2-private.mnt-brick.vol
<br>
foo_log_and_conf/foo-3-private/glusterd/vols/foo/cksum
<br>
foo_log_and_conf/foo-3-private/glusterd/vols/foo/foo-fuse.vol
<br>
<br>
foo_log_and_conf/foo-3-private/glusterd/vols/foo/foo.foo-1-private.mnt-brick.vol
<br>
foo_log_and_conf/foo-3-private/glusterfs
<br>
foo_log_and_conf/foo-3-private/glusterfs/nfs.log
<br>
foo_log_and_conf/foo-3-private/glusterfs/bricks
<br>
foo_log_and_conf/foo-3-private/glusterfs/bricks/mnt-brick.log
<br>
foo_log_and_conf/foo-3-private/glusterfs/.cmd_log_history
<br>
foo_log_and_conf/foo-3-private/glusterfs/etc-glusterfs-glusterd.vol.log
<br>
[root@vhead-010 ~]# exit
<br>
<br>
Best,
<br>
<br>
(2011/08/16 9:35), Mohit Anchlia wrote:
<br>
<blockquote type="cite">
<br>
I should have also asked you to stop and delete the volume before
getting
<br>
rid of the gluster config files. Can you also get rid of the
directories
<br>
inside vols and try to restart? glusterd is still looking for volume
files
<br>
that we just removed.
<br>
<br>
Also, just disable iptables for now explicitly.
<br>
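Something like this on each node (just a sketch; answer y at the
prompts and adjust names to your setup):
<br>
<br>
# while glusterd is still running, remove the volume cleanly
<br>
gluster volume stop foo
<br>
gluster volume delete foo
<br>
# then clear leftover config, directories included
<br>
service glusterd stop
<br>
rm -rf /etc/glusterd/vols/* /etc/glusterd/peers/*
<br>
service iptables stop
<br>
service glusterd start
<br>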
<br>
On Mon, Aug 15, 2011 at 5:22 PM, Tomoaki
Sato <a class="moz-txt-link-rfc2396E" href="mailto:tsato@valinux.co.jp">&lt;tsato@valinux.co.jp&gt;</a>
<br>
wrote:
<br>
<blockquote type="cite">
<br>
<blockquote type="cite">1) run peer detach for all
the servers
<br>
</blockquote>
<br>
done.
<br>
<br>
<blockquote type="cite">2) from server 1 ->3
and 3->1 make sure ports are open and
iptables
<br>
are turned off.
<br>
</blockquote>
<br>
done.
<br>
By the way, the same test on 3.1.5-1 works fine
in the same environment.
<br>
<br>
<blockquote type="cite">3) remove config files
under /etc/glusterd
<br>
</blockquote>
<br>
please review following logs.
<br>
<br>
<blockquote type="cite">4) run your tests again.
<br>
</blockquote>
<br>
I don't know why, but the glusterd service failed to
start on all 3 hosts.
<br>
<br>
[root@vhead-010 ~]# date
<br>
Tue Aug 16 09:12:53 JST 2011
<br>
[root@vhead-010 ~]# cat a.sh
<br>
#!/bin/bash
<br>
for i in foo-{1..3}-private
<br>
do
<br>
ssh ${i} service glusterd stop
<br>
ssh ${i} 'find /etc/glusterd -type f|xargs rm -f'
<br>
ssh ${i} service iptables restart
<br>
ssh ${i} iptables -vL
<br>
ssh ${i} service glusterd start
<br>
ssh ${i} find /etc/glusterd
<br>
ssh ${i} service glusterd status
<br>
done
<br>
[root@vhead-010 ~]# bash a.sh
<br>
Stopping glusterd:[ OK ]
<br>
Flushing firewall rules: [ OK ]
<br>
Setting chains to policy ACCEPT: filter [ OK ]
<br>
Unloading iptables modules: [ OK ]
<br>
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
<br>
pkts bytes target prot opt in out source
<br>
destination
<br>
<br>
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
<br>
pkts bytes target prot opt in out source
<br>
destination
<br>
<br>
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
<br>
pkts bytes target prot opt in out source
<br>
destination
<br>
Starting glusterd:[ OK ]
<br>
/etc/glusterd
<br>
/etc/glusterd/glusterd.info
<br>
/etc/glusterd/nfs
<br>
/etc/glusterd/nfs/run
<br>
/etc/glusterd/peers
<br>
/etc/glusterd/vols
<br>
/etc/glusterd/vols/foo
<br>
/etc/glusterd/vols/foo/bricks
<br>
/etc/glusterd/vols/foo/run
<br>
glusterd is stopped
<br>
Stopping glusterd:[ OK ]
<br>
Flushing firewall rules: [ OK ]
<br>
Setting chains to policy ACCEPT: filter [ OK ]
<br>
Unloading iptables modules: [ OK ]
<br>
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
<br>
pkts bytes target prot opt in out source
<br>
destination
<br>
<br>
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
<br>
pkts bytes target prot opt in out source
<br>
destination
<br>
<br>
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
<br>
pkts bytes target prot opt in out source
<br>
destination
<br>
Starting glusterd:[ OK ]
<br>
/etc/glusterd
<br>
/etc/glusterd/glusterd.info
<br>
/etc/glusterd/nfs
<br>
/etc/glusterd/nfs/run
<br>
/etc/glusterd/peers
<br>
/etc/glusterd/vols
<br>
/etc/glusterd/vols/foo
<br>
/etc/glusterd/vols/foo/bricks
<br>
/etc/glusterd/vols/foo/run
<br>
glusterd is stopped
<br>
Stopping glusterd:[ OK ]
<br>
Flushing firewall rules: [ OK ]
<br>
Setting chains to policy ACCEPT: filter [ OK ]
<br>
Unloading iptables modules: [ OK ]
<br>
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
<br>
pkts bytes target prot opt in out source
<br>
destination
<br>
<br>
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
<br>
pkts bytes target prot opt in out source
<br>
destination
<br>
<br>
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
<br>
pkts bytes target prot opt in out source
<br>
destination
<br>
Starting glusterd:[ OK ]
<br>
/etc/glusterd
<br>
/etc/glusterd/glusterd.info
<br>
/etc/glusterd/nfs
<br>
/etc/glusterd/nfs/run
<br>
/etc/glusterd/peers
<br>
/etc/glusterd/vols
<br>
/etc/glusterd/vols/foo
<br>
/etc/glusterd/vols/foo/bricks
<br>
/etc/glusterd/vols/foo/run
<br>
glusterd is stopped
<br>
[root@vhead-010 ~]# date
<br>
Tue Aug 16 09:13:20 JST 2011
<br>
[root@vhead-010 ~]# ssh foo-1-private
<br>
Last login: Tue Aug 16 09:06:57 2011 from
dlp.local.valinux.co.jp
<br>
[root@localhost ~]# tail -20 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
<br>
...
<br>
[2011-08-16 09:13:01.85858] I [glusterd.c:304:init] 0-management:
Using /etc/glusterd as working directory
<br>
[2011-08-16 09:13:01.87294] E [rpc-transport.c:799:rpc_transport_load]
0-rpc-transport:
/opt/glusterfs/3.1.6/lib64/glusterfs/3.1.6/rpc-transport/rdma.so:
cannot open shared object file: No such file or directory
<br>
[2011-08-16 09:13:01.87340] E [rpc-transport.c:803:rpc_transport_load]
0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is
not valid or not found on this machine
<br>
[2011-08-16 09:13:01.87402] E
[glusterd-store.c:654:glusterd_store_handle_retrieve] 0-glusterd:
Unable to retrieve store handle for /etc/glusterd/glusterd.info,
error: No such file or directory
<br>
[2011-08-16 09:13:01.87422] E
[glusterd-store.c:761:glusterd_retrieve_uuid] 0-: Unable to get store
handle!
<br>
[2011-08-16 09:13:01.87514] I [glusterd.c:95:glusterd_uuid_init]
0-glusterd: generated UUID: c0cef9f9-a79e-4189-8955-d83927db9cee
<br>
[2011-08-16 09:13:01.87681] E
[glusterd-store.c:654:glusterd_store_handle_retrieve] 0-glusterd:
Unable to retrieve store handle for /etc/glusterd/vols/foo/info,
error: No such file or directory
<br>
[2011-08-16 09:13:01.87704] E
[glusterd-store.c:1328:glusterd_store_retrieve_volumes] 0-: Unable to
restore volume: foo
<br>
</blockquote>
<br>
<blockquote type="cite">[2011-08-16 09:13:01.87732]
E [xlator.c:843:xlator_init] 0-management:
<br>
Initialization of volume 'management' failed,
review your volfile again
<br>
[2011-08-16 09:13:01.87751] E
[graph.c:331:glusterfs_graph_init]
<br>
0-management: initializing translator failed
<br>
[2011-08-16 09:13:01.87818] I
[glusterfsd.c:712:cleanup_and_exit]
<br>
0-glusterfsd: shutting down
<br>
[root@localhost ~]# exit
<br>
<br>
Best,
<br>
<br>
(2011/08/16 8:52), Mohit Anchlia wrote:
<br>
<blockquote type="cite">
<br>
Logs are generally in /var/log/glusterfs
<br>
<br>
Since you are playing with it, I would suggest
this:
<br>
<br>
1) run peer detach for all the servers
<br>
2) from server 1 ->3 and 3->1 make sure
ports are open and iptables
<br>
are turned off.
<br>
3) remove config files under /etc/glusterd
<br>
4) run your tests again.
<br>
<br>
On Mon, Aug 15, 2011 at 4:28 PM, Tomoaki
Sato <a class="moz-txt-link-rfc2396E" href="mailto:tsato@valinux.co.jp">&lt;tsato@valinux.co.jp&gt;</a>
<br>
wrote:
<br>
<blockquote type="cite">
<br>
Thanks, Mohit
<br>
<br>
(2011/08/16 8:05), Mohit Anchlia wrote:
<br>
<blockquote type="cite">
<br>
What's in your logs?
<br>
</blockquote>
<br>
I can obtain the logs needed. Could you tell me
how to collect them?
<br>
<br>
<blockquote type="cite">
<br>
Did you have foo-3-private before in your
gluster cluster ever or
<br>
adding this host for the first time?
<br>
</blockquote>
<br>
It was the first time.
<br>
None of the foo-X-private hosts had entries in
/etc/glusterd/peers/ or /etc/glusterd/vols/.
<br>
<br>
<blockquote type="cite">
<br>
Try gluster peer detach and then remove any
leftover configuration in the /etc/glusterd config directory. After
that, try again and see if that works.
<br>
</blockquote>
<br>
[root@vhead-010 ~]# date
<br>
Tue Aug 16 08:17:49 JST 2011
<br>
[root@vhead-010 ~]# cat a.sh
<br>
#!/bin/bash
<br>
for i in foo-{1..3}-private
<br>
do
<br>
ssh ${i} service glusterd stop
<br>
ssh ${i} rm -rf /etc/glusterd/peers/*
<br>
ssh ${i} rm -rf /etc/glusterd/vols/*
<br>
ssh ${i} service glusterd start
<br>
ssh ${i} find /etc/glusterd
<br>
done
<br>
[root@vhead-010 ~]# bash a.sh
<br>
Stopping glusterd:[ OK ]
<br>
Starting glusterd:[ OK ]
<br>
/etc/glusterd
<br>
/etc/glusterd/glusterd.info
<br>
/etc/glusterd/nfs
<br>
/etc/glusterd/nfs/nfs-server.vol
<br>
/etc/glusterd/nfs/run
<br>
/etc/glusterd/peers
<br>
/etc/glusterd/vols
<br>
Stopping glusterd:[ OK ]
<br>
Starting glusterd:[ OK ]
<br>
/etc/glusterd
<br>
/etc/glusterd/glusterd.info
<br>
/etc/glusterd/nfs
<br>
/etc/glusterd/nfs/nfs-server.vol
<br>
/etc/glusterd/nfs/run
<br>
/etc/glusterd/peers
<br>
/etc/glusterd/vols
<br>
Stopping glusterd:[ OK ]
<br>
Starting glusterd:[ OK ]
<br>
/etc/glusterd
<br>
/etc/glusterd/glusterd.info
<br>
/etc/glusterd/nfs
<br>
/etc/glusterd/nfs/nfs-server.vol
<br>
/etc/glusterd/nfs/run
<br>
/etc/glusterd/peers
<br>
/etc/glusterd/vols
<br>
[root@vhead-010 ~]# ssh foo-1-private
<br>
[root@localhost ~]# gluster peer probe
foo-2-private
<br>
Probe successful
<br>
[root@localhost ~]# gluster peer status
<br>
Number of Peers: 1
<br>
<br>
Hostname: foo-2-private
<br>
Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
<br>
State: Peer in Cluster (Connected)
<br>
[root@localhost ~]# gluster volume create foo
<br>
foo-1-private:/mnt/brick
<br>
Creation of volume foo has been successful.
Please start the volume to access data.
<br>
[root@localhost ~]# gluster volume start foo
<br>
Starting volume foo has been successful
<br>
[root@localhost ~]# gluster volume add-brick
foo
<br>
foo-2-private:/mnt/brick
<br>
Add Brick successful
<br>
[root@localhost ~]# gluster peer probe
foo-3-private
<br>
Probe successful
<br>
[root@localhost ~]# gluster peer status
<br>
Number of Peers: 2
<br>
<br>
Hostname: foo-2-private
<br>
Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
<br>
State: Peer in Cluster (Connected)
<br>
<br>
Hostname: foo-3-private
<br>
Uuid: 7fb98dac-fef7-4b33-837c-6483a767ec3e
<br>
State: Peer Rejected (Connected)
<br>
[root@localhost ~]# cat
/var/log/glusterfs/.cmd_log_history
<br>
...
<br>
[2011-08-16 08:20:28.862619] peer probe : on host foo-2-private:24007
<br>
[2011-08-16 08:20:28.912419] peer probe : on host foo-2-private:24007
FAILED
<br>
[2011-08-16 08:20:58.382350] Volume create : on volname: foo
attempted
<br>
[2011-08-16 08:20:58.382461] Volume create : on volname: foo
type:DEFAULT count: 1 bricks: foo-1-private:/mnt/brick
<br>
[2011-08-16 08:20:58.384674] Volume create : on volname: foo SUCCESS
<br>
[2011-08-16 08:21:04.831772] volume start : on volname: foo SUCCESS
<br>
[2011-08-16 08:21:22.682292] Volume add-brick : on volname: foo
attempted
<br>
[2011-08-16 08:21:22.682385] Volume add-brick : volname: foo type
DEFAULT count: 1 bricks: foo-2-private:/mnt/brick
<br>
[2011-08-16 08:21:22.682499] Volume add-brick : on volname: foo
SUCCESS
<br>
[2011-08-16 08:21:39.124574] peer probe : on host foo-3-private:24007
<br>
[2011-08-16 08:21:39.135609] peer probe : on host foo-3-private:24007
FAILED
<br>
<br>
Tomo
<br>
<br>
<blockquote type="cite">
<br>
<br>
<br>
On Mon, Aug 15, 2011 at 3:37 PM, Tomoaki
Sato <a class="moz-txt-link-rfc2396E" href="mailto:tsato@valinux.co.jp">&lt;tsato@valinux.co.jp&gt;</a>
<br>
wrote:
<br>
<blockquote type="cite">
<br>
Hi,
<br>
<br>
The following instructions work fine with
3.1.5-1 but not with 3.1.6-1.
<br>
<br>
1. make a new file system without peers.
[OK]
<br>
<br>
foo-1-private# gluster volume create foo
foo-1-private:/mnt/brick
<br>
foo-1-private# gluster volume start foo
<br>
foo-1-private# gluster peer status
<br>
No peers present
<br>
foo-1-private#
<br>
<br>
2. add a peer to the file system. [NG]
<br>
<br>
foo-1-private# gluster peer probe
foo-2-private
<br>
Probe successful
<br>
foo-1-private# gluster peer status
<br>
Number of Peers: 1
<br>
<br>
Hostname: foo-2-private
<br>
Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
<br>
State: Peer Rejected (Connected)
<br>
foo-1-private# gluster volume add-brick
foo
<br>
foo-2-private:/mnt/brick
<br>
Host foo-2-private not connected
<br>
foo-1-private#
<br>
<br>
<br>
The following instructions work fine even with
3.1.6-1.
<br>
<br>
1. make a new file system with single
peer. [OK]
<br>
<br>
foo-1-private# gluster peer status
<br>
No peers present
<br>
foo-1-private# gluster peer probe
foo-2-private
<br>
Probe successful
<br>
foo-1-private# gluster peer status
<br>
Number of Peers: 1
<br>
<br>
Hostname: foo-2-private
<br>
Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
<br>
State: Peer in Cluster (Connected)
<br>
foo-1-private# gluster volume create foo
foo-1-private:/mnt/brick
<br>
Creation of volume foo has been
successful. Please start the volume to access data.
<br>
foo-1-private# gluster volume start foo
<br>
Starting volume foo has been successful
<br>
foo-1-private# gluster volume add-brick
foo
<br>
foo-2-private:/mnt/brick
<br>
Add Brick successful
<br>
foo-1-private#
<br>
<br>
But ...
<br>
<br>
2. add a peer to the file system. [NG]
<br>
<br>
foo-1-private# gluster peer probe
foo-3-private
<br>
Probe successful
<br>
foo-1-private# gluster peer status
<br>
Number of Peers: 2
<br>
<br>
Hostname: foo-2-private
<br>
Uuid: c2b314ac-6ed1-455a-84d4-ec22041ee2b2
<br>
State: Peer in Cluster (Connected)
<br>
<br>
Hostname: foo-3-private
<br>
Uuid: 7fb98dac-fef7-4b33-837c-6483a767ec3e
<br>
State: Peer Rejected (Connected)
<br>
foo-1-private# gluster volume add-brick
foo
<br>
foo-3-private:/mnt/brick
<br>
Host foo-3-private not connected
<br>
foo-1-private#
<br>
<br>
How should I add extra peers to an existing
file system?
<br>
<br>
Best,
<br>
<br>
<br>
</blockquote>
</blockquote>
<br>
<br>
</blockquote>
</blockquote>
<br>
<br>
</blockquote>
</blockquote>
<br>
<br>
</blockquote>
</blockquote>
<br>
<br>
</blockquote>
</blockquote>
<br>
<br>
</blockquote>
<br>
<br>
</blockquote>
<br>
<pre wrap="">
<fieldset class="mimeAttachmentHeader"></fieldset>
_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://gluster.org/cgi-bin/mailman/listinfo/gluster-users">http://gluster.org/cgi-bin/mailman/listinfo/gluster-users</a>
</pre>
</blockquote>
<br>
</body>
</html>