Sorry, I must've missed some backend directories. I recreated all of them
and the straight DHT configuration now works! I'll try it with replication
now.

thanks
Craig
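For the replication step, a minimal sketch of the cluster/replicate
translator syntax, assuming mirrored pairs over the existing locks volumes
from the config quoted below; the repl1/repl2 names are made up, and in
practice each pair should span two different servers rather than two disks
on one server:

    # Hypothetical sketch only: mirror the backends in two replicate
    # pairs, then distribute across the pairs.
    volume repl1
      type cluster/replicate
      subvolumes locks1 locks2
    end-volume

    volume repl2
      type cluster/replicate
      subvolumes locks3 locks4
    end-volume

    volume home
      type cluster/distribute
      subvolumes repl1 repl2
    end-volume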
bold;">Subject:</span></b> Re: [Gluster-users] starting 4th node in 4 node dht cluster fails<br></font><br>
<div style="font-family: arial,helvetica,sans-serif; font-size: 10pt;"><div>In case it's helpful, when 3 nodes are started, the mount dir (/mnt/glusterfs) looks like:<br><br>drwxrwxrwx 2 root root 49152 Feb 5 19:14 glusterfs<br><br>After the 4th node it looks like:<br>?--------- ? ? ? ? ? /mnt/glusterfs<br><br>Also, my command line is:<br>/usr/local/sbin/glusterfs -f /usr/local/etc/glusterfs/cloud-no-repl.2.vol /mnt/glusterfs<br><br><br><br></div><div style="font-family: arial,helvetica,sans-serif; font-size: 10pt;"><br><div style="font-family: arial,helvetica,sans-serif; font-size: 13px;"><font face="Tahoma" size="2"><hr size="1"><b><span style="font-weight: bold;">From:</span></b> Krishna Srinivas <krishna@zresearch.com><br><b><span style="font-weight: bold;">To:</span></b> Craig Flockhart
<craigflockhart@yahoo.com><br><b><span style="font-weight: bold;">Cc:</span></b> Amar Tumballi (bulde) <amar@gluster.com>; gluster-users@gluster.org<br><b><span style="font-weight: bold;">Sent:</span></b> Saturday, February 7, 2009 3:14:54 AM<br><b><span style="font-weight: bold;">Subject:</span></b> Re: [Gluster-users] starting 4th node in 4 node dht cluster fails<br></font><br>
Craig,
Delete the backend directories (or remove the trusted.glusterfs.dht xattr
on them and empty the backend directories), recreate them, and then start
DHT and see if it works fine.
Krishna
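For reference, a minimal sketch of the xattr route Krishna describes,
assuming the four export paths from the config quoted at the bottom of
this thread; getfattr/setfattr are the stock attr-package tools:

    # On each server, clear the DHT layout xattr from every backend export
    # so the directories look "virgin" to DHT's selfheal again (per the
    # advice above, also empty the directories if they hold stale files):
    for d in /mnt/chard1/export /mnt/chard2/export \
             /mnt/chard3/export /mnt/chard4/export; do
        getfattr -n trusted.glusterfs.dht "$d"   # show the current layout, if set
        setfattr -x trusted.glusterfs.dht "$d"   # remove it
    done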
On Sat, Feb 7, 2009 at 4:41 AM, Craig Flockhart
<craigflockhart@yahoo.com> wrote:
> Hi Amar,
> Thanks for the quick reply, but that doesn't work either. I just get more
> holes and overlaps:
>
> 2009-02-06 15:04:04 E [dht-layout.c:460:dht_layout_normalize] dist1: found
> anomalies in /. holes=3 overlaps=9
> 2009-02-06 15:04:04 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing
> assignment on /
> 2009-02-06 15:04:04 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the
> directory is not a virgin
> 2009-02-06 15:04:04 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge:
> revalidate of / failed (Structure needs cleaning)
> 2009-02-06 15:04:04 E [dht-layout.c:460:dht_layout_normalize] dist1: found
> anomalies in /. holes=3 overlaps=9
> 2009-02-06 15:04:04 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing
> assignment on /
> 2009-02-06 15:04:04 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the
> directory is not a virgin
> 2009-02-06 15:04:04 E [fuse-bridge.c:404:fuse_entry_cbk] glusterfs-fuse: 2:
> LOOKUP() / => -1 (Structure needs cleaning)
> 2009-02-06 15:04:04 E [dht-layout.c:460:dht_layout_normalize] dist1: found
> anomalies in /. holes=3 overlaps=9
> 2009-02-06 15:04:04 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing
> assignment on /
> 2009-02-06 15:04:04 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the
> directory is not a virgin
> 2009-02-06 15:04:04 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge:
> revalidate of / failed (Structure needs cleaning)
> 2009-02-06 15:04:04 E [dht-layout.c:460:dht_layout_normalize] dist1: found
> anomalies in /. holes=3 overlaps=9
> 2009-02-06 15:04:04 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing
> assignment on /
> 2009-02-06 15:04:04 E [dht-selfheal.c:422:dht_selfheal_directory] dist1: the
> directory is not a virgin
> 2009-02-06 15:04:04 E [fuse-bridge.c:404:fuse_entry_cbk] glusterfs-fuse: 3:
> LOOKUP() / => -1 (Structure needs cleaning)
>
> ________________________________
> From: Amar Tumballi (bulde) <amar@gluster.com>
> To: Craig Flockhart <craigflockhart@yahoo.com>
> Cc: gluster-users@gluster.org
> Sent: Friday, February 6, 2009 2:37:08 PM
> Subject: Re: [Gluster-users] starting 4th node in 4 node dht cluster fails
>
> Hi Craig,
> As you are using 'distribute' (client side) over 'distribute' (server
> side), this will not work right now. To get it working today, you can
> export 4 volumes from each server, and on the client side define 4x4
> protocol/client volumes, which you can aggregate with a single
> 'cluster/distribute' (which will then have 16 subvolumes).
>
> To get the configuration below working as is, you will need to wait
> about a week more, IMO.
>
> Regards,
> Amar
href="mailto:craigflockhart@yahoo.com">craigflockhart@yahoo.com</a>><br>>><br>>> Using dht
translator to cluster together 4 nodes each with 4 disks.<br>>> Starting glusterfs on the 4th causes "Structure needs cleaning" when<br>>> ls-ing the mount point on any of them. It's fine with 3 nodes started.<br>>> Using fuse-2.7.4<br>>> GlusterFS 2.0.0rc1<br>>> Linux 2.6.18-53.el5 kernel<br>>><br>>> Errors from the log:<br>>><br>>><br>>> 2009-02-06 15:23:51 E [dht-layout.c:460:dht_layout_normalize] dist1: found<br>>> anomalies in /. holes=1 overlaps=3<br>>> 2009-02-06 15:23:51 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing<br>>> assignment on /<br>>> 2009-02-06 15:23:51 E [dht-selfheal.c:422:dht_selfheal_directory] dist1:<br>>> the directory is not a virgin<br>>> 2009-02-06 15:23:51 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge:<br>>> revalidate of / failed (Structure needs cleaning)<br>>> 2009-02-06 15:23:51 E
[dht-layout.c:460:dht_layout_normalize] dist1: found<br>>> anomalies in /. holes=1 overlaps=3<br>>> 2009-02-06 15:23:51 W [dht-common.c:137:dht_lookup_dir_cbk] dist1: fixing<br>>> assignment on /<br>>> 2009-02-06 15:23:51 E [dht-selfheal.c:422:dht_selfheal_directory] dist1:<br>>> the directory is not a virgin<br>>> 2009-02-06 15:23:51 E [fuse-bridge.c:404:fuse_entry_cbk] glusterfs-fuse:<br>>> 2: LOOKUP() / => -1 (Structure needs cleaning)<br>>><br>>> Config for one of the machines:<br>>><br>>> volume posix-d1<br>>> type storage/posix<br>>> option directory /mnt/chard1/export<br>>> end-volume<br>>><br>>> volume locks1<br>>> type features/locks<br>>> subvolumes posix-d1<br>>> end-volume<br>>><br>>><br>>> volume posix-d2<br>>> type storage/posix<br>>> option directory
/mnt/chard2/export<br>>> end-volume<br>>><br>>><br>>> volume locks2<br>>> type features/locks<br>>> subvolumes posix-d2<br>>> end-volume<br>>><br>>><br>>> volume posix-d3<br>>> type storage/posix<br>>> option directory /mnt/chard3/export<br>>> end-volume<br>>><br>>> volume locks3<br>>> type features/locks<br>>> subvolumes posix-d3<br>>> end-volume<br>>><br>>><br>>> volume posix-d4<br>>> type storage/posix<br>>> option directory /mnt/chard4/export<br>>> end-volume<br>>><br>>> volume locks4<br>>> type features/locks<br>>> subvolumes posix-d4<br>>> end-volume<br>>><br>>> volume home-ns<br>>> type storage/posix<br>>> option directory /var/local/glusterfs/namespace1<br>>>
end-volume<br>>><br>>> volume home<br>>> type cluster/distribute<br>>> subvolumes locks1 locks2 locks3 locks4<br>>> end-volume<br>>><br>>> volume server<br>>> type protocol/server<br>>> option transport-type tcp<br>>> subvolumes home<br>>> option auth.addr.home.allow *<br>>> end-volume<br>>><br>>><br>>> volume zwei<br>>> type protocol/client<br>>> option transport-type tcp<br>>> option remote-host zwei<br>>> option remote-subvolume home<br>>> end-volume<br>>><br>>> volume char<br>>> type protocol/client<br>>> option transport-type tcp<br>>> option remote-host char<br>>> option remote-subvolume
home<br>>> end-volume<br>>><br>>> volume pente<br>>> type protocol/client<br>>> option transport-type tcp<br>>> option remote-host pente<br>>> option remote-subvolume home<br>>> end-volume<br>>><br>>> volume tres<br>>> type protocol/client<br>>> option transport-type tcp<br>>> option remote-host tres<br>>> option remote-subvolume home<br>>> end-volume<br>>><br>>> volume dist1<br>>> type cluster/distribute<br>>> subvolumes pente char tres zwei<br>>> end-volume<br>>><br>>><br>>><br>>><br>>><br>>> _______________________________________________<br>>> Gluster-users mailing list<br>>> <a
rel="nofollow" ymailto="mailto:Gluster-users@gluster.org" target="_blank" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>>> <a rel="nofollow" target="_blank" href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users">http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</a><br>>><br>><br>><br>><br>> --<br>> Amar Tumballi<br>> Gluster/GlusterFS Hacker<br>> [bulde on #gluster/irc.gnu.org]<br>> <a rel="nofollow" target="_blank" href="http://www.zresearch.com">http://www.zresearch.com</a> - Commoditizing Super Storage!<br>><br>> _______________________________________________<br>> Gluster-users mailing list<br>> <a rel="nofollow" ymailto="mailto:Gluster-users@gluster.org" target="_blank" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>> <a rel="nofollow" target="_blank"
href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users">http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</a><br>><br>><br></div></div></div></div></div></div></body></html>