Hello Federico,<br><br>Can you please try with IP addresses instead of hostnames in the volume file? There was a problem with long hostnames in 2.0.0 that has since been fixed.<br><br>Thanks,<br>Vijay<br><br><div class="gmail_quote">
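For example, one protocol/client volume rewritten to use an address directly (a sketch; 10.232.0.191 is a placeholder, not an address from this thread, so substitute each node's actual IP):<br>
<pre>
volume drdan0191
  type protocol/client
  option transport-type tcp
  # hostname replaced by the node's IP address (placeholder shown)
  option remote-host 10.232.0.191
  option remote-subvolume brick
end-volume
</pre>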
On Mon, May 11, 2009 at 10:19 PM, Sacerdoti, Federico <span dir="ltr"><<a href="mailto:Federico.Sacerdoti@deshawresearch.com">Federico.Sacerdoti@deshawresearch.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div>
<div dir="ltr" align="left"><font color="#0000ff" face="Arial" size="2"><span>Thanks. I updated to 2.0.0 but the daemons will not
start and give a very generic error that does not help</span></font></div>
<div dir="ltr" align="left"><font color="#0000ff" face="Arial" size="2"><span></span></font> </div>
<div><font color="#0000ff" face="Arial" size="2">2009-05-11 12:41:56 E
[glusterfsd.c:483:_xlator_graph_init] drdan0199: validating translator
failed<br>2009-05-11 12:41:56 E [glusterfsd.c:1145:main] glusterfs: translator
initialization failed. exiting</font></div>
<div><font color="#0000ff" face="Arial" size="2"></font> </div>
<div><span><font color="#0000ff" face="Arial" size="2">Can
you see anything wrong in the volume file? It works fine with
2.0.0rc4.</font></span></div>
<div><span><font color="#0000ff" face="Arial" size="2"></font></span> </div>
<div><span><font color="#0000ff" face="Arial" size="2">--START--</font></span></div>
<div><span><font color="#0000ff" face="Arial" size="2">volume
storage<br> type storage/posix<br> option directory
/scratch/glusterfs/export<br>end-volume</font></span></div>
<div> </div>
<div><span><font color="#0000ff" face="Arial" size="2">#
Required for AFR (file replication) module<br>volume locks<br> type
features/locks<br> subvolumes storage<br>end-volume</font></span></div>
<div> </div>
<div><span><font color="#0000ff" face="Arial" size="2">volume
brick<br> type performance/io-threads<br>#option thread-count 1<br>
option thread-count 8 <br> subvolumes
locks<br>end-volume</font></span></div>
<div> </div>
<div><span><font color="#0000ff" face="Arial" size="2">volume
server<br> type protocol/server<br> subvolumes brick<br>
option transport-type tcp<br> option auth.addr.brick.allow
10.232.*<br>end-volume</font></span></div>
<div><font color="#0000ff" face="Arial" size="2"></font> </div>
<div><span><font color="#0000ff" face="Arial" size="2">volume
drdan0191<br> type protocol/client<br> option transport-type
tcp<br> option remote-host <a href="http://drdan0191.en.desres.deshaw.com" target="_blank">drdan0191.en.desres.deshaw.com</a><br> option
remote-subvolume brick<br>end-volume</font></span></div>
<div> </div>
<div><span><font color="#0000ff" face="Arial" size="2">volume
drdan0192<br> type protocol/client<br> option transport-type
tcp<br> option remote-host <a href="http://drdan0192.en.desres.deshaw.com" target="_blank">drdan0192.en.desres.deshaw.com</a><br> option
remote-subvolume brick<br>end-volume<br></font></span></div>
<div dir="ltr" align="left"><font color="#0000ff" face="Arial" size="2"><span>[...]</span></font></div>
<div dir="ltr" align="left"><font color="#0000ff" face="Arial" size="2"><span></span></font> </div>
<div dir="ltr" align="left"><font color="#0000ff" face="Arial" size="2"><span>volume nufa<br> type cluster/nufa<br>
option local-volume-name `hostname -s`<br> #subvolumes replicate1
replicate2 replicate3 replicate4 replicate5<br> subvolumes drdan0191
drdan0192 drdan0193 drdan0194 drdan0195 drdan0196 drdan0197 drdan0198 drdan0199
drdan0200<br>end-volume</span></font></div>
<div> </div>
<div dir="ltr" align="left"><font color="#0000ff" face="Arial" size="2"><span># This, from <a href="https://savannah.nongnu.org/bugs/?24972" target="_blank">https://savannah.nongnu.org/bugs/?24972</a>,
mounts the <br># filesystem at server start time, like an /etc/fstab
entry<br>volume fuse<br> type mount/fuse<br> option direct-io-mode
1<br> option entry-timeout 1<br> #option attr-timeout 1 (not
recognized in 2.0)<br> option mountpoint /mnt/glusterfs<br>
subvolumes nufa<br>end-volume </span></font></div>
<div dir="ltr" align="left"><font color="#0000ff" face="Arial" size="2"><span>--END--</span></font></div>
<div dir="ltr" align="left"><font color="#0000ff" face="Arial" size="2"><span></span></font> </div>
<div dir="ltr" align="left"><font color="#0000ff" face="Arial" size="2"><span>Thanks,</span></font></div>
<div dir="ltr" align="left"><font color="#0000ff" face="Arial" size="2"><span>fds</span></font></div>
<div dir="ltr" align="left"><font color="#0000ff" face="Arial" size="2"><span></span></font> </div><br>
<div dir="ltr" align="left" lang="en-us">
<hr>
<font face="Tahoma" size="2"><b>From:</b> Liam Slusser [mailto:<a href="mailto:lslusser@gmail.com" target="_blank">lslusser@gmail.com</a>]
<br><b>Sent:</b> Thursday, May 07, 2009 1:51 PM<br><b>To:</b> Sacerdoti,
Federico<br><b>Cc:</b> <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br><b>Subject:</b> Re:
[Gluster-users] rm -rf errors<br></font><br></div><div><div></div><div class="h5">
<div></div>You should upgrade to the 2.0.0 release and try again.
It fixes all sorts of bugs.
<div><br></div>
<div>liam<br>
<div><br>
<div class="gmail_quote">On Thu, May 7, 2009 at 8:21 AM, Sacerdoti, Federico <span dir="ltr"><<a href="mailto:Federico.Sacerdoti@deshawresearch.com" target="_blank">Federico.Sacerdoti@deshawresearch.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0px 0px 0px 0.8ex; padding-left: 1ex;">Hello,<br><br>I
am evaluating glusterfs and have seen some strange behavior with<br>remove. I
have gluster/2.0.0rc4 set up on 10 Linux nodes connected with<br>GigE. The
config is Nufa/fuse with one storage brick per server, as seen<br>in the
attached nufa.vol config file, which I use for both clients
and<br>servers.<br><br>My experiment is to launch 10 parallel writers, each of
which writes<br>32GiB of data in small files (2MB) to a shared
gluster-fuse<br>mounted filesystem. The files are named uniquely per client,
so each<br>file is only written once. This worked well, and I am seeing
performance<br>close to that of native disk, even with 8 writers per
node.<br><br>However when I do a parallel "rm -rf writedir/" on the 10 nodes,
where<br>writedir is the directory written in by the parallel writers
described<br>above, I see strange effects. There are 69,000 UNLINK errors in
the<br>glusterfsd.log of one server, in the form shown below. This alone is
not<br>surprising as the operation is occurring in parallel. However the
remove<br>took much longer than expected, 92min, and more surprisingly the
rm<br>command exited 0 but files remained in the writedir!<br><br>I ran rm -rf
writedir from a single client, and it too exited 0 but left<br>the writedir
non-empty. Is this expected?<br><br>Thanks,<br>Federico<br><br>--From
glusterfsd.log--<br>2009-05-04 11:35:15 E
[fuse-bridge.c:964:fuse_unlink_cbk]<br>glusterfs-fuse: 5764889: UNLINK()
/write.2MB.runid1.p1/5 => -1 (No such<br>file or directory)<br>2009-05-04
11:35:15 E [dht-common.c:1294:dht_err_cbk] nufa: subvolume<br>drdan0192
returned -1 (No such file or directory)<br>2009-05-04 11:35:15 E
[fuse-bridge.c:964:fuse_unlink_cbk]<br>glusterfs-fuse: 5764894: UNLINK()
/write.2MB.runid1.p1/51 => -1 (No such<br>file or
directory)<br>--end--<br> <<nufa.vol>><br><br>_______________________________________________<br>Gluster-users
mailing list<br><a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br><a href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</a><br>
<br></blockquote></div><br></div></div></div></div></div>
<br></blockquote></div><br>