<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=us-ascii">
<META content="MSHTML 6.00.6000.21015" name=GENERATOR></HEAD>
<BODY>
<DIV dir=ltr align=left><FONT face=Arial color=#0000ff size=2><SPAN
class=023454616-11052009>Thanks. I updated to 2.0.0, but the daemons will not
start; they give only a generic error that does not say what failed:</SPAN></FONT></DIV>
<DIV dir=ltr align=left><FONT face=Arial color=#0000ff size=2><SPAN
class=023454616-11052009></SPAN></FONT> </DIV>
<DIV><FONT face=Arial color=#0000ff size=2>2009-05-11 12:41:56 E
[glusterfsd.c:483:_xlator_graph_init] drdan0199: validating translator
failed<BR>2009-05-11 12:41:56 E [glusterfsd.c:1145:main] glusterfs: translator
initialization failed. exiting</FONT></DIV>
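<DIV><FONT face=Arial color=#0000ff size=2>To get more than the generic
message, I can run the daemon in the foreground at debug log level (flags as
I understand the 2.0 usage output; the volfile path below is just an
example):</FONT></DIV>

```shell
# Run the server non-daemonized (-N) at debug log level (-L DEBUG) so the
# log names which translator fails validation. The volfile path is an
# example and should match the actual installation.
glusterfsd -f /etc/glusterfs/nufa.vol -L DEBUG -N
```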
<DIV><FONT face=Arial color=#0000ff size=2></FONT> </DIV>
<DIV><SPAN class=023454616-11052009><FONT face=Arial color=#0000ff size=2>Can
you see anything wrong in the volume file? It works fine with
2.0.0rc4.</FONT></SPAN></DIV>
<DIV><SPAN class=023454616-11052009><FONT face=Arial color=#0000ff
size=2></FONT></SPAN> </DIV>
<DIV><SPAN class=023454616-11052009><FONT face=Arial color=#0000ff
size=2>--START--</FONT></SPAN></DIV>
<DIV><SPAN class=023454616-11052009><FONT face=Arial color=#0000ff size=2>volume
storage<BR> type storage/posix<BR> option directory
/scratch/glusterfs/export<BR>end-volume</FONT></SPAN></DIV>
<DIV> </DIV>
<DIV><SPAN class=023454616-11052009><FONT face=Arial color=#0000ff size=2>#
Required for AFR (file replication) module<BR>volume locks<BR> type
features/locks<BR> subvolumes storage<BR>end-volume</FONT></SPAN></DIV>
<DIV> </DIV>
<DIV><SPAN class=023454616-11052009><FONT face=Arial color=#0000ff size=2>volume
brick<BR> type performance/io-threads<BR>#option thread-count 1<BR>
option thread-count 8 <BR> subvolumes
locks<BR>end-volume</FONT></SPAN></DIV>
<DIV> </DIV>
<DIV><SPAN class=023454616-11052009><FONT face=Arial color=#0000ff size=2>volume
server<BR> type protocol/server<BR> subvolumes brick<BR>
option transport-type tcp<BR> option auth.addr.brick.allow
10.232.*<BR>end-volume</FONT></SPAN></DIV>
<DIV><FONT face=Arial color=#0000ff size=2></FONT> </DIV>
<DIV><SPAN class=023454616-11052009><FONT face=Arial color=#0000ff size=2>volume
drdan0191<BR> type protocol/client<BR> option transport-type
tcp<BR> option remote-host drdan0191.en.desres.deshaw.com<BR> option
remote-subvolume brick<BR>end-volume</FONT></SPAN></DIV>
<DIV> </DIV>
<DIV><SPAN class=023454616-11052009><FONT face=Arial color=#0000ff size=2>volume
drdan0192<BR> type protocol/client<BR> option transport-type
tcp<BR> option remote-host drdan0192.en.desres.deshaw.com<BR> option
remote-subvolume brick<BR>end-volume<BR></FONT></SPAN></DIV>
<DIV dir=ltr align=left><FONT face=Arial color=#0000ff size=2><SPAN
class=023454616-11052009>[...]</SPAN></FONT></DIV>
<DIV dir=ltr align=left><FONT face=Arial color=#0000ff size=2><SPAN
class=023454616-11052009></SPAN></FONT> </DIV>
<DIV dir=ltr align=left><FONT face=Arial color=#0000ff size=2><SPAN
class=023454616-11052009>volume nufa<BR> type cluster/nufa<BR>
option local-volume-name `hostname -s`<BR> #subvolumes replicate1
replicate2 replicate3 replicate4 replicate5<BR> subvolumes drdan0191
drdan0192 drdan0193 drdan0194 drdan0195 drdan0196 drdan0197 drdan0198 drdan0199
drdan0200<BR>end-volume</SPAN></FONT></DIV>
<DIV> </DIV>
<DIV dir=ltr align=left><FONT face=Arial color=#0000ff size=2><SPAN
class=023454616-11052009># This, from <A
href="https://savannah.nongnu.org/bugs/?24972">https://savannah.nongnu.org/bugs/?24972</A>,
does the <BR># filesystem mounting at server start time. Like an /etc/fstab
entry<BR>volume fuse<BR> type mount/fuse<BR> option direct-io-mode
1<BR> option entry-timeout 1<BR> #option attr-timeout 1 (not
recognized in 2.0)<BR> option mountpoint /mnt/glusterfs<BR>
subvolumes nufa<BR>end-volume </SPAN></FONT></DIV>
<DIV dir=ltr align=left><FONT face=Arial color=#0000ff size=2><SPAN
class=023454616-11052009>--END--</SPAN></FONT></DIV>
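<DIV dir=ltr align=left><FONT face=Arial color=#0000ff size=2><SPAN
class=023454616-11052009>One thing I am not sure about is whether 2.0 expands
the backticks in "option local-volume-name `hostname -s`"; if the volfile
parser takes them literally, nufa would have no matching subvolume name. In
case it does not, I could substitute the hostname into a template volfile at
deploy time instead (the @HOSTNAME@ placeholder and nufa.vol.in name below
are made up for illustration):</SPAN></FONT></DIV>

```shell
# Hypothetical workaround: expand the short hostname into a template volfile
# at deploy time rather than relying on backtick expansion inside the
# volfile. "nufa.vol.in" and "@HOSTNAME@" are illustrative names.
sed "s/@HOSTNAME@/$(hostname -s)/g" nufa.vol.in > nufa.vol
```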
<DIV dir=ltr align=left><FONT face=Arial color=#0000ff size=2><SPAN
class=023454616-11052009></SPAN></FONT> </DIV>
<DIV dir=ltr align=left><FONT face=Arial color=#0000ff size=2><SPAN
class=023454616-11052009>Thanks,</SPAN></FONT></DIV>
<DIV dir=ltr align=left><FONT face=Arial color=#0000ff size=2><SPAN
class=023454616-11052009>fds</SPAN></FONT></DIV>
<DIV dir=ltr align=left><FONT face=Arial color=#0000ff size=2><SPAN
class=023454616-11052009></SPAN></FONT> </DIV><BR>
<DIV class=OutlookMessageHeader lang=en-us dir=ltr align=left>
<HR tabIndex=-1>
<FONT face=Tahoma size=2><B>From:</B> Liam Slusser [mailto:lslusser@gmail.com]
<BR><B>Sent:</B> Thursday, May 07, 2009 1:51 PM<BR><B>To:</B> Sacerdoti,
Federico<BR><B>Cc:</B> gluster-users@gluster.org<BR><B>Subject:</B> Re:
[Gluster-users] rm -rf errors<BR></FONT><BR></DIV>
<DIV></DIV>You should upgrade to the 2.0.0 release and try again;
it fixes all sorts of bugs.
<DIV><BR></DIV>
<DIV>liam<BR>
<DIV><BR>
<DIV class=gmail_quote>On Thu, May 7, 2009 at 8:21 AM, Sacerdoti, Federico <SPAN
dir=ltr><<A
href="mailto:Federico.Sacerdoti@deshawresearch.com">Federico.Sacerdoti@deshawresearch.com</A>></SPAN>
wrote:<BR>
<BLOCKQUOTE class=gmail_quote
style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">Hello,<BR><BR>I
am evaluating glusterfs and have seen some strange behavior with<BR>remove. I
have gluster/2.0.0rc4 set up on 10 Linux nodes connected with<BR>GigE. The
config is Nufa/fuse with one storage brick per server, as seen<BR>in the
attached nufa.vol config file, which I use for both clients
and<BR>servers.<BR><BR>My experiment is to launch 10 parallel writers, each of
which writes<BR>32GiB worth of data in small files (2MB each) to a shared
gluster-fuse<BR>mounted filesystem. The files are named uniquely per client,
so each<BR>file is only written once. This worked well, and I am seeing
performance<BR>close to that of native disk, even with 8 writers per
node.<BR><BR>However, when I do a parallel "rm -rf writedir/" on the 10 nodes,
where<BR>writedir is the directory written in by the parallel writers
described<BR>above, I see strange effects. There are 69,000 UNLINK errors in
the<BR>glusterfsd.log of one server, in the form shown below. This alone is
not<BR>surprising, as the operation is occurring in parallel. However, the
remove<BR>took much longer than expected (92 minutes), and, more surprisingly, the
rm<BR>command exited 0 but files remained in the writedir!<BR><BR>I ran rm -rf
writedir from a single client, and it too exited 0 but left<BR>the writedir
non-empty. Is this expected?<BR><BR>Thanks,<BR>Federico<BR><BR>--From
glusterfsd.log--<BR>2009-05-04 11:35:15 E
[fuse-bridge.c:964:fuse_unlink_cbk]<BR>glusterfs-fuse: 5764889: UNLINK()
/write.2MB.runid1.p1/5 => -1 (No such<BR>file or directory)<BR>2009-05-04
11:35:15 E [dht-common.c:1294:dht_err_cbk] nufa: subvolume<BR>drdan0192
returned -1 (No such file or directory)<BR>2009-05-04 11:35:15 E
[fuse-bridge.c:964:fuse_unlink_cbk]<BR>glusterfs-fuse: 5764894: UNLINK()
/write.2MB.runid1.p1/51 => -1 (No such<BR>file or
directory)<BR>--end--<BR> <<nufa.vol>><BR><BR>_______________________________________________<BR>Gluster-users
mailing list<BR><A
href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</A><BR><A
href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users"
target=_blank>http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</A><BR><BR></BLOCKQUOTE></DIV><BR></DIV></DIV></BODY></HTML>