<html>
<head>
<meta content="text/html; charset=UTF-8"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
I'm betting that your bricks are formatted ext4. If they are, you
are hitting a bug caused by a recent structure change in ext4. If that
is the problem, you can downgrade your kernel to a version from before
the change was backported (I'm not sure which version that is, though),
or reformat your bricks as xfs.<br>
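To check which filesystem a brick is on, something like the following should work on each brick server. This is just a sketch: the default path here is only a placeholder, and on your servers you would pass the actual brick directory (e.g. /mnt/yfudis3rep2d from the volume info quoted below):

```shell
#!/bin/sh
# Print the filesystem type backing a directory, so you can confirm
# whether a brick is on ext4 or xfs. Pass the brick path as the first
# argument; "/" is used as a fallback only so the command runs anywhere.
brick=${1:-/}
stat -f -c %T "$brick"
```

Running `df -T /mnt/yfudis3rep2d` on a brick server gives the same information in its "Type" column.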
<br>
On 08/14/2012 12:15 AM, 符永涛 wrote:
<blockquote
cite="mid:CADFMGuLToETGy3zTzhQW_D8aCTQZ1bvMYrWkfAKzXq43Ygn+iw@mail.gmail.com"
type="cite">Hi Bryan,<br>
<br>
Thank you for your support. I just found out that glusterfs 3.2 and
3.3 both have this issue.<br>
My server is Red Hat 6.3 with kernel 2.6.32-279.el6.x86_64; is it
compatible?<br>
<br>
More info is listed below:<br>
<br>
volume info:<br>
Volume Name: yfudis3rep2<br>
Type: Distributed-Replicate<br>
Volume ID: 6a8da204-1348-4cd8-a188-13807b827965<br>
Status: Started<br>
Number of Bricks: 3 x 2 = 6<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 10.10.135.21:/mnt/yfudis3rep2d<br>
Brick2: 10.10.135.23:/mnt/yfudis3rep2d<br>
Brick3: 10.10.135.24:/mnt/yfudis3rep2d<br>
Brick4: 10.10.135.25:/mnt/yfudis3rep2d<br>
Brick5: 10.10.135.26:/mnt/yfudis3rep2d<br>
Brick6: 10.10.135.27:/mnt/yfudis3rep2d<br>
Options Reconfigured:<br>
cluster.quorum-count: 2<br>
cluster.quorum-type: fixed<br>
diagnostics.client-log-level: DEBUG<br>
diagnostics.brick-log-level: DEBUG<br>
<br>
df<br>
10.10.135.24:/yfudis3rep2<br>
87G 16G 67G 20% /mnt/yfudis3rep2<br>
<br>
[<a moz-do-not-send="true" href="mailto:root@10.10.135.21">root@10.10.135.21</a>
~]# gluster peer status<br>
Number of Peers: 7<br>
<br>
Hostname: 10.10.135.28<br>
Uuid: f5ed0acf-9ef2-4378-801f-21c1e4c3ed7e<br>
State: Peer in Cluster (Connected)<br>
<br>
Hostname: 10.10.135.25<br>
Uuid: c3ed7be0-cd14-4c3a-9523-c3a059515faa<br>
State: Peer in Cluster (Connected)<br>
<br>
Hostname: 10.1.4.17<br>
Uuid: 6525bd4f-6f43-4eb7-b8c7-9860528a0cb6<br>
State: Peer in Cluster (Connected)<br>
<br>
Hostname: 10.10.135.24<br>
Uuid: 04bb84a6-a7f9-4b43-8c84-154914e807b5<br>
State: Peer in Cluster (Connected)<br>
<br>
Hostname: 10.10.135.23<br>
Uuid: 46651bb6-7584-4dc3-a32a-72ffee7c6775<br>
State: Peer in Cluster (Connected)<br>
<br>
Hostname: 10.10.135.27<br>
Uuid: a8b65039-eb5a-4787-a2f2-8cc963ceb09e<br>
State: Peer in Cluster (Connected)<br>
<br>
Hostname: 10.10.135.26<br>
Uuid: c02ac9eb-f0e5-4757-b31a-f9d22031ff38<br>
State: Peer in Cluster (Connected)<br>
<br>
<br>
mount<br>
10.10.135.24:/yfudis3rep2 on /mnt/yfudis3rep2 type fuse.glusterfs
(rw,default_permissions,allow_other,max_read=131072)<br>
<br>
<br>
<br>
<div class="gmail_quote">2012/8/14 Bryan Whitehead <span
dir="ltr"><<a moz-do-not-send="true"
href="mailto:driver@megahappy.net" target="_blank">driver@megahappy.net</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">Can you post
more details? For example: gluster volume info, gluster peer<br>
status, and the output of mount and df?<br>
<div>
<div class="h5"><br>
On Mon, Aug 13, 2012 at 10:42 PM, 符永涛 <<a
moz-do-not-send="true" href="mailto:yongtaofu@gmail.com">yongtaofu@gmail.com</a>>
wrote:<br>
> Hi all,<br>
><br>
> Can anyone help?<br>
> Here is more information about this issue.<br>
><br>
> For example, if I create abc.zip with<br>
> touch abc.zip<br>
> and then run<br>
> ls &<br>
> it hangs.<br>
> But if I run<br>
> rm -rf abc.zip<br>
> then ls returns many files with the same name, so there
seems to be a bug here. ls hangs<br>
> because it falls into a loop and the file stats are
not valid.<br>
><br>
> Thank you.<br>
><br>
><br>
><br>
> 2012/8/14 符永涛 <<a moz-do-not-send="true"
href="mailto:yongtaofu@gmail.com">yongtaofu@gmail.com</a>><br>
>><br>
>> Hi Gluster experts,<br>
>><br>
>><br>
>> I'm new to glusterfs and I have encountered a
problem about list directory<br>
>> of glusters 3.3.<br>
>><br>
>> I have a volume configuration of 3(distribute) *
2(replica). When write<br>
>> file on the glusterfs client mount directory some
of the files can't be<br>
>> listed through ls command but the file exists.
Some times the ls command<br>
>> hangs.<br>
>><br>
>><br>
>> Any one know what's the problem is?<br>
>><br>
>><br>
>> Thank you very much.<br>
>><br>
>> --<br>
>> 符永涛<br>
><br>
><br>
><br>
><br>
> --<br>
> 符永涛<br>
><br>
</div>
</div>
> _______________________________________________<br>
> Gluster-users mailing list<br>
> <a moz-do-not-send="true"
href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> <a moz-do-not-send="true"
href="http://gluster.org/cgi-bin/mailman/listinfo/gluster-users"
target="_blank">http://gluster.org/cgi-bin/mailman/listinfo/gluster-users</a><br>
><br>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
符永涛<br>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://gluster.org/cgi-bin/mailman/listinfo/gluster-users">http://gluster.org/cgi-bin/mailman/listinfo/gluster-users</a>
</pre>
</blockquote>
</body>
</html>