I worked on this issue this morning and could find nothing to indicate it wouldn't work. I was down to 45 free inodes (according to xfs_db), so I brought down one of the nodes, added the inode64 option to /etc/fstab, remounted the partition and restarted gluster. Everything appears to be working normally, so I applied the same option to my other server, and again everything is working normally.
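
For reference, the change amounts to roughly this (the device name and mount point below are just placeholders, not my real ones):

    # /etc/fstab - add inode64 to the XFS brick's mount options
    /dev/sdb1  /file/data  xfs  defaults,inode64  0 0

    # then pick up the new options by unmounting and mounting again
    # (I had the node down anyway)
    umount /file/data
    mount /file/data

As far as I can tell, inode64 only changes where new inodes may be allocated; existing files keep the inode numbers they already have.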

I'll let you know after we run with this for a few days, but so far everything is fine and working normally. I'm on CentOS 5.3 x86_64, by the way.

An interesting note: after applying the inode64 option, the "ifree" output from xfs_db didn't actually change, but the filesystem is working normally. I found a bunch of posts on the web from people who had that exact experience.
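
In case anyone wants to double-check their own counters, this is roughly how I looked at it (assuming the brick device is /dev/sdb1 and it is mounted at /file/data; both are placeholders):

    # allocated vs. free inode counters straight from the superblock
    xfs_db -r -c "sb 0" -c "print" /dev/sdb1 | grep -E "icount|ifree"

    # inode usage as the kernel reports it
    df -i /file/data

My understanding is that XFS allocates inode chunks on demand, and inode64 only changes where future chunks may be placed, so an unchanged ifree right after applying the option seems to be expected rather than a problem.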

liam

On Tue, Jul 28, 2009 at 2:55 AM, Roger Torrentsgenerós <rtorrents@flumotion.com> wrote:

Any light on this issue?

To sum up, I need to know if GlusterFS is capable of dealing with an xfs
filesystem mounted with the inode64 option, or if it will be able to
shortly.

Sorry, but it's an urgent matter to me.

Thanks a lot.

Roger

On Thu, 2009-07-23 at 04:13 -0700, Liam Slusser wrote:
>
> This brings up an interesting question. I just had a look at our large
> cluster (which is using xfs) and I've eaten up 98% of my free inodes. I am
> mounting the filesystem without any options, so I assume I am not using the
> inode64 option (I don't believe it is the default even on 64-bit systems).
>
> Is gluster happy with the inode64 mount option on xfs? My understanding is
> that as long as the binary is 64-bit it shouldn't have any issue (I'm using
> a 64-bit gluster binary I compiled myself).
>
> Anybody have some insight into this?
>
> liam
>
> On Thu, Jul 23, 2009 at 2:59 AM, Matthias Saou
> <thias@spam.spam.spam.spam.spam.spam.spam.egg.and.spam.freshrpms.net>
> wrote:
> Hi,
>
> Replying to myself with some more details: the servers are 64-bit (x86_64)
> whereas the clients are 32-bit (ix86). It seems like this could be the
> cause of this problem...
>
> http://oss.sgi.com/archives/xfs/2009-07/msg00044.html
>
> But if the glusterfs client doesn't know about the original inodes of the
> files, then it should be possible to fix, right?
>
> Matthias
>
> Matthias Saou wrote:
>
> > Hi,
> >
> > (Note: I have access to the systems referenced in the initial post)
> >
> > I think I've found the problem. It's the filesystem, XFS, which has been
> > mounted with the "inode64" option, as it can't be mounted without it now
> > that it has been grown to 39TB. I've just checked this:
> >
> > # ls -1 -ai /file/data/cust | sort -n
> >
> > And the last few lines are like this:
> >
> > [...]
> > 2148235729 cust2
> > 2148236297 cust6
> > 2148236751 cust5
> > 2148236974 cust7
> > 2148237729 cust3
> > 2148239365 cust4
> > 2156210172 cust8
> > 61637541899 cust1
> > 96636784146 cust9
> >
> > Note that "cust1" here is the one where the problem was initially seen.
> > I've just checked, and the "cust9" directory is affected in exactly the
> > same way.
> >
> > So it seems like the glusterfs build being used has problems with 64-bit
> > inodes. Is this a known limitation? Is it easy to fix or work around?
> >
> > Matthias
> >
> > Roger Torrentsgenerós wrote:
> >
> > > We have 2 servers, let's name them file01 and file02. They are synced
> > > very frequently, so we can assume contents are the same. Then we have
> > > lots of clients, each of which has two glusterfs mounts, one against
> > > every file server.
> > >
> > > Before you ask, let me say the clients are in a production environment,
> > > where I can't afford any downtime. To make the migration from glusterfs
> > > v1.3 to glusterfs v2.0 as smooth as possible, I recompiled the packages
> > > to run under the glusterfs2 name. Servers are running two instances of
> > > the glusterfs daemon, and the old one is to be stopped when the
> > > migration is complete. So you'll be seeing some glusterfs2 names and
> > > build dates that may look unusual, but you'll also see this has nothing
> > > to do with this matter.
> > >
> > > file01 server log:
> > >
> > > ================================================================================
> > > Version : glusterfs 2.0.1 built on May 26 2009 05:11:51
> > > TLA Revision : 5c1d9108c1529a1155963cb1911f8870a674ab5b
> > > Starting Time: 2009-07-14 18:07:12
> > > Command line : /usr/sbin/glusterfsd2 -p /var/run/glusterfsd2.pid
> > > PID : 6337
> > > System name : Linux
> > > Nodename : file01
> > > Kernel Release : 2.6.18-128.1.14.el5
> > > Hardware Identifier: x86_64
> > >
> > > Given volfile:
> > >
> > > +------------------------------------------------------------------------------+
> > > 1: # The data store directory to serve
> > > 2: volume filedata-ds
> > > 3: type storage/posix
> > > 4: option directory /file/data
> > > 5: end-volume
> > > 6:
> > > 7: # Make the data store read-only
> > > 8: volume filedata-readonly
> > > 9: type testing/features/filter
> > > 10: option read-only on
> > > 11: subvolumes filedata-ds
> > > 12: end-volume
> > > 13:
> > > 14: # Optimize
> > > 15: volume filedata-iothreads
> > > 16: type performance/io-threads
> > > 17: option thread-count 64
> > > 18: # option autoscaling on
> > > 19: # option min-threads 16
> > > 20: # option max-threads 256
> > > 21: subvolumes filedata-readonly
> > > 22: end-volume
> > > 23:
> > > 24: # Add readahead feature
> > > 25: volume filedata
> > > 26: type performance/read-ahead # cache per file = (page-count x page-size)
> > > 27: # option page-size 256kB # 256KB is the default option ?
> > > 28: # option page-count 8 # 16 is default option ?
> > > 29: subvolumes filedata-iothreads
> > > 30: end-volume
> > > 31:
> > > 32: # Main server section
> > > 33: volume server
> > > 34: type protocol/server
> > > 35: option transport-type tcp
> > > 36: option transport.socket.listen-port 6997
> > > 37: subvolumes filedata
> > > 38: option auth.addr.filedata.allow 192.168.128.* # streamers
> > > 39: option verify-volfile-checksum off # don't have clients complain
> > > 40: end-volume
> > > 41:
> > >
> > > +------------------------------------------------------------------------------+
> > > [2009-07-14 18:07:12] N [glusterfsd.c:1152:main] glusterfs: Successfully started
> > >
> > > file02 server log:
> > >
> > > ================================================================================
> > > Version : glusterfs 2.0.1 built on May 26 2009 05:11:51
> > > TLA Revision : 5c1d9108c1529a1155963cb1911f8870a674ab5b
> > > Starting Time: 2009-06-28 08:42:13
> > > Command line : /usr/sbin/glusterfsd2 -p /var/run/glusterfsd2.pid
> > > PID : 5846
> > > System name : Linux
> > > Nodename : file02
> > > Kernel Release : 2.6.18-92.1.10.el5
> > > Hardware Identifier: x86_64
> > >
> > > Given volfile:
> > >
> > > +------------------------------------------------------------------------------+
> > > 1: # The data store directory to serve
> > > 2: volume filedata-ds
> > > 3: type storage/posix
> > > 4: option directory /file/data
> > > 5: end-volume
> > > 6:
> > > 7: # Make the data store read-only
> > > 8: volume filedata-readonly
> > > 9: type testing/features/filter
> > > 10: option read-only on
> > > 11: subvolumes filedata-ds
> > > 12: end-volume
> > > 13:
> > > 14: # Optimize
> > > 15: volume filedata-iothreads
> > > 16: type performance/io-threads
> > > 17: option thread-count 64
> > > 18: # option autoscaling on
> > > 19: # option min-threads 16
> > > 20: # option max-threads 256
> > > 21: subvolumes filedata-readonly
> > > 22: end-volume
> > > 23:
> > > 24: # Add readahead feature
> > > 25: volume filedata
> > > 26: type performance/read-ahead # cache per file = (page-count x page-size)
> > > 27: # option page-size 256kB # 256KB is the default option ?
> > > 28: # option page-count 8 # 16 is default option ?
> > > 29: subvolumes filedata-iothreads
> > > 30: end-volume
> > > 31:
> > > 32: # Main server section
> > > 33: volume server
> > > 34: type protocol/server
> > > 35: option transport-type tcp
> > > 36: option transport.socket.listen-port 6997
> > > 37: subvolumes filedata
> > > 38: option auth.addr.filedata.allow 192.168.128.* # streamers
> > > 39: option verify-volfile-checksum off # don't have clients complain
> > > 40: end-volume
> > > 41:
> > >
> > > +------------------------------------------------------------------------------+
> > > [2009-06-28 08:42:13] N [glusterfsd.c:1152:main] glusterfs: Successfully started
> > >
> > > Now let's pick a random client, for example streamer013, and see its
> > > log:
> > >
> > > ================================================================================
> > > Version : glusterfs 2.0.1 built on May 26 2009 05:23:52
> > > TLA Revision : 5c1d9108c1529a1155963cb1911f8870a674ab5b
> > > Starting Time: 2009-07-22 18:34:31
> > > Command line : /usr/sbin/glusterfs2 --log-level=NORMAL --volfile-server=file02.priv --volfile-server-port=6997 /mnt/file02
> > > PID : 14519
> > > System name : Linux
> > > Nodename : streamer013
> > > Kernel Release : 2.6.18-92.1.10.el5PAE
> > > Hardware Identifier: i686
> > >
> > > Given volfile:
> > >
> > > +------------------------------------------------------------------------------+
> > > 1: # filedata
> > > 2: volume filedata
> > > 3: type protocol/client
> > > 4: option transport-type tcp
> > > 5: option remote-host file02.priv
> > > 6: option remote-port 6997 # use non default to run in parallel
> > > 7: option remote-subvolume filedata
> > > 8: end-volume
> > > 9:
> > > 10: # Add readahead feature
> > > 11: volume readahead
> > > 12: type performance/read-ahead # cache per file = (page-count x page-size)
> > > 13: # option page-size 256kB # 256KB is the default option ?
> > > 14: # option page-count 2 # 16 is default option ?
> > > 15: subvolumes filedata
> > > 16: end-volume
> > > 17:
> > > 18: # Add threads
> > > 19: volume iothreads
> > > 20: type performance/io-threads
> > > 21: option thread-count 8
> > > 22: # option autoscaling on
> > > 23: # option min-threads 16
> > > 24: # option max-threads 256
> > > 25: subvolumes readahead
> > > 26: end-volume
> > > 27:
> > > 28: # Add IO-Cache feature
> > > 29: volume iocache
> > > 30: type performance/io-cache
> > > 31: option cache-size 64MB # default is 32MB (in 1.3)
> > > 32: option page-size 256KB # 128KB is default option (in 1.3)
> > > 33: subvolumes iothreads
> > > 34: end-volume
> > > 35:
> > >
> > > +------------------------------------------------------------------------------+
> > > [2009-07-22 18:34:31] N [glusterfsd.c:1152:main] glusterfs: Successfully started
> > > [2009-07-22 18:34:31] N [client-protocol.c:5557:client_setvolume_cbk] filedata: Connected to 192.168.128.232:6997, attached to remote volume 'filedata'.
> > > [2009-07-22 18:34:31] N [client-protocol.c:5557:client_setvolume_cbk] filedata: Connected to 192.168.128.232:6997, attached to remote volume 'filedata'.
> > >
> > > The mounts seem ok:
> > >
> > > [root@streamer013 /]# mount
> > > /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
> > > proc on /proc type proc (rw)
> > > sysfs on /sys type sysfs (rw)
> > > devpts on /dev/pts type devpts (rw,gid=5,mode=620)
> > > /dev/sda1 on /boot type ext3 (rw)
> > > tmpfs on /dev/shm type tmpfs (rw)
> > > none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
> > > glusterfs#file01.priv on /mnt/file01 type fuse (rw,max_read=131072,allow_other,default_permissions)
> > > glusterfs#file02.priv on /mnt/file02 type fuse (rw,max_read=131072,allow_other,default_permissions)
> > >
> > > They work:
> > >
> > > [root@streamer013 /]# ls /mnt/file01/
> > > cust
> > > [root@streamer013 /]# ls /mnt/file02/
> > > cust
> > >
> > > And they are seen by both servers:
> > >
> > > file01:
> > >
> > > [2009-07-22 18:34:19] N [server-helpers.c:723:server_connection_destroy] server: destroyed connection of streamer013.p4.bt.bcn.flumotion.net-14335-2009/07/22-18:34:13:210609-filedata
> > > [2009-07-22 18:34:31] N [server-protocol.c:7796:notify] server: 192.168.128.213:1017 disconnected
> > > [2009-07-22 18:34:31] N [server-protocol.c:7796:notify] server: 192.168.128.213:1018 disconnected
> > > [2009-07-22 18:34:31] N [server-protocol.c:7035:mop_setvolume] server: accepted client from 192.168.128.213:1017
> > > [2009-07-22 18:34:31] N [server-protocol.c:7035:mop_setvolume] server: accepted client from 192.168.128.213:1018
> > >
> > > file02:
> > >
> > > [2009-07-22 18:34:20] N [server-helpers.c:723:server_connection_destroy] server: destroyed connection of streamer013.p4.bt.bcn.flumotion.net-14379-2009/07/22-18:34:13:267495-filedata
> > > [2009-07-22 18:34:31] N [server-protocol.c:7796:notify] server: 192.168.128.213:1014 disconnected
> > > [2009-07-22 18:34:31] N [server-protocol.c:7796:notify] server: 192.168.128.213:1015 disconnected
> > > [2009-07-22 18:34:31] N [server-protocol.c:7035:mop_setvolume] server: accepted client from 192.168.128.213:1015
> > > [2009-07-22 18:34:31] N [server-protocol.c:7035:mop_setvolume] server: accepted client from 192.168.128.213:1014
> > >
> > > Now let's see the funny things. First, a content listing of a
> > > particular directory, locally from both servers:
> > >
> > > [root@file01 ~]# ls /file/data/cust/cust1
> > > configs files outgoing reports
> > >
> > > [root@file02 ~]# ls /file/data/cust/cust1
> > > configs files outgoing reports
> > >
> > > Now let's try to see the same from the client side:
> > >
> > > [root@streamer013 /]# ls /mnt/file01/cust/cust1
> > > ls: /mnt/file01/cust/cust1: No such file or directory
> > > [root@streamer013 /]# ls /mnt/file02/cust/cust1
> > > configs files outgoing reports
> > >
> > > Oops :( And the client log says:
> > >
> > > [2009-07-22 18:41:22] W [fuse-bridge.c:1651:fuse_opendir] glusterfs-fuse: 64: OPENDIR (null) (fuse_loc_fill() failed)
> > >
> > > While neither of the server logs says anything.
> > >
> > > So the files really exist on the servers, but the same client can see
> > > them on one of the filers and not on the other, although both are
> > > running exactly the same software. But there's more. It seems to
> > > happen only for certain directories (I can't show you the contents
> > > due to privacy, but I guess you'll figure it out):
> > >
> > > [root@streamer013 /]# ls /mnt/file01/cust/|wc -l
> > > 95
> > > [root@streamer013 /]# ls /mnt/file02/cust/|wc -l
> > > 95
> > > [root@streamer013 /]# for i in `ls /mnt/file01/cust/`; do ls /mnt/file01/cust/$i; done|grep such
> > > ls: /mnt/file01/cust/cust1: No such file or directory
> > > ls: /mnt/file01/cust/cust2: No such file or directory
> > > [root@streamer013 /]# for i in `ls /mnt/file02/cust/`; do ls /mnt/file02/cust/$i; done|grep such
> > > [root@streamer013 /]#
> > >
> > > And of course, our client log shows the error twice:
> > >
> > > [2009-07-22 18:49:21] W [fuse-bridge.c:1651:fuse_opendir] glusterfs-fuse: 2119: OPENDIR (null) (fuse_loc_fill() failed)
> > > [2009-07-22 18:49:21] W [fuse-bridge.c:1651:fuse_opendir] glusterfs-fuse: 2376: OPENDIR (null) (fuse_loc_fill() failed)
> > >
> > > I hope I have been clear enough this time. If you need more data just
> > > let me know and I'll see what I can do.
> > >
> > > And thanks again for your help.
> > >
> > > Roger
> > >
> > > On Wed, 2009-07-22 at 09:10 -0700, Anand Avati wrote:
> > > > > I have been witnessing some strange behaviour with my GlusterFS
> > > > > system. Fact is there are some files which exist and are completely
> > > > > accessible on the server, while they can't be accessed from a
> > > > > client, while other files can.
> > > > >
> > > > > To be sure, I copied the same files to another directory and I
> > > > > still was unable to see them from the client. To be sure it wasn't
> > > > > any kind of file permissions, selinux or whatever issue, I created
> > > > > a copy from a working directory, and it still wasn't seen from the
> > > > > client. All I get is a:
> > > > >
> > > > > ls: .: No such file or directory
> > > > >
> > > > > And the client log says:
> > > > >
> > > > > [2009-07-22 14:04:18] W [fuse-bridge.c:1651:fuse_opendir] glusterfs-fuse: 104778: OPENDIR (null) (fuse_loc_fill() failed)
> > > > >
> > > > > While the server log says nothing.
> > > > >
> > > > > Funniest thing is the same client has another GlusterFS mount to
> > > > > another server, which has exactly the same contents as the first
> > > > > one, and this mount does work.
> > > > >
> > > > > Some data:
> > > > >
> > > > > [root@streamer001 /]# ls /mnt/file01/cust/cust1/
> > > > > ls: /mnt/file01/cust/cust1/: No such file or directory
> > > > >
> > > > > [root@streamer001 /]# ls /mnt/file02/cust/cust1/
> > > > > configs files outgoing reports
> > > > >
> > > > > [root@streamer001 /]# mount
> > > > > /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
> > > > > proc on /proc type proc (rw)
> > > > > sysfs on /sys type sysfs (rw)
> > > > > devpts on /dev/pts type devpts (rw,gid=5,mode=620)
> > > > > /dev/sda1 on /boot type ext3 (rw)
> > > > > tmpfs on /dev/shm type tmpfs (rw)
> > > > > none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
> > > > > sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
> > > > > glusterfs#file01.priv on /mnt/file01 type fuse (rw,max_read=131072,allow_other,default_permissions)
> > > > > glusterfs#file02.priv on /mnt/file02 type fuse (rw,max_read=131072,allow_other,default_permissions)
> > > > >
> > > > > [root@file01 /]# ls /file/data/cust/cust1
> > > > > configs files outgoing reports
> > > > >
> > > > > [root@file02 /]# ls /file/data/cust/cust1
> > > > > configs files outgoing reports
> > > > >
> > > > > Any ideas?
> > > >
> > > > Can you please post all your client and server logs and volfiles?
> > > > Are you quite certain that this is not a result of some
> > > > misconfiguration?
> > > >
> > > > Avati
> > >
> > >
> --
> Clean custom Red Hat Linux rpm packages: http://freshrpms.net/
> Fedora release 10 (Cambridge) - Linux kernel 2.6.27.25-170.2.72.fc10.x86_64
> Load : 0.50 3.32 2.58
>

_______________________________________________
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel