<html><head/><body><p style="color:#000000;">If you wrote directly to the bricks instead of via a glusterfs mountpoint, then you're missing xattrs, which confuses glusterfs. It can tell there's something there, but without xattrs it 'doesn't compute.'<br><br>I don't think that bug is related to your issue at all.<br><br><br>-------- Original Message --------<br> From: Gerald Brandt <gbr@majentis.com><br> Sent: Thu, Sep 13, 2012 03:24 PM<br> To: Lonni J Friedman <netllama@gmail.com><br> CC: gluster-users@gluster.org<br> Subject: Re: [Gluster-users] problems with replication & NFS<br><br></p>Hi,
<br>
<br>
You need to write to the gluster-mounted volume, not directly to the XFS-mounted brick.
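A minimal sketch of that workflow, reusing the server address and brick path from the volume info quoted below (names and paths are illustrative; adjust to your setup):

```shell
# Mount the gluster volume itself with the FUSE client -- not the
# brick's XFS filesystem -- and do all reads and writes through it.
mount -t glusterfs 10.31.99.165:/gv0 /mnt/gv0-fuse

# A file written through the gluster mount is replicated to both bricks:
echo hello > /mnt/gv0-fuse/test.txt

# On either server, the brick copy should now carry gluster's extended
# attributes (e.g. trusted.gfid, trusted.afr.*):
getfattr -d -m . -e hex /mnt/sdb1/test.txt
```

Files created straight under /mnt/sdb1 bypass this bookkeeping entirely, which is why they never replicate.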
<br>
<br>
Gerald
<br>
<br>
<br>
----- Original Message -----
<br>
> Greetings,
<br>
> I'm trying to set up a small GlusterFS test cluster, in order to gauge
<br>
> the feasibility of using it in a large production environment. I've
<br>
> been working through the official Admin Guide
<br>
> (Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf) along with
<br>
> the website setup instructions (
<br>
> <a href="http://www.gluster.org/community/documentation/index.php/Getting_started_overview">http://www.gluster.org/community/documentation/index.php/Getting_started_overview</a>
<br>
> ).
<br>
>
<br>
> What I have are two Fedora16-x86_64 servers, with a 20GB XFS
<br>
> formatted
<br>
> partition set aside as bricks. I'm using version 3.3.0. I set up
<br>
> each
<br>
> for replication, and it seems like it's set up & working:
<br>
> ####
<br>
> $ gluster volume info gv0
<br>
>
<br>
> Volume Name: gv0
<br>
> Type: Replicate
<br>
> Volume ID: 6c9fbbc7-e382-4f26-afae-60f8658207c5
<br>
> Status: Started
<br>
> Number of Bricks: 1 x 2 = 2
<br>
> Transport-type: tcp
<br>
> Bricks:
<br>
> Brick1: 10.31.99.166:/mnt/sdb1
<br>
> Brick2: 10.31.99.165:/mnt/sdb1
<br>
> ####
<br>
>
<br>
> This is where my problems begin. I assumed that if replication was
<br>
> truly working, then any changes to the contents of /mnt/sdb1 on one
<br>
> brick would automatically get replicated to the other brick.
<br>
> However,
<br>
> that isn't happening. In fact, nothing seems to be happening. I've
<br>
> added new files and changed pre-existing ones, yet none of it ever replicates
<br>
> to the other brick. Both bricks were empty prior to formatting the
<br>
> filesystem and setting them up for this test instance. Surely I must
<br>
> be missing something obvious, as something this fundamental & basic
<br>
> must work, right?
<br>
>
<br>
> Next problem is that my production environment would need to access
<br>
> the volume via NFS (rather than 'native' gluster). I had a 3rd
<br>
> system
<br>
> setup (also with Fedora16-x86_64), and was able to successfully NFS
<br>
> mount the gluster volume. Or so I thought. When I attempted to
<br>
> simply look at the files on the mount point (using 'ls'), it seemed
<br>
> to
<br>
> work at first, but then shortly afterwards, it failed with a cryptic
<br>
> "Invalid argument" error. So I manually unmounted, then remounted,
<br>
> and tried again. Once again, it worked ok for a few seconds, then
<br>
> died again with the same "Invalid argument" error:
<br>
> ########
<br>
> [root@cuda-fs3 basebackups]# mount -t nfs -o vers=3,mountproto=tcp
<br>
> 10.31.99.165:/gv0 /mnt/gv0
<br>
> [root@cuda-fs3 basebackups]# ls -l /mnt/gv0/
<br>
> total 8
<br>
> -rw-r--r-- 0 root root 6670 Sep 13 10:21 foo1
<br>
> [root@cuda-fs3 basebackups]# ls -l /mnt/gv0/
<br>
> total 8
<br>
> -rw-r--r-- 0 root root 6670 Sep 13 10:21 foo1
<br>
> [root@cuda-fs3 basebackups]# ls -l /mnt/gv0/
<br>
> ls: cannot access /mnt/gv0/foo1: Invalid argument
<br>
> total 0
<br>
> -????????? ? ? ? ? ? foo1
<br>
> ########
<br>
>
<br>
> The duration between the mount command invocation and the failed 'ls'
<br>
> command was literally about 5 seconds. I have numerous other
<br>
> traditional NFS mounts that work just fine. It's only the gluster
<br>
> volume that exhibits this behavior. I did some googling, and this
<br>
> bug
<br>
> seems to match my problem exactly:
<br>
> <a href="https://bugzilla.redhat.com/show_bug.cgi?id=800755">https://bugzilla.redhat.com/show_bug.cgi?id=800755</a>
<br>
>
<br>
> I can't quite tell from the bug whether it's actually fixed in the
<br>
> released 3.3.0, or not. Can someone clarify whether NFS is supposed
<br>
> to work in 3.3.0 ? Am I doing something wrong?
<br>
>
<br>
> thanks!
<br>
> _______________________________________________
<br>
> Gluster-users mailing list
<br>
> Gluster-users@gluster.org
<br>
> <a href="http://gluster.org/cgi-bin/mailman/listinfo/gluster-users">http://gluster.org/cgi-bin/mailman/listinfo/gluster-users</a>
<br>
>
<br>
_______________________________________________
<br>
Gluster-users mailing list
<br>
Gluster-users@gluster.org
<br>
<a href="http://gluster.org/cgi-bin/mailman/listinfo/gluster-users">http://gluster.org/cgi-bin/mailman/listinfo/gluster-users</a>
<br>
</body></html>