<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<br>
<div class="moz-cite-prefix">On 11/25/2014 05:59 AM, Derick Turner
wrote:<br>
</div>
<blockquote cite="mid:5473CD55.3040700@e-learndesign.co.uk"
type="cite">Gluster version is standard Ubuntu 14.04 LTS repo
version -
<br>
<br>
glusterfs 3.4.2 built on Jan 14 2014 18:05:37
<br>
Repository revision: git://git.gluster.com/glusterfs.git
<br>
Copyright (c) 2006-2011 Gluster Inc.
<a class="moz-txt-link-rfc2396E" href="http://www.gluster.com"><http://www.gluster.com></a>
<br>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
<br>
You may redistribute copies of GlusterFS under the terms of the
GNU General Public License.
<br>
<br>
<br>
The gluster volume heal &lt;volume&gt; info command produces a lot
of output. There are a number of &lt;gfid:hashnumber&gt; entries
for both nodes, and a few directories in the list as well.
I checked the directories on both nodes and the files appear
to be the same on each, so I resolved those issues. There
are, however, still a large number of gfid files listed by
the gluster volume heal eukleia info command. The gluster
volume heal eukleia info split-brain command also lists a
large number of gfid files plus one regular file, and that
file no longer exists on either of the bricks or on the
mounted filesystems.
<br>
<br>
Is there any way to clear these down or resolve this?
<br>
</blockquote>
Could you check how many files the following command reports?<br>
The command needs to be executed on the brick, against .glusterfs:<br>
<br>
find <i class="moz-txt-slash"><span class="moz-txt-tag">/</span>your/brick/directory<span
class="moz-txt-tag">/</span></i>.glusterfs -links 1 -type f<br>
<br>
All such files need to be deleted, or moved somewhere outside
the brick, I would guess.<br>
<br>
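If that find turns up orphans, one cautious way to act on the suggestion above is to quarantine the files rather than delete them outright. This is only a sketch, assuming bash; the brick and quarantine paths are placeholders to substitute with your own:

```shell
#!/usr/bin/env bash
# Quarantine .glusterfs entries whose link count is 1. Inside
# .glusterfs a healthy regular file has at least two hard links
# (the gfid entry plus the user-visible file on the brick);
# a count of 1 means only the gfid copy remains.
BRICK=/your/brick/directory      # placeholder: your brick root
QUARANTINE=/root/gfid-orphans    # placeholder: somewhere OFF the brick
mkdir -p "$QUARANTINE"

find "$BRICK/.glusterfs" -links 1 -type f -print0 |
  while IFS= read -r -d '' f; do
    mv -v "$f" "$QUARANTINE/$(basename "$f")"
  done
```

Keeping the quarantined copies means a mistakenly moved file can be restored later; gfid basenames are unique, so collisions in the quarantine directory should not occur.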
Pranith<br>
<blockquote cite="mid:5473CD55.3040700@e-learndesign.co.uk"
type="cite">
<br>
Thanks
<br>
<br>
Derick
<br>
<br>
<br>
On 24/11/14 05:32, Pranith Kumar Karampuri wrote:
<br>
<blockquote type="cite">
<br>
On 11/21/2014 05:33 AM, Derick Turner wrote:
<br>
<blockquote type="cite">I have a new setup which has been
running for a few weeks. Due to a configuration issue the
self-heal wasn't working properly and I ended up with the
system in a bit of a state. I've been chasing down issues and
it should (fingers crossed) be back and stable again. One
issue which seems to keep recurring is that on one of the
bricks I get a load of gfid files which don't exist anywhere
else. The inodes of these files are referenced only by the
gfid entry, and they appear to keep coming back.
<br>
<br>
Volume is set up as such
<br>
<br>
root@vader:/gluster/eukleiahome/intertrust/moodledata# gluster
volume info eukleiaweb
<br>
<br>
Volume Name: eukleiaweb
<br>
Type: Replicate
<br>
Volume ID: d8a29f07-7f3e-46a3-9ec4-4281038267ce
<br>
Status: Started
<br>
Number of Bricks: 1 x 2 = 2
<br>
Transport-type: tcp
<br>
Bricks:
<br>
Brick1: lando:/gluster/eukleiahome
<br>
Brick2: vader:/gluster/eukleiahome
<br>
<br>
and the filesystems are mounted via NFS.
<br>
<br>
In the logs of the host for Brick one I get the following
(e.g.)
<br>
<br>
[2014-11-20 23:53:55.910705] W
[client-rpc-fops.c:471:client3_3_open_cbk]
0-eukleiaweb-client-1: remote operation failed: No such file
or directory. Path:
<gfid:e5d25375-ecb8-47d2-833f-0586b659f98a>
(00000000-0000-0000-0000-000000000000)
<br>
[2014-11-20 23:53:55.910721] E
[afr-self-heal-data.c:1270:afr_sh_data_open_cbk]
0-eukleiaweb-replicate-0: open of
<gfid:e5d25375-ecb8-47d2-833f-0586b659f98a> failed on
child eukleiaweb-client-1 (No such file or directory)
<br>
[2014-11-20 23:53:55.921425] W
[client-rpc-fops.c:1538:client3_3_inodelk_cbk]
0-eukleiaweb-client-1: remote operation failed: No such file
or directory
<br>
<br>
When I check this gfid it exists on Brick 1 but not on
Brick 2 (which I assume is due to the error above).
Additionally, when I check for the file that this gfid
references, the search doesn't lead anywhere, i.e.:
</blockquote>
Which version of gluster are you using? Could you check if there
are any directories that need to be healed, using "gluster
volume heal <volname> info"?
<br>
<br>
Pranith
<br>
<blockquote type="cite">
<br>
root@lando:/gluster/eukleiahome# find . -samefile
.glusterfs/e5/d2/e5d25375-ecb8-47d2-833f-0586b659f98a
<br>
./.glusterfs/e5/d2/e5d25375-ecb8-47d2-833f-0586b659f98a
<br>
<br>
root@lando:/gluster/eukleiahome# file
.glusterfs/e5/d2/e5d25375-ecb8-47d2-833f-0586b659f98a
<br>
.glusterfs/e5/d2/e5d25375-ecb8-47d2-833f-0586b659f98a: JPEG
image data, EXIF standard
<br>
<br>
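The two lookups above generalize to any gfid taken from the logs: the .glusterfs entry for a gfid lives under its first two byte pairs, and find -samefile lists every path sharing that inode. A sketch (assuming bash, with the brick path and gfid from this thread as examples):

```shell
#!/usr/bin/env bash
# Resolve a gfid from the logs to its on-brick path(s).
BRICK=/gluster/eukleiahome                     # example brick root
GFID=e5d25375-ecb8-47d2-833f-0586b659f98a      # gfid from the log above

# .glusterfs shards entries by the first two byte pairs of the gfid,
# e.g. e5/d2/e5d25375-...
GFID_PATH="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"

# A healthy file prints at least two paths (the gfid entry plus the
# real file); a single result means the gfid entry is orphaned.
find "$BRICK" -samefile "$GFID_PATH"
```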
I have tried removing these files using rm
.glusterfs/e5/d2/e5d25375-ecb8-47d2-833f-0586b659f98a but
either not all of the occurrences have been logged in
/var/log/glusterfs/glusterfsd.log (I am clearing out all
that I can find) or they are re-appearing.
<br>
<br>
Firstly, is this something to worry about? Secondly, should I
be able to simply get rid of them (assuming I'm mistaken
about them re-appearing), and if so, is simply removing them
the best method?
<br>
<br>
Thanks
<br>
<br>
Derick
<br>
<br>
</blockquote>
<br>
</blockquote>
<br>
</blockquote>
<br>
</body>
</html>