<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, May 15, 2014 at 1:16 AM, Vincent Caron <span dir="ltr"><<a href="mailto:vcaron@bearstech.com" target="_blank">vcaron@bearstech.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br>
<br>
sorry for the strange title, but it might be possible that those<br>
symptoms are related...<br>
<br>
I'm running a Gluster 3.4.2 volume with 8 bricks in distribute mode.<br>
<br>
The easiest problem to diagnose: sometimes a folder enters a state<br>
where any attempt to create a new file or folder inside it fails with<br>
EINVAL (whether running as root or not).<br>
<br>
These errors may or may not persist (sometimes the problem fixes itself<br>
a while later), and they seem to be related to this kind of entry in<br>
the client's log:<br>
<br>
[2014-05-14 19:17:12.937364] I [dht-common.c:623:dht_revalidate_cbk]<br>
0-xxx-prod-dht: mismatching layouts for /uploads/images/5373<br>
[2014-05-14 19:17:12.938960] I<br>
[dht-layout.c:726:dht_layout_dir_mismatch] 0-xxx-prod-dht:<br>
/uploads/images/5373/9c8b - disk layout missing<br>
<br>
... where /uploads/images/5373 is a folder that I can list, with readable<br>
files, but in which creating new files and folders is impossible.<br>
<br>
<br>
Meanwhile, I had earlier tried to remove a brick from this 8-brick<br>
volume. The rebalance operation took ages and I had to cancel it<br>
because I was suffering from performance problems. Now the 8th brick is<br>
still shown in the volume status but is marked 'decommissioned' in the<br>
internal metadata, and it has this property: while all 8 bricks have<br>
roughly the same inode count (about 5 million), only the first 7 have<br>
balanced block usage at around 800 GB each, while the 8th has been stuck<br>
at 270 GB (the point at which I tried to remove it).<br>
<br>
That looks expected: the 8th brick is still part of the volume, but it<br>
only holds 'historical' data which is still needed. I'd like to re-add<br>
this brick to my volume; is it as simple as issuing a 'volume add-brick'<br>
and then a 'volume rebalance'?<br></blockquote><div><br></div><div>If you haven't yet committed the remove-brick operation, you can do a remove-brick stop followed by a rebalance start:<br><br></div><div>
i.e. gluster volume remove-brick <vol> <brick> stop<br></div><div> gluster volume rebalance <vol> start <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
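<div>For reference, a sketch of the full sequence follows. It requires a live GlusterFS cluster; the volume name and brick path below are placeholders, and the status checks are an assumption to confirm the remove-brick has not yet been committed:<br><br></div><div>

```shell
# Check whether the remove-brick migration is still pending (not committed);
# the volume/brick names here are placeholders for your own.
gluster volume remove-brick xxx-prod server8:/export/brick8 status

# Abort the pending remove-brick so the 8th brick stays in the volume
gluster volume remove-brick xxx-prod server8:/export/brick8 stop

# Rebalance data back across all 8 bricks (fixes directory layouts as it goes)
gluster volume rebalance xxx-prod start

# Watch progress until the rebalance completes
gluster volume rebalance xxx-prod status
```

</div><div>Note that once a remove-brick has been committed, the brick is gone from the volume and would have to be re-added with 'volume add-brick' followed by a rebalance; the stop path above only applies while the removal is still pending.<br></div>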
<br>
In any case, if there's a way to avoid these bugs, I'd be interested to<br>
hear about it. Thanks for your experience and your help!<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div><br><br clear="all"><br>-- <br>Thanks & Regards<br>Shylesh Kumar M<br>
</div></div>