<div dir="ltr"><div><div><div><br></div>I can't unmount the zfs brick filesystem even after stopping glusterfs on the bad node.<br><br></div>Worse, the bad data now seems to have propagated to the good brick. <br><br></div>
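In case the details matter, this is roughly what I'm trying in order to see what is still holding the mount (the /zfs-test mountpoint is assumed from the pool name earlier in the thread):

```shell
# List processes still holding the brick mountpoint open
fuser -vm /zfs-test

# If a leftover brick process (glusterfsd) shows up, stop it and retry
pkill glusterfsd
zfs umount zfs-test      # or: umount /zfs-test
```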
<div>Can I just treat the node as having gone bad?<br></div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jan 9, 2013 at 9:34 AM, Todd Pfaff <span dir="ltr"><<a href="mailto:pfaff@rhpcs.mcmaster.ca" target="_blank">pfaff@rhpcs.mcmaster.ca</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Liang,<br>
<br>
I suppose my choice of words was misleading. What I mean is:<br>
<br>
- unmount the corrupted brick filesystem<br>
- try to check and repair the brick filesystem<br>
- if repair fails, re-create the filesystem<br>
- remount the brick filesystem<br>
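On a zfs brick those steps might look something like the following sketch. The pool name zfs-test comes from your earlier mail; note zfs has no fsck, so a scrub is my guess at the closest check-and-repair step, and the service name is the Ubuntu packaging default:

```shell
# Stop gluster so nothing holds the brick open
service glusterfs-server stop

# zfs has no fsck; scrub the pool and inspect the result
zpool scrub zfs-test
zpool status -v zfs-test    # wait for the scrub to finish; lists damaged files

# If the scrub cannot repair it, re-create the filesystem, e.g.:
#   zpool destroy zfs-test && zpool create zfs-test sda6

# Remount the brick and restart gluster
zfs mount zfs-test
service glusterfs-server start
```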
<br>
but, as I said, I'm not very familiar with zfs. Based on my quick glance<br>
at some zfs documentation it sounds to me like online zfs check-and-repair<br>
may be possible (this is oracle zfs documentation and I have no idea how<br>
the linux zfs implementation compares):<br>
<br>
<a href="http://docs.oracle.com/cd/E23823_01/html/819-5461/gbbwa.html" target="_blank">http://docs.oracle.com/cd/E23823_01/html/819-5461/gbbwa.html</a><br>
<br>
but since you're a zfs user you likely already know much more about zfs<br>
than I do.<span class="HOEnZb"><font color="#888888"><br>
<br>
Todd</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
On Wed, 9 Jan 2013, Liang Ma wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Todd,<br>
<br>
Thanks for your reply. But how can I take this brick offline? Since the<br>
gluster volume has replica count 2, it won't allow me to remove one brick.<br>
Is there a command that can take one replica brick offline?<br>
<br>
Many thanks.<br>
<br>
Liang<br>
<br>
<br>
On Tue, Jan 8, 2013 at 3:02 PM, Todd Pfaff <<a href="mailto:pfaff@rhpcs.mcmaster.ca" target="_blank">pfaff@rhpcs.mcmaster.ca</a>> wrote:<br>
Liang,<br>
<br>
I don't claim to know the answer to your question, and my<br>
knowledge of zfs is minimal at best, so I may be way off base here,<br>
but it seems to me that your attempted random corruption with this command:<br>
<br>
dd if=/dev/urandom of=/dev/sda6 bs=1024 count=20480<br>
<br>
is likely going to corrupt the underlying zfs filesystem metadata, not<br>
just file data, and I wouldn't expect gluster to be able to fix a<br>
brick's corrupted filesystem. Perhaps you now have to take the brick<br>
offline, fix any zfs filesystem errors if possible, bring the brick<br>
back online and see what then happens with self-heal.<br>
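One way to take a single replica brick offline without removing it from the volume might look like this; the commands assume gluster 3.3 and the gtest volume from the original mail, so treat it as a sketch rather than a recipe:

```shell
# Show the per-brick processes and their PIDs, then kill only
# the bad brick's process; the volume stays up on the other replica
gluster volume status gtest
kill <pid-of-gluster3:/zfs-test-brick>   # placeholder for the PID shown above

# ...repair or re-create the zfs filesystem underneath...

# Restart the stopped brick process and trigger a full self-heal
gluster volume start gtest force
gluster volume heal gtest full
```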
<br>
--<br>
Todd Pfaff <<a href="mailto:pfaff@mcmaster.ca" target="_blank">pfaff@mcmaster.ca</a>><br>
<a href="http://www.rhpcs.mcmaster.ca/" target="_blank">http://www.rhpcs.mcmaster.ca/</a><br>
<br>
On Tue, 8 Jan 2013, Liang Ma wrote:<br>
<br>
Hi There,<br>
<br>
I'd like to test and understand the self-heal feature of<br>
glusterfs. This is what I did with 3.3.1-ubuntu1~precise4 on<br>
Ubuntu 12.04.1 LTS.<br>
<br>
gluster volume create gtest replica 2 gluster3:/zfs-test<br>
gluster4:/zfs-test<br>
where zfs-test is a zfs pool on partition /dev/sda6 on<br>
both nodes.<br>
<br>
To simulate a random corruption on node gluster3<br>
<br>
dd if=/dev/urandom of=/dev/sda6 bs=1024 count=20480<br>
<br>
Now zfs detected the corrupted files<br>
<br>
pool: zfs-test<br>
state: ONLINE<br>
status: One or more devices has experienced an error<br>
resulting in data<br>
corruption. Applications may be affected.<br>
action: Restore the file in question if possible.<br>
Otherwise restore the<br>
entire pool from backup.<br>
see: <a href="http://zfsonlinux.org/msg/ZFS-8000-8A" target="_blank">http://zfsonlinux.org/msg/ZFS-8000-8A</a><br>
scan: none requested<br>
config:<br>
<br>
NAME STATE READ WRITE CKSUM<br>
zfs-test ONLINE 0 0 2.29K<br>
sda6 ONLINE 0 0 4.59K<br>
<br>
errors: Permanent errors have been detected in the<br>
following files:<br>
<br>
/zfs-test/&lt;xattrdir&gt;/trusted.gfid<br>
<br>
/zfs-test/.glusterfs/b0/1e/b01ec17c-14cc-4999-938b-b4a71e358b46<br>
<br>
/zfs-test/.glusterfs/b0/1e/b01ec17c-14cc-4999-938b-b4a71e358b46/&lt;xattrdir&gt;/trusted.gfid<br>
<br>
/zfs-test/.glusterfs/dd/8c/dd8c6797-18c3-4f3b-b1ca-86def2b578c5/&lt;xattrdir&gt;/trusted.gfid<br>
<br>
Now the gluster log file shows that self-heal can't fix the<br>
corruption:<br>
[2013-01-08 12:46:03.371214] W [afr-common.c:1196:afr_detect_self_heal_by_iatt] 2-gtest-replicate-0: /K.iso: gfid different on subvolume<br>
[2013-01-08 12:46:03.373539] E [afr-self-heal-common.c:1419:afr_sh_common_lookup_cbk] 2-gtest-replicate-0: Missing Gfids for /K.iso<br>
[2013-01-08 12:46:03.385701] E [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 2-gtest-replicate-0: background gfid self-heal failed on /K.iso<br>
[2013-01-08 12:46:03.385760] W [fuse-bridge.c:292:fuse_entry_cbk] 0-glusterfs-fuse: 11901: LOOKUP() /K.iso => -1 (No data available)<br>
<br>
where K.iso is one of the sample files affected by the dd<br>
command.<br>
<br>
So could anyone tell me what is the best way to repair the<br>
simulated<br>
corruption?<br>
<br>
Thank you.<br>
<br>
Liang<br>
<br>
<br>
</blockquote>
</div></div></blockquote></div><br></div>