Was this the client log or the glustershd log?<div><br></div><div>Thanks,</div><div>Avati<br><br><div class="gmail_quote">On Mon, Jul 9, 2012 at 8:23 AM, Jake Grimmett <span dir="ltr"><<a href="mailto:jog@mrc-lmb.cam.ac.uk" target="_blank">jog@mrc-lmb.cam.ac.uk</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Fernando / Christian,<br>
<br>
Many thanks for getting back to me.<br>
<br>
Slow writes are acceptable; most of our VMs are small web servers with low traffic. My aim is to have a fully self-contained two-server KVM cluster with live migration, no external storage, and the ability to reboot either node with zero VM downtime. We seem to be "almost there", bar a hiccup while self-heal is in progress and some minor grumbles from sanlock (which might be fixed by the newer sanlock in RHEL 6.3).<br>
<br>
Incidentally, the logs show a "diff" self-heal on a node reboot:<br>
<br>
[2012-07-09 16:04:06.743512] I [afr-self-heal-algorithm.c:122:sh_loop_driver_done] 0-gluster-rep-replicate-0: diff self-heal on /box1-clone2.img: completed. (16 blocks of 16974 were different (0.09%))<br>
<br>
So, does this log show "Granular locking" occurring, or does it just happen transparently when a file exceeds a certain size?<br>
<br>
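As far as I can tell, granular locking in 3.3 isn't something you switch on; it is part of how the replicate translator takes locks during self-heal, so the log line above only shows that the "diff" algorithm ran. One way to check what the volume is actually configured with — the volume name gluster-rep below is inferred from the log line and may need adjusting:<br>
<br>

```shell
# 3.3 has no "gluster volume get"; "volume info" lists only the
# options that were explicitly reconfigured -- anything absent is
# running at its compiled-in default.
gluster volume info gluster-rep

# The self-heal daemon logs its activity separately from the client:
tail -f /var/log/glusterfs/glustershd.log
```
<br>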
many thanks<br>
<br>
Jake<div class="im"><br>
<br>
<br>
On 07/09/2012 04:01 PM, Fernando Frediani (Qube) wrote:<br>
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">
Jake,<br>
<br>
I haven’t had a chance to test with my KVM cluster yet, but it should be<br>
enabled by default from 3.3.<br>
<br>
Just bear in mind that running virtual machines is NOT a supported<br>
workload for Red Hat Storage Server, according to Red Hat sales people;<br>
they said support is expected towards the end of the year. As you might<br>
have observed, performance, especially for writes, isn't anywhere near fantastic.<br>
<br>
<br>
Fernando<br>
<br></div>
*From:*<a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@gluster.org</a><br>
[mailto:<a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@gluster.org</a>] *On Behalf Of *Christian Wittwer<br>
*Sent:* 09 July 2012 15:51<br>
*To:* Jake Grimmett<br>
*Cc:* <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
*Subject:* Re: [Gluster-users] "Granular locking" - does this need to be<div class="im"><br>
enabled in 3.3.0 ?<br>
<br>
Hi Jake<br>
<br>
I can confirm exactly the same behaviour with gluster 3.3.0 on Ubuntu<br>
12.04. During the self-heal process the VM gets 100% I/O wait and is locked.<br>
<br>
After the self-heal the root filesystem was read-only, which forced me<br>
to do a reboot and fsck.<br>
<br>
Cheers,<br>
<br>
Christian<br>
<br>
2012/7/9 Jake Grimmett <<a href="mailto:jog@mrc-lmb.cam.ac.uk" target="_blank">jog@mrc-lmb.cam.ac.uk</a><br></div>
<mailto:<a href="mailto:jog@mrc-lmb.cam.ac.uk" target="_blank">jog@mrc-lmb.cam.ac.uk</a>>><div class="im"><br>
<br>
Dear All,<br>
<br>
I have a pair of Scientific Linux 6.2 servers, acting as KVM<br>
virtualisation hosts for ~30 VMs. The VM images are stored in a<br>
replicated gluster volume shared between the two servers. Live migration<br>
works fine, and the sanlock prevents me from (stupidly) starting the<br>
same VM on both machines. Each server has 10GB ethernet and a 10 disk<br>
RAID5 array.<br>
<br>
If I migrate all the VMs to server #1 and shut down server #2, everything works<br>
perfectly with no interruption. When I restart server #2, the VMs<br>
freeze while the self-heal process is running - and this healing can<br>
take a long time.<br>
<br>
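For watching that healing in progress, 3.3's new heal subcommand may help; a sketch, assuming the replicated volume is named gluster-rep (substitute your own volume name):<br>
<br>

```shell
# Show which files each brick still considers in need of healing
gluster volume heal gluster-rep info

# Trigger a heal of just the files on that list, instead of
# waiting for the self-heal daemon's periodic crawl
gluster volume heal gluster-rep
```
<br>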
I'm not sure if "Granular Locking" is on. It's listed as a "technology<br>
preview" in the Redhat Storage server 2 notes - do I need to do anything<br>
to enable it?<br>
<br>
i.e. set "cluster.data-self-heal-algorithm" to diff ?<br>
or edit "cluster.self-heal-window-size" ?<br>
<br>
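Both options can be set at runtime if you want to experiment; a sketch, again assuming a volume named gluster-rep (neither setting enables granular locking as such — they only tune how data self-heal behaves):<br>
<br>

```shell
# Copy only changed blocks during data self-heal (rsync-style),
# rather than the whole file ("full")
gluster volume set gluster-rep cluster.data-self-heal-algorithm diff

# Blocks healed per locked region; larger values heal faster
# but hold each lock for longer
gluster volume set gluster-rep cluster.self-heal-window-size 4
```
<br>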
any tips from other people doing something similar would be much appreciated!<br>
<br>
Many thanks,<br>
<br>
Jake<br>
<br></div>
jog <---at---> <a href="http://mrc-lmb.cam.ac.uk" target="_blank">mrc-lmb.cam.ac.uk</a> <<a href="http://mrc-lmb.cam.ac.uk" target="_blank">http://mrc-lmb.cam.ac.uk</a>><br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a> <mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>><br>
<a href="http://gluster.org/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://gluster.org/cgi-bin/mailman/listinfo/gluster-users</a><br>
<br><span class="HOEnZb"><font color="#888888">
</font></span></blockquote><span class="HOEnZb"><font color="#888888">
<br>
<br>
-- <br>
Dr Jake Grimmett<br>
Head Of Scientific Computing<br>
MRC Laboratory of Molecular Biology<br>
Hills Road, Cambridge, CB2 0QH, UK.<br>
Phone 01223 402219<br>
Mobile 0776 9886539</font></span><div class="HOEnZb"><div class="h5"><br>
</div></div></blockquote></div><br></div>