<div class="moz-cite-prefix">On 12-12-26 10:24 PM, Miles Fidelman
wrote:<br>
</div>
> Hi Folks,
>
> I find myself trying to expand a 2-node high-availability cluster to a
> 4-node cluster. I'm running Xen virtualization, currently using DRBD
> to mirror data and Pacemaker to fail over cleanly.
>
> The thing is, I'm trying to add 2 nodes to the cluster, and DRBD
> doesn't scale past two nodes. Also, given rack-space limits and the
> hardware at hand, I can't separate storage nodes from compute nodes;
> instead, I have to live with 4 nodes, each with 4 large drives (but
> also with 4 GigE ports per server).
>
> The obvious thought is to use Gluster to assemble all the drives into
> one large storage pool, with replication. But the last time I looked
> at this (6 months or so back), some of the critical features were
> brand new, and performance seemed to be a problem in the configuration
> I'm thinking of.
>
> Which leads me to my question: has the situation improved to the point
> that I can use Gluster this way?
>
> Thanks very much,
>
> Miles Fidelman

Hi,

I have a XenServer pool (3 servers) talking to a GlusterFS replicated
volume over NFS, with uCARP for IP failover.
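
For reference, the Gluster side of a setup like this amounts to a
replicated volume exported through Gluster's built-in NFS server.
Roughly (a sketch only, assuming the usual two-server pair; hostnames
and brick paths are made up):

    # run on gluster1: join the two storage servers into one trusted pool
    gluster peer probe gluster2

    # create a volume in which every write is mirrored to both servers
    gluster volume create vmstore replica 2 \
        gluster1:/export/brick1 gluster2:/export/brick1
    gluster volume start vmstore

A 4-node pool like the one Miles describes would extend the same
command with bricks from all four servers; with "replica 2", Gluster
pairs the bricks in the order listed, giving a distributed-replicated
volume across all 16 drives.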
The system was put in place in May 2012, using GlusterFS 3.3. It ran
very well, with speeds comparable to my existing iSCSI solution
(http://majentis.com/2011/09/21/xenserver-iscsi-and-glusterfsnfs/).
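
The failover piece is just ucarp floating a virtual IP between the
storage servers, with XenServer mounting the NFS share through that
VIP. Something like this (again a sketch; the interface, addresses,
password, and scripts are placeholders):

    # run on each storage server: bid for the shared virtual IP
    ucarp --interface=eth0 --srcip=10.0.0.11 --vhid=1 --pass=secret \
          --addr=10.0.0.100 \
          --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh

    # run against the XenServer pool: create a shared NFS storage
    # repository pointed at the VIP rather than at either server
    xe sr-create type=nfs name-label=gluster-sr content-type=user \
        shared=true device-config:server=10.0.0.100 \
        device-config:serverpath=/vmstore
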
I was quite pleased with the system; it worked flawlessly until
November. At that point, the Gluster NFS server started stalling under
load. It would become unresponsive for long enough that the VMs under
XenServer would lose their drives. Linux would remount the drives
read-only and then eventually lock up, while Windows would just lock
up; in that sense, Windows was more resilient to the transient disk
loss.

I have been unable to solve the problem, and am now switching back to
a DRBD/iSCSI setup. I'm not happy about it, but we were losing NFS
connectivity nightly during backups. Life was hell for a long time
while I was trying to fix things.

Gerald