<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 10pt;
font-family:Tahoma
}
--></style>
</head>
<body class='hmmessage'><div dir='ltr'>
I found the problem this morning!<br>It's because TCP connections are not reset on the master server, and when the clients come back to the master, both sides enter a "TCP DUP ACK" storm. You need to kill all gluster processes when the master comes back up.<br>More info: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=369991#c31">https://bugzilla.redhat.com/show_bug.cgi?id=369991#c31</a><br><br>Anthony<br><br><br><div>> From: mueller@tropenklinik.de<br>> To: sokar6012@hotmail.com; whit.gluster@transpect.com<br>> CC: gluster-users@gluster.org<br>> Subject: RE: [Gluster-users] UCARP with NFS<br>> Date: Thu, 8 Sep 2011 16:13:24 +0200<br>> <br>> Cmd on slave: <br>> /usr/sbin/ucarp -z -B -M -b 1 -i bond0:0<br>> <br>> Did you try "-b 7" at your cmd start? That solved things for me in<br>> another configuration.<br>> <br>> <br>> <br>> <br>> EDV Daniel Müller<br>> <br>> Head of IT<br>> Tropenklinik Paul-Lechler-Krankenhaus<br>> Paul-Lechler-Str. 24<br>> 72076 Tübingen <br>> Tel.: 07071/206-463, Fax: 07071/206-499<br>> eMail: mueller@tropenklinik.de<br>> Internet: www.tropenklinik.de <br>> <br>> From: gluster-users-bounces@gluster.org<br>> [mailto:gluster-users-bounces@gluster.org] On behalf of anthony garnier<br>> Sent: Thursday, 8 
September 2011 15:55<br>> To: whit.gluster@transpect.com<br>> Cc: gluster-users@gluster.org<br>> Subject: Re: [Gluster-users] UCARP with NFS<br>> <br>> Whit,<br>> <br>> Here is my conf file: <br>> #<br>> # Location of the ucarp pid file<br>> UCARP_PIDFILE=/var/run/ucarp0.pid<br>> <br>> # Define if this host is the preferred MASTER (this adds or removes the -P<br>> # option)<br>> UCARP_MASTER="yes"<br>> <br>> #<br>> # ucarp base, interval monitoring time<br>> # set to the same value to have the master stay alive as long as possible<br>> UCARP_BASE=1<br>> <br>> # Priority [0-255]<br>> # lower number will be preferred master<br>> ADVSKEW=0<br>> <br>> <br>> #<br>> # Interface for the IP address<br>> INTERFACE=bond0:0<br>> <br>> #<br>> # Instance id<br>> # any number from 1 to 255<br>> # Master and Backup need to be the same<br>> INSTANCE_ID=42<br>> <br>> #<br>> # Password so servers can trust who they are talking to<br>> PASSWORD=glusterfs<br>> <br>> #<br>> # The application address that will fail over<br>> VIRTUAL_ADDRESS=10.68.217.3<br>> VIRTUAL_BROADCAST=10.68.217.255<br>> VIRTUAL_NETMASK=255.255.255.0<br>> #<br>> <br>> # Scripts for configuring the interface<br>> UPSCRIPT=/etc/ucarp/script/vip-up.sh<br>> DOWNSCRIPT=/etc/ucarp/script/vip-down.sh<br>> <br>> # The maintenance address of the local machine<br>> SOURCE_ADDRESS=10.68.217.85<br>> <br>> <br>> Cmd on master: <br>> /usr/sbin/ucarp -z -B -P -b 1 -i bond0:0 -v 42 -p glusterfs -k 0 -a<br>> 10.68.217.3 -s 10.68.217.85 --upscript=/etc/ucarp/script/vip-up.sh<br>> --downscript=/etc/ucarp/script/vip-down.sh<br>> <br>> Cmd on slave: <br>> /usr/sbin/ucarp -z -B -M -b 1 -i bond0:0 -v 42 -p glusterfs -k 50 -a<br>> 10.68.217.3 -s 10.68.217.86 --upscript=/etc/ucarp/script/vip-up.sh<br>> --downscript=/etc/ucarp/script/vip-down.sh<br>> <br>> <br>> To me, having a preferred master is necessary because I'm using RR DNS and I<br>> want to do a kind of "active/active" failover. I'll explain the whole idea: <br>> 
<br>> SERVER 1 <---------------> SERVER 2<br>> VIP1 VIP2<br>> <br>> When I access the URL glusterfs.preprod.inetpsa.com, RRDNS gives me one of<br>> the VIPs (load balancing). The main problem here is that if I use only RRDNS and a<br>> server goes down, the clients currently bound to this server will fail too. So<br>> to avoid that I need a VIP failover. <br>> This way, if a server goes down, all the clients on this server will be<br>> bound to the other one. Because I want load balancing, I need a preferred<br>> master, so that by default VIP 1 stays on server 1 and VIP 2 stays on<br>> server 2.<br>> Currently I'm trying to make it work with one VIP only.<br>> <br>> <br>> Anthony<br>> <br>> > Date: Thu, 8 Sep 2011 09:32:59 -0400<br>> > From: whit.gluster@transpect.com<br>> > To: sokar6012@hotmail.com<br>> > CC: gluster-users@gluster.org<br>> > Subject: Re: [Gluster-users] UCARP with NFS<br>> > <br>> > On Thu, Sep 08, 2011 at 01:02:41PM +0000, anthony garnier wrote:<br>> > <br>> > > I got a client mounted on the VIP; when the master falls, the client<br>> > > switches automatically to the slave with almost no delay, it works like a charm.<br>> > > But when the master comes back up, the mount point on the client freezes.<br>> > > I've done monitoring with tcpdump: when the master came up, the client<br>> > > sends packets to the master but the master seems not to establish the TCP<br>> > > connection.<br>> > <br>> > Anthony,<br>> > <br>> > Your UCARP command line choices and scripts would be worth looking at here.<br>> > There are different UCARP behavior options for when the master comes back<br>> > up. If the initial failover works fine, it may be that you'll have better<br>> > results if you don't have a preferred master. 
That is, you can either have<br>> > UCARP set so that the slave relinquishes the IP back to the master when the<br>> > master comes back up, or you can have UCARP set so that the slave becomes<br>> > the new master, until such time as the new master goes down, in which case<br>> > the former master becomes master again.<br>> > <br>> > If you're doing it the first way, there may be a brief overlap, where both<br>> > systems claim the VIP. That may be where your mount is failing. By doing it<br>> > the second way, where the VIP is held by whichever system has it until that<br>> > system actually goes down, there's no overlap. There shouldn't be a reason,<br>> > in the Gluster context, to care which system is master, is there?<br>> > <br>> > Whit<br>> <br></div></div></body>
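The two ucarp invocations quoted in the thread differ mainly in the preemption flag (-P), the advskew value (-k), and the per-node source address. A minimal sketch of deriving the command line from a node's role; the -P (--preempt) and -k (--advskew) semantics are from ucarp(8), but the `build_ucarp_cmd` helper and its role logic are illustrative, not part of the thread (and it omits the -M/--nomcast flag the quoted slave command uses):

```shell
# Hypothetical helper: print the ucarp command for a given role instead of
# executing it, so the resulting flags can be inspected. Addresses and other
# flags are the ones quoted in the thread.
build_ucarp_cmd() {
    role="$1"                      # "master" or "slave"
    if [ "$role" = "master" ]; then
        preempt="-P"               # preferred master preempts the VIP
        skew=0                     # lower advskew wins the election
        src=10.68.217.85
    else
        preempt=""                 # slave only takes the VIP on failure
        skew=50                    # higher advskew loses to the master
        src=10.68.217.86
    fi
    echo /usr/sbin/ucarp -z -B $preempt -b 1 -i bond0:0 -v 42 -p glusterfs \
        -k "$skew" -a 10.68.217.3 -s "$src" \
        --upscript=/etc/ucarp/script/vip-up.sh \
        --downscript=/etc/ucarp/script/vip-down.sh
}

build_ucarp_cmd master
build_ucarp_cmd slave
```

Dropping -P on both nodes (letting advskew alone pick the initial holder) gives the "second way" Whit describes: the VIP stays wherever it is until that node actually dies, avoiding the overlap window when the old master returns.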
</html>