<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<br>
<div class="moz-cite-prefix">On 09/01/2014 12:08 PM, Roman wrote:<br>
</div>
<blockquote
cite="mid:CAFR=TBprAcGaC8sQ8LYRQ1jDrj0r2G4wmmRQjv0pXKJNU2QFOQ@mail.gmail.com"
type="cite">
<div dir="ltr">Well, as for me, the VMs are not much impacted by the
healing process. At least the munin server, which runs under a pretty
high load (load average rarely goes below 0.9 :) ), had no problems.
To create some more load I made a copy of a 590 MB file on the
VM's disk; it took 22 seconds, which is ca 27 MB/s, or about 214
Mbit/s.
<div>
<br>
</div>
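(For reference, the arithmetic checks out: 590 MB / 22 s ≈ 27 MB/s ≈ 214 Mbit/s. A quick way to reproduce this kind of throughput check inside a VM is a synchronous dd write; this is just a sketch, and /var/tmp/ddtest is an arbitrary scratch path.)

```shell
# Write 590 MB with a forced flush, so the reported rate reflects real disk
# throughput rather than the page cache; dd prints the MB/s figure at the end.
dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=590 conv=fdatasync
rm -f /var/tmp/ddtest
```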
<div>The servers are connected via a 10 Gbit/s network. The Proxmox
client is connected to the server over a separate 1 Gbit/s interface;
we are thinking of moving it to 10 Gbit/s as well.<br>
<div><br>
</div>
<div>Here is some heal info, which is pretty confusing.</div>
</div>
<div><br>
</div>
<div>Right after the 1st server restored its connection, it was
pretty clear:</div>
<div><br>
</div>
<div>
<div>root@stor1:~# gluster volume heal
HA-2TB-TT-Proxmox-cluster info</div>
<div>Brick stor1:/exports/HA-2TB-TT-Proxmox-cluster/2TB/</div>
<div>/images/124/vm-124-disk-1.qcow2 - Possibly undergoing
heal</div>
<div>Number of entries: 1</div>
<div><br>
</div>
<div>Brick stor2:/exports/HA-2TB-TT-Proxmox-cluster/2TB/</div>
<div>/images/124/vm-124-disk-1.qcow2 - Possibly undergoing
heal</div>
<div>/images/112/vm-112-disk-1.raw - Possibly undergoing heal</div>
<div>Number of entries: 2</div>
</div>
<div><br>
</div>
<div><br>
</div>
<div>Some time later it says:</div>
<div>
<div>root@stor1:~# gluster volume heal
HA-2TB-TT-Proxmox-cluster info</div>
<div>Brick stor1:/exports/HA-2TB-TT-Proxmox-cluster/2TB/</div>
<div>Number of entries: 0</div>
<div><br>
</div>
<div>Brick stor2:/exports/HA-2TB-TT-Proxmox-cluster/2TB/</div>
<div>Number of entries: 0</div>
</div>
<div><br>
</div>
<div>while I could still see traffic between the servers, and there
was still no message about the healing process completing.</div>
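(One way to tell when self-heal has actually finished is to poll the same command until every brick reports zero entries. A minimal sketch, assuming the gluster CLI is on PATH on one of the servers; the 10-second interval is arbitrary.)

```shell
# Wait until `gluster volume heal <vol> info` reports zero pending entries
# on every brick.
wait_for_heal() {
    vol=$1
    # Sum the "Number of entries:" counts across bricks; keep looping while > 0.
    while gluster volume heal "$vol" info \
            | awk '/Number of entries:/ {s += $NF} END {exit s == 0}'; do
        sleep 10
    done
    echo "no pending heal entries on $vol"
}

# Example, with the volume name from the output above:
# wait_for_heal HA-2TB-TT-Proxmox-cluster
```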
</div>
</blockquote>
On which machine do we have the mount?<br>
<br>
Pranith<br>
<blockquote
cite="mid:CAFR=TBprAcGaC8sQ8LYRQ1jDrj0r2G4wmmRQjv0pXKJNU2QFOQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">2014-08-29 10:02 GMT+03:00 Pranith
Kumar Karampuri <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> Wow, this is great
news! Thanks a lot for sharing the results :-). Did you
get a chance to test the performance of the applications
in the VM during self-heal?<br>
May I know more about your use case? I.e., how many VMs,
and what is the avg size of each VM, etc.?<span class="HOEnZb"><font
color="#888888"><br>
<br>
Pranith</font></span>
<div>
<div class="h5"><br>
<br>
<div>On 08/28/2014 11:27 PM, Roman wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Here are the results.
<div>1. I still have a problem with log rotation:
logs are being written to the .log.1 file, not the .log
file. Any hints on how to fix this?</div>
<div>2. The healing logs are much better now; I can
see the success message.</div>
<div>3. Both volumes synced successfully, with HD both
off and on. The volume with HD on synced much
faster.</div>
<div>4. Both VMs on the volumes survived the outage,
even when new files were added and deleted during the
outage.</div>
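(On point 1: writes continuing to the rotated .log.1 file usually mean logrotate renamed the log without the daemon reopening it. A hedged sketch of a fix is to rotate with copytruncate, so the daemon keeps writing to the same inode; the drop-in path and schedule below are illustrative assumptions, not the packaged defaults.)

```
# /etc/logrotate.d/glusterfs-common  (path is an assumption; adjust to your package)
/var/log/glusterfs/*.log {
    weekly
    rotate 4
    compress
    missingok
    # truncate the live file in place instead of renaming it,
    # so glusterfs keeps logging to the .log file
    copytruncate
}
```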
<div><br>
</div>
<div>So replication works well for VM volumes with HD
both on and off, and with HD it is even faster. I
still need to solve the logging issue.</div>
<div><br>
</div>
<div>Seems we can start the production storage from
this moment :) The whole company will use it, some
volumes distributed and some replicated. Thanks for a
great product.</div>
</div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">2014-08-27 16:03
GMT+03:00 Roman <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:romeo.r@gmail.com"
target="_blank">romeo.r@gmail.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0
0 0 .8ex;border-left:1px #ccc
solid;padding-left:1ex">
<div dir="ltr">Installed the new packages. I will
run some tests tomorrow. Thanks.</div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">2014-08-27 14:10
GMT+03:00 Pranith Kumar Karampuri <span
dir="ltr"><<a moz-do-not-send="true"
href="mailto:pkarampu@redhat.com"
target="_blank">pkarampu@redhat.com</a>></span>:
<div>
<div><br>
<blockquote class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex">
<div>
<div><br>
On 08/27/2014 04:38 PM, Kaleb
KEITHLEY wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex"> On
08/27/2014 03:09 AM, Humble
Chirammal wrote:<br>
<blockquote
class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex"> <br>
<br>
----- Original Message -----<br>
| From: "Pranith Kumar
Karampuri" <<a
moz-do-not-send="true"
href="mailto:pkarampu@redhat.com"
target="_blank">pkarampu@redhat.com</a>><br>
| To: "Humble Chirammal"
<<a
moz-do-not-send="true"
href="mailto:hchiramm@redhat.com"
target="_blank">hchiramm@redhat.com</a>><br>
| Cc: "Roman" <<a
moz-do-not-send="true"
href="mailto:romeo.r@gmail.com"
target="_blank">romeo.r@gmail.com</a>>,
<a moz-do-not-send="true"
href="mailto:gluster-users@gluster.org"
target="_blank">gluster-users@gluster.org</a>,
"Niels de Vos" <<a
moz-do-not-send="true"
href="mailto:ndevos@redhat.com"
target="_blank">ndevos@redhat.com</a>><br>
| Sent: Wednesday, August
27, 2014 12:34:22 PM<br>
| Subject: Re:
[Gluster-users] libgfapi
failover problem on replica
bricks<br>
|<br>
|<br>
| On 08/27/2014 12:24 PM,
Roman wrote:<br>
| > root@stor1:~# ls -l
/usr/sbin/glfsheal<br>
| > ls: cannot access
/usr/sbin/glfsheal: No such
file or directory<br>
| > Seems like not.<br>
| Humble,<br>
| Seems like the
binary is still not
packaged?<br>
<br>
Checking with Kaleb on this.<br>
<br>
</blockquote>
...<br>
<blockquote
class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex"> |
>>> |<br>
| >>> |
Humble/Niels,<br>
| >>>
| Do we have debs
available for 3.5.2? In
3.5.1<br>
| >>>
there was packaging<br>
| >>> |
issue where
/usr/bin/glfsheal is not
packaged along<br>
| >>>
with the deb. I<br>
| >>> |
think that should be fixed
now as well?<br>
| >>> |<br>
| >>>
Pranith,<br>
| >>><br>
| >>>
The 3.5.2 packages for
debian is not available yet.
We<br>
| >>>
are co-ordinating
internally to get it
processed.<br>
| >>> I
will update the list once
its available.<br>
| >>><br>
| >>>
--Humble<br>
</blockquote>
<br>
glfsheal isn't in our 3.5.2-1
DPKGs either. We (meaning I)
started with the 3.5.1
packaging bits from Semiosis.
Perhaps he fixed 3.5.1 after
giving me his bits.<br>
<br>
I'll fix it and spin 3.5.2-2
DPKGs.<br>
</blockquote>
</div>
</div>
That is great, Kaleb. Please notify
semiosis as well, in case he has yet
to fix it.<br>
<br>
Pranith<br>
<blockquote class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex"> <br>
<span><font color="#888888"> -- <br>
<br>
Kaleb<br>
<br>
</font></span></blockquote>
<br>
</blockquote>
</div>
</div>
</div>
<span><font color="#888888"><br>
<br clear="all">
<div><br>
</div>
-- <br>
Best regards,<br>
Roman. </font></span></div>
</blockquote>
</div>
<br>
<br clear="all">
<div><br>
</div>
-- <br>
Best regards,<br>
Roman. </div>
</blockquote>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
<br clear="all">
<div><br>
</div>
-- <br>
Best regards,<br>
Roman.
</div>
</blockquote>
<br>
</body>
</html>