<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Thank you for your answer...<br>
Does using the NFS client ensure replication to all bricks? My
problem is that I see Gluster leaves "unfinished" replication tasks
lying around. It seems Gluster needs an external trigger, such as an
"ls -l" on the file in question, to re-trigger and complete the
replication if it failed (temporarily) for any reason.<br>
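In case it helps, healing can also be triggered proactively from the gluster CLI rather than by stat-ing files; a sketch of the admin commands (these need a live cluster, and "gv0" is a placeholder volume name; adjust to your setup):

```shell
# List files that still have pending heals on the replicated volume
gluster volume heal gv0 info

# Trigger healing of files with pending changes
gluster volume heal gv0

# Or force a full sweep of the whole volume
gluster volume heal gv0 full
```

This avoids relying on an "ls -l" from a client to kick off the heal.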
<br>
I have solved the problem of making the application read from the
"local brick" by bind-mounting the brick locally as read-only
and having my application separate reads from writes via different
filesystem paths.<br>
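For what it's worth, the read-only bind mount can be set up like this (paths are hypothetical, and the commands need root; note that with older util-linux a plain <tt>mount --bind -o ro</tt> is silently read-write, so a remount is required):

```shell
# Bind-mount the local brick at a separate, read-only path for the app
mount --bind /data/brick1 /srv/app/readonly

# The ro flag only takes effect on a remount of the bind mount
mount -o remount,ro,bind /srv/app/readonly
```

The application then reads from /srv/app/readonly and writes through the Gluster-mounted volume.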
<br>
<div class="moz-cite-prefix">On 21/10/12 23.18, Israel Shirk wrote:<br>
</div>
<blockquote
cite="mid:CAF5SWt5C_6UA3hAnjVCFG8AGxyb-axVcbRRmaEz8nB-NLDrrUA@mail.gmail.com"
type="cite">Haris, try the NFS mount. Gluster typically triggers
healing through the client, so if you skip the client, nothing
heals.
<div><br>
</div>
<div>The native Gluster client tends to be really @#$@#$@# stupid.
It'll send reads to Singapore while you're in Virginia (and
there are bricks 0.2ms away), then when healing is needed it
will take a bunch of time to do that, all the while it's
blocking your application or web server, which under heavy loads
will cause your entire application to buckle.</div>
<div><br>
</div>
<div>The NFS client is dumb, which in my mind is a lot better -
it'll just do what you tell it to do and allow you to compensate
for connectivity issues yourself using something like Linux-HA.<br>
<br>
You have to keep in mind when using gluster that 99% of the
people using it are running their tests on a single server (see
the recent notes about how testing of patches is only performed
on a single server), and most applications don't distribute or
mirror to bricks more than a few hundred yards away. Their idea
of geo-replication is that you send your writes to the other
side of the world (which may or may not be up at the moment),
then twiddle your thumbs for a while and hope it gets back to
you. So, that said, it's possible to get it to work, and it's
almost better than lsyncd, but it'll still make you cry
periodically.</div>
<div><br>
</div>
<div>Ok, back to happy time :)<br>
<br>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi everyone,<br>
<br>
I am using Gluster in replication mode.<br>
I have 3 bricks on 3 different physical servers connected over a
WAN. This<br>
makes writing, but also reading, files from the Gluster-mounted
volume very slow.<br>
To remedy this I have made my web application read Gluster
files from<br>
the brick directly (I make a read-only bind mount of the
brick), but<br>
write to the Gluster-mounted volume so that the files
will instantly<br>
replicate on all 3 servers. At least, "instant replication"
is what I<br>
envision Gluster will do for me :)<br>
<br>
My problem is that files sometimes do not replicate to all 3
servers<br>
instantly. There are occasional short network outages which
may prevent<br>
instant replication, and I end up with situations like this:<br>
<br>
ssh web1-prod ls -l<br>
/home/gluster/r/production/<a moz-do-not-send="true"
href="http://zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js"
target="_blank">zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js</a><br>
-rw-r--r-- 1 apache apache 75901 Oct 19 18:00<br>
/home/gluster/r/production/<a moz-do-not-send="true"
href="http://zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js"
target="_blank">zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js</a><br>
web2-prod.<br>
ssh web2-prod ls -l<br>
/home/gluster/r/production/<a moz-do-not-send="true"
href="http://zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js"
target="_blank">zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js</a><br>
-rw-r--r-- 1 apache apache 0 Oct 19 18:00<br>
/home/gluster/r/production/<a moz-do-not-send="true"
href="http://zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js"
target="_blank">zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js</a><br>
web3-prod.<br>
ssh web3-prod ls -l<br>
/home/gluster/r/production/<a moz-do-not-send="true"
href="http://zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js"
target="_blank">zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js</a><br>
-rw-r--r--. 1 apache apache 75901 Oct 19 18:00<br>
/home/gluster/r/production/<a moz-do-not-send="true"
href="http://zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js"
target="_blank">zdrava.tv/web/uploads/24/playlist/38/19_10_2012_18.js</a><br>
<br>
Here the file on the web2 brick has a size of 0, so
serving this<br>
file from web2 causes errors in my application.<br>
<br>
I have had a split-brain situation a couple of times and
resolved it<br>
manually. The above kind of situation is not a split-brain
and resolves<br>
and (re-)replicates completely with a simple "ls -l" on the
file in<br>
question from any of the servers.<br>
<br>
My question is:<br>
I suppose that the problem here is incomplete replication
for the file<br>
in question due to temporary network problems.<br>
How can I ensure complete replication immediately after the
network has<br>
been restored?<br>
<br>
<br>
kind regards<br>
Haris Zukanovic<br>
<br>
--<br>
--<br>
Haris Zukanovic<br>
<br>
</blockquote>
</div>
</div>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
--
Haris Zukanovic</pre>
</body>
</html>