Any news? "Wednesday at the latest" seems to have been and gone.

I think some kind of optimistic locking is exactly the kind of thing that would push Gluster into the high-performance bracket without needing 40Gb/s network cards (which, when you think about it, really just ends up creating one big NUMA machine rather than a cluster setup).

If the kernel NLM (Network Lock Manager) can do this satisfactorily, then things look even better.
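
To make the idea concrete, here is a toy sketch of what I mean by optimistic locking: readers take a version number along with the data, writers commit only if that version is still current, and a conflict simply means "re-read and retry". This is purely illustrative Python under my own assumptions; none of these names exist in Gluster.

    # Toy optimistic-locking sketch (hypothetical names, not Gluster code).
    # Readers take a version number with the data; writers commit only if
    # that version is still current, i.e. compare-and-swap on the version.

    class ConflictError(Exception):
        """Another writer committed first; caller should re-read and retry."""

    class VersionedStore:
        def __init__(self):
            self._files = {}  # path -> (version, contents)

        def read(self, path):
            """Return (version, contents); the version is the optimistic 'lock'."""
            return self._files.get(path, (0, b""))

        def write(self, path, expected_version, contents):
            """Commit only if nobody bumped the version since our read."""
            current_version, _ = self._files.get(path, (0, b""))
            if current_version != expected_version:
                raise ConflictError(path)
            self._files[path] = (current_version + 1, contents)

    def append_line(store, path, line, max_retries=5):
        """Optimistic read-modify-write: retry on conflict, never hold a lock."""
        for _ in range(max_retries):
            version, contents = store.read(path)
            try:
                store.write(path, version, contents + line)
                return
            except ConflictError:
                continue  # lost the race; re-read fresh state and try again
        raise RuntimeError("too much contention on " + path)

    store = VersionedStore()
    append_line(store, "/mail/user1/inbox", b"new message\n")

The attraction is that the common, uncontended case needs no lock traffic at all; you only pay when two writers actually race on the same file.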

Cheers

Ed W

On 26/09/2010 03:02, Craig Carl wrote:

Ed -
I'll follow up on your request with engineering and professional services; can we get back to you by Wednesday at the latest?

Thanks,

Craig

--
Craig Carl
Sales Engineer; Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk - craig.carl@gmail.com
Twitter - @gluster
Installing Gluster Storage Platform, the movie!
http://www.youtube.com/user/GlusterStorage
http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/

------------------------------------------------------------------------
From: "Ed W" <lists@wildgooses.com>
To: gluster-devel@nongnu.org
Sent: Saturday, September 25, 2010 5:35:21 PM
Subject: Re: [Gluster-devel] Can I bring a development idea to Dev's attention?

Would someone from Gluster like to contact me with a "reasonable" offer for sponsoring some kind of "optimistic cache" feature, with a specific view to optimising the NUFA server-side replication architecture?

I would specifically like to optimise the case where you have a flat namespace across the servers (master/master file sharing), but the applications are arranged so that those running on each brick (NUFA) generally touch only a subset of all files. For example: a mail server with a flat filesystem where users are proxied so that each user generally touches only one specific server, or a web server with a flat namespace where a proxy directs specific domains to be served by specific servers.
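
Purely for illustration (the brick names and the hash routing are invented, not how any real proxy has to work), the affinity I mean is just a deterministic user-to-brick mapping at the front end:

    import hashlib

    # Toy front-end affinity (invented names): deterministically route each
    # user or domain to one preferred brick, so that brick ends up doing
    # nearly all the reads/writes for that user's slice of the namespace.
    BRICKS = ["brick1.example.com", "brick2.example.com", "brick3.example.com"]

    def home_brick(key):
        """Stable key -> brick mapping; the same key always lands on the same brick."""
        digest = hashlib.md5(key.encode("utf-8")).digest()
        return BRICKS[int.from_bytes(digest[:4], "big") % len(BRICKS)]

    print(home_brick("user1@example.com"))  # always the same answer
    print(home_brick("www.example.org"))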

In this case I would like a specific brick to realise that it is predominantly the reader/writer for a subset of all files, and to optimise its own access at the expense of other bricks that need to access the same files (i.e. I don't just want to turn up the writeback cache; I want cache coherency across the entire cluster). I would accept that random reads/writes from random bricks would be slower, in return for reads/writes being faster *if* the clients optimise themselves to *prefer* specific bricks (i.e. NUFA). Such an optimisation should not be set in stone, of course: if the activity on a subdirectory generally moves across to another brick, then that brick should eventually win the fast read/write path for itself (at the expense of the previous brick's access to that same subset of files becoming slower).
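
As a toy sketch of the policy I have in mind (invented names throughout, and a deliberately crude heuristic): track which brick generates the recent traffic for a subtree, hand ownership to whichever brick dominates, and let only the current owner use its fast cached path.

    from collections import Counter

    # Hypothetical ownership-migration policy: the brick generating most of
    # the recent traffic for a subtree becomes its "owner" and may cache
    # aggressively; every other brick pays a coherency round-trip.

    MIGRATE_AFTER = 100   # accesses observed before we reconsider ownership
    DOMINANCE = 0.8       # traffic fraction a challenger needs to take over

    class SubtreeOwnership:
        def __init__(self, initial_owner):
            self.owner = initial_owner
            self.accesses = Counter()

        def record_access(self, brick):
            """Record one access; return True if `brick` may use its fast
            (locally cached) path, False if it must go through the owner."""
            self.accesses[brick] += 1
            total = sum(self.accesses.values())
            if total >= MIGRATE_AFTER:
                top_brick, top_count = self.accesses.most_common(1)[0]
                if top_brick != self.owner and top_count / total >= DOMINANCE:
                    # Activity has moved: hand ownership over. A real system
                    # would invalidate the old owner's caches here.
                    self.owner = top_brick
                self.accesses.clear()  # start a fresh observation window
            return brick == self.owner

    sub = SubtreeOwnership(initial_owner="brick1")
    for _ in range(120):
        sub.record_access("brick2")   # brick2 now dominates this subtree
    assert sub.owner == "brick2"

In a real implementation the handover would of course have to invalidate the old owner's caches and fence in-flight writes; the point here is only the shape of the policy.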

Anyone care to quote on this? It seems to be a recurring performance issue on the mailing list, and with some later optimisation it also looks like the basis for cross-datacenter replication.

Thanks

Ed W

_______________________________________________
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel