<div dir="ltr">Hi Paolo,<br><br>Take a look at <a href="http://www.gluster.org/docs/index.php/Understanding_Unify_Translator">http://www.gluster.org/docs/index.php/Understanding_Unify_Translator</a> .<br>The diagram on that page indicates that the AFR / stripe translators should sit just below unify. However, I *think* this is more of a suggestion than a binding rule, as I don't see anything in the volume descriptor syntax that prevents you from doing it the other way round.<br>
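To illustrate that layering, a client-side volume file with AFR below unify could look something like the sketch below. This is only a sketch: the hostnames, volume names, and the round-robin scheduler choice are made up, and I've assumed a setup of four bricks mirrored in pairs. Note that recent unify versions also require a separate namespace volume, which I've included:<br>

```
# protocol/client volumes pointing at four hypothetical bricks
volume node1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.1       # hypothetical address
  option remote-subvolume brick
end-volume

volume node2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.2
  option remote-subvolume brick
end-volume

# ... node3 and node4 defined the same way ...

# mirror pairs: AFR sits below unify
volume afr1
  type cluster/afr
  subvolumes node1 node2
end-volume

volume afr2
  type cluster/afr
  subvolumes node3 node4
end-volume

# namespace volume required by unify (here just another client volume)
volume ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.1
  option remote-subvolume brick-ns
end-volume

# unify on top, aggregating the mirrored pairs
volume unify0
  type cluster/unify
  option scheduler rr
  option namespace ns
  subvolumes afr1 afr2
end-volume
```

Swapping the layers (AFR on top of two unify volumes, as Keith describes below) would mean reversing the cluster/afr and cluster/unify stanzas; nothing in the syntax itself seems to forbid it.<br>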
<br>Regarding your observations of poor performance on a 100Mbps interconnect, I'm facing the same issues. In particular, performance starts degrading very quickly once file sizes drop below 64K.<br><br>We'll be doing file system tweaks some time this week and will post the results if they are any good.<br>
<br>Regards<br>Chandranshu<br><br><div class="gmail_quote">On Wed, Sep 17, 2008 at 4:55 PM, <span dir="ltr"><<a href="mailto:gluster-users-request@gluster.org">gluster-users-request@gluster.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Message: 5<br>
Date: Wed, 17 Sep 2008 13:25:11 +0200<br>
From: "Paolo Supino" <<a href="mailto:paolo.supino@gmail.com">paolo.supino@gmail.com</a>><br>
Subject: Re: [Gluster-users] deployment<br>
To: "Keith Freedman" <<a href="mailto:freedman@freeformit.com">freedman@freeformit.com</a>><br>
Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
Message-ID:<br>
<<a href="mailto:2e94257a0809170425y5805854eq43b3be8fb4d74417@mail.gmail.com">2e94257a0809170425y5805854eq43b3be8fb4d74417@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="iso-8859-1"<br>
<br>
Hi Keith<br>
<br>
There's a section on the website that gives the configuration for a<br>
unify/AFR setup but doesn't say whether AFR goes above or below unify. At the<br>
moment I don't need the whole 2TB and can live with half of it, but I<br>
might need the extra space down the road. If and when that happens, is it<br>
possible to break the unify/AFR and move everything to unify only without<br>
deleting data (not that it would be an obstacle, see below)?<br>
Can anyone answer the question: does AFR go above or below unify?<br>
<br>
I don't think the data stored on the gluster volume will be mission<br>
critical: it's genomic data being processed on the cluster. I think<br>
the worst case in the event of a brick loss is that a few hours<br>
of processing will be lost.<br>
<br>
<br>
<br>
--<br>
TIA<br>
Paolo<br>
<br>
<br>
<br>
<br>
On Wed, Sep 17, 2008 at 12:22 PM, Keith Freedman <<a href="mailto:freedman@freeformit.com">freedman@freeformit.com</a>>wrote:<br>
<br>
> Some other things to consider:<br>
><br>
> Unify is a good idea to make use of all your space. However, with that<br>
> many nodes, the probability of a node failing is high,<br>
> so just be aware that if one of the nodes fails, whatever data is stored on that<br>
> node will be lost.<br>
><br>
> If you don't need the full 2TB, then I'd suggest using AFR.<br>
><br>
> I *think* you can run AFR UNDER unify, so you would create one unify brick<br>
> with half the machines, another with the other half, and AFR across them.<br>
> But I'm not sure; it may be that AFR has to be above unify.<br>
><br>
> Of course, if you don't really care about the data, i.e. it's all backup or<br>
> working space or temp files, etc., then there's no need to AFR them.<br>
><br>
> Keith<br>
><br>
> At 01:52 AM 9/17/2008, Paolo Supino wrote:<br>
><br>
>> Hi Raghavendra<br>
>><br>
>> I like your reply and will definitely give it a try. There's nothing I<br>
>> hate more than wasted infrastructure ...<br>
>><br>
>><br>
>><br>
>><br>
>><br>
>><br>
>> --<br>
>> TIA<br>
>> Paolo<br>
>><br>
>><br>
>> On Wed, Sep 17, 2008 at 8:13 AM, Raghavendra G <<a href="mailto:raghavendra.hg@gmail.com">raghavendra.hg@gmail.com</a>> wrote:<br>
>> Hi Paolo,<br>
>><br>
>> One of the configurations is to have glusterfs as server on each of the<br>
>> nodes exporting a brick. Each node should also have glusterfs running as<br>
>> client having unify translator, unifying all the servers.<br>
>><br>
>> regards,<br>
>><br>
>> On Tue, Sep 16, 2008 at 10:34 PM, Paolo Supino <<a href="mailto:paolo.supino@gmail.com">paolo.supino@gmail.com</a>> wrote:<br>
>> Hi<br>
>><br>
>> I have a small HPC cluster of 36 nodes (1 head, 35 compute). Each of the<br>
>> nodes has a 65GB (~2.2TB combined) volume that isn't being used. I<br>
>> thought of using a parallel filesystem to put this unused space<br>
>> to good use. The configuration I have in mind is: all nodes will act as<br>
>> bricks and all nodes will act as clients. I have no experience with Gluster<br>
>> and want to know what people on the mailing list think of the idea,<br>
>> deployment scenario, pros and cons, etc ... Any reply will help :-)<br>
>><br>
>><br>
>><br>
>> --<br>
>> TIA<br>
>> Paolo<br>
>><br>
>><br>
>> _______________________________________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> <a href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</a><br>
>><br>
>><br>
>><br>
>><br>
>> --<br>
>> Raghavendra G<br>
>><br>
>> A centipede was happy quite, until a toad in fun,<br>
>> Said, "Prey, which leg comes after which?",<br>
>> This raised his doubts to such a pitch,<br>
>> He fell flat into the ditch,<br>
>> Not knowing how to run.<br>
>> -Anonymous<br>
>><br>
>><br>
>> _______________________________________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> <a href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</a><br>
>><br>
><br>
><br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <a href="http://zresearch.com/pipermail/gluster-users/attachments/20080917/b97b508c/attachment.htm" target="_blank">http://zresearch.com/pipermail/gluster-users/attachments/20080917/b97b508c/attachment.htm</a><br>
<br>
------------------------------<br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</a><br>
<br>
<br>
End of Gluster-users Digest, Vol 5, Issue 14<br>
********************************************<br>
</blockquote></div><br></div>