Wilkins,<br> I have added the relevant performance translators inline. Please go through the translator options document and adjust the translator parameters according to your needs.<br><br><div class="gmail_quote">On Thu, Oct 23, 2008 at 1:31 AM, <span dir="ltr"><<a href="mailto:m.c.wilkins@massey.ac.nz">m.c.wilkins@massey.ac.nz</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><br>
Hi,<br>
<br>
I only heard about GlusterFS last week, so am still a newbie. I have<br>
a question regarding using performance translators, in particular in a<br>
NUFA setup.<br>
<br>
A quick summary of my setup. I have two machines (a third is to be<br>
added): k9 has two bricks (16T and 2T), orac has one brick of 5T. I<br>
have used AFR for the namespace. My config is below.<br>
<br>
Everything seems to be working OK, but I would like to add in some<br>
performance translators and I'm not exactly sure where. There are<br>
five: read ahead, write behind, threaded IO, IO-cache, and booster.<br>
Which go where? On server or client? On each individual brick, or<br>
after the unify or afr? I have read the doco (that is how I've<br>
managed to get this far), and I can see how to stick in one or two<br>
translators, but not whether I should use all of them or where they<br>
should all go. For instance, I see IO-cache should go on the client<br>
side, but should it be on each brick, or on the unify, or what?<br>
<br>
I know this is quite a big ask, but if someone could have a read<br>
through my config and perhaps show where I should stick in all the<br>
translators, that would be great.<br>
<br>
Thank you muchly!<br>
<br>
Matt<br>
<br>
This is the config on k9 (the one on orac is very similar, I won't<br>
bother showing it here):<br>
<br>
volume brick0<br>
type storage/posix<br>
option directory /export/brick0<br>
end-volume<br>
</blockquote><div><br>volume iot-0<br>type performance/io-threads<br>subvolumes brick0<br>end-volume<br> <br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
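</blockquote><div><br>io-threads also takes a thread-count option if the default does not suit your workload; for example (the value 4 below is just an illustration, please check the translator options document for your GlusterFS version):<br><br>volume iot-0<br>type performance/io-threads<br>option thread-count 4<br>subvolumes brick0<br>end-volume<br><br>The same option applies to iot-1 and iot-ns below.<br> <br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">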
<br>
volume brick1<br>
type storage/posix<br>
option directory /export/brick1<br>
end-volume<br>
</blockquote><div><br>volume iot-1<br>type performance/io-threads<br>subvolumes brick1<br>end-volume<br> <br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
volume brick-ns<br>
type storage/posix<br>
option directory /export/brick-ns<br>
end-volume<br>
</blockquote><div><br>volume iot-ns<br>type performance/io-threads<br>subvolumes brick-ns<br>end-volume <br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
volume server<br>
type protocol/server<br>
subvolumes brick0 brick1 brick-ns<br>
option transport-type tcp/server<br>
#option auth.ip.brick0.allow 127.0.0.1,130.123.129.121,130.123.128.35,130.123.128.28 # this is what i want, but it doesn't seem to work<br>
option auth.ip.brick0.allow *<br>
option auth.ip.brick1.allow *<br>
option auth.ip.brick-ns.allow *</blockquote><div> option auth.ip.iot-0.allow *<br>
option auth.ip.iot-1.allow *<br>
option auth.ip.iot-ns.allow *<br> <br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><br>
end-volume<br>
<br>
volume client-orac-0<br>
type protocol/client<br>
option transport-type tcp/client<br>
option remote-host orac<br>
option remote-subvolume iot-0<br>
end-volume<br>
<br>
volume client-orac-ns<br>
type protocol/client<br>
option transport-type tcp/client<br>
option remote-host orac<br>
option remote-subvolume iot-ns<br>
end-volume<br>
<br>
volume afr-ns<br>
type cluster/afr<br>
subvolumes iot-ns client-orac-ns<br>
end-volume<br>
<br>
volume unify<br>
type cluster/unify<br>
option namespace afr-ns<br>
option scheduler nufa<br>
option nufa.local-volume-name iot-0,iot-1<br>
option nufa.limits.min-free-disk 5%<br>
subvolumes iot-0 iot-1 client-orac-0<br>
end-volume<br>
</blockquote><div><br>volume ra<br>type performance/read-ahead<br>subvolumes unify<br>end-volume<br> <br></div><div>volume ioc<br>type performance/io-cache<br>subvolumes ra<br>end-volume <br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
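</blockquote><div><br>If you also want write-behind, one possible placement (just a sketch, not tested on your setup; please verify the option names and values against the translator options document for your GlusterFS version) is on the client side, between unify and read-ahead:<br><br>volume wb<br>type performance/write-behind<br>option aggregate-size 128KB<br>option flush-behind on<br>subvolumes unify<br>end-volume<br><br>and then point read-ahead at wb by changing its subvolumes line from unify to wb. Similarly, read-ahead and io-cache accept tuning options such as page-size and cache-size if the defaults do not suit your workload.<br> <br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">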
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</a><br>
</blockquote></div><br><br clear="all"><br>-- <br>hard work often pays off after time, but laziness always pays off now<br>