Hi Filipe Maia,<br><br>The io-threads translator doesn't have a "cache-size" option; "cache-size" belongs to the io-cache translator. See the following page for the options currently available for each translator: <a href="http://gluster.org/docs/index.php/Translators_options">http://gluster.org/docs/index.php/Translators_options</a> <br>
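For example, you could drop "cache-size" from your "iot" volume and load an io-cache volume on top of it instead. This is an untested sketch based on your volfile; the "ioc" volume name is just an example:<br><br>volume ioc<br>type performance/io-cache<br>subvolumes iot # stack io-cache above io-threads<br>option cache-size 16MB # total cache size, size it to your available RAM<br>end-volume<br><br>If you add this, remember to point the next translator up the stack (write-behind, in your case) at "ioc" instead of "iot".<br>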
<br>And the following log message:<br>"2009-01-14 12:22:43 W [write-behind.c:1363:init] brick: aggregate-size<br>
is not zero, disabling flush-behind"<br><br>is harmless; in the current code base its log level has been changed to DEBUG.<br><br><div class="gmail_quote">On Wed, Jan 14, 2009 at 6:01 PM, Filipe Maia <span dir="ltr"><<a href="mailto:filipe@xray.bmc.uu.se">filipe@xray.bmc.uu.se</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Hi,<br>
<br>
I've tried to use the doc/examples/write-behind.vol and io-threads.vol<br>
examples in my unify configuration.<br>
Here's the glusterfs-server.vol:<br>
<br>
<br>
volume disk<br>
type storage/posix # POSIX FS translator<br>
option directory /export/data # Export this directory<br>
end-volume<br>
<br>
volume ns<br>
type storage/posix<br>
option directory /export/ns<br>
end-volume<br>
<br>
### The 'io-threads' translator gives threading behaviour to file I/O calls.<br>
# All other normal fops have default behaviour. Loading this on the<br>
# server side helps reduce network contention (which is otherwise<br>
# perceived as a GlusterFS hang).<br>
# One can load it on the client side to reduce the latency involved with a<br>
# slow network, when loaded below write-behind.<br>
volume iot<br>
type performance/io-threads<br>
subvolumes disk<br>
option thread-count 4 # default value is 1<br>
option cache-size 16MB # default is 64MB (This is per thread, so configure it<br>
# according to your RAM size and thread-count.<br>
end-volume<br>
<br>
<br>
### The 'write-behind' translator is a performance booster for write operations.<br>
# Best used on the client side, as its main intention is to reduce the network<br>
# latency incurred by each write operation.<br>
<br>
volume brick<br>
type performance/write-behind<br>
subvolumes iot<br>
option flush-behind on # default value is 'off'<br>
option window-size 2MB<br>
option aggregate-size 1MB # default value is 0<br>
end-volume<br>
<br>
# Volume name is server<br>
volume server<br>
type protocol/server<br>
option transport-type tcp<br>
option auth.addr.brick.allow *<br>
option auth.addr.ns.allow *<br>
subvolumes brick ns<br>
end-volume<br>
<br>
<br>
<br>
I'm getting the following warnings with glusterfs-1.4.0rc7:<br>
<br>
tintoretto:~# tail /var/log/glusterfsd.log<br>
2009-01-14 12:22:43 W [write-behind.c:1363:init] brick: aggregate-size<br>
is not zero, disabling flush-behind<br>
2009-01-14 12:22:43 W [glusterfsd.c:416:_log_if_option_is_invalid]<br>
iot: option 'cache-size' is not recognized<br>
tintoretto:~#<br>
<br>
What am I doing wrong?<br>
<br>
Filipe<br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</a><br>
</blockquote></div><br><br clear="all"><br>-- <br>Harshavardhana<br>[y4m4 on #<a href="mailto:gluster@irc.freenode.net">gluster@irc.freenode.net</a>]<br>"Samudaya TantraShilpi"<br>Z Research Inc - <a href="http://www.zresearch.com">http://www.zresearch.com</a><br>
<br>