Hi Roland,<br><br>* replies inline *<br><div class="gmail_quote">On Mon, Feb 15, 2010 at 10:18 PM, Roland Fischer <span dir="ltr"><<a href="mailto:roland.fischer@xidras.com">roland.fischer@xidras.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Hello,<br>
<br>
I am having some trouble with GlusterFS v3.0.0 and Samba.<br>
Does anybody have experience with GlusterFS and Samba? Maybe my config files or tuning options are bad?<br>
<br>
We use Xen 3.4.1 and GlusterFS 3.0.0 with client-side replication.<br>
<br>
One domU, itself running on GlusterFS, should export another GlusterFS volume via Samba, but the performance is bad.<br>
<br>
servervolfile:<br>
cat export-web-data-client_repl.vol<br>
# export-web-data-client_repl<br>
# gfs-01-01 /GFS/web-data<br>
# gfs-01-02 /GFS/web-data<br>
<br>
volume posix<br>
type storage/posix<br>
option directory /GFS/web-data<br>
end-volume<br>
<br>
volume locks<br>
type features/locks<br>
subvolumes posix<br>
end-volume<br>
<br>
volume writebehind<br>
type performance/write-behind<br>
option cache-size 4MB<br>
option flush-behind on<br>
subvolumes locks<br>
end-volume<br>
<br>
volume web-data<br>
type performance/io-threads<br>
option thread-count 32<br>
subvolumes writebehind<br>
end-volume<br>
<br></blockquote><div>May we know this reason of io-threads over write-behind have you seen any benefits in using this way. If you are not sure i would suggest moving writebehind over io-threads. <br> <br>can you use volume files generated using volgen in case you are not sure on which way to stack the translators up?.<br>
<br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
volume server<br>
type protocol/server<br>
option transport-type tcp<br>
option transport.socket.listen-port 7000<br>
option auth.addr.web-data.allow *<br>
subvolumes web-data<br>
end-volume<br>
<br>
clientvolfile:<br>
cat /etc/glusterfs/mount-web-data-client_repl.vol<br>
volume gfs-01-01<br>
type protocol/client<br>
option transport-type tcp<br>
option remote-host gfs-01-01<br>
option remote-port 7000<br>
option ping-timeout 5<br>
option remote-subvolume web-data<br>
end-volume<br>
<br>
volume gfs-01-02<br>
type protocol/client<br>
option transport-type tcp<br>
option remote-host gfs-01-02<br>
option remote-port 7000<br>
option ping-timeout 5<br>
option remote-subvolume web-data<br>
end-volume<br>
<br>
volume web-data-replicate<br>
type cluster/replicate<br>
subvolumes gfs-01-01 gfs-01-02<br>
end-volume<br>
<br>
volume readahead<br>
type performance/read-ahead<br>
option page-count 16 # cache per file = (page-count x page-size)<br>
subvolumes web-data-replicate<br>
end-volume<br>
<br></blockquote><div>what is the client side and server side TOTAL ram ?. How many servers and clients do you have?. Coz having read-ahead count on 16 is no good for an ethernet link, you might be choking up the bandwidth unnecessarily. <br>
<br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
volume writebehind<br>
type performance/write-behind<br>
option cache-size 2048KB<br>
option flush-behind on<br>
subvolumes readahead<br>
end-volume<br>
<br>
volume iocache<br>
type performance/io-cache<br>
option cache-size 256MB #1GB supported<br>
option cache-timeout 1<br>
subvolumes writebehind<br>
end-volume<br>
<br></blockquote><div>Are your all datasets worth only 256MB? <br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
volume quickread<br>
type performance/quick-read<br>
option cache-timeout 1<br>
option max-file-size 64kB<br>
subvolumes iocache<br>
end-volume<br>
<br>
volume statprefetch<br>
type performance/stat-prefetch<br>
subvolumes quickread<br>
end-volume<br>
<br>
<br>
Thank you very much<br>
regards,<br>
Roland<br>
<br></blockquote><div>Even with all of this, we would need to know the backend disk performance with O_DIRECT to properly analyse how much server-side buffering is actually gaining you.<br><br>Also, have you tried the options below in your smb.conf?<br>
<br>socket options = TCP_NODELAY IPTOS_LOWDELAY SO_SNDBUF=131072 SO_RCVBUF=131072<br>max xmit = 131072<br>getwd cache = yes<br>use sendfile = yes<br><br>SO_RCVBUF and SO_SNDBUF can be adjusted to suit your needs.<br><br>
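For the backend number, a quick O_DIRECT throughput check with GNU dd on each brick could look like this (the BRICK path is an assumption; point it at the real brick directory, and note it writes a 64MB test file):

```shell
# BRICK is an assumption -- set it to the real brick directory, e.g. /GFS/web-data
BRICK=${BRICK:-.}
# Write with O_DIRECT, bypassing the page cache; dd prints the throughput
dd if=/dev/zero of="$BRICK/ddtest" bs=1M count=64 oflag=direct 2>&1 | tail -n1
# Read it back with O_DIRECT as well
dd if="$BRICK/ddtest" of=/dev/null bs=1M iflag=direct 2>&1 | tail -n1
# Clean up the test file
rm -f "$BRICK/ddtest"
```

Comparing these figures with throughput through the GlusterFS mount shows how much the translator stack and network are costing you.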
Thanks<br>--<br>Harshavardhana<br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
<br>
<br>
_______________________________________________<br>
Gluster-devel mailing list<br>
<a href="mailto:Gluster-devel@nongnu.org" target="_blank">Gluster-devel@nongnu.org</a><br>
<a href="http://lists.nongnu.org/mailman/listinfo/gluster-devel" target="_blank">http://lists.nongnu.org/mailman/listinfo/gluster-devel</a><br>
</blockquote></div><br>