<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content=text/html;charset=iso-8859-1 http-equiv=Content-Type>
<META name=GENERATOR content="MSHTML 8.00.6001.18241"></HEAD>
<BODY style="PADDING-LEFT: 10px; PADDING-RIGHT: 10px; PADDING-TOP: 15px"
id=MailContainerBody leftMargin=0 topMargin=0 bgColor=#ffffff
CanvasTabStop="true" name="Compose message area">
<DIV><FONT size=2 face=宋体>All,</FONT></DIV>
<DIV><FONT size=2 face=宋体></FONT> </DIV>
<DIV><FONT size=2 face=宋体>    It seems this problem is a difficult one.</FONT></DIV>
<DIV><FONT size=2 face=宋体></FONT> </DIV>
<DIV><FONT size=2 face=宋体>    I ran into a new problem while testing.</FONT></DIV>
<DIV><FONT size=2 face=宋体></FONT> </DIV>
<DIV><FONT size=2 face=宋体>    When I kill all the storage nodes, the client still tries to send data and does not quit.</FONT></DIV>
<DIV><FONT size=2 face=宋体></FONT> </DIV>
<DIV><FONT size=2 face=宋体>Thanks,</FONT></DIV>
<DIV><FONT size=2 face=宋体>Alfred</FONT></DIV>
<DIV style="FONT: 10pt Tahoma">
<DIV><BR></DIV>
<DIV style="BACKGROUND: #f5f5f5">
<DIV style="font-color: black"><B>From:</B> <A title="mailto:yangyaomin@gmail.com CTRL + Click to follow link" href="mailto:yangyaomin@gmail.com">yaomin @ gmail</A> </DIV>
<DIV><B>Sent:</B> Monday, January 05, 2009 10:52 PM</DIV>
<DIV><B>To:</B> <A title="mailto:krishna@zresearch.com CTRL + Click to follow link" href="mailto:krishna@zresearch.com">Krishna Srinivas</A> </DIV>
<DIV><B>Cc:</B> <A title="mailto:gluster-devel@nongnu.org CTRL + Click to follow link" href="mailto:gluster-devel@nongnu.org">gluster-devel@nongnu.org</A> </DIV>
<DIV><B>Subject:</B> Re: [Gluster-devel] Cascading different translator doesn't
work as expectation</DIV></DIV></DIV>
<DIV><BR></DIV>
<DIV>Krishna,<BR></DIV>
<DIV>    <FONT face=宋体>Thank you for your quick response.</FONT></DIV>
<DIV><FONT face=宋体></FONT><BR>    There are two log entries in the client's log file from when the client was set up.<BR>    <FONT size=2> </FONT><FONT color=#0000ff><FONT size=2>2009-01-05 18:44:59 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0<BR>    2009-01-05 18:48:04 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0<BR></FONT> </FONT></DIV>
<DIV><FONT size=2 face=宋体> <FONT size=3>There is no information in the storage nodes' log files.</FONT></FONT></DIV>
<DIV><FONT size=2 face=宋体><FONT size=3></FONT></FONT> </DIV>
<DIV><FONT size=2 face=宋体><FONT size=3>    Although I changed the scheduler from ALU to RR, only the No.3 (192.168.13.5) and No.4 (192.168.13.7) storage nodes are working.</FONT></FONT></DIV>
<DIV><FONT size=2 face=宋体><FONT size=3></FONT></FONT> </DIV>
<DIV><FONT size=2 face=宋体><FONT size=3>    Each machine has 2 GB of memory.</FONT></FONT></DIV>
<DIV> </DIV>
<DIV><FONT face=宋体>Thanks,</FONT></DIV>
<DIV><FONT size=2 face=宋体><FONT size=3>Alfred </FONT></FONT></DIV>
<DIV><FONT size=2 face=宋体></FONT> </DIV>
<DIV><FONT size=2 face=宋体></FONT> </DIV>
<DIV><FONT face=宋体>The following is the server-side vol file used on each storage node.</FONT></DIV>
<DIV><FONT size=2 face=宋体></FONT> </DIV>
<DIV><FONT size=2 face=宋体>##############################################<BR>### GlusterFS Server Volume Specification ##<BR>##############################################</FONT></DIV>
<DIV><FONT size=2 face=宋体></FONT> </DIV>
<DIV><FONT size=2 face=宋体>#### CONFIG FILE RULES:<BR>### - "#" is the comment character.<BR>### - The config file is case sensitive.<BR>### - Options within a volume block can be in any order.<BR>### - Spaces or tabs are used as delimiters within a line.<BR>### - Multiple values to an option are ':' delimited.<BR>### - Each option should end within a line.<BR>### - Missing or commented fields assume default values.<BR>### - Blank/commented lines are allowed.<BR>### - Sub-volumes must be defined before they are referred to.</FONT></DIV>
<DIV><FONT size=2 face=宋体></FONT> </DIV>
<DIV><FONT size=2 face=宋体>volume name_space<BR>  type storage/posix<BR>  option directory /locfsb/name_space<BR>end-volume</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=宋体>volume brick1<BR>  type storage/posix              # POSIX FS translator<BR>  option directory /locfs/brick   # Export this directory<BR>end-volume</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=宋体>volume brick2<BR>  type storage/posix              # POSIX FS translator<BR>  option directory /locfsb/brick  # Export this directory<BR>end-volume</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=宋体>volume server<BR>  type protocol/server<BR>  option transport-type tcp/server    # For TCP/IP transport<BR># option listen-port 6996            # Default is 6996<BR># option client-volume-filename /etc/glusterfs/glusterfs-client.vol<BR>  subvolumes brick1 brick2 name_space<BR>  option auth.ip.brick1.allow 192.168.13.*      # Allow access to "brick1" volume<BR>  option auth.ip.brick2.allow 192.168.13.*      # Allow access to "brick2" volume<BR>  option auth.ip.name_space.allow 192.168.13.*  # Allow access to "name_space" volume<BR>end-volume</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=宋体>### Add io-threads feature<BR>volume iot<BR>  type performance/io-threads<BR>  option thread-count 1    # default is 1<BR>  option cache-size 16MB   # 64MB<BR>  subvolumes brick1        # bricks<BR>end-volume<BR> <BR>### Add readahead feature<BR>volume readahead<BR>  type performance/read-ahead<BR>  option page-size 1MB     # unit in bytes<BR>  option page-count 4      # cache per file = (page-count x page-size)<BR>  subvolumes iot<BR>end-volume</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=宋体>### Add IO-Cache feature<BR>volume iocache<BR>  type performance/io-cache<BR>  option page-size 256KB<BR>  option page-count 8<BR>  subvolumes readahead<BR>end-volume</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=宋体>### Add writeback feature<BR>volume writeback<BR>  type performance/write-behind<BR>  option aggregate-size 1MB<BR>  option window-size 3MB   # default is 0bytes<BR># option flush-behind on  # default is 'off'<BR>  subvolumes iocache<BR>end-volume</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=宋体>### Add io-threads feature<BR>volume iot2<BR>  type performance/io-threads<BR>  option thread-count 1    # default is 1<BR>  option cache-size 16MB   # 64MB<BR>  subvolumes brick2        # bricks<BR>end-volume<BR> <BR>### Add readahead feature (volume names must be unique, hence "readahead2")<BR>volume readahead2<BR>  type performance/read-ahead<BR>  option page-size 1MB     # unit in bytes<BR>  option page-count 4      # cache per file = (page-count x page-size)<BR>  subvolumes iot2<BR>end-volume</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=宋体>### Add IO-Cache feature<BR>volume iocache2<BR>  type performance/io-cache<BR>  option page-size 256KB<BR>  option page-count 8<BR>  subvolumes readahead2<BR>end-volume</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=宋体>### Add writeback feature<BR>volume writeback2<BR>  type performance/write-behind<BR>  option aggregate-size 1MB<BR>  option window-size 3MB   # default is 0bytes<BR># option flush-behind on  # default is 'off'<BR>  subvolumes iocache2<BR>end-volume<BR></FONT></DIV>
<DIV><FONT size=2 face=宋体></FONT> </DIV>
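For reference, a client-side vol file that builds the cascade described below (unify over stripe over AFR) might look roughly like this sketch. It is illustrative only, not the exact file used here: only storage nodes #3 (192.168.13.5) and #4 (192.168.13.7) have known addresses, all volume names are hypothetical, and the option names follow GlusterFS 1.x client syntax.

```
### Client-side cascade (sketch): unify (alu) -> stripe -> AFR -> remote bricks
### Only the addresses of nodes #3 and #4 are known from this thread.

volume node3-brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.13.5      # storage node #3
  option remote-subvolume brick1
end-volume

volume node4-brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.13.7      # storage node #4
  option remote-subvolume brick1
end-volume

### ... protocol/client volumes for the remaining bricks (nodes #1/#2,
### and each node's brick2) and for name_space are defined the same way.

volume afr2                            # replicate one pair of bricks
  type cluster/afr
  subvolumes node3-brick1 node4-brick1
end-volume

### ... afr1, afr3 and afr4 over the other brick pairs ...

volume stripe1                         # stripe across two AFR pairs
  type cluster/stripe
  option block-size *:1MB
  subvolumes afr1 afr2
end-volume

### ... stripe2 over afr3 and afr4 ...

volume unify0                          # unify the two stripes
  type cluster/unify
  option scheduler alu                 # or rr, for comparison
  option namespace ns                  # ns = protocol/client to name_space
  subvolumes stripe1 stripe2
end-volume
```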
<DIV><BR>--------------------------------------------------<BR>From: "Krishna
Srinivas" <krishna@zresearch.com><BR>Sent: Monday, January 05, 2009 2:07
PM<BR>To: "yaomin @ gmail" <yangyaomin@gmail.com><BR>Cc:
<gluster-devel@nongnu.org><BR>Subject: Re: [Gluster-devel] Cascading
different translator doesn't work as expectation<BR><BR>> Alfred,<BR>>
<BR>> Can you check client logs for any error messages?<BR>> You are using
ALU; it might be creating the files on the disks with the most free<BR>> space (which would be your storage nodes 3 and 4).<BR>> You can check with the RR scheduler to see if
all the nodes are participating.<BR>> <BR>> How much memory do the servers
and client use?<BR>> <BR>> Krishna<BR>> <BR>> On Sun, Jan 4, 2009 at
6:48 PM, yaomin @ gmail <yangyaomin@gmail.com> wrote:<BR>>>
Hey,<BR>>><BR>>> I tried the following cascading mode to improve throughput, but the<BR>>> result is bad. There are four storage nodes and each exports 2<BR>>> directories.<BR>>><BR>>> The cascade (all translators run on the client):<BR>>> unify (alu) over two stripe translators, where<BR>>> stripe 1 = AFR (#1-1, #2-1) + AFR (#3-1, #4-1)<BR>>> stripe 2 = AFR (#1-2, #2-2) + AFR (#3-2, #4-2)<BR>>> (#N-M is directory M exported by storage node N)<BR>>> When
I use iozone to test with 10 concurrent processes, I only find the<BR>>> #3 and #4 storage nodes working; the other 2 nodes do nothing. I<BR>>> expected all 4 storage nodes to work simultaneously, but that is not<BR>>> what happens. What is wrong?<BR>>> Another issue is that memory is exhausted on the storage nodes when<BR>>> writing and on the client when reading, which is not what I want. Is<BR>>> there any way to limit the usage of<BR>>> memory?<BR>>><BR>>><BR>>> Best Wishes,<BR>>>
Alfred<BR>>></DIV></BODY></HTML>