<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content="text/html; charset=gb2312" http-equiv=Content-Type>
<META name=GENERATOR content="MSHTML 8.00.6001.19088">
<STYLE>@font-face {
        font-family: 宋体;
}
@font-face {
        font-family: Verdana;
}
@font-face {
        font-family: @宋体;
}
@page Section1 {size: 595.3pt 841.9pt; margin: 72.0pt 90.0pt 72.0pt 90.0pt; layout-grid: 15.6pt; }
P.MsoNormal {
        TEXT-JUSTIFY: inter-ideograph; TEXT-ALIGN: justify; MARGIN: 0cm 0cm 0pt; FONT-FAMILY: "Times New Roman"; FONT-SIZE: 10.5pt
}
LI.MsoNormal {
        TEXT-JUSTIFY: inter-ideograph; TEXT-ALIGN: justify; MARGIN: 0cm 0cm 0pt; FONT-FAMILY: "Times New Roman"; FONT-SIZE: 10.5pt
}
DIV.MsoNormal {
        TEXT-JUSTIFY: inter-ideograph; TEXT-ALIGN: justify; MARGIN: 0cm 0cm 0pt; FONT-FAMILY: "Times New Roman"; FONT-SIZE: 10.5pt
}
A:link {
        COLOR: blue; TEXT-DECORATION: underline
}
SPAN.MsoHyperlink {
        COLOR: blue; TEXT-DECORATION: underline
}
A:visited {
        COLOR: purple; TEXT-DECORATION: underline
}
SPAN.MsoHyperlinkFollowed {
        COLOR: purple; TEXT-DECORATION: underline
}
SPAN.EmailStyle17 {
        FONT-STYLE: normal; FONT-FAMILY: Verdana; COLOR: windowtext; FONT-WEIGHT: normal; TEXT-DECORATION: none; mso-style-type: personal-compose
}
DIV.Section1 {
        page: Section1
}
UNKNOWN {
        FONT-SIZE: 10pt
}
BLOCKQUOTE {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em
}
OL {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
UL {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
</STYLE>
</HEAD>
<BODY style="MARGIN: 10px; FONT-FAMILY: verdana; FONT-SIZE: 10pt"><FONT
color=#000000 size=2 face=Verdana>
<DIV> </DIV>
<DIV><FONT size=2 face=Verdana>
<DIV>Dear all,</DIV>
<DIV> </DIV>
<DIV> </DIV>
<DIV></DIV>
<DIV>Recently I installed GlusterFS 3.2.2 from source and ran a number of I/O performance tests, tuning the cache-size and cache-refresh-timeout options. I found that cache-size does not take effect on a volume with multiple bricks, which is strange and has me stuck.</DIV>
<DIV>For example, volume info:</DIV>
<DIV>Volume Name: strip-vol</DIV>
<DIV>Type: Stripe</DIV>
<DIV>Status: Started</DIV>
<DIV>Number of Bricks: 2</DIV>
<DIV>Transport-type: tcp</DIV>
<DIV>Bricks:</DIV>
<DIV>Brick1: cloud04:/web-data01/data02 //multi bricks</DIV>
<DIV>Brick2: cloud04:/web-data01/data03</DIV>
<DIV>Options Reconfigured:</DIV>
<DIV>performance.cache-refresh-timeout: 60</DIV>
<DIV>performance.cache-size: 1024000000</DIV>
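<DIV>For reference, the two options above would be set with the gluster CLI roughly as follows (a sketch; the volume name strip-vol is taken from the volume info above, and the commands must run on a server node with glusterd available):</DIV>

```shell
# Apply the io-cache tuning shown under "Options Reconfigured".
gluster volume set strip-vol performance.cache-size 1024000000
gluster volume set strip-vol performance.cache-refresh-timeout 60

# Confirm the options took effect.
gluster volume info strip-vol
```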
<DIV> </DIV>
<DIV>Test script:</DIV>
<DIV>for((i=0;i<3;i++)); do dd if=f20M of=/dev/zero bs=1M; sleep 2;done </DIV>
<DIV> </DIV>
<DIV>test results:</DIV>
<DIV> </DIV>
<DIV>First test:</DIV>
<DIV>Description: striped volume with multiple bricks</DIV>
<DIV> </DIV>
<DIV>[root@cloud04 testfs]# for((i=0;i<3;i++)); do dd if=f20M of=/dev/zero bs=1M; sleep 2;done</DIV>
<DIV>20+0 records in</DIV>
<DIV>20+0 records out</DIV>
<DIV>20971520 bytes (21 MB) copied, 0.107977 seconds, 89 MB/s</DIV>
<DIV>20+0 records in</DIV>
<DIV>20+0 records out</DIV>
<DIV>20971520 bytes (21 MB) copied, 0.108642 seconds, 85 MB/s</DIV>
<DIV>20+0 records in</DIV>
<DIV>20+0 records out</DIV>
<DIV>20971520 bytes (21 MB) copied, 0.10802 seconds, 90 MB/s</DIV>
<DIV> </DIV>
<DIV>The dd test ran over 1 Gb/s Ethernet; the ~90 MB/s throughput suggests the file was fetched over the network on every read, even though repeated reads of the same file should have been served from the cache.</DIV>
<DIV> </DIV>
<DIV>Second test:</DIV>
<DIV> </DIV>
<DIV>Description: striped volume with only one brick on a single server</DIV>
<DIV>Volume Name: test-vol</DIV>
<DIV>Type: Stripe</DIV>
<DIV>Status: Started</DIV>
<DIV>Number of Bricks: 1</DIV>
<DIV>Transport-type: tcp</DIV>
<DIV>Bricks:</DIV>
<DIV>Brick1: cloud04:/web-data01/data02 //single brick</DIV>
<DIV>Options Reconfigured:</DIV>
<DIV>performance.cache-refresh-timeout: 60</DIV>
<DIV>performance.cache-size: 1024000000</DIV>
<DIV> </DIV>
<DIV>[root@cloud04 testfs]# for((i=0;i<3;i++)); do dd if=f20M of=/dev/zero bs=1M; sleep 2;done</DIV>
<DIV>20+0 records in</DIV>
<DIV>20+0 records out</DIV>
<DIV>20971520 bytes (21 MB) copied, 0.107977 seconds, 89 MB/s</DIV>
<DIV>20+0 records in</DIV>
<DIV>20+0 records out</DIV>
<DIV>20971520 bytes (21 MB) copied, 0.108642 seconds, 899 MB/s</DIV>
<DIV>20+0 records in</DIV>
<DIV>20+0 records out</DIV>
<DIV>20971520 bytes (21 MB) copied, 0.10802 seconds, 897 MB/s</DIV>
<DIV> </DIV>
<DIV> </DIV>
<DIV>This result shows the file was read from the cache, as expected.</DIV>
<DIV> </DIV>
<DIV> </DIV>
<DIV>Do you know why the first test, with multiple bricks, gives such low performance? I debugged and traced both volumes and found that a volume with a single brick caches a file on first access, so the next read of the same file is served from the cache rather than over the network. On a volume with multiple bricks, however, the cache feature does not work. Could you give me some configuration tips to optimize I/O performance on a volume with multiple bricks?</DIV>
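<DIV>As a generic way to distinguish cached from uncached reads (a sketch only, not specific to GlusterFS; the file and path here are made up, and for a real check the file would live on the mounted volume):</DIV>

```shell
# Create a 20 MB test file, then read it three times in a row.
# If any cache (kernel page cache or io-cache) is effective, the 2nd
# and 3rd reads should report much higher throughput than the 1st.
dd if=/dev/zero of=/tmp/f20M bs=1M count=20 2>/dev/null
for i in 1 2 3; do
    dd if=/tmp/f20M of=/dev/null bs=1M 2>&1 | tail -n 1
done
rm -f /tmp/f20M
```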
<DIV> </DIV>
<DIV>Thank you in advance for any help.</DIV>
<DIV> </DIV>
<DIV>Cheers,</DIV>
<DIV>Qiulan</DIV>
<DIV>
<DIV>2011-10-18</DIV></DIV>
<DIV>====================================================================</DIV>
<DIV>Computing center,the Institute of High Energy Physics, CAS, China</DIV>
<DIV>Qiulan Huang Tel: (+86) 10 8823 6010-105</DIV>
<DIV>P.O. Box 918-7 Fax: (+86) 10 8823 6839</DIV>
<DIV>Beijing 100049 P.R. China Email: Qiulan.Huang@ihep.ac.cn</DIV>
<DIV>===================================================================</DIV>
<DIV> </DIV>
<DIV></DIV></FONT></DIV></FONT></BODY></HTML>