<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">I compared the profile dumps while the write and read tests were running separately:<div><div><br></div><div>writing:</div><div>------------------------------------------------</div><div><div>Interval 58 Stats:</div><div> Block Size: 65536b+ 131072b+ </div><div> No. of Reads: 0 0 </div><div>No. of Writes: 27120 10500 </div><div> %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop</div><div> --------- ----------- ----------- ----------- ------------ ----</div><div> 100.00 133.51 us 36.00 us 1339.00 us 37619 WRITE</div><div> </div><div> Duration: 12 seconds</div><div> Data Read: 0 bytes</div><div>Data Written: 3153854464 bytes</div></div><div>------------------------------------------------</div><div><br></div><div><br></div><div>read:</div><div>------------------------------------------------</div><div><div>Interval 63 Stats:</div><div> Block Size: 131072b+ </div><div> No. of Reads: 3529 </div><div>No. of Writes: 0 </div><div> %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop</div><div> --------- ----------- ----------- ----------- ------------ ----</div><div> 0.54 87.86 us 68.00 us 127.00 us 42 FSTAT</div><div> 99.46 193.68 us 89.00 us 2121.00 us 3529 READ</div><div> </div><div> Duration: 12 seconds</div><div> Data Read: 462553088 bytes</div><div>Data Written: 0 bytes</div></div><div>------------------------------------------------</div><div><br></div><div><br></div><div><br></div><div>profile dumps from the two server bricks:</div><div>================================</div><div><div>Brick: 192.168.101.133:/dcsdata/d0</div><div>----------------------------------</div><div>Cumulative Stats:</div><div> Block Size: 8192b+ 16384b+ 32768b+ </div><div> No. of Reads: 0 0 0 </div><div>No. of Writes: 2 1 1 </div><div> </div><div> Block Size: 65536b+ 131072b+ 262144b+ </div><div> No. 
of Reads: 0 1613832 0 </div><div>No. of Writes: 2282474 1148962 227 </div><div> </div><div> %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop</div><div> --------- ----------- ----------- ----------- ------------ ----</div><div> 0.00 0.00 us 0.00 us 0.00 us 14 FORGET</div><div> 0.00 0.00 us 0.00 us 0.00 us 39 RELEASE</div><div> 0.00 0.00 us 0.00 us 0.00 us 114 RELEASEDIR</div><div> 0.00 84.50 us 54.00 us 115.00 us 2 OPENDIR</div><div> 0.00 79.00 us 52.00 us 127.00 us 4 OPEN</div><div> 0.00 47.00 us 14.00 us 130.00 us 8 FLUSH</div><div> 0.00 342.00 us 311.00 us 373.00 us 2 CREATE</div><div> 0.00 104.77 us 26.00 us 281.00 us 13 STATFS</div><div> 0.01 131.75 us 35.00 us 285.00 us 93 LOOKUP</div><div> 0.02 7446.00 us 104.00 us 29191.00 us 4 READDIRP</div><div> 0.07 2784.89 us 49.00 us 49224.00 us 36 GETXATTR</div><div> 0.20 64.49 us 29.00 us 164.00 us 4506 FSTAT</div><div> 1.07 399482.25 us 361616.00 us 450370.00 us 4 UNLINK</div><div> 42.87 167.36 us 56.00 us 44827.00 us 381080 READ</div><div> 55.76 71.51 us 35.00 us 7032.00 us 1159912 WRITE</div><div> </div><div> Duration: 22156 seconds</div><div> Data Read: 211528187904 bytes</div><div>Data Written: 300276908032 bytes</div><div> </div><div>Interval 71 Stats:</div><div> %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop</div><div> --------- ----------- ----------- ----------- ------------ ----</div><div> 0.00 0.00 us 0.00 us 0.00 us 1 RELEASEDIR</div><div> 0.18 54.00 us 54.00 us 54.00 us 1 OPENDIR</div><div> 1.05 107.33 us 40.00 us 217.00 us 3 STATFS</div><div> 2.90 126.57 us 81.00 us 256.00 us 7 LOOKUP</div><div> 95.88 14669.00 us 147.00 us 29191.00 us 2 READDIRP</div><div> </div><div> Duration: 581 seconds</div><div> Data Read: 0 bytes</div><div>Data Written: 0 bytes</div><div> </div><div>Brick: 192.168.101.134:/dcsdata/d0</div><div>----------------------------------</div><div>Cumulative Stats:</div><div> Block Size: 8192b+ 16384b+ 32768b+ </div><div> No. of Reads: 0 0 0 </div><div>No. 
of Writes: 2 3 24 </div><div> </div><div> Block Size: 65536b+ 131072b+ 262144b+ </div><div> No. of Reads: 22 1563063 0 </div><div>No. of Writes: 1522412 1525007 184 </div><div> </div><div> %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop</div><div> --------- ----------- ----------- ----------- ------------ ----</div><div> 0.00 0.00 us 0.00 us 0.00 us 14 FORGET</div><div> 0.00 0.00 us 0.00 us 0.00 us 39 RELEASE</div><div> 0.00 0.00 us 0.00 us 0.00 us 114 RELEASEDIR</div><div> 0.00 116.50 us 111.00 us 122.00 us 2 OPENDIR</div><div> 0.00 69.25 us 23.00 us 95.00 us 8 FLUSH</div><div> 0.00 418.00 us 285.00 us 551.00 us 2 CREATE</div><div> 0.00 239.25 us 101.00 us 396.00 us 4 READDIRP</div><div> 0.00 93.00 us 39.00 us 249.00 us 13 STATFS</div><div> 0.01 142.89 us 78.00 us 241.00 us 87 LOOKUP</div><div> 0.09 48402.25 us 114.00 us 99173.00 us 4 OPEN</div><div> 0.19 10974.42 us 60.00 us 345979.00 us 36 GETXATTR</div><div> 0.20 94.33 us 41.00 us 200.00 us 4387 FSTAT</div><div> 0.85 440436.25 us 381525.00 us 582989.00 us 4 UNLINK</div><div> 35.80 193.96 us 57.00 us 23312.00 us 380869 READ</div><div> 62.86 134.89 us 29.00 us 9976.00 us 961593 WRITE</div><div> </div><div> Duration: 22155 seconds</div><div> Data Read: 204875400152 bytes</div><div>Data Written: 299728837956 bytes</div></div><div>================================</div><div><br></div><div><br></div><div>Kane</div><div><br><div><div>On 2013-9-18, at 2:45 PM, Anand Avati <<a href="mailto:avati@gluster.org">avati@gluster.org</a>> wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><div dir="ltr">Can you get the volume profile dumps for both the runs and compare them?<div><br></div><div>Avati</div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Sep 17, 2013 at 10:46 PM, kane <span dir="ltr"><<a href="mailto:stef_9k@163.com" target="_blank">stef_9k@163.com</a>></span> wrote:<br>
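As a quick sanity check on the interval dumps above, the byte counts and durations translate directly into per-brick throughput; a minimal sketch, with the byte counts copied from the two 12-second intervals (each a single brick):

```python
# Rough per-brick throughput from the 12-second profile intervals quoted above.
# With two bricks in the distribute volume, client-side totals should be
# roughly double these figures.

def mb_per_sec(nbytes: int, seconds: int) -> float:
    """Bytes transferred over an interval, expressed in MiB/s."""
    return nbytes / seconds / (1024 * 1024)

write_rate = mb_per_sec(3153854464, 12)  # "Data Written" from Interval 58
read_rate = mb_per_sec(462553088, 12)    # "Data Read" from Interval 63

print(f"write: {write_rate:.1f} MB/s")   # roughly 250 MB/s per brick
print(f"read:  {read_rate:.1f} MB/s")    # roughly 37 MB/s per brick
```

Doubled across the two bricks, these figures are roughly consistent with the numbers reported later in the thread: ~500 MB/s write versus ~77 MB/s read through the Samba VFS path.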
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I have already set "kernel oplocks = no" in smb.conf; these are the global settings from my original smb.conf file:<br>
<div class="im">[global]<br>
workgroup = MYGROUP<br>
server string = DCS Samba Server<br>
log file = /var/log/samba/log.vfs<br>
max log size = 500000<br>
</div><div class="im"> aio read size = 262144<br>
aio write size = 262144<br>
aio write behind = true<br>
</div><div class="im"> security = user<br>
passdb backend = tdbsam<br>
load printers = yes<br>
cups options = raw<br>
read raw = yes<br>
write raw = yes<br>
max xmit = 262144<br>
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144 SO_SNDBUF=262144<br>
</div># max protocol = SMB2<br>
<div class="im"> kernel oplocks = no<br>
stat cache = no<br>
<br>
</div>thank you<br>
<span class="HOEnZb"><font color="#888888">-Kane<br>
On 2013-9-18, at 1:38 PM, Anand Avati <<a href="mailto:avati@redhat.com">avati@redhat.com</a>> wrote:<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
> On 9/17/13 10:34 PM, kane wrote:<br>
>> Hi Anand,<br>
>><br>
>> I use 2 gluster servers; this is my volume info:<br>
>> Volume Name: soul<br>
>> Type: Distribute<br>
>> Volume ID: 58f049d0-a38a-4ebe-94c0-086d492bdfa6<br>
>> Status: Started<br>
>> Number of Bricks: 2<br>
>> Transport-type: tcp<br>
>> Bricks:<br>
>> Brick1: 192.168.101.133:/dcsdata/d0<br>
>> Brick2: 192.168.101.134:/dcsdata/d0<br>
>><br>
>> each brick uses a RAID 5 logical disk built from 8 * 2 TB SATA HDDs.<br>
>><br>
>> smb.conf:<br>
>> [gvol]<br>
>> comment = For samba export of volume test<br>
>> vfs objects = glusterfs<br>
>> glusterfs:volfile_server = localhost<br>
>> glusterfs:volume = soul<br>
>> path = /<br>
>> read only = no<br>
>> guest ok = yes<br>
>><br>
>> this my testparm result:<br>
>> [global]<br>
>> workgroup = MYGROUP<br>
>> server string = DCS Samba Server<br>
>> log file = /var/log/samba/log.vfs<br>
>> max log size = 500000<br>
>> max xmit = 262144<br>
>> socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144<br>
>> SO_SNDBUF=262144<br>
>> stat cache = No<br>
>> kernel oplocks = No<br>
>> idmap config * : backend = tdb<br>
>> aio read size = 262144<br>
>> aio write size = 262144<br>
>> aio write behind = true<br>
>> cups options = raw<br>
>><br>
>> on the client, mount the smb share via cifs at /mnt/vfs,<br>
>> then run iozone in the cifs mount dir "/mnt/vfs":<br>
>> $ ./iozone -s 10G -r 128k -i0 -i1 -t 4<br>
>> File size set to 10485760 KB<br>
>> Record Size 128 KB<br>
>> Command line used: ./iozone -s 10G -r 128k -i0 -i1 -t 4<br>
>> Output is in Kbytes/sec<br>
>> Time Resolution = 0.000001 seconds.<br>
>> Processor cache size set to 1024 Kbytes.<br>
>> Processor cache line size set to 32 bytes.<br>
>> File stride size set to 17 * record size.<br>
>> Throughput test with 4 processes<br>
>> Each process writes a 10485760 Kbyte file in 128 Kbyte records<br>
>><br>
>> Children see throughput for 4 initial writers = 534315.84 KB/sec<br>
>> Parent sees throughput for 4 initial writers = 519428.83 KB/sec<br>
>> Min throughput per process = 133154.69 KB/sec<br>
>> Max throughput per process = 134341.05 KB/sec<br>
>> Avg throughput per process = 133578.96 KB/sec<br>
>> Min xfer = 10391296.00 KB<br>
>><br>
>> Children see throughput for 4 rewriters = 536634.88 KB/sec<br>
>> Parent sees throughput for 4 rewriters = 522618.54 KB/sec<br>
>> Min throughput per process = 133408.80 KB/sec<br>
>> Max throughput per process = 134721.36 KB/sec<br>
>> Avg throughput per process = 134158.72 KB/sec<br>
>> Min xfer = 10384384.00 KB<br>
>><br>
>> Children see throughput for 4 readers = 77403.54 KB/sec<br>
>> Parent sees throughput for 4 readers = 77402.86 KB/sec<br>
>> Min throughput per process = 19349.42 KB/sec<br>
>> Max throughput per process = 19353.42 KB/sec<br>
>> Avg throughput per process = 19350.88 KB/sec<br>
>> Min xfer = 10483712.00 KB<br>
>><br>
>> Children see throughput for 4 re-readers = 77424.40 KB/sec<br>
>> Parent sees throughput for 4 re-readers = 77423.89 KB/sec<br>
>> Min throughput per process = 19354.75 KB/sec<br>
>> Max throughput per process = 19358.50 KB/sec<br>
>> Avg throughput per process = 19356.10 KB/sec<br>
>> Min xfer = 10483840.00 KB<br>
>><br>
>> then run the same command in the dir mounted with gluster fuse:<br>
>> File size set to 10485760 KB<br>
>> Record Size 128 KB<br>
>> Command line used: ./iozone -s 10G -r 128k -i0 -i1 -t 4<br>
>> Output is in Kbytes/sec<br>
>> Time Resolution = 0.000001 seconds.<br>
>> Processor cache size set to 1024 Kbytes.<br>
>> Processor cache line size set to 32 bytes.<br>
>> File stride size set to 17 * record size.<br>
>> Throughput test with 4 processes<br>
>> Each process writes a 10485760 Kbyte file in 128 Kbyte records<br>
>><br>
>> Children see throughput for 4 initial writers = 887534.72 KB/sec<br>
>> Parent sees throughput for 4 initial writers = 848830.39 KB/sec<br>
>> Min throughput per process = 220140.91 KB/sec<br>
>> Max throughput per process = 223690.45 KB/sec<br>
>> Avg throughput per process = 221883.68 KB/sec<br>
>> Min xfer = 10319360.00 KB<br>
>><br>
>> Children see throughput for 4 rewriters = 892774.92 KB/sec<br>
>> Parent sees throughput for 4 rewriters = 871186.83 KB/sec<br>
>> Min throughput per process = 222326.44 KB/sec<br>
>> Max throughput per process = 223970.17 KB/sec<br>
>> Avg throughput per process = 223193.73 KB/sec<br>
>> Min xfer = 10431360.00 KB<br>
>><br>
>> Children see throughput for 4 readers = 605889.12 KB/sec<br>
>> Parent sees throughput for 4 readers = 601767.96 KB/sec<br>
>> Min throughput per process = 143133.14 KB/sec<br>
>> Max throughput per process = 159550.88 KB/sec<br>
>> Avg throughput per process = 151472.28 KB/sec<br>
>> Min xfer = 9406848.00 KB<br>
>><br>
>> it shows much higher perf.<br>
>><br>
>> is there anything I did wrong?<br>
>><br>
>><br>
>> thank you<br>
>> -Kane<br>
>><br>
>> On 2013-9-18, at 1:19 PM, Anand Avati <<a href="mailto:avati@gluster.org">avati@gluster.org</a><br>
>> <mailto:<a href="mailto:avati@gluster.org">avati@gluster.org</a>>> wrote:<br>
>><br>
>>> How are you testing this? What tool are you using?<br>
>>><br>
>>> Avati<br>
>>><br>
>>><br>
>>> On Tue, Sep 17, 2013 at 9:02 PM, kane <<a href="mailto:stef_9k@163.com">stef_9k@163.com</a><br>
>>> <mailto:<a href="mailto:stef_9k@163.com">stef_9k@163.com</a>>> wrote:<br>
>>><br>
>>> Hi Vijay<br>
>>><br>
>>> I used the code in<br>
>>> <a href="https://github.com/gluster/glusterfs.git" target="_blank">https://github.com/gluster/glusterfs.git</a> with the latest commit:<br>
>>> commit de2a8d303311bd600cb93a775bc79a0edea1ee1a<br>
>>> Author: Anand Avati <<a href="mailto:avati@redhat.com">avati@redhat.com</a> <mailto:<a href="mailto:avati@redhat.com">avati@redhat.com</a>>><br>
>>> Date: Tue Sep 17 16:45:03 2013 -0700<br>
>>><br>
>>> Revert "cluster/distribute: Rebalance should also verify free<br>
>>> inodes"<br>
>>><br>
>>> This reverts commit 215fea41a96479312a5ab8783c13b30ab9fe00fa<br>
>>><br>
>>> Realized soon after merging, ….<br>
>>><br>
>>> which includes the patch you mentioned last time that improves read perf,<br>
>>> written by Anand.<br>
>>><br>
>>> but the read perf was still slow:<br>
>>> write: 500MB/s<br>
>>> read: 77MB/s<br>
>>><br>
>>> while via fuse:<br>
>>> write 800MB/s<br>
>>> read 600MB/s<br>
>>><br>
>>> any advises?<br>
>>><br>
>>><br>
>>> Thank you.<br>
>>> -Kane<br>
>>><br>
>>> On 2013-9-13, at 10:37 PM, kane <<a href="mailto:stef_9k@163.com">stef_9k@163.com</a><br>
>>> <mailto:<a href="mailto:stef_9k@163.com">stef_9k@163.com</a>>> wrote:<br>
>>><br>
>>>> Hi Vijay,<br>
>>>><br>
>>>> thank you for posting this message, I will try it soon<br>
>>>><br>
>>>> -kane<br>
>>>><br>
>>>><br>
>>>><br>
>>>> On 2013-9-13, at 9:21 PM, Vijay Bellur <<a href="mailto:vbellur@redhat.com">vbellur@redhat.com</a><br>
>>> <mailto:<a href="mailto:vbellur@redhat.com">vbellur@redhat.com</a>>> wrote:<br>
>>>><br>
>>>>> On 09/13/2013 06:10 PM, kane wrote:<br>
>>>>>> Hi<br>
>>>>>><br>
>>>>>> We use the gluster samba vfs to test IO, but the read performance via<br>
>>> vfs is<br>
>>>>>> half of the write performance,<br>
>>>>>> but via fuse the read and write performance is almost the same.<br>
>>>>>><br>
>>>>>> this is our smb.conf:<br>
>>>>>> [global]<br>
>>>>>> workgroup = MYGROUP<br>
>>>>>> server string = DCS Samba Server<br>
>>>>>> log file = /var/log/samba/log.vfs<br>
>>>>>> max log size = 500000<br>
>>>>>> # use sendfile = true<br>
>>>>>> aio read size = 262144<br>
>>>>>> aio write size = 262144<br>
>>>>>> aio write behind = true<br>
>>>>>> min receivefile size = 262144<br>
>>>>>> write cache size = 268435456<br>
>>>>>> security = user<br>
>>>>>> passdb backend = tdbsam<br>
>>>>>> load printers = yes<br>
>>>>>> cups options = raw<br>
>>>>>> read raw = yes<br>
>>>>>> write raw = yes<br>
>>>>>> max xmit = 262144<br>
>>>>>> socket options = TCP_NODELAY IPTOS_LOWDELAY<br>
>>> SO_RCVBUF=262144<br>
>>>>>> SO_SNDBUF=262144<br>
>>>>>> kernel oplocks = no<br>
>>>>>> stat cache = no<br>
>>>>>><br>
>>>>>> any advises helpful?<br>
>>>>>><br>
>>>>><br>
>>>>> This patch has shown improvement in read performance with libgfapi:<br>
>>>>><br>
>>>>> <a href="http://review.gluster.org/#/c/5897/" target="_blank">http://review.gluster.org/#/c/5897/</a><br>
>>>>><br>
>>>>> Would it be possible for you to try this patch and check if it<br>
>>> improves performance in your case?<br>
>>>>><br>
>>>>> -Vijay<br>
>>>>><br>
>>>><br>
>>><br>
>>><br>
>>> _______________________________________________<br>
>>> Gluster-users mailing list<br>
>>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a> <mailto:<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>><br>
>>> <a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br>
>>><br>
>>><br>
>><br>
><br>
> Please add 'kernel oplocks = no' in the [gvol] section and try again.<br>
><br>
> Avati<br>
><br>
<br>
<br>
</div></div></blockquote></div><br></div>
</blockquote></div><br></div></div></body></html>