<div dir="ltr">Hi Kane,<div><br></div><div>1. Which version of samba are you running?</div><div><br></div><div>2. Can you re-run the test after adding the following lines to smb.conf's global section and tell us whether it helps?<br>
</div><div>kernel oplocks = no<br></div><div>stat cache = no</div><div><br></div><div>Thanks,</div><div>Raghavendra Talur</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Aug 21, 2013 at 3:48 PM, kane <span dir="ltr"><<a href="mailto:stef_9k@163.com" target="_blank">stef_9k@163.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word">Hi Lala, thank you for replying to this issue.<div><div><br></div><div>This is our smb.conf:</div>
<div>--------</div><div><div>[global]</div><div> workgroup = MYGROUP</div><div> server string = DCS Samba Server</div><div> log file = /var/log/samba/log.vfs</div><div> max log size = 500000</div>
<div># log level = 10</div><div># max xmit = 65535 </div><div># getwd cache = yes</div><div># use sendfile = yes </div><div># strict sync = no </div><div># sync always = no </div><div># large readwrite = yes </div>
<div> aio read size = 262144</div><div> aio write size = 262144</div><div> aio write behind = true</div><div># min receivefile size = 262144 </div><div> write cache size = 268435456</div>
<div># oplocks = yes</div><div> security = user</div><div> passdb backend = tdbsam</div><div> load printers = yes</div><div> cups options = raw</div><div> read raw = yes</div><div> write raw = yes</div>
<div> max xmit = 262144</div><div> read size = 262144</div><div> socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144 SO_SNDBUF=262144</div><div> max protocol = SMB2</div></div><div><div>
<br></div><div>[homes]</div><div> comment = Home Directories</div><div> browseable = no</div><div> writable = yes</div><div><br></div><div><br></div><div>[printers]</div><div> comment = All Printers</div>
<div> path = /var/spool/samba</div><div> browseable = no</div><div> guest ok = no</div><div> writable = no</div><div> printable = yes</div><div><br></div><div>[cifs]</div><div> path = /mnt/fuse</div>
<div> guest ok = yes</div><div> writable = yes</div><div><br></div><div>[raw]</div><div> path = /dcsdata/d0</div><div> guest ok = yes</div><div> writable = yes</div><div><br></div><div>[gvol]</div>
<div> comment = For samba export of volume test</div><div> vfs objects = glusterfs</div><div> glusterfs:volfile_server = localhost</div><div> glusterfs:volume = soul</div><div class="im"><div>
path = /</div><div> read only = no</div><div> guest ok = yes</div></div></div><div>--------</div><div><br></div><div>Our Windows 7 client hardware:</div><div>Intel® Xeon® E31230 @ 3.20GHz</div>
<div>8GB RAM</div><div><br></div><div>Linux client hardware:</div><div>Intel(R) Xeon(R) CPU X3430 @ 2.40GHz</div><div>16GB RAM</div><div><br></div><div>Many thanks</div><div><br></div><div>-kane</div><div><br>
<div><div>On 21 Aug 2013, at 4:53 PM, Lalatendu Mohanty <<a href="mailto:lmohanty@redhat.com" target="_blank">lmohanty@redhat.com</a>> wrote:</div><div><div class="h5"><br><blockquote type="cite">
<div bgcolor="#FFFFFF" text="#000000">
<div>On 08/21/2013 01:32 PM, kane wrote:<br>
</div>
<blockquote type="cite">
Hello:
<div><br>
</div>
<div><span style="white-space:pre-wrap"> </span>We have used glusterfs 3.4 with the latest samba-glusterfs-vfs lib to test samba performance from a Windows client.</div>
<div><br>
</div>
<div>Two glusterfs server nodes export a share named "gvol":</div>
<div>Hardware:</div>
<div><span style="white-space:pre-wrap"> </span>each brick uses a RAID 5 logical disk with 8 × 2 TB SATA HDDs</div>
<div><span style="white-space:pre-wrap"> </span>10G network connection</div>
<div><br>
</div>
<div>One Linux client mounts "gvol" with:</div>
<div>[root@localhost current]# mount.cifs //<a href="http://192.168.100.133/gvol" target="_blank">192.168.100.133/gvol</a>
/mnt/vfs -o user=kane,pass=123456</div>
<div><br>
</div>
<div>Then I use iozone to test write performance in the mount dir "/mnt/vfs":</div>
<div>
<pre>[root@localhost current]# ./iozone -s 10G -r 128k -i0 -t 4
...
	File size set to 10485760 KB
	Record Size 128 KB
	Command line used: ./iozone -s 10G -r 128k -i0 -t 4
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 4 processes
	Each process writes a 10485760 Kbyte file in 128 Kbyte records

	Children see throughput for 4 initial writers = 487376.67 KB/sec
	Parent sees throughput for 4 initial writers  = 486184.67 KB/sec
	Min throughput per process = 121699.91 KB/sec
	Max throughput per process = 122005.73 KB/sec
	Avg throughput per process = 121844.17 KB/sec
	Min xfer = 10459520.00 KB

	Children see throughput for 4 rewriters = 491416.41 KB/sec
	Parent sees throughput for 4 rewriters  = 490298.11 KB/sec
	Min throughput per process = 122808.87 KB/sec
	Max throughput per process = 122937.74 KB/sec
	Avg throughput per process = 122854.10 KB/sec
	Min xfer = 10474880.00 KB</pre>
</div>
<div>
<div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
<br>
</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
The Linux client mounted with cifs reaches write performance of ~480 MB/s per client;</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
<br>
</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
but when I mount "gvol" from a Windows 7 client with:</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
net use Z: <a>\\192.168.100.133\gvol</a>
123456 /user:kane</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
<br>
</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
and then run the same iozone test on drive Z:, even with a 1 MB write block:</div>
<div style="text-align:-webkit-auto;text-indent:0px;word-wrap:break-word">
<pre>	File size set to 10485760 KB
	Record Size 1024 KB
	Command line used: iozone -s 10G -r 1m -i0 -t 4
	Output is in Kbytes/sec
	Time Resolution = -0.000000 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 4 processes
	Each process writes a 10485760 Kbyte file in 1024 Kbyte records

	Children see throughput for 4 initial writers = 148164.82 KB/sec
	Parent sees throughput for 4 initial writers  = 148015.48 KB/sec
	Min throughput per process = 37039.91 KB/sec
	Max throughput per process = 37044.45 KB/sec
	Avg throughput per process = 37041.21 KB/sec
	Min xfer = 10484736.00 KB

	Children see throughput for 4 rewriters = 147642.12 KB/sec
	Parent sees throughput for 4 rewriters  = 147472.16 KB/sec
	Min throughput per process = 36909.13 KB/sec
	Max throughput per process = 36913.29 KB/sec
	Avg throughput per process = 36910.53 KB/sec
	Min xfer = 10484736.00 KB</pre>
<div style="word-wrap:break-word">iozone test complete.</div>
<div style="word-wrap:break-word"><br>
</div>
<div style="word-wrap:break-word">so it only reaches ~140 MB/s.</div>
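<div style="word-wrap:break-word">(For comparison, a quick sketch converting the aggregate "Children see throughput" figures from the two runs above from KB/sec, as iozone reports them, to MB/s, taking 1 MB = 1024 KB:)</div>
<pre>
```shell
# Convert the iozone aggregate throughput figures (KB/sec) to MB/s.
# Values are the "Children see throughput for 4 initial writers" lines
# from the Linux cifs run and the Windows 7 run in this thread.
awk 'BEGIN {
    printf "linux cifs : %.0f MB/s\n", 487376.67 / 1024
    printf "win7 smb2  : %.0f MB/s\n", 148164.82 / 1024
}'
```
</pre>
<div style="word-wrap:break-word">This prints roughly 476 MB/s for the Linux client versus 145 MB/s for the Windows client, i.e. about a 3.3x gap.</div>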
<div style="word-wrap:break-word"><br>
</div>
<div style="word-wrap:break-word">So, has anyone else met this problem? Is there any Windows 7 client setting to reconfigure for better performance?</div>
<div style="word-wrap:break-word"><br>
</div>
<div style="word-wrap:break-word">Thanks!</div>
<br>
</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
kane
</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
----------------------------------------------------------------</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
Email: <a href="mailto:kai.zhou@soulinfo.com" target="_blank">kai.zhou@soulinfo.com</a><br>
Tel: 0510-85385788-616</div>
<span style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px"><br>
</span></div>
</div>
</blockquote>
<br>
<br>
Hi kane,<br>
<br>
I do run I/O using a Windows 7 client with glusterfs 3.4, but I have<br>
never compared the performance with a Linux cifs mount. I don't think<br>
we need any special configuration on the Windows side. I hope your<br>
Linux and Windows clients have similar configurations, i.e. RAM, cache,<br>
disk type, etc. However, I am curious whether your setup uses the<br>
vfs plug-in correctly. We can confirm that by looking at the smb.conf<br>
entry for the gluster volume, which should have been created<br>
automatically by the "gluster start" command.<br>
<br>
e.g., the smb.conf entry for one of my volumes, "smbvol", looks like<br>
this:<br>
<br>
[gluster-smbvol]<br>
comment = For samba share of volume smbvol<br>
vfs objects = glusterfs<br>
glusterfs:volume = smbvol<br>
path = /<br>
read only = no<br>
guest ok = yes<br>
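<br>
(A quick sanity check, sketched here against the [gvol] section quoted earlier in this thread rather than a live config — on the server itself you would inspect the output of testparm from the samba package instead of the here-doc below:)<br>
<pre>
```shell
# Sketch: confirm the share definition actually loads the glusterfs VFS
# module. The here-doc stands in for the smb.conf entry from this thread;
# on a live server, pipe `testparm -s` output into the same grep.
grep -E 'vfs objects|glusterfs:' <<'EOF'
[gvol]
    comment = For samba export of volume test
    vfs objects = glusterfs
    glusterfs:volfile_server = localhost
    glusterfs:volume = soul
    path = /
    read only = no
    guest ok = yes
EOF
```
</pre>
If the "vfs objects = glusterfs" and "glusterfs:" lines show up for the share, Samba is at least configured to route that share through the gluster VFS plug-in.<br>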
<br>
Kindly copy the smb.conf entries for your gluster volume into this<br>
email.<br>
-Lala<br>
<blockquote type="cite">
<fieldset></fieldset>
<br>
<pre>_______________________________________________
Gluster-users mailing list
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<br>
</div>
</blockquote></div></div></div><br></div></div></div><br>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr">
<font color="#666666"><b>Raghavendra Talur </b></font><div><br></div></div>
</div>