<div dir="ltr"><br><div class="gmail_extra">I suspect the big difference between write and rewrite performance on win7 comes from the fact that we don't implement the fallocate() method in vfs_glusterfs. If the client (e.g. win7) requests a guaranteed reservation of space before initiating the write, Samba falls back to the crude method of writing zeroes to the file for its full size before accepting the first write call from the client. I am assuming the Linux cifs client does not request a reservation of space, so Samba never executes the zero-writing fallocate fallback.</div>
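<div class="gmail_extra"><br></div><div class="gmail_extra">As a rough illustration (not Samba's actual code; the helper name, block size, and paths are made up for the sketch), the difference between a real fallocate() reservation and the zero-fill fallback looks like this:</div>

```python
import os

def reserve(path, size):
    """Reserve `size` bytes for `path`, preferring a real fallocate."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR | os.O_TRUNC, 0o644)
    try:
        try:
            # Fast path: a single syscall asks the filesystem to allocate
            # the blocks; no data has to be written.
            os.posix_fallocate(fd, 0, size)
            return "fallocate"
        except (AttributeError, OSError):
            # Slow fallback: write zeroes across the whole range, block by
            # block, roughly what smbd is stuck doing before it can accept
            # the client's first real write.
            block = b"\0" * 65536
            written = 0
            while written < size:
                written += os.write(fd, block[: size - written])
            return "zero-fill"
    finally:
        os.close(fd)
```

<div class="gmail_extra">For a multi-gigabyte file the fallback roughly doubles the bytes written (zeroes first, then the real data), which would fit the much lower initial-write numbers seen from win7.</div>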
<div class="gmail_extra"><br></div><div class="gmail_extra">Avati<br><div class="gmail_quote">On Thu, Aug 22, 2013 at 11:12 AM, Lalatendu Mohanty <span dir="ltr"><<a href="mailto:lmohanty@redhat.com" target="_blank">lmohanty@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><div><div class="h5">
<div>On 08/22/2013 02:14 PM, kane wrote:<br>
</div>
<blockquote type="cite">
Hi Raghavendra Talur,
<div><br>
</div>
<div>1. I found that testing with this smb.conf and iozone shows
some difference in the results:</div>
<div>smb.conf:</div>
<div>-----------</div>
<div>
<div>[root@localhost ~]# testparm </div>
<div>Load smb config files from /etc/samba/smb.conf</div>
<div>rlimit_max: increasing rlimit_max (1024) to minimum Windows
limit (16384)</div>
<div>Processing section "[homes]"</div>
<div>Processing section "[printers]"</div>
<div>Processing section "[cifs]"</div>
<div>Processing section "[raw]"</div>
<div>Processing section "[gvol]"</div>
<div>Loaded services file OK.</div>
<div>Server role: ROLE_STANDALONE</div>
<div>Press enter to see a dump of your service definitions</div>
<div><br>
</div>
<div>
<div>[global]</div>
<div><span style="white-space:pre-wrap"> </span>workgroup
= MYGROUP</div>
<div><span style="white-space:pre-wrap"> </span>server
string = DCS Samba Server</div>
<div><span style="white-space:pre-wrap"> </span>log
file = /var/log/samba/log.vfs</div>
<div><span style="white-space:pre-wrap"> </span>max
log size = 500000</div>
<div><span style="white-space:pre-wrap"> </span>max
protocol = SMB2</div>
<div><span style="white-space:pre-wrap"> </span>min
receivefile size = 262144</div>
<div><span style="white-space:pre-wrap"> </span>max
xmit = 262144</div>
<div><span style="white-space:pre-wrap"> </span>socket
options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144
SO_SNDBUF=262144</div>
<div><span style="white-space:pre-wrap"> </span>idmap
config * : backend = tdb</div>
<div><span style="white-space:pre-wrap"> </span>aio
read size = 262144</div>
<div><span style="white-space:pre-wrap"> </span>aio
write size = 262144</div>
<div><span style="white-space:pre-wrap"> </span>aio
write behind = true</div>
<div><span style="white-space:pre-wrap"> </span>write
cache size = 268435456</div>
<div><span style="white-space:pre-wrap"> </span>cups
options = raw</div>
</div>
<div>…….</div>
<div><br>
</div>
<div>[raw]</div>
<div><span style="white-space:pre-wrap"> </span>path
= /dcsdata/d0</div>
<div><span style="white-space:pre-wrap"> </span>read
only = No</div>
<div><span style="white-space:pre-wrap"> </span>guest
ok = Yes</div>
<div><br>
</div>
<div>[gvol]</div>
<div><span style="white-space:pre-wrap"> </span>comment
= For samba export of volume test</div>
<div><span style="white-space:pre-wrap"> </span>path
= /</div>
<div><span style="white-space:pre-wrap"> </span>read
only = No</div>
<div><span style="white-space:pre-wrap"> </span>guest
ok = Yes</div>
<div><span style="white-space:pre-wrap"> </span>vfs
objects = glusterfs</div>
<div><span style="white-space:pre-wrap"> </span>glusterfs:volume
= soul</div>
<div><span style="white-space:pre-wrap"> </span>glusterfs:volfile_server
= localhost</div>
</div>
<div>-----------</div>
<div>iozone test with cmd : iozone -s 10G -r 1m -i0 -t 4</div>
<div>-----------</div>
<div>
<div> Run began: Thu Aug 22 16:11:40 2013</div>
<div><br>
</div>
<div> File size set to 10485760 KB</div>
<div> Record Size 1024 KB</div>
<div> Command line used: iozone -s 10G -r 1m -i0 -t 4</div>
<div> Output is in Kbytes/sec</div>
<div> Time Resolution = 0.000000 seconds.</div>
<div> Processor cache size set to 1024 Kbytes.</div>
<div> Processor cache line size set to 32 bytes.</div>
<div> File stride size set to 17 * record size.</div>
<div> Throughput test with 4 processes</div>
<div> Each process writes a 10485760 Kbyte file in 1024
Kbyte records</div>
<div><br>
</div>
<div> Children see throughput for 4 initial writers =
147008.14 KB/sec</div>
<div> Parent sees throughput for 4 initial writers =
146846.43 KB/sec</div>
<div> Min throughput per process =
36750.59 KB/sec</div>
<div> Max throughput per process =
36754.97 KB/sec</div>
<div> Avg throughput per process =
36752.04 KB/sec</div>
<div> Min xfer =
10484736.00 KB</div>
<div><br>
</div>
<div> Children see throughput for 4 rewriters =
147494.85 KB/sec</div>
<div> Parent sees throughput for 4 rewriters =
147310.95 KB/sec</div>
<div> Min throughput per process =
36871.96 KB/sec</div>
<div> Max throughput per process =
36877.09 KB/sec</div>
<div> Avg throughput per process =
36873.71 KB/sec</div>
<div> Min xfer =
10484736.00 KB</div>
<div><br>
</div>
<div>iozone test complete.</div>
</div>
<div>-----------</div>
<div><br>
</div>
<div>The rewrite results still show some difference. With your
recommended smb.conf, the iozone docs describe the expected write vs. re-write gap:</div>
<div>
<div title="Page 3">
<div>
<div>
<p><span style="font-size:10.000000pt;font-family:'TimesNewRomanPS';font-weight:700">Write</span><span style="font-size:10.000000pt;font-family:'TimesNewRomanPSMT'">: This test measures the
performance of writing a new file. When a new file is
written, not only does the data need to be stored but
also the overhead information for keeping track of
where the data is located on the storage media. This
overhead is called the “metadata”. It consists of the
directory information, the space allocation and any
other data associated with a file that is not part of
the data contained in the file. It is normal for the
initial write performance to be lower than the
performance of re-writing a file due to this overhead
information. </span></p>
<p><span style="font-size:10.000000pt;font-family:'TimesNewRomanPS';font-weight:700">Re-write</span><span style="font-size:10.000000pt;font-family:'TimesNewRomanPSMT'">: This test measures the
performance of writing a file that already exists.
When a file that already exists is written, the work
required is less, as the metadata already exists. It is
normal for the rewrite performance to be higher than
the performance of writing a new file. </span></p>
<div>But in the iozone test with 4 threads, the rewrite
performs much better than the write. </div>
<div>I would consider rewrite 180MB/s vs. write 150MB/s
reasonable, but rewrite 400MB/s vs. write 140MB/s is beyond my
expectation.</div>
</div>
</div>
</div>
</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
</blockquote></div></div>
Kane,<br>
<br>
When we use the same Samba share from Windows and Linux, the only
difference is "unix extensions" support, which Linux uses by
default. However, I don't think "unix extensions" has anything
to do with the performance of writes and rewrites. My only guess is
that the cifs client in Linux works better with Samba than Microsoft's
smb client in Windows 7.<br>
<p>The "unix extensions" parameter controls whether Samba
implements the CIFS UNIX extensions. These extensions enable Samba
to better serve UNIX CIFS clients by supporting features such as
symbolic links, hard links, etc. They require a
similarly enabled client, and are of no current use to Windows
clients.</p>
<p>To disable unix extensions, put the following in the global section
of smb.conf and restart the smb service:<br>
<code>unix extensions = no</code></p><div><div class="h5">
<blockquote type="cite">
<div> </div>
<div>thanks</div>
<div>-kane</div>
<div><br>
<div>
<div>On Aug 22, 2013, at 4:06 PM, kane <<a href="mailto:stef_9k@163.com" target="_blank">stef_9k@163.com</a>> wrote:</div>
<br>
<blockquote type="cite">
<div style="word-wrap:break-word">Hi Raghavendra
Talur,
<div><br>
</div>
<div>1. My samba version is:</div>
<div>
<div>[root@localhost ~]# smbd -V</div>
<div>Version 3.6.9-151.el6</div>
<div><br>
</div>
<div>2. Sorry, in the first mail I forgot to mention: when
the win7 client mounts the server's raw xfs backend
(a raid5 disk),</div>
<div>it shows good write performance with the same smb.conf
used in the samba vfs glusterfs 3.4 test shown in point 3:</div>
<div>$ ./iozone -s 10G -r 128k -i0 -t 4</div>
<div><span style="white-space:pre-wrap">----------------</span></div>
<div>
<div> Run began: Thu Aug 22 15:59:11 2013</div>
<div><br>
</div>
<div> File size set to 10485760 KB</div>
<div> Record Size 1024 KB</div>
<div> Command line used: iozone -s 10G -r 1m
-i0 -t 4</div>
<div> Output is in Kbytes/sec</div>
<div> Time Resolution = -0.000000 seconds.</div>
<div> Processor cache size set to 1024 Kbytes.</div>
<div> Processor cache line size set to 32
bytes.</div>
<div> File stride size set to 17 * record size.</div>
<div> Throughput test with 4 processes</div>
<div> Each process writes a 10485760 Kbyte file
in 1024 Kbyte records</div>
<div><br>
</div>
<div> Children see throughput for 4 initial
writers = 566996.86 KB/sec</div>
<div> Parent sees throughput for 4 initial
writers = 566831.18 KB/sec</div>
<div> Min throughput per process
= 141741.52 KB/sec</div>
<div> Max throughput per process
= 141764.00 KB/sec</div>
<div> Avg throughput per process
= 141749.21 KB/sec</div>
<div> Min xfer
= 10482688.00 KB</div>
<div><br>
</div>
<div> Children see throughput for 4 rewriters
= 432868.28 KB/sec</div>
<div> Parent sees throughput for 4 rewriters
= 420648.01 KB/sec</div>
<div> Min throughput per process
= 108115.68 KB/sec</div>
<div> Max throughput per process
= 108383.86 KB/sec</div>
<div> Avg throughput per process
= 108217.07 KB/sec</div>
<div> Min xfer
= 10460160.00 KB</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div>iozone test complete.</div>
</div>
<div><span style="white-space:pre-wrap">----------------</span></div>
<div><span style="white-space:pre-wrap"><br>
</span></div>
<div><span style="white-space:pre-wrap">3. With your recommended conf added to smb.conf, </span>this is the testparm result:</div>
<div>
<div>
<div>[root@localhost ~]# testparm </div>
<div>Load smb config files from /etc/samba/smb.conf</div>
<div>rlimit_max: increasing rlimit_max (1024) to
minimum Windows limit (16384)</div>
<div>Processing section "[homes]"</div>
<div>Processing section "[printers]"</div>
<div>Processing section "[cifs]"</div>
<div>Processing section "[raw]"</div>
<div>Processing section "[gvol]"</div>
<div>Loaded services file OK.</div>
<div>Server role: ROLE_STANDALONE</div>
<div>Press enter to see a dump of your service
definitions</div>
<div><br>
</div>
</div>
<div>
<div>[global]</div>
<div><span style="white-space:pre-wrap"> </span>workgroup =
MYGROUP</div>
<div><span style="white-space:pre-wrap"> </span>server
string = DCS Samba Server</div>
<div><span style="white-space:pre-wrap"> </span>log file =
/var/log/samba/log.vfs</div>
<div><span style="white-space:pre-wrap"> </span>max log size
= 500000</div>
<div><span style="white-space:pre-wrap"> </span>max protocol
= SMB2</div>
<div><span style="white-space:pre-wrap"> </span>max xmit =
262144</div>
<div><span style="white-space:pre-wrap"> </span>socket
options = TCP_NODELAY IPTOS_LOWDELAY
SO_RCVBUF=262144 SO_SNDBUF=262144</div>
<div><span style="white-space:pre-wrap"> </span>stat cache =
No</div>
<div><span style="white-space:pre-wrap"> </span>kernel
oplocks = No</div>
<div><span style="white-space:pre-wrap"> </span>idmap config
* : backend = tdb</div>
<div><span style="white-space:pre-wrap"> </span>aio read
size = 262144</div>
<div><span style="white-space:pre-wrap"> </span>aio write
size = 262144</div>
<div><span style="white-space:pre-wrap"> </span>aio write
behind = true</div>
<div><span style="white-space:pre-wrap"> </span>write cache
size = 268435456</div>
<div><span style="white-space:pre-wrap"> </span>cups options
= raw</div>
<div>……</div>
<div><br>
</div>
<div>[cifs]</div>
<div><span style="white-space:pre-wrap"> </span>path =
/mnt/fuse</div>
<div><span style="white-space:pre-wrap"> </span>read only =
No</div>
<div><span style="white-space:pre-wrap"> </span>guest ok =
Yes</div>
<div><br>
</div>
<div>[raw]</div>
<div><span style="white-space:pre-wrap"> </span>path =
/dcsdata/d0</div>
<div><span style="white-space:pre-wrap"> </span>read only =
No</div>
<div><span style="white-space:pre-wrap"> </span>guest ok =
Yes</div>
<div><br>
</div>
<div>[gvol]</div>
<div><span style="white-space:pre-wrap"> </span>comment =
For samba export of volume test</div>
<div><span style="white-space:pre-wrap"> </span>path = /</div>
<div><span style="white-space:pre-wrap"> </span>read only =
No</div>
<div><span style="white-space:pre-wrap"> </span>guest ok =
Yes</div>
<div><span style="white-space:pre-wrap"> </span>vfs objects
= glusterfs</div>
<div><span style="white-space:pre-wrap"> </span>glusterfs:volume
= soul</div>
<div><span style="white-space:pre-wrap"> </span>glusterfs:volfile_server
= localhost</div>
</div>
</div>
<div><br>
</div>
<div>the iozone test result with cmd: iozone -s 10G -r
1m -i0 -t 4</div>
<div>-------------</div>
<div>
<div> Run began: Thu Aug 22 15:47:31 2013</div>
<div><br>
</div>
<div> File size set to 10485760 KB</div>
<div> Record Size 1024 KB</div>
<div> Command line used: iozone -s 10G -r 1m
-i0 -t 4</div>
<div> Output is in Kbytes/sec</div>
<div> Time Resolution = -0.000000 seconds.</div>
<div> Processor cache size set to 1024 Kbytes.</div>
<div> Processor cache line size set to 32
bytes.</div>
<div> File stride size set to 17 * record size.</div>
<div> Throughput test with 4 processes</div>
<div> Each process writes a 10485760 Kbyte file
in 1024 Kbyte records</div>
<div><br>
</div>
<div> Children see throughput for 4 initial
writers = 135588.82 KB/sec</div>
<div> Parent sees throughput for 4 initial
writers = 135549.95 KB/sec</div>
<div> Min throughput per process
= 33895.92 KB/sec</div>
<div> Max throughput per process
= 33900.02 KB/sec</div>
<div> Avg throughput per process
= 33897.20 KB/sec</div>
<div> Min xfer
= 10484736.00 KB</div>
<div><br>
</div>
<div> Children see throughput for 4 rewriters
= 397494.38 KB/sec</div>
<div> Parent sees throughput for 4 rewriters
= 387431.63 KB/sec</div>
<div> Min throughput per process
= 99280.98 KB/sec</div>
<div> Max throughput per process
= 99538.40 KB/sec</div>
<div> Avg throughput per process
= 99373.59 KB/sec</div>
<div> Min xfer
= 10459136.00 KB</div>
<div>--------------</div>
<div><br>
</div>
</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div>
<div>On Aug 22, 2013, at 3:31 PM, RAGHAVENDRA TALUR <<a href="mailto:raghavendra.talur@gmail.com" target="_blank">raghavendra.talur@gmail.com</a>> wrote:</div>
<br>
<blockquote type="cite">
<div dir="ltr">Hi Kane,
<div><br>
</div>
<div>1. Which version of samba are you running?</div>
<div><br>
</div>
<div>2. Can you re-run the test after adding the following lines to smb.conf's global section and tell us whether it helps?<br>
</div>
<div>kernel oplocks = no<br>
</div>
<div>stat cache = no</div>
<div><br>
</div>
<div>Thanks,</div>
<div>Raghavendra Talur</div>
</div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">On Wed, Aug 21, 2013 at
3:48 PM, kane <span dir="ltr"><<a href="mailto:stef_9k@163.com" target="_blank">stef_9k@163.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div style="word-wrap:break-word">Hi Lala, thank you for replying to this issue.
<div>
<div><br>
</div>
<div>this is our smb.conf:</div>
<div>--------</div>
<div>
<div>[global]</div>
<div> workgroup = MYGROUP</div>
<div> server string = DCS Samba
Server</div>
<div> log file =
/var/log/samba/log.vfs</div>
<div> max log size = 500000</div>
<div># log level = 10</div>
<div># max xmit = 65535 </div>
<div># getwd cache = yes</div>
<div># use sendfile = yes </div>
<div># strict sync = no </div>
<div># sync always = no </div>
<div># large readwrite = yes </div>
<div> aio read size = 262144</div>
<div> aio write size = 262144</div>
<div> aio write behind = true</div>
<div># min receivefile size =
262144 </div>
<div> write cache size =
268435456</div>
<div># oplocks = yes</div>
<div> security = user</div>
<div> passdb backend = tdbsam</div>
<div> load printers = yes</div>
<div> cups options = raw</div>
<div> read raw = yes</div>
<div> write raw = yes</div>
<div> max xmit = 262144</div>
<div> read size = 262144</div>
<div> socket options =
TCP_NODELAY IPTOS_LOWDELAY
SO_RCVBUF=262144 SO_SNDBUF=262144</div>
<div> max protocol = SMB2</div>
</div>
<div>
<div> <br>
</div>
<div>[homes]</div>
<div> comment = Home Directories</div>
<div> browseable = no</div>
<div> writable = yes</div>
<div><br>
</div>
<div><br>
</div>
<div>[printers]</div>
<div> comment = All Printers</div>
<div> path = /var/spool/samba</div>
<div> browseable = no</div>
<div> guest ok = no</div>
<div> writable = no</div>
<div> printable = yes</div>
<div><br>
</div>
<div>[cifs]</div>
<div> path = /mnt/fuse</div>
<div> guest ok = yes</div>
<div> writable = yes</div>
<div><br>
</div>
<div>[raw]</div>
<div> path = /dcsdata/d0</div>
<div> guest ok = yes</div>
<div> writable = yes</div>
<div><br>
</div>
<div>[gvol]</div>
<div> comment = For samba export
of volume test</div>
<div> vfs objects = glusterfs</div>
<div> glusterfs:volfile_server =
localhost</div>
<div> glusterfs:volume = soul</div>
<div>
<div> path = /</div>
<div> read only = no</div>
<div> guest ok = yes</div>
</div>
</div>
<div>--------</div>
<div><br>
</div>
<div>our win 7 client hardware:</div>
<div>Intel® Xeon® <span style="white-space:pre-wrap"> </span>E31230
@ 3.20GHz</div>
<div>8GB RAM</div>
<div><br>
</div>
<div>linux client hardware:</div>
<div>Intel(R) Xeon(R) CPU X3430
@ 2.40GHz</div>
<div>16GB RAM</div>
<div><br>
</div>
<div>pretty thanks</div>
<div><br>
</div>
<div>-kane</div>
<div><br>
<div>
<div>On Aug 21, 2013, at 4:53 PM, Lalatendu Mohanty <<a href="mailto:lmohanty@redhat.com" target="_blank">lmohanty@redhat.com</a>> wrote:</div>
<div>
<div><br>
<blockquote type="cite">
<div bgcolor="#FFFFFF" text="#000000">
<div>On 08/21/2013 01:32 PM,
kane wrote:<br>
</div>
<blockquote type="cite">
Hello:
<div><br>
</div>
<div><span style="white-space:pre-wrap">
</span>We have used glusterfs 3.4 with the latest
samba-glusterfs-vfs lib to test samba performance from a
windows client.</div>
<div><br>
</div>
<div>two glusterfs server nodes export a share named "gvol":</div>
<div>hardwares:</div>
<div><span style="white-space:pre-wrap">
</span>the brick uses a raid 5 logical disk with 8 * 2T
SATA HDDs</div>
<div><span style="white-space:pre-wrap">
</span>10G network
connection</div>
<div><br>
</div>
<div>one linux client mounts "gvol" with the cmd:</div>
<div>[root@localhost
current]# mount.cifs //<a href="http://192.168.100.133/gvol" target="_blank">192.168.100.133/gvol</a>
/mnt/vfs -o
user=kane,pass=123456</div>
<div><br>
</div>
<div>then I use iozone to test write performance in the mount dir "/mnt/vfs":</div>
<div>
<div>[root@localhost
current]# ./iozone -s
10G -r 128k -i0 -t 4</div>
<div>…..</div>
<div><span style="white-space:pre-wrap">
</span>File size set to
10485760 KB</div>
<div><span style="white-space:pre-wrap">
</span>Record Size 128
KB</div>
<div><span style="white-space:pre-wrap">
</span>Command line
used: ./iozone -s 10G -r
128k -i0 -t 4</div>
<div><span style="white-space:pre-wrap">
</span>Output is in
Kbytes/sec</div>
<div><span style="white-space:pre-wrap">
</span>Time Resolution =
0.000001 seconds.</div>
<div><span style="white-space:pre-wrap">
</span>Processor cache
size set to 1024 Kbytes.</div>
<div><span style="white-space:pre-wrap">
</span>Processor cache
line size set to 32
bytes.</div>
<div><span style="white-space:pre-wrap">
</span>File stride size
set to 17 * record size.</div>
<div><span style="white-space:pre-wrap">
</span>Throughput test
with 4 processes</div>
<div><span style="white-space:pre-wrap">
</span>Each process
writes a 10485760 Kbyte
file in 128 Kbyte
records</div>
<div><br>
</div>
<div><span style="white-space:pre-wrap">
</span>Children see
throughput for 4
initial writers <span style="white-space:pre-wrap">
</span>= 487376.67
KB/sec</div>
<div><span style="white-space:pre-wrap">
</span>Parent sees
throughput for 4
initial writers <span style="white-space:pre-wrap">
</span>= 486184.67
KB/sec</div>
<div><span style="white-space:pre-wrap">
</span>Min throughput
per process <span style="white-space:pre-wrap">
</span>= 121699.91
KB/sec </div>
<div><span style="white-space:pre-wrap">
</span>Max throughput
per process <span style="white-space:pre-wrap">
</span>= 122005.73
KB/sec</div>
<div><span style="white-space:pre-wrap">
</span>Avg throughput
per process <span style="white-space:pre-wrap">
</span>= 121844.17
KB/sec</div>
<div><span style="white-space:pre-wrap">
</span>Min xfer <span style="white-space:pre-wrap">
</span>= 10459520.00 KB</div>
<div><br>
</div>
<div><span style="white-space:pre-wrap">
</span>Children see
throughput for 4
rewriters <span style="white-space:pre-wrap">
</span>= 491416.41
KB/sec</div>
<div><span style="white-space:pre-wrap">
</span>Parent sees
throughput for 4
rewriters <span style="white-space:pre-wrap">
</span>= 490298.11
KB/sec</div>
<div><span style="white-space:pre-wrap">
</span>Min throughput
per process <span style="white-space:pre-wrap">
</span>= 122808.87
KB/sec </div>
<div><span style="white-space:pre-wrap">
</span>Max throughput
per process <span style="white-space:pre-wrap">
</span>= 122937.74
KB/sec</div>
<div><span style="white-space:pre-wrap">
</span>Avg throughput
per process <span style="white-space:pre-wrap">
</span>= 122854.10
KB/sec</div>
<div><span style="white-space:pre-wrap">
</span>Min xfer <span style="white-space:pre-wrap">
</span>= 10474880.00 KB</div>
</div>
<div>
<div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
<br>
</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
with a linux client cifs mount, write performance reaches
480MB/s per client;</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
<br>
</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
but when I use the win7 client to mount "gvol" with the
cmd:</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
net
use Z: <a>\\192.168.100.133\gvol</a>
123456 /user:kane</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
<br>
</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
then I also run the iozone test in dir Z, even with a 1Mbyte
write block:</div>
<div style="text-align:-webkit-auto;text-indent:0px;word-wrap:break-word">
<div style="word-wrap:break-word">
File size set
to 10485760 KB</div>
<div style="word-wrap:break-word">
Record Size
1024 KB</div>
<div style="word-wrap:break-word">
Command line
used: iozone -s 10G
-r 1m -i0 -t 4</div>
<div style="word-wrap:break-word">
Output is in
Kbytes/sec</div>
<div style="word-wrap:break-word">
Time
Resolution =
-0.000000 seconds.</div>
<div style="word-wrap:break-word">
Processor
cache size set to
1024 Kbytes.</div>
<div style="word-wrap:break-word">
Processor
cache line size set
to 32 bytes.</div>
<div style="word-wrap:break-word">
File stride
size set to 17 *
record size.</div>
<div style="word-wrap:break-word">
Throughput
test with 4
processes</div>
<div style="word-wrap:break-word">
Each process
writes a 10485760
Kbyte file in 1024
Kbyte records</div>
<div style="word-wrap:break-word"><br>
</div>
<div style="word-wrap:break-word">
Children see
throughput for 4
initial writers =
148164.82 KB/sec</div>
<div style="word-wrap:break-word">
Parent sees
throughput for 4
initial writers =
148015.48 KB/sec</div>
<div style="word-wrap:break-word">
Min throughput
per process
=
37039.91 KB/sec</div>
<div style="word-wrap:break-word">
Max throughput
per process
=
37044.45 KB/sec</div>
<div style="word-wrap:break-word">
Avg throughput
per process
=
37041.21 KB/sec</div>
<div style="word-wrap:break-word">
Min xfer
=
10484736.00 KB</div>
<div style="word-wrap:break-word"><br>
</div>
<div style="word-wrap:break-word">
Children see
throughput for 4
rewriters =
147642.12 KB/sec</div>
<div style="word-wrap:break-word">
Parent sees
throughput for 4
rewriters =
147472.16 KB/sec</div>
<div style="word-wrap:break-word">
Min throughput
per process
=
36909.13 KB/sec</div>
<div style="word-wrap:break-word">
Max throughput
per process
=
36913.29 KB/sec</div>
<div style="word-wrap:break-word">
Avg throughput
per process
=
36910.53 KB/sec</div>
<div style="word-wrap:break-word">
Min xfer
=
10484736.00 KB</div>
<div style="word-wrap:break-word"><br>
</div>
<div style="word-wrap:break-word">iozone
test complete.</div>
<div style="word-wrap:break-word"><br>
</div>
<div style="word-wrap:break-word">then it only reaches 140MB/s</div>
<div style="word-wrap:break-word"><br>
</div>
<div style="word-wrap:break-word">So, has anyone met this problem? Is there a win7 client setting that can be reconfigured to perform well?</div>
<div style="word-wrap:break-word"><br>
</div>
<div style="word-wrap:break-word">Thanks!</div>
<br>
</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
kane
</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
----------------------------------------------------------------</div>
<div style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word">
Email:
<a href="mailto:kai.zhou@soulinfo.com" target="_blank">kai.zhou@soulinfo.com</a><br>
Phone: 0510-85385788-616</div>
<span style="font-family:Helvetica;font-size:medium;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px"><br>
</span></div>
</div>
</blockquote>
<br>
<br>
Hi kane,<br>
<br>
I do run IOs using a win7 client
with glusterfs 3.4, but I have never
compared the performance with a
Linux cifs mount. I don't think we
need any special configuration on
the Windows side. I hope your Linux
and Windows clients have similar
configurations, i.e. RAM, cache,
disk type etc. However, I am
curious whether your setup uses the
vfs plug-in correctly. We can
confirm that by looking at the
smb.conf entry for the gluster
volume, which should have been
created automatically by the
"gluster start" command.<br>
<br>
e.g. the smb.conf entry for one of my volumes, "smbvol", looks like below:<br>
<br>
[gluster-smbvol]<br>
comment = For samba share of
volume smbvol<br>
vfs objects = glusterfs<br>
glusterfs:volume = smbvol<br>
path = /<br>
read only = no<br>
guest ok = yes<br>
<br>
Kindly paste the smb.conf entries for your gluster volume into this email.<br>
-Lala<br>
<blockquote type="cite">
<fieldset></fieldset>
<br>
<pre>_______________________________________________
Gluster-users mailing list
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<br>
</div>
</blockquote>
</div>
</div>
</div>
<br>
</div>
</div>
</div>
<br>
</blockquote>
</div>
<br>
<br clear="all">
<div><br>
</div>
-- <br>
<div dir="ltr"> <font color="#666666"><b>Raghavendra
Talur </b></font>
<div><br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</div></div></div>
</blockquote></div><br></div></div>