<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">On 02/26/2014 01:09 AM, Jeff Byers
wrote:<br>
</div>
<blockquote
cite="mid:CAERmy3AZR3t3Pqm7cpvgkcy6B46WNuWnm5KjKfeMQ6SOyDmj8w@mail.gmail.com"
type="cite">
<div dir="ltr">
<p>Hello,</p>
<p>I have a problem with very slow Windows Explorer browsing<br>
when there are a large number of directories/files.</p>
<div>In this case, the top level folder has almost 6000
directories, admittedly a large number, but listing it was almost
instantaneous when a Windows Server share was used.</div>
<div> </div>
<div>After migrating to a Samba/GlusterFS share, there is almost a 20<br>
second delay while the Explorer window populates the list.<br>
This leaves a bad impression of the storage performance. The<br>
systems are otherwise idle.</div>
<div>To isolate the cause, I've ruled out everything else, including<br>
networking and Windows, and have narrowed it down to GlusterFS<br>
being responsible for most of the directory lag.</div>
<div> </div>
<div>I was optimistic about using the GlusterFS libgfapi VFS module<br>
with Samba instead of FUSE, and it does help performance<br>
dramatically in some cases, but for directory listings it does<br>
not help (and sometimes hurts) compared to the CIFS FUSE<br>
mount.</div>
<p>NFS seems to be better for directory listings and small I/Os,<br>
but I cannot use NFS, as I need CIFS for the Windows clients,<br>
along with ACLs, Active Directory, etc.</p>
<p>Versions:<br>
CentOS release 6.5 (Final)<br>
# glusterd -V<br>
glusterfs 3.4.2 built on Jan 6 2014 14:31:51<br>
# smbd -V<br>
Version 4.1.4</p>
<p>For testing, I've got a single GlusterFS volume, with a<br>
single ext4 brick, being accessed locally:</p>
<p># gluster volume info nas-cbs-0005<br>
Volume Name: nas-cbs-0005<br>
Type: Distribute<br>
Volume ID: 5068e9a5-d60f-439c-b319-befbf9a73a50<br>
Status: Started<br>
Number of Bricks: 1<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 192.168.5.181:/exports/nas-segment-0004/nas-cbs-0005<br>
Options Reconfigured:<br>
server.allow-insecure: on<br>
nfs.rpc-auth-allow: *<br>
nfs.disable: off<br>
nfs.addr-namelookup: off</p>
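<p>For reference, a volume like this would be created and its options<br>
set with commands along these lines (reconstructed, not copied from<br>
my shell history):</p>
<p># gluster volume create nas-cbs-0005 192.168.5.181:/exports/nas-segment-0004/nas-cbs-0005<br>
# gluster volume start nas-cbs-0005<br>
# gluster volume set nas-cbs-0005 server.allow-insecure on<br>
# gluster volume set nas-cbs-0005 nfs.rpc-auth-allow '*'<br>
# gluster volume set nas-cbs-0005 nfs.disable off<br>
# gluster volume set nas-cbs-0005 nfs.addr-namelookup off</p>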
<p>The Samba share options are:</p>
<p>[nas-cbs-0005]<br>
path = /samba/nas-cbs-0005/cifs_share<br>
admin users = "localadmin"<br>
valid users = "localadmin"<br>
invalid users =<br>
read list =<br>
write list = "localadmin"<br>
guest ok = yes<br>
read only = no<br>
hide unreadable = yes<br>
hide dot files = yes<br>
available = yes</p>
<p>[nas-cbs-0005-vfs]<br>
path = /<br>
vfs objects = glusterfs<br>
glusterfs:volume = nas-cbs-0005<br>
kernel share modes = No<br>
use sendfile = false<br>
admin users = "localadmin"<br>
valid users = "localadmin"<br>
invalid users =<br>
read list =<br>
write list = "localadmin"<br>
guest ok = yes<br>
read only = no<br>
hide unreadable = yes<br>
hide dot files = yes<br>
available = yes</p>
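<p>One smb.conf tweak that is often suggested for shares holding very<br>
large directories is forcing case-sensitive name lookups, so Samba<br>
does not have to scan the whole directory to match names<br>
case-insensitively. I have not verified whether it changes the numbers<br>
below; noting it only as a possible variable. It would look like:</p>
<p># untested here, listed only for completeness<br>
case sensitive = yes<br>
preserve case = yes<br>
short preserve case = yes</p>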
<p>I've locally mounted the volume three ways, with NFS, Samba<br>
CIFS through a GlusterFS FUSE mount, and VFS libgfapi mount:</p>
<p># mount<br>
/dev/sdr on /exports/nas-segment-0004 type ext4
(rw,noatime,auto_da_alloc,barrier,nodelalloc,journal_checksum,acl,user_xattr)<br>
/var/lib/glusterd/vols/nas-cbs-0005/nas-cbs-0005-fuse.vol on
/samba/nas-cbs-0005 type fuse.glusterfs
(rw,allow_other,max_read=131072)<br>
//<a moz-do-not-send="true"
href="http://10.10.200.181/nas-cbs-0005">10.10.200.181/nas-cbs-0005</a>
on /mnt/nas-cbs-0005-cifs type cifs
(rw,username=localadmin,password=localadmin)<br>
10.10.200.181:/nas-cbs-0005 on /mnt/nas-cbs-0005 type nfs
(rw,addr=10.10.200.181)<br>
//<a moz-do-not-send="true"
href="http://10.10.200.181/nas-cbs-0005-vfs">10.10.200.181/nas-cbs-0005-vfs</a>
on /mnt/nas-cbs-0005-cifs-vfs type cifs
(rw,username=localadmin,password=localadmin)</p>
<p>Directory listing 6000 empty directories benchmark results:</p>
<p> Directory listing the ext4 mount directly is almost<br>
instantaneous of course.</p>
<p> Directory listing the NFS mount is also very fast, less
than a second.</p>
<p> Directory listing the CIFS FUSE mount is so slow, almost
16<br>
seconds!</p>
<p> Directory listing the CIFS VFS libgfapi mount is about
twice<br>
as fast as FUSE, but still slow at 8 seconds.</p>
<p>Unfortunately, due to:</p>
<p> Bug 1004327 - New files are not inheriting ACL from
parent<br>
directory unless "stat-prefetch" is off for<br>
the respective gluster volume<br>
<a moz-do-not-send="true"
href="https://bugzilla.redhat.com/show_bug.cgi?id=1004327">https://bugzilla.redhat.com/show_bug.cgi?id=1004327</a></p>
<p>I need to have 'stat-prefetch' off, so I retested with that<br>
setting.</p>
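<p>Turning it off is just:</p>
<p># gluster volume set nas-cbs-0005 performance.stat-prefetch off</p>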
<p>Directory listing 6000 empty directories benchmark results<br>
('stat-prefetch' is off):</p>
<p> Accessing the ext4 mount directly is almost<br>
instantaneous of course.</p>
<p> Accessing the NFS mount is still very fast, less than a
second.</p>
<p> Accessing the CIFS FUSE mount is slow, almost 14<br>
seconds, but slightly faster than when 'stat-prefetch' was<br>
on?</p>
<p> Accessing the CIFS VFS libgfapi mount is now about twice<br>
as slow as FUSE, at almost 26 seconds, presumably due<br>
to 'stat-prefetch' being off!</p>
<p>To see whether the directory listing problem was due to file<br>
system metadata handling or to small I/Os, I did some simple<br>
small block file I/O benchmarks with the same configuration.</p>
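<p>Each benchmark below is the same sgp_dd run against a test file on<br>
the respective mount (full command lines are in the raw data at the<br>
end); the pattern is:</p>
<p># drop caches, then 64 KB sequential writes, 4 threads, dsync in and out, 20k blocks:<br>
# sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync if=/dev/zero of=MOUNTPOINT/testfile count=20k<br>
# (the read tests swap if= and of=, and the very-small-block runs use bs=4k; MOUNTPOINT stands for the mount being tested)</p>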
<p> 64KB Sequential Writes:</p>
<p> NFS small block writes seem slow at about 50 MB/sec.</p>
<p> CIFS FUSE small block writes are more than twice as fast
as<br>
NFS, at about 118 MB/sec.</p>
<p> CIFS VFS libgfapi small block writes are very fast, about<br>
twice as fast as CIFS FUSE, at about 232 MB/sec.</p>
<p> 64KB Sequential Reads:</p>
<p> NFS small block reads are very fast, at about 334 MB/sec.</p>
<p> CIFS FUSE small block reads are well under half of NFS, at about 124<br>
MB/sec.</p>
<p> CIFS VFS libgfapi small block reads are about the same as<br>
CIFS FUSE, at about 127 MB/sec.</p>
<p> 4KB Sequential Writes:</p>
<p> NFS very small block writes are very slow at about 4
MB/sec.</p>
<p> CIFS FUSE very small block writes are faster, at about 11<br>
MB/sec.</p>
<p> CIFS VFS libgfapi very small block writes are twice as
fast<br>
as CIFS FUSE, at about 22 MB/sec.</p>
<p> 4KB Sequential Reads:</p>
<p> NFS very small block reads are very fast at about 346<br>
MB/sec.</p>
<p> CIFS FUSE very small block reads are less than half as
fast<br>
as NFS, at about 143 MB/sec.</p>
<p> CIFS VFS libgfapi very small block reads are slightly<br>
slower than CIFS FUSE, at about 137 MB/sec.</p>
<p>I'm not quite sure how to interpret these results. Write<br>
caching is playing a part for sure, but I would think it should<br>
apply equally to both NFS and CIFS. With small file<br>
I/Os, NFS is better at reading than CIFS, and CIFS VFS is<br>
twice as good at writing as CIFS FUSE. Sadly, CIFS VFS is<br>
about the same as CIFS FUSE at reading.</p>
<p>Regarding the directory listing lag problem, I've tried most<br>
of the GlusterFS volume options that seemed like they<br>
might help, but nothing really did.</p>
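<p>For anyone wanting to reproduce, the obvious candidates are the<br>
performance translator options, e.g. (examples only; availability<br>
and defaults vary with the GlusterFS version):</p>
<p># gluster volume set nas-cbs-0005 performance.io-thread-count 32<br>
# gluster volume set nas-cbs-0005 performance.cache-size 256MB<br>
# gluster volume set nas-cbs-0005 performance.quick-read on<br>
# gluster volume set nas-cbs-0005 performance.io-cache on<br>
# gluster volume set nas-cbs-0005 performance.read-ahead on</p>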
<p>Having 'stat-prefetch' on in Gluster helps, but it has to be off<br>
because of the bug mentioned above.</p>
<div>
<div>BTW: I've repeated some tests with empty files instead of</div>
<div>directories, and the results were similar. The issue is not</div>
<div>specific to directories.</div>
</div>
<div> </div>
<div>I know that small file reads and file-system metadata<br>
handling are not GlusterFS's strong suit, but is there<br>
*anything* that can be done to help it out? Any ideas?<br>
Should I expect GlusterFS 3.5.x to improve this at all?</div>
<p>Raw data is below.</p>
<p>Any advice is appreciated. Thanks.</p>
<p>~ Jeff Byers ~</p>
<p>##########################</p>
<p>Directory listing of 6000 empty directories ('stat-prefetch'<br>
is on):</p>
<p>Directory listing the ext4 mount directly is almost<br>
instantaneous of course.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# time ls -l
/exports/nas-segment-0004/nas-cbs-0005/cifs_share/manydirs/
>/dev/null<br>
real 0m41.235s (Throw away first time for ext4 FS cache
population?)<br>
# time ls -l
/exports/nas-segment-0004/nas-cbs-0005/cifs_share/manydirs/
>/dev/null<br>
real 0m0.110s<br>
# time ls -l
/exports/nas-segment-0004/nas-cbs-0005/cifs_share/manydirs/
>/dev/null<br>
real 0m0.109s</p>
<p>Directory listing the NFS mount is also very fast.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# time ls -l /mnt/nas-cbs-0005/cifs_share/manydirs/
>/dev/null<br>
real 0m44.352s (Throw away first time for ext4 FS cache
population?)<br>
# time ls -l /mnt/nas-cbs-0005/cifs_share/manydirs/
>/dev/null<br>
real 0m0.471s<br>
# time ls -l /mnt/nas-cbs-0005/cifs_share/manydirs/
>/dev/null<br>
real 0m0.114s</p>
<p>Directory listing the CIFS FUSE mount is so slow, almost 16<br>
seconds!</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# time ls -l /mnt/nas-cbs-0005-cifs/manydirs/ >/dev/null<br>
real 0m56.573s (Throw away first time for ext4 FS cache
population?)<br>
# time ls -l /mnt/nas-cbs-0005-cifs/manydirs/ >/dev/null<br>
real 0m16.101s<br>
# time ls -l /mnt/nas-cbs-0005-cifs/manydirs/ >/dev/null<br>
real 0m15.986s</p>
<p>Directory listing the CIFS VFS libgfapi mount is about twice<br>
as fast as FUSE, but still slow at 8 seconds.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# time ls -l /mnt/nas-cbs-0005-cifs-vfs/cifs_share/manydirs/
>/dev/null<br>
real 0m48.839s (Throw away first time for ext4 FS cache
population?)<br>
# time ls -l /mnt/nas-cbs-0005-cifs-vfs/cifs_share/manydirs/
>/dev/null<br>
real 0m8.197s<br>
# time ls -l /mnt/nas-cbs-0005-cifs-vfs/cifs_share/manydirs/
>/dev/null<br>
real 0m8.450s</p>
<p>####################</p>
<p>Retesting the directory listing with Gluster default settings,<br>
including 'stat-prefetch' off, due to:</p>
<p> Bug 1004327 - New files are not inheriting ACL from
parent directory<br>
unless "stat-prefetch" is off for the
respective gluster<br>
volume<br>
<a moz-do-not-send="true"
href="https://bugzilla.redhat.com/show_bug.cgi?id=1004327">https://bugzilla.redhat.com/show_bug.cgi?id=1004327</a></p>
<p># gluster volume info nas-cbs-0005</p>
<p>Volume Name: nas-cbs-0005<br>
Type: Distribute<br>
Volume ID: 5068e9a5-d60f-439c-b319-befbf9a73a50<br>
Status: Started<br>
Number of Bricks: 1<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 192.168.5.181:/exports/nas-segment-0004/nas-cbs-0005<br>
Options Reconfigured:<br>
performance.stat-prefetch: off<br>
server.allow-insecure: on<br>
nfs.rpc-auth-allow: *<br>
nfs.disable: off<br>
nfs.addr-namelookup: off</p>
<p>Directory listing of 6000 empty directories ('stat-prefetch'<br>
is off):</p>
<p>Accessing the ext4 mount directly is almost instantaneous of<br>
course.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# time ls -l
/exports/nas-segment-0004/nas-cbs-0005/cifs_share/manydirs/
>/dev/null<br>
real 0m39.483s (Throw away first time for ext4 FS cache
population?)<br>
# time ls -l
/exports/nas-segment-0004/nas-cbs-0005/cifs_share/manydirs/
>/dev/null<br>
real 0m0.136s<br>
# time ls -l
/exports/nas-segment-0004/nas-cbs-0005/cifs_share/manydirs/
>/dev/null<br>
real 0m0.109s</p>
<p>Accessing the NFS mount is also very fast.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# time ls -l /mnt/nas-cbs-0005/cifs_share/manydirs/
>/dev/null<br>
real 0m43.819s (Throw away first time for ext4 FS cache
population?)<br>
# time ls -l /mnt/nas-cbs-0005/cifs_share/manydirs/
>/dev/null<br>
real 0m0.342s<br>
# time ls -l /mnt/nas-cbs-0005/cifs_share/manydirs/
>/dev/null<br>
real 0m0.200s</p>
<p>Accessing the CIFS FUSE mount is slow, almost 14 seconds!</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# time ls -l /mnt/nas-cbs-0005-cifs/manydirs/ >/dev/null<br>
real 0m55.759s (Throw away first time for ext4 FS cache
population?)<br>
# time ls -l /mnt/nas-cbs-0005-cifs/manydirs/ >/dev/null<br>
real 0m13.458s<br>
# time ls -l /mnt/nas-cbs-0005-cifs/manydirs/ >/dev/null<br>
real 0m13.665s</p>
<p>Accessing the CIFS VFS libgfapi mount is now about twice as<br>
slow as FUSE, at almost 26 seconds due to 'stat-prefetch'<br>
being off!</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# time ls -l /mnt/nas-cbs-0005-cifs-vfs/cifs_share/manydirs/
>/dev/null<br>
real 1m2.821s (Throw away first time for ext4 FS cache
population?)<br>
# time ls -l /mnt/nas-cbs-0005-cifs-vfs/cifs_share/manydirs/
>/dev/null<br>
real 0m25.563s<br>
# time ls -l /mnt/nas-cbs-0005-cifs-vfs/cifs_share/manydirs/
>/dev/null<br>
real 0m26.949s</p>
<p>####################</p>
<p>64KB Writes:</p>
<p>NFS small block writes seem slow at about 50 MB/sec.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync
if=/dev/zero of=/mnt/nas-cbs-0005/cifs_share/testfile
count=20k<br>
time to transfer data was 27.249756 secs, 49.25 MB/sec<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync
if=/dev/zero of=/mnt/nas-cbs-0005/cifs_share/testfile
count=20k<br>
time to transfer data was 25.893526 secs, 51.83 MB/sec</p>
<p>CIFS FUSE small block writes are more than twice as fast as
NFS, at about 118 MB/sec.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync
if=/dev/zero of=/mnt/nas-cbs-0005-cifs/testfile count=20k<br>
time to transfer data was 11.509077 secs, 116.62 MB/sec<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync
if=/dev/zero of=/mnt/nas-cbs-0005-cifs/testfile count=20k<br>
time to transfer data was 11.223902 secs, 119.58 MB/sec</p>
<p>CIFS VFS libgfapi small block writes are very fast, about<br>
twice as fast as CIFS FUSE, at about 232 MB/sec.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync
if=/dev/zero of=/mnt/nas-cbs-0005-cifs-vfs/cifs_share/testfile
count=20k<br>
time to transfer data was 5.704753 secs, 235.27 MB/sec<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync
if=/dev/zero of=/mnt/nas-cbs-0005-cifs-vfs/cifs_share/testfile
count=20k<br>
time to transfer data was 5.862486 secs, 228.94 MB/sec</p>
<p>64KB Reads:</p>
<p>NFS small block reads are very fast, at about 334 MB/sec.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync
of=/dev/null if=/mnt/nas-cbs-0005/cifs_share/testfile
count=20k<br>
time to transfer data was 3.972426 secs, 337.87 MB/sec<br>
# sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync
of=/dev/null if=/mnt/nas-cbs-0005/cifs_share/testfile
count=20k<br>
time to transfer data was 4.066978 secs, 330.02 MB/sec</p>
<p>CIFS FUSE small block reads are well under half of NFS, at about 124<br>
MB/sec.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync
of=/dev/null if=/mnt/nas-cbs-0005-cifs/testfile count=20k<br>
time to transfer data was 10.837072 secs, 123.85 MB/sec<br>
# sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync
of=/dev/null if=/mnt/nas-cbs-0005-cifs/testfile count=20k<br>
time to transfer data was 10.716980 secs, 125.24 MB/sec</p>
<p>CIFS VFS libgfapi small block reads are about the same as<br>
CIFS FUSE, at about 127 MB/sec.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync
of=/dev/null if=/mnt/nas-cbs-0005-cifs-vfs/cifs_share/testfile
count=20k<br>
time to transfer data was 10.397888 secs, 129.08 MB/sec<br>
# sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=64k bpt=1 iflag=dsync oflag=dsync
of=/dev/null if=/mnt/nas-cbs-0005-cifs-vfs/cifs_share/testfile
count=20k<br>
time to transfer data was 10.696802 secs, 125.47 MB/sec</p>
<p>4KB Writes:</p>
<p>NFS very small block writes are very slow at about 4 MB/sec.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=4k bpt=1 iflag=dsync oflag=dsync
if=/dev/zero of=/mnt/nas-cbs-0005/cifs_share/testfile
count=20k<br>
time to transfer data was 20.450521 secs, 4.10 MB/sec<br>
# sgp_dd time=1 thr=4 bs=4k bpt=1 iflag=dsync oflag=dsync
if=/dev/zero of=/mnt/nas-cbs-0005/cifs_share/testfile
count=20k<br>
time to transfer data was 19.669923 secs, 4.26 MB/sec</p>
<p>CIFS FUSE very small block writes are faster, at about 11<br>
MB/sec.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=4k bpt=1 iflag=dsync oflag=dsync
if=/dev/zero of=/mnt/nas-cbs-0005-cifs/testfile count=20k<br>
time to transfer data was 7.247578 secs, 11.57 MB/sec<br>
# sgp_dd time=1 thr=4 bs=4k bpt=1 iflag=dsync oflag=dsync
if=/dev/zero of=/mnt/nas-cbs-0005-cifs/testfile count=20k<br>
time to transfer data was 7.422002 secs, 11.30 MB/sec</p>
<p>CIFS VFS libgfapi very small block writes are twice as fast<br>
as CIFS FUSE, at about 22 MB/sec.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=4k bpt=1 iflag=dsync oflag=dsync
if=/dev/zero of=/mnt/nas-cbs-0005-cifs-vfs/cifs_share/testfile
count=20k<br>
time to transfer data was 3.766179 secs, 22.27 MB/sec<br>
# sgp_dd time=1 thr=4 bs=4k bpt=1 iflag=dsync oflag=dsync
if=/dev/zero of=/mnt/nas-cbs-0005-cifs-vfs/cifs_share/testfile
count=20k<br>
time to transfer data was 3.761176 secs, 22.30 MB/sec</p>
<p>4KB Reads:</p>
<p>NFS very small block reads are very fast at about 346<br>
MB/sec.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=4k bpt=1 iflag=dsync oflag=dsync
of=/dev/null if=/mnt/nas-cbs-0005/cifs_share/testfile
count=20k<br>
time to transfer data was 0.244960 secs, 342.45 MB/sec<br>
# sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=4k bpt=1 iflag=dsync oflag=dsync
of=/dev/null if=/mnt/nas-cbs-0005/cifs_share/testfile
count=20k<br>
time to transfer data was 0.240472 secs, 348.84 MB/sec</p>
<p>CIFS FUSE very small block reads are less than half as fast<br>
as NFS, at about 143 MB/sec.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=4k bpt=1 iflag=dsync oflag=dsync
of=/dev/null if=/mnt/nas-cbs-0005-cifs/testfile count=20k<br>
time to transfer data was 0.606534 secs, 138.30 MB/sec<br>
# sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=4k bpt=1 iflag=dsync oflag=dsync
of=/dev/null if=/mnt/nas-cbs-0005-cifs/testfile count=20k<br>
time to transfer data was 0.576185 secs, 145.59 MB/sec</p>
<p>CIFS VFS libgfapi very small block reads are slightly slower<br>
than CIFS FUSE, at about 137 MB/sec.</p>
<p># sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=4k bpt=1 iflag=dsync oflag=dsync
of=/dev/null if=/mnt/nas-cbs-0005-cifs-vfs/cifs_share/testfile
count=20k<br>
time to transfer data was 0.611328 secs, 137.22 MB/sec<br>
# sync;sync; echo '3' > /proc/sys/vm/drop_caches<br>
# sgp_dd time=1 thr=4 bs=4k bpt=1 iflag=dsync oflag=dsync
of=/dev/null if=/mnt/nas-cbs-0005-cifs-vfs/cifs_share/testfile
count=20k<br>
time to transfer data was 0.615834 secs, 136.22 MB/sec</p>
<div>EOM</div>
</div>
</blockquote>
Hi Jeff,<br>
<br>
Can you open an upstream Bugzilla for this and put all the
relevant information into it? That will give us a single place
to track and solve the issue.<br>
<br>
Regards,<br>
Vivek<br>
<blockquote
cite="mid:CAERmy3AZR3t3Pqm7cpvgkcy6B46WNuWnm5KjKfeMQ6SOyDmj8w@mail.gmail.com"
type="cite">
<div dir="ltr">
<div> </div>
</div>
<br>
<br>
<pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://supercolony.gluster.org/mailman/listinfo/gluster-users">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<br>
</body>
</html>