[Gluster-users] [Gluster-devel] Regarding the write performance in replica 1 volume in 1Gbps Ethernet, get about 50MB/s while writing single file.

Jaden Liang jaden1q84 at gmail.com
Tue Sep 2 09:17:25 UTC 2014


Hello, gluster-devel and gluster-users team,

We are running a performance test on a replica 1 volume and found that
sequential writes to a single file only reach about 50MB/s over 1Gbps
Ethernet. However, when we write multiple files sequentially in parallel,
throughput goes up to 120MB/s, which is the line speed of the network.

We also used the stat xlator to find out where the bottleneck of the
single-file write path is. Here is the stat data:

Client-side:
......
vs_vol_rep1-client-8.latency.WRITE=total:21834371.000000us,
mean:2665.328491us, count:8192, max:4063475, min:1849
......

Server-side:
......
/data/sdb1/brick1.latency.WRITE=total:6156857.000000us, mean:751.569458us,
count:8192, max:230864, min:611
......

Note that the test writes a single 1GB file sequentially to a replica 1
volume over a 1Gbps Ethernet network.
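As a sanity check, the request count in the stats below follows directly from the file size and the 128KB request size (a quick sketch, nothing GlusterFS-specific):

```python
# Sanity check: a 1 GB file written in 128 KB requests
file_size = 1024 ** 3          # 1 GB in bytes
request_size = 128 * 1024      # 128 KB per write request
num_requests = file_size // request_size
print(num_requests)            # 8192, matching count:8192 in the stats
```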

On the client side, we can see there are 8192 write requests in total, each
writing 128KB of data. The total elapsed time is 21834371us, about 21
seconds. The mean time per request is 2665us, about 2.6ms, which means the
client can only serve about 380 requests per second. There is additional
time spent in other operations such as statfs and lookup, but those are not
major contributors.
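The throughput cap follows from the mean latency: with only one 128KB request outstanding at a time, the stream is limited to roughly one request per 2665us. A back-of-the-envelope check (decimal MB, numbers taken from the stats above):

```python
mean_latency_us = 2665.33                 # client-side mean WRITE latency
request_size = 128 * 1024                 # bytes per write request
requests_per_sec = 1_000_000 / mean_latency_us
throughput_mb = requests_per_sec * request_size / 1e6  # MB/s (decimal)
print(round(requests_per_sec))            # ~375 requests/s
print(round(throughput_mb, 1))            # ~49.2 MB/s, matching the observed ~50 MB/s
```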

On the server side, the mean time per request is 751us, including writing
the data to the HDD. So we do not think the server is the major bottleneck.
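The same arithmetic shows the brick has headroom: at a 751us mean latency, the server side alone could sustain more than the 1Gbps wire speed (a rough estimate, again with the numbers from the stats above):

```python
server_latency_us = 751.57                # brick-side mean WRITE latency
request_size = 128 * 1024                 # bytes per write request
server_throughput_mb = (1_000_000 / server_latency_us) * request_size / 1e6
print(round(server_throughput_mb))        # ~174 MB/s, well above the ~120 MB/s line rate
```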

We also modified some code to measure the elapsed time in the epoll layer.
It only takes about 20us from enqueueing the data to finishing sending it
out.
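Subtracting the pieces we have measured from the client-side mean leaves a large unaccounted gap, which is why we are now looking at the RPC path (a rough decomposition; network propagation and FUSE overhead are not separately measured here):

```python
client_mean_us = 2665.33   # client-side mean WRITE latency
server_mean_us = 751.57    # brick-side mean WRITE latency
epoll_us = 20              # measured enqueue-to-sent time
unaccounted_us = client_mean_us - server_mean_us - epoll_us
print(round(unaccounted_us))  # ~1894 us per request spent somewhere in between
```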

We are now looking into the RPC mechanism in GlusterFS. Still, we suspect
this issue may have been encountered before by the gluster-devel or
gluster-users teams. Therefore, any suggestions would be appreciated. Has
anyone seen such an issue?

Best regards,
Jaden Liang
9/2/2014



