<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
</head>
<body bgcolor="#FFFFFF" text="#000000">
We are building a new web-serving farm here. Believing, like most
people, that the choice of storage technology hardly affects
performance in read-dominated workloads (such as ours), we picked
GlusterFS for its rich feature set.<br>
<br>
However, once we got around to some testing, GlusterFS-mounted shares
lost -- by a wide margin -- not only to the SAN-connected RAIDs, but
even to the NFS-mounted shares.<br>
<br>
Here are the numbers... All of the systems involved are VMware VMs
running RHEL6. Each VM has its own dedicated SAN-connected "disk".
GlusterFS is using a replicated volume with two bricks; each brick is
on a VM of its own, residing on that VM's SAN-connected "disk". (A
sketch of how the volume was put together follows the mount listing
below.)<br>
<br>
The web-server is, likewise, a VM. The same set of four test-files
was placed on the web-server's own SAN-connected "disk", on an NFS
mount, and on a GlusterFS share. (The NFS service is provided by a
NetApp "appliance".) Here are the corresponding lines from the mount
listing:<br>
<blockquote>
<ul>
<li>Local (SAN-connected):<br>
<tt>/dev/mapper/vg_root-lv_data01 on /data01 type ext4 (rw)</tt></li>
<li>NFS:<br>
<tt>nas02:/NFS-DCMS on /data03 type nfs </tt><tt>(rw,nfsvers=3,rsize=32768,wsize=32768,hard,intr,tcp,timeo=600,addr=10.x.x.x)</tt></li>
<li>GlusterFS:<br>
<tt>glusterfs.X:/test-ie on /mnt/glusterfs/test-ie type
fuse.glusterfs</tt> <tt>(rw,default_permissions,allow_other,max_read=131072)</tt><br>
</li>
</ul>
</blockquote>
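For reference, the volume was put together in the usual way -- along
the lines below. (The brick hostnames and paths here are
placeholders, not our actual ones; the volume name is real.)<br>
<blockquote><pre># hypothetical reconstruction -- real brick hosts/paths differ
gluster volume create test-ie replica 2 brick1:/export/test-ie brick2:/export/test-ie
gluster volume start test-ie
mount -t glusterfs glusterfs.X:/test-ie /mnt/glusterfs/test-ie</pre></blockquote>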
As mentioned above, four test-files were used for the benchmark:<br>
<ol>
<li>Small static file - 429 bytes</li>
<li>Larger static file - 93347 bytes</li>
<li>Small PHP file (a single call to the <code>phpinfo()</code>
function -- reproduced after this list). Although the file itself
is small, its output was over 64&nbsp;KB.</li>
<li>Large PHP file (<code>apc.php</code>, the status page shipped
with APC). Although the file is larger, its output was only about
12&nbsp;KB.</li>
</ol>
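For reproducibility, the small PHP file amounts to a one-liner (the
file name below is made up):<br>
<blockquote><pre># a single call to phpinfo(); its HTML output exceeds 64 KB
printf '&lt;?php phpinfo(); ?&gt;\n' &gt; info.php</pre></blockquote>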
The tests were run using our homegrown utility, which reports the
average latency of the successful requests. It was configured to
create 17 threads, each hitting the same file repeatedly for 11
seconds. The timings (in milliseconds) are in the table below:<br>
<blockquote>
<table border="1">
<tbody>
<tr>
<th>Latency, ms</th>
<th>Local</th>
<th>NFS</th>
<th>GlusterFS</th>
</tr>
<tr>
<th>Small static file</th>
<td align="center">3.643</td>
<td align="center">6.801</td>
<td align="center">22.41</td>
</tr>
<tr>
<th>Large static file</th>
<td align="center">15.34</td>
<td align="center">15.97</td>
<td align="center">40.80</td>
</tr>
<tr>
<th>Small PHP script</th>
<td align="center">50.58</td>
<td align="center">67.72</td>
<td align="center">77.17</td>
</tr>
<tr>
<th>Large PHP script</th>
<td align="center">16.50</td>
<td align="center">17.81</td>
<td align="center">118.4</td>
</tr>
</tbody>
</table>
</blockquote>
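Our utility is not public, but the same load pattern is easy to
approximate with a standard tool such as ApacheBench (the URL below
is, of course, made up; ab's mean "Time per request" corresponds to
the latencies above):<br>
<blockquote><pre># 17 concurrent clients, stopping after 11 seconds
ab -c 17 -t 11 http://webserver/data01/small-static.html</pre></blockquote>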
<div align="justify">Discouragingly, not only GlusterFS' performance
is pretty bad, the <tt>glusterfs</tt>-process running on the
web-server could be seen hogging an entire CPU during the tests...
This suggests, the bottleneck is not in the underlying storage or
network, but the CPU -- which would be quite unusual for an
IO-intensive workload. (<tt>glusterfsd</tt>-processes hosting each
brick were using about 9% of one CPU each.)<br>
</div>
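If more detail would help, we can collect per-brick call profiles
with gluster's built-in profiler, plus per-thread CPU usage of the
client process -- along these lines (we have not dug through such
output yet):<br>
<blockquote><pre># built-in profiler: per-FOP statistics on each brick
gluster volume profile test-ie start
# ... run the benchmark ...
gluster volume profile test-ie info
# which threads of the client-side glusterfs process burn the CPU:
top -H -p "$(pgrep -ox glusterfs)"</pre></blockquote>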
<br>
We used the "officially" provided GlusterFS 3.4.1 RPMs for RHEL.<br>
<br>
Could it be that the GlusterFS developers have stopped caring about
read performance -- and stopped routinely testing it? The wording of
the <a href="http://www.gluster.org/category/performance/">Performance
page at Gluster.org</a> hints at such "arrogance":<br>
<blockquote><i>Let's start with read-dominated workloads. It's well
known that OS (and app) caches can absorb most of the reads in a
system. This was the fundamental observation behind Seltzer et al's
work on log-structured filesystems all those years ago. Reads often
take care of themselves, so <strong>at the filesystem level</strong>
focus on writes.</i></blockquote>
Or did we do such a poor job configuring gluster here that our setup
can be made 2-3 times faster simply by correcting our mistakes? Any
comments? Thank you!<br>
<blockquote>-mi<br>
</blockquote>
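P.S. In case it is indeed just (mis)configuration: below are the
read-side knobs we were planning to experiment with next. The option
names come from <tt>gluster volume set help</tt>; the values are
guesses, and some of them may simply restate the defaults.<br>
<blockquote><pre># untried so far -- read-side translator tuning (values are guesses)
gluster volume set test-ie performance.cache-size 256MB
gluster volume set test-ie performance.io-thread-count 32
gluster volume set test-ie performance.quick-read on
gluster volume set test-ie performance.read-ahead on</pre></blockquote>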
</body>
</html>