<div class="post-text">
<p>I have a small GlusterFS cluster
providing a replicated volume. Each server has 2 SAS disks for the OS
and logs, and 22 SATA disks for the actual data, striped together as a
RAID10 using a MegaRAID SAS 9280-4i4e with this configuration: <a href="http://pastebin.com/2xj4401J" rel="nofollow">http://pastebin.com/2xj4401J</a> </p>
<p>Connected to this cluster are a few other servers running the native
client and nginx, serving files stored on the volume in the 3-10 MB range.</p>
<p>Right now a storage server has an outgoing bandwidth of 300 Mbit/s and
the busy rate of the RAID array is at 30-40%. There are also strange
side effects: sometimes the I/O latency skyrockets and no access to the
RAID is possible for &gt;10 seconds. This happens at both 300 Mbit/s and 1000 Mbit/s of outgoing bandwidth. The file system used is XFS,
and it has been tuned to match the RAID stripe size.</p>
<p>I've tested all sorts of Gluster settings, but none seemed to have any effect, so I reset the volume configuration and it is now using the defaults.</p>
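<p>For context, by "tuned to match the RAID stripe size" I mean the usual XFS su/sw alignment. A sketch with hypothetical numbers (the actual stripe unit and device paths below are placeholders, not my exact setup): a 22-disk RAID10 has 11 data spindles, so with e.g. a 256 KB controller stripe unit the file system would be created and checked like this:</p>
<pre><code># Hypothetical geometry: 22-disk RAID10 -&gt; 11 data spindles.
# With a 256 KB controller stripe unit, the matching XFS creation flags are:
mkfs.xfs -d su=256k,sw=11 /dev/sdX

# Verify the alignment of an existing file system
# (sunit/swidth are reported in 512-byte sectors):
xfs_info /path/to/brick

# Watch per-device utilization and await/latency while a stall happens:
iostat -x 1
</code></pre>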
<p>Does anyone have an idea what could be the reason for such bad performance? 22 disks in a RAID10 should deliver <em>way</em> more throughput.</p>
</div>