<html><head></head><body>The proper way to engineer your system is to measure the performance of your most expensive processes first, then design the system so that those processes perform in line with your expectations. <br>
<br>
If you're properly engineering a system, you should know what your performance expectations are before you begin testing to see whether you've met them.<br>
<br>
Also be wary of comparing apples to orchards. Picking a basket of apples to feed 20 people may be the fastest and most efficient way to feed them, but when you have 2000, it's better to just send them into the orchard to pick their own. Sure, each individual's performance will be degraded, but the overall task is vastly more performant. <br><br><div class="gmail_quote">On August 11, 2014 1:59:24 AM PDT, Alan Orth <alan.orth@gmail.com> wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<pre class="k9mail">Hi,<br /><br />I guess some good advice is "premature optimization is the root of all<br />evil". Use the Gluster defaults for your replica volumes, then, when<br />you inevitably have performance issues, identify bottlenecks logically<br />and iteratively:<br /> - Raw RAID read/write speeds<br /> - Raw network read/write speeds<br /><br />i.e., make sure your hardware/network can keep up before trying to "fix"<br />GlusterFS. We use GlusterFS for home directories on a compute cluster<br />with 10-20 concurrent users (~100 total users), and I mandate that users<br />run write-heavy jobs against compute-node local storage. It's a slightly<br />different use case from yours, but hopefully a useful insight. For the<br />record, we're using 10GbE over copper.<br /><br />Other than that, the Red Hat storage guide recommends hardware RAID6 and<br />XFS (rather than ext4).<br /><br />Cheers,<br /><br />Alan<br /><br />On 08/01/2014 10:02 AM, Bruno MACADRÉ wrote:<br /><blockquote
class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #729fcf; padding-left: 1ex;"> Hi all,<br /> <br /> I'm currently building a fileserver between 2 nodes; I use GlusterFS in<br /> replicate mode between them to keep data in sync.<br /> <br /> This fileserver is planned to be used by about 200<br /> users/workstations simultaneously for homes and other shares, so my<br /> questions are:<br /> <br /> * What's the best mount type (GlusterFS or NFS) for performance<br /> and/or stability?<br /> * I see a lot of tuning advice around the Web (everything and nothing); is<br /> there a tuning approach that follows from the final use and the hardware of<br /> the servers?<br /> * Are there any caveats to avoid?<br /> <br /> Thanks in advance for any answers.<br /> Regards,<br /> Bruno.<br /> <br /></blockquote><br /></pre></blockquote></div><br>
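As a starting point for the "check raw speeds first" step quoted above, here's a minimal sketch of baseline disk and network checks. The file path, sizes, and hostname are placeholders of my own choosing, not from the thread; adjust them to your environment.<br>

```shell
#!/bin/sh
# Baseline disk throughput: write then read back a test file on the
# brick's filesystem. conv=fdatasync forces the data to disk before
# dd reports a speed, so the write number isn't just page-cache speed.
TESTFILE=/tmp/gluster-bench.img

# Write test (last line of dd's output is the throughput summary):
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync 2>&1 | tail -n1

# Read test:
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n1

# Clean up the test file.
rm -f "$TESTFILE"

# Baseline network throughput between the two replica nodes
# (run `iperf3 -s` on the other node first; hostname is a placeholder):
#   iperf3 -c node2.example.com -t 30
```

If the raw numbers here are already below expectations, no amount of Gluster tuning will help; fix the RAID layout or the network first.<br>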
</body></html>