<div dir="ltr"><div>Hi,</div><div><br></div>SAS 7200 RPM disks are not that small at all (basically the same sizes as SATA). If I remember correctly, the reason for switching to SAS here would be that SAS is full duplex (you can read from and write to them at the same time), whereas SATA is half duplex (only a read or a write at any one moment).</div><div class="gmail_extra"><br><div class="gmail_quote">2014-09-23 9:02 GMT+03:00 Chris Knipe <span dir="ltr"><<a href="mailto:savage@savage.za.org" target="_blank">savage@savage.za.org</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
SSD has been considered but is not an option due to cost. SAS has<br>
been considered but is not an option due to the relatively small sizes<br>
of the drives. We are *rapidly* growing towards a PB of actual online<br>
storage.<br>
<br>
We are exploring RAID controllers with onboard SSD cache, which may help.<br>
<div class="HOEnZb"><div class="h5"><br>
On Tue, Sep 23, 2014 at 7:59 AM, Roman <<a href="mailto:romeo.r@gmail.com">romeo.r@gmail.com</a>> wrote:<br>
> Hi,<br>
><br>
> just a question ...<br>
><br>
> Would SAS disks be better in a situation with lots of seeks when using<br>
> GlusterFS?<br>
><br>
> 2014-09-22 23:03 GMT+03:00 Jeff Darcy <<a href="mailto:jdarcy@redhat.com">jdarcy@redhat.com</a>>:<br>
>><br>
>><br>
>> > The biggest issue that we are having, is that we are talking about<br>
>> > -billions- of small (max 5MB) files. Seek times are killing us<br>
>> > completely from what we can make out. (OS, HW/RAID has been tweaked to<br>
>> > kingdom come and back).<br>
>><br>
>> This is probably the key point. It's unlikely that seek times are going<br>
>> to get better with GlusterFS, unless it's because the new servers have<br>
>> more memory and disks, but if that's the case then you might as well<br>
>> just deploy more memory and disks in your existing scheme. On top of<br>
>> that, using any distributed file system is likely to mean more network<br>
>> round trips, to maintain consistency. There would be a benefit from<br>
>> letting GlusterFS handle the distribution (and redistribution) of files<br>
>> automatically instead of having to do your own sharding, but that's not<br>
>> the same as a performance benefit.<br>
>><br>
>> > I’m not yet too clued up on all the GlusterFS naming, but essentially<br>
>> > if we do go the GlusterFS route, we would like to use non replicated<br>
>> > storage bricks on all the front-end, as well as back-end servers in<br>
>> > order to maximize storage.<br>
>><br>
>> That's fine, so long as you recognize that recovering from a failed<br>
>> server becomes more of a manual process, but it's probably a moot point<br>
>> in light of the seek-time issue mentioned above. As much as I hate to<br>
>> discourage people from using GlusterFS, it's even worse to have them<br>
>> end up disappointed, or to let down other users with other needs while<br>
>> we spend time trying to fix the unfixable.<br>
>> _______________________________________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> <a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br>
><br>
><br>
><br>
><br>
> --<br>
> Best regards,<br>
> Roman.<br>
<br>
<br>
<br>
</div></div><span class="HOEnZb"><font color="#888888">--<br>
<br>
Regards,<br>
Chris Knipe<br>
</font></span></blockquote></div><br><br clear="all"><div><br></div>-- <br>Best regards,<br>Roman.
</div>