<div dir="ltr">Hi.<br><br>Thanks for all the answers.<br><br>I should say that the metaserver-less (P2P?) approach of GlusterFS in particular makes it a very attractive option, as it essentially eliminates any single point of failure.<br>
<br>My largest concern over GlusterFS is really the lack of a central administration tool. Modifying the configuration files on every server/client with every topology change becomes a hurdle already at 10 servers, and is probably impossible beyond 100.<br>
<br>Hence, I'm happy to hear that version 1.4 will have some kind of a web interface. The only questions are:<br><br>1) Will it support central management of all servers/clients, including the global AFR settings?<br><br>
2) When will it come out? :)<br><br>Regards.<br><br><div class="gmail_quote">2008/10/20 Vikas Gorur <span dir="ltr"><<a href="mailto:vikasgp@gmail.com">vikasgp@gmail.com</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
2008/10/18 Stas Oskin <<a href="mailto:stas.oskin@gmail.com">stas.oskin@gmail.com</a>>:<br>
<div><div></div><div class="Wj3C7c">> Hi.<br>
><br>
> I'm evaluating GlusterFS for our DFS implementation, and wondered how it<br>
> compares to KFS/CloudStore?<br>
><br>
> These features here look especially nice<br>
> (<a href="http://kosmosfs.sourceforge.net/features.html" target="_blank">http://kosmosfs.sourceforge.net/features.html</a>). Any idea what of them exist<br>
> in GlusterFS as well?<br>
<br>
</div></div>Stas,<br>
<br>
Here's how GlusterFS compares to KFS, feature by feature:<br>
<br>
> Incremental scalability:<br>
<br>
Currently adding new storage nodes requires a change in the config<br>
file and restarting servers and clients. However, there is no need to<br>
move/copy data or perform any other maintenance steps. "Hot add"<br>
capability is planned for the 1.5 release.<br>
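To make this concrete, here is a rough sketch of the client-side volume file involved; all names are hypothetical and exact option names vary between releases. Adding a storage node means declaring one more protocol/client volume and listing it under the aggregating translator:<br>

```
# Hypothetical client volfile fragment -- option names vary by release.
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1.example.com
  option remote-subvolume posix1
end-volume

# Adding a new storage node = adding a block like this,
# listing it in the aggregator's subvolumes, and restarting.
volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2.example.com
  option remote-subvolume posix1
end-volume
```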
<br>
> Availability<br>
<br>
GlusterFS supports n-way data replication through the AFR translator.<br>
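A minimal AFR definition might look like this (volume names are illustrative):<br>

```
# Two-way replication: AFR mirrors every file across both subvolumes.
# For n-way replication, list n subvolumes.
volume afr0
  type cluster/afr
  subvolumes brick1 brick2
end-volume
```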
<br>
> Per file degree of replication<br>
<br>
GlusterFS used to have this feature, but it was dropped due to lack<br>
of interest. It would not be too hard to bring it back.<br>
<br>
> Re-balancing<br>
<br>
The DHT and unify translators have extensive support for distributing<br>
data across nodes. One can use unify schedulers to define file creation<br>
policies such as:<br>
<br>
* ALU - Schedule file creation adaptively, based on disk space<br>
utilization, disk speed, etc.<br>
<br>
* Round robin<br>
<br>
* Non-uniform (NUFA) - prefer a local volume for file creation and use remote<br>
ones only when there is no space on the local volume.<br>
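As a sketch (assuming the hypothetical bricks brick1/brick2 and a namespace volume, which unify requires), selecting a scheduler is a one-line option:<br>

```
# Illustrative unify definition -- names and options are assumptions.
volume unify0
  type cluster/unify
  option namespace brick-ns   # unify keeps its directory structure here
  option scheduler alu        # or: rr (round robin), nufa
  subvolumes brick1 brick2
end-volume
```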
<br>
> Data integrity<br>
<br>
GlusterFS arguably provides better data integrity since it runs over<br>
an existing filesystem, and does not access disks at the block level.<br>
Thus in the worst case (which shouldn't happen), even if GlusterFS<br>
crashes, your data will still be readable with normal tools.<br>
<br>
> Rack-aware data placement<br>
<br>
None of our users have mentioned this need so far, so GlusterFS<br>
has no rack awareness. One could incorporate this intelligence into<br>
our cluster translators (unify, afr, stripe) quite easily.<br>
<br>
> File writes and caching<br>
<br>
GlusterFS provides a POSIX-compliant filesystem interface. GlusterFS<br>
has fine-tunable caching translators, such as read-ahead (to prefetch file data),<br>
write-behind (to reduce write latency), and io-cache (to cache file data in memory).<br>
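These translators are stacked on top of one another in the client volfile. The fragment below is only illustrative; the tunable names and values are assumptions to check against your release:<br>

```
# Each performance translator wraps the volume below it.
volume wb
  type performance/write-behind
  option aggregate-size 128KB   # batch small writes before sending
  subvolumes unify0
end-volume

volume ra
  type performance/read-ahead
  option page-size 256KB        # prefetch granularity
  subvolumes wb
end-volume

volume ioc
  type performance/io-cache
  option cache-size 64MB        # in-memory cache of file data
  subvolumes ra
end-volume
```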
<br>
> Language support<br>
<br>
This is irrelevant to GlusterFS since it is mounted and accessed as a normal<br>
filesystem, through FUSE. This means all your applications can run on GlusterFS<br>
without any modifications.<br>
<br>
> Deploy scripts<br>
<br>
Users have found GlusterFS to be so simple to set up compared to other<br>
cluster filesystems that there isn't really a need for deploy scripts. ;)<br>
<br>
> Local read optimization<br>
<br>
As mentioned earlier, if your data access patterns justify it (that<br>
is, if users generally access local data and only occasionally need<br>
remote data), you can configure 'unify' with the NUFA scheduler to achieve<br>
this.<br>
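A NUFA configuration might look roughly like this (volume names are hypothetical, and the local-volume option name should be verified against the release documentation):<br>

```
# Illustrative NUFA setup: create files on the local brick first.
volume unify0
  type cluster/unify
  option namespace brick-ns
  option scheduler nufa
  option nufa.local-volume-name brick-local   # assumed option name
  subvolumes brick-local brick-remote1 brick-remote2
end-volume
```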
<br>
In addition, I'd like to mention two particular strengths of GlusterFS.<br>
<br>
1) GlusterFS has no notion of a 'meta-server'. I have not looked through<br>
KFS' design in detail, but the mention of a 'meta-server' leads me to<br>
believe that failure of the meta-server will take the entire cluster offline.<br>
Please correct me if the impression is wrong.<br>
<br>
GlusterFS, on the other hand, has no single point of failure such as a<br>
central meta-server.<br>
<br>
2) GlusterFS 1.4 will have a web-based interface which will allow<br>
you to start/stop GlusterFS, monitor logs and performance, and do<br>
other admin activities.<br>
<br>
<br>
Please contact us if you need further clarifications or details.<br>
<font color="#888888"><br>
Vikas Gorur<br>
Engineer - Z Research<br>
</font></blockquote></div></div>