<div dir="ltr"><br><br><div class="gmail_quote">On Tue, Oct 21, 2008 at 1:40 AM, Keith Freedman <span dir="ltr"><<a href="mailto:freedman@freeformit.com">freedman@freeformit.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="Ih2E3d">At 12:12 PM 10/20/2008, Stas Oskin wrote:<br>
>Hi.<br>
><br>
>Thanks for all the answers.<br>
><br>
>I should say that the metaserver-less (P2P?) approach of GlusterFS<br>
>indeed makes it a very attractive option, as it basically<br>
>eliminates any single point of failure.<br>
<br>
</div>I think it's important that people understand the tradeoffs.<br>
Having a central metaserver ensures the integrity of the data.<br>
Gluster, by not having a metaserver, introduces different<br>
problems (and different solutions).<br>
With AFR, you can get into a split-brain situation, so examining<br>
Gluster's split-brain resolution would be necessary, and you'd have to<br>
determine whether you're comfortable with the tradeoffs. In my view, it's<br>
a good, workable solution, but it may not work for everyone.<br>
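For context, replication is just another translator in the client spec; a minimal sketch (volume names are placeholders, and it assumes two protocol/client volumes, remote1 and remote2, are defined earlier in the same file) looks roughly like this:<br>
<br>
volume replicated<br>
  type cluster/afr             # n-way replication across its subvolumes<br>
  subvolumes remote1 remote2   # two copies; split-brain is possible if these diverge<br>
end-volume<br>
<br>
The more subvolumes you list, the more copies you get, and the more split-brain scenarios you have to think through.<br>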
<br>
The other issue is the namespace brick on the unify translator. This<br>
is effectively a metadata-like thing. You can AFR the namespace<br>
brick to provide additional availability, but if your namespace brick<br>
is unavailable then you have a problem similar to a<br>
metadata-server outage in another solution.</blockquote><div><br>The new DHT translator, scheduled for the 1.4.0 release, provides similar functionality to unify (file scheduling) but does not use a namespace brick. Thus DHT completely does away with the metadata-server concept in GlusterFS.<br>
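For comparison, this is roughly what the namespace dependency looks like in a unify spec today (volume names are placeholders; remote1 and remote2 stand for protocol/client volumes defined elsewhere in the file):<br>
<br>
volume ns                           # small dedicated brick holding the directory structure; often AFR'd<br>
  type protocol/client<br>
  option transport-type tcp/client<br>
  option remote-host 192.168.1.10   # placeholder address<br>
  option remote-subvolume ns-brick<br>
end-volume<br>
<br>
volume unified<br>
  type cluster/unify<br>
  option namespace ns               # the extra brick that DHT does away with<br>
  option scheduler rr<br>
  subvolumes remote1 remote2<br>
end-volume<br>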
<br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><br>
<br>
So, while I personally think Gluster is one of the "best" solutions<br>
out there, that's because the numbers for my situation point in that<br>
direction; they won't for everyone.<br>
<div class="Ih2E3d"><br>
>My largest concern over GlusterFS is really the lack of a central<br>
>administration tool. Modifying the configuration files on every<br>
>server/client with every topology change becomes a hurdle on 10<br>
>servers already, and probably impossible beyond 100.<br>
<br>
</div>In most cases, your client configurations are pretty much identical,<br>
so maintaining them is relatively simple. If your server topology<br>
changes often then it can be inconvenient, partly because you have to<br>
deal with IP addresses.<br>
It's also not good for certain grid operating systems which use<br>
internal IPs that change randomly, or if for some reason you<br>
have servers using DHCP.<br>
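For example, every client spec ends up hard-coding something like this for each server (the name and address are placeholders), and that's what you end up editing everywhere when the topology or the addresses change:<br>
<br>
volume remote1<br>
  type protocol/client<br>
  option transport-type tcp/client<br>
  option remote-host 192.168.1.11    # hard-coded server address<br>
  option remote-subvolume brick      # the exported brick on that server<br>
end-volume<br>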
<div class="Ih2E3d"><br>
>Hence, I'm happy to hear version 1.4 will have some kind of web<br>
>interface. The only questions are:<br>
><br>
>1) Will it support central management of all servers/clients,<br>
>including the global AFR settings?<br>
><br>
>2) When will it come out? :)<br>
><br>
>Regards.<br>
><br>
</div>>2008/10/20 Vikas Gorur <<mailto:<a href="mailto:vikasgp@gmail.com">vikasgp@gmail.com</a>><a href="mailto:vikasgp@gmail.com">vikasgp@gmail.com</a>><br>
>2008/10/18 Stas Oskin <<mailto:<a href="mailto:stas.oskin@gmail.com">stas.oskin@gmail.com</a>><a href="mailto:stas.oskin@gmail.com">stas.oskin@gmail.com</a>>:<br>
<div class="Ih2E3d">> > Hi.<br>
> ><br>
> > I'm evaluating GlusterFS for our DFS implementation, and wondered how it<br>
> > compares to KFS/CloudStore?<br>
> ><br>
> > These features here look especially nice<br>
> ><br>
</div>> > (<a href="http://kosmosfs.sourceforge.net/features.html" target="_blank">http://kosmosfs.sourceforge.net/features.html</a>).<br>
<div><div></div><div class="Wj3C7c">> > Any idea which of them exist<br>
> > in GlusterFS as well?<br>
><br>
>Stas,<br>
><br>
>Here's how GlusterFS compares to KFS, feature by feature:<br>
><br>
> > Incremental scalability:<br>
><br>
>Currently, adding new storage nodes requires a change in the config<br>
>file and restarting servers and clients. However, there is no need to<br>
>move/copy data or perform any other maintenance steps. "Hot add"<br>
>capability is planned for the 1.5 release.<br>
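> For example, bringing a third node in is roughly a matter of declaring one more<br>
> protocol/client volume and appending it to the aggregating volume's subvolumes<br>
> line, then restarting (names and the address below are only placeholders):<br>
><br>
> volume remote3                       # the new storage node<br>
>   type protocol/client<br>
>   option transport-type tcp/client<br>
>   option remote-host 192.168.1.13    # placeholder address<br>
>   option remote-subvolume brick<br>
> end-volume<br>
><br>
> # ...and in the existing unify/afr volume:<br>
> #   subvolumes remote1 remote2 remote3<br>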
><br>
> > Availability<br>
><br>
>GlusterFS supports n-way data replication through the AFR translator.<br>
><br>
> > Per file degree of replication<br>
><br>
>GlusterFS used to have this feature, but it was dropped due to lack<br>
>of interest. It would not be too hard to bring it back.<br>
><br>
> > Re-balancing<br>
><br>
>The DHT and unify translators have extensive support for distributing<br>
>data across nodes. One can use unify schedulers to define file creation<br>
>policies such as:<br>
><br>
>* ALU - Adaptive: schedule file creation based on disk space utilization,<br>
>disk speed, etc.<br>
><br>
>* Round-robin<br>
><br>
>* Non-uniform (NUFA) - prefer a local volume for file creation and use remote<br>
>volumes only when there is no space on the local volume.<br>
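> For illustration, the scheduler is picked per unify volume; a rough sketch with ALU<br>
> (volume names are placeholders and the thresholds are made-up values; option names<br>
> are as I recall them from the 1.3.x docs, so check the wiki for your release):<br>
><br>
> volume unified<br>
>   type cluster/unify<br>
>   option namespace ns                          # namespace brick defined elsewhere<br>
>   option scheduler alu                         # or rr, random, nufa<br>
>   option alu.order disk-usage                  # balance primarily on free space<br>
>   option alu.disk-usage.entry-threshold 2GB    # made-up threshold values<br>
>   option alu.disk-usage.exit-threshold 128MB<br>
>   subvolumes remote1 remote2<br>
> end-volume<br>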
><br>
> > Data integrity<br>
><br>
>GlusterFS arguably provides better data integrity since it runs over<br>
>an existing filesystem, and does not access disks at the block level.<br>
>Thus in the worst case (which shouldn't happen), even if GlusterFS<br>
>crashes, your data will still be readable with normal tools.<br>
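> Concretely, a server just exports an ordinary directory on its local filesystem; a minimal<br>
> server-side sketch (the path and names are placeholders, and the wide-open auth line is<br>
> only for illustration; older releases spell it auth.ip.* rather than auth.addr.*):<br>
><br>
> volume brick<br>
>   type storage/posix               # plain files on the existing (ext3/xfs/...) filesystem<br>
>   option directory /data/export    # still readable with ls, cp, tar, etc.<br>
> end-volume<br>
><br>
> volume server<br>
>   type protocol/server<br>
>   option transport-type tcp/server<br>
>   option auth.addr.brick.allow *   # illustration only; restrict this in practice<br>
>   subvolumes brick<br>
> end-volume<br>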
><br>
> > Rack-aware data placement<br>
><br>
>None of our users have mentioned needing this so far, so GlusterFS<br>
>has no rack awareness. One could incorporate this intelligence into<br>
>our cluster translators (unify, afr, stripe) quite easily.<br>
><br>
> > File writes and caching<br>
><br>
>GlusterFS provides a POSIX-compliant filesystem interface. GlusterFS<br>
>has fine-tunable caching translators, such as read-ahead (prefetching data),<br>
>write-behind (to reduce write latency), and io-cache (caching file data).<br>
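> They stack like any other translator; a rough client-side chain (the sizes are made-up<br>
> tuning values, and 'replicated' stands for whatever afr/unify volume sits below):<br>
><br>
> volume readahead<br>
>   type performance/read-ahead<br>
>   option page-size 128KB          # made-up tuning values<br>
>   option page-count 4<br>
>   subvolumes replicated<br>
> end-volume<br>
><br>
> volume writebehind<br>
>   type performance/write-behind<br>
>   option aggregate-size 1MB<br>
>   subvolumes readahead<br>
> end-volume<br>
><br>
> volume iocache<br>
>   type performance/io-cache<br>
>   option cache-size 64MB<br>
>   subvolumes writebehind<br>
> end-volume<br>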
><br>
> > Language support<br>
><br>
>This is irrelevant to GlusterFS since it is mounted and accessed as a normal<br>
>filesystem, through FUSE. This means all your applications can run on<br>
>GlusterFS without any modifications.<br>
><br>
> > Deploy scripts<br>
><br>
>Users have found GlusterFS to be so simple to setup compared to other<br>
>cluster filesystems that there isn't really a need for deploy scripts. ;)<br>
><br>
> > Local read optimization<br>
><br>
>As mentioned earlier, if your data access patterns justify it (that<br>
>is, if users generally access local data and only occasionally want<br>
>remote data), you can configure 'unify' with the NUFA scheduler to achieve<br>
>this.<br>
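> For example (names are placeholders; 'posix-local' would be the node's own brick and<br>
> the remote volumes are protocol/client volumes defined elsewhere in the spec):<br>
><br>
> volume unified<br>
>   type cluster/unify<br>
>   option namespace ns<br>
>   option scheduler nufa<br>
>   option nufa.local-volume-name posix-local   # prefer this node's own storage<br>
>   subvolumes posix-local remote1 remote2<br>
> end-volume<br>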
><br>
>In addition, I'd like to mention two particular strengths of GlusterFS.<br>
><br>
>1) GlusterFS has no notion of a 'meta-server'. I have not looked through<br>
>KFS' design in detail, but the mention of a 'meta-server' leads me to<br>
>believe that failure of the meta-server will take the entire cluster offline.<br>
>Please correct me if this impression is wrong.<br>
><br>
>GlusterFS, on the other hand, has no single point of failure such as a<br>
>central meta-server.<br>
><br>
>2) GlusterFS 1.4 will have a web-based interface which will allow<br>
>you to start/stop GlusterFS, monitor logs and performance, and do<br>
>other admin activities.<br>
><br>
><br>
>Please contact us if you need further clarifications or details.<br>
><br>
>Vikas Gorur<br>
>Engineer - Z Research<br>
><br>
</div></div><div><div></div><div class="Wj3C7c">>_______________________________________________<br>
>Gluster-users mailing list<br>
><a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
><a href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</a><br>
<br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>hard work often pays off after time, but laziness always pays off now<br>
</div>