<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html><body>
<p>For now there isn't any public site with more information. We will publish it here when it becomes available.</p>
<p>There are still some things to work out, but I'll be glad to answer any questions you may have.</p>
<p>Xavi</p>
<p>On 30.03.2012 09:58, Pascal wrote:</p>
<blockquote type="cite" style="padding-left:5px; border-left:#1010ff 2px solid; margin-left:5px; width:100%">
<pre>Hello Xavier,

it sounds pretty interesting to me.
Will you publish any news on this mailing list, or is there another
place where I can keep myself updated on your progress?

Am Fri, 30 Mar 2012 08:33:37 +0200
schrieb Xavier Hernandez
&lt;<a href="mailto:xhernandez@datalab.es">xhernandez@datalab.es</a>&gt;:</pre>
<blockquote type="cite" style="padding-left:5px; border-left:#1010ff 2px solid; margin-left:5px; width:100%">
<p>Sorry, the previous message was intended for Pascal.</p>
<p>Xavi</p>
<p>On 30.03.2012 08:29, Xavier Hernandez wrote:</p>
<blockquote type="cite" style="padding-left:5px; border-left:#1010ff 2px solid; margin-left:5px; width:100%">
<p>Hello David,</p>
<p>we aren't the core developers of GlusterFS, but we are developing a new translator that will be able to implement something similar to a RAID 6. In fact, it will support a configurable level of redundancy: a redundancy of 1 is equivalent to RAID 5, a redundancy of 2 is equivalent to RAID 6, and higher levels of redundancy are also supported.</p>
<p>We are also trying to improve performance over replicate by using a new contention detection and locking mechanism, but no promises about that yet.</p>
<p>We plan to begin internal tests soon. When we consider it stable, we will release a first beta.</p>
<p>Xavi</p>
<p>On 29.03.2012 17:14, Pascal wrote:</p>
<blockquote type="cite" style="padding-left:5px; border-left:#1010ff 2px solid; margin-left:5px; width:100%">
<p>Am Thu, 29 Mar 2012 11:02:38 -0400 schrieb David Coulson &lt;<a href="mailto:david@davidcoulson.net">david@davidcoulson.net</a>&gt;:</p>
<p>Sorry for the confusion, I misunderstood you at first. Now I think I know what you mean, and I will think about it.</p>
<p>Are there more suggestions or official plans from the GlusterFS developers?</p>
<blockquote type="cite" style="padding-left:5px; border-left:#1010ff 2px solid; margin-left:5px; width:100%">
<p>Not following. If you have a replica count of 3, you can lose two boxes in that group and still have access to all your data. It's more like a 3-way RAID-1 than anything like RAID-6.</p>
<p>On 3/29/12 11:00 AM, Pascal wrote:</p>
<blockquote type="cite" style="padding-left:5px; border-left:#1010ff 2px solid; margin-left:5px; width:100%">
<p>Am Thu, 29 Mar 2012 10:47:38 -0400 schrieb David Coulson &lt;<a href="mailto:david@davidcoulson.net">david@davidcoulson.net</a>&gt;:</p>
<p>Hello David, thanks for your quick reply. I already considered a replica count of 3 (and six servers in total, correct?), but the problem would still be that two hard drives from the same "replica group" must not fail at the same time.</p>
<blockquote type="cite" style="padding-left:5px; border-left:#1010ff 2px solid; margin-left:5px; width:100%">
<p>Try doing a distributed-replica with a replica count of 3. Not really 'RAID-6' comparable, but you can have two nodes fail without an outage. <a href="http://download.gluster.com/pub/gluster/glusterfs/3.2/Documentation/AG/html/sect-Administration_Guide--Setting_Volumes-Distributed_Replicated.html">http://download.gluster.com/pub/gluster/glusterfs/3.2/Documentation/AG/html/sect-Administration_Guide--Setting_Volumes-Distributed_Replicated.html</a></p>
<p>On 3/29/12 10:39 AM, Pascal wrote:</p>
<blockquote type="cite" style="padding-left:5px; border-left:#1010ff 2px solid; margin-left:5px; width:100%">
<p>Hello everyone,</p>
<p>I would like to know if it is possible to set up a GlusterFS installation comparable to a RAID 6. I did some research in the community and several mailing lists, and all I could find were similar requests from 2009 (<a href="http://gluster.org/pipermail/gluster-users/2009-May/002208.html">http://gluster.org/pipermail/gluster-users/2009-May/002208.html</a>, <a href="http://www.gluster.org/community/documentation/index.ph/Talk:GlusterFS_Roadmap_Suggestions">http://www.gluster.org/community/documentation/index.ph/Talk:GlusterFS_Roadmap_Suggestions</a>). I would like a setup where two GlusterFS nodes/servers, respectively their hard drives, could fail at the same time.</p>
<p>Thanks in advance!</p>
<p>Pascal</p>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
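<p>The trade-off discussed in this thread (a 3-way replica group versus RAID-5/6-style configurable redundancy) comes down to simple counting: with plain replication at replica count r, a group of r bricks survives r - 1 failures but stores only 1/r of its raw capacity as data, while an erasure scheme with redundancy 2 over 6 bricks also survives 2 failures yet keeps 4/6 of raw capacity usable. A minimal sketch of that arithmetic; the function names are illustrative and not part of any GlusterFS API:</p>

```python
# Model of the failure-tolerance counting argument from the thread.
# These helpers are illustrative only; GlusterFS exposes no such API.

def tolerated_failures(bricks_per_group: int, redundancy: int) -> int:
    """Bricks in one group that may fail simultaneously without data loss.

    Plain replication with replica count r has redundancy r - 1
    (a 3-way replica survives 2 failures, like 3-way RAID-1).
    In the erasure-coding translator described above, redundancy 1
    behaves like RAID 5 and redundancy 2 like RAID 6.
    """
    assert 0 < redundancy < bricks_per_group
    return redundancy

def usable_fraction(bricks_per_group: int, redundancy: int) -> float:
    """Fraction of raw group capacity that holds data, not redundancy."""
    assert 0 < redundancy < bricks_per_group
    return (bricks_per_group - redundancy) / bricks_per_group

# 3-way replication: survives 2 failures, but only 1/3 usable capacity.
print(tolerated_failures(3, 2), usable_fraction(3, 2))
# RAID-6-like coding over 6 bricks: also survives 2 failures,
# with 4/6 of the raw capacity usable.
print(tolerated_failures(6, 2), usable_fraction(6, 2))
```

<p>Both layouts tolerate two simultaneous failures per group; the erasure-coded layout simply pays less capacity for it, which is the point of the proposed translator.</p>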
<pre>

_______________________________________________
Gluster-devel mailing list
<a href="mailto:Gluster-devel@nongnu.org">Gluster-devel@nongnu.org</a>
<a href="https://lists.nongnu.org/mailman/listinfo/gluster-devel">https://lists.nongnu.org/mailman/listinfo/gluster-devel</a>
</pre>
</blockquote>
</body></html>