Thanks for your response, Nathan.<br><br>
<div class="gmail_quote"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div class="Ih2E3d"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
3. Use Gluster for redundancy instead of RAID. It would be nice if I can<br>
lose any single hard drive and/or entire server and still have access to<br>
100% of all the data in the pool. In this sort of setup, is it possible to<br>
limit the number of copies of data to 2 or 3, or if I have 10 machines, will<br>
I be forced to have 10 copies of the data?<br>
</blockquote>
<br></div>
Sure, you can do almost anything you want. There is no RAID-like functionality today, but that is on the roadmap. We use "replicate" on pairs of servers and then unify them together with "distribute".<br>
<br>
An example of our 4 node test cluster is:<br>
<br>
<a href="http://share.robotics.net/glusterfs.vol" target="_blank">http://share.robotics.net/glusterfs.vol</a><br>
<a href="http://share.robotics.net/glusterfsd.vol" target="_blank">http://share.robotics.net/glusterfsd.vol</a><div class="Ih2E3d"></div></blockquote><div><br>If you replicate pairs of servers, how come you're still using RAID 6? <br>
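For readers following along, the layout Nathan describes (pairs of servers replicated, then distributed across the pairs) would look roughly like this in a client-side volfile. This is only a sketch: the hostnames and volume names are made up here, and his linked .vol files above show the real configuration.

```
# Hypothetical client volfile: two replicated pairs, distributed together.
volume brick1
  type protocol/client
  option transport-type tcp
  option remote-host server1        # illustrative hostname
  option remote-subvolume brick
end-volume

volume brick2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume brick
end-volume

volume brick3
  type protocol/client
  option transport-type tcp
  option remote-host server3
  option remote-subvolume brick
end-volume

volume brick4
  type protocol/client
  option transport-type tcp
  option remote-host server4
  option remote-subvolume brick
end-volume

# Each pair holds two copies of its files...
volume rep1
  type cluster/replicate
  subvolumes brick1 brick2
end-volume

volume rep2
  type cluster/replicate
  subvolumes brick3 brick4
end-volume

# ...and files are spread across the pairs, so the copy count stays at 2
# regardless of how many pairs you add.
volume dist
  type cluster/distribute
  subvolumes rep1 rep2
end-volume
```

This answers the "10 copies" question above: the copy count is set by how many subvolumes each replicate volume has, not by the total number of machines.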
</div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div class="Ih2E3d"><br>
<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
4. Get good performance. Will i get acceptable performance through gigabit<br>
ethernet, or do I need 10 gigabit ethernet or infiniband to have something<br>
decent? Because I want a configuration where each machine is both a client<br>
and a server, will performance degrade as I add more machines such that the<br>
network needs to handle n^2 connections, where n is the number of servers?<br>
Or will performance improve because data will be striped across a lot of<br>
machines?<br>
</blockquote>
<br></div>
Infiniband is totally worth it; the hardware is low cost (you can even pick it up on eBay, like we did) and has much lower latency than Ethernet.<br>
<br>
Now, as far as "good performance" goes, this is where I am having the most issues with Gluster. To make it work with Xen you need --disable-direct-io-mode when you start up glusterfs. I am not saying this is the best way to test, but with "dd if=/dev/zero of=test bs=1G count=8" we get:<br>
<br>
XFS partition on 3ware: 378 MB/s (not bad for writes!)<br>
Gluster default: 110 MB/s (expected more...)<br>
Gluster --disable-direct-io-mode: 22 MB/s (OUCH!!!)<br>
<br>
The other issue we have is that we have so far only been able to use Xen with file and not tap:aio (it starts, but never finishes the domU boot).<div class="Ih2E3d"></div></blockquote><div><br>I saw the note in the technical FAQ about --disable-direct-io-mode. What does this do, and why is it needed to perform Xen migration?<br>
</div></div>
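One caveat when reproducing the dd numbers above: without a sync option, dd can report the speed of the page cache rather than the storage underneath, so a plain bs=1G run may overstate throughput. A sketch of a more conservative run follows; the glusterfs invocation is shown only as a comment, and the volfile path and mountpoint are illustrative, not taken from Nathan's setup.

```shell
# Starting the GlusterFS client with direct I/O disabled, as the thread says
# is needed for Xen file-backed domUs (paths here are illustrative):
#   glusterfs --disable-direct-io-mode -f /etc/glusterfs/glusterfs.vol /mnt/gluster

# Benchmark writes: conv=fdatasync makes dd flush data to disk before it
# reports a rate, so the figure reflects the storage path rather than the
# page cache. Smaller block size and count than the original for a quick run.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
```

Comparing a run like this on the raw XFS partition against the same run on the Gluster mount would make the 378 vs. 110 vs. 22 MB/s comparison more apples-to-apples.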