Hey Brian,<br><br>Thanks for the info, I may have misspelled them. It was actually afr and unify. I wasn't spelling very well this morning. :-)<br><br>I have been following different steps I found, and you're probably right. I plan on rebuilding these systems tonight, just using the gluster commands to put the peers and volumes together, and then running more tests. I will give strace a try during the failovers to see what might be happening.<br>
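For reference, this is roughly the CLI-only setup I have in mind; a minimal sketch, assuming a two-node replica (the hostnames, volume name, and brick path below are placeholders, not taken from this thread):

```shell
# Hypothetical two-node replicated setup.
# From the first node, add the second node to the trusted pool:
gluster peer probe gluster2
gluster peer status

# Create and start a two-way replicated volume
# (brick path /export/brick1 is a placeholder):
gluster volume create testvol replica 2 \
    gluster1:/export/brick1 gluster2:/export/brick1
gluster volume start testvol
gluster volume info testvol
```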
<br>I appreciate everything.<br><br>Joe<br><br><div class="gmail_quote">On Wed, Mar 14, 2012 at 4:17 PM, Brian Candler <span dir="ltr"><<a href="mailto:B.Candler@pobox.com">B.Candler@pobox.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On Wed, Mar 14, 2012 at 03:13:29PM -0400, Joseph Hardeman wrote:<br>
> Thank you for responding. So you aren't using the vol files in<br>
> /etc/glusterfs to control anything, such as afra or unity?<br>
<br>
</div>Nope - indeed I have no idea what afra or unity are (and googling for<br>
"gluster afra unity" doesn't match anything useful)<br>
<br>
I have used the CLI utils as per the documentation, and everything "just<br>
works". There are three files in /etc/glusterfs/ but they have not<br>
changed:<br>
<br>
$ ls -l /etc/glusterfs/<br>
total 12<br>
-rw-r--r-- 1 root root 229 2011-11-18 07:00 glusterd.vol<br>
-rw-r--r-- 1 root root 1908 2011-11-18 07:00 glusterfsd.vol.sample<br>
-rw-r--r-- 1 root root 2005 2011-11-18 07:00 glusterfs.vol.sample<br>
<br>
All the config changes are instead reflected under /etc/glusterd/<br>
<br>
$ ls /etc/glusterd/<br>
geo-replication <a href="http://glusterd.info" target="_blank">glusterd.info</a> nfs peers vols<br>
<br>
I see there's lots of *old* documentation for gluster <=2.x which talks<br>
about doing things manually, but the new documentation has been seriously<br>
dumbed down and doesn't even mention the module stacking configuration<br>
files.<br>
<div class="im"><br>
> I am just<br>
> asking because after building my own rpms and installing them, I was<br>
> able to build like I did before and I didn't see the high CPU usage.<br>
> Now the weird thing I saw was that during a test failover, when<br>
> stopping/starting glusterd on the first of the pair, I did see high CPU and<br>
> the VMs hung.<br>
<br>
</div>Attaching strace to the gluster processes might give you an idea what's<br>
happening?<br>
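For example, something along these lines; a sketch assuming you locate the daemon's PID with pgrep (process name and output path are illustrative):

```shell
# Attach strace to a running glusterfsd process during the failover.
# -f follows forked threads, -tt adds microsecond timestamps,
# -o writes the trace to a file for later inspection.
strace -f -tt -o /tmp/glusterfsd.trace -p "$(pgrep -o glusterfsd)"
```

Detach with Ctrl-C once the failover test is done, then look through the trace for syscalls that block or loop.<br>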
<br>
Regards,<br>
<br>
Brian.<br>
</blockquote></div><br>