OK - I figured it out (at least for version 1.3, which ships with Ubuntu): as long as the files exist (even at zero length) on the new volume, it will start to sync them. I'll move to the latest release and try out a few more scenarios.
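In practice that means recreating the directory tree and zero-length placeholder files on the replaced export before letting AFR sync them on access. A rough sketch of that step, assuming /home/export1 is a surviving export and /home/export3 is the freshly replaced one (both paths are assumptions, not from the original mail):

  # Assumed brick paths; recreate directories and empty placeholder files
  # on the new export so AFR has something to sync into.
  (cd /home/export1 && find . -type d -print0) | (cd /home/export3 && xargs -0 mkdir -p)
  (cd /home/export1 && find . -type f -print0) | (cd /home/export3 && xargs -0 touch)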
(That's good enough for me - I'm not looking for perfect, just something reasonable.)

-Adrian

On Sun, May 3, 2009 at 12:48 AM, Adrian Terranova <aterranova@gmail.com> wrote:
Crap - just realized I cut-and-pasted the server config twice (sorry about that). Here is the client side:

volume client3
 type protocol/client
 option transport-type tcp/client # for TCP/IP transport
 option remote-host 127.0.0.1 # IP address of the remote brick
 option remote-port 6998 # default server port is 6996
 option remote-subvolume brick3 # name of the remote volume
end-volume

## Add AFR (Automatic File Replication) feature.
volume afr
 type cluster/afr
 subvolumes client3 client1 client2
# option replicate *:3
end-volume

[snip]
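For completeness, the full client spec presumably pairs that with client1 and client2 volumes pointing at the other two server ports. The following is only a sketch - the client1 and client2 blocks are assumptions inferred from the listen-ports and brick names in the server spec further down, not copied from the original mail:

volume client1
 type protocol/client
 option transport-type tcp/client
 option remote-host 127.0.0.1
 option remote-port 6996 # assumed: matches brick1's listen-port below
 option remote-subvolume brick1
end-volume

volume client2
 type protocol/client
 option transport-type tcp/client
 option remote-host 127.0.0.1
 option remote-port 6997 # assumed: matches brick2's listen-port below
 option remote-subvolume brick2
end-volume

volume client3
 type protocol/client
 option transport-type tcp/client
 option remote-host 127.0.0.1
 option remote-port 6998
 option remote-subvolume brick3
end-volume

volume afr
 type cluster/afr
 subvolumes client3 client1 client2
end-volume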
On Sun, May 3, 2009 at 12:42 AM, Adrian Terranova <aterranova@gmail.com> wrote:

Hello all,

I've set up AFR and am very impressed with the product. However, when I delete /home/export1 and /home/export2, what needs to happen for auto-heal to occur? (I would like to understand this in some detail before using it for my home directory data - mostly I'm just trying to work out the procedure for adding or replacing a volume. I tried remounting the client and restarting the server, along with a couple of find variations, and none seemed to work.) Is this an artifact of my single-host setup, or something else?

New files do show up, but the existing files and directories don't seem to come back when I read them.

How would I get my files back onto replaced subvolumes?

--Adrian
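For reference, the usual way to force a heal on AFR of this vintage is to read every entry through the client mount so the replicate translator examines each one. A rough sketch, with /mnt/glusterfs as an assumed mount point:

  # Assumed mount point; walk the whole tree through the AFR mount.
  # Stat'ing entries heals directories and metadata, reading a byte heals file contents.
  find /mnt/glusterfs -print0 | xargs -0 stat > /dev/null
  find /mnt/glusterfs -type f -print0 | xargs -0 -n1 head -c1 > /dev/null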
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

[snip] server
peril@mythbuntufe-desktop:/etc/glusterfs$ grep -v \^# glusterfs-server.vol |more

volume posix1
 type storage/posix # POSIX FS translator
 option directory /home/export1 # Export this directory
end-volume

volume brick1
 type features/posix-locks
 option mandatory on # enables mandatory locking on all files
 subvolumes posix1
end-volume

volume server
 type protocol/server
 option transport-type tcp/server # For TCP/IP transport
 option listen-port 6996 # Default is 6996
 subvolumes brick1
 option auth.ip.brick1.allow * # Allow access to "brick" volume
end-volume

volume posix2
 type storage/posix # POSIX FS translator
 option directory /home/export2 # Export this directory
end-volume

volume brick2
 type features/posix-locks
 option mandatory on # enables mandatory locking on all files
 subvolumes posix2
end-volume

volume server
 type protocol/server
 option transport-type tcp/server # For TCP/IP transport
 option listen-port 6997 # Default is 6996
 subvolumes brick2
 option auth.ip.brick2.allow * # Allow access to "brick" volume
end-volume

volume posix3
 type storage/posix # POSIX FS translator
 option directory /home/export3 # Export this directory
end-volume

volume brick3
 type features/posix-locks
 option mandatory on # enables mandatory locking on all files
 subvolumes posix3
end-volume

volume server
 type protocol/server
 option transport-type tcp/server # For TCP/IP transport
 option listen-port 6998 # Default is 6996
 subvolumes brick3
 option auth.ip.brick3.allow * # Allow access to "brick" volume
end-volume
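With all three exports served from that one spec file, the server side is presumably started along these lines; the spec-file path matches the grep above, but the exact invocation is an assumption for a 1.3-era install:

  # Assumed invocation for the 1.3-era server daemon;
  # start glusterfsd with the server spec file shown above.
  glusterfsd -f /etc/glusterfs/glusterfs-server.vol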
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

[snip] client
peril@mythbuntufe-desktop:/etc/glusterfs$ grep -v \^# glusterfs-server.vol |more

volume posix1
 type storage/posix # POSIX FS translator
 option directory /home/export1 # Export this directory
end-volume

volume brick1
 type features/posix-locks
 option mandatory on # enables mandatory locking on all files
 subvolumes posix1
end-volume

volume server
 type protocol/server
 option transport-type tcp/server # For TCP/IP transport
 option listen-port 6996 # Default is 6996
 subvolumes brick1
 option auth.ip.brick1.allow * # Allow access to "brick" volume
end-volume

volume posix2
 type storage/posix # POSIX FS translator
 option directory /home/export2 # Export this directory
end-volume

volume brick2
 type features/posix-locks
 option mandatory on # enables mandatory locking on all files
 subvolumes posix2
end-volume

volume server
 type protocol/server
 option transport-type tcp/server # For TCP/IP transport
 option listen-port 6997 # Default is 6996
 subvolumes brick2
 option auth.ip.brick2.allow * # Allow access to "brick" volume
end-volume

volume posix3
 type storage/posix # POSIX FS translator
 option directory /home/export3 # Export this directory
end-volume

volume brick3
 type features/posix-locks
 option mandatory on # enables mandatory locking on all files
 subvolumes posix3
end-volume

volume server
 type protocol/server
 option transport-type tcp/server # For TCP/IP transport
 option listen-port 6998 # Default is 6996
 subvolumes brick3
 option auth.ip.brick3.allow * # Allow access to "brick" volume
end-volume
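Assuming the corrected client spec from the later reply is saved as /etc/glusterfs/glusterfs-client.vol (that name and the mount point are assumptions), the AFR mount on the client would then look roughly like:

  # Assumed spec-file name and mount point; mount the AFR volume on the client.
  glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs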