Hello all.<br><br>I have run into a problem with replication.<br>I have two servers (192.168.0.62 and 192.168.0.37) and I want to create one replicated volume.<br>I reviewed the documentation and set up the following:<br><br>---glusterfsd.vol---<br>
volume posix<br> type storage/posix<br> option directory /var/share<br>end-volume<br> <br>volume locks<br> type features/locks<br> subvolumes posix<br>end-volume<br> <br>volume brick<br> type performance/io-threads<br>
option thread-count 16<br> subvolumes locks<br>end-volume<br> <br>volume server<br> type protocol/server<br> option transport-type tcp<br> option auth.addr.brick.allow *<br> option auth.addr.brick-ns.allow *<br> subvolumes brick<br>
end-volume<br>-----------------------<br><br>and<br><br>---glusterfs.vol---<br>volume remote1<br> type protocol/client<br> option transport-type tcp<br> option remote-host 192.168.0.62<br> option remote-subvolume brick<br>
end-volume<br><br>volume remote2<br> type protocol/client<br> option transport-type tcp<br> option remote-host 192.168.0.37<br> option remote-subvolume brick<br>end-volume<br><br>volume replicate<br> type cluster/replicate<br>
subvolumes remote1 remote2<br>end-volume<br><br>volume writebehind<br> type performance/write-behind<br> option aggregate-size 128KB<br> option window-size 1MB<br> subvolumes replicate<br>end-volume<br><br>volume cache<br>
type performance/io-cache<br> option cache-size 512MB<br> subvolumes writebehind<br>end-volume<br>--------------<br><br>The documentation says that the replicate translator works like RAID 1, but that does not seem to be true.<br>When both servers are up, everything works fine. But when the second server goes down (e.g. due to a lost network connection), I run into a problem.<br>
If I delete a file on the first server while the second server is down, the deleted file is recreated on the first server once the second server comes back up!<br>As you can see, this is not RAID 1 behaviour.<br>Please help me: can GlusterFS work as true RAID 1 or not?<br>
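For reference, this is roughly the sequence that reproduces the problem. It is a sketch only: the volfile paths under /etc/glusterfs and the mount point /mnt/gluster are placeholders I am using here, not the exact paths from my setup.<br>

```shell
# On each server: start the server-side daemon with the glusterfsd.vol above
# (the /etc/glusterfs path is a placeholder, not from the actual setup)
glusterfsd -f /etc/glusterfs/glusterfsd.vol

# On the client: mount the replicated volume using glusterfs.vol
glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/gluster

# With both servers up, create a file; it is written to both bricks
touch /mnt/gluster/testfile

# Take 192.168.0.37 down (e.g. disconnect its network), then delete the file
rm /mnt/gluster/testfile

# Bring 192.168.0.37 back up and list the directory (listing triggers
# self-heal); the deleted file reappears on 192.168.0.62
ls -l /mnt/gluster
```
<br>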
<br>With best wishes,<br>Victor<br>