I don't know from which version onwards, but if you use the native client for mounting the volumes, the IP only needs to be reachable at mount time. After that, the native client transparently handles node failures.<br>
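As a sketch of what that looks like in practice (hostnames, volume name, and mount point below are placeholders, not from this thread; the backup-server option name can vary between GlusterFS versions):<br>

```shell
# Mount via the GlusterFS native (FUSE) client. Only the named server
# needs to be reachable at mount time -- the client then fetches the
# volume file, learns the full brick topology, and fails over between
# replicas on its own.
mount -t glusterfs server1:/myvolume /mnt/gluster

# Optionally name a fallback server to fetch the volume file from,
# in case server1 happens to be down at mount time:
mount -t glusterfs -o backupvolfile-server=server2 \
    server1:/myvolume /mnt/gluster
```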
<br>Best regards,<br>Samuel.<br><br><div class="gmail_quote">On 18 July 2011 13:14, Marcel Pennewiß <span dir="ltr"><<a href="mailto:mailinglists@pennewiss.de">mailinglists@pennewiss.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="im">On Monday 18 July 2011 12:10:36 Uwe Weiss wrote:<br>
> My second node is 192.168.50.2. But in the Filesystem RA I have referenced<br>
> to 192.168.50.1 (see above). During my first test node1 was up and running,<br>
> but what happens if node1 is completely away and the address is<br>
> inaccessible?<br>
<br>
</div>We're using a replicated setup, and both nodes share an IPv4/IPv6 address (via<br>
pacemaker) which is used for accessing/mounting the glusterfs share and the nfs share<br>
(from the backup server).<br>
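[A floating address like that is typically a pacemaker IPaddr2 resource. A minimal sketch using the crm shell, with a hypothetical address standing in for the shared one mentioned above:]<br>

```shell
# Hypothetical pacemaker configuration: a floating IPv4 address that
# moves to whichever node is alive; clients mount the gluster/NFS
# share via this address rather than a fixed node IP.
crm configure primitive p_vip ocf:heartbeat:IPaddr2 \
    params ip=192.168.50.10 cidr_netmask=24 \
    op monitor interval=10s
```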
<div><div></div><div class="h5"><br>
Marcel<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://gluster.org/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://gluster.org/cgi-bin/mailman/listinfo/gluster-users</a><br>
</div></div></blockquote></div><br>