Hi ...,

I have a couple of queries on replacing completely failed Gluster servers/nodes. For reference:
http://community.gluster.org/q/how-do-you-replace-a-gluster-node/
and
http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/

1. If Gluster is configured in 'distribute' mode, with no replication, what happens if a node goes down completely while a write is in progress? What are the chances of corruption?

2. Say I have 2 nodes in the Gluster storage pool, each with an external disk array. Can I set up a 3rd node as a failover for either of the first 2 by restoring a backup of '/etc/glusterd' and renaming the 3rd node to that of the failed node? The 3rd (failover) node will have direct access to the disk arrays of both the 1st and 2nd nodes and will serve the same sub-volumes as the failed node.

Will this configuration work?
If it is possible, I would like to automate the process using scripts + heartbeat/ucarp; a rough sketch of what I have in mind follows.
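Something along these lines is what I picture for the takeover step (just a rough sketch, not a tested procedure; the backup paths, node names and init-script location are placeholders I made up, and I am assuming ucarp floats the service IP on its own):

#!/usr/bin/env python3
# Rough sketch only -- paths, hostnames and commands below are my
# assumptions, not a tested procedure.
import shutil
import subprocess
import sys

# Hypothetical layout: tarballs of each node's /etc/glusterd kept
# up to date on the failover node.
GLUSTERD_BACKUPS = {
    "node1": "/backups/node1-etc-glusterd.tar.gz",
    "node2": "/backups/node2-etc-glusterd.tar.gz",
}

def take_over(failed_node: str) -> None:
    """Impersonate the failed node on this standby machine."""
    backup = GLUSTERD_BACKUPS[failed_node]

    # 1. Stop glusterd while the configuration is swapped in.
    subprocess.run(["/etc/init.d/glusterd", "stop"], check=True)

    # 2. Replace /etc/glusterd with the failed node's backed-up config.
    shutil.rmtree("/etc/glusterd", ignore_errors=True)
    subprocess.run(["tar", "-xzf", backup, "-C", "/etc"], check=True)

    # 3. Take on the failed node's name so peers and clients still
    #    reach the same host (ucarp would float the IP separately).
    subprocess.run(["hostname", failed_node], check=True)

    # 4. Bring glusterd back up serving the failed node's sub-volumes
    #    from the shared disk array.
    subprocess.run(["/etc/init.d/glusterd", "start"], check=True)

if __name__ == "__main__":
    take_over(sys.argv[1])  # e.g. invoked by heartbeat with "node1"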
Regards,

Indivar Nair