<div dir="ltr">Hello,<div><br></div><div>I seem to have hosed my installation while trying to replace a failed brick. The instructions on the Gluster site for replacing a brick with a different host name/IP are no longer available, so I used the instructions from the Red Hat Storage class I attended last week, which assume the replacement has the same host name.</div><div><br></div><div><a href="http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/">http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/</a><br></div><div><br></div><div>It seems the working server (I had two servers with simple replication only) will not release the DNS entry of the failed brick.</div><div><br></div><div>Is there any way to simply reset Gluster completely?</div><div><br></div><div>Just to confirm: if I delete the volume so I can start over, the data on the bricks will remain intact. Is this correct? Finally, once the volume is deleted, do I have to do what Joe Julian recommended here? <a href="http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/">http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/</a></div><div><br></div><div>Thanks for any insights.</div><div><br></div><div>- Ryan</div></div>
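<div><br></div><div>P.S. For anyone answering: my reading of that blog post is that the fix is to strip the old volume's markers off the brick path before reusing it. Here is how I understand it, sketched against a throwaway temp directory rather than a real brick (the real markers live in the trusted.* xattr namespace, so the setfattr lines need root on the actual brick path, and the brick path itself would be whatever yours is; /data/brick1 below is just a made-up example):</div><div><br></div>

```shell
# Stand-in demo on a temp directory; on a real brick these markers live
# in the trusted.* xattr namespace and removing them requires root.
BRICK=$(mktemp -d)   # substitute your real brick path, e.g. /data/brick1

# Simulate the leftover internal directory an old volume leaves behind.
mkdir -p "$BRICK/.glusterfs"

# The actual commands from the blog post, as I understand them
# (commented out here because they need root and a real brick):
#   setfattr -x trusted.glusterfs.volume-id "$BRICK"
#   setfattr -x trusted.gfid "$BRICK"

# Remove the internal .glusterfs directory left by the old volume.
rm -rf "$BRICK/.glusterfs"
[ -d "$BRICK/.glusterfs" ] || echo "markers cleared"

rmdir "$BRICK"       # clean up the demo directory
```

<div>If that is right, the brick path should then be accepted by a fresh volume create without the "or a prefix of it is already part of a volume" error.</div>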