<div dir="ltr">Thanks, Ted.  I&#39;ll try this today.</div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Oct 9, 2014 at 11:11 AM, Ted Miller <span dir="ltr">&lt;<a href="mailto:tmiller@hcjb.org" target="_blank">tmiller@hcjb.org</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000"><span class="">
    On 10/7/2014 1:56 PM, Ryan Nix wrote:<br>
    <blockquote type="cite">
      
      <div dir="ltr">Hello,
        <div><br>
        </div>
        <div>I seem to have hosed my installation while trying to
          replace a failed brick.  The instructions for replacing a
          brick with a different host name/IP are no longer available
          on the Gluster site, so I used the instructions from the Red
          Hat Storage class I attended last week, which assume the
          replacement has the same host name.</div>
        <div><br>
        </div>
        <div><a href="http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/" target="_blank">http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/</a><br>
        </div>
        <div><br>
        </div>
        <div>It seems the working brick (I had two servers with simple
          replication only) will not release the DNS entry of the failed
          brick.</div>
        <div><br>
        </div>
        <div>Is there any way to simply reset Gluster completely?</div>
      </div>
    </blockquote></span>
    The simplest way to &quot;reset gluster completely&quot; is to delete the
    volume and start over.  Sometimes this is the quickest way,
    especially if you only have one or two volumes.<br>
    <br>
    Deleting the volume only removes the volume definition; as long as
    nothing else has changed, the data on the bricks is untouched.<br>
    <br>
    You can either:<br>
    Find and follow the instructions to delete the &quot;markers&quot;
    (extended attributes) that glusterfs puts on the brick, in which
    case the create process is the same as any new volume creation.<br>
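    The marker cleanup is roughly the procedure from Joe Julian's blog
    post linked below; a sketch, assuming root access and a made-up
    brick path of /export/brick1 (adjust for your layout):<br>

```shell
# Hypothetical brick path -- substitute your own.
BRICK=/export/brick1

# Remove the extended attributes glusterfs sets on the brick root
# (setfattr comes from the attr package; must run as root).
setfattr -x trusted.glusterfs.volume-id "$BRICK"
setfattr -x trusted.gfid "$BRICK"

# Remove glusterfs's internal metadata directory.
rm -rf "$BRICK/.glusterfs"
```

    After this the brick directory should look like a plain directory
    of data again, and &quot;volume create&quot; should accept it.<br>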
    Otherwise, the &quot;volume create...&quot; step will fail with an
    error like &#39;brick is already part of a volume&#39;.  You used to be
    able to override that by appending force to the command line.  (I
    have not needed it lately, so I don&#39;t know if it still works.)<br>
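    A rough sketch of the delete-and-recreate route; the volume name,
    hostnames, and brick paths here are made up for illustration, and
    recent gluster versions take a trailing force keyword rather than
    a --force flag:<br>

```shell
# Stop and delete the old volume definition.  The gluster CLI will
# prompt for confirmation; data on the bricks is left in place.
gluster volume stop myvol
gluster volume delete myvol

# Recreate the volume over the same bricks.  If create complains that
# a brick is already part of a volume, append "force".
gluster volume create myvol replica 2 \
    server1:/export/brick1 server2:/export/brick1 force
gluster volume start myvol
```

    Once the volume starts, a &quot;gluster volume heal myvol full&quot; can
    kick off resyncing the replicas.<br>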
    <br>
    Hope this helps,<br>
    Ted Miller<br>
    Elkhart, IN<br>
    <blockquote type="cite"><span class="">
      <div dir="ltr">
        <div>  </div>
        <div><br>
        </div>
        <div>Just to confirm: if I delete the volume so I can start
          over, the data on the bricks will not be deleted.  Is that
          correct?  Finally, once the volume is deleted, do I have to
          do what Joe Julian recommended here? <a href="http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/" target="_blank">http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/</a></div>
        <div><br>
        </div>
        <div>Thanks for any insights.</div>
        <div><br>
        </div>
        <div>- Ryan</div>
      </div>
      <br>
      <fieldset></fieldset>
      <br>
      </span><pre>_______________________________________________
Gluster-users mailing list
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a></pre>
    </blockquote>
    <br>
  </div>

</blockquote></div><br></div>