<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">The developers prefer not to
      delete anything that could possibly cause data loss, so they
      leave that cleanup for us to do manually.<br>
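      For anyone landing on this thread later: the manual cleanup in
      Joe's post boils down to stripping the volume markers from each
      brick. A minimal sketch, assuming a brick at the hypothetical
      path /data/brick1 (substitute your own path, run as root on each
      brick server):<br>

```shell
# Remove the hidden bookkeeping directory glusterfs keeps on the brick
rm -rf /data/brick1/.glusterfs

# Remove the extended attributes that mark the directory
# as belonging to a volume
setfattr -x trusted.glusterfs.volume-id /data/brick1
setfattr -x trusted.gfid /data/brick1
```

      After that, the brick should be usable in a fresh "volume
      create" without the "already part of a volume" error.<br>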
      <br>
      On 10/10/2014 5:53 AM, Ryan Nix wrote:<br>
    </div>
    <blockquote
cite="mid:CAOpa8knEwCDB7CMMRjQWwn61by6ZDh7gP=dwaXtDQMefUthxUg@mail.gmail.com"
      type="cite">
      <div dir="ltr">So I had to force the volume to stop.&nbsp; It seems the
        replace-brick operation was hung up, and no matter what I did
        (restarting the gluster daemon, etc.), it wouldn't work.&nbsp; I also
        did a yum erase gluster*, removed the gluster directory in
        /var/lib, and reinstalled.&nbsp; Once I had done that, I followed Joe's
        instructions&nbsp;<a moz-do-not-send="true"
href="http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/"
          target="_blank">http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/</a>&nbsp;and
        was able to recreate the volume.
        <div><br>
        </div>
        <div>When you delete a volume in Gluster, is the .glusterfs
          directory supposed to be automatically removed?&nbsp; If not, will
          future versions of Gluster do that?&nbsp; Seems kind of silly that
          you have to go through Joe's instructions, which are 2.5 years
          old now.</div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Thu, Oct 9, 2014 at 11:11 AM, Ted
          Miller <span dir="ltr">&lt;<a moz-do-not-send="true"
              href="mailto:tmiller@hcjb.org" target="_blank">tmiller@hcjb.org</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000"><span class=""> On
                10/7/2014 1:56 PM, Ryan Nix wrote:<br>
                <blockquote type="cite">
                  <div dir="ltr">Hello,
                    <div><br>
                    </div>
                    <div>I seem to have hosed my installation while
                      trying to replace a failed brick.&nbsp; The
                      instructions on the Gluster site for replacing a
                      brick with a different host name/IP are no
                      longer available, so I used the instructions from
                      the Red Hat Storage class that I attended last
                      week, which assumed the replacement had the same
                      host name.</div>
                    <div><br>
                    </div>
                    <div><a moz-do-not-send="true"
href="http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/"
                        target="_blank">http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/</a><br>
                    </div>
                    <div><br>
                    </div>
                    <div>It seems the working brick (I had two servers
                      with simple replication only) will not release the
                      DNS entry of the failed brick.</div>
                    <div><br>
                    </div>
                    <div>Is there any way to simply reset Gluster
                      completely?</div>
                  </div>
                </blockquote>
              </span> The simple way to "reset gluster completely" would
              be to delete the volume and start over.&nbsp; Sometimes this is
              the quickest way, especially if you only have one or two
              volumes.<br>
              <br>
              If nothing has changed, deleting the volume will not
              affect the data on the brick.<br>
              <br>
              You can do one of two things:<br>
              Find and follow the instructions to delete the "markers"
              that glusterfs puts on the brick, in which case the create
              process should be the same as any new volume creation.<br>
              Otherwise, the "volume create..." step will give you an
              error, something like 'brick already in use'.&nbsp; You used
              to be able to override that by adding --force to the
              command line.&nbsp; (I haven't needed it lately, so I don't
              know if it still works.)<br>
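              A sketch of that second path, assuming recent gluster CLI
              syntax, where the override is spelled as a trailing force
              keyword on the command rather than a --force flag; the
              volume, host, and brick names below are placeholders:<br>

```shell
# Recreate a two-way replicated volume over the existing bricks,
# overriding the 'brick already in use' check with the trailing
# 'force' keyword.  Names and paths here are placeholders.
gluster volume create myvol replica 2 \
    server1:/data/brick1 server2:/data/brick1 force
gluster volume start myvol
```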
              <br>
              Hope this helps<br>
              Ted Miller<br>
              Elkhart, IN<br>
              <blockquote type="cite"><span class="">
                  <div dir="ltr">
                    <div><br>
                    </div>
                    <div>Just to confirm: if I delete the volume so I
                      can start over, the data will not be
                      deleted.&nbsp; Is that correct?&nbsp; Finally, once
                      the volume is deleted, do I have to do what Joe
                      Julian recommended here?&nbsp;<a moz-do-not-send="true"
href="http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/"
                        target="_blank">http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/</a></div>
                    <div><br>
                    </div>
                    <div>Thanks for any insights.</div>
                    <br>
                  </div>
                </span></blockquote>
            </div>
          </blockquote>
        </div>
      </div>
    </blockquote>
    <br>
  </body>
</html>