    <div class="moz-cite-prefix">On 09/25/2013 06:16 AM, Andrew Lau
      wrote:<br>
    </div>
> That's where I found the 200+ entries
>
> [ root@hv01 ] gluster volume heal STORAGE info split-brain
> Gathering Heal info on volume STORAGE has been successful
>
> Brick hv01:/data1
> Number of entries: 271
> at                    path on brick
>
        <div class="gmail_default">
          <div class="gmail_default">
            <font face="tahoma, sans-serif">2013-09-25 00:04:29
              /6682d31f-39ce-4896-99ef-14e1c9682585/dom_md/ids</font></div>
          <div class="gmail_default"><font face="tahoma, sans-serif">2013-09-25
              00:04:29
/6682d31f-39ce-4896-99ef-14e1c9682585/images/5599c7c7-0c25-459a-9d7d-80190a7c739b/0593d351-2ab1-49cd-a9b6-c94c897ebcc7</font></div>
          <div class="gmail_default">
            <font face="tahoma, sans-serif">2013-09-24 23:54:29
              &lt;gfid:9c83f7e4-6982-4477-816b-172e4e640566&gt;</font></div>
          <div class="gmail_default"><font face="tahoma, sans-serif">2013-09-24
              23:54:29 &lt;gfid:91e98909-c217-417b-a3c1-4cf0f2356e14&gt;</font></div>
          <div style="font-family:tahoma,sans-serif">&lt;snip&gt;</div>
          <div style="font-family:tahoma,sans-serif"><br>
          </div>
        </div>
        <div class="gmail_extra">
          <div class="gmail_default"><span
              style="font-family:tahoma,sans-serif"></span><font
              face="tahoma, sans-serif">Brick hv02:/data1</font></div>
          <div class="gmail_default"><font face="tahoma, sans-serif">Number
              of entries: 0</font></div>
          <div><br>
          </div>
          <div>
            <div class="gmail_default"
              style="font-family:tahoma,sans-serif">When I run the same
              command on hv02, it will show the reverse (the other node
              having 0 entries).&nbsp;</div>
            <div class="gmail_default"
              style="font-family:tahoma,sans-serif"><br>
            </div>
            <div class="gmail_default"
              style="font-family:tahoma,sans-serif">I remember last time
              having to delete these files individually on another
              split-brain case, but I was hoping there was a better
              solution than going through 200+ entries.</div>
          </div>
          <div class="gmail_default"
            style="font-family:tahoma,sans-serif"><br>
          </div>
        </div>
      </div>
    </blockquote>

While I haven't tried it out myself, Jeff Darcy has written a script
(https://github.com/jdarcy/glusterfs/tree/heal-script/extras/heal_script)
which helps automate the process. He has detailed its usage in his blog post:
http://hekafs.org/index.php/2012/06/healing-split-brain/

Hope this helps.
-Ravi

> Cheers.
>
        <div class="gmail_extra"><br>
          <div class="gmail_quote">
            On Wed, Sep 25, 2013 at 10:39 AM, Mohit Anchlia <span
              dir="ltr">&lt;<a moz-do-not-send="true"
                href="mailto:mohitanchlia@gmail.com" target="_blank">mohitanchlia@gmail.com</a>&gt;</span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
              <div>What's the output of </div>
              <div>&nbsp;</div>
              <div>
                <div><code>gluster volume heal $VOLUME info </code><code>split</code><code>-brain</code></div>
                <br>
                <br>
              </div>
              <div class="gmail_quote">
                <div>
                  <div class="h5">On Tue, Sep 24, 2013 at 5:33 PM,
                    Andrew Lau <span dir="ltr">&lt;<a
                        moz-do-not-send="true"
                        href="mailto:andrew@andrewklau.com"
                        target="_blank">andrew@andrewklau.com</a>&gt;</span>
                    wrote:<br>
                  </div>
                </div>
                <blockquote style="margin:0px 0px 0px
0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid"
                  class="gmail_quote">
                  <div>
                    <div class="h5">
                      <div dir="ltr">
                        <div>Found the BZ&nbsp;<a moz-do-not-send="true"
                            style="font-family:arial"
                            href="https://bugzilla.redhat.com/show_bug.cgi?id=960190"
                            target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=960190</a>&nbsp;-
                          so I restarted one of the volumes and it seems
                          to have restarted the all daemons again.</div>
                        <div><br>
                        </div>
                        <div>Self heal started again, but I seem to have
                          split-brain issues everywhere. There's over
                          100 different entries on each node, what's the
                          best way to restore this now? Short of having
                          to manually go through and delete 200+ files.
                          It looks like a full split brain as the file
                          sizes on the different nodes are out of
                          balance by about 100GB or so.</div>
                        <div><br>
                        </div>
                        <div>Any suggestions would be much appreciated!</div>
                        <div>
                          <br>
                        </div>
                        <div>Cheers.</div>
                        <div class="gmail_extra"><br>
                          <div class="gmail_quote">On Tue, Sep 24, 2013
                            at 10:32 PM, Andrew Lau <span dir="ltr">&lt;<a
                                moz-do-not-send="true"
                                href="mailto:andrew@andrewklau.com"
                                target="_blank">andrew@andrewklau.com</a>&gt;</span>
                            wrote:<br>
                            <blockquote style="margin:0px 0px 0px
0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid"
                              class="gmail_quote">
                              <div dir="ltr">
                                <div
                                  style="font-family:tahoma,sans-serif">
                                  Hi,</div>
                                <div
                                  style="font-family:tahoma,sans-serif"><br>
                                </div>
                                <div
                                  style="font-family:tahoma,sans-serif">
                                  Right now, I have a 2x1 replica. Ever
                                  since I had to reinstall one of the
                                  gluster servers, there's been issues
                                  with split-brain. The self-heal daemon
                                  doesn't seem to be running on either
                                  of the nodes.</div>
                                <div
                                  style="font-family:tahoma,sans-serif">
                                  <br>
                                </div>
                                <div
                                  style="font-family:tahoma,sans-serif">To
                                  reinstall the gluster server (the
                                  original brick data was intact but the
                                  OS had to be reinstalled)</div>
>>>> - Reinstalled gluster
>>>> - Copied over the old uuid from backup
>>>> - gluster peer probe
>>>> - gluster volume sync $othernode all
>>>> - mount -t glusterfs localhost:STORAGE /mnt
>>>> - find /mnt -noleaf -print0 | xargs --null stat >/dev/null 2>/var/log/glusterfs/mnt-selfheal.log
>>>>
>>>> I let it resync and it was working fine, at least so I thought. I just came back a few days later to see there's a mismatch in the brick volumes. One is 50GB ahead of the other.
>>>>
>>>> # gluster volume heal STORAGE info
>>>> Status: self-heal-daemon is not running on 966456a1-b8a6-4ca8-9da7-d0eb96997cbe
>>>>
>>>> /var/log/glusterfs/glustershd.log doesn't seem to have any recent logs, only those from when the two original gluster servers were running.
>>>>
>>>> # gluster volume status
>>>>
                                <div><font face="tahoma, sans-serif">
                                    <div>Self-heal Daemon on localhost<span
                                        style="white-space:pre-wrap"> </span>N/A<span
                                        style="white-space:pre-wrap"> </span>N<span
                                        style="white-space:pre-wrap"> </span>N/A</div>
                                    <div><br>
                                    </div>
                                    <div>Any suggestions would be much
                                      appreciated!</div>
                                    <div><br>
                                    </div>
                                    <div>Cheers</div>
                                    <span><font color="#888888">
                                        <div>Andrew.</div>
                                      </font></span></font></div>
                              </div>
      <pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://supercolony.gluster.org/mailman/listinfo/gluster-users">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a></pre>
    </blockquote>