<html><head><style type='text/css'>p { margin: 0; }</style></head><body><div style='font-family: Arial; font-size: 10pt; color: #000000'><div>Is it normal to expect very high server load, and for clients to be unable to access the mounts, during this process? It means the application running on this volume will need to be offline for hours.<br><br><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"Ravishankar N" &lt;ravishankar@redhat.com&gt;<br><b>To: </b>"Alun James" &lt;ajames@tibus.com&gt;<br><b>Cc: </b>gluster-users@gluster.org<br><b>Sent: </b>Wednesday, 8 January, 2014 2:37:05 PM<br><b>Subject: </b>Re: [Gluster-users] delete brick / format / add empty brick<br><br>
  
    
  
  
    <div class="moz-cite-prefix">On 01/08/2014 05:57 PM, Alun James
      wrote:<br>
    </div>
    <blockquote cite="mid:11dcc6d3-9af8-48f6-b9ce-3a3cd89a0b12@bossk">
      <style>p { margin: 0; }</style>
      <div style="font-family: Arial; font-size: 10pt; color: #000000"><font size="2">I have given this a go.</font>
        <div style="font-size: 10pt;"><br>
        </div>
        <div style="font-size: 10pt;"><i style="font-size: small;">gluster
            volume add-brick myvol replica 2 server02:/brick1&nbsp;</i></div>
        <div><i style="font-size: small;">gluster volume heal myvol full</i></div>
        <div>
          <div style="font-size: 10pt;"><br>
          </div>
          <div style="font-size: 10pt;">It seems to be syncing the files,
            but very slowly. The server load on server01 has also risen
            to 200+, and the gluster clients are no longer able to access
            the mounts. Is there a less disruptive way to do this? Could
            I manually rsync the bricks before adding the second node
            back in?</div>
          <div style="font-size: 10pt;"><br>
          </div>
        </div>
      </div>
    </blockquote>
    The recommended way to heal is the command you mentioned: the
    gluster self-heal daemon takes the appropriate file locks before
    healing. Since clients are accessing the volume, bypassing that
    locking and rsyncing the bricks directly is not a good idea. <br>
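    <br>
    As an illustrative sketch (the self-heal tuning options below vary
    between GlusterFS versions, so please verify them against `gluster
    volume set help` on your build), you can watch heal progress and dial
    back its aggressiveness while clients stay mounted:<br>
    <pre># list files still pending heal on each brick
gluster volume heal myvol info

# reduce concurrent background heals and use the lighter diff algorithm
# (example values only -- tune for your workload)
gluster volume set myvol cluster.background-self-heal-count 1
gluster volume set myvol cluster.data-self-heal-algorithm diff</pre>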
    <br>
    Regards,<br>
    Ravi<br>
    <br>
    <br>
    <blockquote cite="mid:11dcc6d3-9af8-48f6-b9ce-3a3cd89a0b12@bossk">
      <div style="font-family: Arial; font-size: 10pt; color: #000000">
        <div>
          <div style="font-size: 10pt;"><br>
          </div>
          <div style="font-size: 10pt;">Alun.</div>
          <div style="font-size: 10pt;"><br>
            <hr id="zwchr">
            <div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From:
              </b>"Ravishankar N" <a class="moz-txt-link-rfc2396E" href="mailto:ravishankar@redhat.com" target="_blank">&lt;ravishankar@redhat.com&gt;</a><br>
              <b>To: </b>"Alun James" <a class="moz-txt-link-rfc2396E" href="mailto:ajames@tibus.com" target="_blank">&lt;ajames@tibus.com&gt;</a><br>
              <b>Cc: </b><a class="moz-txt-link-abbreviated" href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
              <b>Sent: </b>Wednesday, 8 January, 2014 4:00:44 AM<br>
              <b>Subject: </b>Re: [Gluster-users] delete brick / format
              / add empty brick<br>
              <br>
              <div class="moz-cite-prefix">On 01/07/2014 09:40 PM, Alun
                James wrote:<br>
              </div>
              <blockquote cite="mid:2d85c83b-17cc-4c02-b73a-ab09266ed554@bossk">
                <style>p { margin: 0; }</style>
                <div style="font-family: Arial; font-size: 10pt; color:
                  #000000"><font face="Arial" size="2">Hi folks,</font>
                  <div style="color: rgb(0, 0, 0); font-family: Arial;
                    font-size: 10pt;"><br>
                  </div>
                  <div><font face="Arial" size="2">I had a 2-node (1
                      brick each) replica. Some network meltdown issues
                      seemed to cause problems with the second node
                      (server02): the glusterfsd process was reaching
                      200-300% CPU, with errors relating to possible
                      split-brain and self-heal failures.</font></div>
                  <div><font face="Arial" size="2"><br>
                    </font></div>
                  <div><font face="Arial" size="2">Original volume info:</font></div>
                  <div><font face="Arial" size="2"><br>
                    </font></div>
                  <div><font face="Arial" size="2"><i>Volume Name: myvol</i></font></div>
                  <div><font face="Arial" size="2">
                      <div><i>Type: Replicate</i></div>
                      <div><i>Status: Started</i></div>
                      <div><i>Number of Bricks: 2</i></div>
                      <div><i>Transport-type: tcp</i></div>
                      <div><i>Bricks:</i></div>
                      <div><i>Brick1: server01:/brick1</i></div>
                      <div><i>Brick2: server02:/brick1</i></div>
                      <div><br>
                      </div>
                    </font></div>
                  <div><span style="font-family: Arial; font-size:
                      small;">I removed the second brick (that was
                      showing server problems).</span></div>
                  <div><font face="Arial" size="2"><br>
                    </font></div>
                  <div><font face="Arial" size="2"><i>gluster volume
                        remove-brick myvol replica 1 server02:/brick1</i></font></div>
                  <div><font face="Arial" size="2"><i><br>
                      </i></font></div>
                  <div><font face="Arial" size="2">Now the volume status
                      is:</font></div>
                  <div><font face="Arial" size="2"><br>
                    </font></div>
                  <div><font face="Arial" size="2">
                      <div><i>Volume Name: myvol</i></div>
                      <div><i>Type: Distribute</i></div>
                      <div><i>Status: Started</i></div>
                      <div><i>Number of Bricks: 1</i></div>
                      <div><i>Transport-type: tcp</i></div>
                      <div><i>Bricks:</i></div>
                      <div><i>Brick1: server01:/brick1</i></div>
                      <div><i><br>
                        </i></div>
                      <div>All is fine and the data on working server is
                        sound.</div>
                      <div><br>
                      </div>
                      <div>The xfs partition for&nbsp;<i>server02:/brick1</i>&nbsp;has
                        been formatted, so the data is gone. All other
                        gluster config data has remained untouched. Can I
                        re-add the second server to the volume with an
                        empty brick, and will the data auto-replicate
                        over from the working server?</div>
                      <div><br>
                      </div>
                      <div><i>gluster volume add-brick myvol replica 2
                          server02:/brick1 ??</i></div>
                    </font><font face="Arial" size="2">
                      <div><br>
                      </div>
                    </font></div>
                </div>
              </blockquote>
              <br>
              <font size="2"><font face="Arial">Yes, this should work
                  fine. You will need to run `gluster volume heal
                  myvol full` to manually trigger the replication.</font></font><br>
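              <br>
              Assuming the volume name myvol used above, a sketch of how
              to confirm the heal has completed before relying on the new
              brick (exact output differs across GlusterFS versions):<br>
              <pre># the entries listed here should drop to zero once healing is done
gluster volume heal myvol info

# confirm both bricks and the self-heal daemon are online
gluster volume status myvol</pre>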
              <br>
              <blockquote cite="mid:2d85c83b-17cc-4c02-b73a-ab09266ed554@bossk">
                <div style="font-family: Arial; font-size: 10pt; color:
                  #000000"><br>
                  <div style="color: rgb(0, 0, 0); font-family: Arial;
                    font-size: 10pt;"><br>
                    <br>
                    <div><span></span><font face="arial, helvetica,
                        sans-serif" size="2">ALUN JAMES<br>
                        <font>Senior Systems Engineer</font><br>
                        <font>Tibus</font><br>
                        <br>
                        <font>T: +44 (0)28 9033 1122</font><br>
                        <font>E: <a class="moz-txt-link-abbreviated" href="mailto:ajames@tibus.com" target="_blank">ajames@tibus.com</a></font><br>
                        <font>W: </font><a href="http://www.tibus.com" target="_blank">www.tibus.com</a><br>
                        <br>
                        <font>Follow us on Twitter </font><a href="http://twitter.com/intent/user?screen_name=tibus" target="_blank">@tibus</a><br>
                        <br>
                        <font>Tibus is a trading name of The Internet
                          Business Ltd, a company limited by share
                          capital and registered in Northern Ireland,
                          NI31325. It is part of UTV Media Plc.</font><br>
                        <br>
                        <font>This email and any attachment may contain
                          confidential information for the sole use of
                          the intended recipient. Any review, use,
                          distribution or disclosure by others is
                          strictly prohibited. If you are not the
                          intended recipient (or authorised to receive
                          for the recipient), please contact the sender
                          by reply email and delete all copies of this
                          message. &nbsp; &nbsp; </font></font><span></span><br>
                    </div>
                  </div>
                </div>
                <br>
                <fieldset class="mimeAttachmentHeader"></fieldset>
                <br>
                <pre>_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a></pre>
              </blockquote>
              <br>
            </div>
            <br>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
  

</div><br></div></div></body></html>