<div dir="ltr">Joe,<div><br></div><div>Perhaps a typo</div><div><br></div>&quot;&quot;&quot;So first we move server1:/data/brick2 to server3:/data/brick1&quot;&quot;&quot; - <a href="http://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/">http://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/</a><div>
On Sun, Sep 8, 2013 at 12:34 PM, Joe Julian <joe@julianfamily.org> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000"><div><div class="h5">
> On 09/05/2013 02:16 AM, Anup Nair wrote:
>> On Thu, Sep 5, 2013 at 12:41 AM, Vijay Bellur <vbellur@redhat.com> wrote:
>>> On 09/03/2013 01:18 PM, Anup Nair wrote:
              <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                <div>
                  <div>
                    Glusterfs version 3.2.2<br>
                    <br>
                    I have a Gluster volume in which one our of the 4
                    peers/nodes had<br>
                    crashed some time ago, prior to my joining service
                    here.<br>
                    <br>
                    I see from volume info that the crashed
                    (non-existing) node is still<br>
                    listed as one of the peers and the bricks are also
                    listed. I would like<br>
                    to detach this node and its bricks and rebalance the
                    volume with<br>
                    remaining 3 peers. But I am unable to do so. Here
                    are my setps:<br>
                    <br>
                    1. #gluster peer status<br>
                       Number of Peers: 3 -- (note: excluding the one I
                    run this command from)<br>
                    <br>
                       Hostname: dbstore4r294 --- (note: node/peer that
                    is down)<br>
                       Uuid: 8bf13458-1222-452c-81d3-565a563d768a<br>
                       State: Peer in Cluster (Disconnected)<br>
                    <br>
                       Hostname: 172.16.1.90<br>
                       Uuid: 77ebd7e4-7960-4442-a4a4-00c5b99a61b4<br>
                       State: Peer in Cluster (Connected)<br>
                    <br>
                       Hostname: dbstore3r294<br>
                       Uuid: 23d7a18c-fe57-47a0-afbc-1e1a5305c0eb<br>
                       State: Peer in Cluster (Connected)<br>
                    <br>
                    2. #gluster peer detach dbstore4r294<br>
                       Brick(s) with the peer dbstore4r294 exist in
                    cluster<br>
                    <br>
                    3. #gluster volume info<br>
                    <br>
                       Volume Name: test-volume<br>
                       Type: Distributed-Replicate<br>
                       Status: Started<br>
                       Number of Bricks: 4 x 2 = 8<br>
                       Transport-type: tcp<br>
                       Bricks:<br>
                       Brick1: dbstore1r293:/datastore1<br>
                       Brick2: dbstore2r293:/datastore1<br>
                       Brick3: dbstore3r294:/datastore1<br>
                       Brick4: dbstore4r294:/datastore1<br>
                       Brick5: dbstore1r293:/datastore2<br>
                       Brick6: dbstore2r293:/datastore2<br>
                       Brick7: dbstore3r294:/datastore2<br>
                       Brick8: dbstore4r294:/datastore2<br>
                       Options Reconfigured:<br>
                       network.ping-timeout: 42s<br>
                       performance.cache-size: 64MB<br>
                       performance.write-behind-window-size: 3MB<br>
                       performance.io-thread-count: 8<br>
                       performance.cache-refresh-timeout: 2<br>
                    <br>
                    Note that the non-existent node/peer is  --
                    dbstore4r294 (bricks are<br>
                    :/datastore1 &amp; /datastore2  - i.e.  brick4 and
                    brick8)<br>
                    <br>
                    4. #gluster volume remove-brick test-volume
                    dbstore4r294:/datastore1<br>
                       Removing brick(s) can result in data loss. Do you
                    want to Continue?<br>
                    (y/n) y<br>
                       Remove brick incorrect brick count of 1 for
                    replica 2<br>
                    <br>
                    5. #gluster volume remove-brick test-volume
                    dbstore4r294:/datastore1<br>
                    dbstore4r294:/datastore2<br>
                       Removing brick(s) can result in data loss. Do you
                    want to Continue?<br>
                    (y/n) y<br>
                       Bricks not from same subvol for replica<br>
                    <br>
                    How do I remove the peer? What are the steps
                    considering that the node<br>
                    is non-existent?<br>
                  </div>
                </div>
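(An aside on steps 4 and 5 above: with replica 2, remove-brick only accepts whole
replica sets, and the sets are formed from consecutive bricks in the listing, so
the pairs touching the dead node are Brick3/Brick4 and Brick7/Brick8. A command the
CLI would actually accept has to name a complete pair, e.g.

   #gluster volume remove-brick test-volume dbstore3r294:/datastore1 dbstore4r294:/datastore1

but that drops the healthy dbstore3r294 copy as well, and on 3.2.x remove-brick
does not migrate data first, so this only explains the two error messages; it is
not the fix.)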
>>>
>>> Do you plan to replace the dead server with a new server?
>>> If so, this could be a possible sequence of steps:
>>>
>>
>> No. We are not going to replace it. So, I need to resize it to a 3-node cluster.
>>
>> I discovered the issue when one of the nodes hung and I had to reboot it. I
>> expected the Gluster volume to remain available through a single node failure,
>> but the volume was non-responsive. Surprised at that, I checked the details and
>> found it had been running with one node missing for many months, perhaps a year!
>>
>> I have no node to replace it with. So, I am looking for a method by which I can
>> resize it.
>>
>
> The problem is that you want to do a replica 2 volume with an odd number of
> servers. This can be done, but it requires that you think of bricks
> individually rather than tying sets of bricks to servers. Your goal is simply
> to have each pair of replica bricks on two unique servers.
>
> See http://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
> for an example.
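(Applied to the volume above, that would mean giving each of the dead node's
bricks a new home on whichever surviving server does not already hold its
replica partner. The /datastore3 paths are just made-up examples, and
"replace-brick ... commit force" plus the heal command assume a release newer
than 3.2.2, so treat this as the shape of the fix rather than a tested recipe:

   #gluster volume replace-brick test-volume dbstore4r294:/datastore1 dbstore1r293:/datastore3 commit force
   #gluster volume replace-brick test-volume dbstore4r294:/datastore2 dbstore2r293:/datastore3 commit force
   #gluster volume heal test-volume full

After that, every replica pair spans two distinct live servers, which is the
property Joe describes.)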

>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

-- 
Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes