<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">Hi Gandalf,<br>
      <br>
      Can you run the following command on the brick path? <br>
      <br>
      "getfattr -d -e hex -m . /datastore" on both the "nas-01-data" and
      "nas-02-data" nodes. <br>
      <br>
      This will tell us whether the "trusted.glusterfs.volume-id"
      extended attribute is set. <br>
      <br>
      -Shwetha<br>
      <br>
      On 11/26/2013 07:36 PM, gandalf istari wrote:<br>
    </div>
    <blockquote
cite="mid:CAFMZTixON=7OkQxyU+QV-jspvLx35BnHSP9mg7BZ_4zzq3pKbQ@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="" lang="x-western">
          <div class="">hi thanks for the quick answer.</div>
          <div class=""><br>
          </div>
          <div class="">I'm running glusterfs 3.4.1</div>
          <div class=""><br>
          </div>
          <div class="">[root@nas-02 datastore]# gluster volume start
            datastore1 force<br>
          </div>
          <div class="">
            <p class="">volume start: datastore1: failed: Failed to get
              extended attribute trusted.glusterfs.volume-id for brick
              dir /datastore. Reason : No data available</p>
            <p class="">It seems that the .gluster directory is missing
              for some reason.</p>
            <p class=""><br>
            </p>
            <pre style="white-space:pre-wrap;color:rgb(0,0,0)">volume replace-brick datastore1 nas-01-data:/datastore <span style="font-family:arial">nas-02-data:/datastore</span>
commit force</pre>
            <pre style="white-space:pre-wrap;color:rgb(0,0,0)">
</pre>
            <pre style="white-space:pre-wrap;color:rgb(0,0,0)">To rebuild/replace the missing brick ?</pre>
            <pre style="white-space:pre-wrap;color:rgb(0,0,0)">I'm quite new with glusterfs</pre>
            <pre style="white-space:pre-wrap;color:rgb(0,0,0)">
</pre>
            <pre style="white-space:pre-wrap;color:rgb(0,0,0)">Thanks </pre>
            <p class=""><br>
            </p>
            <p class=""><br>
            </p>
          </div>
          <div class=""><br>
          </div>
          <div class=""><br>
          </div>
          <div class=""><br>
          </div>
          <div class="">On 26/11/13 12:47, gandalf istari wrote:<br>
          </div>
          <blockquote
cite="mid:CAFMZTiwYXv3V69+hryNwRfo=-xmsxwJeHC2XRhXRDgSiVcY7fA@mail.gmail.com"
            type="cite">
            <div dir="ltr">Hi have setup a two node replication
              glusterfs. After the initial installation the "master"
              node was put into the datacenter and after two week we
              moved the second one also to the datacenter.
              <div><br>
              </div>
              <div>But the sync has not started yet.</div>
              <div><br>
              </div>
              <div>On the "master"</div>
              <div>
                <p class="">gluster&gt; volume info all&nbsp;</p>
                <p class="">Volume Name: datastore1</p>
                <p class="">Type: Replicate</p>
                <p class="">Volume ID:
                  fdff5190-85ef-4cba-9056-a6bbbd8d6863</p>
                <p class="">Status: Started</p>
                <p class="">Number of Bricks: 1 x 2 = 2</p>
                <p class="">Transport-type: tcp</p>
                <p class="">Bricks:</p>
                <p class="">Brick1: nas-01-data:/datastore</p>
                <p class="">Brick2: nas-02-data:/datastore</p>
                <p class="">gluster&gt; peer status</p>
                <p class="">Number of Peers: 1</p>
                <p class=""><br>
                </p>
                <p class="">Hostname: nas-02-data</p>
                <p class="">Uuid: 71df9f86-a87b-481d-896c-c0d4ab679cfa</p>
                <p class=""> </p>
                <p class="">State: Peer in Cluster (Connected)</p>
                <p class=""><br>
                </p>
                <p class="">On the "slave"</p>
                <p class="">gluster&gt; peer status</p>
                <p class="">Number of Peers: 1</p>
                <p class="">Hostname: 192.168.70.6<br>
                </p>
                <p class="">Uuid: 97ef0154-ad7b-402a-b0cb-22be09134a3c</p>
                <p class=""> </p>
                <p class="">State: Peer in Cluster (Connected)</p>
                <p class=""><br>
                </p>
                <p class="">gluster&gt; volume status all</p>
                <p class="">Status of volume: datastore1</p>
                <p class="">Gluster process<span class=""> </span><span
                    class=""> </span><span class=""> </span><span
                    class=""> </span><span class=""> </span><span
                    class=""> </span>Port<span class=""> </span>Online<span
                    class=""> </span>Pid</p>
                <p class="">------------------------------------------------------------------------------</p>
                <p class="">Brick nas-01-data:/datastore<span class="">
                  </span><span class=""> </span><span class=""> </span><span
                    class=""> </span>49152<span class=""> </span>Y<span
                    class=""> </span>2130</p>
                <p class="">Brick nas-02-data:/datastore<span class="">
                  </span><span class=""> </span><span class=""> </span><span
                    class=""> </span>N/A<span class=""> </span>N<span
                    class=""> </span>N/A</p>
                <p class="">NFS Server on localhost<span class=""> </span><span
                    class=""> </span><span class=""> </span><span
                    class=""> </span><span class=""> </span>2049<span
                    class=""> </span>Y<span class=""> </span>8064</p>
                <p class="">Self-heal Daemon on localhost<span class="">
                  </span><span class=""> </span><span class=""> </span><span
                    class=""> </span>N/A<span class=""> </span>Y<span
                    class=""> </span>8073</p>
                <p class="">NFS Server on 192.168.70.6<span class=""> </span><span
                    class=""> </span><span class=""> </span><span
                    class=""> </span>2049<span class=""> </span>Y<span
                    class=""> </span>3379</p>
                <p class="">Self-heal Daemon on 192.168.70.6<span
                    class=""> </span><span class=""> </span><span
                    class=""> </span>N/A<span class=""> </span>Y<span
                    class=""> </span>3384</p>
              </div>
            </div>
          </blockquote>
          Which version of glusterfs are you running?<br>
          <br>
          The volume status output suggests that the second brick
          (nas-02-data:/datastore) is not running. <br>
          <br>
          Can you run "gluster volume start &lt;volname&gt; force" on
          either of the two nodes and try again? <br>
          Then you would also need to run `find . | xargs stat` on the
          mountpoint of the volume. That should trigger the self-heal.<br>
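          <br>
          For example (assuming the volume is mounted at /mnt/datastore1;
          any glusterfs client mount of the volume will do):<br>
          <pre># mount the volume from either node, then walk every file
# so that each one is looked up and self-healed
mount -t glusterfs nas-01-data:/datastore1 /mnt/datastore1
cd /mnt/datastore1
find . | xargs stat &gt; /dev/null</pre>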
          <blockquote
cite="mid:CAFMZTiwYXv3V69+hryNwRfo=-xmsxwJeHC2XRhXRDgSiVcY7fA@mail.gmail.com"
            type="cite">
            <div dir="ltr">
              <div>
                <p class="">&nbsp;</p>
                <p class=""> </p>
                <p class="">There are no active volume tasks</p>
                <p class=""><br>
                </p>
                <p class="">I would like to run on the "slave" gluster
                  volume sync nas-01-data datastore1</p>
              </div>
            </div>
          </blockquote>
          BTW, there is no concept of "master" and "slave" in afr
          (replication). However, there is a concept of "master volume" and
          "slave volume" in gluster geo-replication.<br>
          <blockquote
cite="mid:CAFMZTiwYXv3V69+hryNwRfo=-xmsxwJeHC2XRhXRDgSiVcY7fA@mail.gmail.com"
            type="cite">
            <div dir="ltr">
              <div>
                <p class="">But then the virtual machines hosted will be
                  unavailible is there another way to start the
                  replication ?</p>
                <p class=""><br>
                </p>
                <p class="">Thanks</p>
                <p class=""><br>
                </p>
                <p class=""><br>
                </p>
                <p class=""><br>
                </p>
              </div>
            </div>
          </blockquote>
          <br>
        </div>
      </div>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://supercolony.gluster.org/mailman/listinfo/gluster-users">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a></pre>
    </blockquote>
    <br>
    <br>
  </body>
</html>