    <div class="moz-cite-prefix">On 08/07/2014 03:23 PM, Pranith Kumar
      Karampuri wrote:<br>
    </div>
    <blockquote cite="mid:53E34C95.9080207@redhat.com" type="cite">
      <meta content="text/html; charset=ISO-8859-1"
        http-equiv="Content-Type">
      <br>
      <div class="moz-cite-prefix">On 08/07/2014 03:18 PM, Tiemen Ruiten
        wrote:<br>
      </div>
>> Hello Pranith,
>>
>> Thanks for your reply. I'm using 3.5.2.
>>
>> Is it possible that Windows doesn't release the files after a write
>> happens? I ask because the self-heal often never occurs. Just this
>> morning we discovered that when a web server read from the other node,
>> some files that had been changed days ago still had content from
>> before the edit.
          <div class="gmail_extra">How can I ensure that everything
            syncs reliably and consistently when mounting from SMB? Is
            Samba VFS more reliable in this respect?<br>
          </div>
        </div>
      </blockquote>
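>>
>> (For context: right now Samba exports the FUSE mount. By the VFS route
>> I mean having Samba talk to gluster directly through the vfs_glusterfs
>> module; a minimal sketch of such a share, with our volume name and a
>> log path picked for illustration:
>>
>>     [webroot]
>>         path = /
>>         vfs objects = glusterfs
>>         glusterfs:volume = fl-webroot
>>         glusterfs:logfile = /var/log/samba/glusterfs-fl-webroot.log
>>         kernel share modes = no
>>         read only = no
>>
>> We have not tried this configuration ourselves.)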
> It should happen automatically. Even the mount *must* serve reads from
> the good copy. In what scenario did you observe that reads were served
> from the stale brick?
> Could you give 'getfattr -d -m. -e hex <path-of-file-on-brick>' output
> from both the bricks?

Sorry, I was not clear here: please give the output of that command for
the file where you observed the 'stale read'.
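For example, on each of the two brick servers (the path below is only an
illustration; substitute the on-brick path of the affected file):

    getfattr -d -m. -e hex /export/glu/web/flash/webroot/<path-of-stale-file>

The trusted.afr.fl-webroot-client-* extended attributes in that output
tell us which copy each brick considers good.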

Pranith

> Is it possible to provide self-heal-daemon logs so that we can inspect
> what is happening?
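> (On each node the self-heal daemon logs to
> /var/log/glusterfs/glustershd.log by default.)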
>
> Pranith
>
>>
>> Tiemen
>>
>> On 7 August 2014 03:14, Pranith Kumar Karampuri <pkarampu@redhat.com> wrote:
                <div text="#000000" bgcolor="#FFFFFF"> hi Tiemen,<br>
                  From the logs you have pasted, it doesn't seem there
                  are any split-brains. It is just performing
                  self-heals. What version of glusterfs are you using?
                  Self-heals sometimes don't happen if the data
                  operations from mount are in progress because it tries
                  to give that more priority. Missing files should be
                  created once the self-heal completes on the parent
                  directory of those files.<br>
                  <br>
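>>> If heals stay pending, you can also trigger them manually from any
>>> node:
>>>
>>>     gluster volume heal fl-webroot
>>>
>>> or, to crawl the entire volume rather than only the indexed entries:
>>>
>>>     gluster volume heal fl-webroot full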
>>>
>>> Pranith
>>>
>>> On 08/07/2014 01:40 AM, Tiemen Ruiten wrote:
>>>> Sorry, I seem to have messed up the subject.
>>>>
>>>> I should add, I'm mounting these volumes through GlusterFS FUSE, not
>>>> the Samba VFS plugin.
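>>>> (That is, each node mounts the volume locally, along the lines of
>>>>
>>>>     mount -t glusterfs localhost:/fl-webroot /mnt/fl-webroot
>>>>
>>>> and Samba exports that mount point. The mount target here is
>>>> illustrative.)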
>>>>
>>>> On 06-08-14 21:47, Tiemen Ruiten wrote:
                        <blockquote type="cite">
                          <div dir="ltr">
                            <div>
                              <div>
                                <div>Hello,<br>
                                  <br>
                                  I'm running into some serious problems
                                  with Gluster + CTDB and Samba. What I
                                  have:<br>
                                  <br>
                                </div>
>>>>>
>>>>> A two-node replicated gluster cluster, set up to share volumes over
>>>>> Samba according to this guide:
>>>>> https://download.gluster.org/pub/gluster/glusterfs/doc/Gluster_CTDB_setup.v1.pdf
>>>>>
>>>>> When we edit or copy files into the volume via SMB (from a Windows
>>>>> client accessing a Samba file share), this inevitably leads to a
>>>>> split-brain scenario. For example:
>>>>>
>>>>> gluster> volume heal fl-webroot info
>>>>> Brick ankh.int.rdmedia.com:/export/glu/web/flash/webroot/
>>>>> <gfid:0b162618-e46f-4921-92d0-c0fdb5290bf5>
>>>>> <gfid:a259de7d-69fc-47bd-90e7-06a33b3e6cc8>
>>>>> Number of entries: 2
>>>>>
>>>>> Brick morpork.int.rdmedia.com:/export/glu/web/flash/webroot/
>>>>> /LandingPage_Saturn_Production/images
>>>>> /LandingPage_Saturn_Production
>>>>> /LandingPage_Saturn_Production/Services/v2
>>>>> /LandingPage_Saturn_Production/images/country/be
>>>>> /LandingPage_Saturn_Production/bin
>>>>> /LandingPage_Saturn_Production/Services
>>>>> /LandingPage_Saturn_Production/images/generic
>>>>> /LandingPage_Saturn_Production/aspnet_client/system_web
>>>>> /LandingPage_Saturn_Production/images/country
>>>>> /LandingPage_Saturn_Production/Scripts
>>>>> /LandingPage_Saturn_Production/aspnet_client
>>>>> /LandingPage_Saturn_Production/images/country/fr
>>>>> Number of entries: 12
>>>>>
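>>>>> (For what it's worth, 'gluster volume heal fl-webroot info
>>>>> split-brain' shows only the entries gluster itself flags as
>>>>> split-brained.)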
>>>>>
>>>>> Sometimes self-heal works, sometimes it doesn't:
>>>>>
>>>>> [2014-08-06 19:32:17.986790] E
>>>>> [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status]
>>>>> 0-fl-webroot-replicate-0:  entry self heal  failed,   on
>>>>> /LandingPage_Saturn_Production/Services/v2
>>>>> [2014-08-06 19:32:18.008330] W
>>>>> [client-rpc-fops.c:2772:client3_3_lookup_cbk]
>>>>> 0-fl-webroot-client-0: remote operation failed: No such file or
>>>>> directory. Path: <gfid:a89d7a07-2e3d-41ee-adcc-cb2fba3d2282>
>>>>> (a89d7a07-2e3d-41ee-adcc-cb2fba3d2282)
>>>>> [2014-08-06 19:32:18.024057] I
>>>>> [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status]
>>>>> 0-fl-webroot-replicate-0:  gfid or missing entry self heal  is
>>>>> started, metadata self heal  is successfully completed, backgroung
>>>>> data self heal  is successfully completed,  data self heal from
>>>>> fl-webroot-client-1  to sinks  fl-webroot-client-0, with 0 bytes on
>>>>> fl-webroot-client-0, 168 bytes on fl-webroot-client-1,  data -
>>>>> Pending matrix:  [ [ 0 0 ] [ 1 0 ] ]  metadata self heal from source
>>>>> fl-webroot-client-1 to fl-webroot-client-0,  metadata - Pending
>>>>> matrix:  [ [ 0 0 ] [ 2 0 ] ], on
>>>>> /LandingPage_Saturn_Production/Services/v2/PartnerApiService.asmx
>>>>>
>>>>> More seriously, some files are simply missing on one of the nodes,
>>>>> without any error in the logs or any entry in the output of
>>>>> 'gluster volume heal $volume info'.
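>>>>> (A quick way to catch these is to list the brick directories
>>>>> directly on both nodes and compare, skipping gluster's internal
>>>>> .glusterfs directory; hosts and paths below are from our setup:
>>>>>
>>>>>     ssh ankh.int.rdmedia.com 'cd /export/glu/web/flash/webroot && find . -path ./.glusterfs -prune -o -type f -print | sort' > ankh.list
>>>>>     ssh morpork.int.rdmedia.com 'cd /export/glu/web/flash/webroot && find . -path ./.glusterfs -prune -o -type f -print | sort' > morpork.list
>>>>>     diff ankh.list morpork.list
>>>>>
>>>>> Any line appearing in only one list is a file present on only one
>>>>> brick.)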
>>>>>
>>>>> Of course I can provide any log file necessary.