<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#ffffff" text="#000000">
    hi siga hiro,<br>
    &nbsp;&nbsp; I see the following warning:<br>
    [2011-08-24 11:36:04.695145] W
    [afr-common.c:656:afr_lookup_self_heal_check]
    0-syncdata-replicate-0: /testdata: gfid different on subvolume<br>
    <br>
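    That warning means the gfid extended attribute of /testdata does not agree
    across the two bricks. If you want to see the mismatch directly, something
    like the following should show it (a sketch; it assumes the bricks are
    still at /home/syncdata as in your volume info, and that getfattr is
    installed on both servers):<br>
    <pre>
# run on each of 172.23.0.1 and 172.23.0.2, against the brick path (not the mount)
getfattr -d -m . -e hex /home/syncdata/testdata
# compare the trusted.gfid value printed on each machine; differing values
# correspond to the "gfid different on subvolume" warning above
</pre>
    <br>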
    I also see that you have more than one mount on the volume. Most
    probably you are running into one of the following bugs:<br>
    1) <a class="moz-txt-link-freetext" href="http://bugs.gluster.com/show_bug.cgi?id=2921">http://bugs.gluster.com/show_bug.cgi?id=2921</a> (most likely this)<br>
    2) <a class="moz-txt-link-freetext" href="http://bugs.gluster.com/show_bug.cgi?id=2745">http://bugs.gluster.com/show_bug.cgi?id=2745</a><br>
    <br>
    If it is not bug 2745, you can confirm that it is bug 2921 by checking
    whether the md5sums of the affected files match on both machines,
    172.23.0.1 and 172.23.0.2.<br>
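    For example (just a sketch; FILE below is a placeholder for one of the
    files under /home/syncdata/testdata on the bricks):<br>
    <pre>
# run the same command on 172.23.0.1 and on 172.23.0.2, against the brick path
md5sum /home/syncdata/testdata/FILE
# identical checksums on both bricks, while the client still returns errors,
# would point to bug 2921 rather than real data divergence
</pre>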
    <br>
    pranith.<br>
    <br>
    On 08/24/2011 11:48 AM, siga hiro wrote:
    <blockquote
cite="mid:CAPnqno+y0NPjvUJVwu3H5K9xnwXOEedfqp2xKFW05TYtwcW9jQ@mail.gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=ISO-8859-1">
      <p>Hi, everyone.<br>
        It's nice meeting you.<br>
        Please excuse my poor English.</p>
      <p>I am writing because I'd like to update GlusterFS to 3.2.2-1,
        and I want to change from the gluster mount to an NFS mount.</p>
      <p>I installed GlusterFS 3.2.1 one week ago and set up a replicated
        volume across 2 servers.</p>
      <p>OS: CentOS 5.5 64-bit<br>
        RPM: glusterfs-core-3.2.1-1<br>
        &nbsp;&nbsp;&nbsp; glusterfs-fuse-3.2.1-1</p>
      <p>Volume create command:<br>
        &nbsp;gluster volume create syncdata replica 2&nbsp; transport tcp
        172.23.0.1:/home/syncdata 172.23.0.2:/home/syncdata</p>
      <p>Mount commands:<br>
        &nbsp;172.23.0.1 -&gt; mount -t glusterfs -o tcp,soft,timeo=3
        172.23.0.1:/syncdata /syncdata<br>
        &nbsp;172.23.0.2 -&gt; mount -t glusterfs -o tcp,soft,timeo=3
        172.23.0.2:/syncdata /syncdata</p>
      <p>So, yesterday I updated GlusterFS to 3.2.2-1 and switched to an NFS
        mount.<br>
        &nbsp;172.23.0.2 -&gt; mount -t nfs&nbsp; -o
        nolock,nfsvers=3,tcp,hard,intr 172.23.0.2:/syncdata /syncdata</p>
      <p>[<a moz-do-not-send="true" href="mailto:root@172.23.0.2">root@172.23.0.2</a>
        /]# ls -al /syncdata/testdata/<br>
        ls: reading directory /syncdata/testdata/: Input/output error</p>
      <div>/var/log/glusterfs/nfs.log<br>
        [2011-08-24 11:35:16.319379] I
        [client-handshake.c:1082:select_server_supported_programs]
        0-syncdata-client-1: Using Program GlusterFS-3.1.0, Num
        (1298437), Version (310)<br>
        [2011-08-24 11:35:16.322126] I
        [client-handshake.c:913:client_setvolume_cbk]
        0-syncdata-client-1: Connected to <a moz-do-not-send="true"
          href="http://172.23.0.2:24009">172.23.0.2:24009</a>, attached
        to remote volume '/home/syncdata'.<br>
        [2011-08-24 11:35:16.322191] I [afr-common.c:2611:afr_notify]
        0-syncdata-replicate-0: Subvolume 'syncdata-client-1' came back
        up; going online.<br>
        [2011-08-24 11:35:16.323281] I
        [client-handshake.c:1082:select_server_supported_programs]
        0-syncdata-client-0: Using Program GlusterFS-3.1.0, Num
        (1298437), Version (310)<br>
        [2011-08-24 11:35:16.324274] I
        [client-handshake.c:913:client_setvolume_cbk]
        0-syncdata-client-0: Connected to <a moz-do-not-send="true"
          href="http://172.23.0.1:24009">172.23.0.1:24009</a>, attached
        to remote volume '/home/syncdata'.<br>
        [2011-08-24 11:35:16.324801] I
        [afr-common.c:912:afr_fresh_lookup_cbk] 0-syncdata-replicate-0:
        added root inode<br>
        [2011-08-24 11:36:04.695145] W
        [afr-common.c:656:afr_lookup_self_heal_check]
        0-syncdata-replicate-0: /testdata: gfid different on subvolume<br>
        [2011-08-24 11:36:04.696121] I
        [client3_1-fops.c:411:client3_1_stat_cbk] 0-syncdata-client-0:
        remote operation failed: No such file or directory<br>
        [2011-08-24 11:36:04.697121] I
        [client3_1-fops.c:1099:client3_1_access_cbk]
        0-syncdata-client-0: remote operation failed: No such file or
        directory<br>
        [2011-08-24 11:36:04.698118] I
        [client3_1-fops.c:2132:client3_1_opendir_cbk]
        0-syncdata-client-0: remote operation failed: No such file or
        directory<br>
        [2011-08-24 11:36:04.698140] W
        [client3_1-fops.c:5136:client3_1_readdir] 0-syncdata-client-0:
        (689897478): failed to get fd ctx. EBADFD<br>
        [2011-08-24 11:36:04.698155] W
        [client3_1-fops.c:5201:client3_1_readdir] 0-syncdata-client-0:
        failed to send the fop: File descriptor in bad state<br>
        [2011-08-24 11:36:04.698168] I
        [afr-dir-read.c:120:afr_examine_dir_readdir_cbk]
        0-syncdata-replicate-0: /fastask: failed to do opendir on
        syncdata-client-0</div>
      <p><br>
        # gluster volume info all</p>
      <div>Volume Name: syncdata<br>
        Type: Replicate<br>
        Status: Started<br>
        Number of Bricks: 2<br>
        Transport-type: tcp<br>
        Bricks:<br>
        Brick1: 172.23.0.1:/home/syncdata<br>
        Brick2: 172.23.0.2:/home/syncdata</div>
      <div>&nbsp;</div>
      <div>&nbsp;</div>
      <div>
        <div>Once the 172.23.0.2 server is working normally again, I want to
          do the same work on the 172.23.0.1 server.</div>
        <p>Any ideas?<br>
        </p>
      </div>
    </blockquote>
    <br>
  </body>
</html>