<html>
  <head>
    <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">Hi Raghavendra and Ben,<br>
      thanks for your answers.<br>
      The volume is a backend for nova instances in our OpenStack
      infrastructure and, as Raghavendra wrote, it does not just seem so:
      I am sure the compute node kept writing to the gluster volume after
      a potential network problem. However, our monitoring system did not
      detect any network problem, so if there was one it must have lasted
      only a short while.<br>
      So the timeline could be:<br>
      - nova writes/reads to/from volume volume-nova-pp<br>
      - network problem lasting about 1 second<br>
      - the network problem is reported in the gluster log (first part of
      the log):<br>
      <br>
      [2014-10-10 07:29:43.730792] W [socket.c:522:__socket_rwv]
      0-glusterfs: readv on <a moz-do-not-send="true"
        href="http://192.168.61.100:24007" target="_blank">192.168.61.100:24007</a>
      failed (No data available)<br>
      [2014-10-10 07:29:54.022608] E
      [socket.c:2161:socket_connect_finish] 0-glusterfs: connection to <a
        moz-do-not-send="true" href="http://192.168.61.100:24007"
        target="_blank">192.168.61.100:24007</a> failed (Connection
      refused)<br>
      [2014-10-10 07:30:05.271825] W
      [client-rpc-fops.c:866:client3_3_writev_cbk]
      0-volume-nova-pp-client-0: remote operation failed: Input/output
      error<br>
      [2014-10-10 07:30:08.783145] W
      [fuse-bridge.c:2201:fuse_writev_cbk] 0-glusterfs-fuse: 3661260:
      WRITE =&gt; -1 (Input/output error)<br>
      <br>
      - nova writes/reads to/from volume volume-nova-pp<br>
      - second part of the log: millions of lines like this:<br>
      [2014-10-15 14:41:15.895105] W
      [fuse-bridge.c:2201:fuse_writev_cbk] 0-glusterfs-fuse: 951700230:
      WRITE =&gt; -1 (Transport endpoint is not connected)<br>
      <br>
      For Ben:<br>
      I'm using gluster 3.5.2, not gluster 3.6. Should I try gluster 3.6?<br>
      <br>
      It would be a very good thing if gluster had an option to rate-limit
      a particular logging call, either per unit of time or when the log
      size exceeds a preset limit.<br>
      <br>
      I think in this particular case the WARNING should be written once
      per minute after the first 1000 similar lines.<br>
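      <br>
      Just to illustrate the idea, here is a minimal sketch in plain C (it
      is not the actual gluster logging code; should_log(), DUP_THRESHOLD
      and DUP_INTERVAL_S are names I made up) of the policy I have in
      mind: write the first 1000 identical messages, then at most one line
      per minute together with a repeat count.<br>
      <pre>
/* Minimal sketch only -- plain C, NOT the real gluster logging code.
 * should_log(), DUP_THRESHOLD and DUP_INTERVAL_S are invented names,
 * used here to illustrate the suppression policy. */

#include &lt;stdio.h&gt;
#include &lt;string.h&gt;
#include &lt;time.h&gt;

#define DUP_THRESHOLD  1000   /* log the first 1000 duplicates verbatim  */
#define DUP_INTERVAL_S 60     /* afterwards, at most one line per minute */

struct dup_state {
    char   last_msg[256];     /* last message that was logged            */
    long   count;             /* how many times it has repeated          */
    time_t last_emit;         /* when we last wrote it to the log        */
};

/* Returns 1 if the message should be written to the log, 0 to drop it. */
static int should_log(struct dup_state *st, const char *msg)
{
    time_t now = time(NULL);

    if (strncmp(st-&gt;last_msg, msg, sizeof(st-&gt;last_msg)) != 0) {
        /* a different message: log it and reset the duplicate counter */
        snprintf(st-&gt;last_msg, sizeof(st-&gt;last_msg), "%s", msg);
        st-&gt;count = 1;
        st-&gt;last_emit = now;
        return 1;
    }

    st-&gt;count++;
    if (st-&gt;count &lt;= DUP_THRESHOLD)
        return 1;

    if (now - st-&gt;last_emit &gt;= DUP_INTERVAL_S) {
        /* once per minute, report how often the message repeated */
        fprintf(stderr, "last message repeated %ld times\n", st-&gt;count);
        st-&gt;last_emit = now;
        return 1;
    }
    return 0;
}
      </pre>
      A check like this in front of the logging call (for example before
      the warning in fuse_writev_cbk) could keep the log from growing to
      tens of GB while the mount stays disconnected.<br>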
      <br>
      Cheers<br>
      Sergio<br>
      <br>
      On 10/27/2014 05:32 PM, Raghavendra G wrote:<br>
    </div>
    <blockquote
cite="mid:CADRNtgRft_eEX2Ds1kL9EUPhzf6iTr6O0xtSv4UnnevmQ2mVGQ@mail.gmail.com"
      type="cite">
      <div dir="ltr">Seems like there were on-going write operations. On
        errors we log and network disconnect has resulted in these logs.<br>
        <div>
          <div>
            <div class="gmail_extra"><br>
              <div class="gmail_quote">On Mon, Oct 27, 2014 at 7:21 PM,
                Sergio Traldi <span dir="ltr">&lt;<a
                    moz-do-not-send="true"
                    href="mailto:sergio.traldi@pd.infn.it"
                    target="_blank">sergio.traldi@pd.infn.it</a>&gt;</span>
                wrote:<br>
                <blockquote class="gmail_quote" style="margin:0 0 0
                  .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi
                  all,<br>
                  One server running Red Hat 6 with this rpm set:<br>
                  <br>
                  [ ~]# rpm -qa | grep gluster | sort<br>
                  glusterfs-3.5.2-1.el6.x86_64<br>
                  glusterfs-api-3.5.2-1.el6.x86_64<br>
                  glusterfs-cli-3.5.2-1.el6.x86_64<br>
                  glusterfs-fuse-3.5.2-1.el6.x86_64<br>
                  glusterfs-geo-replication-3.5.2-1.el6.x86_64<br>
                  glusterfs-libs-3.5.2-1.el6.x86_64<br>
                  glusterfs-server-3.5.2-1.el6.x86_64<br>
                  <br>
                  I have a gluster volume with 1 server and 1 brick:<br>
                  <br>
                  [ ~]# gluster volume info volume-nova-pp<br>
                  Volume Name: volume-nova-pp<br>
                  Type: Distribute<br>
                  Volume ID: b5ec289b-9a54-4df1-9c21-52ca556aeead<br>
                  Status: Started<br>
                  Number of Bricks: 1<br>
                  Transport-type: tcp<br>
                  Bricks:<br>
                  Brick1: 192.168.61.100:/brick-nova-pp/mpathc<br>
                  Options Reconfigured:<br>
                  storage.owner-gid: 162<br>
                  storage.owner-uid: 162<br>
                  <br>
                  There are four clients attached to this volume, with the
                  same O.S. and the same gluster fuse rpm set:<br>
                  [ ~]# rpm -qa | grep gluster | sort<br>
                  glusterfs-3.5.0-2.el6.x86_64<br>
                  glusterfs-api-3.5.0-2.el6.x86_64<br>
                  glusterfs-fuse-3.5.0-2.el6.x86_64<br>
                  glusterfs-libs-3.5.0-2.el6.x86_64<br>
                  <br>
                  Last week (but it also happened two weeks ago) I found
                  the disk almost full: the gluster log
                  /var/log/glusterfs/var-lib-nova-instances.log had grown
                  to 68GB.<br>
                  The log starts with the initial problem:<br>
                  <br>
                  [2014-10-10 07:29:43.730792] W
                  [socket.c:522:__socket_rwv] 0-glusterfs: readv on <a
                    moz-do-not-send="true"
                    href="http://192.168.61.100:24007" target="_blank">192.168.61.100:24007</a>
                  failed (No data available)<br>
                  [2014-10-10 07:29:54.022608] E
                  [socket.c:2161:socket_connect_finish] 0-glusterfs:
                  connection to <a moz-do-not-send="true"
                    href="http://192.168.61.100:24007" target="_blank">192.168.61.100:24007</a>
                  failed (Connection refused)<br>
                  [2014-10-10 07:30:05.271825] W [client-rpc-fops.c:866:client3_3_writev_cbk]
                  0-volume-nova-pp-client-0: remote operation failed:
                  Input/output error<br>
                  [2014-10-10 07:30:08.783145] W
                  [fuse-bridge.c:2201:fuse_writev_cbk] 0-glusterfs-fuse:
                  3661260: WRITE =&gt; -1 (Input/output error)<br>
                  [2014-10-10 07:30:08.783368] W
                  [fuse-bridge.c:2201:fuse_writev_cbk] 0-glusterfs-fuse:
                  3661262: WRITE =&gt; -1 (Input/output error)<br>
                  [2014-10-10 07:30:08.806553] W
                  [fuse-bridge.c:2201:fuse_writev_cbk] 0-glusterfs-fuse:
                  3661649: WRITE =&gt; -1 (Input/output error)<br>
                  [2014-10-10 07:30:08.844415] W
                  [fuse-bridge.c:2201:fuse_writev_cbk] 0-glusterfs-fuse:
                  3662235: WRITE =&gt; -1 (Input/output error)<br>
                  <br>
                  and a lot of these lines:<br>
                  <br>
                  [2014-10-15 14:41:15.895105] W
                  [fuse-bridge.c:2201:fuse_writev_cbk] 0-glusterfs-fuse:
                  951700230: WRITE =&gt; -1 (Transport endpoint is not
                  connected)<br>
                  [2014-10-15 14:41:15.896205] W
                  [fuse-bridge.c:2201:fuse_writev_cbk] 0-glusterfs-fuse:
                  951700232: WRITE =&gt; -1 (Transport endpoint is not
                  connected)<br>
                  <br>
                  This second line log with different "sector" number
                  has been written every millisecond so in about 1
                  minute we have 1GB write in O.S. disk.<br>
                  <br>
                  I searched for a solution but did not find anybody
                  having the same problem.<br>
                  <br>
                  I think there was a network problem, but why does
                  gluster write millions of these lines in the logs:<br>
                  [2014-10-15 14:41:15.895105] W
                  [fuse-bridge.c:2201:fuse_writev_cbk] 0-glusterfs-fuse:
                  951700230: WRITE =&gt; -1 (Transport endpoint is not
                  connected) ?<br>
                  <br>
                  Thanks in advance.<br>
                  Cheers<br>
                  Sergio<br>
                  _______________________________________________<br>
                  Gluster-devel mailing list<br>
                  <a moz-do-not-send="true"
                    href="mailto:Gluster-devel@gluster.org"
                    target="_blank">Gluster-devel@gluster.org</a><br>
                  <a moz-do-not-send="true"
                    href="http://supercolony.gluster.org/mailman/listinfo/gluster-devel"
                    target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-devel</a><br>
                </blockquote>
              </div>
              <br>
              <br clear="all">
              <br>
              -- <br>
              Raghavendra G<br>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
  </body>
</html>