<div dir="ltr">Okay,<div>so here are the first results:</div><div><br></div><div>After I disconnected the first server, I got this:</div><div><br></div><div><div>root@stor2:~# gluster volume heal HA-FAST-PVE1-150G info</div>

<div>Volume heal failed</div></div><div><br></div><div><br></div><div>but the logs show:</div><div><div>[2014-08-26 11:45:35.315974] I [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status] 0-HA-FAST-PVE1-150G-replicate-0:  foreground data self heal  is successfully completed,  data self heal from HA-FAST-PVE1-150G-client-1  to sinks  HA-FAST-PVE1-150G-client-0, with 16108814336 bytes on HA-FAST-PVE1-150G-client-0, 16108814336 bytes on HA-FAST-PVE1-150G-client-1,  data - Pending matrix:  [ [ 0 0 ] [ 348 0 ] ]  on &lt;gfid:e3ede9c6-28d6-4755-841a-d8329e42ccc4&gt;</div>

</div><div><br></div><div>Did something go wrong during the upgrade?</div><div><br></div><div>I&#39;ve got two VMs on different volumes: one with the heal daemon on and the other with it off.</div><div>Both survived the outage and both seemed synced.</div>
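For reference, the trusted.afr values seen later in this thread can be decoded by hand; this is a minimal sketch, assuming the usual AFR layout of three big-endian 32-bit counters (data, metadata, entry) in the 12-byte xattr value:

```shell
# Decode a trusted.afr.* xattr value into its pending counters.
# Assumption: AFR v1 layout, 12 bytes = three big-endian 32-bit
# counters for pending data, metadata and entry operations.
xattr=0x000001320000000000000000   # value from a getfattr output in this thread
hex=${xattr#0x}
data=$(echo "$hex" | cut -c1-8)
meta=$(echo "$hex" | cut -c9-16)
entry=$(echo "$hex" | cut -c17-24)
printf 'data=%d metadata=%d entry=%d\n' "0x$data" "0x$meta" "0x$entry"
# -> data=306 metadata=0 entry=0
```

A non-zero data counter in the xattr a brick holds for its peer means writes are still pending against that peer, which is what the self-heal completion message above is reporting.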
<div><br></div><div>But today I found what looks like a bug in log rotation.</div><div><br></div><div>Logs were rotated on both the server and client sides, but entries are still being written to the *.log.1 files :)</div><div><br></div><div>/var/log/glusterfs/mnt-pve-HA-MED-PVE1-1T.log.1<br>
</div><div>/var/log/glusterfs/glustershd.log.1<br></div><div><br></div><div>This behavior appeared after the upgrade.</div><div><br></div><div>The logrotate.d conf files include a HUP for the gluster PIDs.</div><div><br></div><div>client:</div>
<div><div>/var/log/glusterfs/*.log {</div><div>        daily</div><div>        rotate 7</div><div>        delaycompress</div><div>        compress</div><div>        notifempty</div><div>        missingok</div><div>        postrotate</div>
<div>                [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat /var/run/glusterd.pid`</div><div>        endscript</div><div>}</div></div><div><br></div><div>but I can&#39;t find that PID file on the client side (should it be there?) :(</div>
<div><br></div><div>and servers:</div><div><div>/var/log/glusterfs/*.log {</div><div>        daily</div><div>        rotate 7</div><div>        delaycompress</div><div>        compress</div><div>        notifempty</div><div>
        missingok</div><div>        postrotate</div><div>                [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat /var/run/glusterd.pid`</div><div>        endscript</div><div>}</div></div><div><br></div><div><br></div>
<div><div>/var/log/glusterfs/*/*.log {</div><div>        daily</div><div>        rotate 7</div><div>        delaycompress</div><div>        compress</div><div>        notifempty</div><div>        missingok</div><div>        copytruncate</div>
<div>        postrotate</div><div>                [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat /var/run/glusterd.pid`</div><div>        endscript</div><div>}</div></div><div><br></div><div>I do have /var/run/glusterd.pid on the server side.</div>
<div><br></div><div>Should I change something? Log rotation seems to be broken.</div>
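For what it&#39;s worth, a sketch of a client-side stanza that sidesteps the missing PID file, using copytruncate the way the second server-side stanza above already does: logrotate copies the live log and truncates it in place, so the glusterfs client process keeps writing to the same file descriptor and no HUP is needed. Untested here, so please treat it as an assumption to validate:

```
/var/log/glusterfs/*.log {
        daily
        rotate 7
        delaycompress
        compress
        notifempty
        missingok
        copytruncate
}
```

The postrotate/HUP approach only works where the PID file actually exists, which would explain why rotation broke only where /var/run/glusterd.pid is absent.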
<div><br></div><div><br></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-08-26 9:29 GMT+03:00 Pranith Kumar Karampuri <span dir="ltr">&lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;</span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div text="#000000" bgcolor="#FFFFFF"><div class="">
    <br>
    <div>On 08/26/2014 11:55 AM, Roman wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr">Hello all again!
        <div>I&#39;m back from vacation and I&#39;m pretty happy with 3.5.2
          available for wheezy. Thanks! Just made my updates.</div>
        <div>For 3.5.2 do I still have to set cluster.self-heal-daemon
          to off?</div>
      </div>
    </blockquote></div>
    Welcome back :-). If you set it to off, the test case you execute
    will work (please validate :-) ). But we also need to test it with
    self-heal-daemon &#39;on&#39; and fix any bugs if the test case does not
    work.<span class="HOEnZb"><font color="#888888"><br>
    <br>
    Pranith.</font></span><div><div class="h5"><br>
    <blockquote type="cite">
      <div dir="ltr">
      </div>
      <div class="gmail_extra"><br>
        <br>
        <div class="gmail_quote">2014-08-06 12:49 GMT+03:00 Humble
          Chirammal <span dir="ltr">&lt;<a href="mailto:hchiramm@redhat.com" target="_blank">hchiramm@redhat.com</a>&gt;</span>:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div><br>
              <br>
              <br>
              ----- Original Message -----<br>
              | From: &quot;Pranith Kumar Karampuri&quot; &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;<br>
              | To: &quot;Roman&quot; &lt;<a href="mailto:romeo.r@gmail.com" target="_blank">romeo.r@gmail.com</a>&gt;<br>
              | Cc: <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a>,
              &quot;Niels de Vos&quot; &lt;<a href="mailto:ndevos@redhat.com" target="_blank">ndevos@redhat.com</a>&gt;,
              &quot;Humble Chirammal&quot; &lt;<a href="mailto:hchiramm@redhat.com" target="_blank">hchiramm@redhat.com</a>&gt;<br>
              | Sent: Wednesday, August 6, 2014 12:09:57 PM<br>
              | Subject: Re: [Gluster-users] libgfapi failover problem
              on replica bricks<br>
              |<br>
              | Roman,<br>
              |      The file went into split-brain. I think we should
              do these tests<br>
              | with 3.5.2. Where monitoring the heals is easier. Let me
              also come up<br>
              | with a document about how to do this testing you are
              trying to do.<br>
              |<br>
              | Humble/Niels,<br>
              |      Do we have debs available for 3.5.2? In 3.5.1 there
              was packaging<br>
              | issue where /usr/bin/glfsheal is not packaged along with
              the deb. I<br>
              | think that should be fixed now as well?<br>
              |<br>
            </div>
            Pranith,<br>
            <br>
            The 3.5.2 packages for Debian are not available yet. We are
            coordinating internally to get them processed.<br>
            I will update the list once they are available.<br>
            <br>
            --Humble<br>
            <div>|<br>
              | On 08/06/2014 11:52 AM, Roman wrote:<br>
              | &gt; good morning,<br>
              | &gt;<br>
              | &gt; root@stor1:~# getfattr -d -m. -e hex<br>
              | &gt;
              /exports/fast-test/150G/images/127/vm-127-disk-1.qcow2<br>
              | &gt; getfattr: Removing leading &#39;/&#39; from absolute path
              names<br>
              | &gt; # file:
              exports/fast-test/150G/images/127/vm-127-disk-1.qcow2<br>
              | &gt;
              trusted.afr.HA-fast-150G-PVE1-client-0=0x000000000000000000000000<br>
              | &gt;
              trusted.afr.HA-fast-150G-PVE1-client-1=0x000001320000000000000000<br>
              | &gt; trusted.gfid=0x23c79523075a4158bea38078da570449<br>
              | &gt;<br>
              | &gt; getfattr: Removing leading &#39;/&#39; from absolute path
              names<br>
              | &gt; # file:
              exports/fast-test/150G/images/127/vm-127-disk-1.qcow2<br>
              | &gt;
              trusted.afr.HA-fast-150G-PVE1-client-0=0x000000040000000000000000<br>
              | &gt;
              trusted.afr.HA-fast-150G-PVE1-client-1=0x000000000000000000000000<br>
              | &gt; trusted.gfid=0x23c79523075a4158bea38078da570449<br>
              | &gt;<br>
              | &gt;<br>
              | &gt;<br>
              | &gt; 2014-08-06 9:20 GMT+03:00 Pranith Kumar Karampuri
              &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a><br>
            </div>
            | &gt; &lt;mailto:<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;&gt;:<br>
            <div>| &gt;<br>
              | &gt;<br>
              | &gt;     On 08/06/2014 11:30 AM, Roman wrote:<br>
              | &gt;&gt;     Also, this time files are not the same!<br>
              | &gt;&gt;<br>
              | &gt;&gt;     root@stor1:~# md5sum<br>
              | &gt;&gt;   
               /exports/fast-test/150G/images/127/vm-127-disk-1.qcow2<br>
              | &gt;&gt;     32411360c53116b96a059f17306caeda<br>
              | &gt;&gt;     
              /exports/fast-test/150G/images/127/vm-127-disk-1.qcow2<br>
              | &gt;&gt;<br>
              | &gt;&gt;     root@stor2:~# md5sum<br>
              | &gt;&gt;   
               /exports/fast-test/150G/images/127/vm-127-disk-1.qcow2<br>
              | &gt;&gt;     65b8a6031bcb6f5fb3a11cb1e8b1c9c9<br>
              | &gt;&gt;     
              /exports/fast-test/150G/images/127/vm-127-disk-1.qcow2<br>
              | &gt;     What is the getfattr output?<br>
              | &gt;<br>
              | &gt;     Pranith<br>
              | &gt;<br>
              | &gt;&gt;<br>
              | &gt;&gt;<br>
              | &gt;&gt;     2014-08-05 16:33 GMT+03:00 Roman &lt;<a href="mailto:romeo.r@gmail.com" target="_blank">romeo.r@gmail.com</a><br>
            </div>
            | &gt;&gt;     &lt;mailto:<a href="mailto:romeo.r@gmail.com" target="_blank">romeo.r@gmail.com</a>&gt;&gt;:<br>
            <div>| &gt;&gt;<br>
              | &gt;&gt;         Nope, it is not working. But this time
              it went a bit other way<br>
              | &gt;&gt;<br>
              | &gt;&gt;         root@gluster-client:~# dmesg<br>
              | &gt;&gt;         Segmentation fault<br>
              | &gt;&gt;<br>
              | &gt;&gt;<br>
              | &gt;&gt;         I was not able even to start the VM
              after I done the tests<br>
              | &gt;&gt;<br>
              | &gt;&gt;         Could not read qcow2 header: Operation
              not permitted<br>
              | &gt;&gt;<br>
              | &gt;&gt;         And it seems, it never starts to sync
              files after first<br>
              | &gt;&gt;         disconnect. VM survives first
              disconnect, but not second (I<br>
              | &gt;&gt;         waited around 30 minutes). Also, I&#39;ve<br>
              | &gt;&gt;         got network.ping-timeout: 2 in volume
              settings, but logs<br>
              | &gt;&gt;         react on first disconnect around 30
              seconds. Second was<br>
              | &gt;&gt;         faster, 2 seconds.<br>
              | &gt;&gt;<br>
              | &gt;&gt;         Reaction was different also:<br>
              | &gt;&gt;<br>
              | &gt;&gt;         slower one:<br>
              | &gt;&gt;         [2014-08-05 13:26:19.558435] W
              [socket.c:514:__socket_rwv]<br>
              | &gt;&gt;         0-glusterfs: readv failed (Connection
              timed out)<br>
              | &gt;&gt;         [2014-08-05 13:26:19.558485] W<br>
              | &gt;&gt;       
               [socket.c:1962:__socket_proto_state_machine] 0-glusterfs:<br>
              | &gt;&gt;         reading from socket failed. Error
              (Connection timed out),<br>
            </div>
            | &gt;&gt;         peer (<a href="http://10.250.0.1:24007" target="_blank">10.250.0.1:24007</a>
            &lt;<a href="http://10.250.0.1:24007" target="_blank">http://10.250.0.1:24007</a>&gt;)<br>
            <div>| &gt;&gt;         [2014-08-05
              13:26:21.281426] W [socket.c:514:__socket_rwv]<br>
              | &gt;&gt;         0-HA-fast-150G-PVE1-client-0: readv
              failed (Connection timed out)<br>
              | &gt;&gt;         [2014-08-05 13:26:21.281474] W<br>
              | &gt;&gt;       
               [socket.c:1962:__socket_proto_state_machine]<br>
              | &gt;&gt;         0-HA-fast-150G-PVE1-client-0: reading
              from socket failed.<br>
              | &gt;&gt;         Error (Connection timed out), peer (<a href="http://10.250.0.1:49153" target="_blank">10.250.0.1:49153</a><br>
            </div>
            | &gt;&gt;         &lt;<a href="http://10.250.0.1:49153" target="_blank">http://10.250.0.1:49153</a>&gt;)<br>
            <div>| &gt;&gt;         [2014-08-05
              13:26:21.281507] I<br>
              | &gt;&gt;         [client.c:2098:client_rpc_notify]<br>
              | &gt;&gt;         0-HA-fast-150G-PVE1-client-0:
              disconnected<br>
              | &gt;&gt;<br>
              | &gt;&gt;         the fast one:<br>
              | &gt;&gt;         2014-08-05 12:52:44.607389] C<br>
              | &gt;&gt;       
               [client-handshake.c:127:rpc_client_ping_timer_expired]<br>
              | &gt;&gt;         0-HA-fast-150G-PVE1-client-1: server <a href="http://10.250.0.2:49153" target="_blank">10.250.0.2:49153</a><br>
            </div>
            | &gt;&gt;         &lt;<a href="http://10.250.0.2:49153" target="_blank">http://10.250.0.2:49153</a>&gt;
            has not responded in the last 2<br>
            <div>
              <div>| &gt;&gt;         seconds, disconnecting.<br>
                | &gt;&gt;         [2014-08-05 12:52:44.607491] W
                [socket.c:514:__socket_rwv]<br>
                | &gt;&gt;         0-HA-fast-150G-PVE1-client-1: readv
                failed (No data available)<br>
                | &gt;&gt;         [2014-08-05 12:52:44.607585] E<br>
                | &gt;&gt;         [rpc-clnt.c:368:saved_frames_unwind]<br>
                | &gt;&gt;       
                 (--&gt;/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0xf8)<br>
                | &gt;&gt;         [0x7fcb1b4b0558]<br>
                | &gt;&gt;       
 (--&gt;/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3)<br>
                | &gt;&gt;         [0x7fcb1b4aea63]<br>
                | &gt;&gt;       
 (--&gt;/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe)<br>
                | &gt;&gt;         [0x7fcb1b4ae97e])))
                0-HA-fast-150G-PVE1-client-1: forced<br>
                | &gt;&gt;         unwinding frame type(GlusterFS 3.3)
                op(LOOKUP(27)) called at<br>
                | &gt;&gt;         2014-08-05 12:52:42.463881
                (xid=0x381883x)<br>
                | &gt;&gt;         [2014-08-05 12:52:44.607604] W<br>
                | &gt;&gt;       
                 [client-rpc-fops.c:2624:client3_3_lookup_cbk]<br>
                | &gt;&gt;         0-HA-fast-150G-PVE1-client-1: remote
                operation failed:<br>
                | &gt;&gt;         Transport endpoint is not connected.
                Path: /<br>
                | &gt;&gt;       
                 (00000000-0000-0000-0000-000000000001)<br>
                | &gt;&gt;         [2014-08-05 12:52:44.607736] E<br>
                | &gt;&gt;         [rpc-clnt.c:368:saved_frames_unwind]<br>
                | &gt;&gt;       
                 (--&gt;/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0xf8)<br>
                | &gt;&gt;         [0x7fcb1b4b0558]<br>
                | &gt;&gt;       
 (--&gt;/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3)<br>
                | &gt;&gt;         [0x7fcb1b4aea63]<br>
                | &gt;&gt;       
 (--&gt;/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe)<br>
                | &gt;&gt;         [0x7fcb1b4ae97e])))
                0-HA-fast-150G-PVE1-client-1: forced<br>
                | &gt;&gt;         unwinding frame type(GlusterFS
                Handshake) op(PING(3)) called<br>
                | &gt;&gt;         at 2014-08-05 12:52:42.463891
                (xid=0x381884x)<br>
                | &gt;&gt;         [2014-08-05 12:52:44.607753] W<br>
                | &gt;&gt;       
                 [client-handshake.c:276:client_ping_cbk]<br>
                | &gt;&gt;         0-HA-fast-150G-PVE1-client-1: timer
                must have expired<br>
                | &gt;&gt;         [2014-08-05 12:52:44.607776] I<br>
                | &gt;&gt;         [client.c:2098:client_rpc_notify]<br>
                | &gt;&gt;         0-HA-fast-150G-PVE1-client-1:
                disconnected<br>
                | &gt;&gt;<br>
                | &gt;&gt;<br>
                | &gt;&gt;<br>
                | &gt;&gt;         I&#39;ve got SSD disks (just for an
                info).<br>
                | &gt;&gt;         Should I go and give a try for 3.5.2?<br>
                | &gt;&gt;<br>
                | &gt;&gt;<br>
                | &gt;&gt;<br>
                | &gt;&gt;         2014-08-05 13:06 GMT+03:00 Pranith
                Kumar Karampuri<br>
              </div>
            </div>
            | &gt;&gt;         &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>
            &lt;mailto:<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;&gt;:<br>
            <div>| &gt;&gt;<br>
              | &gt;&gt;             reply along with gluster-users
              please :-). May be you are<br>
              | &gt;&gt;             hitting &#39;reply&#39; instead of &#39;reply
              all&#39;?<br>
              | &gt;&gt;<br>
              | &gt;&gt;             Pranith<br>
              | &gt;&gt;<br>
              | &gt;&gt;             On 08/05/2014 03:35 PM, Roman
              wrote:<br>
              | &gt;&gt;&gt;             To make sure and clean, I&#39;ve
              created another VM with raw<br>
              | &gt;&gt;&gt;             format and goint to repeat
              those steps. So now I&#39;ve got<br>
              | &gt;&gt;&gt;             two VM-s one with qcow2 format
              and other with raw<br>
              | &gt;&gt;&gt;             format. I will send another
              e-mail shortly.<br>
              | &gt;&gt;&gt;<br>
              | &gt;&gt;&gt;<br>
              | &gt;&gt;&gt;             2014-08-05 13:01 GMT+03:00
              Pranith Kumar Karampuri<br>
            </div>
            | &gt;&gt;&gt;             &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>
            &lt;mailto:<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;&gt;:<br>
            <div>
              <div>| &gt;&gt;&gt;<br>
                | &gt;&gt;&gt;<br>
                | &gt;&gt;&gt;                 On 08/05/2014 03:07 PM,
                Roman wrote:<br>
                | &gt;&gt;&gt;&gt;                 really, seems like
                the same file<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;                 stor1:<br>
                | &gt;&gt;&gt;&gt;               
                 a951641c5230472929836f9fcede6b04<br>
                | &gt;&gt;&gt;&gt;                 
                /exports/fast-test/150G/images/127/vm-127-disk-1.qcow2<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;                 stor2:<br>
                | &gt;&gt;&gt;&gt;               
                 a951641c5230472929836f9fcede6b04<br>
                | &gt;&gt;&gt;&gt;                 
                /exports/fast-test/150G/images/127/vm-127-disk-1.qcow2<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;                 one thing I&#39;ve seen
                from logs, that somehow proxmox<br>
                | &gt;&gt;&gt;&gt;                 VE is connecting with
                wrong version to servers?<br>
                | &gt;&gt;&gt;&gt;                 [2014-08-05
                09:23:45.218550] I<br>
                | &gt;&gt;&gt;&gt;               
                 [client-handshake.c:1659:select_server_supported_programs]<br>
                | &gt;&gt;&gt;&gt;               
                 0-HA-fast-150G-PVE1-client-0: Using Program<br>
                | &gt;&gt;&gt;&gt;                 GlusterFS 3.3, Num
                (1298437), Version (330)<br>
                | &gt;&gt;&gt;                 It is the rpc (over the
                network data structures)<br>
                | &gt;&gt;&gt;                 version, which is not
                changed at all from 3.3 so<br>
                | &gt;&gt;&gt;                 thats not a problem. So
                what is the conclusion? Is<br>
                | &gt;&gt;&gt;                 your test case working
                now or not?<br>
                | &gt;&gt;&gt;<br>
                | &gt;&gt;&gt;                 Pranith<br>
                | &gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;                 but if I issue:<br>
                | &gt;&gt;&gt;&gt;                 root@pve1:~#
                glusterfs -V<br>
                | &gt;&gt;&gt;&gt;                 glusterfs 3.4.4 built
                on Jun 28 2014 03:44:57<br>
                | &gt;&gt;&gt;&gt;                 seems ok.<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;                 server  use 3.4.4
                meanwhile<br>
                | &gt;&gt;&gt;&gt;                 [2014-08-05
                09:23:45.117875] I<br>
                | &gt;&gt;&gt;&gt;               
                 [server-handshake.c:567:server_setvolume]<br>
                | &gt;&gt;&gt;&gt;               
                 0-HA-fast-150G-PVE1-server: accepted client from<br>
                | &gt;&gt;&gt;&gt;               
                 stor1-9004-2014/08/05-09:23:45:93538-HA-fast-150G-PVE1-client-1-0<br>
                | &gt;&gt;&gt;&gt;                 (version: 3.4.4)<br>
                | &gt;&gt;&gt;&gt;                 [2014-08-05
                09:23:49.103035] I<br>
                | &gt;&gt;&gt;&gt;               
                 [server-handshake.c:567:server_setvolume]<br>
                | &gt;&gt;&gt;&gt;               
                 0-HA-fast-150G-PVE1-server: accepted client from<br>
                | &gt;&gt;&gt;&gt;               
                 stor1-8998-2014/08/05-09:23:45:89883-HA-fast-150G-PVE1-client-0-0<br>
                | &gt;&gt;&gt;&gt;                 (version: 3.4.4)<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;                 if this could be the
                reason, of course.<br>
                | &gt;&gt;&gt;&gt;                 I did restart the
                Proxmox VE yesterday (just for an<br>
                | &gt;&gt;&gt;&gt;                 information)<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;                 2014-08-05 12:30
                GMT+03:00 Pranith Kumar Karampuri<br>
              </div>
            </div>
            | &gt;&gt;&gt;&gt;                 &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>
            &lt;mailto:<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;&gt;:<br>
            <div>
              <div>| &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;                     On 08/05/2014
                02:33 PM, Roman wrote:<br>
                | &gt;&gt;&gt;&gt;&gt;                     Waited long
                enough for now, still different<br>
                | &gt;&gt;&gt;&gt;&gt;                     sizes and no
                logs about healing :(<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;                     stor1<br>
                | &gt;&gt;&gt;&gt;&gt;                     # file:<br>
                | &gt;&gt;&gt;&gt;&gt;                   
                 exports/fast-test/150G/images/127/vm-127-disk-1.qcow2<br>
                | &gt;&gt;&gt;&gt;&gt;                   
                 trusted.afr.HA-fast-150G-PVE1-client-0=0x000000000000000000000000<br>
                | &gt;&gt;&gt;&gt;&gt;                   
                 trusted.afr.HA-fast-150G-PVE1-client-1=0x000000000000000000000000<br>
                | &gt;&gt;&gt;&gt;&gt;                   
                 trusted.gfid=0xf10ad81b58484bcd9b385a36a207f921<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;                     root@stor1:~#
                du -sh<br>
                | &gt;&gt;&gt;&gt;&gt;                   
                 /exports/fast-test/150G/images/127/<br>
                | &gt;&gt;&gt;&gt;&gt;                     1.2G 
                /exports/fast-test/150G/images/127/<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;                     stor2<br>
                | &gt;&gt;&gt;&gt;&gt;                     # file:<br>
                | &gt;&gt;&gt;&gt;&gt;                   
                 exports/fast-test/150G/images/127/vm-127-disk-1.qcow2<br>
                | &gt;&gt;&gt;&gt;&gt;                   
                 trusted.afr.HA-fast-150G-PVE1-client-0=0x000000000000000000000000<br>
                | &gt;&gt;&gt;&gt;&gt;                   
                 trusted.afr.HA-fast-150G-PVE1-client-1=0x000000000000000000000000<br>
                | &gt;&gt;&gt;&gt;&gt;                   
                 trusted.gfid=0xf10ad81b58484bcd9b385a36a207f921<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;                     root@stor2:~#
                du -sh<br>
                | &gt;&gt;&gt;&gt;&gt;                   
                 /exports/fast-test/150G/images/127/<br>
                | &gt;&gt;&gt;&gt;&gt;                     1.4G 
                /exports/fast-test/150G/images/127/<br>
                | &gt;&gt;&gt;&gt;                     According to the
                changelogs, the file doesn&#39;t<br>
                | &gt;&gt;&gt;&gt;                     need any healing.
                Could you stop the operations<br>
                | &gt;&gt;&gt;&gt;                     on the VMs and
                take md5sum on both these machines?<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;                     Pranith<br>
                | &gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;                     2014-08-05
                11:49 GMT+03:00 Pranith Kumar<br>
                | &gt;&gt;&gt;&gt;&gt;                     Karampuri
                &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a><br>
              </div>
            </div>
            | &gt;&gt;&gt;&gt;&gt;                     &lt;mailto:<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;&gt;:<br>
            <div>
              <div>| &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;                         On
                08/05/2014 02:06 PM, Roman wrote:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                         Well,
                it seems like it doesn&#39;t see the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 changes were made to the volume ? I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 created two files 200 and 100 MB (from<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 /dev/zero) after I disconnected the first<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 brick. Then connected it back and got<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                         these
                logs:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [2014-08-05 08:30:37.830150] I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [glusterfsd-mgmt.c:1584:mgmt_getspec_cbk]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 0-glusterfs: No change in volfile, continuing<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [2014-08-05 08:30:37.830207] I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [rpc-clnt.c:1676:rpc_clnt_reconfig]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 0-HA-fast-150G-PVE1-client-0: changing<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                         port
                to 49153 (from 0)<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [2014-08-05 08:30:37.830239] W<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [socket.c:514:__socket_rwv]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 0-HA-fast-150G-PVE1-client-0: readv<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 failed (No data available)<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [2014-08-05 08:30:37.831024] I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [client-handshake.c:1659:select_server_supported_programs]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 0-HA-fast-150G-PVE1-client-0: Using<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 Program GlusterFS 3.3, Num (1298437),<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 Version (330)<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [2014-08-05 08:30:37.831375] I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [client-handshake.c:1456:client_setvolume_cbk]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 0-HA-fast-150G-PVE1-client-0: Connected<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                         to <a href="http://10.250.0.1:49153" target="_blank">10.250.0.1:49153</a><br>
              </div>
            </div>
            | &gt;&gt;&gt;&gt;&gt;&gt;                         &lt;<a href="http://10.250.0.1:49153" target="_blank">http://10.250.0.1:49153</a>&gt;, attached
            to<br>
            <div>
              <div>| &gt;&gt;&gt;&gt;&gt;&gt;               
                         remote volume &#39;/exports/fast-test/150G&#39;.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [2014-08-05 08:30:37.831394] I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [client-handshake.c:1468:client_setvolume_cbk]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 0-HA-fast-150G-PVE1-client-0: Server and<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 Client lk-version numbers are not same,<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 reopening the fds<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [2014-08-05 08:30:37.831566] I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [client-handshake.c:450:client_set_lk_version_cbk]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 0-HA-fast-150G-PVE1-client-0: Server lk<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 version = 1<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [2014-08-05 08:30:37.830150] I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 [glusterfsd-mgmt.c:1584:mgmt_getspec_cbk]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 0-glusterfs: No change in volfile, continuing<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                         this
                line seems weird to me tbh.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                         I do
                not see any traffic on switch<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 interfaces between gluster servers, which<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 means, there is no syncing between them.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                         I
                tried to ls -l the files on the client<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                         and
                servers to trigger the healing, but<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                         seems
                like no success. Should I wait more?<br>
                | &gt;&gt;&gt;&gt;&gt;                         Yes, it
                should take around 10-15 minutes.<br>
                | &gt;&gt;&gt;&gt;&gt;                         Could you
                provide &#39;getfattr -d -m. -e hex<br>
                | &gt;&gt;&gt;&gt;&gt;                       
                 &lt;file-on-brick&gt;&#39; on both the bricks.<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;                         Pranith<br>
                | &gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 2014-08-05 11:25 GMT+03:00 Pranith Kumar<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                       
                 Karampuri &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a><br>
              </div>
            </div>
            | &gt;&gt;&gt;&gt;&gt;&gt; &lt;mailto:<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;&gt;:<br>
            <div>
              <div>| &gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                           
                 On 08/05/2014 01:10 PM, Roman wrote:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   Ahha! For some reason I was not able<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   to start the VM anymore, Proxmox VE<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   told me, that it is not able to read<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   the qcow2 header due to permission<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   is denied for some reason. So I just<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   deleted that file and created a new<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   VM. And the nex message I&#39;ve got was<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   this:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                           
                 Seems like these are the messages<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                           
                 where you took down the bricks before<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                           
                 self-heal. Could you restart the run<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                           
                 waiting for self-heals to complete<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                           
                 before taking down the next brick?<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;                           
                 Pranith<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   [2014-08-05 07:31:25.663412] E<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                 
                 [afr-self-heal-common.c:197:afr_sh_print_split_brain_log]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   0-HA-fast-150G-PVE1-replicate-0:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   Unable to self-heal contents of<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   &#39;/images/124/vm-124-disk-1.qcow2&#39;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   (possible split-brain). Please<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   delete the file from all but the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   preferred subvolume.- Pending<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   matrix:  [ [ 0 60 ] [ 11 0 ] ]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   [2014-08-05 07:31:25.663955] E<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                 
                 [afr-self-heal-common.c:2262:afr_self_heal_completion_cbk]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   0-HA-fast-150G-PVE1-replicate-0:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   background  data self-heal failed on<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                   /images/124/vm-124-disk-1.qcow2<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt; 2014-08-05 10:13 GMT+03:00 Pranith Kumar Karampuri &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a><br>
              </div>
            </div>
            | &gt;&gt;&gt;&gt;&gt;&gt;&gt; &lt;mailto:<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;&gt;:<br>
            <div>
              <div>| &gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                       I just responded to your earlier<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                       mail about how the log looks.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                       The log comes on the mount&#39;s logfile<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                       Pranith<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;                         
                       On 08/05/2014 12:41 PM, Roman wrote:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           Ok, so I&#39;ve waited enough, I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           think. Had no any traffic on<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           switch ports between servers.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           Could not find any suitable log<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           message about completed<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           self-heal (waited about 30<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           minutes). Plugged out the other<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           server&#39;s UTP cable this time<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           and got in the same situation:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           root@gluster-test1:~# cat<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           /var/log/dmesg<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           -bash: /bin/cat: Input/output error<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           brick logs:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           [2014-08-05 07:09:03.005474] I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           [server.c:762:server_rpc_notify]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           0-HA-fast-150G-PVE1-server:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           disconnecting connectionfrom<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                         
                 pve1-27649-2014/08/04-13:27:54:720789-HA-fast-150G-PVE1-client-0-0<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           [2014-08-05 07:09:03.005530] I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           [server-helpers.c:729:server_connection_put]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           0-HA-fast-150G-PVE1-server:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           Shutting down connection<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                         
                 pve1-27649-2014/08/04-13:27:54:720789-HA-fast-150G-PVE1-client-0-0<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           [2014-08-05 07:09:03.005560] I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           [server-helpers.c:463:do_fd_cleanup]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           0-HA-fast-150G-PVE1-server: fd<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           cleanup on<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           /images/124/vm-124-disk-1.qcow2<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           [2014-08-05 07:09:03.005797] I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                         
                 [server-helpers.c:617:server_connection_destroy]<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           0-HA-fast-150G-PVE1-server:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                           destroyed connection of<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                         
                 pve1-27649-2014/08/04-13:27:54:720789-HA-fast-150G-PVE1-client-0-0<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; 2014-08-05 9:53 GMT+03:00 Pranith Kumar Karampuri &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a><br>
              </div>
            </div>
            | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; &lt;mailto:<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;&gt;:<br>
            <div>
              <div>| &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               Do you think it is possible<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               for you to do these tests<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               on the latest version<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               3.5.2? &#39;gluster volume heal<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               &lt;volname&gt; info&#39; would give<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               you that information in<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               versions &gt; 3.5.1.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               Otherwise you will have to<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               check it from either the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               logs, there will be<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               self-heal completed message<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               on the mount logs (or) by<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               observing &#39;getfattr -d -m.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               -e hex &lt;image-file-on-bricks&gt;&#39;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               Pranith<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               On 08/05/2014 12:09 PM,<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                     
                               Roman wrote:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                   Ok, I understand. I will<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                   try this shortly.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                   How can I be sure, that<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                   healing process is done,<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                   if I am not able to see<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                   its status?<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; 2014-08-05 9:30 GMT+03:00 Pranith Kumar Karampuri &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a><br>
              </div>
            </div>
            | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; &lt;mailto:<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;&gt;:<br>
            <div>
              <div>| &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       Mounts will do the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       healing, not the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       self-heal-daemon. The<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       problem I feel is that<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       whichever process does<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       the healing has the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       latest information<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       about the good bricks<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       in this usecase. Since<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       for VM usecase, mounts<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       should have the latest<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       information, we should<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       let the mounts do the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       healing. If the mount<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       accesses the VM image<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       either by someone<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       doing operations<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       inside the VM or<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       explicit stat on the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       file it should do the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       healing.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       Pranith.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       On 08/05/2014 10:39<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;                 
                                       AM, Roman wrote:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                           Hmmm, you told me to<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                           turn it off. Did I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                           understood something<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                           wrong? After I issued<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                           the command you&#39;ve<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                           sent me, I was not<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                           able to watch the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                           healing process, it<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                           said, it won&#39;t be<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                           healed, becouse its<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                           turned off.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; 2014-08-05 5:39 GMT+03:00 Pranith Kumar Karampuri &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a><br>
              </div>
            </div>
            | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; &lt;mailto:<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;&gt;:<br>
            <div>
              <div>| &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                               You didn&#39;t<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                               mention anything<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                               about<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                               self-healing. Did<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                               you wait until<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                               the self-heal is<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                               complete?<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                               Pranith<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                               On 08/04/2014<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                               05:49 PM, Roman<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;             
                                               wrote:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   Hi!<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   Result is pretty<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   same. I set the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   switch port down<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   for 1st server,<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   it was ok. Then<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   set it up back<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   and set other<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   server&#39;s port<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   off. and it<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   triggered IO<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   error on two<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   virtual<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   machines: one<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   with local root<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   FS but network<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   mounted storage.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   and other with<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   network root FS.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   1st gave an<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   error on copying<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   to or from the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   mounted network<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   disk, other just<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   gave me an error<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   for even reading<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   log.files.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   cat:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                 
                 /var/log/alternatives.log:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   Input/output error<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   then I reset the<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   kvm VM and it<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   said me, there<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   is no boot<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   device. Next I<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   virtually<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   powered it off<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   and then back on<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   and it has booted.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   By the way, did<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   I have to<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   start/stop volume?<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   &gt;&gt; Could you do<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   the following<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   and test it again?<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   &gt;&gt; gluster
                volume<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   set &lt;volname&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                 
                 cluster.self-heal-daemon<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   off<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   &gt;&gt;Pranith<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   2014-08-04 14:10<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   GMT+03:00<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   Pranith Kumar<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   Karampuri<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;         
                                                   &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a><br>
              </div>
            </div>
            | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; &lt;mailto:<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;&gt;:<br>
            <div>
              <div>| &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; On 08/04/2014 03:33 PM, Roman wrote:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Hello!<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Facing the same problem as mentioned here:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; <a href="http://supercolony.gluster.org/pipermail/gluster-users/2014-April/039959.html" target="_blank">http://supercolony.gluster.org/pipermail/gluster-users/2014-April/039959.html</a><br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; My setup is up and running, so I&#39;m ready to help you back with feedback.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Setup: a Proxmox server as client and 2 physical GlusterFS servers.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Server side and client side are both running glusterfs 3.4.4 from the gluster repo at the moment.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; The problem is:<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; 1. Created replica bricks.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; 2. Mounted in Proxmox (tried both Proxmox ways: via the GUI and via fstab with a backup volume line); btw, while mounting via fstab I&#39;m unable to launch a VM without cache, even though direct-io-mode is enabled in the fstab line.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; 3. Installed a VM.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; 4. Brought one volume down - OK.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; 5. Brought it back up and waited until the sync was done.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; 6. Brought the other volume down - got I/O errors on the VM guest and was not able to restore the VM after resetting it via the host; it says &quot;no bootable media&quot;. After I shut it down (forced) and brought it back up, it boots.<br>
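The six steps above amount to a brick-failover test. A minimal sketch of the procedure follows; the volume name, hostnames, and brick paths are placeholders invented for illustration, not values from this thread:

```shell
# Create and start a 2-way replicated volume (names/paths are illustrative).
gluster volume create testvol replica 2 \
    stor1:/export/brick stor2:/export/brick
gluster volume start testvol

# Mount it on the Proxmox host; backupvolfile-server lets the client
# fetch the volfile from the second server if the first is down.
mount -t glusterfs -o backupvolfile-server=stor2 \
    stor1:/testvol /mnt/pve/testvol

# After taking one brick down and bringing it back (steps 4-5),
# wait until this reports no entries needing heal before failing
# the other brick (step 6):
gluster volume heal testvol info
```

The key point of the test is step 5: the second brick must not be taken down until the heal from step 4 has fully completed, otherwise the only up-to-date copy goes offline.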
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Could you do the following and test it again?<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; gluster volume set &lt;volname&gt; cluster.self-heal-daemon off<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Pranith<br>
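The suggested test can be sketched as follows. The volume name is a placeholder, and re-enabling the daemon afterwards is my assumption about how the test would be wrapped up, not part of the suggestion itself:

```shell
# Disable the self-heal daemon so heals happen only in the
# client's I/O path (foreground), then repeat the failover test:
gluster volume set testvol cluster.self-heal-daemon off

# Confirm the option was recorded on the volume:
gluster volume info testvol | grep -i self-heal

# Once the test is done, the daemon can be turned back on:
gluster volume set testvol cluster.self-heal-daemon on
```

With the daemon off, `gluster volume heal <volname> info` may behave differently, so results should be checked from the client-side logs as done earlier in this thread.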
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Need help. Tried 3.4.3 and 3.4.4. Still missing packages for 3.4.5 for Debian and for 3.5.2 (3.5.1 always gives a healing error for some reason).<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; --<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Best regards,<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Roman.<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; _______________________________________________<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Gluster-users mailing list<br>
                | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
              </div>
            </div>
            | &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; &lt;mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>&gt;<br>
            <div>
              <div>| &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; <a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br>
              </div>
            </div>
          </blockquote>
        </div>
        <br>
        <br clear="all">
        <div><br>
        </div>
        -- <br>
        Best regards,<br>
        Roman.
      </div>
    </blockquote>
    <br>
  </div></div></div>

</blockquote></div><br><br clear="all"><div><br></div>-- <br>Best regards,<br>Roman.
</div>