<div dir="ltr"><div><div>In the client's log, I found:<br><br>[2014-01-28 17:54:36.839220] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-sh-ugc1-mams-replicate-7: no active sinks for performing self-heal on file /fytest/46<br>
[2014-01-28 17:55:05.251490] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-sh-ugc1-mams-replicate-7: no active sinks for performing self-heal on file /fytest/49<br><br></div>the /fytest/46 & /fytest/49 are BAD files.<br>
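
For reference, this is how I have been inspecting the pending-heal state. The volume name sh-ugc1-mams is only my reading of the translator name in the log, and the brick path below is a placeholder; adjust both to your setup:

    # list the files the self-heal daemon still considers pending
    gluster volume heal sh-ugc1-mams info

    # on a brick server, dump the AFR changelog xattrs of one BAD file;
    # non-zero trusted.afr.* counters show which copies are considered dirty
    getfattr -d -m . -e hex /path/to/brick/fytest/46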

On Tue, Jan 28, 2014 at 4:56 PM, Dan Mons <dmons@cuttingedge.com.au> wrote:
> Is your write single-threaded or multi-threaded?
>
> If it's single-threaded, try writing your files across as many threads
> as possible, and see what the performance improvement is like.
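>
> For example, a rough sketch of a multi-threaded test (the mount point,
> file names, and sizes are only placeholders):
>
>     # fan the writes out across 8 concurrent dd jobs, then wait for all
>     for i in $(seq 1 8); do
>         dd if=/dev/zero of=/mnt/glustervol/fytest/job$i bs=1M count=1024 &
>     done
>     wait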
>
> -Dan
> ----------------
> Dan Mons
> Skunk Works
> Cutting Edge
> http://cuttingedge.com.au
>
>
> On 28 January 2014 18:49, Mingfan Lu <mingfan.lu@gmail.com> wrote:
> >
> > Hi,
> > I have a distributed, replica=3 volume (no stripe) in a cluster. I used
> > dd to write 120 files as a test, and found that the write performance of
> > some files is much lower than that of the others. All of these "BAD"
> > files are stored on the same three brick servers for replication (I call
> > them node1, node2, node3).
> >
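> > The test was essentially a loop of dd writes like this (the mount point
> > and sizes here are just an illustration, not the exact script):
> >
> >     for i in $(seq 1 120); do
> >         dd if=/dev/zero of=/mnt/glustervol/fytest/$i bs=1M count=1024
> >     done
> >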
> > E.g. the bad write performance could be 10MB/s while the good
> > performance could be 150MB/s or more.
> >
> > There are no problems with the RAID arrays or the network.
> > If I stop node1 & node2, the write performance of the "BAD" files is
> > similar to (or even better than) that of the GOOD ones.
> >
> > One thing I must mention is that the RAID arrays of node1 and node2 were
> > reformatted for some reason, so there are many self-heal activities
> > restoring files on node1 and node2.
> > Is the BAD write performance caused by aggressive self-heal?
> > How could I slow down the self-heal?
> > Any advice?
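> >
> > The only throttling options I have found so far are these (names taken
> > from `gluster volume set help`; please correct me if these are not the
> > right knobs):
> >
> >     # allow fewer background self-heals to run concurrently
> >     gluster volume set <volname> cluster.background-self-heal-count 1
> >
> >     # shrink the block window each data self-heal works on per cycle
> >     gluster volume set <volname> cluster.self-heal-window-size 1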