<div dir="ltr">Hello,<div><br></div><div>I found another strange thing.</div><div><br></div><div style>On the dd test (dd if=/dev/zero of=2testbin bs=1M count=1024 oflag=direct) my volume shows only 18-19MB/s.</div><div style>
Full network speed is 90-110MB/s; bare storage speed is ~200MB/s.</div><div style><br></div><div style>Volume type is distributed-replicated: 2 replicas, 4 nodes. Volumes are mounted via FUSE with the direct-io=enable option.</div><div style>
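To separate gluster overhead from the underlying layers, the same dd test can be run against a brick directly and against the FUSE mount; the paths below are hypothetical examples, not the poster's actual layout:

```shell
# Baseline: write directly to the brick's local filesystem (hypothetical path)
dd if=/dev/zero of=/data/brick1/testbin bs=1M count=1024 oflag=direct

# Same test through the gluster FUSE mount (hypothetical path)
dd if=/dev/zero of=/mnt/glustervol/testbin bs=1M count=1024 oflag=direct
```

With 2 replicas, the FUSE client writes each block to both replicas over the network, so the mount figure is expected to sit well below raw network speed even when everything is healthy.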
<br></div><div style>That's really slow, right?</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2013/3/5 harry mangalam <span dir="ltr"><<a href="mailto:harry.mangalam@uci.edu" target="_blank">harry.mangalam@uci.edu</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">This kind of info is surprisingly hard to obtain. The gluster docs do contain<br>
some of it, ie:<br>
<br>
<<a href="http://community.gluster.org/a/linux-kernel-tuning-for-glusterfs/" target="_blank">http://community.gluster.org/a/linux-kernel-tuning-for-glusterfs/</a>><br>
<br>
I also found well-described kernel tuning parameters in the FHGFS wiki (as<br>
another distributed fs, they share some characteristics)<br>
<br>
<a href="http://www.fhgfs.com/wiki/wikka.php?wakka=StorageServerTuning" target="_blank">http://www.fhgfs.com/wiki/wikka.php?wakka=StorageServerTuning</a><br>
<br>
and more XFS filesystem tuning params here:<br>
<br>
<<a href="http://www.mythtv.org/wiki/Optimizing_Performance#Further_Information" target="_blank">http://www.mythtv.org/wiki/Optimizing_Performance#Further_Information</a>><br>
<br>
and here:<br>
<<a href="http://www.mysqlperformanceblog.com/2011/12/16/setting-up-xfs-the-simple-" target="_blank">http://www.mysqlperformanceblog.com/2011/12/16/setting-up-xfs-the-simple-</a><br>
edition><br>
<br>
But of course, YMMV and a number of these parameters conflict and/or have<br>
serious tradeoffs, as you discovered.<br>
<br>
LSI recently loaned me a Nytro SAS controller (on-card SSD-cached) which seems<br>
pretty phenomenal on a single brick (and is predicted to perform well based on<br>
their profiling), but I am waiting for another node to arrive before I can test<br>
it under true gluster conditions. Has anyone else tried this hardware?<br>
<br>
hjm<br>
<div><div class="h5"><br>
On Tuesday, March 05, 2013 12:34:41 PM Nikita A Kardashin wrote:<br>
> Hello all!<br>
><br>
> I solved this problem today.<br>
> The root cause is an incompatibility between the gluster cache and the KVM<br>
> cache.<br>
><br>
> The bug reproduces if a KVM virtual machine is created with the<br>
> cache=writethrough option (the default for OpenStack) and hosted on a<br>
> GlusterFS volume. If any other cache mode is used (cache=writeback, or<br>
> cache=none with direct-io), write performance to an existing file inside<br>
> the VM is equal to bare storage write performance (from the host machine).<br>
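[The cache mode described above is the qemu/libvirt disk cache setting. A minimal sketch of launching a guest with the working mode; the memory size and image path are made-up examples:]

```shell
# cache=none bypasses the host page cache (direct I/O on the image file);
# the image path on a gluster mount is a hypothetical example.
qemu-system-x86_64 -m 2048 \
  -drive file=/mnt/glustervol/vm.qcow2,if=virtio,cache=none
```

[In libvirt/OpenStack the equivalent knob is the cache attribute on the disk driver element of the domain XML.]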
><br>
> I think this should be documented in Gluster, and maybe a bug should be filed.<br>
><br>
> Another question: where can I read something about gluster tuning (optimal<br>
> cache size, write-behind, flush-behind use cases and so on)? I found only an<br>
> options list, without any how-to or tested cases.<br>
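[For reference, those translator options are set per-volume with the gluster CLI. A hedged sketch; the volume name and values are example assumptions, not recommendations:]

```shell
# Example values only - tune against your own workload and re-benchmark.
gluster volume set myvol performance.cache-size 256MB
gluster volume set myvol performance.write-behind on
gluster volume set myvol performance.flush-behind on
gluster volume set myvol performance.write-behind-window-size 1MB
```

["gluster volume set help" lists the available options with their defaults and short descriptions.]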
><br>
><br>
> 2013/3/5 Toby Corkindale <<a href="mailto:toby.corkindale@strategicdata.com.au">toby.corkindale@strategicdata.com.au</a>><br>
><br>
> > On 01/03/13 21:12, Brian Candler wrote:<br>
> >> On Fri, Mar 01, 2013 at 03:30:07PM +0600, Nikita A Kardashin wrote:<br>
> >>> If I try to execute the above command inside a virtual machine (KVM),<br>
> >>> the first time all goes right - about 900MB/s (a cache effect, I<br>
> >>> think), but if I run this test again on the existing file, the task<br>
> >>> (dd) hangs and can be stopped only by Ctrl+C.<br>
> >>> Overall virtual system latency is poor too. For example, apt-get<br>
> >>> upgrade upgrades the system very, very slowly, freezing on "Unpacking<br>
> >>> replacement" and other io-related steps.<br>
> >>> Does glusterfs have any tuning options that can help me?<br>
> >><br>
> >> If you are finding that processes hang or freeze indefinitely, this is<br>
> >> not<br>
> >> a question of "tuning", this is simply "broken".<br>
> >><br>
> >> Anyway, you're asking the wrong person - I'm currently in the process of<br>
> >> stripping out glusterfs, although I remain interested in the project.<br>
> >><br>
> >> I did find that KVM performed very poorly, but KVM was not my main<br>
> >> application and that's not why I'm abandoning it. I'm stripping out<br>
> >> glusterfs primarily because it's not supportable in my environment,<br>
> >> because<br>
> >> there is no documentation on how to analyse and recover from failure<br>
> >> scenarios which can and do happen. This point in more detail:<br>
</div></div>> >> <a href="http://www.gluster.org/pipermail/gluster-users/2013-January/035118.html" target="_blank">http://www.gluster.org/pipermail/gluster-users/2013-January/035118.html</a><br>
<div class="im">> >><br>
> >> The other downside of gluster was its lack of flexibility, in particular<br>
> >> the<br>
> >> fact that there is no usage scaling factor on bricks, so that even with a<br>
> >> simple distributed setup all your bricks have to be the same size. Also,<br>
> >> the object store feature which I wanted to use, has clearly had hardly<br>
> >> any<br>
> >> testing (even the RPM packages don't install properly).<br>
> >><br>
> >> I *really* wanted to deploy gluster, because in principle I like the idea<br>
> >> of<br>
> >> a virtual distribution/replication system which sits on top of existing<br>
> >> local filesystems. But for storage, I need something where operational<br>
> >> supportability is at the top of the pile.<br>
> ><br>
> > I have to agree; GlusterFS has been in use here in production for a while,<br>
> > and while it mostly works, it's been fragile and documentation has been<br>
> > disappointing. Despite 3.3 being in beta for a year, it still seems to<br>
> > have been poorly tested. For example, I can't believe almost no-one else<br>
> > noticed that the log files were busted, nor that the bug report has been<br>
> > around for a quarter of a year without being responded to or fixed.<br>
> ><br>
> > I have to ask -- what are you moving to now, Brian?<br>
> ><br>
> > -Toby<br>
> ><br>
> ><br>
</div>> > _______________________________________________<br>
> > Gluster-users mailing list<br>
> > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> > <a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br>
<br>
---<br>
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine<br>
[m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487<br>
415 South Circle View Dr, Irvine, CA, 92697 [shipping]<br>
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)<br>
---<br>
"Something must be done. [X] is something. Therefore, we must do it."<br>
Bruce Schneier, on American response to just about anything.<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>With best regards,<br>differentlocal (<a href="http://www.differentlocal.ru">www.differentlocal.ru</a> | <a href="mailto:differentlocal@gmail.com">differentlocal@gmail.com</a>),<br>
System administrator.
</div>