<div dir="ltr">Chris,<br> Thanks for benchmark numbers. I will check this. Recently I too observed this type of behavior. Will get back with some inputs and a fix probably.<br><br>Regards,<br>Amar<br><br><div class="gmail_quote">
2008/8/7 Keith Freedman <span dir="ltr"><<a href="mailto:freedman@freeformit.com">freedman@freeformit.com</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="Ih2E3d">At 08:20 AM 8/7/2008, Chris Davies wrote:<br>
>I'm not convinced that this is a network or hardware problem.<br>
<br>
</div>It doesn't sound like it to me either. What do the server stats look like<br>
while you're untarring?<br>
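<br>
For example, something as simple as the following, left running on each server<br>
while the tar is unpacking, would show whether it's CPU-bound, disk-bound, or<br>
neither (iostat needs the sysstat package; the exact commands are only a<br>
suggestion):<br>
<br>
vmstat 1 # run queue, CPU, context switches<br>
iostat -x 1 # per-device utilisation and average wait<br>
top -b -d 1 -n 60 | grep glusterfsd # glusterfsd CPU/memory over a minute<br>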
<br>
Hopefully one of the Gluster devs will step in with some thoughts.<br>
<div><div></div><div class="Wj3C7c"><br>
<br>
> ><br>
> ><br>
> > Hope that wasn't confusing.<br>
> ><br>
> > At 10:05 PM 8/6/2008, Chris Davies wrote:<br>
> >> A continuation:<br>
> >><br>
> >> I used XFS & MD raid 1 on the partitions for the initial tests.<br>
> >> I tested reiser3 and reiser4 with no significant difference<br>
> >> I reraided to MD Raid 0 with XFS and received some improvement<br>
> >><br>
> >> I NFS mounted the partition and received bonnie++ numbers similar to<br>
> >> the best clientside AFR numbers I have been able to get, but<br>
> >> unpacking the kernel using nfsv4/udp took 1 minute 47 seconds compared<br>
> >> with 12 seconds for the bare drive, 41 seconds for serverside AFR and<br>
> >> an average of 17 minutes for clientside AFR.<br>
> >><br>
> >> If I turn off AFR, whether I mount the remote machine over the net or<br>
> >> use the local server's brick, tar xjf of a kernel takes roughly 29<br>
> >> seconds.<br>
> >><br>
> >> Large files replicate almost at wire speed. rsync/cp -Rp of a large<br>
> >> directory takes considerable time.<br>
> >><br>
> >> Both 1.4.0 QA releases I've tried (1.4.0qa32 and 1.4.0qa33) have broken<br>
> >> within minutes with my configurations. I'll turn debug logs on and post<br>
> >> summaries of those.<br>
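> >><br>
> >> (Going from memory of the 1.3/1.4 command line here, so treat the exact<br>
> >> flags as an assumption: debug logging should just be a matter of adding a<br>
> >> log file and level to the mount command, e.g.<br>
> >><br>
> >> glusterfs -f /etc/glusterfs/gluster-client.vol -l /var/log/glusterfs/client.log -L DEBUG /gfs<br>
> >><br>
> >> and the same -l/-L options on glusterfsd for the server side.)<br>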
> >><br>
> >> On Aug 6, 2008, at 2:48 PM, Chris Davies wrote:<br>
> >><br>
> >> > OS: Debian Linux/4.1, 64bit build<br>
> >> > Hardware: quad core xeon x3220, 8gb RAM, dual 7200RPM 1000gb WD Hard<br>
> >> > Drives, 750gb raid 1 partition set as /gfsvol to be exported, dual<br>
> >> > gigE, juniper ex3200 switch<br>
> >> ><br>
> >> > Fuse libraries: fuse-2.7.3glfs10<br>
> >> > Gluster: glusterfs-1.3.10<br>
> >> ><br>
> >> > Running bonnie++ on both machines results in almost identical numbers;<br>
> >> > eth1 is reserved wholly for server-to-server communication. Right<br>
> >> > now, the only load on these machines comes from my testbed. There are<br>
> >> > four tests that give a reasonable indicator of performance.<br>
> >> ><br>
> >> > * loading a wordpress blog and looking at the line:<br>
> >> > &lt;!-- 24 queries. 0.634 seconds. --&gt;<br>
> >> > * dd if=/dev/zero of=/gfs/test/out bs=1M count=512<br>
> >> > * time tar xjf /gfs/test/linux-2.6.26.1.tar.bz2<br>
> >> > * /usr/sbin/bonnie++ /gfs/test/<br>
> >> ><br>
> >> > On the wordpress test, .3 seconds is typical. On various gluster<br>
> >> > configurations I've received between .411 seconds (server side afr<br>
> >> > config below) and 1.2 seconds with some of the example<br>
> >> > configurations. Currently, my clientside AFR config comes in at<br>
> >> > .5xx seconds rather consistently.<br>
> >> ><br>
> >> > The second test on the clientside AFR results in 536870912 bytes<br>
> >> > (537 MB) copied, 4.65395 s, 115 MB/s<br>
> >> ><br>
> >> > The third test is unpacking a kernel which has ranged from 28 seconds<br>
> >> > using the Serverside AFR to 6+ minutes on some configurations.<br>
> >> > Currently the clientside AFR config comes in at about 17 minutes.<br>
> >> ><br>
> >> > The fourth test is a run of bonnie++ which varies from 36 minutes on<br>
> >> > the serverside AFR to the 80 minute run on the clientside AFR config.<br>
> >> ><br>
> >> > Current test environment is using both servers as clients & servers --<br>
> >> > if I can get reasonable performance, the existing machines will become<br>
> >> > clients and the servers will be split to their own platform, so I<br>
> >> > want to make sure I am using tcp for connections to give as close to a<br>
> >> > real world deployment as possible. This means I cannot run a<br>
> >> > client-only config.<br>
> >> ><br>
> >> > Baseline Wordpress returns .311-.399 seconds<br>
> >> > Baseline dd 536870912 bytes (537 MB) copied, 0.489522 s, 1.1 GB/s<br>
> >> > Baseline tar xjf of the kernel, real 0m12.164s<br>
> >> > Baseline Config bonnie++ run on the raid 1 partition: (echo data |<br>
> >> > bon_csv2txt for the text reporting)<br>
> >> ><br>
> >> > c1ws1,16G,66470,97,93198,16,42430,6,60253,86,97153,7,381.3,0,16,7534,37,+++++,+++,5957,23,7320,34,+++++,+++,4667,21<br>
> >> ><br>
> >> > So far, the best performance I could manage was Server Side AFR with<br>
> >> > writebehind/readahead on the server, with aggregate-size set to 0mb,<br>
> >> > and the client side running writebehind/readahead. That resulted in:<br>
> >> ><br>
> >> > c1ws2,16G,37636,50,76855,3,17429,2,60376,76,87653,3,158.6,0,16,1741,3,9683,6,2591,3,2030,3,9790,5,2369,3<br>
> >> ><br>
> >> > It was suggested in IRC that clientside AFR would be faster and more<br>
> >> > reliable; however, I've ended up with the following as the best<br>
> >> > results from multiple attempts:<br>
> >> ><br>
> >> > c1ws1,16G,46041,58,76811,2,4603,0,59140,76,86103,3,132.4,0,16,1069,2,4795,2,1308,2,1045,2,5209,2,1246,2<br>
> >> ><br>
> >> > The bonnie++ run from the serverside AFR that resulted in the best<br>
> >> > results I've received to date took 34 minutes. The latest clientside<br>
> >> > AFR bonnie run took 80 minutes. Based on the website, I would expect<br>
> >> > to see better performance than drbd/GFS, but so far that hasn't been<br>
> >> > the case.<br>
> >> ><br>
> >> > It's been suggested that I use unify-rr-afr. In my current setup, it<br>
> >> > seems that to do that, I would need to break my raid set, which is my<br>
> >> > next step in debugging this. Rather than use Raid 1 on the server, I<br>
> >> > would have 2 bricks on each server, which would allow the use of unify<br>
> >> > and the rr scheduler.<br>
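> >> ><br>
> >> > Roughly, the client-side layout that suggestion points at would look<br>
> >> > something like the following (the brick and namespace volume names are<br>
> >> > only placeholders, and the namespace brick still has to be defined as a<br>
> >> > protocol/client volume of its own):<br>
> >> ><br>
> >> > volume afr1<br>
> >> > type cluster/afr<br>
> >> > subvolumes srv1-brick1 srv2-brick1<br>
> >> > end-volume<br>
> >> ><br>
> >> > volume afr2<br>
> >> > type cluster/afr<br>
> >> > subvolumes srv1-brick2 srv2-brick2<br>
> >> > end-volume<br>
> >> ><br>
> >> > volume unify<br>
> >> > type cluster/unify<br>
> >> > option scheduler rr<br>
> >> > option namespace afr-ns # small dedicated namespace volume, ideally AFR'd as well<br>
> >> > subvolumes afr1 afr2<br>
> >> > end-volume<br>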
> >> ><br>
> >> > glusterfs-1.4.0qa32 results in<br>
> >> > [Wed Aug 06 02:01:44 2008] [notice] child pid 14025 exit signal Bus<br>
> >> > error (7)<br>
> >> > [Wed Aug 06 02:01:44 2008] [notice] child pid 14037 exit signal Bus<br>
> >> > error (7)<br>
> >> ><br>
> >> > when apache (not mod_gluster) tries to serve files off the glusterfs<br>
> >> > partition.<br>
> >> ><br>
> >> > The main issue I'm having right now is file creation speed. I realize<br>
> >> > that to create a file I need to do two network ops for each file<br>
> >> > created, but judging from the kernel-untar results, it seems that<br>
> >> > something is horribly wrong in my configuration.<br>
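> >> ><br>
> >> > (As a rough sanity check, treating the file count as approximate: a<br>
> >> > 2.6.26 tree is on the order of 25,000 files, so 17 minutes works out to<br>
> >> > roughly 1020 s / 25,000 ~= 40 ms per file created -- far more than a<br>
> >> > couple of round trips on a gigabit LAN should cost.)<br>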
> >> ><br>
> >> > I've tried moving the performance translators around, but some don't<br>
> >> > seem to make much difference on the server side, and the ones that<br>
> >> > appear to make some difference client side, don't seem to help the<br>
> >> > file creation issue.<br>
> >> ><br>
> >> > On a side note, <a href="http://zresearch.com" target="_blank">zresearch.com</a>: I emailed through your contact form<br>
> >> > and haven't heard back -- please provide a quote for generating the<br>
> >> > configuration and contact me offlist.<br>
> >> ><br>
> >> > ===/etc/gluster/gluster-server.vol<br>
> >> > volume posix<br>
> >> > type storage/posix<br>
> >> > option directory /gfsvol/data<br>
> >> > end-volume<br>
> >> ><br>
> >> > volume plocks<br>
> >> > type features/posix-locks<br>
> >> > subvolumes posix<br>
> >> > end-volume<br>
> >> ><br>
> >> > volume writebehind<br>
> >> > type performance/write-behind<br>
> >> > option flush-behind off # default is 'off'<br>
> >> > subvolumes plocks<br>
> >> > end-volume<br>
> >> ><br>
> >> > volume readahead<br>
> >> > type performance/read-ahead<br>
> >> > option page-size 128kB # 256KB is the default option<br>
> >> > option page-count 4 # 2 is default option<br>
> >> > option force-atime-update off # default is off<br>
> >> > subvolumes writebehind<br>
> >> > end-volume<br>
> >> ><br>
> >> > volume brick<br>
> >> > type performance/io-threads<br>
> >> > option thread-count 4 # default is 1<br>
> >> > option cache-size 64MB #64MB<br>
> >> > subvolumes readahead<br>
> >> > end-volume<br>
> >> ><br>
> >> > volume server<br>
> >> > type protocol/server<br>
> >> > option transport-type tcp/server<br>
> >> > subvolumes brick<br>
> >> > option auth.ip.brick.allow 10.8.1.*,127.0.0.1<br>
> >> > end-volume<br>
> >> ><br>
> >> ><br>
> >> > ===/etc/glusterfs/gluster-client.vol<br>
> >> ><br>
> >> > volume brick1<br>
> >> > type protocol/client<br>
> >> > option transport-type tcp/client # for TCP/IP transport<br>
> >> > option remote-host 10.8.1.9 # IP address of server1<br>
> >> > option remote-subvolume brick # name of the remote volume on<br>
> >> > server1<br>
> >> > end-volume<br>
> >> ><br>
> >> > volume brick2<br>
> >> > type protocol/client<br>
> >> > option transport-type tcp/client # for TCP/IP transport<br>
> >> > option remote-host 10.8.1.10 # IP address of server2<br>
> >> > option remote-subvolume brick # name of the remote volume on<br>
> >> > server2<br>
> >> > end-volume<br>
> >> ><br>
> >> > volume afr<br>
> >> > type cluster/afr<br>
> >> > subvolumes brick1 brick2<br>
> >> > end-volume<br>
> >> ><br>
> >> > volume writebehind<br>
> >> > type performance/write-behind<br>
> >> > option aggregate-size 0MB<br>
> >> > option flush-behind off # default is 'off'<br>
> >> > subvolumes afr<br>
> >> > end-volume<br>
> >> ><br>
> >> > volume readahead<br>
> >> > type performance/read-ahead<br>
> >> > option page-size 128kB # 256KB is the default option<br>
> >> > option page-count 4 # 2 is default option<br>
> >> > option force-atime-update off # default is off<br>
> >> > subvolumes writebehind<br>
> >> > end-volume<br>
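> >> ><br>
> >> > (For completeness, the way these two volfiles would typically be started<br>
> >> > on 1.3.x; the /gfs mountpoint matches the test paths above, but otherwise<br>
> >> > treat this as an illustration only:<br>
> >> ><br>
> >> > glusterfsd -f /etc/gluster/gluster-server.vol # on each server<br>
> >> > glusterfs -f /etc/glusterfs/gluster-client.vol /gfs # on each client<br>
> >> > )<br>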
> >> ><br>
> >><br>
> >><br>
> ><br>
> ><br>
> ><br>
><br>
><br>
<br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</a><br>
<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>Amar Tumballi<br>Gluster/GlusterFS Hacker<br>[bulde on #gluster/<a href="http://irc.gnu.org">irc.gnu.org</a>]<br><a href="http://www.zresearch.com">http://www.zresearch.com</a> - Commoditizing Super Storage!<br>
</div>