What do you mean by "small files"? 16 KB? 160 KB? 16 MB?<div>Do you know any workaround, or any other software for this?<br><br>I too am trying to create a clustered storage for many</div><div>small files.<br><br>
<div class="gmail_quote">2012/8/10 Philip Poten <span dir="ltr"><<a href="mailto:philip.poten@gmail.com" target="_blank">philip.poten@gmail.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Ivan,<br>
<br>
that's because Gluster has really poor "many small files" performance<br>
due to its architecture.<br>
<br>
On every stat() call (and rsync issues plenty of them), all replicas<br>
are checked for integrity.<br>
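To see why that per-file overhead dominates, here is some rough, purely illustrative arithmetic using the figures from Ivan's mail below (32768 files averaging 16 KB, rsync taking over 3 hours) — not a measurement, just a sketch:

```python
# Rough model: per-file round-trip latency, not bandwidth, sets the pace.
files = 32768                      # files in Ivan's test directory
avg_size_kb = 16                   # stated average file size
payload_mb = files * avg_size_kb / 1024
observed_s = 3 * 3600              # "more than 3 hours" for the rsync
per_file_ms = observed_s / files * 1000
print(f"payload: {payload_mb:.0f} MB")         # only 512 MB of data
print(f"per-file cost: {per_file_ms:.0f} ms")  # ~330 ms spent per file
```

Half a gigabyte over a gigabit link should take seconds; at ~330 ms of round-trips per file, the file count, not the byte count, is what makes it take hours.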
<br>
regards,<br>
Philip<br>
<br>
2012/8/10 Ivan Dimitrov <<a href="mailto:dobber@amln.net">dobber@amln.net</a>>:<br>
> So I stopped a node to check the BIOS, and after it came back up, the rebalance<br>
> kicked in. Those are the kinds of speeds I was hoping for on a normal write. The<br>
> rebalance is much faster than my rsync/cp.<br>
><br>
> <a href="https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%202.04.09%20PM.png" target="_blank">https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%202.04.09%20PM.png</a><br>
><br>
> Best Regards<br>
> Ivan Dimitrov<br>
><br>
><br>
> On 8/10/12 1:23 PM, Ivan Dimitrov wrote:<br>
>><br>
>> Hello<br>
>> What am I doing wrong?!?<br>
>><br>
>> I have a test setup of 4 identical servers with 2 disks each, in<br>
>> distribute-replicate 2. All servers are connected to a gigabit switch.<br>
>><br>
>> I am experiencing really slow speeds in everything I do: slow writes, slow<br>
>> reads, not to mention random writes/reads.<br>
>><br>
>> Here is an example:<br>
>> random-files is a directory containing 32768 files with an average size of 16 KB.<br>
>> [root@gltclient]:~# rsync -a /root/speedtest/random-files/<br>
>> /home/gltvolume/<br>
>> ^^ This will take more than 3 hours.<br>
>><br>
>> On any of the servers, if I run "iostat", the disks are not loaded at all:<br>
>><br>
>> <a href="https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%201.08.54%20PM.png" target="_blank">https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%201.08.54%20PM.png</a><br>
>><br>
>> The results are similar on all servers.<br>
>><br>
>> Here is an example of a simple "ls" command on the content.<br>
>> [root@gltclient]:~# unalias ls<br>
>> [root@gltclient]:~# /usr/bin/time -f "%e seconds" ls /home/gltvolume/ | wc<br>
>> -l<br>
>> 2.81 seconds<br>
>> 5393<br>
>><br>
>> Almost 3 seconds to list 5,000 files?!?! When there are 32,000, the ls<br>
>> takes around 35-45 seconds.<br>
>><br>
>> This directory is on a local disk:<br>
>> [root@gltclient]:~# /usr/bin/time -f "%e seconds" ls<br>
>> /root/speedtest/random-files/ | wc -l<br>
>> 1.45 seconds<br>
>> 32768<br>
>><br>
>> [root@gltclient]:~# /usr/bin/time -f "%e seconds" cat /home/gltvolume/*<br>
>> >/dev/null<br>
>> 190.50 seconds<br>
>><br>
>> [root@gltclient]:~# /usr/bin/time -f "%e seconds" du -sh /home/gltvolume/<br>
>> 126M /home/gltvolume/<br>
>> 75.23 seconds<br>
>><br>
>><br>
>> Here is the volume information.<br>
>><br>
>> [root@glt1]:~# gluster volume info<br>
>><br>
>> Volume Name: gltvolume<br>
>> Type: Distributed-Replicate<br>
>> Volume ID: 16edd852-8d23-41da-924d-710b753bb374<br>
>> Status: Started<br>
>> Number of Bricks: 4 x 2 = 8<br>
>> Transport-type: tcp<br>
>> Bricks:<br>
>> Brick1: 1.1.74.246:/home/sda3<br>
>> Brick2: glt2.network.net:/home/sda3<br>
>> Brick3: 1.1.74.246:/home/sdb1<br>
>> Brick4: glt2.network.net:/home/sdb1<br>
>> Brick5: glt3.network.net:/home/sda3<br>
>> Brick6: gltclient.network.net:/home/sda3<br>
>> Brick7: glt3.network.net:/home/sdb1<br>
>> Brick8: gltclient.network.net:/home/sdb1<br>
>> Options Reconfigured:<br>
>> performance.io-thread-count: 32<br>
>> performance.cache-size: 256MB<br>
>> cluster.self-heal-daemon: on<br>
>><br>
>><br>
>> [root@glt1]:~# gluster volume status all detail<br>
>> Status of volume: gltvolume<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick 1.1.74.246:/home/sda3<br>
>> Port : 24009<br>
>> Online : Y<br>
>> Pid : 1479<br>
>> File System : ext4<br>
>> Device : /dev/sda3<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 179.3GB<br>
>> Total Disk Space : 179.7GB<br>
>> Inode Count : 11968512<br>
>> Free Inodes : 11901550<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick glt2.network.net:/home/sda3<br>
>> Port : 24009<br>
>> Online : Y<br>
>> Pid : 1589<br>
>> File System : ext4<br>
>> Device : /dev/sda3<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 179.3GB<br>
>> Total Disk Space : 179.7GB<br>
>> Inode Count : 11968512<br>
>> Free Inodes : 11901550<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick 1.1.74.246:/home/sdb1<br>
>> Port : 24010<br>
>> Online : Y<br>
>> Pid : 1485<br>
>> File System : ext4<br>
>> Device : /dev/sdb1<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 228.8GB<br>
>> Total Disk Space : 229.2GB<br>
>> Inode Count : 15269888<br>
>> Free Inodes : 15202933<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick glt2.network.net:/home/sdb1<br>
>> Port : 24010<br>
>> Online : Y<br>
>> Pid : 1595<br>
>> File System : ext4<br>
>> Device : /dev/sdb1<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 228.8GB<br>
>> Total Disk Space : 229.2GB<br>
>> Inode Count : 15269888<br>
>> Free Inodes : 15202933<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick glt3.network.net:/home/sda3<br>
>> Port : 24009<br>
>> Online : Y<br>
>> Pid : 28963<br>
>> File System : ext4<br>
>> Device : /dev/sda3<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 179.3GB<br>
>> Total Disk Space : 179.7GB<br>
>> Inode Count : 11968512<br>
>> Free Inodes : 11906058<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick gltclient.network.net:/home/sda3<br>
>> Port : 24009<br>
>> Online : Y<br>
>> Pid : 3145<br>
>> File System : ext4<br>
>> Device : /dev/sda3<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 179.3GB<br>
>> Total Disk Space : 179.7GB<br>
>> Inode Count : 11968512<br>
>> Free Inodes : 11906058<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick glt3.network.net:/home/sdb1<br>
>> Port : 24010<br>
>> Online : Y<br>
>> Pid : 28969<br>
>> File System : ext4<br>
>> Device : /dev/sdb1<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 228.8GB<br>
>> Total Disk Space : 229.2GB<br>
>> Inode Count : 15269888<br>
>> Free Inodes : 15207375<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick gltclient.network.net:/home/sdb1<br>
>> Port : 24010<br>
>> Online : Y<br>
>> Pid : 3151<br>
>> File System : ext4<br>
>> Device : /dev/sdb1<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 228.8GB<br>
>> Total Disk Space : 229.2GB<br>
>> Inode Count : 15269888<br>
>> Free Inodes : 15207375<br>
>> _______________________________________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> <a href="http://gluster.org/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://gluster.org/cgi-bin/mailman/listinfo/gluster-users</a><br>
>><br>
><br>
</blockquote></div><br></div>