<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<meta name="Generator" content="Microsoft Word 14 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
        {font-family:Tahoma;
        panose-1:2 11 6 4 3 5 4 4 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0cm;
        margin-bottom:.0001pt;
        font-size:12.0pt;
        font-family:"Times New Roman","serif";
        color:black;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:blue;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:purple;
        text-decoration:underline;}
span.EmailStyle17
        {mso-style-type:personal-reply;
        font-family:"Calibri","sans-serif";
        color:#1F497D;}
.MsoChpDefault
        {mso-style-type:export-only;
        font-size:10.0pt;}
@page WordSection1
        {size:612.0pt 792.0pt;
        margin:72.0pt 72.0pt 72.0pt 72.0pt;}
div.WordSection1
        {page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body bgcolor="white" lang="EN-GB" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">I think Gluster, at its current level of development, is better suited to multimedia and archival files than to small files or to running virtual machines.
It still requires a fair amount of development, which hopefully Red Hat will put in place.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Fernando<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0cm 0cm 0cm">
<p class="MsoNormal"><b><span lang="EN-US" style="font-size:10.0pt;font-family:"Tahoma","sans-serif";color:windowtext">From:</span></b><span lang="EN-US" style="font-size:10.0pt;font-family:"Tahoma","sans-serif";color:windowtext"> gluster-users-bounces@gluster.org
[mailto:gluster-users-bounces@gluster.org] <b>On Behalf Of </b>Ivan Dimitrov<br>
<b>Sent:</b> 13 August 2012 08:33<br>
<b>To:</b> gluster-users@gluster.org<br>
<b>Subject:</b> Re: [Gluster-users] Gluster speed sooo slow<o:p></o:p></span></p>
</div>
</div>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<p class="MsoNormal">There is a big difference between working with small files (around 16 KB) and big files (2 MB). Performance is much better with big files. Which is too bad for me ;(<br>
<br>
On 8/11/12 2:15 AM, Gandalf Corvotempesta wrote:<o:p></o:p></p>
</div>
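[The gap Ivan describes is per-file overhead (create/stat/close, plus replica checks on a replicated volume) rather than raw bandwidth. A minimal, self-contained sketch of the comparison; the mktemp paths are stand-ins, so point DST at a real Gluster mount such as /home/gltvolume to reproduce the effect:]

```shell
# Hypothetical paths: mktemp keeps the sketch self-contained; in
# practice DST would be the GlusterFS FUSE mount.
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/small" "$SRC/big"

# Same total bytes both ways: 64 x 16 KB files vs one 1 MB file.
for i in $(seq 1 64); do
  dd if=/dev/zero of="$SRC/small/f$i" bs=16k count=1 2>/dev/null
done
dd if=/dev/zero of="$SRC/big/one" bs=1M count=1 2>/dev/null

# On a replicated Gluster volume, the small-file copy pays a
# per-file round trip on every replica, so it is far slower even
# though the byte count is identical.
time cp -r "$SRC/small" "$DST/"
time cp -r "$SRC/big" "$DST/"
```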
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">What do you mean by "small files"? 16 KB? 160 KB? 16 MB? <o:p>
</o:p></p>
<div>
<p class="MsoNormal">Do you know any workaround or any other software for this?<br>
<br>
Me too, I'm trying to create clustered storage for many<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt">small files<o:p></o:p></p>
<div>
<p class="MsoNormal">2012/8/10 Philip Poten <<a href="mailto:philip.poten@gmail.com" target="_blank">philip.poten@gmail.com</a>><o:p></o:p></p>
<p class="MsoNormal">Hi Ivan,<br>
<br>
that's because Gluster has really bad "many small files" performance<br>
due to its architecture.<br>
<br>
On every stat() call (and rsync makes plenty of them), all replicas<br>
are checked for integrity.<br>
<br>
regards,<br>
Philip<br>
<br>
2012/8/10 Ivan Dimitrov <<a href="mailto:dobber@amln.net">dobber@amln.net</a>>:<br>
> So I stopped a node to check the BIOS and after it went up, the rebalance<br>
> kicked in. I was looking for those kind of speeds on a normal write. The<br>
> rebalance is much faster than my rsync/cp.<br>
><br>
> <a href="https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%202.04.09%20PM.png" target="_blank">
https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%202.04.09%20PM.png</a><br>
><br>
> Best Regards<br>
> Ivan Dimitrov<br>
><br>
><br>
> On 8/10/12 1:23 PM, Ivan Dimitrov wrote:<br>
>><br>
>> Hello<br>
>> What am I doing wrong?!?<br>
>><br>
>> I have a test setup with 4 identical servers with 2 disks each in<br>
>> distribute-replicate 2. All servers are connected to a GB switch.<br>
>><br>
>> I am experiencing really slow speeds at anything I do. Slow write, slow<br>
>> read, not to mention random write/reads.<br>
>><br>
>> Here is an example:<br>
>> random-files is a directory with 32768 files with average size 16kb.<br>
>> [root@gltclient]:~# rsync -a /root/speedtest/random-files/<br>
>> /home/gltvolume/<br>
>> ^^ This will take more than 3 hours.<br>
>><br>
>> On any of the servers if I do "iostat" the disks are not loaded at all:<br>
>><br>
>> <a href="https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%201.08.54%20PM.png" target="_blank">
https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%201.08.54%20PM.png</a><br>
>><br>
>> This is similar result for all servers.<br>
>><br>
>> Here is an example of simple "ls" command on the content.<br>
>> [root@gltclient]:~# unalias ls<br>
>> [root@gltclient]:~# /usr/bin/time -f "%e seconds" ls /home/gltvolume/ | wc<br>
>> -l<br>
>> 2.81 seconds<br>
>> 5393<br>
>><br>
>> almost 3 seconds to display 5,000 files?! When there are 32,000, the ls<br>
>> will take around 35-45 seconds.<br>
>><br>
>> This directory is on local disk:<br>
>> [root@gltclient]:~# /usr/bin/time -f "%e seconds" ls<br>
>> /root/speedtest/random-files/ | wc -l<br>
>> 1.45 seconds<br>
>> 32768<br>
>><br>
>> [root@gltclient]:~# /usr/bin/time -f "%e seconds" cat /home/gltvolume/*<br>
>> >/dev/null<br>
>> 190.50 seconds<br>
>><br>
>> [root@gltclient]:~# /usr/bin/time -f "%e seconds" du -sh /home/gltvolume/<br>
>> 126M /home/gltvolume/<br>
>> 75.23 seconds<br>
>><br>
>><br>
>> Here is the volume information.<br>
>><br>
>> [root@glt1]:~# gluster volume info<br>
>><br>
>> Volume Name: gltvolume<br>
>> Type: Distributed-Replicate<br>
>> Volume ID: 16edd852-8d23-41da-924d-710b753bb374<br>
>> Status: Started<br>
>> Number of Bricks: 4 x 2 = 8<br>
>> Transport-type: tcp<br>
>> Bricks:<br>
>> Brick1: 1.1.74.246:/home/sda3<br>
>> Brick2: glt2.network.net:/home/sda3<br>
>> Brick3: 1.1.74.246:/home/sdb1<br>
>> Brick4: glt2.network.net:/home/sdb1<br>
>> Brick5: glt3.network.net:/home/sda3<br>
>> Brick6: gltclient.network.net:/home/sda3<br>
>> Brick7: glt3.network.net:/home/sdb1<br>
>> Brick8: gltclient.network.net:/home/sdb1<br>
>> Options Reconfigured:<br>
>> performance.io-thread-count: 32<br>
>> performance.cache-size: 256MB<br>
>> cluster.self-heal-daemon: on<br>
>><br>
>><br>
>> [root@glt1]:~# gluster volume status all detail<br>
>> Status of volume: gltvolume<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick 1.1.74.246:/home/sda3<br>
>> Port : 24009<br>
>> Online : Y<br>
>> Pid : 1479<br>
>> File System : ext4<br>
>> Device : /dev/sda3<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 179.3GB<br>
>> Total Disk Space : 179.7GB<br>
>> Inode Count : 11968512<br>
>> Free Inodes : 11901550<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick glt2.network.net:/home/sda3<br>
>> Port : 24009<br>
>> Online : Y<br>
>> Pid : 1589<br>
>> File System : ext4<br>
>> Device : /dev/sda3<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 179.3GB<br>
>> Total Disk Space : 179.7GB<br>
>> Inode Count : 11968512<br>
>> Free Inodes : 11901550<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick 1.1.74.246:/home/sdb1<br>
>> Port : 24010<br>
>> Online : Y<br>
>> Pid : 1485<br>
>> File System : ext4<br>
>> Device : /dev/sdb1<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 228.8GB<br>
>> Total Disk Space : 229.2GB<br>
>> Inode Count : 15269888<br>
>> Free Inodes : 15202933<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick glt2.network.net:/home/sdb1<br>
>> Port : 24010<br>
>> Online : Y<br>
>> Pid : 1595<br>
>> File System : ext4<br>
>> Device : /dev/sdb1<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 228.8GB<br>
>> Total Disk Space : 229.2GB<br>
>> Inode Count : 15269888<br>
>> Free Inodes : 15202933<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick glt3.network.net:/home/sda3<br>
>> Port : 24009<br>
>> Online : Y<br>
>> Pid : 28963<br>
>> File System : ext4<br>
>> Device : /dev/sda3<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 179.3GB<br>
>> Total Disk Space : 179.7GB<br>
>> Inode Count : 11968512<br>
>> Free Inodes : 11906058<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick gltclient.network.net:/home/sda3<br>
>> Port : 24009<br>
>> Online : Y<br>
>> Pid : 3145<br>
>> File System : ext4<br>
>> Device : /dev/sda3<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 179.3GB<br>
>> Total Disk Space : 179.7GB<br>
>> Inode Count : 11968512<br>
>> Free Inodes : 11906058<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick glt3.network.net:/home/sdb1<br>
>> Port : 24010<br>
>> Online : Y<br>
>> Pid : 28969<br>
>> File System : ext4<br>
>> Device : /dev/sdb1<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 228.8GB<br>
>> Total Disk Space : 229.2GB<br>
>> Inode Count : 15269888<br>
>> Free Inodes : 15207375<br>
>><br>
>> ------------------------------------------------------------------------------<br>
>> Brick : Brick gltclient.network.net:/home/sdb1<br>
>> Port : 24010<br>
>> Online : Y<br>
>> Pid : 3151<br>
>> File System : ext4<br>
>> Device : /dev/sdb1<br>
>> Mount Options : rw,noatime<br>
>> Inode Size : 256<br>
>> Disk Space Free : 228.8GB<br>
>> Total Disk Space : 229.2GB<br>
>> Inode Count : 15269888<br>
>> Free Inodes : 15207375<br>
>> _______________________________________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> <a href="http://gluster.org/cgi-bin/mailman/listinfo/gluster-users" target="_blank">
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users</a><br>
>><br>
><br>
<o:p></o:p></p>
</div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
</blockquote>
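[On Gandalf's workaround question: given Philip's point that every stat() is checked against all replicas, one commonly suggested approach is to stream a single tar rather than letting rsync or cp issue a stat per file. A hedged sketch; the mktemp directories are stand-ins for the real source tree and the /home/gltvolume mount:]

```shell
# Stand-in paths; in practice DST would be the FUSE mount point
# (e.g. /home/gltvolume) where per-file stat() is expensive.
SRC=$(mktemp -d); DST=$(mktemp -d)
for i in $(seq 1 50); do echo "data$i" > "$SRC/f$i"; done

# One streaming pipe: a single readdir pass over the source and
# sequential creates on the destination, instead of rsync's
# stat-per-file comparison against the slow mount. (No delta/resume
# support, so this only suits an initial bulk copy.)
tar -C "$SRC" -cf - . | tar -C "$DST" -xf -
```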
<p class="MsoNormal"><o:p> </o:p></p>
</div>
</body>
</html>