<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><meta http-equiv=Content-Type content="text/html; charset=us-ascii"><meta name=Generator content="Microsoft Word 12 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0in;
        margin-bottom:.0001pt;
        font-size:11.0pt;
        font-family:"Calibri","sans-serif";}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:blue;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:purple;
        text-decoration:underline;}
p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph
        {mso-style-priority:34;
        margin-top:0in;
        margin-right:0in;
        margin-bottom:0in;
        margin-left:.5in;
        margin-bottom:.0001pt;
        font-size:11.0pt;
        font-family:"Calibri","sans-serif";}
span.EmailStyle17
        {mso-style-type:personal-compose;
        font-family:"Calibri","sans-serif";
        color:windowtext;}
.MsoChpDefault
        {mso-style-type:export-only;}
@page WordSection1
        {size:8.5in 11.0in;
        margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
        {page:WordSection1;}
/* List Definitions */
@list l0
        {mso-list-id:1425616168;
        mso-list-type:hybrid;
        mso-list-template-ids:1446440108 -2051902426 67698713 67698715 67698703 67698713 67698715 67698703 67698713 67698715;}
@list l0:level1
        {mso-level-tab-stop:none;
        mso-level-number-position:left;
        margin-left:.75in;
        text-indent:-.25in;}
ol
        {margin-bottom:0in;}
ul
        {margin-bottom:0in;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]--></head><body lang=EN-US link=blue vlink=purple><div class=WordSection1>
<p class=MsoNormal>I am testing gluster for possible deployment. The test is over an internal network between virtual machines, but if we go to production it would probably be InfiniBand.<o:p></o:p></p>
<p class=MsoNormal><o:p> </o:p></p>
<p class=MsoNormal>Just pulled the latest binaries, namely 3.2.5-2.<o:p></o:p></p>
<p class=MsoNormal><o:p> </o:p></p>
<p class=MsoNormal>First: can anything be done to help performance? It&#8217;s rather slow doing a tar extract. Here are the volumes:<o:p></o:p></p>
<p class=MsoNormal>Volume Name: v1<o:p></o:p></p>
<p class=MsoNormal>Type: Replicate<o:p></o:p></p>
<p class=MsoNormal>Status: Started<o:p></o:p></p>
<p class=MsoNormal>Number of Bricks: 2<o:p></o:p></p>
<p class=MsoNormal>Transport-type: tcp<o:p></o:p></p>
<p class=MsoNormal>Bricks:<o:p></o:p></p>
<p class=MsoNormal>Brick1: 10.0.12.141:/data1<o:p></o:p></p>
<p class=MsoNormal>Brick2: 10.0.12.142:/data1<o:p></o:p></p>
<p class=MsoNormal>Options Reconfigured:<o:p></o:p></p>
<p class=MsoNormal>performance.flush-behind: on<o:p></o:p></p>
<p class=MsoNormal>performance.write-behind: on<o:p></o:p></p>
<p class=MsoNormal><o:p> </o:p></p>
<p class=MsoNormal>Volume Name: v2<o:p></o:p></p>
<p class=MsoNormal>Type: Replicate<o:p></o:p></p>
<p class=MsoNormal>Status: Started<o:p></o:p></p>
<p class=MsoNormal>Number of Bricks: 2<o:p></o:p></p>
<p class=MsoNormal>Transport-type: tcp<o:p></o:p></p>
<p class=MsoNormal>Bricks:<o:p></o:p></p>
<p class=MsoNormal>Brick1: 10.0.12.142:/data2<o:p></o:p></p>
<p class=MsoNormal>Brick2: 10.0.12.143:/data1<o:p></o:p></p>
<p class=MsoNormal>Options Reconfigured:<o:p></o:p></p>
<p class=MsoNormal>performance.flush-behind: on<o:p></o:p></p>
<p class=MsoNormal>performance.write-behind: on<o:p></o:p></p>
<p class=MsoNormal><o:p> </o:p></p>
<p class=MsoNormal>I added flush-behind and write-behind in hopes that they would help, but they did not.
Any other options worth trying?<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p>
<p class=MsoNormal>Is TCP_NODELAY set on the sockets (i.e., is Nagle&#8217;s algorithm disabled)? It looks like it used to be a configurable option; is it always on now?<o:p></o:p></p>
<p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p>
<p class=MsoNormal>In the time it took to tar xpf a file, I was able to do an scp -r of an already-extracted copy, from the same source disk to both replica bricks in different subdirectories, significantly faster. (I probably shouldn&#8217;t access the bricks directly like that, but this is just a test&#8230;)<o:p></o:p></p>
<p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p>
<p class=MsoNormal>In this exercise I noticed two things:<o:p></o:p></p>
<p class=MsoListParagraph style='margin-left:.75in;text-indent:-.25in;mso-list:l0 level1 lfo1'><![if !supportLists]><span style='mso-list:Ignore'>1.<span style='font:7.0pt "Times New Roman"'> </span></span><![endif]>Although the files were mostly identical, the storage used, as reported by du -k, was over 25% higher on the bricks than for the files copied over via scp. Do extended attributes, or something else added to the files, take up that much space?<o:p></o:p></p>
<p class=MsoNormal>And, of more concern (is this a bug?):<o:p></o:p></p>
<p class=MsoListParagraph style='margin-left:.75in;text-indent:-.25in;mso-list:l0 level1 lfo1'><![if !supportLists]><span style='mso-list:Ignore'>2.<span style='font:7.0pt "Times New Roman"'> </span></span><![endif]>One file didn&#8217;t extract correctly at first: it came out zero-length. On further investigation, after retrying the tar command over the top of the first attempt (it worked the second time), I noticed it was a symlink that had failed. Perhaps one of the above options caused the problem?
Either way, symlinks seem slightly buggy.<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p>
<p class=MsoNormal>Here is an interesting alternative/hack to improve performance when working with lots of small files (and to convince yourself gluster has acceptable performance for random I/O on larger files). The hack defeats some of the advantages of gluster, since you have to restrict access to one client at a time, but you still get some of the benefits, namely the fault-tolerant distributed network storage. Considering that I am mainly interested in a distributed backend store for virtual machines, this is closer to how I would use it anyway. In summary: create a large file stored on gluster, then format it and mount it as a loopback device.<o:p></o:p></p>
<p class=MsoNormal>Time to extract a tar file to a gluster mount directly: 5m 47.5s + 0.026s sync.<o:p></o:p></p>
<p class=MsoNormal>Time to extract the same tar file to a loopback filesystem created on, and mounted from, the same gluster mount: 33.8s + 6.9s sync.<o:p></o:p></p>
<p class=MsoNormal>That&#8217;s less than 1/8<sup>th</sup> the time, and much closer to the expected results.<o:p></o:p></p>
<p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p></div></body></html>
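P.S. For reference, the loopback hack I described can be sketched roughly as follows. The mount points, image path, and size here are illustrative assumptions, not my exact setup, and the commands need root plus an existing gluster client mount:

```shell
GLUSTER_MNT=/mnt/gluster        # assumed gluster client mount point
IMG=$GLUSTER_MNT/vm-store.img   # backing file that lives on the gluster volume
LOOP_MNT=/mnt/loopfs            # where the loopback filesystem gets mounted

# Create a sparse 10 GiB backing file on the gluster volume
# (count=0 writes nothing; seek extends the file).
dd if=/dev/zero of="$IMG" bs=1M count=0 seek=10240

# Put a local filesystem inside it; small-file operations now become
# writes into one big file as far as gluster is concerned.
mkfs.ext4 -F "$IMG"

# Mount it via a loop device -- one client at a time only!
mkdir -p "$LOOP_MNT"
mount -o loop "$IMG" "$LOOP_MNT"

# Small-file workloads (tar extracts, etc.) now go through the local
# ext4 metadata path instead of a network round trip per file.
tar xpf /path/to/source.tar -C "$LOOP_MNT"
```

The trade-off is as stated above: you lose concurrent multi-client access to the files inside the image, but keep the replicated, fault-tolerant storage underneath.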