<div style="line-height:1.7;color:#000000;font-size:14px;font-family:arial"><br><br>Thanks a lot. I tested it, and it works!<div><br></div><div>Xiao Li<br><br><br><div></div><div id="divNeteaseMailCard"></div><br><pre><br>At 2012-11-06 20:34:55, "Brian Foster" <bfoster@redhat.com> wrote:
>On 11/05/2012 08:38 PM, Xiao Li wrote:
>> I have 4 Dell 2970 servers; three of them have 146G x 6 hard disks, and
>> one has 72G x 6:
>>
>> Each server's mount info is:
>> /dev/sda4 on /exp1 type xfs (rw)
>> /dev/sdb1 on /exp2 type xfs (rw)
>> /dev/sdc1 on /exp3 type xfs (rw)
>> /dev/sdd1 on /exp4 type xfs (rw)
>> /dev/sde1 on /exp5 type xfs (rw)
>> /dev/sdf1 on /exp6 type xfs (rw)
>>
>> I created a gluster volume with stripe 4:
>> gluster volume create test-volume3 stripe 4 transport tcp \
>> 172.16.20.231:/exp4 \
>> 172.16.20.232:/exp4 \
>> 172.16.20.233:/exp4 \
>> 172.16.20.235:/exp4
>>
>> Then I mounted the volume on client 172.16.20.230:
>> mount -t glusterfs 192.168.106.231:/test-volume3 /gfs3
>> and wrote a 10G file into /gfs3 with dd:
>> dd if=/dev/zero of=/gfs3/3 bs=1M count=10240
>> 10240+0 records in
>> 10240+0 records out
>> 10737418240 bytes (11 GB) copied, 119.515 s, 89.8 MB/s
>> I am very confused about this
>> [root@node231 ~]# du -hs /exp4
>> 10G     /exp4
>> [root@node232 ~]# du -hs /exp4
>> 10G     /exp4
>> [root@node233 ~]# du -hs /exp4
>> 10G     /exp4
>> [root@node235 ~]# du -hs /exp4
>> 10G     /exp4
>> I understand that stripe 4 should put 1/4 of the data on each brick, so
>> why is there 10G on each brick?
>> Can someone explain it? Thank you.
>>
>
>The default stripe data layout conflicts with XFS's default speculative
>preallocation behavior: XFS preallocates space beyond the end of a file,
>and the stripe translator continuously seeks past this space, making the
>preallocation permanent.
>
>You can address this by 1) enabling the cluster.stripe-coalesce
>translator option in gluster, or 2) setting the allocsize mount option
>(e.g., allocsize=128k) on the XFS bricks. Note that the latter option
>will increase the likelihood of fragmentation on the backend filesystem.
>
>Brian
>
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>
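For anyone who finds this thread later, the two fixes Brian describes look roughly like this. This is only a sketch: the volume name and brick device come from my setup above, allocsize=128k is the example value from Brian's mail, and an XFS remount may not pick up a changed allocsize, so I unmount and remount the brick.

```shell
# Option 1: enable the stripe-coalesce translator option on the volume
# (run on any server in the trusted storage pool)
gluster volume set test-volume3 cluster.stripe-coalesce on

# Option 2: cap XFS speculative preallocation on each brick filesystem.
# Stop gluster's use of the brick first, then remount with allocsize.
# /dev/sdd1 is the /exp4 brick device from the mount list above.
umount /exp4
mount -o allocsize=128k /dev/sdd1 /exp4
```

Option 1 is per-volume and needs no brick downtime; option 2 has to be repeated on every brick and, as Brian notes, can increase fragmentation on the backend filesystem.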
</pre></div></div>