<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
On 01/13/2013 04:14 AM, glusterzhxue wrote:<br>
<blockquote cite="mid:2013011320143501335810@163.com" type="cite">
<div>Hi all,</div>
<div>We placed a virtual machine image (KVM-based) on a Gluster file
system, but the I/O performance inside the VM is only half of the
available bandwidth.</div>
<div>If we mount the same volume on a physical machine instead, the
physical host reaches full bandwidth. We repeated the test many
times, with the same result each time.</div>
</blockquote>
What you're seeing is the difference between bandwidth and latency.
When you write a big file directly to a GlusterFS mount, you're
streaming data, so you measure raw bandwidth. Writing the same file
inside a VM's filesystem is not the same workload: the guest
filesystem also performs journaling, inode updates, and similar
metadata work that a plain client write avoids, so it issues many
more I/O operations per second. Each of those extra operations pays
the latency present in both your network and the context switching
through FUSE, and that amplified latency is what halves your
throughput.<br>
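To make the latency-amplification point concrete, here is a back-of-the-envelope sketch. The numbers (1 ms per round trip, 4 KB metadata writes) are purely illustrative assumptions, not measurements from your setup:

```python
# Illustrative only: how per-operation latency caps throughput when
# I/O is small and serial. Latency and op sizes below are assumed
# example values, not measurements.

def throughput_mb_s(op_size_kb, latency_ms):
    """Throughput if every operation waits out a full round trip."""
    ops_per_sec = 1000.0 / latency_ms
    return ops_per_sec * op_size_kb / 1024.0

# Large streaming writes (1 MB per op) hide a 1 ms latency well:
print(throughput_mb_s(1024, 1.0))  # 1000.0 MB/s ceiling
# Small 4 KB journal/inode updates at the same latency are crippled:
print(throughput_mb_s(4, 1.0))     # ~3.9 MB/s ceiling
```

The same link delivers wildly different throughput depending on how many round trips the workload forces, which is exactly the guest-filesystem effect described above.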
<br>
You have two options:<br>
1. Mount the GlusterFS volume from within the VM and host the data
you're operating on there. This avoids all the additional overhead
of managing a filesystem on top of FUSE.<br>
2. Try the 3.4 QA release together with the native GlusterFS
(libgfapi) support in recent qemu-kvm builds.<br>
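<br>
For example (the server name, volume name, and image path below are placeholders for your setup, and exact qemu flags depend on your build):<br>

```shell
# Option 1: mount the volume inside the guest and keep the working
# data there, bypassing the guest-filesystem-on-FUSE double overhead.
# "server1" and "myvol" are placeholder names.
mount -t glusterfs server1:/myvol /mnt/gluster

# Option 2: with GlusterFS 3.4 QA and a qemu-kvm built with gluster
# support, point the drive at the volume directly over libgfapi:
qemu-system-x86_64 -drive file=gluster://server1/myvol/vm.img,if=virtio
```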
<br>
</body>
</html>