<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content="text/html; charset=GB2312" http-equiv=Content-Type>
<STYLE>
BLOCKQUOTE {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em
}
OL {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
UL {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
P {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
BODY {
        LINE-HEIGHT: 1.5; FONT-FAMILY: SimSun; COLOR: #000080; FONT-SIZE: 10.5pt
}
</STYLE>

<META name=GENERATOR content="MSHTML 9.00.8112.16457"></HEAD>
<BODY style="MARGIN: 10px">
<DIV>Thanks for all the responses.</DIV>
<DIV>It seems I need to describe our problem further. When the VM host mounts the 
gluster volume via NFS (v3), writing large amounts of data from inside the VM 
reaches full bandwidth. However, when the host mounts the same volume with the 
native gluster client, writes from inside the VM (whose image lives on gluster) 
reach only about half the bandwidth.</DIV>
<DIV>Since we don't want VM users to see our gluster file system, mounting 
gluster inside the VM is not allowed.</DIV>
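<DIV>For reference, the two host-side setups we compared look like this (the server name "gluster-server", volume name "vmstore", and mount point below are placeholders, not our real names):</DIV>

```shell
# NFS v3 mount of the gluster volume on the VM host
# (this is the setup that reaches full bandwidth for us)
mount -t nfs -o vers=3,tcp gluster-server:/vmstore /var/lib/libvirt/images

# Native gluster (FUSE) client mount of the same volume
# (this is the setup that only reaches about half the bandwidth)
mount -t glusterfs gluster-server:/vmstore /var/lib/libvirt/images
```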
<DIV>By the way, where can I get the 3.4 qa version of Gluster?</DIV>
<DIV>&nbsp;</DIV>
<DIV>Regards,</DIV>
<HR style="WIDTH: 210px; HEIGHT: 1px" align=left color=#b5c4df SIZE=1>

<DIV><SPAN>zhxue</SPAN></DIV>
<DIV>&nbsp;</DIV>
<DIV 
style="BORDER-BOTTOM: medium none; BORDER-LEFT: medium none; PADDING-BOTTOM: 0cm; PADDING-LEFT: 0cm; PADDING-RIGHT: 0cm; BORDER-TOP: #b5c4df 1pt solid; BORDER-RIGHT: medium none; PADDING-TOP: 3pt">
<DIV 
style="PADDING-BOTTOM: 8px; PADDING-LEFT: 8px; PADDING-RIGHT: 8px; BACKGROUND: #efefef; COLOR: #000000; FONT-SIZE: 12px; PADDING-TOP: 8px">
<DIV><B>From:</B>&nbsp;<A 
href="mailto:gluster-users-request@gluster.org">gluster-users-request</A></DIV>
<DIV><B>Date:</B>&nbsp;2013-01-14&nbsp;20:00</DIV>
<DIV><B>To:</B>&nbsp;<A 
href="mailto:gluster-users@gluster.org">gluster-users</A></DIV>
<DIV><B>Subject:</B>&nbsp;Gluster-users Digest, Vol 57, Issue 
31</DIV></DIV></DIV>
<DIV>
<DIV>Send&nbsp;Gluster-users&nbsp;mailing&nbsp;list&nbsp;submissions&nbsp;to</DIV>
<DIV>gluster-users@gluster.org</DIV>
<DIV>&nbsp;</DIV>
<DIV>To&nbsp;subscribe&nbsp;or&nbsp;unsubscribe&nbsp;via&nbsp;the&nbsp;World&nbsp;Wide&nbsp;Web,&nbsp;visit</DIV>
<DIV>http://supercolony.gluster.org/mailman/listinfo/gluster-users</DIV>
<DIV>or,&nbsp;via&nbsp;email,&nbsp;send&nbsp;a&nbsp;message&nbsp;with&nbsp;subject&nbsp;or&nbsp;body&nbsp;'help'&nbsp;to</DIV>
<DIV>gluster-users-request@gluster.org</DIV>
<DIV>&nbsp;</DIV>
<DIV>You&nbsp;can&nbsp;reach&nbsp;the&nbsp;person&nbsp;managing&nbsp;the&nbsp;list&nbsp;at</DIV>
<DIV>gluster-users-owner@gluster.org</DIV>
<DIV>&nbsp;</DIV>
<DIV>When&nbsp;replying,&nbsp;please&nbsp;edit&nbsp;your&nbsp;Subject&nbsp;line&nbsp;so&nbsp;it&nbsp;is&nbsp;more&nbsp;specific</DIV>
<DIV>than&nbsp;"Re:&nbsp;Contents&nbsp;of&nbsp;Gluster-users&nbsp;digest..."</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>Today's&nbsp;Topics:</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;1.&nbsp;IO&nbsp;performance&nbsp;cut&nbsp;down&nbsp;when&nbsp;&nbsp;VM&nbsp;on&nbsp;Gluster&nbsp;(glusterzhxue)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;2.&nbsp;Re:&nbsp;IO&nbsp;performance&nbsp;cut&nbsp;down&nbsp;when&nbsp;&nbsp;VM&nbsp;on&nbsp;Gluster&nbsp;(Joe&nbsp;Julian)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;3.&nbsp;Re:&nbsp;IO&nbsp;performance&nbsp;cut&nbsp;down&nbsp;when&nbsp;&nbsp;VM&nbsp;on&nbsp;Gluster</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(Stephan&nbsp;von&nbsp;Krawczynski)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;4.&nbsp;Re:&nbsp;IO&nbsp;performance&nbsp;cut&nbsp;down&nbsp;when&nbsp;VM&nbsp;on&nbsp;Gluster&nbsp;(Bharata&nbsp;B&nbsp;Rao)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;5.&nbsp;dm-glusterfs&nbsp;(was&nbsp;Re:&nbsp;IO&nbsp;performance&nbsp;cut&nbsp;down&nbsp;when&nbsp;VM&nbsp;on</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Gluster)&nbsp;(Jeff&nbsp;Darcy)</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>----------------------------------------------------------------------</DIV>
<DIV>&nbsp;</DIV>
<DIV>Message:&nbsp;1</DIV>
<DIV>Date:&nbsp;Sun,&nbsp;13&nbsp;Jan&nbsp;2013&nbsp;20:14:36&nbsp;+0800</DIV>
<DIV>From:&nbsp;glusterzhxue&nbsp;&lt;glusterzhxue@163.com&gt;</DIV>
<DIV>To:&nbsp;gluster-users&nbsp;&lt;gluster-users@gluster.org&gt;</DIV>
<DIV>Subject:&nbsp;[Gluster-users]&nbsp;IO&nbsp;performance&nbsp;cut&nbsp;down&nbsp;when&nbsp;&nbsp;VM&nbsp;on&nbsp;Gluster</DIV>
<DIV>Message-ID:&nbsp;&lt;2013011320143501335810@163.com&gt;</DIV>
<DIV>Content-Type:&nbsp;text/plain;&nbsp;charset="gb2312"</DIV>
<DIV>&nbsp;</DIV>
<DIV>Hi&nbsp;all,</DIV>
<DIV>We&nbsp;placed&nbsp;a&nbsp;Virtual&nbsp;Machine&nbsp;image&nbsp;(based&nbsp;on&nbsp;KVM)&nbsp;on&nbsp;a&nbsp;gluster&nbsp;file&nbsp;system,&nbsp;but&nbsp;IO&nbsp;performance&nbsp;of&nbsp;the&nbsp;VM&nbsp;is&nbsp;only&nbsp;half&nbsp;of&nbsp;the&nbsp;bandwidth.</DIV>
<DIV>If&nbsp;we&nbsp;mount&nbsp;it&nbsp;on&nbsp;a&nbsp;physical&nbsp;machine&nbsp;using&nbsp;the&nbsp;same&nbsp;volume&nbsp;as&nbsp;the&nbsp;above&nbsp;VM,&nbsp;physical&nbsp;host&nbsp;reaches&nbsp;full&nbsp;bandwidth.&nbsp;We&nbsp;performed&nbsp;it&nbsp;many&nbsp;times,&nbsp;but&nbsp;each&nbsp;had&nbsp;the&nbsp;same&nbsp;result.</DIV>
<DIV>Anybody&nbsp;could&nbsp;help&nbsp;us?</DIV>
<DIV>&nbsp;</DIV>
<DIV>Thanks</DIV>
<DIV>&nbsp;</DIV>
<DIV>zhxue</DIV>
<DIV>--------------&nbsp;next&nbsp;part&nbsp;--------------</DIV>
<DIV>An&nbsp;HTML&nbsp;attachment&nbsp;was&nbsp;scrubbed...</DIV>
<DIV>URL:&nbsp;&lt;http://supercolony.gluster.org/pipermail/gluster-users/attachments/20130113/78161071/attachment-0001.html&gt;</DIV>
<DIV>&nbsp;</DIV>
<DIV>------------------------------</DIV>
<DIV>&nbsp;</DIV>
<DIV>Message:&nbsp;2</DIV>
<DIV>Date:&nbsp;Sun,&nbsp;13&nbsp;Jan&nbsp;2013&nbsp;07:11:14&nbsp;-0800</DIV>
<DIV>From:&nbsp;Joe&nbsp;Julian&nbsp;&lt;joe@julianfamily.org&gt;</DIV>
<DIV>To:&nbsp;gluster-users@gluster.org</DIV>
<DIV>Subject:&nbsp;Re:&nbsp;[Gluster-users]&nbsp;IO&nbsp;performance&nbsp;cut&nbsp;down&nbsp;when&nbsp;&nbsp;VM&nbsp;on</DIV>
<DIV>Gluster</DIV>
<DIV>Message-ID:&nbsp;&lt;50F2CE92.6060703@julianfamily.org&gt;</DIV>
<DIV>Content-Type:&nbsp;text/plain;&nbsp;charset="iso-8859-1";&nbsp;Format="flowed"</DIV>
<DIV>&nbsp;</DIV>
<DIV>On&nbsp;01/13/2013&nbsp;04:14&nbsp;AM,&nbsp;glusterzhxue&nbsp;wrote:</DIV>
<DIV>&gt;&nbsp;Hi&nbsp;all,</DIV>
<DIV>&gt;&nbsp;We&nbsp;placed&nbsp;a&nbsp;Virtual&nbsp;Machine&nbsp;image&nbsp;(based&nbsp;on&nbsp;KVM)&nbsp;on&nbsp;a&nbsp;gluster&nbsp;file&nbsp;system,&nbsp;</DIV>
<DIV>&gt;&nbsp;but&nbsp;IO&nbsp;performance&nbsp;of&nbsp;the&nbsp;VM&nbsp;is&nbsp;only&nbsp;half&nbsp;of&nbsp;the&nbsp;bandwidth.</DIV>
<DIV>&gt;&nbsp;If&nbsp;we&nbsp;mount&nbsp;it&nbsp;on&nbsp;a&nbsp;physical&nbsp;machine&nbsp;using&nbsp;the&nbsp;same&nbsp;volume&nbsp;as&nbsp;the&nbsp;</DIV>
<DIV>&gt;&nbsp;above&nbsp;VM,&nbsp;physical&nbsp;host&nbsp;reaches&nbsp;full&nbsp;bandwidth.&nbsp;We&nbsp;performed&nbsp;it&nbsp;many&nbsp;</DIV>
<DIV>&gt;&nbsp;times,&nbsp;but&nbsp;each&nbsp;had&nbsp;the&nbsp;same&nbsp;result.</DIV>
<DIV>What&nbsp;you're&nbsp;seeing&nbsp;is&nbsp;the&nbsp;difference&nbsp;between&nbsp;bandwidth&nbsp;and&nbsp;latency.&nbsp;When&nbsp;</DIV>
<DIV>you're&nbsp;writing&nbsp;a&nbsp;big&nbsp;file&nbsp;to&nbsp;a&nbsp;VM&nbsp;filesystem,&nbsp;you're&nbsp;not&nbsp;performing&nbsp;the&nbsp;</DIV>
<DIV>same&nbsp;operations&nbsp;as&nbsp;writing&nbsp;a&nbsp;file&nbsp;to&nbsp;a&nbsp;GlusterFS&nbsp;mount,&nbsp;thus&nbsp;you're&nbsp;able&nbsp;</DIV>
<DIV>to&nbsp;measure&nbsp;bandwidth.&nbsp;The&nbsp;filesystem&nbsp;within&nbsp;the&nbsp;VM&nbsp;is&nbsp;doing&nbsp;things&nbsp;like&nbsp;</DIV>
<DIV>journaling,&nbsp;inode&nbsp;operations,&nbsp;etc.&nbsp;that&nbsp;you&nbsp;don't&nbsp;have&nbsp;to&nbsp;do&nbsp;when&nbsp;</DIV>
<DIV>writing&nbsp;to&nbsp;the&nbsp;client,&nbsp;requiring&nbsp;a&nbsp;lot&nbsp;more&nbsp;I/O&nbsp;operations&nbsp;per&nbsp;second,&nbsp;</DIV>
<DIV>thus&nbsp;amplifying&nbsp;the&nbsp;latency&nbsp;present&nbsp;in&nbsp;both&nbsp;your&nbsp;network&nbsp;and&nbsp;the&nbsp;context&nbsp;</DIV>
<DIV>switching&nbsp;through&nbsp;FUSE.</DIV>
<DIV>&nbsp;</DIV>
<DIV>You&nbsp;have&nbsp;two&nbsp;options:</DIV>
<DIV>1.&nbsp;Mount&nbsp;the&nbsp;GlusterFS&nbsp;volume&nbsp;from&nbsp;within&nbsp;the&nbsp;VM&nbsp;and&nbsp;host&nbsp;the&nbsp;data&nbsp;</DIV>
<DIV>you're&nbsp;operating&nbsp;on&nbsp;there.&nbsp;This&nbsp;avoids&nbsp;all&nbsp;the&nbsp;additional&nbsp;overhead&nbsp;of&nbsp;</DIV>
<DIV>managing&nbsp;a&nbsp;filesystem&nbsp;on&nbsp;top&nbsp;of&nbsp;FUSE.</DIV>
<DIV>2.&nbsp;Try&nbsp;the&nbsp;3.4&nbsp;qa&nbsp;release&nbsp;and&nbsp;native&nbsp;GlusterFS&nbsp;support&nbsp;in&nbsp;the&nbsp;latest&nbsp;</DIV>
<DIV>qemu-kvm.</DIV>
<DIV>&nbsp;</DIV>
<DIV>--------------&nbsp;next&nbsp;part&nbsp;--------------</DIV>
<DIV>An&nbsp;HTML&nbsp;attachment&nbsp;was&nbsp;scrubbed...</DIV>
<DIV>URL:&nbsp;&lt;http://supercolony.gluster.org/pipermail/gluster-users/attachments/20130113/7cc6641e/attachment-0001.html&gt;</DIV>
<DIV>&nbsp;</DIV>
<DIV>------------------------------</DIV>
<DIV>&nbsp;</DIV>
<DIV>Message:&nbsp;3</DIV>
<DIV>Date:&nbsp;Sun,&nbsp;13&nbsp;Jan&nbsp;2013&nbsp;23:55:01&nbsp;+0100</DIV>
<DIV>From:&nbsp;Stephan&nbsp;von&nbsp;Krawczynski&nbsp;&lt;skraw@ithnet.com&gt;</DIV>
<DIV>To:&nbsp;Joe&nbsp;Julian&nbsp;&lt;joe@julianfamily.org&gt;</DIV>
<DIV>Cc:&nbsp;gluster-users@gluster.org</DIV>
<DIV>Subject:&nbsp;Re:&nbsp;[Gluster-users]&nbsp;IO&nbsp;performance&nbsp;cut&nbsp;down&nbsp;when&nbsp;&nbsp;VM&nbsp;on</DIV>
<DIV>Gluster</DIV>
<DIV>Message-ID:&nbsp;&lt;20130113235501.c0a2eb24.skraw@ithnet.com&gt;</DIV>
<DIV>Content-Type:&nbsp;text/plain;&nbsp;charset=US-ASCII</DIV>
<DIV>&nbsp;</DIV>
<DIV>On&nbsp;Sun,&nbsp;13&nbsp;Jan&nbsp;2013&nbsp;07:11:14&nbsp;-0800</DIV>
<DIV>Joe&nbsp;Julian&nbsp;&lt;joe@julianfamily.org&gt;&nbsp;wrote:</DIV>
<DIV>&nbsp;</DIV>
<DIV>&gt;&nbsp;On&nbsp;01/13/2013&nbsp;04:14&nbsp;AM,&nbsp;glusterzhxue&nbsp;wrote:</DIV>
<DIV>&gt;&nbsp;&gt;&nbsp;Hi&nbsp;all,</DIV>
<DIV>&gt;&nbsp;&gt;&nbsp;We&nbsp;placed&nbsp;a&nbsp;Virtual&nbsp;Machine&nbsp;image&nbsp;(based&nbsp;on&nbsp;KVM)&nbsp;on&nbsp;a&nbsp;gluster&nbsp;file&nbsp;system,&nbsp;</DIV>
<DIV>&gt;&nbsp;&gt;&nbsp;but&nbsp;IO&nbsp;performance&nbsp;of&nbsp;the&nbsp;VM&nbsp;is&nbsp;only&nbsp;half&nbsp;of&nbsp;the&nbsp;bandwidth.</DIV>
<DIV>&gt;&nbsp;&gt;&nbsp;If&nbsp;we&nbsp;mount&nbsp;it&nbsp;on&nbsp;a&nbsp;physical&nbsp;machine&nbsp;using&nbsp;the&nbsp;same&nbsp;volume&nbsp;as&nbsp;the&nbsp;</DIV>
<DIV>&gt;&nbsp;&gt;&nbsp;above&nbsp;VM,&nbsp;physical&nbsp;host&nbsp;reaches&nbsp;full&nbsp;bandwidth.&nbsp;We&nbsp;performed&nbsp;it&nbsp;many&nbsp;</DIV>
<DIV>&gt;&nbsp;&gt;&nbsp;times,&nbsp;but&nbsp;each&nbsp;had&nbsp;the&nbsp;same&nbsp;result.</DIV>
<DIV>&gt;&nbsp;What&nbsp;you're&nbsp;seeing&nbsp;is&nbsp;the&nbsp;difference&nbsp;between&nbsp;bandwidth&nbsp;and&nbsp;latency.&nbsp;When&nbsp;</DIV>
<DIV>&gt;&nbsp;you're&nbsp;writing&nbsp;a&nbsp;big&nbsp;file&nbsp;to&nbsp;a&nbsp;VM&nbsp;filesystem,&nbsp;you're&nbsp;not&nbsp;performing&nbsp;the&nbsp;</DIV>
<DIV>&gt;&nbsp;same&nbsp;operations&nbsp;as&nbsp;writing&nbsp;a&nbsp;file&nbsp;to&nbsp;a&nbsp;GlusterFS&nbsp;mount&nbsp;thus&nbsp;you're&nbsp;able&nbsp;</DIV>
<DIV>&gt;&nbsp;to&nbsp;measure&nbsp;bandwidth.&nbsp;The&nbsp;filesystem&nbsp;within&nbsp;the&nbsp;VM&nbsp;is&nbsp;doing&nbsp;things&nbsp;like&nbsp;</DIV>
<DIV>&gt;&nbsp;journaling,&nbsp;inode&nbsp;operations,&nbsp;etc.&nbsp;that&nbsp;you&nbsp;don't&nbsp;have&nbsp;to&nbsp;do&nbsp;when&nbsp;</DIV>
<DIV>&gt;&nbsp;writing&nbsp;to&nbsp;the&nbsp;client&nbsp;requiring&nbsp;a&nbsp;lot&nbsp;more&nbsp;I/O&nbsp;operations&nbsp;per&nbsp;second,&nbsp;</DIV>
<DIV>&gt;&nbsp;thus&nbsp;amplifying&nbsp;the&nbsp;latency&nbsp;present&nbsp;in&nbsp;both&nbsp;your&nbsp;network&nbsp;and&nbsp;the&nbsp;context&nbsp;</DIV>
<DIV>&gt;&nbsp;switching&nbsp;through&nbsp;FUSE.</DIV>
<DIV>&gt;&nbsp;</DIV>
<DIV>&gt;&nbsp;You&nbsp;have&nbsp;two&nbsp;options:</DIV>
<DIV>&gt;&nbsp;1.&nbsp;Mount&nbsp;the&nbsp;GlusterFS&nbsp;volume&nbsp;from&nbsp;within&nbsp;the&nbsp;VM&nbsp;and&nbsp;host&nbsp;the&nbsp;data&nbsp;</DIV>
<DIV>&gt;&nbsp;you're&nbsp;operating&nbsp;on&nbsp;there.&nbsp;This&nbsp;avoids&nbsp;all&nbsp;the&nbsp;additional&nbsp;overhead&nbsp;of&nbsp;</DIV>
<DIV>&gt;&nbsp;managing&nbsp;a&nbsp;filesystem&nbsp;on&nbsp;top&nbsp;of&nbsp;FUSE.</DIV>
<DIV>&gt;&nbsp;2.&nbsp;Try&nbsp;the&nbsp;3.4&nbsp;qa&nbsp;release&nbsp;and&nbsp;native&nbsp;GlusterFS&nbsp;support&nbsp;in&nbsp;the&nbsp;latest&nbsp;</DIV>
<DIV>&gt;&nbsp;qemu-kvm.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Thank&nbsp;you&nbsp;for&nbsp;telling&nbsp;the&nbsp;people&nbsp;openly&nbsp;that&nbsp;FUSE&nbsp;is&nbsp;a&nbsp;performance&nbsp;problem</DIV>
<DIV>which&nbsp;could&nbsp;be&nbsp;solved&nbsp;by&nbsp;a&nbsp;kernel-based&nbsp;glusterfs.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Do&nbsp;you&nbsp;want&nbsp;to&nbsp;make&nbsp;drivers&nbsp;for&nbsp;every&nbsp;application&nbsp;like&nbsp;qemu?&nbsp;How&nbsp;much&nbsp;burnt</DIV>
<DIV>manpower&nbsp;will&nbsp;it&nbsp;take&nbsp;until&nbsp;the&nbsp;real&nbsp;solution&nbsp;is&nbsp;accepted?</DIV>
<DIV>It&nbsp;is&nbsp;no&nbsp;solution&nbsp;to&nbsp;mess&nbsp;around&nbsp;_inside_&nbsp;the&nbsp;VM&nbsp;for&nbsp;most&nbsp;people,&nbsp;you&nbsp;simply</DIV>
<DIV>don't&nbsp;want&nbsp;_customers_&nbsp;on&nbsp;your&nbsp;VM&nbsp;with&nbsp;a&nbsp;glusterfs&nbsp;mount.&nbsp;You&nbsp;want&nbsp;them&nbsp;to&nbsp;see</DIV>
<DIV>a&nbsp;local&nbsp;fs&nbsp;only.</DIV>
<DIV>&nbsp;</DIV>
<DIV>--&nbsp;</DIV>
<DIV>Regards,</DIV>
<DIV>Stephan</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>------------------------------</DIV>
<DIV>&nbsp;</DIV>
<DIV>Message:&nbsp;4</DIV>
<DIV>Date:&nbsp;Mon,&nbsp;14&nbsp;Jan&nbsp;2013&nbsp;09:55:53&nbsp;+0530</DIV>
<DIV>From:&nbsp;Bharata&nbsp;B&nbsp;Rao&nbsp;&lt;bharata.rao@gmail.com&gt;</DIV>
<DIV>To:&nbsp;Stephan&nbsp;von&nbsp;Krawczynski&nbsp;&lt;skraw@ithnet.com&gt;</DIV>
<DIV>Cc:&nbsp;gluster-users@gluster.org</DIV>
<DIV>Subject:&nbsp;Re:&nbsp;[Gluster-users]&nbsp;IO&nbsp;performance&nbsp;cut&nbsp;down&nbsp;when&nbsp;VM&nbsp;on</DIV>
<DIV>Gluster</DIV>
<DIV>Message-ID:</DIV>
<DIV>&lt;CAGZKiBr--fYF-Awq0cYXJx1wPB52Odgm_PArE3Dvrt733mfwZw@mail.gmail.com&gt;</DIV>
<DIV>Content-Type:&nbsp;text/plain;&nbsp;charset=ISO-8859-1</DIV>
<DIV>&nbsp;</DIV>
<DIV>On&nbsp;Mon,&nbsp;Jan&nbsp;14,&nbsp;2013&nbsp;at&nbsp;4:25&nbsp;AM,&nbsp;Stephan&nbsp;von&nbsp;Krawczynski</DIV>
<DIV>&lt;skraw@ithnet.com&gt;&nbsp;wrote:</DIV>
<DIV>&gt;</DIV>
<DIV>&gt;&nbsp;Thank&nbsp;you&nbsp;for&nbsp;telling&nbsp;the&nbsp;people&nbsp;openly&nbsp;that&nbsp;FUSE&nbsp;is&nbsp;a&nbsp;performance&nbsp;problem</DIV>
<DIV>&gt;&nbsp;which&nbsp;could&nbsp;be&nbsp;solved&nbsp;by&nbsp;a&nbsp;kernel-based&nbsp;glusterfs.</DIV>
<DIV>&gt;</DIV>
<DIV>&gt;&nbsp;Do&nbsp;you&nbsp;want&nbsp;to&nbsp;make&nbsp;drivers&nbsp;for&nbsp;every&nbsp;application&nbsp;like&nbsp;qemu?&nbsp;How&nbsp;many&nbsp;burnt</DIV>
<DIV>&gt;&nbsp;manpower&nbsp;will&nbsp;it&nbsp;take&nbsp;until&nbsp;the&nbsp;real&nbsp;solution&nbsp;is&nbsp;accepted?</DIV>
<DIV>&gt;&nbsp;It&nbsp;is&nbsp;no&nbsp;solution&nbsp;to&nbsp;mess&nbsp;around&nbsp;_inside_&nbsp;the&nbsp;VM&nbsp;for&nbsp;most&nbsp;people,&nbsp;you&nbsp;simply</DIV>
<DIV>&gt;&nbsp;don't&nbsp;want&nbsp;_customers_&nbsp;on&nbsp;your&nbsp;VM&nbsp;with&nbsp;a&nbsp;glusterfs&nbsp;mount.&nbsp;You&nbsp;want&nbsp;them&nbsp;to&nbsp;see</DIV>
<DIV>&gt;&nbsp;a&nbsp;local&nbsp;fs&nbsp;only.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Just&nbsp;wondering&nbsp;if&nbsp;there&nbsp;is&nbsp;a&nbsp;value&nbsp;in&nbsp;doing&nbsp;dm-glusterfs&nbsp;on&nbsp;the&nbsp;lines</DIV>
<DIV>similar&nbsp;to&nbsp;dm-nfs</DIV>
<DIV>(https://blogs.oracle.com/OTNGarage/entry/simplify_your_storage_management_with).</DIV>
<DIV>&nbsp;</DIV>
<DIV>I&nbsp;understand&nbsp;that&nbsp;GlusterFS,&nbsp;due&nbsp;to&nbsp;its&nbsp;stackable&nbsp;translator&nbsp;nature&nbsp;and</DIV>
<DIV>having&nbsp;to&nbsp;deal&nbsp;with&nbsp;multiple&nbsp;translators&nbsp;at&nbsp;the&nbsp;client&nbsp;end,&nbsp;might&nbsp;not</DIV>
<DIV>fit&nbsp;this&nbsp;model&nbsp;easily,&nbsp;but&nbsp;maybe&nbsp;it&nbsp;is&nbsp;something&nbsp;to&nbsp;think&nbsp;about?</DIV>
<DIV>&nbsp;</DIV>
<DIV>Regards,</DIV>
<DIV>Bharata.</DIV>
<DIV>--&nbsp;</DIV>
<DIV>http://raobharata.wordpress.com/</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>------------------------------</DIV>
<DIV>&nbsp;</DIV>
<DIV>Message:&nbsp;5</DIV>
<DIV>Date:&nbsp;Mon,&nbsp;14&nbsp;Jan&nbsp;2013&nbsp;06:53:58&nbsp;-0500</DIV>
<DIV>From:&nbsp;Jeff&nbsp;Darcy&nbsp;&lt;jdarcy@redhat.com&gt;</DIV>
<DIV>To:&nbsp;gluster-users@gluster.org</DIV>
<DIV>Subject:&nbsp;[Gluster-users]&nbsp;dm-glusterfs&nbsp;(was&nbsp;Re:&nbsp;IO&nbsp;performance&nbsp;cut&nbsp;down</DIV>
<DIV>when&nbsp;VM&nbsp;on&nbsp;Gluster)</DIV>
<DIV>Message-ID:&nbsp;&lt;50F3F1D6.10405@redhat.com&gt;</DIV>
<DIV>Content-Type:&nbsp;text/plain;&nbsp;charset=ISO-8859-1</DIV>
<DIV>&nbsp;</DIV>
<DIV>On&nbsp;1/13/13&nbsp;11:25&nbsp;PM,&nbsp;Bharata&nbsp;B&nbsp;Rao&nbsp;wrote:</DIV>
<DIV>&gt;&nbsp;Just&nbsp;wondering&nbsp;if&nbsp;there&nbsp;is&nbsp;a&nbsp;value&nbsp;in&nbsp;doing&nbsp;dm-glusterfs&nbsp;on&nbsp;the&nbsp;lines</DIV>
<DIV>&gt;&nbsp;similar&nbsp;to&nbsp;dm-nfs</DIV>
<DIV>&gt;&nbsp;(https://blogs.oracle.com/OTNGarage/entry/simplify_your_storage_management_with).</DIV>
<DIV>&gt;&nbsp;</DIV>
<DIV>&gt;&nbsp;I&nbsp;understand&nbsp;GlusterFS&nbsp;due&nbsp;to&nbsp;its&nbsp;stackable&nbsp;translator&nbsp;nature&nbsp;and</DIV>
<DIV>&gt;&nbsp;having&nbsp;to&nbsp;deal&nbsp;with&nbsp;multiple&nbsp;translators&nbsp;at&nbsp;the&nbsp;client&nbsp;end&nbsp;might&nbsp;not</DIV>
<DIV>&gt;&nbsp;easily&nbsp;fit&nbsp;to&nbsp;this&nbsp;model,&nbsp;but&nbsp;may&nbsp;be&nbsp;something&nbsp;to&nbsp;think&nbsp;about&nbsp;?</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>It's&nbsp;an&nbsp;interesting&nbsp;idea.&nbsp;&nbsp;You're&nbsp;also&nbsp;right&nbsp;that&nbsp;there&nbsp;are&nbsp;some&nbsp;issues&nbsp;with</DIV>
<DIV>the&nbsp;stackable&nbsp;translator&nbsp;model&nbsp;and&nbsp;so&nbsp;on.&nbsp;&nbsp;Porting&nbsp;all&nbsp;of&nbsp;that&nbsp;code&nbsp;into&nbsp;the</DIV>
<DIV>kernel&nbsp;would&nbsp;require&nbsp;an&nbsp;almost&nbsp;suicidal&nbsp;suspension&nbsp;of&nbsp;all&nbsp;other&nbsp;development</DIV>
<DIV>activity&nbsp;while&nbsp;competitors&nbsp;continue&nbsp;to&nbsp;catch&nbsp;up&nbsp;on&nbsp;manageability&nbsp;or&nbsp;add&nbsp;other</DIV>
<DIV>features,&nbsp;so&nbsp;that's&nbsp;not&nbsp;every&nbsp;appealing.&nbsp;&nbsp;Keeping&nbsp;it&nbsp;all&nbsp;out&nbsp;in&nbsp;user&nbsp;space&nbsp;with</DIV>
<DIV>a&nbsp;minimal&nbsp;kernel-interception&nbsp;layer&nbsp;would&nbsp;give&nbsp;us&nbsp;something&nbsp;better&nbsp;than&nbsp;FUSE&nbsp;(I</DIV>
<DIV>did&nbsp;something&nbsp;like&nbsp;this&nbsp;in&nbsp;a&nbsp;previous&nbsp;life&nbsp;BTW),&nbsp;but&nbsp;probably&nbsp;not&nbsp;enough&nbsp;better</DIV>
<DIV>to&nbsp;be&nbsp;compelling.&nbsp;&nbsp;A&nbsp;hybrid&nbsp;"fast&nbsp;path,&nbsp;slow&nbsp;path"&nbsp;approach&nbsp;might&nbsp;work.&nbsp;&nbsp;Keep</DIV>
<DIV>all&nbsp;of&nbsp;the&nbsp;code&nbsp;for&nbsp;common-case&nbsp;reads&nbsp;and&nbsp;writes&nbsp;in&nbsp;the&nbsp;kernel,&nbsp;punt&nbsp;everything</DIV>
<DIV>else&nbsp;back&nbsp;up&nbsp;to&nbsp;user&nbsp;space&nbsp;with&nbsp;hooks&nbsp;to&nbsp;disable&nbsp;the&nbsp;fast&nbsp;path&nbsp;when&nbsp;necessary</DIV>
<DIV>(e.g.&nbsp;during&nbsp;a&nbsp;config&nbsp;change).&nbsp;&nbsp;OTOH,&nbsp;how&nbsp;would&nbsp;this&nbsp;be&nbsp;better&nbsp;than&nbsp;e.g.&nbsp;an</DIV>
<DIV>iSCSI&nbsp;target,&nbsp;which&nbsp;is&nbsp;deployable&nbsp;today&nbsp;with&nbsp;essentially&nbsp;the&nbsp;same&nbsp;functionality</DIV>
<DIV>and&nbsp;even&nbsp;greater&nbsp;generality&nbsp;(e.g.&nbsp;to&nbsp;non-Linux&nbsp;platforms)?</DIV>
<DIV>&nbsp;</DIV>
<DIV>It's&nbsp;good&nbsp;to&nbsp;think&nbsp;about&nbsp;these&nbsp;things.&nbsp;&nbsp;We&nbsp;could&nbsp;implement&nbsp;ten&nbsp;other</DIV>
<DIV>alternative&nbsp;access&nbsp;mechanisms&nbsp;(Apache/nginx&nbsp;modules&nbsp;anyone?)&nbsp;and&nbsp;still&nbsp;burn</DIV>
<DIV>fewer&nbsp;resources&nbsp;than&nbsp;we&nbsp;would&nbsp;with&nbsp;"just&nbsp;put&nbsp;it&nbsp;all&nbsp;in&nbsp;the&nbsp;kernel"&nbsp;inanity.&nbsp;&nbsp;I</DIV>
<DIV>tried&nbsp;one&nbsp;of&nbsp;our&nbsp;much-touted&nbsp;alternatives&nbsp;recently&nbsp;and,&nbsp;despite&nbsp;having&nbsp;a&nbsp;kernel</DIV>
<DIV>client,&nbsp;they&nbsp;achieved&nbsp;less&nbsp;than&nbsp;1/3&nbsp;of&nbsp;our&nbsp;performance&nbsp;on&nbsp;this&nbsp;kind&nbsp;of</DIV>
<DIV>workload.&nbsp;&nbsp;If&nbsp;we&nbsp;want&nbsp;to&nbsp;eliminate&nbsp;sources&nbsp;of&nbsp;overhead&nbsp;we&nbsp;need&nbsp;to&nbsp;address&nbsp;more</DIV>
<DIV>than&nbsp;just&nbsp;that&nbsp;one.</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>------------------------------</DIV>
<DIV>&nbsp;</DIV>
<DIV>_______________________________________________</DIV>
<DIV>Gluster-users&nbsp;mailing&nbsp;list</DIV>
<DIV>Gluster-users@gluster.org</DIV>
<DIV>http://supercolony.gluster.org/mailman/listinfo/gluster-users</DIV>
<DIV>&nbsp;</DIV>
<DIV>End&nbsp;of&nbsp;Gluster-users&nbsp;Digest,&nbsp;Vol&nbsp;57,&nbsp;Issue&nbsp;31</DIV>
<DIV>*********************************************</DIV></DIV></BODY></HTML>