Hi,

Indeed, the client on Linux is pretty stable and I don't have this issue on Linux (SLES11), only with the Solaris client.
Moreover, I understand and respect the Gluster team's choice not to develop the GlusterFS native client for Unix platforms, but on the other hand the gNFS server needs to be compliant with all Unix/Linux clients, as NFS is a widely adopted standard, and unfortunately that's not the case today:
HP-UX: NFS doesn't work at all
Solaris: sub-directory export doesn't work, and I get some random errors
Linux: OK
AIX: OK, but not widely tested

Thanks anyway,

Anthony

> From: gluster-users-request@gluster.org
> Subject: Gluster-users Digest, Vol 43, Issue 34
> To: gluster-users@gluster.org
> Date: Wed, 30 Nov 2011 08:58:47 -0800
> 
> Send Gluster-users mailing list submissions to
>         gluster-users@gluster.org
> 
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> or, via email, send a message with subject or body 'help' to
>         gluster-users-request@gluster.org
> 
> You can reach the person managing the list at
>         gluster-users-owner@gluster.org
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Gluster-users digest..."
> 
> 
> Today's Topics:
> 
>    1. Re: NFS server crash under heavy load (Gerald Brandt)
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Wed, 30 Nov 2011 10:50:03 -0600 (CST)
> From: Gerald Brandt <gbr@majentis.com>
> Subject: Re: [Gluster-users] NFS server crash under heavy load
> To: anthony garnier <sokar6012@hotmail.com>
> Cc: gluster-users@gluster.org
> Message-ID: <3325f872-9c8d-440d-8de8-df622c70ee13@gbr-laptop>
> Content-Type: text/plain; charset=utf-8
> 
> Hi,
> 
> I ran 3.2.3 under Ubuntu 10.04 LTS with some pretty serious IO tests. My install was rock solid. Doesn't help much, but it may suggest looking outside of Gluster.
> 
> Gerald
> 
> 
> ----- Original Message -----
> From: "anthony garnier" <sokar6012@hotmail.com>
> To: gluster-users@gluster.org
> Sent: Wednesday, November 30, 2011 9:42:38 AM
> Subject: [Gluster-users] NFS server crash under heavy load
> 
> 
> 
> Hi,
> 
> I've got some issues with Gluster 3.2.3.
> Servers are on SLES 11
> Client is on Solaris
> 
> On my client, when I try to do an rm -rf on a folder with big files inside, the NFS server crashes.
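
To reproduce it, I basically do the following from the Solaris client. The mount point, file count, and file sizes here are only an example of the kind of workload, not my exact one, and I'm using the first brick host as the NFS server:

    # Mount the Gluster NFS export (gNFS is NFSv3 over TCP only)
    mount -F nfs -o vers=3,proto=tcp ylal3550:/poolsave /mnt/poolsave

    # Create a folder with a few big files, then remove it recursively
    mkdir /mnt/poolsave/bigfiles
    cd /mnt/poolsave/bigfiles
    for i in 1 2 3 4; do
        dd if=/dev/zero of=big$i bs=1024k count=4096   # roughly 4 GB each
    done
    cd /
    rm -rf /mnt/poolsave/bigfiles   # the NFS server crashes during this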
> 
> 
> Here is my volume configuration:
> 
> Volume Name: poolsave
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: ylal3550:/users3/poolsave
> Brick2: ylal3570:/users3/poolsave
> Brick3: ylal3560:/users3/poolsave
> Brick4: ylal3580:/users3/poolsave
> Options Reconfigured:
> performance.io-thread-count: 64
> nfs.port: 2049
> performance.cache-refresh-timeout: 2
> performance.cache-max-file-size: 4GB
> performance.cache-min-file-size: 1KB
> network.ping-timeout: 10
> performance.cache-size: 6GB
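
For reference, the volume was created and the non-default options above were applied with the standard gluster CLI. From memory it was something like this on one of the servers (the exact values are the ones in the "Options Reconfigured" list above):

    gluster volume create poolsave replica 2 transport tcp \
        ylal3550:/users3/poolsave ylal3570:/users3/poolsave \
        ylal3560:/users3/poolsave ylal3580:/users3/poolsave
    gluster volume start poolsave

    gluster volume set poolsave performance.io-thread-count 64
    gluster volume set poolsave nfs.port 2049
    gluster volume set poolsave performance.cache-refresh-timeout 2
    gluster volume set poolsave performance.cache-max-file-size 4GB
    gluster volume set poolsave performance.cache-min-file-size 1KB
    gluster volume set poolsave network.ping-timeout 10
    gluster volume set poolsave performance.cache-size 6GB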
> 
> 
> nfs.log:
> 
> [2011-11-30 16:14:19.3887] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 644)
> [2011-11-30 16:14:19.3947] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 646)
> [2011-11-30 16:14:19.3967] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 647)
> [2011-11-30 16:14:19.4008] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 648)
> [2011-11-30 16:14:19.4109] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 652)
> [2011-11-30 16:14:19.4134] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 653)
> [2011-11-30 16:14:19.4162] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 654)
> [2011-11-30 16:14:19.4181] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 655)
> [2011-11-30 16:14:19.4201] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 656)
> [2011-11-30 16:14:19.4243] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 658)
> [2011-11-30 16:14:19.4341] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 659)
> [2011-11-30 16:14:19.4386] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 660)
> [2011-11-30 16:14:19.4435] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 661)
> [2011-11-30 16:14:19.4493] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 662)
> [2011-11-30 16:14:19.4581] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 664)
> [2011-11-30 16:14:19.4618] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 667)
> [2011-11-30 16:14:19.4657] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 669)
> [2011-11-30 16:14:19.4702] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 670)
> [2011-11-30 16:14:19.4727] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 672)
> [2011-11-30 16:14:19.4751] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 674)
> [2011-11-30 16:14:19.4878] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 676)
> [2011-11-30 16:14:19.5018] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 680)
> [2011-11-30 16:14:19.5050] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 681)
> [2011-11-30 16:14:19.5088] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 685)
> [2011-11-30 16:14:19.5128] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 689)
> [2011-11-30 16:14:19.5154] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 690)
> [2011-11-30 16:14:19.5357] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 695)
> [2011-11-30 16:14:19.5431] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 698)
> [2011-11-30 16:14:19.5470] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 699)
> [2011-11-30 16:14:19.5556] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 701)
> [2011-11-30 16:14:19.5636] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 702)
> [2011-11-30 16:14:19.5829] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 705)
> [2011-11-30 16:14:19.5946] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 706)
> [2011-11-30 16:14:19.6034] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 707)
> [2011-11-30 16:14:19.6135] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 710)
> [2011-11-30 16:14:19.6187] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 712)
> [2011-11-30 16:14:19.6208] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 713)
> [2011-11-30 16:14:19.6241] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 715)
> [2011-11-30 16:14:19.6283] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 717)
> [2011-11-30 16:14:19.6357] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 718)
> [2011-11-30 16:14:19.6453] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 721)
> [2011-11-30 16:14:19.6486] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 723)
> [2011-11-30 16:14:19.6584] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 725)
> [2011-11-30 16:14:19.6685] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 727)
> [2011-11-30 16:14:19.6726] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 729)
> [2011-11-30 16:14:19.6780] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 730)
> [2011-11-30 16:14:19.6800] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 731)
> [2011-11-30 16:14:19.6859] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 732)
> [2011-11-30 16:14:19.6951] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 733)
> [2011-11-30 16:14:19.7053] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 734)
> [2011-11-30 16:14:19.7102] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 736)
> [2011-11-30 16:14:19.7132] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 737)
> [2011-11-30 16:14:19.7204] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 738)
> [2011-11-30 16:14:19.7271] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 739)
> [2011-11-30 16:14:19.7365] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 740)
> [2011-11-30 16:14:19.7410] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 741)
> [2011-11-30 16:14:19.7434] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 742)
> [2011-11-30 16:14:19.7482] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 744)
> [2011-11-30 16:14:19.7624] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 747)
> [2011-11-30 16:14:19.7684] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 750)
> [2011-11-30 16:14:19.7712] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 752)
> [2011-11-30 16:14:19.7734] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 753)
> [2011-11-30 16:14:19.7760] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 754)
> [2011-11-30 16:14:19.7849] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 757)
> [2011-11-30 16:14:19.7941] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 759)
> [2011-11-30 16:14:19.8030] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 761)
> [2011-11-30 16:14:19.8134] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 763)
> [2011-11-30 16:14:19.8165] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 765)
> [2011-11-30 16:14:19.8270] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 768)
> [2011-11-30 16:14:19.8336] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 769)
> [2011-11-30 16:14:19.8507] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 773)
> [2011-11-30 16:14:19.8559] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 775)
> [2011-11-30 16:14:19.8769] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 780)
> [2011-11-30 16:14:19.8919] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 785)
> [2011-11-30 16:14:19.8944] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 786)
> [2011-11-30 16:14:19.9007] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-poolsave-client-2: reopendir on / succeeded (fd = 788)
> [2011-11-30 16:14:19.9101] I [client-lk.c:617:decrement_reopen_fd_count] 0-poolsave-client-2: last fd open'd/lock-self-heal'd - notifying CHILD-UP
> [2011-11-30 16:14:19.9396] W [afr-open.c:624:afr_openfd_flush] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:19.9704] W [afr-open.c:624:afr_openfd_flush] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:19.10052] W [afr-open.c:624:afr_openfd_flush] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:19.10545] I [afr-open.c:435:afr_openfd_sh] 0-poolsave-replicate-1: data self-heal triggered. path: /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065, reason: Replicate up down flush, data lock is held
> [2011-11-30 16:14:19.11189] I [afr-self-heal-common.c:1233:sh_missing_entries_create] 0-poolsave-replicate-1: no missing files - /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065. proceeding to metadata check
> [2011-11-30 16:14:19.11755] W [afr-open.c:624:afr_openfd_flush] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:19.12171] W [dict.c:418:dict_unref] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x7f2247375672] (-->/usr/local/lib//glusterfs/3.2.3/xlator/protocol/client.so(client3_1_fstat_cbk+0x2c9) [0x7f2245424189] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_fstat_cbk+0x17d) [0x7f22452cc6ad]))) 0-dict: dict is NULL
> [2011-11-30 16:14:19.12641] W [afr-open.c:624:afr_openfd_flush] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:19.12933] W [afr-open.c:624:afr_openfd_flush] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:19.13202] W [afr-open.c:624:afr_openfd_flush] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:19.17414] W [afr-open.c:624:afr_openfd_flush] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:19.21832] W [afr-open.c:624:afr_openfd_flush] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:19.24762] W [afr-open.c:624:afr_openfd_flush] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:26.374702] I [afr-self-heal-algorithm.c:520:sh_diff_loop_driver_done] 0-poolsave-replicate-1: diff self-heal on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065: completed. (669 blocks of 29162 were different (2.29%))
> [2011-11-30 16:14:26.375814] W [afr-common.c:122:afr_set_split_brain] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_flush_cbk+0x72) [0x7f22452cc8e2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_done+0x42) [0x7f22452cacf2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_self_heal_completion_cbk+0x21b) [0x7f22452d0ccb]))) 0-poolsave-replicate-1: invalid argument: inode
> [2011-11-30 16:14:26.375870] I [afr-self-heal-common.c:1557:afr_self_heal_completion_cbk] 0-poolsave-replicate-1: background data data self-heal completed on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065
> [2011-11-30 16:14:26.375886] W [afr-open.c:326:afr_openfd_sh_unwind] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:26.376152] I [afr-open.c:435:afr_openfd_sh] 0-poolsave-replicate-1: data self-heal triggered. path: /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065, reason: Replicate up down flush, data lock is held
> [2011-11-30 16:14:26.376757] I [afr-self-heal-common.c:1233:sh_missing_entries_create] 0-poolsave-replicate-1: no missing files - /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065. proceeding to metadata check
> [2011-11-30 16:14:26.378231] W [afr-common.c:122:afr_set_split_brain] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_flush_cbk+0x72) [0x7f22452cc8e2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_done+0x42) [0x7f22452cacf2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_self_heal_completion_cbk+0x21b) [0x7f22452d0ccb]))) 0-poolsave-replicate-1: invalid argument: inode
> [2011-11-30 16:14:26.378274] I [afr-self-heal-common.c:1557:afr_self_heal_completion_cbk] 0-poolsave-replicate-1: background data data self-heal completed on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065
> [2011-11-30 16:14:26.378289] W [afr-open.c:326:afr_openfd_sh_unwind] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:26.378532] I [afr-open.c:435:afr_openfd_sh] 0-poolsave-replicate-1: data self-heal triggered. path: /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065, reason: Replicate up down flush, data lock is held
> [2011-11-30 16:14:26.379196] I [afr-self-heal-common.c:1233:sh_missing_entries_create] 0-poolsave-replicate-1: no missing files - /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065. proceeding to metadata check
> [2011-11-30 16:14:26.380324] W [dict.c:418:dict_unref] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x7f2247375672] (-->/usr/local/lib//glusterfs/3.2.3/xlator/protocol/client.so(client3_1_fstat_cbk+0x2c9) [0x7f2245424189] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_fstat_cbk+0x17d) [0x7f22452cc6ad]))) 0-dict: dict is NULL
> [2011-11-30 16:14:33.110476] I [afr-self-heal-algorithm.c:520:sh_diff_loop_driver_done] 0-poolsave-replicate-1: diff self-heal on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065: completed. (0 blocks of 29162 were different (0.00%))
> [2011-11-30 16:14:33.111841] W [afr-common.c:122:afr_set_split_brain] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_flush_cbk+0x72) [0x7f22452cc8e2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_done+0x42) [0x7f22452cacf2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_self_heal_completion_cbk+0x21b) [0x7f22452d0ccb]))) 0-poolsave-replicate-1: invalid argument: inode
> [2011-11-30 16:14:33.111956] I [afr-self-heal-common.c:1557:afr_self_heal_completion_cbk] 0-poolsave-replicate-1: background data data self-heal completed on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065
> [2011-11-30 16:14:33.111990] W [afr-open.c:326:afr_openfd_sh_unwind] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:33.112295] I [afr-open.c:435:afr_openfd_sh] 0-poolsave-replicate-1: data self-heal triggered. path: /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065, reason: Replicate up down flush, data lock is held
> [2011-11-30 16:14:33.113059] I [afr-self-heal-common.c:1233:sh_missing_entries_create] 0-poolsave-replicate-1: no missing files - /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065. proceeding to metadata check
> [2011-11-30 16:14:33.114314] W [dict.c:418:dict_unref] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x7f2247375672] (-->/usr/local/lib//glusterfs/3.2.3/xlator/protocol/client.so(client3_1_fstat_cbk+0x2c9) [0x7f2245424189] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_fstat_cbk+0x17d) [0x7f22452cc6ad]))) 0-dict: dict is NULL
> [2011-11-30 16:14:39.819854] I [afr-self-heal-algorithm.c:520:sh_diff_loop_driver_done] 0-poolsave-replicate-1: diff self-heal on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065: completed. (0 blocks of 29163 were different (0.00%))
> [2011-11-30 16:14:39.821191] W [afr-common.c:122:afr_set_split_brain] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_flush_cbk+0x72) [0x7f22452cc8e2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_done+0x42) [0x7f22452cacf2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_self_heal_completion_cbk+0x21b) [0x7f22452d0ccb]))) 0-poolsave-replicate-1: invalid argument: inode
> [2011-11-30 16:14:39.821251] I [afr-self-heal-common.c:1557:afr_self_heal_completion_cbk] 0-poolsave-replicate-1: background data data self-heal completed on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065
> [2011-11-30 16:14:39.821277] W [afr-open.c:326:afr_openfd_sh_unwind] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:39.821565] I [afr-open.c:435:afr_openfd_sh] 0-poolsave-replicate-1: data self-heal triggered. path: /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065, reason: Replicate up down flush, data lock is held
> [2011-11-30 16:14:39.822291] I [afr-self-heal-common.c:1233:sh_missing_entries_create] 0-poolsave-replicate-1: no missing files - /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065. proceeding to metadata check
> [2011-11-30 16:14:39.823922] W [afr-common.c:122:afr_set_split_brain] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_flush_cbk+0x72) [0x7f22452cc8e2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_done+0x42) [0x7f22452cacf2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_self_heal_completion_cbk+0x21b) [0x7f22452d0ccb]))) 0-poolsave-replicate-1: invalid argument: inode
> [2011-11-30 16:14:39.823979] I [afr-self-heal-common.c:1557:afr_self_heal_completion_cbk] 0-poolsave-replicate-1: background data data self-heal completed on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065
> [2011-11-30 16:14:39.824006] W [afr-open.c:326:afr_openfd_sh_unwind] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:39.824434] I [afr-open.c:435:afr_openfd_sh] 0-poolsave-replicate-1: data self-heal triggered. path: /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065, reason: Replicate up down flush, data lock is held
> [2011-11-30 16:14:39.825269] I [afr-self-heal-common.c:1233:sh_missing_entries_create] 0-poolsave-replicate-1: no missing files - /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065. proceeding to metadata check
> [2011-11-30 16:14:39.826867] W [afr-common.c:122:afr_set_split_brain] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_flush_cbk+0x72) [0x7f22452cc8e2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_done+0x42) [0x7f22452cacf2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_self_heal_completion_cbk+0x21b) [0x7f22452d0ccb]))) 0-poolsave-replicate-1: invalid argument: inode
> [2011-11-30 16:14:39.826925] I [afr-self-heal-common.c:1557:afr_self_heal_completion_cbk] 0-poolsave-replicate-1: background data data self-heal completed on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065
> [2011-11-30 16:14:39.826960] W [afr-open.c:326:afr_openfd_sh_unwind] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:39.827437] I [afr-open.c:435:afr_openfd_sh] 0-poolsave-replicate-1: data self-heal triggered. path: /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065, reason: Replicate up down flush, data lock is held
> [2011-11-30 16:14:39.828080] I [afr-self-heal-common.c:1233:sh_missing_entries_create] 0-poolsave-replicate-1: no missing files - /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065. proceeding to metadata check
> [2011-11-30 16:14:39.829501] W [dict.c:418:dict_unref] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x7f2247375672] (-->/usr/local/lib//glusterfs/3.2.3/xlator/protocol/client.so(client3_1_fstat_cbk+0x2c9) [0x7f2245424189] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_fstat_cbk+0x17d) [0x7f22452cc6ad]))) 0-dict: dict is NULL
> [2011-11-30 16:14:46.521672] I [afr-self-heal-algorithm.c:520:sh_diff_loop_driver_done] 0-poolsave-replicate-1: diff self-heal on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065: completed. (0 blocks of 29163 were different (0.00%))
> [2011-11-30 16:14:46.523091] W [afr-common.c:122:afr_set_split_brain] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_flush_cbk+0x72) [0x7f22452cc8e2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_done+0x42) [0x7f22452cacf2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_self_heal_completion_cbk+0x21b) [0x7f22452d0ccb]))) 0-poolsave-replicate-1: invalid argument: inode
> [2011-11-30 16:14:46.523134] I [afr-self-heal-common.c:1557:afr_self_heal_completion_cbk] 0-poolsave-replicate-1: background data data self-heal completed on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065
> [2011-11-30 16:14:46.523173] W [afr-open.c:326:afr_openfd_sh_unwind] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:46.523475] I [afr-open.c:435:afr_openfd_sh] 0-poolsave-replicate-1: data self-heal triggered. path: /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065, reason: Replicate up down flush, data lock is held
> [2011-11-30 16:14:46.524282] I [afr-self-heal-common.c:1233:sh_missing_entries_create] 0-poolsave-replicate-1: no missing files - /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065. proceeding to metadata check
> [2011-11-30 16:14:46.525721] W [dict.c:418:dict_unref] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x7f2247375672] (-->/usr/local/lib//glusterfs/3.2.3/xlator/protocol/client.so(client3_1_fstat_cbk+0x2c9) [0x7f2245424189] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_fstat_cbk+0x17d) [0x7f22452cc6ad]))) 0-dict: dict is NULL
> [2011-11-30 16:14:53.214149] I [afr-self-heal-algorithm.c:520:sh_diff_loop_driver_done] 0-poolsave-replicate-1: diff self-heal on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065: completed. (0 blocks of 29164 were different (0.00%))
> [2011-11-30 16:14:53.215561] W [afr-common.c:122:afr_set_split_brain] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_flush_cbk+0x72) [0x7f22452cc8e2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_done+0x42) [0x7f22452cacf2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_self_heal_completion_cbk+0x21b) [0x7f22452d0ccb]))) 0-poolsave-replicate-1: invalid argument: inode
> [2011-11-30 16:14:53.215607] I [afr-self-heal-common.c:1557:afr_self_heal_completion_cbk] 0-poolsave-replicate-1: background data data self-heal completed on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065
> [2011-11-30 16:14:53.215648] W [afr-open.c:326:afr_openfd_sh_unwind] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:53.215951] I [afr-open.c:435:afr_openfd_sh] 0-poolsave-replicate-1: data self-heal triggered. path: /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065, reason: Replicate up down flush, data lock is held
> [2011-11-30 16:14:53.216646] I [afr-self-heal-common.c:1233:sh_missing_entries_create] 0-poolsave-replicate-1: no missing files - /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065. proceeding to metadata check
> [2011-11-30 16:14:53.218239] W [afr-common.c:122:afr_set_split_brain] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_flush_cbk+0x72) [0x7f22452cc8e2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_done+0x42) [0x7f22452cacf2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_self_heal_completion_cbk+0x21b) [0x7f22452d0ccb]))) 0-poolsave-replicate-1: invalid argument: inode
> [2011-11-30 16:14:53.218292] I [afr-self-heal-common.c:1557:afr_self_heal_completion_cbk] 0-poolsave-replicate-1: background data data self-heal completed on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065
> [2011-11-30 16:14:53.218320] W [afr-open.c:326:afr_openfd_sh_unwind] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> [2011-11-30 16:14:53.218630] I [afr-open.c:435:afr_openfd_sh] 0-poolsave-replicate-1: data self-heal triggered. path: /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065, reason: Replicate up down flush, data lock is held
> [2011-11-30 16:14:53.219392] I [afr-self-heal-common.c:1233:sh_missing_entries_create] 0-poolsave-replicate-1: no missing files - /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065. proceeding to metadata check
> [2011-11-30 16:14:53.221056] W [afr-common.c:122:afr_set_split_brain] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_flush_cbk+0x72) [0x7f22452cc8e2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_sh_data_done+0x42) [0x7f22452cacf2] (-->/usr/local/lib//glusterfs/3.2.3/xlator/cluster/replicate.so(afr_self_heal_completion_cbk+0x21b) [0x7f22452d0ccb]))) 0-poolsave-replicate-1: invalid argument: inode
> [2011-11-30 16:14:53.221102] I [afr-self-heal-common.c:1557:afr_self_heal_completion_cbk] 0-poolsave-replicate-1: background data data self-heal completed on /yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065
> [2011-11-30 16:14:53.221215] W [afr-open.c:326:afr_openfd_sh_unwind] 0-poolsave-replicate-1: fd not open on any subvolume 0x7f2241c8f948 (/yvask300/des01/save/r/p/des01/11-11-22/10h03m52s/inc0+arc/data_channel-1/134_1_1_767873065)
> pending frames:
> 
> patchset: git://git.gluster.com/glusterfs.git
> signal received: 11
> time of crash: 2011-11-30 16:21:05
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> fdatasync 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 3.2.3
> /lib64/libc.so.6(+0x329e0)[0x7f2246b069e0]
> /usr/local/lib//glusterfs/3.2.3/xlator/nfs/server.so(nfs_fop_lookup_cbk+0x60)[0x7f2244adc1c0]
> /usr/local/lib//glusterfs/3.2.3/xlator/debug/io-stats.so(io_stats_lookup_cbk+0xe4)[0x7f2244c281c4]
> /usr/local/lib//glusterfs/3.2.3/xlator/performance/quick-read.so(qr_lookup_cbk+0x1cd)[0x7f2244d3e39d]
> /usr/local/lib//glusterfs/3.2.3/xlator/performance/io-cache.so(ioc_lookup_cbk+0x32e)[0x7f2244e50bde]
> /usr/local/lib/libglusterfs.so.0(default_lookup_cbk+0xaa)[0x7f22475616aa]
> ---------
> 
> 
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 
> 
> ------------------------------
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 
> 
> End of Gluster-users Digest, Vol 43, Issue 34
> *********************************************
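
P.S. The backtrace quoted above is just what glusterfs prints on SIGSEGV. If a full stack trace would help, I can load the core file into gdb on one of the SLES servers, along these lines (the binary and core paths are only examples; they depend on where glusterfs is installed and where cores get written):

    gdb /usr/local/sbin/glusterfs /core.<pid>
    (gdb) bt full
    (gdb) thread apply all bt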