The last patches (882, 883) seem to resolve my problem.<br>Regards,<br>Nicolas<br><br><div class="gmail_quote">2009/1/23 nicolas prochazka <span dir="ltr"><<a href="mailto:prochazka.nicolas@gmail.com">prochazka.nicolas@gmail.com</a>></span><br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">For my test,<br>I shut down the interface (ifconfig eth0 down) to reproduce this problem.<br>It seems that this problem appears with certain applications (locking?); for example, I can reproduce it with qemu (<a href="http://www.qemu.org" target="_blank">www.qemu.org</a>).<br>
I also notice that qemu does not work with booster; I do not know if this is a similar problem (perhaps related to how qemu or other programs open files?).<br><br>booster debug:<br> LD_PRELOAD=/usr/local/lib64/glusterfs/glusterfs-booster.so /usr/local/bin/qemu -name calcul -k fr -localtime -usb -usbdevice tablet -net vde,vlan=0,sock=/tmpsafe/neoswitch -vnc 10.98.98.1:1 -monitor tcp:127.0.0.1:10229,server,nowait,nodelay -vga std -m 512 -net nic,vlan=0,macaddr=ac:de:48:36:a2:aa,model=rtl8139 -drive file=/mnt/vdisk/images/vm_calcul -no-kvm<br>
*** glibc detected *** /usr/local/bin/qemu: double free or corruption (out): 0x0000000000bd71e0 ***<br>======= Backtrace: =========<br>/lib/libc.so.6[0x7f40e28c1aad]<br>/lib/libc.so.6(cfree+0x76)[0x7f40e28c3796]<br>/usr/local/bin/qemu[0x49f13f]<br>
/usr/local/bin/qemu[0x461f42]<br>/usr/local/bin/qemu[0x409400]<br>/usr/local/bin/qemu[0x40b940]<br>/lib/libc.so.6(__libc_start_main+0xf4)[0x7f40e2872b74]<br>/usr/local/bin/qemu[0x405629]<br>======= Memory map: ========<br>
00400000-005bb000 r-xp 00000000 00:01 92 /usr/local/bin/qemu-system-x86_64<br>007ba000-007bb000 r--p 001ba000 00:01 92 /usr/local/bin/qemu-system-x86_64<br>
007bb000-007c0000 rw-p 001bb000 00:01 92 /usr/local/bin/qemu-system-x86_64<br>
007c0000-00bf0000 rw-p 007c0000 00:00 0 [heap]<br>7f40dc000000-7f40dc021000 rw-p 7f40dc000000 00:00 0 <br>7f40dc021000-7f40e0000000 ---p 7f40dc021000 00:00 0 <br>7f40e17d1000-7f40e17de000 r-xp 00000000 00:01 5713 /lib64/libgcc_s.so.1<br>
7f40e17de000-7f40e19dd000 ---p 0000d000 00:01 5713 /lib64/libgcc_s.so.1<br>7f40e19dd000-7f40e19de000 r--p 0000c000 00:01 5713 /lib64/libgcc_s.so.1<br>7f40e19de000-7f40e19df000 rw-p 0000d000 00:01 5713 /lib64/libgcc_s.so.1<br>
7f40e19df000-7f40e19e9000 r-xp 00000000 00:01 5772 /lib64/libnss_files-2.6.1.so<br>7f40e19e9000-7f40e1be8000 ---p 0000a000 00:01 5772 /lib64/libnss_files-2.6.1.so<br>
7f40e1be8000-7f40e1be9000 r--p 00009000 00:01 5772 /lib64/libnss_files-2.6.1.so<br>7f40e1be9000-7f40e1bea000 rw-p 0000a000 00:01 5772 /lib64/libnss_files-2.6.1.so<br>
7f40e1bea000-7f40e1bf3000 r-xp 00000000 00:01 5796 /lib64/libnss_nis-2.6.1.so<br>7f40e1bf3000-7f40e1df3000 ---p 00009000 00:01 5796 /lib64/libnss_nis-2.6.1.so<br>
7f40e1df3000-7f40e1df4000 r--p 00009000 00:01 5796 /lib64/libnss_nis-2.6.1.so<br>7f40e1df4000-7f40e1df5000 rw-p 0000a000 00:01 5796 /lib64/libnss_nis-2.6.1.so<br>
7f40e1df5000-7f40e1e09000 r-xp 00000000 00:01 5777 /lib64/libnsl-2.6.1.so<br>7f40e1e09000-7f40e2008000 ---p 00014000 00:01 5777 /lib64/libnsl-2.6.1.so<br>
7f40e2008000-7f40e2009000 r--p 00013000 00:01 5777 /lib64/libnsl-2.6.1.so<br>7f40e2009000-7f40e200a000 rw-p 00014000 00:01 5777 /lib64/libnsl-2.6.1.so<br>
7f40e200a000-7f40e200c000 rw-p 7f40e200a000 00:00 0 <br>7f40e200c000-7f40e2013000 r-xp 00000000 00:01 5814 /lib64/libnss_compat-2.6.1.so<br>
7f40e2013000-7f40e2212000 ---p 00007000 00:01 5814 /lib64/libnss_compat-2.6.1.so<br>
7f40e2212000-7f40e2213000 r--p 00006000 00:01 5814 /lib64/libnss_compat-2.6.1.so<br>7f40e2213000-7f40e2214000 rw-p 00007000 00:01 5814 /lib64/libnss_compat-2.6.1.so<br>
7f40e2214000-7f40e2216000 r-xp 00000000 00:01 5794 /lib64/libdl-2.6.1.so<br>7f40e2216000-7f40e2416000 ---p 00002000 00:01 5794 /lib64/libdl-2.6.1.so<br>
7f40e2416000-7f40e2417000 r--p 00002000 00:01 5794 /lib64/libdl-2.6.1.so<br>7f40e2417000-7f40e2418000 rw-p 00003000 00:01 5794 /lib64/libdl-2.6.1.so<br>
7f40e2418000-7f40e2446000 r-xp 00000000 00:01 531 /usr/local/lib64/libglusterfs.so.0.0.0<br>7f40e2446000-7f40e2645000 ---p 0002e000 00:01 531 /usr/local/lib64/libglusterfs.so.0.0.0<br>
7f40e2645000-7f40e2646000 r--p 0002d000 00:01 531 /usr/local/lib64/libglusterfs.so.0.0.0<br>7f40e2646000-7f40e2647000 rw-p 0002e000 00:01 531 /usr/local/lib64/libglusterfs.so.0.0.0<br>
7f40e2647000-7f40e2649000 rw-p 7f40e2647000 00:00 0 <br>7f40e2649000-7f40e2654000 r-xp 00000000 00:01 528 /usr/local/lib64/libglusterfsclient.so.0.0.0<br>7f40e2654000-7f40e2853000 ---p 0000b000 00:01 528 /usr/local/lib64/libglusterfsclient.so.0.0.0<br>
7f40e2853000-7f40e2854000 r--p 0000a000 00:01 528 /usr/local/lib64/libglusterfsclient.so.0.0.0<br>7f40e2854000-7f40e2855000 rw-p 0000b000 00:01 528 /usr/local/lib64/libglusterfsclient.so.0.0.0<br>
7f40e2855000-7f40e298b000 r-xp 00000000 00:01 5765 /lib64/libc-2.6.1.so<br>7f40e298b000-7f40e2b8a000 ---p 00136000 00:01 5765 /lib64/libc-2.6.1.so<br>
7f40e2b8a000-7f40e2b8e000 r--p 00135000 00:01 5765 /lib64/libc-2.6.1.so<br>7f40e2b8e000-7f40e2b8f000 rw-p 00139000 00:01 5765 /lib64/libc-2.6.1.so<br>
7f40e2b8f000-7f40e2b94000 rw-p 7f40e2b8f000 00:00 0 <br>7f40e2b94000-7f40e2b98000 r-xp 00000000 00:01 535 /usr/local/lib64/libvdeplug.so.2.1.0<br>7f40e2b98000-7f40e2d97000 ---p 00004000 00:01 535 /usr/local/lib64/libvdeplug.so.2.1.0<br>
7f40e2d97000-7f40e2d98000 r--p 00003000 00:01 535 /usr/local/lib64/libvdeplug.so.2.1.0<br>7f40e2d98000-7f40e2d99000 rw-p 00004000 00:01 535 /usr/local/lib64/libvdeplug.so.2.1.0<br>
7f40e2d99000-7f40e2de6000 r-xp 00000000 00:01 5816 /lib64/libncurses.so.5.6<br>7f40e2de6000-7f40e2ee5000 ---p 0004d000 00:01 5816 /lib64/libncurses.so.5.6<br>7f40e2ee5000-7f40e2ef4000 rw-p 0004c000 00:01 5816 /lib64/libncurses.so.5.6<br>
7f40e2ef4000-7f40e2ef6000 r-xp 00000000 00:01 5704 /lib64/libutil-2.6.1.so<br>7f40e2ef6000-7f40e30f5000 ---p 00002000 00:01 5704 /lib64/libutil-2.6.1.so<br>
7f40e30f5000-7f40e30f6000 r--p 00001000 00:01 5704 /lib64/libutil-2.6.1.so<br>7f40e30f6000-7f40e30f7000 rw-p 00002000 00:01 5704 /lib64/libutil-2.6.1.so<br>
7f40e30f7000-7f40e30ff000 r-xp 00000000 00:01 5513 /lib64/librt-2.6.1.so<br>7f40e30ff000-7f40e32fe000 ---p 00008000 00:01 5513 /lib64/librt-2.6.1.so<br>
7f40e32fe000-7f40e32ff000 r--p 00007000 00:01 5513 /lib64/librt-2.6.1.so<br>7f40e32ff000-7f40e3300000 rw-p 00008000 00:01 5513 /lib64/librt-2.6.1.so<br>
7f40e3300000-7f40e3315000 r-xp 00000000 00:01 5767 /lib64/libpthread-2.6.1.so<br>7f40e3315000-7f40e3515000 ---p 00015000 00:01 5767 /lib64/libpthread-2.6.1.so<br>
7f40e3515000-7f40e3516000 r--p 00015000 00:01 5767 /lib64/libpthread-2.6.1.so<br>7f40e3516000-7f40e3517000 rw-p 00016000 00:01 5767 /lib64/libpthread-2.6.1.so<br>
7f40e3517000-7f40e351b000 rw-p 7f40e3517000 00:00 0 <br>7f40e351b000-7f40e359b000 r-xp 00000000 00:01 5780 /lib64/libm-2.6.1.so<br>7f40e359b000-7f40e379a000 ---p 00080000 00:01 5780 /lib64/libm-2.6.1.so<br>
7f40e379a000-7f40e379b000 r--p 0007f000 00:01 5780 /lib64/libm-2.6.1.so<br>7f40e379b000-7f40e379c000 rw-p 00080000 00:01 5780 /lib64/libm-2.6.1.so<br>
7f40e379c000-7f40e379f000 r-xp 00000000 00:01 515 /usr/local/lib64/glusterfs/glusterfs-booster.so<br>7f40e379f000-7f40e399e000 ---p 00003000 00:01 515 /usr/local/lib64/glusterfs/glusterfs-booster.so<br>
7f40e399e000-7f40e399f000 r--p 00002000 00:01 515 /usr/local/lib64/glusterfs/glusterfs-booster.so<br>7f40e399f000-7f40e39a0000 rw-p 00003000 00:01 515 /usr/local/lib64/glusterfs/glusterfs-booster.so<br>
7f40e39a0000-7f40e39bb000 r-xp 00000000 00:01 5788 /lib64/ld-2.6.1.so<br>7f40e3a72000-7f40e3a9a000 rw-p 7f40e3a72000 00:00 0 <br>7f40e3a9a000-7f40e3aae000 r-xp 00000000 00:01 5815 /lib64/libz.so.1.2.3<br>
7f40e3aae000-7f40e3bad000 ---p 00014000 00:01 5815 /lib64/libz.so.1.2.3<br>7f40e3bad000-7f40e3bae000 rw-p 00013000 00:01 5815 /lib64/libz.so.1.2.3<br>7f40e3bae000-7f40e3baf000 rw-p 7f40e3bae000 00:00 0 <br>
7f40e3bb5000-7f40e3bba000 rw-p 7f40e3bb5000 00:00 0 <br>7f40e3bba000-7f40e3bbb000 r--p 0001a000 00:01 5788 /lib64/ld-2.6.1.so<br>7f40e3bbb000-7f40e3bbc000 rw-p 0001b000 00:01 5788 /lib64/ld-2.6.1.so<br>
7f40e3c00000-7f4105200000 rw-p 00000000 00:0f 5035416 /hugepages/kvm.XbPD2I (deleted)<br>7fffebba7000-7fffebbbc000 rw-p 7ffffffea000 00:00 0 [stack]<br>7fffebbff000-7fffebc00000 r-xp 7fffebbff000 00:00 0 [vdso]<br>
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]<br><br><br><br><div class="gmail_quote">2009/1/23 Krishna Srinivas <span dir="ltr"><<a href="mailto:krishna@zresearch.com" target="_blank">krishna@zresearch.com</a>></span><div>
<div></div><div class="Wj3C7c"><br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Raghu,<br>
<br>
Nicolas sees the problem when the server is hard powered off. Killing<br>
the server process seems to work fine for him...<br>
<font color="#888888"><br>
Krishna<br>
</font><div><div></div><div><br>
On Fri, Jan 23, 2009 at 9:34 AM, Raghavendra G<br>
<<a href="mailto:raghavendra@zresearch.com" target="_blank">raghavendra@zresearch.com</a>> wrote:<br>
> Avati,<br>
><br>
> ls/cd works fine for the test described by Nicolas. In fact, when I killed<br>
> both the glusterfs servers, I got ENOTCONN, but when I started one of the<br>
> servers 'ls' worked fine.<br>
><br>
> regards,<br>
> On Fri, Jan 23, 2009 at 6:22 AM, Anand Avati <<a href="mailto:avati@zresearch.com" target="_blank">avati@zresearch.com</a>> wrote:<br>
>><br>
>> Nicolas,<br>
>> Are you running any specific apps on the mountpoint? Or is it just<br>
>> regular ls/cd kind of commands?<br>
>><br>
>> Raghu,<br>
>> Can you try to reproduce this in our lab?<br>
>><br>
>> Thanks,<br>
>> Avati<br>
>><br>
>> On Wed, Jan 21, 2009 at 9:22 PM, nicolas prochazka<br>
>> <<a href="mailto:prochazka.nicolas@gmail.com" target="_blank">prochazka.nicolas@gmail.com</a>> wrote:<br>
>> > Hello,<br>
>> > I think I have localized the problem more precisely:<br>
>> ><br>
>> > volume last<br>
>> > type cluster/replicate<br>
>> > subvolumes brick_10.98.98.1 brick_10.98.98.2<br>
>> > end-volume<br>
>> ><br>
>> > If I shut down 10.98.98.2, 10.98.98.1 is OK after the timeout.<br>
>> > If I shut down 10.98.98.1, 10.98.98.2 is not OK after the timeout; it<br>
>> > becomes ready again only when 10.98.98.1 comes back.<br>
>> ><br>
>> > Now, if I change to: subvolumes brick_10.98.98.2 brick_10.98.98.1<br>
>> > the situation is reversed.<br>
>> ><br>
>> > In the AFR documentation, you say: by default, AFR considers the first<br>
>> > subvolume as the sole lock server.<br>
>> > Perhaps the bug comes from here: when the lock server is down, the other<br>
>> > clients do not work?<br>
>> ><br>
>> > Regards,<br>
>> > Nicolas Prochazka<br>
>> ><br>
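An editorial note on the orderings compared above: the symmetry Nicolas observes matches AFR's documented default of treating the first listed subvolume as the sole lock server. A minimal sketch of the two variants, using only the volume names already shown in this thread:

```
# Variant 1: brick_10.98.98.1 is listed first, so it acts as the lock server;
# taking 10.98.98.1 down stalls clients until it returns.
volume last
  type cluster/replicate
  subvolumes brick_10.98.98.1 brick_10.98.98.2
end-volume

# Variant 2: swapping the list makes brick_10.98.98.2 the lock server,
# which is why the failure behaviour inverts in Nicolas's test.
volume last
  type cluster/replicate
  subvolumes brick_10.98.98.2 brick_10.98.98.1
end-volume
```

(Only one of the two `volume last` definitions would appear in a real volfile; they are shown side by side for comparison.)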
>> ><br>
>> > 2009/1/19 nicolas prochazka <<a href="mailto:prochazka.nicolas@gmail.com" target="_blank">prochazka.nicolas@gmail.com</a>><br>
>> >><br>
>> >> It is on a private network;<br>
>> >> I am going to try to simulate this issue in a virtual qemu environment<br>
>> >> and will recontact you.<br>
>> >> Thanks a lot for your great work.<br>
>> >> Nicolas<br>
>> >><br>
>> >> 2009/1/19 Anand Avati <<a href="mailto:avati@zresearch.com" target="_blank">avati@zresearch.com</a>><br>
>> >>><br>
>> >>> nicolas,<br>
>> >>> It is hard for us to debug with such a brief description. Is it<br>
>> >>> possible for us to inspect the system with a remote login while this<br>
>> >>> error is created?<br>
>> >>><br>
>> >>> avati<br>
>> >>><br>
>> >>> On Mon, Jan 19, 2009 at 8:32 PM, nicolas prochazka<br>
>> >>> <<a href="mailto:prochazka.nicolas@gmail.com" target="_blank">prochazka.nicolas@gmail.com</a>> wrote:<br>
>> >>> > hi again,<br>
>> >>> > With tla855, if I now change the network card's IP, the 'ls' test<br>
>> >>> > runs after the timeout, so that is big progress.<br>
>> >>> > But if I stop the server with a hard powerdown (switching it off, as<br>
>> >>> > in a crash), the problem persists; I do not understand the difference<br>
>> >>> > between a network cut and a powerdown.<br>
>> >>> ><br>
>> >>> > Regards,<br>
>> >>> > Nicolas Prochazka<br>
>> >>> ><br>
>> >>> > 2009/1/19 nicolas prochazka <<a href="mailto:prochazka.nicolas@gmail.com" target="_blank">prochazka.nicolas@gmail.com</a>><br>
>> >>> >><br>
>> >>> >> hi,<br>
>> >>> >> Do you have more information about this bug?<br>
>> >>> >> I do not understand how AFR works.<br>
>> >>> >> With my initial configuration, if I change the IP of the network card<br>
>> >>> >> (from 10.98.98.2 => 10.98.98.4) on server B during the test,<br>
>> >>> >> 'ls' works on the client and servers (A, C) after some timeout, but<br>
>> >>> >> some programs seem to block the whole system:<br>
>> >>> >> if I run my own program or qemu, for example, 'ls' does not respond<br>
>> >>> >> anymore, and if I change back from 10.98.98.4 => 10.98.98.2, then<br>
>> >>> >> everything becomes OK again.<br>
>> >>> >><br>
>> >>> >> Regards,<br>
>> >>> >> Nicolas Prochazka<br>
>> >>> >><br>
>> >>> >><br>
>> >>> >> 2009/1/14 Krishna Srinivas <<a href="mailto:krishna@zresearch.com" target="_blank">krishna@zresearch.com</a>><br>
>> >>> >>><br>
>> >>> >>> Nicolas,<br>
>> >>> >>><br>
>> >>> >>> It might be a bug. Let me try to reproduce the problem here and<br>
>> >>> >>> get<br>
>> >>> >>> back<br>
>> >>> >>> to you.<br>
>> >>> >>><br>
>> >>> >>> Krishna<br>
>> >>> >>><br>
>> >>> >>> On Wed, Jan 14, 2009 at 6:59 PM, nicolas prochazka<br>
>> >>> >>> <<a href="mailto:prochazka.nicolas@gmail.com" target="_blank">prochazka.nicolas@gmail.com</a>> wrote:<br>
>> >>> >>> > Hello again,<br>
>> >>> >>> > To finish with this issue, here is the information I can send you:<br>
>> >>> >>> > If I stop glusterfsd (on server B) before stopping the server (a<br>
>> >>> >>> > hard poweroff by pressing on/off), the problem does not occur. If I<br>
>> >>> >>> > hard poweroff without stopping gluster (a real crash), the problem<br>
>> >>> >>> > occurs.<br>
>> >>> >>> > Regards<br>
>> >>> >>> > Nicolas Prochazka.<br>
>> >>> >>> ><br>
>> >>> >>> > 2009/1/14 nicolas prochazka <<a href="mailto:prochazka.nicolas@gmail.com" target="_blank">prochazka.nicolas@gmail.com</a>><br>
>> >>> >>> >><br>
>> >>> >>> >> Hi again,<br>
>> >>> >>> >> I am continuing my tests:<br>
>> >>> >>> >> In my case, if a file is open on the gluster mount while one AFR<br>
>> >>> >>> >> server is stopped, the gluster mount cannot be accessed (a gap?)<br>
>> >>> >>> >> on that machine. Any other client (C, for example) that did not<br>
>> >>> >>> >> have a file open during the stop is not affected; I can do an ls<br>
>> >>> >>> >> or an open after the transport-timeout period.<br>
>> >>> >>> >> If I kill the process that was using the file, then I can use the<br>
>> >>> >>> >> gluster mount point without problem.<br>
>> >>> >>> >><br>
>> >>> >>> >> Regards,<br>
>> >>> >>> >> Nicolas Prochazka.<br>
>> >>> >>> >><br>
>> >>> >>> >> 2009/1/12 nicolas prochazka <<a href="mailto:prochazka.nicolas@gmail.com" target="_blank">prochazka.nicolas@gmail.com</a>><br>
>> >>> >>> >>><br>
>> >>> >>> >>> For your attention:<br>
>> >>> >>> >>> it seems that this problem occurs only when files are open and in<br>
>> >>> >>> >>> use on the gluster mount point.<br>
>> >>> >>> >>> I use big computation files (~10G) that are, for the most part,<br>
>> >>> >>> >>> read. In this case the problem occurs.<br>
>> >>> >>> >>> If I use only small files that are created from time to time, no<br>
>> >>> >>> >>> problem occurs; the gluster mount can use the other AFR server.<br>
>> >>> >>> >>><br>
>> >>> >>> >>> Regards,<br>
>> >>> >>> >>> Nicolas Prochazka<br>
>> >>> >>> >>><br>
>> >>> >>> >>><br>
>> >>> >>> >>><br>
>> >>> >>> >>> 2009/1/12 nicolas prochazka <<a href="mailto:prochazka.nicolas@gmail.com" target="_blank">prochazka.nicolas@gmail.com</a>><br>
>> >>> >>> >>>><br>
>> >>> >>> >>>> Hi,<br>
>> >>> >>> >>>> I am trying to set<br>
>> >>> >>> >>>> option transport-timeout 5<br>
>> >>> >>> >>>> in protocol/client,<br>
>> >>> >>> >>>><br>
>> >>> >>> >>>> so a maximum of 10 seconds before gluster returns to a normal<br>
>> >>> >>> >>>> situation?<br>
>> >>> >>> >>>> No success; I am still in the same situation: 'ls /mnt/gluster'<br>
>> >>> >>> >>>> does not respond even after more than 10 minutes,<br>
>> >>> >>> >>>> and I cannot reuse the gluster mount except by killing the<br>
>> >>> >>> >>>> glusterfs process.<br>
>> >>> >>> >>>><br>
>> >>> >>> >>>> Regards<br>
>> >>> >>> >>>> Nicolas Prochazka<br>
>> >>> >>> >>>><br>
>> >>> >>> >>>><br>
>> >>> >>> >>>><br>
>> >>> >>> >>>> 2009/1/12 Raghavendra G <<a href="mailto:raghavendra@zresearch.com" target="_blank">raghavendra@zresearch.com</a>><br>
>> >>> >>> >>>>><br>
>> >>> >>> >>>>> Hi Nicolas,<br>
>> >>> >>> >>>>><br>
>> >>> >>> >>>>> How much time did you wait before concluding that the mount<br>
>> >>> >>> >>>>> point was not working? AFR waits for a maximum of<br>
>> >>> >>> >>>>> (2 * transport-timeout) seconds before sending a reply to the<br>
>> >>> >>> >>>>> application. Can you wait for some time and check whether this<br>
>> >>> >>> >>>>> is the issue you are facing?<br>
>> >>> >>> >>>>><br>
>> >>> >>> >>>>> regards,<br>
>> >>> >>> >>>>><br>
>> >>> >>> >>>>> On Mon, Jan 12, 2009 at 7:49 PM, nicolas prochazka<br>
>> >>> >>> >>>>> <<a href="mailto:prochazka.nicolas@gmail.com" target="_blank">prochazka.nicolas@gmail.com</a>> wrote:<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> Hi.<br>
>> >>> >>> >>>>>> I have installed this setup to test Gluster:<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> + 2 servers (A, B)<br>
>> >>> >>> >>>>>>   - with the glusterfsd server (glusterfs--mainline--3.0--patch-842)<br>
>> >>> >>> >>>>>>   - with the glusterfs client<br>
>> >>> >>> >>>>>>   (server conf file below)<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> + 1 server C in client mode only.<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> My issue:<br>
>> >>> >>> >>>>>> If C opens a big file with this client configuration and I then<br>
>> >>> >>> >>>>>> stop server A (or B), the gluster mount point on server C seems<br>
>> >>> >>> >>>>>> to be blocked; I cannot do 'ls -l', for example.<br>
>> >>> >>> >>>>>> Is this normal? Since C opened its file on A or B, does it<br>
>> >>> >>> >>>>>> block when that server goes down?<br>
>> >>> >>> >>>>>> I was thinking that with client-side AFR the client could reopen<br>
>> >>> >>> >>>>>> the file/block on another server; am I wrong?<br>
>> >>> >>> >>>>>> Should I use the HA translator?<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> Regards,<br>
>> >>> >>> >>>>>> Nicolas Prochazka.<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> volume brickless<br>
>> >>> >>> >>>>>> type storage/posix<br>
>> >>> >>> >>>>>> option directory /mnt/disks/export<br>
>> >>> >>> >>>>>> end-volume<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> volume brick<br>
>> >>> >>> >>>>>> type features/posix-locks<br>
>> >>> >>> >>>>>> option mandatory on # enables mandatory locking on<br>
>> >>> >>> >>>>>> all<br>
>> >>> >>> >>>>>> files<br>
>> >>> >>> >>>>>> subvolumes brickless<br>
>> >>> >>> >>>>>> end-volume<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> volume server<br>
>> >>> >>> >>>>>> type protocol/server<br>
>> >>> >>> >>>>>> subvolumes brick<br>
>> >>> >>> >>>>>> option transport-type tcp<br>
>> >>> >>> >>>>>> option auth.addr.brick.allow 10.98.98.*<br>
>> >>> >>> >>>>>> end-volume<br>
>> >>> >>> >>>>>> ---------------------------<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> client config<br>
>> >>> >>> >>>>>> volume brick_10.98.98.1<br>
>> >>> >>> >>>>>> type protocol/client<br>
>> >>> >>> >>>>>> option transport-type tcp/client<br>
>> >>> >>> >>>>>> option remote-host 10.98.98.1<br>
>> >>> >>> >>>>>> option remote-subvolume brick<br>
>> >>> >>> >>>>>> end-volume<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> volume brick_10.98.98.2<br>
>> >>> >>> >>>>>> type protocol/client<br>
>> >>> >>> >>>>>> option transport-type tcp/client<br>
>> >>> >>> >>>>>> option remote-host 10.98.98.2<br>
>> >>> >>> >>>>>> option remote-subvolume brick<br>
>> >>> >>> >>>>>> end-volume<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> volume last<br>
>> >>> >>> >>>>>> type cluster/replicate<br>
>> >>> >>> >>>>>> subvolumes brick_10.98.98.1 brick_10.98.98.2<br>
>> >>> >>> >>>>>> end-volume<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> volume iothreads<br>
>> >>> >>> >>>>>> type performance/io-threads<br>
>> >>> >>> >>>>>> option thread-count 2<br>
>> >>> >>> >>>>>> option cache-size 32MB<br>
>> >>> >>> >>>>>> subvolumes last<br>
>> >>> >>> >>>>>> end-volume<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> volume io-cache<br>
>> >>> >>> >>>>>> type performance/io-cache<br>
>> >>> >>> >>>>>> option cache-size 1024MB # default is 32MB<br>
>> >>> >>> >>>>>> option page-size 1MB #128KB is default option<br>
>> >>> >>> >>>>>> option force-revalidate-timeout 2 # default is 1<br>
>> >>> >>> >>>>>> subvolumes iothreads<br>
>> >>> >>> >>>>>> end-volume<br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> volume writebehind<br>
>> >>> >>> >>>>>> type performance/write-behind<br>
>> >>> >>> >>>>>> option aggregate-size 256KB # default is 0bytes<br>
>> >>> >>> >>>>>> option window-size 3MB<br>
>> >>> >>> >>>>>> option flush-behind on # default is 'off'<br>
>> >>> >>> >>>>>> subvolumes io-cache<br>
>> >>> >>> >>>>>> end-volume<br>
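For readers following the transport-timeout discussion above: that option goes in each protocol/client volume of this client config. A sketch of the first brick volume with the value Nicolas tried added (5 seconds is his test value, shown as an illustration, not a recommendation):

```
volume brick_10.98.98.1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.98.98.1
  option remote-subvolume brick
  option transport-timeout 5   # seconds; AFR may wait up to 2x this before replying
end-volume
```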
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>>> _______________________________________________<br>
>> >>> >>> >>>>>> Gluster-devel mailing list<br>
>> >>> >>> >>>>>> <a href="mailto:Gluster-devel@nongnu.org" target="_blank">Gluster-devel@nongnu.org</a><br>
>> >>> >>> >>>>>> <a href="http://lists.nongnu.org/mailman/listinfo/gluster-devel" target="_blank">http://lists.nongnu.org/mailman/listinfo/gluster-devel</a><br>
>> >>> >>> >>>>>><br>
>> >>> >>> >>>>><br>
>> >>> >>> >>>>><br>
>> >>> >>> >>>>><br>
>> >>> >>> >>>>> --<br>
>> >>> >>> >>>>> Raghavendra G<br>
>> >>> >>> >>>>><br>
>> >>> >>> >>>><br>
>> >>> >>> >>><br>
>> >>> >>> >><br>
>> >>> >>> ><br>
>> >>> >>> ><br>
>> >>> >>> ><br>
>> >>> >>> ><br>
>> >>> >><br>
>> >>> ><br>
>> >>> ><br>
>> >>> ><br>
>> >>> ><br>
>> >><br>
>> ><br>
>> ><br>
><br>
><br>
><br>
> --<br>
> Raghavendra G<br>
><br>
><br>
</div></div></blockquote></div></div></div><br>
</blockquote></div><br>