No. <br>When I run glusterfs with the --debug option, nothing appears.<br>I just see that the glusterfsd process consumes a lot of CPU, and that when a big file (10G) is accessed by many clients simultaneously, glusterfs/glusterfsd seems to stop responding: no data is sent, as if it were in a deadlock or a loop.<br>
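For reference, this is roughly how the client can be run with logging, and how a backtrace of the stuck server process could be captured while the hang is happening; the spec-file path, log path, and mount point below are illustrative examples, not the exact ones from this setup:<br><br>
<pre>
# Run the client in the foreground with debug output also written to a log file
# (paths are examples only)
glusterfs --debug -f /etc/glusterfs/client.vol \
    -l /var/log/glusterfs/client.log /mnt/vdisk

# While glusterfsd is consuming CPU / not responding, dump all thread
# backtraces on the server; this shows whether it is blocked on a lock
# (deadlock) or spinning in a loop
gdb -p "$(pidof glusterfsd)" -batch -ex "thread apply all bt"
</pre>
<br>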
<br>Regards, <br>Nicolas Prochazka<br><br><div class="gmail_quote">2008/12/9 Anand Avati <span dir="ltr"><<a href="mailto:avati@zresearch.com">avati@zresearch.com</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Nicolas,<br>
Do you have logs from the client and server?<br>
<br>
avati<br>
<br>
2008/12/9 nicolas prochazka <<a href="mailto:prochazka.nicolas@gmail.com">prochazka.nicolas@gmail.com</a>>:<br>
<div><div></div><div class="Wj3C7c">> Hi again,<br>
> About glusterfs--mainline--3.0--patch-727 with the same configuration:<br>
> glusterfsd now seems to take a lot of CPU (> 20%), and ls -l<br>
> /glustermount/ is very slow to respond (> 5 minutes).<br>
> Note that with patch-719 the issue does not appear.<br>
><br>
> Nicolas Prochazka.<br>
><br>
> 2008/12/8 nicolas prochazka <<a href="mailto:prochazka.nicolas@gmail.com">prochazka.nicolas@gmail.com</a>><br>
>><br>
>> Thanks, it's working now.<br>
>> Regards,<br>
>> Nicolas Prochazka<br>
>><br>
>> 2008/12/8 Basavanagowda Kanur <<a href="mailto:gowda@zresearch.com">gowda@zresearch.com</a>><br>
>>><br>
>>> Nicolas,<br>
>>> Please use glusterfs--mainline--3.0--patch-719.<br>
>>><br>
>>> --<br>
>>> gowda<br>
>>><br>
>>> On Mon, Dec 8, 2008 at 3:07 PM, nicolas prochazka<br>
>>> <<a href="mailto:prochazka.nicolas@gmail.com">prochazka.nicolas@gmail.com</a>> wrote:<br>
>>>><br>
>>>> Hi,<br>
>>>> It seems that glusterfs--mainline--3.0--patch-717 has a new problem<br>
>>>> that does not appear with glusterfs--mainline--3.0--patch-710.<br>
>>>> Now I get:<br>
>>>> ls: cannot open directory /mnt/vdisk/: Software caused connection abort<br>
>>>><br>
>>>> Regards,<br>
>>>> Nicolas Prochazka.<br>
>>>><br>
>>>> My client spec file:<br>
>>>> volume brick1<br>
>>>> type protocol/client<br>
>>>> option transport-type tcp/client # for TCP/IP transport<br>
>>>> option remote-host <a href="http://10.98.98.1" target="_blank">10.98.98.1</a> # IP address of server1<br>
>>>> option remote-subvolume brick # name of the remote volume on server1<br>
>>>> end-volume<br>
>>>><br>
>>>> volume brick2<br>
>>>> type protocol/client<br>
>>>> option transport-type tcp/client # for TCP/IP transport<br>
>>>> option remote-host <a href="http://10.98.98.2" target="_blank">10.98.98.2</a> # IP address of server2<br>
>>>> option remote-subvolume brick # name of the remote volume on server2<br>
>>>> end-volume<br>
>>>><br>
>>>> volume afr<br>
>>>> type cluster/afr<br>
>>>> subvolumes brick1 brick2<br>
>>>> end-volume<br>
>>>><br>
>>>> volume iothreads<br>
>>>> type performance/io-threads<br>
>>>> option thread-count 4<br>
>>>> option cache-size 32MB<br>
>>>> subvolumes afr<br>
>>>> end-volume<br>
>>>><br>
>>>> volume io-cache<br>
>>>> type performance/io-cache<br>
>>>> option cache-size 256MB # default is 32MB<br>
>>>> option page-size 1MB #128KB is default option<br>
>>>> option force-revalidate-timeout 2 # default is 1<br>
>>>> subvolumes iothreads<br>
>>>> end-volume<br>
>>>><br>
>>>> My server spec file:<br>
>>>> volume brickless<br>
>>>> type storage/posix<br>
>>>> option directory /mnt/disks/export<br>
>>>> end-volume<br>
>>>><br>
>>>> volume brick<br>
>>>> type features/posix-locks<br>
>>>> option mandatory on # enables mandatory locking on all files<br>
>>>> subvolumes brickless<br>
>>>> end-volume<br>
>>>><br>
>>>> volume server<br>
>>>> type protocol/server<br>
>>>> subvolumes brick<br>
>>>> option transport-type tcp<br>
>>>> option auth.addr.brick.allow <a href="http://10.98.98." target="_blank">10.98.98.</a>*<br>
>>>> end-volume<br>
>>>><br>
>>>><br>
>>>> _______________________________________________<br>
>>>> Gluster-devel mailing list<br>
>>>> <a href="mailto:Gluster-devel@nongnu.org">Gluster-devel@nongnu.org</a><br>
>>>> <a href="http://lists.nongnu.org/mailman/listinfo/gluster-devel" target="_blank">http://lists.nongnu.org/mailman/listinfo/gluster-devel</a><br>
>>>><br>
>>><br>
>>><br>
>>><br>
>>> --<br>
>>> hard work often pays off after time, but laziness always pays off now<br>
>><br>
><br>
><br>
><br>
><br>
</div></div></blockquote></div><br>