Hi Nicolas,
Sure, we are in the process of internal testing. It should be out as a release soon. Meanwhile, you can pull from git and test it out.

Regards,

On Wed, Mar 18, 2009 at 1:30 AM, nicolas prochazka <prochazka.nicolas@gmail.com> wrote:
Hello,
I see in the git tree a correction for the AFR heal bug.
Can we test this version? Is it stable enough compared to the RC releases?
Nicolas

On Tue, Mar 17, 2009 at 9:39 PM, nicolas prochazka <prochazka.nicolas@gmail.com> wrote:
> My test is:
> Set up two servers in AFR mode.
> Copy files to the mount point (/mnt/vdisk): OK, synchronisation is fine on both servers.
> Then delete (rm) all files from the backend storage on server 1 (/mnt/disks/export)
> and wait for resynchronisation.
> With rc2 and rc4: the files show the right size (ls -l) but contain nothing
> (df shows no disk usage) and are corrupt.
> With rc1: all is OK, the servers resynchronise perfectly; I think that is the right behaviour ;)
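>
> A quick way to see this (the brick path is from my volfiles quoted below;
> "somefile" stands for any affected file): compare the apparent size with
> what is actually allocated on the brick:
>
>   ls -l /mnt/disks/export/somefile   # apparent size looks correct
>   du -h /mnt/disks/export/somefile   # allocated space is near zero for the bad files
>
> A large gap between the two means the recreated file is a sparse hole
> rather than real data.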
>
> Nicolas
>
> On Tue, Mar 17, 2009 at 6:49 PM, Amar Tumballi <amar@gluster.com> wrote:
>> Hi Nicolas,
>> When you say you 'add' a server here, are you adding another server to the
>> replicate subvolume (i.e., going from 2 to 3), or did you have one of the 2
>> servers down while copying data, and then bring it back up and trigger the
>> AFR self-heal?
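>>
>> (If it is the latter, one way to trigger the self-heal from the client side
>> is to walk the mount point and read a byte from every file; /mnt/vdisk is
>> the mount point from your earlier mail:
>>
>>   find /mnt/vdisk -type f -print0 | xargs -0 head -c1 > /dev/null
>>
>> Reading each file makes replicate compare the copies and repair the stale
>> one.)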
>>
>> Regards,
>> Amar
>>
>> On Tue, Mar 17, 2009 at 7:22 AM, nicolas prochazka <prochazka.nicolas@gmail.com> wrote:
>>>
>>> Yes, I tried without any performance translators, but the bug persists.
>>>
>>> I cannot see anything interesting in the logs; the file size always looks
>>> correct once synchronisation begins.
>>> As I wrote before, if I cp files during normal operation (both servers up),
>>> all is OK. The problem appears only when I try to resynchronise (rm
>>> everything in the storage/posix directory on one server): gluster recreates
>>> the files, but they are empty or contain corrupt data.
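>>>
>>> (A simple way to confirm this, with "somefile" standing in for any
>>> affected file: run the same checksum on both servers' bricks,
>>>
>>>   md5sum /mnt/disks/export/somefile
>>>
>>> and compare. After the buggy resynchronisation the sums no longer match.)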
>>>
>>> I also noticed that with RC1, during resynchronisation, if I try an ls on
>>> the mount point, ls blocks until the synchronisation ends; with RC2,
>>> ls does not block.
>>>
>>> Regards,
>>> Nicolas
>>>
>>> On Tue, Mar 17, 2009 at 2:50 PM, Gordan Bobic <gordan@bobich.net> wrote:
>>> > Have you tried the later versions (rc2/rc4) without the performance
>>> > translators? Does the problem persist without them? Anything
>>> > interesting-looking in the logs?
>>> >
>>> > On Tue, 17 Mar 2009 14:46:41 +0100, nicolas prochazka <prochazka.nicolas@gmail.com> wrote:
>>> >> Hello again,
>>> >> So this bug does not occur with RC1.
>>> >>
>>> >> RC2 and RC4 contain the bug described below; RC1 does not. Any idea?
>>> >> Nicolas
>>> >>
>>> >> On Tue, Mar 17, 2009 at 12:55 PM, nicolas prochazka <prochazka.nicolas@gmail.com> wrote:
>>> >>> I just tried with rc2: same bug as with rc4.
>>> >>> Regards,
>>> >>> Nicolas
>>> >>>
>>> >>> On Tue, Mar 17, 2009 at 12:06 PM, Gordan Bobic <gordan@bobich.net> wrote:
>>> >>>> Can you check if it works correctly with 2.0rc2 and/or 2.0rc1?
>>> >>>>
>>> >>>> On Tue, 17 Mar 2009 12:04:33 +0100, nicolas prochazka <prochazka.nicolas@gmail.com> wrote:
>>> >>>>> Oops,
>>> >>>>> in fact the same problem occurs with a simple 8-byte text file; the
>>> >>>>> file appears to be corrupt.
>>> >>>>>
>>> >>>>> Regards,
>>> >>>>> Nicolas Prochazka
>>> >>>>>
>>> >>>>> On Tue, Mar 17, 2009 at 11:20 AM, Gordan Bobic <gordan@bobich.net> wrote:
>>> >>>>>> Are you sure this is rc4-specific? I've seen assorted weirdness when
>>> >>>>>> adding and removing servers in all versions up to and including rc2
>>> >>>>>> (rc4 seems to lock up when starting udev on it, so I'm not using it).
>>> >>>>>>
>>> >>>>>> On Tue, 17 Mar 2009 11:15:30 +0100, nicolas prochazka <prochazka.nicolas@gmail.com> wrote:
>>> >>>>>>> Hello guys,
>>> >>>>>>>
>>> >>>>>>> A strange problem:
>>> >>>>>>> with rc4, AFR synchronisation does not seem to work:
>>> >>>>>>> - If I copy a file onto the gluster mount, all is OK on all servers.
>>> >>>>>>> - If I add a new server to gluster, this server creates my files (10G
>>> >>>>>>> size); each appears on XFS as a 10G file, but it does not contain the
>>> >>>>>>> original data, just a few bytes.
>>> >>>>>>> Gluster then does not resynchronise it, perhaps because the size is
>>> >>>>>>> the same.
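>>> >>>>>>>
>>> >>>>>>> (A quick check, with "bigfile" standing in for one of the 10G files
>>> >>>>>>> in the brick directory from the volfile below:
>>> >>>>>>>
>>> >>>>>>>   stat -c '%s bytes, %b blocks' /mnt/disks/export/bigfile
>>> >>>>>>>
>>> >>>>>>> A 10G size with almost no allocated blocks would mean the file is a
>>> >>>>>>> sparse hole, which could also explain why a size-based comparison
>>> >>>>>>> skips it.)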
>>> >>>>>>>
>>> >>>>>>> Regards,
>>> >>>>>>> NP
>>> >>>>>>>
>>> >>>>>>> volume brickless
>>> >>>>>>>   type storage/posix
>>> >>>>>>>   option directory /mnt/disks/export
>>> >>>>>>> end-volume
>>> >>>>>>>
>>> >>>>>>> volume brickthread
>>> >>>>>>>   type features/posix-locks
>>> >>>>>>>   option mandatory-locks on   # enables mandatory locking on all files
>>> >>>>>>>   subvolumes brickless
>>> >>>>>>> end-volume
>>> >>>>>>>
>>> >>>>>>> volume brick
>>> >>>>>>>   type performance/io-threads
>>> >>>>>>>   option thread-count 4
>>> >>>>>>>   subvolumes brickthread
>>> >>>>>>> end-volume
>>> >>>>>>>
>>> >>>>>>> volume server
>>> >>>>>>>   type protocol/server
>>> >>>>>>>   subvolumes brick
>>> >>>>>>>   option transport-type tcp
>>> >>>>>>>   option auth.addr.brick.allow 10.98.98.*
>>> >>>>>>> end-volume
>>> >>>>>>>
>>> >>>>>>> -------------------------------------------
>>> >>>>>>>
>>> >>>>>>> volume brick_10.98.98.1
>>> >>>>>>>   type protocol/client
>>> >>>>>>>   option transport-type tcp/client
>>> >>>>>>>   option transport-timeout 120
>>> >>>>>>>   option remote-host 10.98.98.1
>>> >>>>>>>   option remote-subvolume brick
>>> >>>>>>> end-volume
>>> >>>>>>>
>>> >>>>>>> volume brick_10.98.98.2
>>> >>>>>>>   type protocol/client
>>> >>>>>>>   option transport-type tcp/client
>>> >>>>>>>   option transport-timeout 120
>>> >>>>>>>   option remote-host 10.98.98.2
>>> >>>>>>>   option remote-subvolume brick
>>> >>>>>>> end-volume
>>> >>>>>>>
>>> >>>>>>> volume last
>>> >>>>>>>   type cluster/replicate
>>> >>>>>>>   subvolumes brick_10.98.98.1 brick_10.98.98.2
>>> >>>>>>>   option read-subvolume brick_10.98.98.1
>>> >>>>>>>   option favorite-child brick_10.98.98.1
>>> >>>>>>> end-volume
>>> >>>>>>>
>>> >>>>>>> volume iothreads
>>> >>>>>>>   type performance/io-threads
>>> >>>>>>>   option thread-count 4
>>> >>>>>>>   subvolumes last
>>> >>>>>>> end-volume
>>> >>>>>>>
>>> >>>>>>> volume io-cache
>>> >>>>>>>   type performance/io-cache
>>> >>>>>>>   option cache-size 2048MB   # default is 32MB
>>> >>>>>>>   option page-size 128KB     # 128KB is the default
>>> >>>>>>>   option cache-timeout 2     # default is 1
>>> >>>>>>>   subvolumes iothreads
>>> >>>>>>> end-volume
>>> >>>>>>>
>>> >>>>>>> volume writebehind
>>> >>>>>>>   type performance/write-behind
>>> >>>>>>>   option aggregate-size 128KB   # default is 0 bytes
>>> >>>>>>>   option window-size 512KB
>>> >>>>>>>   option flush-behind off       # default is 'off'
>>> >>>>>>>   subvolumes io-cache
>>> >>>>>>> end-volume
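>>> >>>>>>>
>>> >>>>>>> (For reference, I start these with the standard commands; the volfile
>>> >>>>>>> paths are just where I happen to keep them:
>>> >>>>>>>
>>> >>>>>>>   glusterfsd -f /etc/glusterfs/server.vol             # on each server
>>> >>>>>>>   glusterfs -f /etc/glusterfs/client.vol /mnt/vdisk   # on the client
>>> >>>>>>> )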
>>> >>>>>>>
>>> >>>>>>> _______________________________________________
>>> >>>>>>> Gluster-devel mailing list
>>> >>>>>>> Gluster-devel@nongnu.org
>>> >>>>>>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>>
>> --
>> Amar Tumballi

--
Amar Tumballi