Hi,<br><br>I've been trying to reproduce your problem. Some observations:<br>* I've run into 'No such file or directory' errors, but they were caused by dd not creating the file because the block size was 0 (bs=0).<br>
* As per the dd error messages, there is a space between '/' and 'mnt' (/ mnt):<br>dd: opening `/ mnt/glusterfs/24427/30087.20476 ': No such file or directory<br><br>Please make sure that the file is getting created in the first place (it may not be created due to invalid parameters to dd, like bs=0 in the case above).<br>
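For example, since $RANDOM can evaluate to 0, a guard like the following avoids the bs=0 failure (a bash sketch, not your exact script; the variable name bs is my own):<br>

```shell
# $RANDOM ranges from 0 to 32767, so bs=$RANDOM can legitimately be 0,
# which makes dd refuse to write and the file is never created.
# Taking the value modulo 32767 and adding 1 keeps bs in 1..32767:
bs=$(( RANDOM % 32767 + 1 ))
echo "$bs"
```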
<br>* Since I am not able to reproduce it on my setup, is it possible for you to try out the test with<br> - afr self-heal turned on<br> - afr self-heal turned off<br>The following options, set in the cluster/afr volume definition on the client, control afr self-heal:<br> option data-self-heal off<br>
option metadata-self-heal off<br> option entry-self-heal off<br><br>regards,<br><div class="gmail_quote">On Wed, Dec 10, 2008 at 7:51 PM, <span dir="ltr"><<a href="mailto:a_pirania@poczta.onet.pl">a_pirania@poczta.onet.pl</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div>I have a problem. I am running two servers and two clients. Both clients run the following loop in the background:<br><br> for ((j=0; j< $RANDOM; j++)) {<br> PLIK=$RANDOM.$RANDOM<br> dd if=/dev/urandom of=/mnt/glusterfs/$KAT/$PLIK bs=$RANDOM count=1<br>
dd if=/mnt/glusterfs/$KAT/$PLIK of=/dev/null<br> rm -f /mnt/glusterfs/$KAT/$PLIK<br> }<br><br><br><br>If both servers are connected, everything is fine. But if one server goes down and then comes back after several minutes, on a client I get:<br>
<br>dd: opening `/ mnt/glusterfs/24427/30087.20476 ': No such file or directory<br>dd: opening `/ mnt/glusterfs/24427/30087.20476 ': No such file or directory<br>dd: opening `/ mnt/glusterfs/24427/18649.25895 ': No such file or directory<br>
<br><br>After a few seconds, everything is working again.<br><br>I think the client is trying to read the file from the server that has just come back. Should this not work?<br><br><br>client:<br>
<br>
volume client1<br>
type protocol/client<br>
option transport-type tcp/client<br>
option remote-host 10.0.1.130<br>
option remote-port 6996<br>
option remote-subvolume posix1<br>
end-volume<br>
<br>
volume client2<br>
type protocol/client<br>
option transport-type tcp/client<br>
option remote-host 10.0.1.131<br>
option remote-port 6996<br>
option remote-subvolume posix2<br>
end-volume<br>
<br>
volume afr<br>
type cluster/afr<br>
subvolumes client1 client2<br>
end-volume<br>
<br>
volume rh<br>
type performance/read-ahead<br>
option page-size 100KB<br>
option page-count 3<br>
subvolumes afr<br>
end-volume<br>
<br>
volume wh<br>
type performance/write-behind<br>
option aggregate-size 1MB<br>
option flush-behind on<br>
subvolumes rh<br>
end-volume<br>
<br>
<br>
server:<br>
<br>
volume posix1<br>
type storage/posix<br>
option directory /var/storage/glusterfs<br>
option debug on<br>
end-volume<br>
<br>
volume posix-locks<br>
type features/posix-locks<br>
option mandatory on<br>
subvolumes posix1<br>
end-volume<br>
<br>
volume io-thr<br>
type performance/io-threads<br>
option thread-count 2<br>
option cache-size 64MB<br>
subvolumes posix-locks<br>
end-volume<br>
<br>
volume server<br>
type protocol/server<br>
option transport-type tcp/server<br>
option listen-port 6996<br>
subvolumes io-thr<br>
option auth.ip.posix1.allow 10.*.*.*<br>
end-volume<br>
<br>
<br></div>
<br>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users" target="_blank">http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users</a><br>
<br></blockquote></div><br><br clear="all"><br>-- <br>Raghavendra G<br><br>