<div dir="ltr">Hi.<br><br>Same as advised on this list, see below.<br><br>By the way, I restarted both the clients and servers, and the reported size is still the same.<br>Whichever it is, it stuck quite persistently :).<br>
server.vol

volume home1
  type storage/posix                # POSIX FS translator
  option directory /media/storage   # Export this directory
end-volume

volume posix-locks-home1
  type features/posix-locks
  option mandatory-locks on
  subvolumes home1
end-volume

### Add network serving capability to above home.
volume server
  type protocol/server
  option transport-type tcp
  subvolumes posix-locks-home1
  option auth.addr.posix-locks-home1.allow *   # Allow access to "home1" volume
end-volume
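(For completeness: each server runs glusterfsd against this spec file. The path below is just where we keep it; a minimal sketch of the invocation:)

# On each server (192.168.253.41 and 192.168.253.42)
glusterfsd -f /etc/glusterfs/server.vol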
client.vol

## Reference volume "home1" from remote server
volume home1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.253.41           # IP address of remote host
  option remote-subvolume posix-locks-home1   # use home1 on remote host
  option transport-timeout 10                 # value in seconds; it should be set relatively low
end-volume

## Reference volume "home2" from remote server
volume home2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.253.42           # IP address of remote host
  option remote-subvolume posix-locks-home1   # use home1 on remote host
  option transport-timeout 10                 # value in seconds; it should be set relatively low
end-volume

volume home
  type cluster/afr
  option metadata-self-heal on
  subvolumes home1 home2
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB
  option window-size 1MB
  subvolumes home
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
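For reference, the client mounts with this spec file, and (as I understand from the docs) walking the tree is the usual way to make cluster/afr re-check every file; a sketch, with paths from our setup:

# On the client: mount the cluster using the spec file above
glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs

# Read the first byte of every file so AFR self-heal gets triggered
find /mnt/glusterfs -type f -print0 | xargs -0 head -c1 > /dev/null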
Regards.

2009/3/26 Vikas Gorur <vikas@zresearch.com>:
> 2009/3/26 Stas Oskin <stas.oskin@gmail.com>:
>> Hi.
>>
>> We erased all the data from our mount point, but the df still reports
>> it's almost full:
>>
>> glusterfs    31G   27G   2.5G   92%   /mnt/glusterfs
>>
>> Running du either in the mount point, or in the back-end directory,
>> reports 914M.
>>
>> How do we get the space back?
>
> What is your client and server configuration?
>
> Vikas
> --
> Engineer - Z Research
> http://gluster.com/