<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 10pt;
font-family:Tahoma
}
--></style></head>
<body class='hmmessage'><div dir='ltr'>
<br><div>

<style><!--
.ExternalClass .ecxhmmessage P
{padding:0px;}
.ExternalClass body.ecxhmmessage
{font-size:10pt;font-family:Tahoma;}

--></style>Hi all,<br><div dir="ltr"><br>I've got an issue: the size reported by df -h seems to grow indefinitely. Any help would be appreciated.<br><br>Some details:<br><br>On the client:<br><pre>
yval9000:/users98 # df -h .
Filesystem            Size  Used Avail Use% Mounted on
ylal3510:/poolsave/yval9000
                      1.7T  1.7T   25G  99% /users98

yval9000:/users98 # du -ch .
5.1G    /users98
</pre><br>My logs are full of:<br><pre>
[2012-04-27 12:14:32.402972] I [client3_1-fops.c:683:client3_1_writev_cbk] 0-poolsave-client-1: remote operation failed: No space left on device
[2012-04-27 12:14:32.426964] I [client3_1-fops.c:683:client3_1_writev_cbk] 0-poolsave-client-1: remote operation failed: No space left on device
[2012-04-27 12:14:32.439424] I [client3_1-fops.c:683:client3_1_writev_cbk] 0-poolsave-client-1: remote operation failed: No space left on device
[2012-04-27 12:14:32.441505] I [client3_1-fops.c:683:client3_1_writev_cbk] 0-poolsave-client-0: remote operation failed: No space left on device
</pre><br>This is my volume config:<br><pre>
Volume Name: poolsave
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ylal3510:/users3/poolsave
Brick2: ylal3530:/users3/poolsave
Brick3: ylal3520:/users3/poolsave
Brick4: ylal3540:/users3/poolsave
Options Reconfigured:
nfs.enable-ino32: off
features.quota-timeout: 30
features.quota: off
performance.cache-size: 6GB
network.ping-timeout: 60
performance.cache-min-file-size: 1KB
performance.cache-max-file-size: 4GB
performance.cache-refresh-timeout: 2
nfs.port: 2049
performance.io-thread-count: 64
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
</pre><br>Space left on the servers:<br><pre>
ylal3510:/users3 # df -h .
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/users-users3vol
                      858G  857G  1.1G 100% /users3
ylal3510:/users3 # du -ch /users3 | grep total
129G    total
---
ylal3530:/users3 # df -h .
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/users-users3vol
                      858G  857G  1.1G 100% /users3
ylal3530:/users3 # du -ch /users3 | grep total
129G    total
---
ylal3520:/users3 # df -h .
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/users-users3vol
                      858G  835G   24G  98% /users3
ylal3520:/users3 # du -ch /users3 | grep total
182G    total
---
ylal3540:/users3 # df -h .
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/users-users3vol
                      858G  833G   25G  98% /users3
ylal3540:/users3 # du -ch /users3 | grep total
181G    total
</pre><br>This issue appeared after the two scripts below had been running for two weeks:<br><br><pre class="ecxbz_comment_text" 
id="ecxcomment_text_0">test_save.sh runs every hour: it compresses a batch of data into a staging
directory (REP_SAVE_TEMP) and then moves the archive into a folder (REP_SAVE)
that the netback.sh script scans every 30 minutes.

#!/usr/bin/ksh
# ________________________________________________________________________
#             |
# Name        test_save.sh
# ____________|___________________________________________________________
#             |
# Description | test GlusterFS
# ____________|___________________________________________________________

UNIXSAVE=/users98/test
REP_SAVE_TEMP=${UNIXSAVE}/tmp
REP_SAVE=${UNIXSAVE}/gluster
LOG=/users/glusterfs_test


f_tar_mv()
{
  echo "\n"
  ARCHNAME=${REP_SAVE_TEMP}/`date +%d-%m-%H-%M`_${SUBNAME}.tar

  tar -cpvf ${ARCHNAME} ${REPERTOIRE}

  echo "creation of ${ARCHNAME}"

  # mv ${REP_SAVE_TEMP}/*_${SUBNAME}.tar ${REP_SAVE}
  mv ${REP_SAVE_TEMP}/* ${REP_SAVE}
  RC=$?
  echo "Moving archive into ${REP_SAVE}"
  echo "\n"

  # return the status of the mv, not of the echo above
  return ${RC}
}

REPERTOIRE="/users2/"
SUBNAME="test_glusterfs_save"
f_tar_mv &gt;$LOG/save_`date +%d-%m-%Y-%H-%M`.log 2&gt;&amp;1
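One way to quantify the df/du gap shown above is a small helper that subtracts what du can account for from what df reports as used; space that df counts but du cannot see is typically blocks still held by unlinked-but-open files. This is an illustrative sketch, not part of the original scripts, and f_space_gap is a hypothetical name:

```shell
#!/usr/bin/ksh
# Hypothetical helper: print the difference, in 1K blocks, between the
# space df reports as used and the space du can account for on a mount.
f_space_gap()
{
  MNT=$1
  # df -P forces one-line POSIX output, so field 3 is always "Used"
  DF_USED=`df -Pk ${MNT} | awk 'NR==2 {print $3}'`
  DU_USED=`du -sk ${MNT} 2>/dev/null | awk '{print $1}'`
  echo $((DF_USED - DU_USED))
}

f_space_gap /tmp
```

Run against /users3 on each brick, a large positive result would point at processes still holding deleted files open; where lsof is installed, "lsof +L1 /users3" lists such files.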


#!/usr/bin/ksh
# ________________________________________________________________________
#             |
# Name        netback.sh
# ____________|___________________________________________________________
#             |
# Description | GlusterFS backup test
# ____________|___________________________________________________________

UNIXSAVE=/users98/test
REP_SAVE_TEMP=${UNIXSAVE}/tmp
REP_SAVE=${UNIXSAVE}/gluster
LOG=/users/glusterfs_test

f_net_back()
{
  if [[ `find ${REP_SAVE} -type f | wc -l` -eq 0 ]]
  then
    echo "nothing to save"
    RC=0
  else
    echo "Simulating netbackup: tar to /dev/null"
    tar -cpvf /dev/null ${REP_SAVE}/*
    echo "deleting archives"
    rm ${REP_SAVE}/*
    RC=$?
  fi
  # return the status of the rm, not of an echo
  return ${RC}
}

f_net_back &gt;${LOG}/netback_`date +%d-%m-%H-%M`.log 2&gt;&amp;1</pre><br><br><br><br><br><br><br><br><br>                                               </div></div>                                               </div></body>
</html>