<div dir="ltr"><div><div><div><div><div><div><span style="font-family:courier new,monospace">Hi All,<br></span></div><span style="font-family:courier new,monospace"> I am wondering if I am the only one seeing this, or whether there are good reasons why mount.glusterfs returns 0 (which means success) as the exit code for error cases?<br>
</span></div><span style="font-family:courier new,monospace">Because of this, the Cinder (OpenStack service) code is misled: it thinks mounting a GlusterFS volume onto an already-mounted mount point succeeded, and it never enters the warning/error flow!<br>
</span></div><div><span style="font-family:courier new,monospace">(Not to mention that I spent more than a day debugging before reaching that conclusion!)<br></span></div><div><span style="font-family:courier new,monospace"><br></span></div>
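To show why the exit code matters: any caller that branches on the exit status (as Cinder effectively does) cannot see the failure. A minimal illustration, with a made-up stub standing in for mount.glusterfs (this is not the actual Cinder code):

```shell
# Stub mimicking mount.glusterfs's current behavior on the
# "already mounted" path: it prints a message but exits 0.
fake_mount_glusterfs() {
    echo "$0: according to mtab, GlusterFS is already mounted on $1"
    exit 0
}

# A caller branching on the exit status, as Cinder effectively does:
if (fake_mount_glusterfs /mnt/vol); then
    echo "caller: mount looks successful"   # this branch is taken
else
    echo "caller: mount failed"             # the error flow is never reached
fi
```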
<span style="font-family:courier new,monospace">I just did a quick sanity check to compare how mount.nfs and mount.glusterfs behave in a similar error scenario, and below is what I found:<br><br></span>
<p><span style="font-family:courier new,monospace">
[stack@devstack-vm cinder]$ df -h<br>
Filesystem Size Used Avail Use% Mounted on<br>
/dev/vda1 9.9G 3.7G 6.1G 38% /<br>
devtmpfs 2.0G 0 2.0G 0% /dev<br>
tmpfs 2.0G 0 2.0G 0% /dev/shm<br>
tmpfs 2.0G 448K 2.0G 1% /run<br>
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup<br>
192.168.122.252:/opt/stack/nfs/brick 9.9G 3.7G 6.1G 38% /opt/stack/data/cinder/mnt/f23011fcca5ae3a8b8ebfd7e4af2e190</span></p><p><span style="font-family:courier new,monospace">
[stack@devstack-vm cinder]$ sudo mount -t nfs 192.168.122.252:/opt/stack/nfs/brick /opt/stack/data/cinder/mnt/f23011fcca5ae3a8b8ebfd7e4af2e190/<br>
mount.nfs: /opt/stack/data/cinder/mnt/f23011fcca5ae3a8b8ebfd7e4af2e190 is busy or already mounted<br>
<b>[stack@devstack-vm cinder]$ echo $?<br>
32</b></span></p><span style="font-family:courier new,monospace">
NOTE: mount.nfs exits with a proper error code (32, which mount(8) documents as "mount failure")<br><br></span>
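That nonzero status is what makes the NFS case detectable by scripts. A sketch with a stub in place of the real mount call (32 matches the "mount failure" code in mount(8); the stub name is made up):

```shell
# Stub standing in for `mount -t nfs ...` hitting an already-mounted
# target; mount.nfs returns 32 ("mount failure") in that case.
fake_mount_nfs_busy() {
    echo "mount.nfs: $1 is busy or already mounted"
    return 32
}

rc=0
fake_mount_nfs_busy /opt/stack/data/cinder/mnt/demo || rc=$?
if [ "$rc" -ne 0 ]; then
    echo "caller: mount failed with status $rc"   # reached, with rc=32
fi
```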
<p><span style="font-family:courier new,monospace">
[stack@devstack-vm ~]$ df -h<br>
Filesystem Size Used Avail Use% Mounted on<br>
/dev/vda1 9.9G 3.7G 6.1G 38% /<br>
devtmpfs 2.0G 0 2.0G 0% /dev<br>
tmpfs 2.0G 0 2.0G 0% /dev/shm<br>
tmpfs 2.0G 448K 2.0G 1% /run<br>
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup<br>
devstack-vm.localdomain:/gvol1 9.9G 3.7G 6.1G 38% /opt/stack/data/cinder/mnt/d45ccec4f1572f6f242b70befa3d80fe<br>
devstack-vm.localdomain:/gvol2 9.9G 3.7G 6.1G 38% /opt/stack/data/cinder/mnt/413c1f8d14058d5b2d07f8a92814bd12</span></p><p><span style="font-family:courier new,monospace">
[stack@devstack-vm ~]$ sudo mount -t glusterfs devstack-vm.localdomain:/gvol1 /opt/stack/data/cinder/mnt/d45ccec4f1572f6f242b70befa3d80fe/<br>
/sbin/mount.glusterfs: according to mtab, GlusterFS is already mounted on /opt/stack/data/cinder/mnt/d45ccec4f1572f6f242b70befa3d80fe<br>
<b>[stack@devstack-vm ~]$ echo $?<br>
0</b><br></span>
</p><span style="font-family:courier new,monospace">
NOTE: mount.glusterfs exits with 0 (success)<br><br>******************************************************************************************<br><br></span></div><span style="font-family:courier new,monospace">A quick look at mount.glusterfs yields...<br>
<br> # No need to do a ! -d test, it is taken care while initializing the<br> # variable mount_point<br> [ -z "$mount_point" -o ! -d "$mount_point" ] && {<br> echo "ERROR: Mount point does not exist."<br>
usage;<br><b> exit 0;</b><br> }<br><br> # Simple check to avoid multiple identical mounts<br> if grep -q "[[:space:]+]${mount_point}[[:space:]+]fuse" $mounttab; then<br> echo -n "$0: according to mtab, GlusterFS is already mounted on "<br>
echo "$mount_point"<br><b> exit 0;</b><br> fi<br><br>******************************************************************<br><br></span></div><span style="font-family:courier new,monospace">Is this intended behavior, a bug, or is there some history behind why mount.glusterfs returns 0 for many obvious error cases?<br>
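If this is agreed to be a bug, here is a rough sketch of the kind of change I have in mind for the "already mounted" path (illustrative only, not a patch; 32 is chosen purely to match mount.nfs / mount(8), and the function name is made up):

```shell
# Illustrative sketch, not the actual mount.glusterfs source: return a
# mount(8)-style nonzero status ("mount failure") instead of exit 0.
EX_FAIL=32

check_not_already_mounted() {
    mount_point=$1
    mounttab=$2
    # Mirrors the grep check quoted above (pattern simplified here).
    if grep -q "[[:space:]]${mount_point}[[:space:]]fuse" "$mounttab"; then
        echo "according to mtab, GlusterFS is already mounted on ${mount_point}"
        return $EX_FAIL   # was: exit 0
    fi
    return 0
}
```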
<br></span></div><span style="font-family:courier new,monospace">thanks,<br>deepak<br><br></span></div>