<div dir="ltr"><div>Hi,</div><div><br></div>I created replicated vol with two bricks on the same node and copied some data to it.<div><br></div><div>Now removed the disk which has hosted one of the brick of the volume.</div><div><br></div><div><div>Storage.health-check-interval is set to 30 seconds.<br></div></div><div><br></div><div>I could see the disk is unavailable using zpool command of zfs on linux but the gluster volume status still displays the brick process running which should have been shutdown by this time.</div><div><br></div><div>Is this a bug in 3.6 since it is mentioned as feature &quot;<a href="https://github.com/gluster/glusterfs/blob/release-3.6/doc/features/brick-failure-detection.md">https://github.com/gluster/glusterfs/blob/release-3.6/doc/features/brick-failure-detection.md</a>&quot;  or am I doing any mistakes here?</div><div><br></div><div><div>[root@fractal-c92e gluster-3.6]# gluster volume status</div><div>Status of volume: repvol</div><div>Gluster process<span class="" style="white-space:pre">                                                </span>Port<span class="" style="white-space:pre">        </span>Online<span class="" style="white-space:pre">        </span>Pid</div><div>------------------------------------------------------------------------------</div><div>Brick 192.168.1.246:/zp1/brick1<span class="" style="white-space:pre">                                </span>49154<span class="" style="white-space:pre">        </span>Y<span class="" style="white-space:pre">        </span>17671</div><div>Brick 192.168.1.246:/zp2/brick2<span class="" style="white-space:pre">                                </span>49155<span class="" style="white-space:pre">        </span>Y<span class="" style="white-space:pre">        </span>17682</div><div>NFS Server on localhost<span class="" style="white-space:pre">                                        </span>2049<span class="" style="white-space:pre">        </span>Y<span class="" style="white-space:pre">        </span>17696</div><div>Self-heal Daemon on localhost<span class="" style="white-space:pre">                                </span>N/A<span class="" style="white-space:pre">        </span>Y<span class="" style="white-space:pre">        </span>17701</div><div> </div><div>Task Status of Volume repvol</div><div>------------------------------------------------------------------------------</div><div>There are no active volume tasks</div></div><div><br></div><div><br></div><div><div>[root@fractal-c92e gluster-3.6]# gluster volume info</div><div> </div><div>Volume Name: repvol</div><div>Type: Replicate</div><div>Volume ID: d4f992b1-1393-43b8-9fda-2e2b6e3b5039</div><div>Status: Started</div><div>Number of Bricks: 1 x 2 = 2</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: 192.168.1.246:/zp1/brick1</div><div>Brick2: 192.168.1.246:/zp2/brick2</div><div>Options Reconfigured:</div><div>storage.health-check-interval: 30</div></div><div><br></div><div><div>[root@fractal-c92e gluster-3.6]# zpool status zp2</div><div>  pool: zp2</div><div> state: UNAVAIL</div><div>status: One or more devices are faulted in response to IO failures.</div><div>action: Make sure the affected devices are connected, then run &#39;zpool clear&#39;.</div><div>   see: <a href="http://zfsonlinux.org/msg/ZFS-8000-HC">http://zfsonlinux.org/msg/ZFS-8000-HC</a></div><div>  scan: none requested</div><div>config:</div><div><br></div><div><span class="" style="white-space:pre">        </span>NAME        STATE     READ WRITE CKSUM</div><div><span class="" 
style="white-space:pre">        </span>zp2         UNAVAIL      0     0     0  insufficient replicas</div><div><span class="" style="white-space:pre">        </span>  sdb       UNAVAIL      0     0     0</div><div><br></div><div>errors: 2 data errors, use &#39;-v&#39; for a list</div></div><div><br></div><div><br></div><div>Thanks,</div><div>Kiran.</div></div>