<div dir="ltr"><div>Version: glusterfs-server-3.4.2-1.el6.x86_64</div><div><br></div><div>I have an issue where I'm not getting the correct status for geo-replication, as shown below. I have also been unable to stop geo-replication without using a firewall rule on the slave: the stop command returns a cryptic error, and nothing useful appears in the logs.</div>
<div><br></div><div><div># gluster volume geo-replication status</div><div>NODE                 MASTER               SLAVE                                              STATUS    </div><div>---------------------------------------------------------------------------------------------------</div>
<div>ovirt001.miovision.corp rep1                 gluster://10.0.11.4:/rep1                          faulty    </div><div>ovirt001.miovision.corp miofiles             gluster://10.0.11.4:/miofiles                      faulty    </div>
<div><br></div><div># gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 start<br></div><div>geo-replication session between rep1 &amp; gluster://10.0.11.4:/rep1 already started</div><div>geo-replication command failed</div>
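For reference, these are the management commands I am using, per the glusterfs 3.4 CLI syntax (volume and slave URL are the same as above); the stop is the one that only succeeds after blocking the slave with a firewall rule:

```shell
# Check session status (same output as pasted above).
gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 status

# Attempt to stop the session -- this is the command that fails for me
# with a cryptic error unless the slave (10.0.11.4) is firewalled off.
gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 stop
```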
<div><br></div><div>[root@ovirt001 ~]# gluster volume geo-replication status</div><div>NODE                 MASTER               SLAVE                                              STATUS    </div><div>---------------------------------------------------------------------------------------------------</div>
<div>ovirt001.miovision.corp rep1                 gluster://10.0.11.4:/rep1                          faulty    </div><div>ovirt001.miovision.corp miofiles             gluster://10.0.11.4:/miofiles                      faulty    </div>
</div><div><br></div><div><br></div><div>How can I manually remove a geo-replication session (agreement) from the configuration?</div><div><br>Thanks,</div><div><br></div><br clear="all"><div><div dir="ltr"><span style="font-family:arial,sans-serif;font-size:16px"><strong>Steve <br>
</strong></span></div></div>
</div>