<div dir="ltr">Fixed by editing the geo-rep volume's gsyncd.conf file, changing /nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both master nodes.<div><br></div><div>Any reason why this is in the default template? Also, any reason why, when I stop glusterd, change the template on both master nodes, and start the gluster service again, it's overwritten?</div>
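<div><br></div><div>For what it's worth, the same change can apparently be made through the gluster CLI instead of editing gsyncd.conf by hand, which should persist across glusterd restarts (the session names below match this thread; the exact config option name is an assumption based on the remote_gsyncd key in gsyncd.conf):</div>

```shell
# Sketch, not verified on 3.5: point the geo-rep session at the real
# gsyncd binary on the slave, run from one of the master nodes.
# "remote-gsyncd" is assumed to map to the remote_gsyncd key in gsyncd.conf.
gluster volume geo-replication rep1 10.0.11.4::rep1 config remote-gsyncd /usr/libexec/glusterfs/gsyncd
```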
<div><br></div><div>Steve</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Apr 29, 2014 at 12:11 PM, Steve Dainard <span dir="ltr">&lt;<a href="mailto:sdainard@miovision.com" target="_blank">sdainard@miovision.com</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Just set up geo-replication between two replica 2 pairs, gluster version 3.5.0.2.</div><div><br></div>
<div>Following this guide: <a href="https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html" target="_blank">https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html</a></div>

<div><br></div><div>Status is faulty/passive:</div><div><br></div><div><div># gluster volume geo-replication rep1 10.0.11.4::rep1 status</div><div> </div><div>MASTER NODE                MASTER VOL    MASTER BRICK                           SLAVE              STATUS     CHECKPOINT STATUS    CRAWL STATUS        </div>

<div>------------------------------------------------------------------------------------------------------------------------------------------------</div><div>ovirt001.miovision.corp    rep1          /mnt/storage/lv-storage-domain/rep1    10.0.11.4::rep1    faulty     N/A                  N/A                 </div>

<div>ovirt002.miovision.corp    rep1          /mnt/storage/lv-storage-domain/rep1    10.0.11.5::rep1    Passive    N/A                  N/A                 </div></div><div><br></div><div><br></div>geo-replication log from master:<div>

<br></div><div><div>[2014-04-29 12:00:07.178314] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------</div><div>[2014-04-29 12:00:07.178550] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker</div>

<div>[2014-04-29 12:00:07.344643] I [gsyncd(/mnt/storage/lv-storage-domain/rep1):532:main_i] &lt;top&gt;: syncing: gluster://localhost:rep1 -&gt; ssh://root@10.0.11.4:gluster://localhost:rep1</div><div>[2014-04-29 12:00:07.357718] D [repce(/mnt/storage/lv-storage-domain/rep1):175:push] RepceClient: call 21880:139789410989824:1398787207.36 __repce_version__() ...</div>

<div>[2014-04-29 12:00:07.631556] E [syncdutils(/mnt/storage/lv-storage-domain/rep1):223:log_raise_exception] &lt;top&gt;: connection to peer is broken</div><div>[2014-04-29 12:00:07.631808] W [syncdutils(/mnt/storage/lv-storage-domain/rep1):227:log_raise_exception] &lt;top&gt;: !!!!!!!!!!!!!</div>

<div>[2014-04-29 12:00:07.631947] W [syncdutils(/mnt/storage/lv-storage-domain/rep1):228:log_raise_exception] &lt;top&gt;: !!! getting &quot;No such file or directory&quot; errors is most likely due to MISCONFIGURATION, please consult <a href="https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html" target="_blank">https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html</a></div>

<div>[2014-04-29 12:00:07.632061] W [syncdutils(/mnt/storage/lv-storage-domain/rep1):231:log_raise_exception] &lt;top&gt;: !!!!!!!!!!!!!</div><div>[2014-04-29 12:00:07.632251] E [resource(/mnt/storage/lv-storage-domain/rep1):204:errlog] Popen: command &quot;ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-0_wqaI/cb6bb9e3af32ccbb7c8c0ae955f728db.sock <a href="mailto:root@10.0.11.4" target="_blank">root@10.0.11.4</a> /nonexistent/gsyncd --session-owner c0a6c74c-deb5-4ed0-9ef9-23756d593197 -N --listen --timeout 120 gluster://localhost:rep1&quot; returned with 127, saying:</div>

<div>[2014-04-29 12:00:07.632396] E [resource(/mnt/storage/lv-storage-domain/rep1):207:logerr] Popen: ssh&gt; bash: /nonexistent/gsyncd: No such file or directory</div><div>[2014-04-29 12:00:07.632689] I [syncdutils(/mnt/storage/lv-storage-domain/rep1):192:finalize] &lt;top&gt;: exiting.</div>

<div>[2014-04-29 12:00:07.634656] I [monitor(monitor):150:monitor] Monitor: worker(/mnt/storage/lv-storage-domain/rep1) died before establishing connection</div></div><div><br></div><div>Thanks,</div><div>Steve</div><div>

<br></div></div>
</blockquote></div><br></div>