<div dir="ltr">Symlinking gluster to /usr/bin/ seems to have resolved the path issue. Thanks for the tip there. <div><br></div><div>Now there's a different error thrown in the geo-rep/ssh...log:</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">[2014-12-10 07:32:42.609031] E [syncdutils(monitor):240:log_raise_exception] <top>: FAIL:</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Traceback (most recent call last):</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 150, in main</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> main_i()</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 530, in main_i</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> return monitor(*rscs)</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> File "/usr/local/libexec/glusterfs/python/syncdaemon/monitor.py", line 243, in 
monitor</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> return Monitor().multiplex(*distribute(*resources))</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> File "/usr/local/libexec/glusterfs/python/syncdaemon/monitor.py", line 205, in distribute</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> mvol = Volinfo(master.volume, master.host)</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> File "/usr/local/libexec/glusterfs/python/syncdaemon/monitor.py", line 22, in __init__</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> vi = XET.fromstring(vix)</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> File "/usr/lib64/python2.6/xml/etree/ElementTree.py", line 963, in XML</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> parser.feed(text)</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> File "/usr/lib64/python2.6/xml/etree/ElementTree.py", line 1245, in feed</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> self._parser.Parse(data, 0)</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">ExpatError: syntax error: line 2, column 0</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">[2014-12-10 07:32:42.610858] I [syncdutils(monitor):192:finalize] <top>: exiting.</blockquote></blockquote><div><br></div><div>I also get a bunch of these errors but have been assuming that they are being thrown because geo-replication hasn't started successfully yet. There is one for each brick:</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">[2014-12-10 12:33:33.539737] E [glusterd-geo-rep.c:2685:glusterd_gsync_read_frm_status] 0-: Unable to read gsyncd status file</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">[2014-12-10 12:33:33.539742] E [glusterd-geo-rep.c:2999:glusterd_read_status_file] 0-: Unable to read the statusfile for /mnt/a-3-shares-brick-4/brick brick for shares(master), gfs-a-bkp::bkpshares(slave) session</blockquote></blockquote><div><br></div><div>Do I have a config file error somewhere that I need to track down? This volume <i>was</i> upgraded from 3.4.2 a few weeks ago. 
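<div><br></div><div>For anyone chasing the same trace: Volinfo in monitor.py hands the gluster CLI's XML output straight to ElementTree, so this ExpatError just means something non-XML reached the parser. A stand-alone sketch (not the gsyncd code path; the sample strings are invented) of how a stray line ahead of the XML payload produces this exact error:</div>

```python
# Stand-alone sketch: ElementTree chokes the same way gsyncd's monitor
# does when non-XML text precedes the CLI's XML payload.
import xml.etree.ElementTree as XET

clean = "<cliOutput><opRet>0</opRet></cliOutput>"  # invented sample payload
XET.fromstring(clean)  # parses fine

# A stray warning line ahead of the payload (after a leading blank line)
# reproduces the reported error:
dirty = "\nsome stray warning\n" + clean
try:
    XET.fromstring(dirty)
except XET.ParseError as err:  # surfaces as ExpatError on Python 2.6
    print(err)  # syntax error: line 2, column 0
```

<div><br></div><div>So a practical check would be to run the CLI the way the daemon does (something along the lines of <i>gluster --xml volume info shares</i>, possibly with a remote-host flag) and eyeball the first lines of output for a banner or warning.</div>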
<br></div><div><br></div><div>Cheers,</div><div>Dave </div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Dec 10, 2014 at 7:29 AM, David Gibbons <span dir="ltr"><<a href="mailto:david.c.gibbons@gmail.com" target="_blank">david.c.gibbons@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Kotresh,<div><br></div><div>Thanks for the tip. Unfortunately that does not seem to have any effect. The path to the gluster binaries was already in $PATH. I did try adding the path to the gsyncd binary, but same result. Contents of $PATH are:</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/libexec/glusterfs/<br></blockquote><div><br></div><div>It seems like perhaps one of the remote gsyncd processes cannot find the gluster binary, because I see the following in the geo-replication/shares/ssh...log. 
Can you point me toward how I can find out what is throwing this log entry?</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">[2014-12-10 07:20:53.886676] E [syncdutils(monitor):218:log_raise_exception] <top>: execution of "gluster" failed with ENOENT (No such file or directory)</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">[2014-12-10 07:20:53.886883] I [syncdutils(monitor):192:finalize] <top>: exiting.</blockquote></blockquote><div><br></div><div>I think that whatever process is trying to use the gluster command has the incorrect path to access it. Do you know how I could modify <i>that</i> path? </div><div><br></div><div>I've manually tested the ssh_command and ssh_command_tar variables in the relevant gsyncd.conf; both connect to the slave server successfully and appear to execute the command they're supposed to.</div><div><br></div><div>gluster_command_dir in gsyncd.conf is also the correct directory (/usr/local/sbin).</div><div><br></div><div>In summary: I think we're on to something with setting the path, but I think I need to set it somewhere other than my shell.</div><div> </div><div>Thanks,</div><div>Dave</div><div> </div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Dec 9, 2014 at 11:52 PM, Kotresh Hiremath Ravishankar <span dir="ltr"><<a href="mailto:khiremat@redhat.com" target="_blank">khiremat@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">If that is the case, as a workaround, try adding 'gluster' path<br>
to PATH environment variable or creating symlinks to gluster,<br>
glusterd binaries.<br>
<br>
1. export PATH=$PATH:<path where gluster binaries are installed><br>
<br>
The above should work; let me know if it doesn't.<br>
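<br>To convince yourself it really is just exec lookup, here is a self-contained demo with a throwaway stub standing in for the real binary (on the cluster, append the real install dir such as /usr/local/sbin instead):<br>

```shell
# Throwaway stub in place of the real gluster binary, to show that
# appending its directory to PATH is all exec lookup needs:
bindir=$(mktemp -d)
printf '#!/bin/sh\necho gluster-stub\n' > "$bindir/gluster"
chmod +x "$bindir/gluster"
export PATH="$PATH:$bindir"
command -v gluster   # lookup now resolves
```

<br>The same logic is why symlinking the binaries into /usr/bin (already on every default PATH) works as an alternative.<br>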
<span><br>
Thanks and Regards,<br>
Kotresh H R<br>
<br>
----- Original Message -----<br>
From: "David Gibbons" <<a href="mailto:david.c.gibbons@gmail.com" target="_blank">david.c.gibbons@gmail.com</a>><br>
</span><div><div>To: "Kotresh Hiremath Ravishankar" <<a href="mailto:khiremat@redhat.com" target="_blank">khiremat@redhat.com</a>><br>
Cc: "gluster-users" <<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>>, <a href="mailto:vnosov@stonefly.com" target="_blank">vnosov@stonefly.com</a><br>
Sent: Tuesday, December 9, 2014 6:16:03 PM<br>
Subject: Re: [Gluster-users] Geo-Replication Issue<br>
<br>
Hi Kotresh,<br>
<br>
Yes, I believe that I am. Can you tell me which symlinks are missing/cause<br>
geo-replication to fail to start? I can create them manually.<br>
<br>
Thank you,<br>
Dave<br>
<br>
On Tue, Dec 9, 2014 at 3:54 AM, Kotresh Hiremath Ravishankar <<br>
<a href="mailto:khiremat@redhat.com" target="_blank">khiremat@redhat.com</a>> wrote:<br>
<br>
> Hi Dave,<br>
><br>
> Are you hitting the bug below, and so are unable to sync symlinks?<br>
> <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1105283" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1105283</a><br>
><br>
> Does geo-rep status say "Not Started"?<br>
><br>
> Thanks and Regards,<br>
> Kotresh H R<br>
><br>
> ----- Original Message -----<br>
> From: "David Gibbons" <<a href="mailto:david.c.gibbons@gmail.com" target="_blank">david.c.gibbons@gmail.com</a>><br>
> To: "gluster-users" <<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>><br>
> Cc: <a href="mailto:vnosov@stonefly.com" target="_blank">vnosov@stonefly.com</a><br>
> Sent: Monday, December 8, 2014 7:03:31 PM<br>
> Subject: Re: [Gluster-users] Geo-Replication Issue<br>
><br>
> Apologies for sending so many messages about this! I think I may be<br>
> running into this bug:<br>
> <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1105283" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1105283</a><br>
><br>
> Would someone be so kind as to let me know which symlinks are missing when<br>
> this bug manifests, so that I can create them?<br>
><br>
> Thank you,<br>
> Dave<br>
><br>
><br>
> On Sun, Dec 7, 2014 at 11:01 AM, David Gibbons < <a href="mailto:david.c.gibbons@gmail.com" target="_blank">david.c.gibbons@gmail.com</a><br>
> > wrote:<br>
><br>
><br>
><br>
> Ok,<br>
><br>
> I was able to get geo-replication configured by changing<br>
> /usr/local/libexec/glusterfs/gverify.sh to use ssh to access the local<br>
> machine, instead of accessing bash -c directly. I then found that the hook<br>
> script was missing for geo-replication, so I copied that over manually. I<br>
> now have what appears to be a "configured" geo-rep setup:<br>
><br>
><br>
><br>
><br>
> # gluster volume geo-replication shares gfs-a-bkp::bkpshares status<br>
><br>
><br>
><br>
><br>
> MASTER NODE    MASTER VOL    MASTER BRICK                     SLAVE                   STATUS         CHECKPOINT STATUS    CRAWL STATUS<br>
> --------------------------------------------------------------------------------------------------------------------------------------------------------<br>
> gfs-a-3    shares    /mnt/a-3-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-3    shares    /mnt/a-3-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-3    shares    /mnt/a-3-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-3    shares    /mnt/a-3-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-2    shares    /mnt/a-2-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-2    shares    /mnt/a-2-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-2    shares    /mnt/a-2-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-2    shares    /mnt/a-2-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-4    shares    /mnt/a-4-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-4    shares    /mnt/a-4-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-4    shares    /mnt/a-4-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-4    shares    /mnt/a-4-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-1    shares    /mnt/a-1-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-1    shares    /mnt/a-1-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-1    shares    /mnt/a-1-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
> gfs-a-1    shares    /mnt/a-1-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A    N/A<br>
><br>
> So that's a step in the right direction (and I can upload a patch for<br>
> gverify to a bugzilla). However, gverify *should* have worked with bash -c,<br>
> and I was not able to figure out why it didn't work, other than it didn't<br>
> seem able to find some programs. I'm thinking that maybe the PATH variable<br>
> is wrong for Gluster, and that's why gverify didn't work out of the box.<br>
><br>
> When I attempt to start geo-rep now, I get the following in the geo-rep<br>
> log:<br>
><br>
><br>
> [2014-12-07 10:52:40.893594] E<br>
> [syncdutils(monitor):218:log_raise_exception] <top>: execution of "gluster"<br>
> failed with ENOENT (No such file or directory)<br>
><br>
> [2014-12-07 10:52:40.893886] I [syncdutils(monitor):192:finalize] <top>:<br>
> exiting.<br>
><br>
> Which seems to agree that maybe gluster isn't running with the same path<br>
> variable that my console session is running with. Is this possible? I know<br>
> I'm grasping :).<br>
><br>
> Any nudge in the right direction would be very much appreciated!<br>
><br>
> Cheers,<br>
> Dave<br>
><br>
><br>
> On Sat, Dec 6, 2014 at 10:06 AM, David Gibbons < <a href="mailto:david.c.gibbons@gmail.com" target="_blank">david.c.gibbons@gmail.com</a><br>
> > wrote:<br>
><br>
><br>
><br>
> Good Morning,<br>
><br>
> I am having some trouble getting geo-replication started on a 3.5.3 volume.<br>
><br>
> I have verified that password-less SSH is functional in both directions<br>
> from the backup gluster server, and all nodes in the production gluster. I<br>
> have verified that all nodes in production and backup cluster are running<br>
> the same version of gluster, and that name resolution works in both<br>
> directions.<br>
><br>
> When I attempt to start geo-replication with this command:<br>
><br>
><br>
> gluster volume geo-replication shares gfs-a-bkp::bkpshares create push-pem<br>
><br>
> I end up with the following in the logs:<br>
><br>
><br>
> [2014-12-06 15:02:50.284426] E<br>
> [glusterd-geo-rep.c:1889:glusterd_verify_slave] 0-: Not a valid slave<br>
><br>
> [2014-12-06 15:02:50.284495] E<br>
> [glusterd-geo-rep.c:2106:glusterd_op_stage_gsync_create] 0-:<br>
> gfs-a-bkp::bkpshares is not a valid slave volume. Error: Unable to fetch<br>
> master volume details. Please check the master cluster and master volume.<br>
><br>
> [2014-12-06 15:02:50.284509] E [glusterd-syncop.c:912:gd_stage_op_phase]<br>
> 0-management: Staging of operation 'Volume Geo-replication Create' failed<br>
> on localhost : Unable to fetch master volume details. Please check the<br>
> master cluster and master volume.<br>
><br>
> Would someone be so kind as to point me in the right direction?<br>
><br>
> Cheers,<br>
> Dave<br>
><br>
><br>
><br>
> _______________________________________________<br>
> Gluster-users mailing list<br>
> <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
> <a href="http://supercolony.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a><br>
><br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>