<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
<div class="moz-cite-prefix">On 04/30/2014 07:14 AM, Paul Cuzner
wrote:<br>
</div>
<blockquote
cite="mid:749597994.6490605.1398822246425.JavaMail.zimbra@redhat.com"
type="cite">
<div style="font-family: lucida console,sans-serif; font-size:
12pt; color: #000000">
<div>I guess my point about loss of data following a restore
boils down to the change of brick names that the restore
process "forces". I may be wrong but doesn't this mean that
any "downstream" monitoring/scripts have to adjust their idea
of what the volume looked like - it just sounds like more work
for the admin to me...<br>
</div>
</div>
</blockquote>
<br>
<br>
From a Nagios monitoring view, we have an auto-discovery plugin that
scans the volumes for any change. So once a volume snapshot is
restored, the restore will be detected as though new bricks were added
to the volume (due to the new brick names) and the older bricks deleted. <br>
What we're currently missing is a mechanism to know that the change
to the volume happened because of a snapshot restore. <br>
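To illustrate, the discovery pass essentially boils down to diffing the brick lists between scans. Here is a minimal, hypothetical Python sketch (the real plugin parses `gluster volume info` output; the brick paths below are invented for illustration):

```python
def diff_bricks(old_bricks, new_bricks):
    """Report which bricks appeared and disappeared between two scans.

    Hypothetical sketch of the auto-discovery comparison; the actual
    plugin derives these lists from `gluster volume info` output.
    """
    old, new = set(old_bricks), set(new_bricks)
    return {"added": sorted(new - old), "removed": sorted(old - new)}


# After a snapshot restore every brick path changes at once, which is
# indistinguishable (from the diff alone) from a mass
# remove-brick/add-brick. Example paths are made up:
before = ["host1:/bricks/vol1/b1", "host2:/bricks/vol1/b2"]
after = ["host1:/run/gluster/snaps/snap1/b1",
         "host2:/run/gluster/snaps/snap1/b2"]
changes = diff_bricks(before, after)
```

That ambiguity is exactly the gap mentioned above: the diff by itself cannot tell a snapshot restore apart from ordinary brick replacement.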
<br>
With regard to monitoring history, we plan to keep the previous
configuration as a backup, along with any performance data that was
collected for it.<br>
<br>
<br>
<blockquote
cite="mid:749597994.6490605.1398822246425.JavaMail.zimbra@redhat.com"
type="cite">
<div style="font-family: lucida console,sans-serif; font-size:
12pt; color: #000000">
<div><br>
</div>
<div>The other thing that wasn't clear from the video was the
impact on fstab. What happens here in relation to the brick
name change? <br>
</div>
<div><br>
</div>
<hr id="zwchr">
<blockquote style="border-left:2px solid
#1010FF;margin-left:5px;padding-left:5px;color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From:
</b>"Rajesh Joseph" <a class="moz-txt-link-rfc2396E" href="mailto:rjoseph@redhat.com"><rjoseph@redhat.com></a><br>
<b>To: </b>"Paul Cuzner" <a class="moz-txt-link-rfc2396E" href="mailto:pcuzner@redhat.com"><pcuzner@redhat.com></a><br>
<b>Cc: </b>"gluster-devel" <a class="moz-txt-link-rfc2396E" href="mailto:gluster-devel@nongnu.org"><gluster-devel@nongnu.org></a>,
"Sahina Bose" <a class="moz-txt-link-rfc2396E" href="mailto:sabose@redhat.com"><sabose@redhat.com></a><br>
<b>Sent: </b>Tuesday, 29 April, 2014 10:13:00 PM<br>
<b>Subject: </b>Re: [Gluster-devel] Snapshot CLI question<br>
<div><br>
</div>
Hi Paul,<br>
<div><br>
</div>
We are thinking of providing policy-driven auto-delete in the
future, where users can define various policies that control
auto-delete,<br>
e.g. delete the oldest snapshot, delete the one with the highest disk utilization, etc.
What you mentioned could also be part of such a policy.<br>
<div><br>
</div>
Loss of monitoring history has nothing to do with dm-thinp.
The monitoring tool keeps the history of changes seen by each
brick. Now after the restore<br>
the monitoring tool has no way to map the newer bricks to the
older bricks, so it discards the history of the older
bricks.<br>
I am sure the monitoring team has plans to evolve and fix
this.<br>
<div><br>
</div>
Best Regards,<br>
Rajesh<br>
<div><br>
</div>
<br>
----- Original Message -----<br>
From: "Paul Cuzner" <a class="moz-txt-link-rfc2396E" href="mailto:pcuzner@redhat.com"><pcuzner@redhat.com></a><br>
To: "Rajesh Joseph" <a class="moz-txt-link-rfc2396E" href="mailto:rjoseph@redhat.com"><rjoseph@redhat.com></a><br>
Cc: "gluster-devel" <a class="moz-txt-link-rfc2396E" href="mailto:gluster-devel@nongnu.org"><gluster-devel@nongnu.org></a>, "Sahina
Bose" <a class="moz-txt-link-rfc2396E" href="mailto:sabose@redhat.com"><sabose@redhat.com></a><br>
Sent: Tuesday, April 29, 2014 5:22:42 AM<br>
Subject: Re: [Gluster-devel] Snapshot CLI question<br>
<div><br>
</div>
No worries, Rajesh. <br>
<div><br>
</div>
Without --xml we're limiting the automation potential and
resorting to 'screen scraping' - so with that said, is --xml
in the plan, or does it need an RFE? <br>
<div><br>
</div>
Brick paths changing and the loss of history present an
interesting problem for monitoring and capacity planning -
especially if the data is lost! As an admin this would be a
real concern. Is this something that will evolve, or is it
just the flip side of using dm-thinp as the provider for the
volume/snapshots? <br>
<div><br>
</div>
The other question I raised was around triggers for
autodelete. Having a set number of snapshots is fine, but I've
seen environments in the past where autodelete was needed to
protect the pool when snapshot deltas were large - i.e.
autodelete gets triggered at a thinpool freespace threshold. <br>
<div><br>
</div>
Is this last item in the plan? Does it make sense? Does it
need an RFE? <br>
<div><br>
</div>
Cheers, <br>
<div><br>
</div>
PC <br>
<div><br>
</div>
----- Original Message -----<br>
<div><br>
</div>
> From: "Rajesh Joseph" <a class="moz-txt-link-rfc2396E" href="mailto:rjoseph@redhat.com"><rjoseph@redhat.com></a><br>
> To: "Paul Cuzner" <a class="moz-txt-link-rfc2396E" href="mailto:pcuzner@redhat.com"><pcuzner@redhat.com></a><br>
> Cc: "gluster-devel" <a class="moz-txt-link-rfc2396E" href="mailto:gluster-devel@nongnu.org"><gluster-devel@nongnu.org></a>,
"Sahina Bose"<br>
> <a class="moz-txt-link-rfc2396E" href="mailto:sabose@redhat.com"><sabose@redhat.com></a><br>
> Sent: Monday, 28 April, 2014 9:47:04 PM<br>
> Subject: Re: [Gluster-devel] Snapshot CLI question<br>
<div><br>
</div>
> Sorry, Paul, for the late reply.<br>
<div><br>
</div>
> As of now we are not supporting the --xml option.<br>
<div><br>
</div>
> And restore does change the brick path. Users using their
own monitoring<br>
> scripts need to be aware of this scenario.<br>
> RHS monitoring will monitor the new bricks once restored,
but the history<br>
> related to the older bricks might be lost.<br>
<div><br>
</div>
> Sahina: Would you like to comment on the monitoring part
of the question?<br>
<div><br>
</div>
> Thanks & Regards,<br>
> Rajesh<br>
<div><br>
</div>
> ----- Original Message -----<br>
> From: "Paul Cuzner" <a class="moz-txt-link-rfc2396E" href="mailto:pcuzner@redhat.com"><pcuzner@redhat.com></a><br>
> To: "gluster-devel" <a class="moz-txt-link-rfc2396E" href="mailto:gluster-devel@nongnu.org"><gluster-devel@nongnu.org></a><br>
> Sent: Wednesday, April 23, 2014 5:14:44 AM<br>
> Subject: [Gluster-devel] Snapshot CLI question<br>
<div><br>
</div>
> Hi,<br>
<div><br>
</div>
> Having seen some of the demos/material around the
snapshot CLI, it raised a<br>
> couple of questions:<br>
<div><br>
</div>
> Will --xml be supported?<br>
<div><br>
</div>
> The other question I have relates to brick names. In a
demo video I saw the<br>
> brick names change following a 'restore' operation (i.e.
vol info shows<br>
> different paths, pointing to the paths associated with
the snapshot).<br>
<div><br>
</div>
> Is this the case currently, and if so does this pose a
problem for<br>
> monitoring?<br>
<div><br>
</div>
> Cheers,<br>
<div><br>
</div>
> Paul C<br>
<div><br>
</div>
> _______________________________________________<br>
> Gluster-devel mailing list<br>
> <a class="moz-txt-link-abbreviated" href="mailto:Gluster-devel@nongnu.org">Gluster-devel@nongnu.org</a><br>
> <a class="moz-txt-link-freetext" href="https://lists.nongnu.org/mailman/listinfo/gluster-devel">https://lists.nongnu.org/mailman/listinfo/gluster-devel</a><br>
</blockquote>
<div><br>
</div>
</div>
</blockquote>
<br>
</body>
</html>