<html><body><div style="font-family: lucida console,sans-serif; font-size: 12pt; color: #000000"><div>I guess my point about loss of data following a restore boils down to the change of brick names that the restore process "forces". I may be wrong, but doesn't this mean that any "downstream" monitoring/scripts have to adjust their idea of what the volume looks like? It just sounds like more work for the admin to me...<br></div><div><br></div><div>The other thing that wasn't clear from the video was the impact on fstab. What happens here in relation to the brick name change? <br></div><div><br></div><hr id="zwchr"><blockquote style="border-left:2px solid #1010FF;margin-left:5px;padding-left:5px;color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"Rajesh Joseph" <rjoseph@redhat.com><br><b>To: </b>"Paul Cuzner" <pcuzner@redhat.com><br><b>Cc: </b>"gluster-devel" <gluster-devel@nongnu.org>, "Sahina Bose" <sabose@redhat.com><br><b>Sent: </b>Tuesday, 29 April, 2014 10:13:00 PM<br><b>Subject: </b>Re: [Gluster-devel] Snapshot CLI question<br><div><br></div>Hi Paul,<br><div><br></div>We are thinking of providing policy-driven auto-delete in the future, where users can provide various policies to control auto-delete,<br>e.g. delete the oldest snapshot, delete the one with the highest disk utilization, etc. What you mentioned could also be part of the policy.<br><div><br></div>Loss of monitoring history has nothing to do with dm-thinp. The monitoring tool keeps the history of changes seen by the brick. 
Now, after the restore,<br>the monitoring tool has no way to map the newer bricks to the older bricks, so it discards the history of the older bricks.<br>I am sure the monitoring team has plans to evolve and fix this.<br><div><br></div>Best Regards,<br>Rajesh<br><div><br></div><br>----- Original Message -----<br>From: "Paul Cuzner" <pcuzner@redhat.com><br>To: "Rajesh Joseph" <rjoseph@redhat.com><br>Cc: "gluster-devel" <gluster-devel@nongnu.org>, "Sahina Bose" <sabose@redhat.com><br>Sent: Tuesday, April 29, 2014 5:22:42 AM<br>Subject: Re: [Gluster-devel] Snapshot CLI question<br><div><br></div>No worries, Rajesh. <br><div><br></div>Without --xml we're limiting the automation potential and resorting to 'screen scraping' - so with that said, is --xml in the plan, or do you need an RFE? <br><div><br></div>Brick paths changing and the loss of history present an interesting problem for monitoring and capacity planning - especially if the data is lost! As an admin this would be a real concern. Is this something that will evolve, or is this just the flip side of using dm-thinp as the provider for the volume/snapshots? <br><div><br></div>The other question I raised was around triggers for auto-delete. Having a set number of snapshots is fine, but I've seen environments in the past where auto-delete was needed to protect the pool when snapshot deltas were large - i.e. auto-delete gets triggered at a thin-pool free-space threshold. <br><div><br></div>Is this last item in the plan? Does it make sense? Does it need an RFE? 
<br><div><br></div>Cheers, <br><div><br></div>PC <br><div><br></div>----- Original Message -----<br><div><br></div>> From: "Rajesh Joseph" <rjoseph@redhat.com><br>> To: "Paul Cuzner" <pcuzner@redhat.com><br>> Cc: "gluster-devel" <gluster-devel@nongnu.org>, "Sahina Bose"<br>> <sabose@redhat.com><br>> Sent: Monday, 28 April, 2014 9:47:04 PM<br>> Subject: Re: [Gluster-devel] Snapshot CLI question<br><div><br></div>> Sorry, Paul, for the late reply.<br><div><br></div>> As of now we are not supporting the --xml option.<br><div><br></div>> And restore does change the brick path. Users using their own monitoring<br>> scripts need to be aware of this scenario.<br>> RHS monitoring will monitor the new bricks once restored, but the history<br>> related to the older bricks might be lost.<br><div><br></div>> Sahina: Would you like to comment on the monitoring part of the question?<br><div><br></div>> Thanks & Regards,<br>> Rajesh<br><div><br></div>> ----- Original Message -----<br>> From: "Paul Cuzner" <pcuzner@redhat.com><br>> To: "gluster-devel" <gluster-devel@nongnu.org><br>> Sent: Wednesday, April 23, 2014 5:14:44 AM<br>> Subject: [Gluster-devel] Snapshot CLI question<br><div><br></div>> Hi,<br><div><br></div>> Having seen some of the demos/material around the snapshot CLI, it raised a<br>> couple of questions:<br><div><br></div>> Will --xml be supported?<br><div><br></div>> The other question I have relates to brick names. On a demo video I saw the<br>> brick names change following a 'restore' operation (i.e. 
vol info shows<br>> different paths - pointing to the paths associated with the snapshot.)<br><div><br></div>> Is this the case currently, and if so, does this pose a problem for<br>> monitoring?<br><div><br></div>> Cheers,<br><div><br></div>> Paul C<br><div><br></div>> _______________________________________________<br>> Gluster-devel mailing list<br>> Gluster-devel@nongnu.org<br>> https://lists.nongnu.org/mailman/listinfo/gluster-devel<br></blockquote><div><br></div></div></body></html>