<div class="moz-cite-prefix">On Monday 01 December 2014 04:51 PM,
Raghavendra G wrote:<br>
</div>
> On Fri, Nov 28, 2014 at 6:48 PM, RAGHAVENDRA TALUR
> <raghavendra.talur@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex"><span
class="">On Thu, Nov 27, 2014 at 2:59 PM, Raghavendra
Bhat <<a moz-do-not-send="true"
href="mailto:rabhat@redhat.com">rabhat@redhat.com</a>>
wrote:<br>
>>> Hi,
>>>
>>> With USS to access snapshots, we depend on the latest snapshot of the
>>> volume to resolve some issues.
>>> Ex:
>>> Say there is a directory called "dir" within the root of the volume and
>>> USS is enabled. Now when .snaps is accessed from "dir" (i.e. /dir/.snaps),
>>> first a lookup is sent on /dir, which the snapview-client xlator passes on
>>> to the normal graph till the posix xlator of the brick. Next the lookup
>>> comes on /dir/.snaps. The snapview-client xlator redirects this call to
>>> the snap daemon (since .snaps is a virtual directory to access the
>>> snapshots). The lookup comes to the snap daemon with the parent gfid set
>>> to the gfid of "/dir" and the basename set to ".snaps". The snap daemon
>>> will first try to resolve the parent gfid by finding the inode for that
>>> gfid. But since that gfid was not looked up before in the snap daemon, it
>>> will not be able to find the inode. So, to resolve it, the snap daemon
>>> depends upon the latest snapshot: it tries to look up the gfid of /dir in
>>> the latest snapshot, and if it can get the gfid, the lookup on /dir/.snaps
>>> is also successful.
>>
>> From the user point of view, I would like to be able to enter .snaps from
>> anywhere. To do that, we can turn the dependency upside down: instead of
>> listing all snapshots in the .snaps directory, let's show only those
>> snapshots that contain that directory.
>
> Currently readdir in snapview-server lists _all_ the snapshots. However, if
> you try to do "ls" on a snapshot which doesn't contain this directory (say
> dir/.snaps/snap3), I think it returns ESTALE/ENOENT. So, to get the
> behaviour you've described above, readdir(p) should filter out the
> snapshots which don't contain this directory (to do that, it has to look up
> dir on each of the snapshots).
>
> Raghavendra Bhat explained the problem and a possible solution to me in
> person. There are some pieces missing in the problem description as
> explained in the mail (but not in the discussion we had). The problem
> explained here occurs when you restore a snapshot (say snap3) in which the
> directory got created, but the directory was deleted before the next
> snapshot was taken. So, the directory doesn't exist in snap2 and snap4, but
> exists only in snap3. Now, when you restore snap3, "ls" on dir/.snaps
> should show nothing. So, what should the result of lookup (gfid-of-dir,
> ".snaps") be?
>
> 1. We can blindly return a virtual inode, assuming at least one snapshot
> contains dir. If fops come on specific snapshots (e.g., dir/.snaps/snap4),
> they'll fail with ENOENT anyway (since dir is not present on any snapshot).
> 2. We can choose to return ENOENT if we figure out that dir is not present
> on any snapshot.
>
> The problem we are trying to solve here is how to achieve 2. One simple
> solution is to look up <gfid-of-dir> on all the snapshots and, if every
> lookup fails with ENOENT, return ENOENT. The other solution is to look it
> up only in the snapshots before and after the restored one (if both are
> present, otherwise just in the latest snapshot). If both fail, then we can
> be sure that no snapshot contains that directory.
>
> Rabhat, correct me if I've missed out anything :).
>
If a readdir on .snaps entered from a non-root directory has to show only
those snapshots where the directory (or rather the gfid of the directory) is
present, then achieving it will be a bit costly.

When a readdir comes on .snaps entered from a non-root directory (say
"ls /dir/.snaps"), the following operations have to be performed:
1) We have the names of all the snapshots in an array. So, do a nameless
   lookup on the gfid of /dir in every snapshot.
2) Based on which snapshots returned success for the above lookup, build a
   new array or list of snapshots.
3) Send that new list as the readdir entries (a rough sketch of steps 1 and
   2 follows below).

But the above operation is costly: just to serve one readdir request we have
to make a lookup on each snapshot (if there are 256 snapshots, then we have
to make 256 lookup calls over the network).
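To make the cost concrete, here is a minimal sketch (not the actual
snapview-server code; svs_snap_t, svs_filter_snaps and the per-snapshot gfapi
handle are made-up names) of doing the nameless lookup of the directory's
gfid on every snapshot and keeping only the snapshots where it succeeds:

#include <glusterfs/api/glfs.h>
#include <glusterfs/api/glfs-handles.h>
#include <sys/stat.h>

typedef struct {
        char         name[256];  /* snapshot volume name              */
        struct glfs *fs;         /* gfapi instance for this snapshot  */
} svs_snap_t;

int
svs_filter_snaps (svs_snap_t *snaps, int snap_count, unsigned char *gfid,
                  svs_snap_t **filtered, int *filtered_count)
{
        struct stat st = {0, };
        int         n  = 0;
        int         i  = 0;

        for (i = 0; i < snap_count; i++) {
                /* nameless (gfid-based) lookup: one network round trip per
                 * snapshot, which is exactly the cost described above */
                struct glfs_object *obj =
                        glfs_h_create_from_handle (snaps[i].fs, gfid, 16, &st);
                if (obj) {
                        filtered[n++] = &snaps[i];
                        glfs_h_close (obj);
                }
        }

        *filtered_count = n;
        return n;
}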
One more thing is resource usage. As of now, a snapshot is initialized
(i.e., via gfapi a connection is established with the corresponding snapshot
volume, which is equivalent to a mounted volume) only when that snapshot is
accessed (from the fops point of view, when a lookup comes on the snapshot
entry, say "ls /dir/.snaps/snap1"). To serve the readdir described above,
all the snapshots would be accessed and hence all of them initialized. That
means there can be 256 gfapi instances, each with its own inode table and
other resources. If a snapshot is never accessed again after the readdir,
all those resources still add up to the snap daemon's usage.
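For context, this is roughly what initializing one snapshot through gfapi
amounts to (a sketch with made-up names; the real snap daemon also sets up
logging, uses the snapshot's actual volume name, and so on). Each initialized
snapshot is a full gfapi client stack, comparable in weight to a mounted
volume:

#include <glusterfs/api/glfs.h>
#include <stddef.h>

static struct glfs *
svs_snap_init (const char *snap_volname, const char *glusterd_host)
{
        struct glfs *fs = glfs_new (snap_volname);
        if (!fs)
                return NULL;

        /* Fetches the volfile from glusterd and builds a complete client
         * graph: its own inode table, protocol/client connections to every
         * brick of the snapshot volume, caches, and so on. */
        if (glfs_set_volfile_server (fs, "tcp", glusterd_host, 24007) != 0 ||
            glfs_init (fs) != 0) {
                glfs_fini (fs);
                return NULL;
        }

        return fs;
}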
With the above points in mind, I was thinking about different approaches to
handle this situation. We need the latest snapshot (and, as per the patch,
the adjacent snapshots to handle restore) to resolve lookups coming on
.snaps, mainly for resolving the parent gfid so that we can look it up
somewhere. If "ls /dir/.snaps" is done, the lookup comes with the parent
gfid set to the gfid of /dir and the name set to ".snaps"; but since /dir
has not been looked up yet in the snap daemon, it has to first resolve the
parent gfid, for which it looks at the latest snapshot.
What we can do is this: while sending the lookup on .snaps (again, say
"ls /dir/.snaps"), snapview-client adds a key to the dict which
snapview-server can look for. That key is a hint from snapview-client to
snapview-server that the parent gfid of this particular lookup call exists
and is valid. When snapview-server gets the lookup on the parent gfid as
part of resolution from protocol/server, it can check the dict for the key.
If the key is set, it can simply return success for that lookup.
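A rough sketch of the hint (the key name "snapview.parent-is-valid" and the
function signatures are made up for illustration; the real fops carry more
arguments and error handling):

/* snapview-client side: while winding the lookup on <gfid-of-/dir>/.snaps
 * towards the snap daemon, add the hint to the lookup's xdata. */
int
svc_snaps_lookup (call_frame_t *frame, xlator_t *this, xlator_t *snapd_subvol,
                  loc_t *loc, dict_t *xdata)
{
        if (!xdata)
                xdata = dict_new ();
        if (!xdata)
                return -1;

        /* hint: the parent gfid of this lookup exists on the origin volume */
        if (dict_set_int32 (xdata, "snapview.parent-is-valid", 1))
                return -1;

        STACK_WIND (frame, default_lookup_cbk, snapd_subvol,
                    snapd_subvol->fops->lookup, loc, xdata);
        return 0;
}

/* snapview-server side: when the lookup on the parent gfid arrives as part
 * of resolution from protocol/server, honour the hint. */
int
svs_parent_is_valid (dict_t *xdata)
{
        /* a set key means the client vouches for the parent: create/link
         * the virtual inode and return success without consulting the
         * latest (or any other) snapshot */
        return (xdata && dict_get (xdata, "snapview.parent-is-valid"));
}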
This way we can also handle situations such as entering .snaps from a
directory which was created after taking the latest snapshot.

Please provide feedback on the above approach (the hint being set in the
dict).
Regards,
Raghavendra Bhat
>> Maybe it is good enough if we resolve the parent on the main volume and
>> rely on that in snapview-client and server.
>>
>>> But there can be some confusion in the case of snapshot restore. Say
>>> there are 5 snapshots (snap1, snap2, snap3, snap4, snap5) for a volume
>>> vol, and the volume is restored to snap3. If there was a directory "/a"
>>> at the time of taking snap3 and it was removed later, then after the
>>> snapshot restore, accessing .snaps from that directory (in fact from all
>>> the directories which were present while taking snap3) might cause
>>> problems. Because the original volume is now nothing but snap3, and when
>>> the snap daemon gets the lookup on "/a/.snaps", it tries to find the
>>> gfid of "/a" in the latest snapshot (which is snap5); if "/a" was
>>> removed after taking snap3, the lookup of "/a" in snap5 fails, and thus
>>> the lookup of "/a/.snaps" also fails.
>>>
>>> Possible solution:
>>> One possible solution is this: whenever glusterd sends the list of
>>> snapshots to the snap daemon after a snapshot restore, send the list in
>>> such a way that the snapshot previous to the restored snapshot is sent
>>> as the latest snapshot (in the example above, since snap3 is restored,
>>> glusterd should send snap2 as the latest snapshot to the snap daemon).
>>>
>>> But the above solution also has a problem. If there are only 2 snapshots
>>> (snap1, snap2) and the volume is restored to the first snapshot (snap1),
>>> there is no previous snapshot to look at, and glusterd will send only
>>> one name in the list, snap2, which is in a future state compared to the
>>> volume.
>>>
>>> A patch has been submitted for review to handle this
>>> (http://review.gluster.org/#/c/9094/). In the patch, because of the
>>> above confusions, snapd tries to consult the snapshots adjacent to the
>>> restored snapshot to resolve the gfids. As per the 5-snapshots example,
>>> it looks at snap2 and snap4 (i.e. it looks into snap2 first, and if that
>>> fails, it looks into snap4). If there is no previous snapshot, it looks
>>> at the next snapshot (the 2-snapshots case). If there is no next
>>> snapshot, it looks at the previous snapshot.
>>>
>>> Please provide feedback about how this issue can be handled.
>>>
>>> Regards,
>>> Raghavendra Bhat
<span class="HOEnZb"><font color="#888888">--<br>
Raghavendra Talur<br>
</font></span>
<div class="HOEnZb">
<div class="h5">_______________________________________________<br>
Gluster-devel mailing list<br>
<a moz-do-not-send="true"
href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
<a moz-do-not-send="true"
href="http://supercolony.gluster.org/mailman/listinfo/gluster-devel"
target="_blank">http://supercolony.gluster.org/mailman/listinfo/gluster-devel</a><br>
</div>
</div>
</blockquote>
</div>
<br>
<br clear="all">
<div><br>
</div>
>
> --
> Raghavendra G