<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 02/12/2013 08:43 AM, Anand Avati
wrote:<br>
</div>
<blockquote
cite="mid:CAFboF2x6mV6CmaDBgv0OsmcPEY=v3TfUPwcHcPBLSYCKaihoUg@mail.gmail.com"
type="cite"><br>
<br>
<div class="gmail_quote">On Mon, Feb 11, 2013 at 7:02 PM, Pranith
Kumar K <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
hi,<br>
Problem:<br>
<br>
          When there are multiple fds writing to the same file with
          eager-lock enabled, the fd that acquires the eager-lock waits
          for<br>
          post-op-delay seconds before releasing the lock. Because of
          this, all other fds opened on the file see extra delay when<br>
          performing writes. Eager-locking and post-op-delay therefore
          need to be disabled when multiple fds are open on the file.<br>
<br>
Here is the profile info output for the case above:<br>
Execute the following command on the mount point.<br>
for n in `seq 1 50` ; do eval "/home/pranithk/workspace/gerrit-repo/append2log.py
./ben.log 10000 0.001 &" ; done ; wait<br>
<br>
          <pre>
%-latency   Avg-latency   Min-Latency    Max-Latency   No. of calls   Fop
---------   -----------   -----------    -----------   ------------   ----
     0.00       0.00 us       0.00 us        0.00 us             50   RELEASE
     0.00       0.00 us       0.00 us        0.00 us             60   RELEASEDIR
     0.00      55.00 us      55.00 us       55.00 us              1   GETXATTR
     0.00      31.50 us      27.00 us       36.00 us              2   STATFS
     0.00      41.00 us      29.00 us       53.00 us              2   ENTRYLK
     0.00     198.00 us     198.00 us      198.00 us              1   CREATE
     0.00     124.00 us     108.00 us      140.00 us              2   READDIR
     0.00      27.04 us      17.00 us       95.00 us             49   OPEN
     0.00      74.89 us      13.00 us      206.00 us             47   STAT
     0.01      87.02 us      11.00 us      391.00 us             50   FLUSH
     0.01     102.43 us      20.00 us      268.00 us             60   OPENDIR
     0.02     344.27 us      22.00 us      940.00 us             44   WRITE
     0.02     228.80 us      52.00 us      345.00 us             82   FXATTROP
     0.03     199.89 us      19.00 us      404.00 us            120   READDIRP
     0.05      91.41 us      23.00 us      832.00 us            421   LOOKUP
    99.86  632698.45 us      17.00 us  1999724.00 us            126   FINODELK
          </pre>
<br>
Observe that most of the delay is in FINODELK fop.<br>
<br>
Possible Solution:<br>
With the patch: <a moz-do-not-send="true"
href="http://review.gluster.org/4468" target="_blank">http://review.gluster.org/4468</a>
          we started maintaining an open-fd count in the inode. We need
          to implement xdata-based xattr retrieval in the write fop and
          fetch the open-fd-count there. The open-fd-count received in
          the write-fop callbacks is remembered in afr-fd-ctx. If the
          open-fd count is &gt;1, post-op-delay is disabled immediately
          for that write fop. All write fops take this count into
          consideration when deciding whether to enable eager-lock and
          post-op-delay.<br>
<br>
Let me know if you foresee any issues with this approach.<br>
<br>
<a moz-do-not-send="true"
href="https://bugzilla.redhat.com/show_bug.cgi?id=910217"
target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=910217</a>
is tracking this issue.<br>
</blockquote>
<div><br>
</div>
<div><br>
</div>
<div>Ideally you would want open-fd count to be retrieved in all
fops, and only when an eager lock has been acquired. Any fop
callback's xattr_rsp inspection should potentially wake up the
sleeping post-op-delay in that inode (and disable further
eager locking temporarily).</div>
<div><br>
</div>
<div>Avati</div>
</div>
</blockquote>
Avati,<br>
    Makes sense. Since it is an in-memory virtual xattr retrieval,
    performance should not be affected too much, IMO. I will have to
    implement it and run a perf test to confirm that. Other than that,
    everything else is OK, right?<br>
    I will start the implementation if no other issues are
    foreseen.<br>
<br>
pranith.<br>
</body>
</html>