Begin forwarded message:

From: Jiri Lunacek <jiri.lunacek@hosting90.cz>
Date: 14 June 2011 14:59:56 GMT+02:00
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Apache hung tasks still occur with glusterfs 3.2.1

Hi.

> Hello.
>
> Do you maybe have any feedback already?
> Was it successful? (disabled io-cache, disabled stat-prefetch, increased io-thread-count to 64)

For now it seems that the workaround has worked: we have not encountered any hung processes on the server since the change (io-cache disabled, stat-prefetch disabled, io-thread-count=64).
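For anyone who wants to repeat the change, this is roughly how it can be applied from the gluster CLI. It is only a sketch: "isifa1" is the volume name from the configs quoted further down, and the performance.* option keys are what I assume the 3.2.x releases use, so please double-check them against "gluster volume set help" on your own installation.

    # run on one of the gluster servers; option names assumed for 3.2.x
    gluster volume set isifa1 performance.io-cache off
    gluster volume set isifa1 performance.stat-prefetch off
    gluster volume set isifa1 performance.io-thread-count 64

    # confirm the options were picked up
    gluster volume info isifa1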
The only "bad" influence is the expected one: pages (mainly listings of several hundred images per page) take a little while longer to load. Of course this is caused by the files no longer being cached.

> Is/was your problem similar to this one?
> http://bugs.gluster.com/show_bug.cgi?id=3011

The symptoms were the same. The processes were hung on ioctl, and /proc/<pid>/wchan for the affected PIDs showed "sync_page".
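In case it is useful for comparison, this is roughly how we check the stuck workers (a sketch, assuming the hung workers are httpd processes sitting in uninterruptible "D" sleep):

    # list httpd workers in D state and the kernel function they are blocked in
    ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/ && /httpd/'

    # or inspect a single process directly
    cat /proc/<pid>/wchan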
I'll experiment a bit more today: I'll set the volume back to the original parameters and wait for a hung process so I can get you the information (/tmp/glusterdump.pid).
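A sketch of how I plan to collect that dump; this assumes the 3.2.x behaviour where the glusterfs client process writes a statedump to /tmp/glusterdump.<pid> when it receives SIGUSR1, so treat the exact path and signal as an assumption:

    # find the glusterfs client process backing the mount
    pgrep -fl glusterfs

    # ask it for a statedump, then pick the file up from /tmp
    kill -USR1 <pid-of-glusterfs-client>
    ls -l /tmp/glusterdump.*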
I'll report back later.

Jiri

> On 13.06.2011 19:14, Jiri Lunacek wrote:
>> Thanks for the tip. I disabled io-cache and stat-prefetch, increased io-thread-count to 64, and
>> rebooted the server to clear off the hung Apache processes. We'll see tomorrow.
>>
>> On 13.6.2011, at 15:58, Justice London wrote:
>>
>>> Disable io-cache and up the threads to 64 and your problems should disappear. They did for me
>>> when I made both of these changes.
>>>
>>> Justice London
>>>
>>> From: gluster-users-bounces@gluster.org [mailto:gluster-users-bounces@gluster.org] On Behalf Of Jiri Lunacek
>>> Sent: Monday, June 13, 2011 1:49 AM
>>> To: gluster-users@gluster.org
>>> Subject: [Gluster-users] Apache hung tasks still occur with glusterfs 3.2.1
>>>
>>> Hi all.
>>>
>>> We have been having problems with hung Apache tasks reading from a glusterfs 2-replica volume
>>> ever since upgrading to 3.2.0. The problems were identical to those described here:
>>> http://gluster.org/pipermail/gluster-users/2011-May/007697.html
>>>
>>> Yesterday we updated to 3.2.1. The good news is that the hung tasks stopped appearing while
>>> gluster is in "intact" operation, i.e. when there are no modifications to the gluster
>>> configuration at all.
>>>
>>> Today we modified another volume exported by the same cluster (one not sharing anything with
>>> the volume used by the Apache processes), and, once again, two Apache requests reading from the
>>> glusterfs volume are stuck.
>>>
>>> Any help with this issue would be much appreciated, as right now we have to reboot the machine
>>> every night because the stuck processes sit in iowait and are therefore unkillable.
>>>
>>> I really do not want to go through a downgrade to 3.1.4, since the mailing list suggests it may
>>> not go entirely smoothly. We are exporting millions of files, and any large operation on the
>>> exported filesystem takes days.
>>>
>>> I am attaching technical information on the problem below.
>>>
>>> client:
>>> CentOS 5.6
>>> 2.6.18-238.9.1.el5
>>> fuse-2.7.4-8.el5
>>> glusterfs-fuse-3.2.1-1
>>> glusterfs-core-3.2.1-1
>>>
>>> servers:
>>> CentOS 5.6
>>> 2.6.18-194.32.1.el5
>>> fuse-2.7.4-8.el5
>>> glusterfs-fuse-3.2.1-1
>>> glusterfs-core-3.2.1-1
>>>
>>> dmesg:
>>> INFO: task httpd:1246 blocked for more than 120 seconds.
>>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> httpd  D ffff81000101d7a0  0  1246  2394  1247  1191 (NOTLB)
>>> ffff81013ee7dc38 0000000000000082 0000000000000092 ffff81013ee7dcd8
>>> ffff81013ee7dd04 000000000000000a ffff810144d0f7e0 ffff81019fc28100
>>> 0000308f8b444727 00000000000014ee ffff810144d0f9c8 000000038006e608
>>> Call Trace:
>>> [<ffffffff8006ec4e>] do_gettimeofday+0x40/0x90
>>> [<ffffffff80028c5a>] sync_page+0x0/0x43
>>> [<ffffffff800637ca>] io_schedule+0x3f/0x67
>>> [<ffffffff80028c98>] sync_page+0x3e/0x43
>>> [<ffffffff8006390e>] __wait_on_bit_lock+0x36/0x66
>>> [<ffffffff8003ff27>] __lock_page+0x5e/0x64
>>> [<ffffffff800a2921>] wake_bit_function+0x0/0x23
>>> [<ffffffff8003fd85>] pagevec_lookup+0x17/0x1e
>>> [<ffffffff800cc666>] invalidate_inode_pages2_range+0x73/0x1bd
>>> [<ffffffff8004fc94>] finish_wait+0x32/0x5d
>>> [<ffffffff884b9798>] :fuse:wait_answer_interruptible+0xb6/0xbd
>>> [<ffffffff800a28f3>] autoremove_wake_function+0x0/0x2e
>>> [<ffffffff8009a485>] recalc_sigpending+0xe/0x25
>>> [<ffffffff8001decc>] sigprocmask+0xb7/0xdb
>>> [<ffffffff884bd456>] :fuse:fuse_finish_open+0x36/0x62
>>> [<ffffffff884bda11>] :fuse:fuse_open_common+0x147/0x158
>>> [<ffffffff884bda22>] :fuse:fuse_open+0x0/0x7
>>> [<ffffffff8001eb99>] __dentry_open+0xd9/0x1dc
>>> [<ffffffff8002766e>] do_filp_open+0x2a/0x38
>>> [<ffffffff8001a061>] do_sys_open+0x44/0xbe
>>> [<ffffffff8005d28d>] tracesys+0xd5/0xe0
>>>
>>> INFO: task httpd:1837 blocked for more than 120 seconds.
>>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> httpd  D ffff810001004420  0  1837  2394  1856  1289 (NOTLB)
>>> ffff81013c6f9c38 0000000000000086 ffff81013c6f9bf8 00000000fffffffe
>>> ffff810170ce7000 000000000000000a ffff81019c0ae7a0 ffffffff80311b60
>>> 0000308c0f83d792 0000000000000ec4 ffff81019c0ae988 000000008006e608
>>> Call Trace:
>>> [<ffffffff8006ec4e>] do_gettimeofday+0x40/0x90
>>> [<ffffffff80028c5a>] sync_page+0x0/0x43
>>> [<ffffffff800637ca>] io_schedule+0x3f/0x67
>>> [<ffffffff80028c98>] sync_page+0x3e/0x43
>>> [<ffffffff8006390e>] __wait_on_bit_lock+0x36/0x66
>>> [<ffffffff8003ff27>] __lock_page+0x5e/0x64
>>> [<ffffffff800a2921>] wake_bit_function+0x0/0x23
>>> [<ffffffff8003fd85>] pagevec_lookup+0x17/0x1e
>>> [<ffffffff800cc666>] invalidate_inode_pages2_range+0x73/0x1bd
>>> [<ffffffff8004fc94>] finish_wait+0x32/0x5d
>>> [<ffffffff884b9798>] :fuse:wait_answer_interruptible+0xb6/0xbd
>>> [<ffffffff800a28f3>] autoremove_wake_function+0x0/0x2e
>>> [<ffffffff8009a485>] recalc_sigpending+0xe/0x25
>>> [<ffffffff8001decc>] sigprocmask+0xb7/0xdb
>>> [<ffffffff884bd456>] :fuse:fuse_finish_open+0x36/0x62
>>> [<ffffffff884bda11>] :fuse:fuse_open_common+0x147/0x158
>>> [<ffffffff884bda22>] :fuse:fuse_open+0x0/0x7
>>> [<ffffffff8001eb99>] __dentry_open+0xd9/0x1dc
>>> [<ffffffff8002766e>] do_filp_open+0x2a/0x38
>>> [<ffffffff8001a061>] do_sys_open+0x44/0xbe
>>> [<ffffffff8005d28d>] tracesys+0xd5/0xe0
>>>
>>> INFO: task httpd:383 blocked for more than 120 seconds.
>>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> httpd  D ffff81019fa21100  0  383  2394  534  (NOTLB)
>>> ffff81013e497c08 0000000000000082 ffff810183eb8910 ffffffff884b9219
>>> ffff81019e41c600 0000000000000009 ffff81019b1e2100 ffff81019fa21100
>>> 0000308c0e2c2bfb 0000000000016477 ffff81019b1e22e8 000000038006e608
>>> Call Trace:
>>> [<ffffffff884b9219>] :fuse:flush_bg_queue+0x2b/0x48
>>> [<ffffffff8006ec4e>] do_gettimeofday+0x40/0x90
>>> [<ffffffff8005a9f9>] getnstimeofday+0x10/0x28
>>> [<ffffffff80028c5a>] sync_page+0x0/0x43
>>> [<ffffffff800637ca>] io_schedule+0x3f/0x67
>>> [<ffffffff80028c98>] sync_page+0x3e/0x43
>>> [<ffffffff8006390e>] __wait_on_bit_lock+0x36/0x66
>>> [<ffffffff8003ff27>] __lock_page+0x5e/0x64
>>> [<ffffffff800a2921>] wake_bit_function+0x0/0x23
>>> [<ffffffff8000c48d>] do_generic_mapping_read+0x1df/0x359
>>> [<ffffffff8000d279>] file_read_actor+0x0/0x159
>>> [<ffffffff8000c753>] __generic_file_aio_read+0x14c/0x198
>>> [<ffffffff800c8c45>] generic_file_read+0xac/0xc5
>>> [<ffffffff800a28f3>] autoremove_wake_function+0x0/0x2e
>>> [<ffffffff80130778>] selinux_file_permission+0x9f/0xb4
>>> [<ffffffff8000b78d>] vfs_read+0xcb/0x171
>>> [<ffffffff80011d34>] sys_read+0x45/0x6e
>>> [<ffffffff8005d28d>] tracesys+0xd5/0xe0
>>>
>>> possibly relevant log messages:
>>> [2011-06-13 09:09:14.985576] W [socket.c:1494:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (Transport endpoint is not connected), peer (81.0.225.122:24007)
>>> [2011-06-13 09:09:25.741055] I [glusterfsd-mgmt.c:637:mgmt_getspec_cbk] 0-: No change in volfile, continuing
>>> [2011-06-13 09:59:00.644130] I [afr-common.c:639:afr_lookup_self_heal_check] 0-isifa1-replicate-0: size differs for /data/foto/thumbs/3140/31409780.jpg
>>> [2011-06-13 09:59:00.644269] I [afr-common.c:801:afr_lookup_done] 0-isifa1-replicate-0: background meta-data data self-heal triggered. path: /data/foto/thumbs/3140/31409780.jpg
>>> [2011-06-13 09:59:00.822524] W [dict.c:437:dict_ref] (-->/opt/glusterfs/3.2.1/lib64/glusterfs/3.2.1/xlator/protocol/client.so(client3_1_fstat_cbk+0x2cb) [0x2aaaab1afa0b] (-->/opt/glusterfs/3.2.1/lib64/glusterfs/3.2.1/xlator/cluster/replicate.so(afr_sh_data_fstat_cbk+0x17d) [0x2aaaab3e4c9d] (-->/opt/glusterfs/3.2.1/lib64/glusterfs/3.2.1/xlator/cluster/replicate.so(afr_sh_data_fix+0x1fc) [0x2aaaab3e493c]))) 0-dict: dict is NULL
>>> [2011-06-13 09:59:00.824323] I [afr-common.c:639:afr_lookup_self_heal_check] 0-isifa1-replicate-0: size differs for /data/foto/thumbs/3140/31409781.jpg
>>> [2011-06-13 09:59:00.824356] I [afr-common.c:801:afr_lookup_done] 0-isifa1-replicate-0: background meta-data data self-heal triggered. path: /data/foto/thumbs/3140/31409781.jpg
>>> [2011-06-13 09:59:00.826494] I [afr-self-heal-common.c:1557:afr_self_heal_completion_cbk] 0-isifa1-replicate-0: background meta-data data self-heal completed on /data/foto/thumbs/3140/31409780.jpg
>>> [2011-06-13 09:59:00.830902] W [dict.c:437:dict_ref] (-->/opt/glusterfs/3.2.1/lib64/glusterfs/3.2.1/xlator/protocol/client.so(client3_1_fstat_cbk+0x2cb) [0x2aaaab1afa0b] (-->/opt/glusterfs/3.2.1/lib64/glusterfs/3.2.1/xlator/cluster/replicate.so(afr_sh_data_fstat_cbk+0x17d) [0x2aaaab3e4c9d] (-->/opt/glusterfs/3.2.1/lib64/glusterfs/3.2.1/xlator/cluster/replicate.so(afr_sh_data_fix+0x1fc) [0x2aaaab3e493c]))) 0-dict: dict is NULL
>>>
>>> gluster-server volume config:
>>>
>>> volume isifa1-posix
>>>   type storage/posix
>>>   option directory /mnt/data/isifa1
>>> end-volume
>>>
>>> volume isifa1-access-control
>>>   type features/access-control
>>>   subvolumes isifa1-posix
>>> end-volume
>>>
>>> volume isifa1-locks
>>>   type features/locks
>>>   subvolumes isifa1-access-control
>>> end-volume
>>>
>>> volume isifa1-io-threads
>>>   type performance/io-threads
>>>   subvolumes isifa1-locks
>>> end-volume
>>>
>>> volume isifa1-marker
>>>   type features/marker
>>>   option volume-uuid 39d5c509-ad39-4b24-a272-c33e212cf912
>>>   option timestamp-file /etc/glusterd/vols/isifa1/marker.tstamp
>>>   option xtime off
>>>   option quota off
>>>   subvolumes isifa1-io-threads
>>> end-volume
>>>
>>> volume /mnt/data/isifa1
>>>   type debug/io-stats
>>>   option latency-measurement off
>>>   option count-fop-hits off
>>>   subvolumes isifa1-marker
>>> end-volume
>>>
>>> volume isifa1-server
>>>   type protocol/server
>>>   option transport-type tcp
>>>   option auth.addr./mnt/data/isifa1.allow 81.0.225.120,81.0.225.121,81.0.225.90,81.0.225.117,81.0.225.118,82.208.17.113
>>>   subvolumes /mnt/data/isifa1
>>> end-volume
>>>
>>> gluster-fuse config:
>>>
>>> volume isifa1-client-0
>>>   type protocol/client
>>>   option remote-host isifa-data1
>>>   option remote-subvolume /mnt/data/isifa1
>>>   option transport-type tcp
>>> end-volume
>>>
>>> volume isifa1-client-1
>>>   type protocol/client
>>>   option remote-host isifa-data2
>>>   option remote-subvolume /mnt/data/isifa1
>>>   option transport-type tcp
>>> end-volume
>>>
>>> volume isifa1-replicate-0
>>>   type cluster/replicate
>>>   subvolumes isifa1-client-0 isifa1-client-1
>>> end-volume
>>>
>>> volume isifa1-write-behind
>>>   type performance/write-behind
>>>   subvolumes isifa1-replicate-0
>>> end-volume
>>>
>>> volume isifa1-read-ahead
>>>   type performance/read-ahead
>>>   subvolumes isifa1-write-behind
>>> end-volume
>>>
>>> volume isifa1-io-cache
>>>   type performance/io-cache
>>>   subvolumes isifa1-read-ahead
>>> end-volume
>>>
>>> volume isifa1-quick-read
>>>   type performance/quick-read
>>>   subvolumes isifa1-io-cache
>>> end-volume
>>>
>>> volume isifa1-stat-prefetch
>>>   type performance/stat-prefetch
>>>   subvolumes isifa1-quick-read
>>> end-volume
>>>
>>> volume isifa1
>>>   type debug/io-stats
>>>   option latency-measurement off
>>>   option count-fop-hits off
>>>   subvolumes isifa1-stat-prefetch
>>> end-volume
> --
> Mag. Christopher Anderlik
> Leiter Technik
>
> Xidras GmbH
> Stockern 47
> 3744 Stockern
> Austria
>
> Tel:   0043 2983 201 30 5 01
> Fax:   0043 2983 201 30 5 01 9
> Email: christopher.anderlik@xidras.com
> Web:   http://www.xidras.com
>
> FN 317036 f | Landesgericht Krems | ATU64485024

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users