Brandon -

    SQLite uses POSIX locking to implement some of its ACID-compliant
behavior and requires the filesystem to fully implement POSIX advisory
locks. Most network filesystems (including Gluster native and NFS) don't
support everything that SQLite needs, so using SQLite on a networked
filesystem isn't recommended by the SQLite team. See this excerpt from
the link I sent earlier:

    SQLite uses POSIX advisory locks to implement locking on Unix. On
    Windows it uses the LockFile(), LockFileEx(), and UnlockFile()
    system calls. SQLite assumes that these system calls all work as
    advertised. If that is not the case, then database corruption can
    result. One should note that POSIX advisory locking is known to be
    buggy or even unimplemented on many NFS implementations (including
    recent versions of Mac OS X) and that there are reports of locking
    problems for network filesystems under Windows. Your best defense
    is to not use SQLite for files on a network filesystem.
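To make that concrete: on Unix, SQLite takes fcntl()-style record locks
on the database file around transactions and trusts whatever answer the
filesystem gives back. A minimal two-process sketch of that pattern
(the script name and path are placeholders; point it at a file on the
mount in question to probe it) might look like:

    #!/usr/bin/env python
    # advisory_lock_demo.py -- the kind of POSIX advisory lock SQLite
    # relies on. On a healthy filesystem the child's non-blocking
    # attempt is refused; on a broken network mount it may hang, error
    # out, or falsely succeed.
    import fcntl, os, sys

    path = sys.argv[1] if len(sys.argv) > 1 else "/tmp/lock_demo"
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    fcntl.lockf(fd, fcntl.LOCK_EX)      # parent: exclusive fcntl() lock

    if os.fork() == 0:                  # child: second process, same lock
        fd2 = os.open(path, os.O_RDWR)
        try:
            fcntl.lockf(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
            print("child: also got the lock -- locking is broken here")
        except (IOError, OSError):
            print("child: lock refused -- advisory locking works")
        os._exit(0)

    os.wait()
    fcntl.lockf(fd, fcntl.LOCK_UN)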
Craig

Sent from a mobile device, please excuse my tpyos.

On Sep 24, 2011, at 0:19, Brandon Simmons <bsimmons@labarchives.com> wrote:

> On Fri, Sep 23, 2011 at 4:11 PM, Anand Babu Periasamy <ab@gluster.com> wrote:
>> This is a known issue. Gluster NFS doesn't support NLM (locking) yet.
>> 3.4 may implement this. Did you try on GlusterFS native mount?
>
> Thanks for that information.
>
> I did test with the native FUSE mount, but the results were difficult
> to interpret. We have a Rails application that writes to multiple
> SQLite databases, and a test script that simulates a bunch of random
> writes to a specified DB, retrying if it fails.
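(For reference, a minimal standalone script of the kind Brandon
describes could look like the following sketch; the script name, path,
table, and write count are placeholders, not details of the actual test
script:)

    #!/usr/bin/env python
    # write_test.py -- hammer one SQLite database with small writes,
    # retrying on lock errors. Run one copy per client against the mount.
    import random, sqlite3, sys, time

    db = sys.argv[1] if len(sys.argv) > 1 else "/mnt/gluster/test.sqlite3"
    conn = sqlite3.connect(db, timeout=5)
    conn.execute("CREATE TABLE IF NOT EXISTS memos (text, priority INTEGER)")
    conn.commit()

    retries = 0
    for i in range(500):
        while True:
            try:
                conn.execute("INSERT INTO memos VALUES (?, ?)",
                             ("memo %d" % i, random.randint(0, 9)))
                conn.commit()
                break
            except sqlite3.OperationalError:  # e.g. "database is locked"
                retries += 1
                time.sleep(0.1 * random.random())
    print("done, %d retries" % retries)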
> On NFS this test runs reasonably well: both clients take turns, there
> are a couple of retries, and all writes complete without failures.
>
> But mounted over Gluster (same machines and underlying disk as above),
> one client always runs while the other gets locked out (different
> client machines depending on which was started first). At some point
> during this test the client that was locked out from writing to the
> DB actually gets disconnected from Gluster and I have to remount:
>
>     $ ls /mnt/gluster
>     ls: cannot access /websites/: Transport endpoint is not connected
>
> One client is consistently locked out even if they are writing to
> DIFFERENT DBs altogether.
>
> The breakage of the mountpoint happened every time the test was run
> concurrently against the SAME DB, but did not seem to occur when the
> clients were running against different DBs.
>
> But like I said, this was a very high-level test with many moving
> parts, so I'm not sure how useful the above details are for you to
> know.
>
> Happy to hear any ideas for testing,
> Brandon
>
> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:
> [2011-09-16 19:32:38.122196] W
> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
> reading from socket failed. Error (Transport endpoint is not
> connected), peer (127.0.0.1:1017)
>
>> --AB
>>
>> On Sep 23, 2011 10:00 AM, "Brandon Simmons" <bsimmons@labarchives.com>
>> wrote:
>>> I am able to successfully mount a gluster volume using the NFS
>>> client on my test servers. Simple reading and writing seem to work,
>>> but trying to work with SQLite databases seems to cause the sqlite
>>> client and libraries to freeze. I have to send KILL to stop the
>>> process.
>>>
>>> Here is an example; servers 1 and 2 are clients mounting the gluster
>>> volume over NFS:
>>>
>>> server1# echo "working" > /mnt/gluster/test_simple
>>> server2# echo "working" >> /mnt/gluster/test_simple
>>> server1# cat /mnt/gluster/test_simple
>>> working
>>> working
>>> server1# sqlite3 /websites/new.sqlite3
>>> SQLite version 3.6.10
>>> Enter ".help" for instructions
>>> Enter SQL statements terminated with a ";"
>>> sqlite> create table memos(text, priority INTEGER);
>>> (...hangs forever, have to detach screen and do kill -9)
>>>
>>> The gluster volume was created and NFS-mounted as per the
>>> instructions here:
>>>
>>> http://www.gluster.com/community/documentation/index.php/Gluster_3.2_Filesystem_Administration_Guide
>>>
>>> If I mount the volume using the nolock option, then things work:
>>>
>>> mount -t nfs -o nolock server:/test-vol /mnt/gluster
>>>
>>> So I assume this has something to do with the locking RPC service
>>> stuff, which I don't know much about.
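(The locking RPC service in question is the NFS Network Lock Manager --
NLM, listed by rpcinfo as "nlockmgr", RPC program 100021. The kernel NFS
client forwards fcntl() locks to it; mounting with -o nolock tells the
client to track locks locally on each machine instead. A quick way to
check a server for it -- script name and hostname are placeholders:)

    #!/usr/bin/env python
    # has_nlm.py -- ask a host's portmapper whether NLM (nlockmgr,
    # RPC program 100021) is registered there.
    import subprocess, sys

    host = sys.argv[1] if len(sys.argv) > 1 else "server"
    out = subprocess.Popen(["rpcinfo", "-p", host],
                           stdout=subprocess.PIPE).communicate()[0]
    if b"100021" in out:
        print("NLM (nlockmgr) is registered on %s" % host)
    else:
        print("no NLM on %s: fcntl() locks over NFS will hang or fail" % host)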
>>> Here's the output from rpcinfo:
>>>
>>> server# rpcinfo -p
>>>    program vers proto   port
>>>     100000    2   tcp    111  portmapper
>>>     100000    2   udp    111  portmapper
>>>     100024    1   udp  56286  status
>>>     100024    1   tcp  40356  status
>>>     100005    3   tcp  38465  mountd
>>>     100005    1   tcp  38466  mountd
>>>     100003    3   tcp  38467  nfs
>>>
>>> client1# rpcinfo -p server
>>>    program vers proto   port
>>>     100000    2   tcp    111  portmapper
>>>     100000    2   udp    111  portmapper
>>>     100024    1   udp  56286  status
>>>     100024    1   tcp  40356  status
>>>     100005    3   tcp  38465  mountd
>>>     100005    1   tcp  38466  mountd
>>>     100003    3   tcp  38467  nfs
>>>
>>> client1# rpcinfo -p
>>>    program vers proto   port
>>>     100000    2   tcp    111  portmapper
>>>     100000    2   udp    111  portmapper
>>>     100024    1   udp  32768  status
>>>     100024    1   tcp  58368  status
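(Note that none of the three listings above shows an nlockmgr/100021
entry, which matches the explanation that Gluster's NFS server doesn't
provide NLM. To test locking with no SQLite involved at all, a held-lock
probe run from both clients at once shows whether one client's lock
actually excludes the other; script name and path are again placeholders:)

    #!/usr/bin/env python
    # hold_lock.py -- take an exclusive POSIX lock on a shared file and
    # hold it for a while. Started on two clients at once, exactly one
    # should hold the lock at any moment; if both report holding it
    # (as under -o nolock), cross-client locking isn't happening.
    import fcntl, os, sys, time

    path = sys.argv[1] if len(sys.argv) > 1 else "/mnt/gluster/lock_probe"
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    print("requesting exclusive lock on %s ..." % path)
    fcntl.lockf(fd, fcntl.LOCK_EX)   # blocks until granted (or hangs/fails)
    print("lock held; sleeping 30s")
    time.sleep(30)
    fcntl.lockf(fd, fcntl.LOCK_UN)
    print("released")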
type="cite"><blockquote type="cite"><span>_______________________________________________</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Gluster-users mailing list</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span><a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span><a href="http://gluster.org/cgi-bin/mailman/listinfo/gluster-users">http://gluster.org/cgi-bin/mailman/listinfo/gluster-users</a></span><br></blockquote></blockquote><blockquote type="cite"><span></span><br></blockquote><span>_______________________________________________</span><br><span>Gluster-users mailing list</span><br><span><a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a></span><br><span><a href="http://gluster.org/cgi-bin/mailman/listinfo/gluster-users">http://gluster.org/cgi-bin/mailman/listinfo/gluster-users</a></span><br></div></blockquote></body></html>