<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=windows-1257">
<META content="MSHTML 6.00.2900.2180" name=GENERATOR></HEAD>
<BODY>
<DIV><SPAN id=hidsubpartcontentdiscussion>
<P><FONT face=Tahoma size=2>Hello, I want to use storage/bdb for a web cluster
(many small files). I found no examples in the documentation for this storage
type, so I wrote my own configuration. Unfortunately, glusterfs crashes when I
try to create a directory.</FONT></P></SPAN></DIV>
<DIV><SPAN><FONT face=Tahoma>
<P><FONT size=2>Does anyone have a cluster running with
storage/bdb?</FONT></P></FONT>
<P><FONT face=Tahoma size=2>My config file is the same on both servers:
<BR></FONT></P>
<P><FONT face=Tahoma size=2>--- <BR></FONT></P>
<P><FONT face=Tahoma size=2>volume bdb <BR>type storage/bdb <BR>option directory
/cluster_dir <BR>end-volume <BR></FONT></P>
<P><FONT face=Tahoma size=2>volume locks <BR>type features/locks <BR>subvolumes
bdb <BR>end-volume <BR></FONT></P>
<P><FONT face=Tahoma size=2>volume brick <BR>type performance/io-threads
<BR>subvolumes locks <BR>end-volume <BR></FONT></P>
<P><FONT face=Tahoma size=2>volume server <BR>type protocol/server <BR>option
transport-type tcp <BR>option auth.addr.brick.allow * <BR>#option
auth.login.foo-brick.allow foo <BR>#option auth.login.foo.password foo-password
<BR>subvolumes brick <BR>end-volume <BR></FONT></P>
<P><FONT face=Tahoma size=2>volume remote1 <BR>type protocol/client <BR>option
transport-type tcp <BR>option remote-host 192.168.0.5 <BR>option
transport.socket.listen-port 1023 <BR>option remote-subvolume brick <BR>#option
username foo <BR>#option password foo-password <BR>end-volume <BR></FONT></P>
<P><FONT face=Tahoma size=2>volume remote2 <BR>type protocol/client <BR>option
transport-type tcp <BR>option remote-host 192.168.0.230 <BR>option
transport.socket.listen-port 1023 <BR>option remote-subvolume brick <BR>#option
username foo <BR>#option password foo-password <BR>end-volume <BR></FONT></P>
<P><FONT face=Tahoma size=2>volume replicate <BR>type cluster/replicate
<BR>subvolumes remote1 remote2 <BR>option scheduler nufa <BR>#option
nufa.local-volume-name brick <BR>end-volume <BR></FONT></P>
<P><FONT face=Tahoma size=2>volume writebehind <BR>type performance/write-behind
<BR>option aggregate-size 128KB <BR>option window-size 1MB <BR>subvolumes
replicate <BR>end-volume <BR></FONT></P>
<P><FONT face=Tahoma size=2>volume cache <BR>type performance/io-cache
<BR>option cache-size 512MB <BR>subvolumes writebehind <BR>end-volume
<BR></FONT></P>
<P><FONT face=Tahoma size=2>volume readahead <BR>type performance/read-ahead
<BR>option page-size 65536 # unit in bytes <BR>option page-count 16 # cache per
file = (page-count x page-size) <BR>subvolumes writebehind <BR>end-volume
<BR></FONT></P>
<P><FONT face=Tahoma size=2>--- <BR></FONT></P>
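<P><FONT face=Tahoma size=2>Two things in the chain above look suspicious to
me, though I am not sure either one explains the crash: the readahead volume
takes writebehind as its subvolume (so the io-cache volume is never part of
the graph), and the clients set transport.socket.listen-port, which is a
server-side option (the log below shows remote-port missing and defaulting to
6996). A client-side chain I would expect to be closer to correct, shown here
only for remote1 (remote2 would be the same with its own remote-host), is
this untested sketch: <BR></FONT></P>
<PRE>
# hypothetical corrected client-side chain -- untested; assumes the
# remote-port and chaining issues are separate from the bdb crash itself
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.5
  option remote-port 1023        # client side uses remote-port,
                                 # not transport.socket.listen-port
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
  # no scheduler option here: schedulers such as nufa belong to
  # cluster/unify, not cluster/replicate
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

volume readahead
  type performance/read-ahead
  option page-size 65536         # unit in bytes
  option page-count 16           # cache per file = page-count x page-size
  subvolumes cache               # was "writebehind", which left
                                 # io-cache out of the graph
end-volume
</PRE>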
<P><FONT face=Tahoma size=2>After running mkdir tt, <BR></FONT></P>
<P><FONT face=Tahoma size=2>glusterfs crashes on all servers with:
<BR></FONT></P>
<P><FONT face=Tahoma size=2>2009-03-21 18:55:36 D
[fuse-bridge.c:457:fuse_lookup] glusterfs-fuse: 49: LOOKUP /tt <BR>2009-03-21
18:55:36 D [inode.c:471:__inode_create] fuse/inode: create inode(0)
<BR>2009-03-21 18:55:36 D [inode.c:293:__inode_activate] fuse/inode: activating
inode(0), lru=0/0 active=2 purge=0 <BR>2009-03-21 18:55:36 D
[bdb-ll.c:468:bdb_db_get] bdb-ll: failed to do DB->get() for key: tt. key not
found in storage DB <BR>2009-03-21 18:55:36 D [bdb.c:1098:bdb_lookup] bdb:
returning ENOENT for /tt <BR>2009-03-21 18:55:36 D
[name.c:214:af_inet_client_get_remote_sockaddr] remote2: option remote-port
missing in volume remote2. Defaulting to 6996 <BR>2009-03-21 18:55:36 D
[common-utils.c:85:gf_resolve_ip6] resolver: flushing DNS cache <BR>2009-03-21
18:55:36 D [common-utils.c:92:gf_resolve_ip6] resolver: DNS cache not present,
freshly probing hostname: 192.168.0.230 <BR>2009-03-21 18:55:36 D
[common-utils.c:129:gf_resolve_ip6] resolver: returning ip-192.168.0.230
(port-6996) for hostname: 192.168.0.230 and port: 6996 <BR>2009-03-21 18:55:36 D
[fuse-bridge.c:408:fuse_entry_cbk] glusterfs-fuse: 49: LOOKUP() /tt => -1 (No
such file or directory) <BR>2009-03-21 18:55:36 D [inode.c:336:__inode_retire]
fuse/inode: retiring inode(0) lru=0/0 active=1 purge=1 <BR>2009-03-21 18:55:36 D
[socket.c:175:__socket_disconnect] remote2: shutdown() returned -1. set
connection state to -1 <BR>2009-03-21 18:55:36 D
[client-protocol.c:6046:protocol_client_cleanup] remote2: cleaning up state in
transport object 0x805ab30 <BR>2009-03-21 18:55:36 D [socket.c:90:__socket_rwv]
remote2: EOF from peer <BR>2009-03-21 18:55:36 D
[socket.c:561:__socket_proto_state_machine] remote2: read (Transport endpoint is
not connected) in state 1 () <BR>2009-03-21 18:55:36 D
[client-protocol.c:6046:protocol_client_cleanup] remote2: cleaning up state in
transport object 0x805ab30 <BR>2009-03-21 18:55:36 D
[inode.c:471:__inode_create] fuse/inode: create inode(0) <BR>2009-03-21 18:55:36
D [inode.c:293:__inode_activate] fuse/inode: activating inode(0), lru=0/0
active=2 purge=0 <BR>2009-03-21 18:55:36 D [fuse-bridge.c:1133:fuse_mkdir]
glusterfs-fuse: 50: MKDIR /tt <BR>pending frames: <BR>frame : type(1) op(MKDIR)
<BR>frame : type(1) op(MKDIR) <BR></FONT></P>
<P><FONT face=Tahoma size=2>patchset: cb602a1d7d41587c24379cb2636961ab91446f86 +
<BR>signal received: 11 <BR>configuration details:argp 1 <BR>backtrace 1
<BR>db.h 1 <BR>dlfcn 1 <BR>fdatasync 1 <BR>libpthread 1 <BR>llistxattr 1
<BR>setfsid 1 <BR>spinlock 1 <BR>epoll.h 1 <BR>xattr.h 1 <BR>st_atim.tv_nsec 1
<BR>package-string: glusterfs 2.0.0rc4 <BR>[0xb7f51420]
<BR>/glusterfs/lib/libglusterfs.so.0(default_xattrop+0x117)[0xb7f2e529]
<BR>/glusterfs/lib/libglusterfs.so.0(default_xattrop+0x117)[0xb7f2e529]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/protocol/client.so(client_xattrop+0x1cb)[0xb7b5061c]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/cluster/replicate.so(afr_changelog_pre_op+0x88f)[0xb7b2ae46]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/cluster/replicate.so[0xb7b2b20a]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/cluster/replicate.so(afr_lock_cbk+0x172)[0xb7b2aff1]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/protocol/client.so[0xb7b48a6e]
<BR>/glusterfs/lib/libglusterfs.so.0[0xb7f2ed09]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/features/locks.so(pl_entrylk+0x32c)[0xb7b8abdb]
<BR>/glusterfs/lib/libglusterfs.so.0(default_entrylk+0x11e)[0xb7f2ee2e]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/protocol/client.so(client_entrylk+0x168)[0xb7b54352]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/cluster/replicate.so[0xb7b2bae7]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/cluster/replicate.so(afr_lock+0x2d)[0xb7b2bb29]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/cluster/replicate.so(afr_transaction+0xd1)[0xb7b2bcf4]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/cluster/replicate.so(afr_mkdir+0x36e)[0xb7b1b3ae]
<BR>/glusterfs/lib/libglusterfs.so.0(default_mkdir+0x10d)[0xb7f2c700]
<BR>/glusterfs/lib/libglusterfs.so.0(default_mkdir+0x10d)[0xb7f2c700]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/mount/fuse.so[0xb7ae78a2]
<BR>/usr/local/lib/libfuse.so.2[0xb7aceebf]
<BR>/usr/local/lib/libfuse.so.2[0xb7acfc0d]
<BR>/usr/local/lib/libfuse.so.2(fuse_session_process+0x26)[0xb7ad14d6]
<BR>/glusterfs/lib/glusterfs/2.0.0rc4/xlator/mount/fuse.so[0xb7aed67d]
<BR>/lib/tls/libpthread.so.0[0xb7e220bd]
<BR>/lib/tls/libc.so.6(__clone+0x5e)[0xb7db701e] <BR>--------- <BR>Segmentation
fault (core dumped)</FONT></P></SPAN></DIV></BODY></HTML>