<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=US-ASCII">
<META content="MSHTML 6.00.6000.16809" name=GENERATOR>
<STYLE>@font-face {
        font-family: 宋
}
@font-face {
        font-family: Verdana;
}
@font-face {
        font-family: @宋
}
@page Section1 {size: 595.3pt 841.9pt; margin: 72.0pt 90.0pt 72.0pt 90.0pt; layout-grid: 15.6pt; }
P.MsoNormal {
        TEXT-JUSTIFY: inter-ideograph; FONT-SIZE: 10.5pt; MARGIN: 0cm 0cm 0pt; FONT-FAMILY: "Times New Roman"; TEXT-ALIGN: justify
}
LI.MsoNormal {
        TEXT-JUSTIFY: inter-ideograph; FONT-SIZE: 10.5pt; MARGIN: 0cm 0cm 0pt; FONT-FAMILY: "Times New Roman"; TEXT-ALIGN: justify
}
DIV.MsoNormal {
        TEXT-JUSTIFY: inter-ideograph; FONT-SIZE: 10.5pt; MARGIN: 0cm 0cm 0pt; FONT-FAMILY: "Times New Roman"; TEXT-ALIGN: justify
}
A:link {
        COLOR: blue; TEXT-DECORATION: underline
}
SPAN.MsoHyperlink {
        COLOR: blue; TEXT-DECORATION: underline
}
A:visited {
        COLOR: purple; TEXT-DECORATION: underline
}
SPAN.MsoHyperlinkFollowed {
        COLOR: purple; TEXT-DECORATION: underline
}
SPAN.EmailStyle17 {
        FONT-WEIGHT: normal; COLOR: windowtext; FONT-STYLE: normal; FONT-FAMILY: Verdana; TEXT-DECORATION: none; mso-style-type: personal-compose
}
DIV.Section1 {
        page: Section1
}
BLOCKQUOTE {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em
}
OL {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
UL {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
</STYLE>
</HEAD>
<BODY style="FONT-SIZE: 10pt; MARGIN: 10px; FONT-FAMILY: verdana">
<DIV><FONT face=Verdana color=#000080 size=2>Hello </FONT></DIV>
<DIV><FONT face=Verdana color=#000080
size=2> </FONT></DIV>
<DIV><FONT color=#000080>The directories exported by the GFS server were all on one
server.</DIV>
<DIV>
<DIV>Client AFR volfile: </DIV>
<DIV>volume client1</DIV>
<DIV>
<DIV> type protocol/client</DIV>
<DIV> option transport-type tcp </DIV>
<DIV> option remote-host 172.20.92.249 </DIV>
<DIV> option transport.socket.remote-port 6996 </DIV>
<DIV> option remote-subvolume brick1 </DIV>
<DIV>end-volume</DIV>
<DIV></DIV>
<DIV> </DIV>
<DIV>volume client2</DIV>
<DIV> type protocol/client</DIV>
<DIV> option transport-type tcp </DIV>
<DIV> option remote-host 172.20.92.249 </DIV>
<DIV> option transport.socket.remote-port 6997 </DIV>
<DIV> option remote-subvolume brick2 </DIV>
<DIV>end-volume</DIV>
<DIV></DIV>
<DIV>volume client3</DIV>
<DIV> type protocol/client</DIV>
<DIV> option transport-type tcp </DIV>
<DIV> option remote-host 172.20.92.249 </DIV>
<DIV> option transport.socket.remote-port 6998 </DIV>
<DIV> option remote-subvolume brick3 </DIV>
<DIV>end-volume</DIV>
<DIV> </DIV>
<DIV></DIV>
<DIV>volume client4</DIV>
<DIV> type protocol/client</DIV>
<DIV> option transport-type tcp </DIV>
<DIV> option remote-host 172.20.92.249 </DIV>
<DIV> option transport.socket.remote-port 6999 </DIV>
<DIV> option remote-subvolume brick4 </DIV>
<DIV>end-volume</DIV>
<DIV> </DIV>
<DIV></DIV>
<DIV></DIV>
<DIV>volume ns1 </DIV>
<DIV> type protocol/client</DIV>
<DIV> option transport-type tcp </DIV>
<DIV> option remote-host 172.20.92.249 </DIV>
<DIV> option transport.socket.remote-port 6996 </DIV>
<DIV> option remote-subvolume name1 </DIV>
<DIV>end-volume</DIV>
<DIV> </DIV>
<DIV></DIV>
<DIV></DIV>
<DIV>volume ns2 </DIV>
<DIV> type protocol/client</DIV>
<DIV> option transport-type tcp </DIV>
<DIV> option remote-host 172.20.92.249 </DIV>
<DIV> option transport.socket.remote-port 6997 </DIV>
<DIV> option remote-subvolume name2 </DIV>
<DIV>end-volume</DIV>
<DIV> </DIV></DIV>
<DIV>volume rep1</DIV>
<DIV>
<DIV> type cluster/replicate</DIV>
<DIV> option data-self-heal on </DIV>
<DIV> option entry-self-heal on</DIV>
<DIV> option metadata-self-heal on</DIV>
<DIV> option data-lock-server-count 2</DIV>
<DIV> option entry-lock-server-count 2</DIV>
<DIV> subvolumes client1 client2 </DIV>
<DIV>end-volume</DIV>
<DIV> </DIV>
<DIV></DIV>
<DIV>volume rep2</DIV>
<DIV> type cluster/replicate</DIV>
<DIV> option data-self-heal on </DIV>
<DIV> option entry-self-heal on</DIV>
<DIV> option metadata-self-heal on</DIV>
<DIV> option data-lock-server-count 2</DIV>
<DIV> option entry-lock-server-count 2</DIV>
<DIV> subvolumes client3 client4 </DIV>
<DIV>end-volume</DIV>
<DIV> </DIV>
<DIV></DIV>
<DIV>volume rep-ns</DIV>
<DIV> type cluster/replicate</DIV>
<DIV> option data-self-heal on</DIV>
<DIV> option entry-self-heal on </DIV>
<DIV> option metadata-self-heal on</DIV>
<DIV> option data-lock-server-count 2</DIV>
<DIV> option entry-lock-server-count 2</DIV>
<DIV> subvolumes ns1 ns2 </DIV>
<DIV>end-volume</DIV></DIV>
<DIV>
<DIV>volume bricks</DIV>
<DIV> type cluster/unify</DIV>
<DIV> option namespace rep-ns # this will not be storage child of unify.</DIV>
<DIV> subvolumes rep1 rep2</DIV>
<DIV> option self-heal background # foreground off # default is foreground</DIV>
<DIV> option scheduler rr</DIV></DIV>
<DIV>end-volume </DIV>
<DIV> </DIV></FONT></DIV>
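<DIV>For reference, a client volfile like the one above is typically loaded when starting the GlusterFS client process; the volfile path, log path, and mount point here are assumptions for illustration, not taken from my setup:</DIV>

```shell
# Hypothetical paths -- adjust to the actual installation.
# Start the GlusterFS client with the unify/replicate volfile above,
# mounting it at /mnt/glusterfs with DEBUG logging enabled:
glusterfs --volfile=/etc/glusterfs/client.vol \
          --log-level=DEBUG \
          --log-file=/var/log/glusterfs/client.log \
          /mnt/glusterfs
```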
<DIV><FONT color=#000080>When I test replicate mode, I "rm"
a file on the GFS server and execute "ll -h" on the GFS client; the DEBUG log is
as follows:</FONT></DIV>
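<DIV>The reproduction steps described above, as a sketch (the brick backend path on the server is hypothetical):</DIV>

```shell
# On the GFS server: remove the file directly from one brick's
# backend export directory (path is an assumption):
rm /data/brick1/11

# On the GFS client: list the mount point; the lookup of /11
# triggers the AFR self-heal seen in the DEBUG log that follows
# ("ll" is the common alias for "ls -l"):
ls -lh /mnt/glusterfs
```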
<DIV> </DIV>
<DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:368:fuse_entry_cbk] glusterfs-fuse: 41: LOOKUP() / => 1 (1)</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:1738:fuse_opendir] glusterfs-fuse: 42: OPENDIR /</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:652:fuse_fd_cbk] glusterfs-fuse: 42: OPENDIR() / => 0x8280cc0</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:368:fuse_entry_cbk] glusterfs-fuse: 43: LOOKUP() / => 1 (1)</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:1825:fuse_readdir] glusterfs-fuse: 44: READDIR (0x8280cc0, size=4096, offset=0)</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:1771:fuse_readdir_cbk] glusterfs-fuse: 44: READDIR => 6/4096,0</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:1825:fuse_readdir] glusterfs-fuse: 45: READDIR (0x8280cc0, size=4096, offset=2147483647)</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:1771:fuse_readdir_cbk] glusterfs-fuse: 45: READDIR => 0/4096,2147483647</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:1843:fuse_releasedir] glusterfs-fuse: 46: RELEASEDIR 0x8280cc0</DIV>
<DIV>2009-03-04 15:38:00 D [inode.c:293:__inode_activate] fuse/inode: activating inode(3538958), lru=3/0 active=2 purge=0</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:461:fuse_lookup] glusterfs-fuse: 47: LOOKUP /11(3538958)</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:1041:afr_self_heal] rep1: performing self heal on /11 (metadata=1 data=1 entry=1)</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:998:afr_self_heal_missing_entries] rep1: attempting to recreate missing entries for path=/11</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:962:sh_missing_entries_lk_cbk] rep1: inode of /11 on child 136837152 locked</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:962:sh_missing_entries_lk_cbk] rep1: inode of /11 on child 136839776 locked</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:915:sh_missing_entries_lookup] rep1: looking up /11 on subvolume client1</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:915:sh_missing_entries_lookup] rep1: looking up /11 on subvolume client2</DIV>
<DIV>2009-03-04 15:38:00 W [afr-self-heal-common.c:871:sh_missing_entries_lookup_cbk] rep1: path /11 on subvolume client1 => -1 (No such file or directory)</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:863:sh_missing_entries_lookup_cbk] rep1: path /11 on subvolume client2 is of mode 0100644</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:608:sh_missing_entries_mknod] rep1: mknod /11 mode 0100644 on 1 subvolumes</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:555:sh_missing_entries_newentry_cbk] rep1: chown /11 to 0 0 on subvolume client1</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:502:sh_missing_entries_finish] rep1: unlocking 1/11 on subvolume client1</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:502:sh_missing_entries_finish] rep1: unlocking 1/11 on subvolume client2</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:441:afr_sh_missing_entries_done] rep1: proceeding to metadata check on /11</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:752:afr_sh_metadata_lock] rep1: locking /11 on subvolume client1</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:752:afr_sh_metadata_lock] rep1: locking /11 on subvolume client2</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:706:afr_sh_metadata_lk_cbk] rep1: inode of /11 on child 0 locked</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:706:afr_sh_metadata_lk_cbk] rep1: inode of /11 on child 1 locked</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:658:afr_sh_metadata_lookup] rep1: looking up /11 on client1</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:658:afr_sh_metadata_lookup] rep1: looking up /11 on client2</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:604:afr_sh_metadata_lookup_cbk] rep1: path /11 on subvolume client1 is of mode 0100644</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:604:afr_sh_metadata_lookup_cbk] rep1: path /11 on subvolume client2 is of mode 0100644</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:170:afr_sh_print_pending_matrix] rep1: pending_matrix: [ 0 0 ]</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:170:afr_sh_print_pending_matrix] rep1: pending_matrix: [ 0 0 ]</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:491:afr_sh_metadata_sync_prepare] rep1: syncing metadata of /11 from subvolume client2 to 1 active sinks</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:383:afr_sh_metadata_sync] rep1: syncing metadata of /11 from client2 to client1</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:249:afr_sh_metadata_erase_pending] rep1: erasing pending flags from /11 on client1</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:249:afr_sh_metadata_erase_pending] rep1: erasing pending flags from /11 on client2</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:156:afr_sh_metadata_finish] rep1: unlocking /11 on subvolume client1</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:156:afr_sh_metadata_finish] rep1: unlocking /11 on subvolume client2</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-metadata.c:83:afr_sh_metadata_done] rep1: proceeding to data check on /11</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:992:afr_sh_data_lock] rep1: locking /11 on subvolume client1</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:992:afr_sh_data_lock] rep1: locking /11 on subvolume client2</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:944:afr_sh_data_lock_cbk] rep1: inode of /11 on child 0 locked</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:944:afr_sh_data_lock_cbk] rep1: inode of /11 on child 1 locked</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:170:afr_sh_print_pending_matrix] rep1: pending_matrix: [ 0 0 ]</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-common.c:170:afr_sh_print_pending_matrix] rep1: pending_matrix: [ 0 0 ]</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:752:afr_sh_data_sync_prepare] rep1: syncing data of /11 from subvolume client2 to 1 active sinks</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:642:afr_sh_data_open_cbk] rep1: fd for /11 opened, commencing sync</DIV>
<DIV>2009-03-04 15:38:00 W [afr-self-heal-data.c:646:afr_sh_data_open_cbk] rep1: sourcing file /11 from client2 to other sinks</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:501:afr_sh_data_read_cbk] rep1: read 0 bytes of data from /11 on child 1, offset 0</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:379:afr_sh_data_trim_cbk] rep1: ftruncate of /11 on subvolume client1 completed</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:328:afr_sh_data_erase_pending] rep1: erasing pending flags from /11 on client1</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:328:afr_sh_data_erase_pending] rep1: erasing pending flags from /11 on client2</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:252:afr_sh_data_finish] rep1: finishing data selfheal of /11</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:228:afr_sh_data_unlock] rep1: unlocking /11 on subvolume client1</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:228:afr_sh_data_unlock] rep1: unlocking /11 on subvolume client2</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:185:afr_sh_data_unlck_cbk] rep1: inode of /11 on child 0 locked</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:185:afr_sh_data_unlck_cbk] rep1: inode of /11 on child 1 locked</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:134:afr_sh_data_close] rep1: closing fd of /11 on client2</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:149:afr_sh_data_close] rep1: closing fd of /11 on client1</DIV>
<DIV>2009-03-04 15:38:00 D [afr-self-heal-data.c:70:afr_sh_data_done] rep1: self heal of /11 completed</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:368:fuse_entry_cbk] glusterfs-fuse: 47: LOOKUP() /11 => 3538958 (3538958)</DIV>
<DIV>2009-03-04 15:38:00 D [inode.c:112:__dentry_unhash] fuse/inode: dentry unhashed 11 (3538958)</DIV>
<DIV>2009-03-04 15:38:00 D [inode.c:94:__dentry_hash] fuse/inode: dentry hashed 11 (3538958)</DIV>
<DIV>2009-03-04 15:38:00 D [inode.c:312:__inode_passivate] fuse/inode: passivating inode(3538958) lru=4/0 active=1 purge=0</DIV>
<DIV>2009-03-04 15:38:00 D [inode.c:293:__inode_activate] fuse/inode: activating inode(3538958), lru=3/0 active=2 purge=0</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:1512:fuse_open] glusterfs-fuse: 48: OPEN /11</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:652:fuse_fd_cbk] glusterfs-fuse: 48: OPEN() /11 => 0x827e918</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:1573:fuse_readv] glusterfs-fuse: 49: READ (0x827e918, size=4096, offset=0)</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:1538:fuse_readv_cbk] glusterfs-fuse: 49: READ => 0/4096,0/88</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:1657:fuse_flush] glusterfs-fuse: 50: FLUSH 0x827e918</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:896:fuse_err_cbk] glusterfs-fuse: 50: FLUSH() ERR => 0</DIV>
<DIV>2009-03-04 15:38:00 D [fuse-bridge.c:1677:fuse_release] glusterfs-fuse: 51: RELEASE 0x827e918</DIV>
<DIV> </DIV>
<DIV>Why does the log show
D [afr-self-heal-data.c:501:afr_sh_data_read_cbk] rep1: read 0 bytes of data from /11 on child 1, offset 0
?</DIV>
<DIV> </DIV></DIV>
<DIV><FONT face=Verdana color=#000080 size=2>Waiting for your reply, thanks a lot.
</FONT></DIV>
<DIV><FONT face=Verdana color=#c0c0c0 size=2>2009-03-04 </FONT></DIV><FONT
face=Verdana color=#000080 size=2>
<HR style="WIDTH: 100px" align=left color=#b5c4df SIZE=1>
</FONT>
<DIV><FONT face=Verdana color=#c0c0c0 size=2><SPAN>eagleeyes</SPAN>
</FONT></DIV>
<DIV><FONT face=Verdana size=2>
<DIV> </DIV></FONT></DIV></BODY></HTML>