Hi Ate,<br> Thanks for this info. Let me look into this again.<br><br>Regards,<br><br><div class="gmail_quote">On Tue, Jul 7, 2009 at 12:39 AM, Ate Poorthuis <span dir="ltr"><<a href="mailto:atepoorthuis@gmail.com">atepoorthuis@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Hi Amar,<br><br>Unfortunately, it does not seem to be a problem in the OS itself. If I use MacFUSE and sshfs, I get the right disk size, independent of whether or not I mount with -o local.<br>
<br>./sshfs-static-leopard ate@10.0.0.11:/mnt/gluster /mnt/ssh/<br>
root@10.0.0.11:/mnt/gluster 21Ti 4.9Ti 15Ti 25% /mnt/ssh<br><font color="#888888"><br>Ate</font><div><div></div><div class="h5"><br><br><div class="gmail_quote">On Tue, Jul 7, 2009 at 7:40 AM, Amar Tumballi <span dir="ltr"><<a href="mailto:amar@gluster.com" target="_blank">amar@gluster.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Hi Ate,<br> The problem here is that OS X itself caps the 'df' output at 16TB; beyond that the value overflows and wraps around (~22 - 16 = ~5.4). This behavior is a limitation of OS X itself (I traced it through the fuse stack and asked the same question on the macfuse mailing list too). I suspect we can't solve the issue within the glusterfs + fuse space. <br>
<br>Regards,<br><br><div class="gmail_quote"><div><div></div><div>On Mon, Jul 6, 2009 at 2:21 AM, Ate Poorthuis <span dir="ltr"><<a href="mailto:atepoorthuis@gmail.com" target="_blank">atepoorthuis@gmail.com</a>></span> wrote:<br>
</div></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div><div></div><div>
A little extra information after experimenting with the distribute sequence. The reported disk size is NOT that of the first brick. In the setup mentioned below, I have 4 bricks (2 are 4.5TB and 2 are 6.3TB). If I only add 2 or 3 bricks to the dht translator, the disk size is reported correctly, regardless of which bricks I add - the same errors remain in the logs though. Upon adding a fourth, the disk size is set to 5.4TB, which happens to be the sizes of the largest and the smallest brick added together and divided by two ((6.3 + 4.5) / 2 = 5.4).<div>
<div></div><div><br>
<br><div class="gmail_quote">On Fri, Jul 3, 2009 at 2:45 PM, Ate Poorthuis <span dir="ltr"><<a href="mailto:atepoorthuis@gmail.com" target="_blank">atepoorthuis@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Hi,<br><br>Using the gluster 2.0.2 client on OS X 10.5. The disk size is not reported correctly; only the size of the first brick in dht is counted. Used disk space is calculated correctly.<br><br>OS X client: /Users/ate/glusterfs_ip.vol 5.4Ti 1.1Ti 3.2Ti 26% /mnt/gluster<br>
Debian client: /etc/glusterfs/client_st.vol 22T 1.1T 20T 6% /mnt/gluster<br><br>There are also some errors from dht-diskusage in the log about full disks. The percentage differs each time I mount (ranging from 96% to 100%).<br>
<br>
================================================================================<br>Version : glusterfs 2.0.2 built on Jun 30 2009 14:14:33<br>TLA Revision : 07019da2e16534d527215a91904298ede09bb798<br>Starting Time: 2009-07-03 14:40:34<br>
Command line : /usr/local/gluster/sbin/glusterfs --debug -f /Users/ate/glusterfs_ip.vol /mnt/gluster/ <br>PID : 97785<br>System name : Darwin<br>Nodename : ate-poorthuiss-macbook-2.local<br>Kernel Release : 9.7.0<br>
Hardware Identifier: i386<br><br>Given volfile:<br>+------------------------------------------------------------------------------+<br> 1: ### file: client-volume.vol<br> 2: <br> 3: #####################################<br>
4: ### GlusterFS Client Volume File ##<br> 5: #####################################<br> 6: <br> 7: #### CONFIG FILE RULES:<br> 8: ### "#" is comment character.<br> 9: ### - Config file is case sensitive<br>
10: ### - Options within a volume block can be in any order.<br> 11: ### - Spaces or tabs are used as delimitter within a line. <br> 12: ### - Each option should end within a line.<br> 13: ### - Missing or commented fields will assume default values.<br>
14: ### - Blank/commented lines are allowed.<br> 15: ### - Sub-volumes should already be defined above before referring.<br> 16: <br> 17: ### Add client feature and attach to remote subvolume<br> 18: volume gfs-001-afr1<br>
19: type protocol/client<br> 20: option transport-type tcp<br> 21: option remote-host 10.0.0.30 # IP address of the remote brick<br> 22: option remote-subvolume afr1 # name of the remote volume<br>
23: # option ping-timeout 5<br> 24: end-volume<br> 25: volume gfs-001-afr2<br> 26: type protocol/client<br> 27: option transport-type tcp<br> 28: option remote-host 10.0.0.30 # IP address of the remote brick<br>
29: option remote-subvolume afr2 # name of the remote volume<br> 30: # option ping-timeout 5<br> 31: end-volume<br> 32: volume gfs-002-afr1<br> 33: type protocol/client<br> 34: option transport-type tcp<br>
35: option remote-host 10.0.0.32 # IP address of the remote brick<br> 36: option remote-subvolume afr1 # name of the remote volume<br> 37: # option ping-timeout 5<br> 38: end-volume<br> 39: volume gfs-002-afr2<br>
40: type protocol/client<br> 41: option transport-type tcp<br> 42: option remote-host 10.0.0.32 # IP address of the remote brick<br> 43: option remote-subvolume afr2 # name of the remote volume<br>
44: # option ping-timeout 5<br> 45: end-volume<br> 46: <br> 47: volume bricks<br> 48: type cluster/distribute<br> 49: # option lookup-unhashed yes<br> 50: option min-free-disk 5%<br> 51: subvolumes gfs-001-afr1 gfs-001-afr2 gfs-002-afr1 gfs-002-afr2<br>
52: end-volume<br> 53: <br> 54: ### Add readahead feature<br> 55: volume readahead<br> 56: type performance/read-ahead<br> 57: option page-count 8 # cache per file = (page-count x page-size)<br> 58: subvolumes bricks<br>
59: end-volume<br> 60: <br> 61: ### Add IO-Cache feature<br> 62: volume iocache<br> 63: type performance/io-cache<br> 64: option cache-size 256MB<br> 65: subvolumes readahead<br> 66: end-volume<br> 67: volume iothreads<br>
68: type performance/io-threads<br> 69: option thread-count 32 # default is 1<br> 70: subvolumes iocache<br> 71: end-volume<br> 72: ### Add writeback feature<br> 73: volume writeback<br> 74: type performance/write-behind<br>
75: option cache-size 2MB<br> 76: option flush-behind off<br> 77: subvolumes iothreads <br> 78: end-volume<br> 79: <br><br>+------------------------------------------------------------------------------+<br>[2009-07-03 14:40:34] D [xlator.c:634:xlator_set_type] xlator: dlsym(notify) on dlsym(0x101730, notify): symbol not found -- neglecting<br>
[2009-07-03 14:40:34] D [xlator.c:634:xlator_set_type] xlator: dlsym(notify) on dlsym(0x101a40, notify): symbol not found -- neglecting<br>[2009-07-03 14:40:34] D [xlator.c:634:xlator_set_type] xlator: dlsym(notify) on dlsym(0x101d50, notify): symbol not found -- neglecting<br>
[2009-07-03 14:40:34] D [xlator.c:634:xlator_set_type] xlator: dlsym(notify) on dlsym(0x102060, notify): symbol not found -- neglecting<br>[2009-07-03 14:40:34] D [glusterfsd.c:256:_add_fuse_mount] glusterfs: 'direct-io-mode' in fuse causes data corruption if O_APPEND is used. disabling 'direct-io-mode'<br>
[2009-07-03 14:40:34] D [glusterfsd.c:1179:main] glusterfs: running in pid 97785<br>[2009-07-03 14:40:34] D [client-protocol.c:5948:init] gfs-001-afr1: defaulting frame-timeout to 30mins<br>[2009-07-03 14:40:34] D [client-protocol.c:5959:init] gfs-001-afr1: defaulting ping-timeout to 10<br>
[2009-07-03 14:40:34] D [transport.c:141:transport_load] transport: attempt to load file /usr/local/gluster/lib/glusterfs/2.0.2/transport/socket.so<br>[2009-07-03 14:40:34] D [transport.c:141:transport_load] transport: attempt to load file /usr/local/gluster/lib/glusterfs/2.0.2/transport/socket.so<br>
[2009-07-03 14:40:34] D [client-protocol.c:5948:init] gfs-001-afr2: defaulting frame-timeout to 30mins<br>[2009-07-03 14:40:34] D [client-protocol.c:5959:init] gfs-001-afr2: defaulting ping-timeout to 10<br>[2009-07-03 14:40:34] D [transport.c:141:transport_load] transport: attempt to load file /usr/local/gluster/lib/glusterfs/2.0.2/transport/socket.so<br>
[2009-07-03 14:40:34] D [transport.c:141:transport_load] transport: attempt to load file /usr/local/gluster/lib/glusterfs/2.0.2/transport/socket.so<br>[2009-07-03 14:40:34] D [client-protocol.c:5948:init] gfs-002-afr1: defaulting frame-timeout to 30mins<br>
[2009-07-03 14:40:34] D [client-protocol.c:5959:init] gfs-002-afr1: defaulting ping-timeout to 10<br>[2009-07-03 14:40:34] D [transport.c:141:transport_load] transport: attempt to load file /usr/local/gluster/lib/glusterfs/2.0.2/transport/socket.so<br>
[2009-07-03 14:40:34] D [transport.c:141:transport_load] transport: attempt to load file /usr/local/gluster/lib/glusterfs/2.0.2/transport/socket.so<br>[2009-07-03 14:40:34] D [client-protocol.c:5948:init] gfs-002-afr2: defaulting frame-timeout to 30mins<br>
[2009-07-03 14:40:34] D [client-protocol.c:5959:init] gfs-002-afr2: defaulting ping-timeout to 10<br>[2009-07-03 14:40:34] D [transport.c:141:transport_load] transport: attempt to load file /usr/local/gluster/lib/glusterfs/2.0.2/transport/socket.so<br>
[2009-07-03 14:40:34] D [transport.c:141:transport_load] transport: attempt to load file /usr/local/gluster/lib/glusterfs/2.0.2/transport/socket.so<br>[2009-07-03 14:40:34] D [read-ahead.c:786:init] readahead: Using conf->page_count = 8<br>
[2009-07-03 14:40:34] D [io-threads.c:2280:init] iothreads: io-threads: Autoscaling: off, min_threads: 32, max_threads: 32<br>[2009-07-03 14:40:34] D [write-behind.c:1859:init] writeback: disabling write-behind for first 1 bytes<br>
[2009-07-03 14:40:34] D [dict.c:297:dict_get] dict: @this=0x0 @key=0x22dd0b<br>[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-001-afr1: got GF_EVENT_PARENT_UP, attempting connect on transport<br>[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-001-afr1: got GF_EVENT_PARENT_UP, attempting connect on transport<br>
[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-001-afr2: got GF_EVENT_PARENT_UP, attempting connect on transport<br>[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-001-afr2: got GF_EVENT_PARENT_UP, attempting connect on transport<br>
[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-002-afr1: got GF_EVENT_PARENT_UP, attempting connect on transport<br>[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-002-afr1: got GF_EVENT_PARENT_UP, attempting connect on transport<br>
[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-002-afr2: got GF_EVENT_PARENT_UP, attempting connect on transport<br>[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-002-afr2: got GF_EVENT_PARENT_UP, attempting connect on transport<br>
[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-001-afr1: got GF_EVENT_PARENT_UP, attempting connect on transport<br>[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-001-afr1: got GF_EVENT_PARENT_UP, attempting connect on transport<br>
[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-001-afr2: got GF_EVENT_PARENT_UP, attempting connect on transport<br>[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-001-afr2: got GF_EVENT_PARENT_UP, attempting connect on transport<br>
[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-002-afr1: got GF_EVENT_PARENT_UP, attempting connect on transport<br>[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-002-afr1: got GF_EVENT_PARENT_UP, attempting connect on transport<br>
[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-002-afr2: got GF_EVENT_PARENT_UP, attempting connect on transport<br>[2009-07-03 14:40:34] D [client-protocol.c:6276:notify] gfs-002-afr2: got GF_EVENT_PARENT_UP, attempting connect on transport<br>
[2009-07-03 14:40:34] N [glusterfsd.c:1198:main] glusterfs: Successfully started<br>[2009-07-03 14:40:34] D [client-protocol.c:6290:notify] gfs-001-afr1: got GF_EVENT_CHILD_UP<br>[2009-07-03 14:40:34] D [client-protocol.c:6290:notify] gfs-001-afr1: got GF_EVENT_CHILD_UP<br>
[2009-07-03 14:40:34] D [client-protocol.c:6290:notify] gfs-001-afr2: got GF_EVENT_CHILD_UP<br>[2009-07-03 14:40:34] D [client-protocol.c:6290:notify] gfs-001-afr2: got GF_EVENT_CHILD_UP<br>[2009-07-03 14:40:34] D [client-protocol.c:6290:notify] gfs-002-afr1: got GF_EVENT_CHILD_UP<br>
[2009-07-03 14:40:34] D [client-protocol.c:6290:notify] gfs-002-afr1: got GF_EVENT_CHILD_UP<br>[2009-07-03 14:40:34] D [client-protocol.c:6290:notify] gfs-002-afr2: got GF_EVENT_CHILD_UP<br>[2009-07-03 14:40:34] D [client-protocol.c:6290:notify] gfs-002-afr2: got GF_EVENT_CHILD_UP<br>
[2009-07-03 14:40:34] N [client-protocol.c:5551:client_setvolume_cbk] gfs-001-afr1: Connected to <a href="http://10.0.0.30:6996" target="_blank">10.0.0.30:6996</a>, attached to remote volume 'afr1'.<br>[2009-07-03 14:40:34] N [client-protocol.c:5551:client_setvolume_cbk] gfs-001-afr1: Connected to <a href="http://10.0.0.30:6996" target="_blank">10.0.0.30:6996</a>, attached to remote volume 'afr1'.<br>
[2009-07-03 14:40:34] N [client-protocol.c:5551:client_setvolume_cbk] gfs-001-afr2: Connected to <a href="http://10.0.0.30:6996" target="_blank">10.0.0.30:6996</a>, attached to remote volume 'afr2'.<br>[2009-07-03 14:40:34] N [client-protocol.c:5551:client_setvolume_cbk] gfs-002-afr1: Connected to <a href="http://10.0.0.32:6996" target="_blank">10.0.0.32:6996</a>, attached to remote volume 'afr1'.<br>
[2009-07-03 14:40:34] N [client-protocol.c:5551:client_setvolume_cbk] gfs-002-afr1: Connected to <a href="http://10.0.0.32:6996" target="_blank">10.0.0.32:6996</a>, attached to remote volume 'afr1'.<br>[2009-07-03 14:40:34] N [client-protocol.c:5551:client_setvolume_cbk] gfs-002-afr2: Connected to <a href="http://10.0.0.32:6996" target="_blank">10.0.0.32:6996</a>, attached to remote volume 'afr2'.<br>
[2009-07-03 14:40:34] N [client-protocol.c:5551:client_setvolume_cbk] gfs-002-afr2: Connected to <a href="http://10.0.0.32:6996" target="_blank">10.0.0.32:6996</a>, attached to remote volume 'afr2'.<br>[2009-07-03 14:40:34] N [client-protocol.c:5551:client_setvolume_cbk] gfs-001-afr2: Connected to <a href="http://10.0.0.30:6996" target="_blank">10.0.0.30:6996</a>, attached to remote volume 'afr2'.<br>
[2009-07-03 14:40:35] D [dht-common.c:1405:dht_err_cbk] bricks: subvolume gfs-001-afr1 returned -1 (Invalid argument)<br>[2009-07-03 14:40:51] C [dht-diskusage.c:197:dht_is_subvol_filled] bricks: disk space on subvolume 'gfs-001-afr1' is getting full (97.00 %), consider adding more nodes<br>
[2009-07-03 14:40:51] W [dht-diskusage.c:231:dht_free_disk_available_subvol] bricks: No subvolume has enough free space to create<br>
</blockquote></div><br>
</div></div><br></div></div><div>_______________________________________________<br>
Gluster-devel mailing list<br>
<a href="mailto:Gluster-devel@nongnu.org" target="_blank">Gluster-devel@nongnu.org</a><br>
<a href="http://lists.nongnu.org/mailman/listinfo/gluster-devel" target="_blank">http://lists.nongnu.org/mailman/listinfo/gluster-devel</a><br>
<br></div></blockquote></div><br><br clear="all"><br>-- <br>Regards,<br><font color="#888888">Amar Tumballi<br><br>
</font></blockquote></div><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>Regards,<br>Amar Tumballi<br><br>