I'm experimenting with Gluster management: I added a couple of bricks and ran a rebalance, first fix-layout and then migrate-data. When I do this I seem to get a lot of failures:<div><br></div><div><div><br></div>
<pre>gluster> volume rebalance MAIL status
     Node  Rebalanced-files       size    scanned  failures     status
---------  ----------------  ---------  ---------  --------  ---------
localhost              6657   36611748      71014     10458  completed
      gs3              3528   21167454      51122      8491  completed
      gs2                 0          0      45079         0  completed
      gs4                 0          0      45069         0  completed</pre>
</div><div><br></div><div><br></div><div>The logs for this show:</div><div><br></div><pre>[2012-06-28 13:35:54.100842] I [dht-rebalance.c:639:dht_migrate_file] 0-MAIL-dht: /testuser393/maildir/new/1340909913.V14Ia51f68d6a66824a5M364717.test-gluster-client1: attempting to move from MAIL-replicate-1 to MAIL-replicate-0
[2012-06-28 13:35:54.111880] W [dht-rebalance.c:353:__dht_check_free_space] 0-MAIL-dht: data movement attempted from node (MAIL-replicate-1) with higher disk space to a node (MAIL-replicate-0) with lesser disk space (/testuser393/maildir/new/1340909913.V14Ia51f68d6a66824a5M364717.test-gluster-client1)
[2012-06-28 13:35:54.111947] E [dht-rebalance.c:1194:gf_defrag_migrate_data] 0-MAIL-dht: migrate-data failed for /testuser393/maildir/new/1340909913.V14Ia51f68d6a66824a5M364717.test-gluster-client1</pre><div><br>
</div><div><br></div><div><br></div><div>All of the Gluster servers have the same amount of disk space, so these failures confuse me: the warning claims the destination subvolume (MAIL-replicate-0) has less free space than the source (MAIL-replicate-1). Is this something to be worried about?</div><div><br></div>
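<div>In case it helps, here is the quick sketch I've been using to double-check the free space actually reported on each brick's filesystem. The brick paths below are placeholders, not my real layout; they would need to be substituted with the paths listed by <code>gluster volume info MAIL</code>:</div><div><br></div>

```shell
# Sketch: compare free space on the filesystems backing each brick.
# Uses only POSIX df/awk, so it should behave the same on every node.
free_kb() {
    # Second line of POSIX `df -P`, 4th column = available 1K blocks
    df -P "$1" | awk 'NR==2 {print $4}'
}

# Placeholder paths -- substitute the real brick paths from
# `gluster volume info MAIL`, e.g. /export/brick1, /export/brick2.
for brick in / /var; do
    echo "$brick: $(free_kb "$brick") KB available"
done
```

<div><br></div><div>My understanding from the W line is that the rebalance compares free space on the source and destination subvolumes before moving each file, so what matters is what each replica pair's backing filesystem reports, not the total across servers. If the version in use supports it, <code>gluster volume status MAIL detail</code> should show the same per-brick numbers.</div>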