<div dir="ltr"><div class="gmail_default" style="font-family:verdana,sans-serif;font-size:small">GlusterFS version 3.2.2 <br><br>I have a Gluster volume in which one out of the 4 peers/nodes crashed some time ago, before I joined this team. <br>
<br>I see from volume info that the crashed (non-existent) node is still listed as a peer, and its bricks are also listed. I would like to detach this node and its bricks and rebalance the volume across the remaining 3 peers, but I am unable to do so. Here are my steps:<br>
<br>1. #gluster peer status<br> Number of Peers: 3 -- (note: excluding the one I run this command from)<br><br> Hostname: dbstore4r294 --- (note: node/peer that is down)<br> Uuid: 8bf13458-1222-452c-81d3-565a563d768a<br>
State: Peer in Cluster (Disconnected)<br><br> Hostname: 172.16.1.90<br> Uuid: 77ebd7e4-7960-4442-a4a4-00c5b99a61b4<br> State: Peer in Cluster (Connected)<br><br> Hostname: dbstore3r294<br> Uuid: 23d7a18c-fe57-47a0-afbc-1e1a5305c0eb<br>
State: Peer in Cluster (Connected)<br><br>2. #gluster peer detach dbstore4r294<br> Brick(s) with the peer dbstore4r294 exist in cluster<br><br>3. #gluster volume info<br><br> Volume Name: test-volume<br> Type: Distributed-Replicate<br>
Status: Started<br> Number of Bricks: 4 x 2 = 8<br> Transport-type: tcp<br> Bricks:<br> Brick1: dbstore1r293:/datastore1<br> Brick2: dbstore2r293:/datastore1<br> Brick3: dbstore3r294:/datastore1<br> Brick4: dbstore4r294:/datastore1<br>
Brick5: dbstore1r293:/datastore2<br> Brick6: dbstore2r293:/datastore2<br> Brick7: dbstore3r294:/datastore2<br> Brick8: dbstore4r294:/datastore2<br> Options Reconfigured:<br> network.ping-timeout: 42s<br> performance.cache-size: 64MB<br>
performance.write-behind-window-size: 3MB<br> performance.io-thread-count: 8<br> performance.cache-refresh-timeout: 2<br><br>Note that the non-existent node/peer is dbstore4r294 (its bricks are /datastore1 and /datastore2, i.e. Brick4 and Brick8).<br>
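If I understand distributed-replicate brick ordering correctly (this pairing is my assumption from the brick order in volume info, not something I have confirmed), the replica subvolumes would be:

```shell
# Assumed replica-2 subvolume pairing, by brick order (unverified):
# subvol 1: dbstore1r293:/datastore1 + dbstore2r293:/datastore1  (Brick1, Brick2)
# subvol 2: dbstore3r294:/datastore1 + dbstore4r294:/datastore1  (Brick3, Brick4)
# subvol 3: dbstore1r293:/datastore2 + dbstore2r293:/datastore2  (Brick5, Brick6)
# subvol 4: dbstore3r294:/datastore2 + dbstore4r294:/datastore2  (Brick7, Brick8)
```

So the dead node holds one brick in subvol 2 and one in subvol 4, each paired with a brick on dbstore3r294.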
<br>4. #gluster volume remove-brick test-volume dbstore4r294:/datastore1<br> Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y<br> Remove brick incorrect brick count of 1 for replica 2<br><br>
5. #gluster volume remove-brick test-volume dbstore4r294:/datastore1 dbstore4r294:/datastore2<br>
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y<br> Bricks not from same subvol for replica<br><br>How do I remove this peer and its bricks, given that the node no longer exists?<br clear="all">
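My current guess, based on the "not from same subvol" error, is that remove-brick has to take a whole replica pair at once. Something like the following is what I would try next, but it is unverified, and it would also drop the surviving dbstore3r294 copies, so I have not run it:

```shell
# Unverified guess: remove each replica pair whole, both members at once.
# WARNING: this would also remove the surviving dbstore3r294 bricks from the
# volume, so it is probably not what I actually want.
gluster volume remove-brick test-volume \
    dbstore3r294:/datastore1 dbstore4r294:/datastore1
gluster volume remove-brick test-volume \
    dbstore3r294:/datastore2 dbstore4r294:/datastore2

# Once no bricks reference the dead node, the peer should detach:
gluster peer detach dbstore4r294
```

Is this the right approach, or should I be using replace-brick to migrate the dead node's bricks to a new host instead?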
</div><div><div dir="ltr"><div><font color="#666666"><font face="verdana, sans-serif"><b><i style="color:rgb(51,102,102)"><span style="font-family:verdana,sans-serif"><br>Regards,<br></span></i></b></font></font></div><div>
<p style="font-family:Helvetica,Arial,sans-serif;font-size:12px;line-height:14px;color:rgb(153,153,153)"><span style="font-weight:bold">Anup Nair</span><font face="verdana, sans-serif"><span style="border-collapse:collapse"><span style="font-family:arial,sans-serif;font-size:13px"><br>
</span></span></font></p></div></div></div>
</div>