<div dir="ltr">To add to this it appears that replace brick is in a broken state. I can't abort it, or commit it. And I can run any other commands until it thinks the replace-brick is complete.<div><br></div><div>Is there a way to manually remove the task since it failed?</div>
root@pixel-glusterfs1:/# gluster volume status gdata2tb
Status of volume: gdata2tb
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.0.1.31:/mnt/data2tb/gbrick3            49157   Y       14783
Brick 10.0.1.152:/mnt/raid10/gbrick3            49158   Y       2622
Brick 10.0.1.153:/mnt/raid10/gbrick3            49153   Y       3034
NFS Server on localhost                         2049    Y       14790
Self-heal Daemon on localhost                   N/A     Y       14794
NFS Server on 10.0.0.205                        N/A     N       N/A
Self-heal Daemon on 10.0.0.205                  N/A     Y       10323
NFS Server on 10.0.1.153                        2049    Y       12735
Self-heal Daemon on 10.0.1.153                  N/A     Y       12742
NFS Server on 10.0.1.152                        2049    Y       2629
Self-heal Daemon on 10.0.1.152                  N/A     Y       2636

         Task                                    ID                  Status
         ----                                    --                  ------
Replace brick    1dace9f0-ba98-4db9-9124-c962e74cce07             completed
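For what it's worth, the abort and commit attempts were along these lines (the 3.4-era replace-brick syntax; 10.0.1.31:/mnt/newvol/gbrick3 is just a placeholder for the new brick path):

gluster volume replace-brick gdata2tb 10.0.1.31:/mnt/data2tb/gbrick3 10.0.1.31:/mnt/newvol/gbrick3 abort
gluster volume replace-brick gdata2tb 10.0.1.31:/mnt/data2tb/gbrick3 10.0.1.31:/mnt/newvol/gbrick3 commit force

Both error out, even though the task shows as "completed" above.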
<div><br></div><br><div class="gmail_quote">---------- Forwarded message ----------<br>From: <b class="gmail_sendername">Joseph Jozwik</b> <span dir="ltr"><<a href="mailto:jjozwik@printsites.com">jjozwik@printsites.com</a>></span><br>
Date: Tue, Aug 26, 2014 at 3:42 PM<br>Subject: Moving brick of replica volume to new mount on filesystem.<br>To: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br><br><br><div dir="ltr"><br clear="all">
Hello,

I need to move a brick to another location on the filesystem. My initial plan was to:

1. Stop the gluster server with "service glusterfs-server stop".
2. rsync -ap the brick3 folder to the new volume on the server.
3. umount the old volume and bind-mount the new one to the same location.

However, when I stopped glusterfs-server on the node there were still gluster background processes running, and I was not sure how to safely stop them.
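In concrete terms, the steps I had in mind were roughly the following (with /mnt/newvol standing in for the new volume, which I haven't named here):

service glusterfs-server stop                           # stops glusterd on this node
# ...but the brick daemons (glusterfsd) keep running; this is the part I'm unsure about:
pkill glusterfsd
rsync -ap /mnt/data2tb/gbrick3/ /mnt/newvol/gbrick3/    # copy brick data, preserving perms/times
umount /mnt/data2tb                                     # old volume
mkdir -p /mnt/data2tb/gbrick3                           # recreate the old brick path
mount --bind /mnt/newvol/gbrick3 /mnt/data2tb/gbrick3   # bind-mount new data at the old location
service glusterfs-server start

The main question is whether killing glusterfsd directly is a safe way to stop the remaining brick processes.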
I also attempted a replace-brick to a new location on the server, but that failed with "volume replace-brick: failed: Commit failed on localhost. Please check the log file for more details."
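The replace-brick commands were along these lines (same placeholder for the new path):

gluster volume replace-brick gdata2tb 10.0.1.31:/mnt/data2tb/gbrick3 10.0.1.31:/mnt/newvol/gbrick3 start
gluster volume replace-brick gdata2tb 10.0.1.31:/mnt/data2tb/gbrick3 10.0.1.31:/mnt/newvol/gbrick3 commit

and the commit step is where the error above appeared.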
Then I attempted a remove-brick with:

gluster> volume remove-brick gdata2tb replica 2 10.0.1.31:/mnt/data2tb/gbrick3 start
gluster> volume remove-brick gdata2tb 10.0.1.31:/mnt/data2tb/gbrick3 status
volume remove-brick: failed: Volume gdata2tb is not a distribute volume or contains only 1 brick.
Not performing rebalance
gluster>

Volume Name: gdata2tb
Type: Replicate
Volume ID: 6cbcb2fc-9fd7-467e-9561-bff1937e8492
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.0.1.31:/mnt/data2tb/gbrick3
Brick2: 10.0.1.152:/mnt/raid10/gbrick3
Brick3: 10.0.1.153:/mnt/raid10/gbrick3
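Given that layout (a pure replicate volume, 1 x 3), I take it the start/status form of remove-brick only applies to distribute volumes, and the form that would apply here is presumably the replica-reducing one:

gluster volume remove-brick gdata2tb replica 2 10.0.1.31:/mnt/data2tb/gbrick3 force

But that permanently drops that copy rather than moving it, so it isn't really what I want either.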