<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">For removing the bricks from the
replica , we can just execute the command "replace-brick" with
"commit force" option <br>
<br>
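The general form of the command (with placeholders for the volume
name and the host:path brick specifications) is:<br>
<br>
```<br>
gluster volume replace-brick <volname> <old-brick> <new-brick> commit force<br>
```<br>
<br>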
Following is the procedure to replace the brick in the replicated
volume.<br>
<br>
## Replacing brick in Replicate/Distributed Replicate volumes<br>
<br>
This section of the document describes how the brick
`pranithk-laptop:/home/gfs/r2_0` is replaced with the brick
`pranithk-laptop:/home/gfs/r2_5` in the volume `r2`, which has a
replica count of `2`.<br>
<br>
Steps:<br>
0. Make sure there is no data in the new brick
pranithk-laptop:/home/gfs/r2_5<br>
1. Check that all the bricks are running. It is okay if the brick
that is going to be replaced is down.<br>
2. Bring the brick that is going to be replaced down, if it is not
down already.<br>
<br>
1. Get the pid of the brick by executing 'gluster volume status
<volname>'<br>
<br>
```<br>
12:37:49 ⚡ gluster volume status<br>
Status of volume: r2<br>
Gluster process                                 Port    Online  Pid<br>
------------------------------------------------------------------------------<br>
Brick pranithk-laptop:/home/gfs/r2_0            49152   Y       5342<br>
Brick pranithk-laptop:/home/gfs/r2_1            49153   Y       5354<br>
Brick pranithk-laptop:/home/gfs/r2_2            49154   Y       5365<br>
Brick pranithk-laptop:/home/gfs/r2_3            49155   Y       5376<br>
```<br>
<br>
2. Log in to the machine where the brick is running and kill the
brick.<br>
<br>
```<br>
12:38:33 ⚡ kill -9 5342<br>
```<br>
<br>
3. Confirm that the brick is not running anymore and the other
bricks are running fine.<br>
<br>
```<br>
12:38:38 ⚡ gluster volume status<br>
Status of volume: r2<br>
Gluster process                                 Port    Online  Pid<br>
------------------------------------------------------------------------------<br>
Brick pranithk-laptop:/home/gfs/r2_0            N/A     N       5342  <<---- brick is not running, others are running fine.<br>
Brick pranithk-laptop:/home/gfs/r2_1            49153   Y       5354<br>
Brick pranithk-laptop:/home/gfs/r2_2            49154   Y       5365<br>
Brick pranithk-laptop:/home/gfs/r2_3            49155   Y       5376<br>
```<br>
<br>
3. Using the gluster volume fuse mount (in this example:
`/mnt/r2`), set up metadata so that data will be synced to the new
brick (in this case, from `pranithk-laptop:/home/gfs/r2_1` to
`pranithk-laptop:/home/gfs/r2_5`).<br>
1. Create a directory on the mount point that doesn't already
exist, then delete it. Do the same for the metadata changelog by
setting and removing a dummy extended attribute with setfattr.
This operation marks the pending changelog, which tells the
self-heal daemon/mounts to perform self-heal from /home/gfs/r2_1
to /home/gfs/r2_5.<br>
<br>
```<br>
mkdir /mnt/r2/<name-of-nonexistent-dir><br>
rmdir /mnt/r2/<name-of-nonexistent-dir><br>
setfattr -n trusted.non-existent-key -v abc /mnt/r2<br>
setfattr -x trusted.non-existent-key /mnt/r2<br>
```<br>
<br>
2. Check that there are pending xattrs:<br>
<br>
```<br>
getfattr -d -m. -e hex /home/gfs/r2_1<br>
# file: home/gfs/r2_1<br>
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000<br>
trusted.afr.r2-client-0=0x000000000000000300000002  <<---- xattrs are marked from source brick pranithk-laptop:/home/gfs/r2_1<br>
trusted.afr.r2-client-1=0x000000000000000000000000<br>
trusted.gfid=0x00000000000000000000000000000001<br>
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe<br>
trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440<br>
```<br>
<br>
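The same pending heal can also be observed from the gluster CLI (the
output lists the entries awaiting heal per brick; the exact format
varies by version):<br>
<br>
```<br>
gluster volume heal r2 info<br>
```<br>
<br>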
4. Replace the brick with the 'commit force' option. Please note
that other variants of the replace-brick command are not supported.<br>
<br>
1. Execute replace-brick command<br>
<br>
```<br>
12:58:46 ⚡ gluster volume replace-brick r2 `hostname`:/home/gfs/r2_0 `hostname`:/home/gfs/r2_5 commit force<br>
volume replace-brick: success: replace-brick commit successful<br>
```<br>
<br>
2. Check that the new brick is now online<br>
<br>
```<br>
12:59:21 ⚡ gluster volume status<br>
Status of volume: r2<br>
Gluster process                                 Port    Online  Pid<br>
------------------------------------------------------------------------------<br>
Brick pranithk-laptop:/home/gfs/r2_5            49156   Y       5731  <<<---- new brick is online<br>
Brick pranithk-laptop:/home/gfs/r2_1            49153   Y       5354<br>
Brick pranithk-laptop:/home/gfs/r2_2            49154   Y       5365<br>
Brick pranithk-laptop:/home/gfs/r2_3            49155   Y       5376<br>
```<br>
<br>
3. Once self-heal completes, the pending changelogs are cleared.<br>
<br>
```<br>
12:59:27 ⚡ getfattr -d -m. -e hex /home/gfs/r2_1<br>
getfattr: Removing leading '/' from absolute path names<br>
# file: home/gfs/r2_1<br>
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000<br>
trusted.afr.r2-client-0=0x000000000000000000000000  <<---- Pending changelogs are cleared.<br>
trusted.afr.r2-client-1=0x000000000000000000000000<br>
trusted.gfid=0x00000000000000000000000000000001<br>
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe<br>
trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440<br>
```<br>
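<br>
If you don't want to wait for the self-heal daemon's next crawl, the
heal can also be triggered and monitored explicitly (shown here for
this example volume `r2`):<br>
<br>
```<br>
gluster volume heal r2 full<br>
gluster volume heal r2 info<br>
```<br>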
<br>
<br>
On 08/27/2014 02:59 AM, Joseph Jozwik wrote:<br>
</div>
<blockquote
cite="mid:CAL2VQWUJ6MndgTv2gVBMabbNEH75i8MENCTMV8ErLAA+anauCw@mail.gmail.com"
type="cite">
<div dir="ltr">To add to this it appears that replace brick is in
a broken state. I can't abort it, or commit it. And I can run
any other commands until it thinks the replace-brick is
complete.
<div><br>
</div>
<div>Is there a way to manually remove the task since it failed?</div>
<div><br>
<div><br>
</div>
<div>
<div>root@pixel-glusterfs1:/# gluster volume status gdata2tb</div>
<div>Status of volume: gdata2tb</div>
<div>Gluster process
Port Online Pid</div>
<div>
------------------------------------------------------------------------------</div>
<div>Brick 10.0.1.31:/mnt/data2tb/gbrick3
49157 Y 14783</div>
<div>Brick 10.0.1.152:/mnt/raid10/gbrick3
49158 Y 2622</div>
<div>Brick 10.0.1.153:/mnt/raid10/gbrick3
49153 Y 3034</div>
<div>NFS Server on localhost
2049 Y 14790</div>
<div>Self-heal Daemon on localhost
N/A Y 14794</div>
<div>NFS Server on 10.0.0.205
N/A N N/A</div>
<div>Self-heal Daemon on 10.0.0.205
N/A Y 10323</div>
<div>NFS Server on 10.0.1.153
2049 Y 12735</div>
<div>Self-heal Daemon on 10.0.1.153
N/A Y 12742</div>
<div>NFS Server on 10.0.1.152
2049 Y 2629</div>
<div>Self-heal Daemon on 10.0.1.152
N/A Y 2636</div>
<div><br>
</div>
<div> Task ID
Status</div>
<div> ---- --
------</div>
<div> Replace brick 1dace9f0-ba98-4db9-9124-c962e74cce07
completed</div>
<div><br>
</div>
<br>
<div class="gmail_quote">---------- Forwarded message
----------<br>
From: <b class="gmail_sendername">Joseph Jozwik</b> <span
dir="ltr"><<a moz-do-not-send="true"
href="mailto:jjozwik@printsites.com">jjozwik@printsites.com</a>></span><br>
Date: Tue, Aug 26, 2014 at 3:42 PM<br>
Subject: Moving brick of replica volume to new mount on
filesystem.<br>
To: <a moz-do-not-send="true"
href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
<br>
<br>
<div dir="ltr"><br clear="all">
<div>
<div>Hello,</div>
<div><br>
</div>
<div>I need to move a brick to another location on the
filesystem. </div>
<div>My initial plan was to stop the gluster server
with </div>
<div>1. service glusterfs-server stop </div>
<div>2. rsync -ap brick3 folder to new volume on
server </div>
<div>3. umount old volume and bind mount the new to
the same location.</div>
<div><br>
</div>
<div>However I stopped the glusterfs-server on the
node and there was still background processes
running glusterd. So I was not sure how to safely
stop them.</div>
<div><br>
</div>
<div><br>
</div>
<div>I also attempted to replace-brick to a new
location on the server but that did not work with
"volume replace-brick: failed: Commit failed on
localhost. Please check the log file for more
details."</div>
<div><br>
</div>
<div>Then attempted remove brick with </div>
<div><br>
</div>
<div>"volume remove-brick gdata2tb replica 2
10.0.1.31:/mnt/data2tb/gbrick3 start"</div>
<div>gluster> volume remove-brick gdata2tb
10.0.1.31:/mnt/data2tb/gbrick3 status</div>
<div>volume remove-brick: failed: Volume gdata2tb is
not a distribute volume or contains only 1 brick.</div>
<div>Not performing rebalance</div>
<div>gluster></div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div>Volume Name: gdata2tb</div>
<div>Type: Replicate</div>
<div>Volume ID: 6cbcb2fc-9fd7-467e-9561-bff1937e8492</div>
<div>Status: Started</div>
<div>Number of Bricks: 1 x 3 = 3</div>
<div>Transport-type: tcp</div>
<div>Bricks:</div>
<div>Brick1: 10.0.1.31:/mnt/data2tb/gbrick3</div>
<div>Brick2: 10.0.1.152:/mnt/raid10/gbrick3</div>
<div>Brick3: 10.0.1.153:/mnt/raid10/gbrick3</div>
</div>
</div>
</div>
<br>
</div>
</div>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://supercolony.gluster.org/mailman/listinfo/gluster-users">http://supercolony.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<br>
<br>
</body>
</html>