<html><body><div style="color:#000; background-color:#fff; font-family:Courier New, courier, monaco, monospace, sans-serif;font-size:10pt">I've created a distributed replicated volume:<br><br><div style="color:rgb(0, 0, 0);font-size:13.3333px;font-family:Courier New, courier, monaco, monospace, sans-serif;background-color:transparent;font-style:normal;">> gluster> volume info<br>> <br>> Volume Name: Repositories<br>> Type: Distributed-Replicate<br>> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c<br>> Status: Started<br>> Number of Bricks: 2 x 2 = 4<br>> Transport-type: tcp<br>> Bricks:<br>> Brick1: 192.168.1.1:/srv/sda7<br>> Brick2: 192.168.1.2:/srv/sda7<br>> Brick3: 192.168.1.1:/srv/sdb7<br>> Brick4: 192.168.1.2:/srv/sdb7<br></div><br>...by allocating physical partitions on each HDD of each node for the volume's bricks: e.g.,<br><div><br>> [eric@sn1 srv]$ mount | grep xfs<br>> /dev/sda7 on /srv/sda7
type xfs (rw)<br>> /dev/sdb7 on /srv/sdb7 type xfs (rw)<br>> /dev/sda8 on /srv/sda8 type xfs (rw)<br>> /dev/sdb8 on /srv/sdb8 type xfs (rw)<br>> /dev/sda9 on /srv/sda9 type xfs (rw)<br>> /dev/sdb9 on /srv/sdb9 type xfs (rw)<br>> /dev/sda10 on /srv/sda10 type xfs (rw)<br>> /dev/sdb10 on /srv/sdb10 type xfs (rw)<br><br>I plan to re-provision both nodes (e.g., convert them from CentOS -> SLES) and need to preserve the data on the current bricks.<br><br>It seems to me that the procedure for this endeavor would be to: detach the node that will be re-provisioned; re-provision the node; add the node back to the trusted storage pool; and then add the bricks back to the volume. *But* this plan fails at Step #1. i.e.,<br><br> * When attempting to detach the second node from the trusted storage pool, the Console Manager complains "Brick(s) with the peer 192.168.1.2 exist in cluster".<br> * When attempting to remove the second node's bricks from the volume, the Console Manager complains "Bricks not from same subvol for replica".<br><br>Is this even feasible? I've already verified that bricks can be *added* to the volume (by adding two additional local partitions), but I'm not sure where to begin preparing the nodes for re-provisioning.<br><br>Eric Pretorious<br>Truckee, CA<br></div></div></body></html>