Comment 43 for bug 495370

Amos Hayes (ahayes-polkaroo) wrote:

Hi Jools. Thanks very much for the packages. I wanted to migrate a RAID 5 to RAID 6 on 10.04 and you saved the day. I too would like to see this picked up for a 10.04 backport or the like. :)

I was thrown off a bit when the array came back as md127 instead of md0 after installing the package and rebooting, but I sorted it out by stopping it and reassembling it under the old name. The conversion itself looks like it will take a while though.
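
In case anyone else gets bitten by the md127 rename, I think pinning the array in mdadm.conf and regenerating the initramfs should make the md0 name stick across reboots. I haven't re-tested a reboot after doing this yet, so treat it as a sketch (paths are the stock Ubuntu ones):

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
(then edit /etc/mdadm/mdadm.conf and drop any stale ARRAY line left over from the old package)
# update-initramfs -u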

My experience looked like this:

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdf[3] sde[2] sdd[1] sdc[0]
      5860543488 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

# dpkg -i mdadm_3.1.4-1_i386.deb
(Reading database ... 43484 files and directories currently installed.)
Preparing to replace mdadm 2.6.7.1-1ubuntu15 (using mdadm_3.1.4-1_i386.deb) ...
 * Stopping MD monitoring service mdadm --monitor
   ...done.
Unpacking replacement mdadm ...
Setting up mdadm (3.1.4-1) ...
Generating array device nodes... Removing any system startup links for /etc/init.d/mdadm-raid ...
update-initramfs: deferring update (trigger activated)
update-rc.d: warning: mdadm start runlevel arguments (2 3 4 5) do not match LSB Default-Start values (S)
update-rc.d: warning: mdadm stop runlevel arguments (0 1 6) do not match LSB Default-Stop values (0 6)
 * Starting MD monitoring service mdadm --monitor
   ...done.

Processing triggers for man-db ...
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.32-25-generic-pae

# reboot

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sde[2] sdf[3] sdd[1] sdc[0]
      7814057984 blocks

unused devices: <none>

# vgchange -an
# mdadm --stop /dev/md127
# mdadm --assemble /dev/md0 /dev/sdc /dev/sdd /dev/sde /dev/sdf

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[0] sdf[3] sde[2] sdd[1]
      5860543488 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

# mdadm --add /dev/md0 /dev/sdb
mdadm: added /dev/sdb

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb[4](S) sdc[0] sdf[3] sde[2] sdd[1]
      5860543488 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

# mdadm --grow /dev/md0 --level=6 --backup-file=/root/backup-md0
mdadm level of /dev/md0 changed to raid6
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdb[4] sdc[0] sdf[3] sde[2] sdd[1]
      5860543488 blocks super 0.91 level 6, 128k chunk, algorithm 18 [5/4] [UUUU_]
      [>....................] reshape = 0.0% (16384/1953514496) finish=3970.5min speed=8192K/sec

unused devices: <none>
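
With the reshape estimated at almost three days, I also poked at the md speed limits. Raising the minimum sync speed is supposed to help when the array is otherwise idle; these are the standard kernel sysctls, but I can't say yet how much difference they make here, so take the value as a guess:

# echo 50000 > /proc/sys/dev/raid/speed_limit_min
(the default minimum is 1000 KB/s; /proc/sys/dev/raid/speed_limit_max can be raised the same way if it becomes the bottleneck)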