Comment 69 for bug 569900

Plutocrat (plutocrat) wrote : Re: mount: mounting /dev/md0 on /root/ failed: Invalid argument

Confirmed here as well. I couldn't believe that such a fundamental bug would be released, but really, yes, my RAID system is unbootable after using the installer. Two and a half days wasted.
I've got an IBM 3200 server with four 500GB disks. My partitioning scheme is as follows.
On each disk:
/boot 1GB
/ 15GB
swap 2GB
/home 482GB
Then I have:
md0: RAID 1 on the four /boot partitions (sd[abcd]1)
md1: RAID 5 across the / partitions (sd[abcd]2)
md2: RAID 5 across the /home partitions (sd[abcd]4)
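For reference, that layout corresponds to arrays created more or less like this (a sketch only; the installer does the actual creation, so the exact options may differ):

mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]2
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]4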

The /etc/mdadm/mdadm.conf file looks correct before I reboot.
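That is, it has the expected three ARRAY lines, something along these lines (UUIDs shown as placeholders here, not the real values):

DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=4 UUID=<md0-uuid>
ARRAY /dev/md1 level=raid5 num-devices=4 UUID=<md1-uuid>
ARRAY /dev/md2 level=raid5 num-devices=4 UUID=<md2-uuid>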

After the install I get dropped to the initramfs prompt.

Doing cat /proc/mdstat tells me only the md2 array is detected, and it's rebuilding.
mdadm --detail /dev/md2 tells me that it is composed of sda, sdb, sdc and sdd, i.e. the WHOLE disks.
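These are the kinds of checks that can be run at the (initramfs) prompt to see where that bogus whole-disk array comes from; mdadm --examine reads the on-disk superblocks directly, so it shows whether the whole-disk devices carry their own metadata as well as the partitions:

cat /proc/mdstat
mdadm --examine /dev/sda
mdadm --examine /dev/sda1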

I can do
mdadm --stop /dev/md2
and then:
mdadm --assemble /dev/md0 /dev/sd[abcd]1
mdadm --assemble /dev/md1 /dev/sd[abcd]2
mdadm --assemble /dev/md2 /dev/sd[abcd]4

After this, md2 only comes up with 3 of the 4 disks, but I can now boot by typing exit to get out of the initramfs.
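The dropped member can presumably be re-added along these lines (sdX4 standing in for whichever partition got kicked out; a sketch, not something that fixes the underlying problem):

mdadm /dev/md2 --add /dev/sdX4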

I've tried all my tricks to get this config to stick, but apparently on every reboot I have to manually stop the wrong array and manually assemble them all again.
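For reference, the standard way to make a manually corrected assembly persist (assuming the culprit is a stale copy of mdadm.conf baked into the initramfs) would be to regenerate the ARRAY lines and rebuild the initramfs once the system is up, roughly:

mdadm --detail --scan    # prints ARRAY lines to merge into /etc/mdadm/mdadm.conf
update-initramfs -u      # rebuild, since the initramfs carries its own copy of mdadm.conf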

This is the amd64 ISO of Ubuntu Server 10.04. I also had the same problem with the i386 version.

I've tried wiping the partition tables, reformatting the drives, and zeroing the superblocks, and none of these fix it. Two and a half days. Is the bug in mdadm, the partitioner, or what?
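To be concrete, that clean-up looks roughly like this per disk (a sketch, not the exact invocations; the dd line just blanks the MBR, partition table included):

mdadm --zero-superblock /dev/sda /dev/sda1 /dev/sda2 /dev/sda4
dd if=/dev/zero of=/dev/sda bs=512 count=1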