[13:14] <CIA-9> main-menu: cjwatson * r139 ubuntu/ (17 files in 3 dirs): merge from Debian 1.31
[13:16] <CIA-9> main-menu: cjwatson * r140 ubuntu/debian/changelog: releasing version 1.31ubuntu1
[13:50] <CIA-9> yaboot-installer: cjwatson * r268 ubuntu/debian/ (16 files in 2 dirs): merge from Debian 1.1.16
[13:52] <CIA-9> yaboot-installer: cjwatson * r269 ubuntu/debian/changelog: releasing version 1.1.16ubuntu1
[21:50] <mark> hi
[21:50] <mark> is this symptom known for 10.04 lucid?
[21:51] <mark> preseeded software RAID1+LVM installs, where lucid creates a RAID1 nested inside another RAID array?
[21:51] <mark> md0 : active raid1 md1p1[0]
[21:51] <mark>       9764800 blocks [2/1] [U_]
[21:51] <mark>       
[21:51] <mark> md1 : active raid1 sdb[1] sda[0]
[21:51] <mark>       478620608 blocks [2/2] [UU]
[21:51] <mark> not exactly what we had in mind... ;)
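The paste above is /proc/mdstat output: md1 is a healthy two-disk RAID1 over the whole of sda and sdb, while md0 has been assembled from md1p1, a partition of the md1 array itself, and is running degraded ([2/1] [U_], one of two members present). A minimal way to confirm the nesting from a shell, using the device names in the paste (standard mdadm invocations, not commands taken from the log):

    # List md0's members; seeing md1p1 here means the array is
    # built on a partition of another md array, i.e. nested RAID.
    mdadm --detail /dev/md0

    # Examine the RAID superblock on the nested member itself.
    mdadm --examine /dev/md1p1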
[21:53] <cjwatson> not known by me but I'm a bit behind on bugs
[21:53] <cjwatson> (and am about to go to bed but thought I'd at least reply with that much)
[21:54] <mark> thanks :)
[21:54] <mark> good night then
[21:54] <cjwatson> certainly isn't expected; we generally try to avoid using partitioned RAIDs
[21:55] <mark> yeah
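For readers unfamiliar with the term: a partitioned RAID is an md array treated like a disk and carved into partitions (hence md1p1 above), rather than each array being built directly from one partition per disk. A hedged sketch of the two shapes, with illustrative sizes and device names; the second pair of commands reproduces what the paste shows, including the degraded second array:

    # Intended shape: two sibling arrays, each over one partition per disk.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

    # Shape in the paste: md1 spans the raw disks, is then partitioned
    # (so that md1p1 exists), and md0 is built on md1p1 with its
    # second member missing, matching the [2/1] [U_] status.
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/md1p1 missing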
[21:55] <mark> the idea here is of course two adjacent raid1 arrays
[21:55] <mark> one for containing / fs, the other (md1) for an LVM PV/VG
[21:55] <mark> this worked in karmic
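For concreteness, a minimal preseed sketch of the layout mark describes: two parallel RAID1 arrays, one mounted as / and the other left as an LVM physical volume. Sizes, the filesystem choice, and the recipe name are assumptions for illustration, not details from the log; the syntax follows the documented partman-auto-raid form, and a complete preseed would also need the usual partman confirmation answers and LVM volume definitions:

    d-i partman-auto/method string raid
    d-i partman-auto/disk string /dev/sda /dev/sdb

    # Two RAID-flagged partitions per disk (sizes are placeholders).
    d-i partman-auto/expert_recipe string      \
        multiraid ::                           \
            10000 10000 10000 raid             \
                $primary{ } method{ raid }     \
            .                                  \
            100000 100000 -1 raid              \
                method{ raid }                 \
            .

    # Fields: <raidtype> <devcount> <sparecount> <fstype> <mountpoint> <devices>
    # The sdX1 pair becomes the md holding /; the sdX2 pair becomes an LVM PV.
    d-i partman-auto-raid/recipe string        \
        1 2 0 ext4 / /dev/sda1#/dev/sdb1 .     \
        1 2 0 lvm - /dev/sda2#/dev/sdb2 .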
[21:56] <cjwatson> sounds like something we should fix for 10.04.1
[21:56] <mark> indeed
[21:56] <mark> ok I'll search for bugs and file one if I can't find it
[22:02] <cjwatson> I'd rather you just file a fresh one
[22:04] <mark> alright