[00:42] <phillw> Hi, whilst not 100% kernel, when we do test (or are asked to test) we often need a kernel and a build.. So, https://launchpad.net/~jsalisbury have you any idea why the kernel team can build ISOs while https://answers.launchpad.net/ubuntu/+source/debian-cd/+question/240037 gets closed?
[00:45] <phillw> Oh, and please do not think that is me being harsh, Joe is a complete and utter star for the work he did for the zram issue in such short time. 
[02:31] <rsalveti> apw: you published the meta package for saucy (linux-mako, ppa), mind copying it for trusty as well?
[07:36] <ppisati> hallo
[08:37] <smb> ppisati, Good morning
[09:03] <apw> rsalveti, bah... will do
[09:05] <apw> phillw, that question gets closed by the launchpad autocloser because no one did anything with it in 15 days; that seems to be a "feature" of questions. building CDs is a complex process and has changed in recent cycles to match upstream supported tooling. as far as i know the appropriate tooling does exist in published repos, but it is slightly outside the normal archive system so some of it may not be at all obvious
[09:06] <apw> phillw, as for what joe did, most likely he subbed a replacement kernel into an existing CD, which with a lot of playing can be done in a "dirty" way, but you would have to ask him.
[09:07] <apw> phillw, it is unlikely he is mastering a cd the way the main builders do, as they have dedicated kit to do so currently
[09:09] <apw> phillw, all of that said, you are likely to find more authoritative answers from the installer team
[09:13] <brendand> does anyone know why the -proposed kernels haven't been copied yet even though they've been ready since friday?
[09:22] <apw> i believe there is a delay in the point release which is in the way, but i would not like to claim to be certain
[09:27] <smb> apw, I think that is exactly what bjf was saying in yesterday's kernel-team irc meeting
[09:30] <apw> smb, ahh yes there was "Holding" all over the place
[09:30] <apw> smb, though i blinked and missed it with the meeting
[09:31] <smb> apw, True, the email Joe sends out is more reliable as a data source
[09:33] <smb> https://lists.ubuntu.com/archives/kernel-team/2014-January/037696.html
[09:36] <apw> brendand, ahh there it is: "We are in a holding pattern waiting to see if any regressions show up that would cause us to respin before the 12.04.4 release goes out."
[09:38] <brendand> apw - okay - but the kernels aren't in -proposed so how will regressions be detected?
[09:39] <apw> brendand, that refers to the previous kernels which are in -proposed/-updates; though technically it presumably only means that for precise 
[09:40] <apw> though i guess that is actually saucy as well, as we're using the lts kernel on the cd
[09:40] <apw> this is about leaving a window where we can pick up what's in the -updates pocket and rev that separately from what we are staging for the next cycle
[10:04] <cking> oops, rebooted the wrong machine
[10:10] <apw> cking, that is an oops ... especially when it has been running that test for 3 days
[10:11] <cking> unfortunately the ssh session I was in exited and thus I didn't reboot the machine I intended to reboot
[10:12] <apw> cking, i hate that when it happens
[10:12] <apw> as you always notice after you have asked your finger to press return but before it has done so, but there is no stopping it
[10:13]  * cking wonders why ACPICA always gets released the day before fwts is, I can never get these in sync
[10:39] <apw> cking, you need an entire cycle to get ACPICA working again anyhow
[10:40] <apw> it always has new 'features'
[10:40] <cking> well, a bunch of fixes, so it's new and shiny and good ;-)
[10:42] <apw> yeah but out of sync with the releases is probably good, so you have time to adapt :)
[10:42] <cking> cadence cadence cadence
[12:49] <smb> apw, https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1269401
[12:49] <ubot2> Launchpad bug 1269401 in linux (Ubuntu) "[Trusty] Switching framebuffers fails on VMs using cirrus" [Undecided,Confirmed]
[13:48] <rtg> apw, suspend stopped working on my Lenovo within the last couple of uploads. I'm gonna have to spend part of the day figuring that out lest it drive me nuts.
[14:24] <cking> rtg, is that suspend or resume?
[14:26] <rtg> cking, it is suspend, but now its working with -rc8. oh well.
[14:29] <henrix> rtg: looks like you forgot to apply my last patch to the Lucid kernel
[14:30] <rtg> henrix, uh, which one ?
[14:31] <henrix> rtg: b4789b8e6be3151a955ade74872822f30e8cd914 ("[Lucid][LTS-Raring][CVE-2013-6380] aacraid: prevent invalid pointer dereference")
[14:31] <rtg> henrix, right, sorry.
[14:31] <henrix> rtg: np ;)
[14:33] <rtg> henrix, done
[14:34] <henrix> rtg: ack, thanks
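What "done" above typically amounts to is a cherry-pick with `-x`, so the new commit records the upstream sha henrix quoted. A minimal sketch, using a throwaway repo as a stand-in for the Lucid tree; the file name, branch names, and commit subjects here are invented for illustration:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
# wrapper so commits work without global git identity configured
g() { git -c user.name=t -c user.email=t@t.example "$@"; }
echo base > drivers.c
git add drivers.c
g commit -q -m "base"
main=$(git symbolic-ref --short HEAD)
# fake "upstream" branch carrying the fix
git checkout -q -b upstream
echo "check pointer before deref" >> drivers.c
git add drivers.c
g commit -q -m "aacraid: prevent invalid pointer dereference"
fix=$(git rev-parse HEAD)
# back on the distro branch, pick the fix with provenance
git checkout -q "$main"
g cherry-pick -x "$fix" > /dev/null
# -x appends a "(cherry picked from commit <sha>)" line to the message
git log -1 --format=%B | grep 'cherry picked from commit'
```

The `-x` provenance line is what lets later tooling (and humans) map a distro commit back to the upstream sha in a CVE tracker.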
[14:42] <apw> henrix, if these stables exist is there anything which prevents them being applied to master-next btw ?
[14:43] <henrix> apw: not sure i understand what you mean... are you talking about the CVEs fixes?
[14:44] <apw> henrix, no i was thinking about this fix which i proposed and is coming down from stables in fact
[14:44] <apw> or have the stables themselves not yet released with that in
[14:44] <apw> as i looked for the commit on master-next on the assumption that if they were in stable they would be on there
[14:45] <henrix> apw: ah, right :) no, the fix can be applied.  bjf (or sconklin) will handle that when merging the stable trees into the ubuntu kernels
[14:45] <henrix> apw: i believe there are scripts to handle that
[14:45] <apw> henrix, i don't need to apply it cause i know the stables will apply it in the next window anyhow, i was more trying to work out what triggers stable to be applied
[14:46] <henrix> apw: there are several sources. the primary source are the commits tagged for stable
[14:47] <henrix> apw: then, for network-related patches, davem explicitly sends requests to the stable mailing list (there are no network commits tagged for stable!)
[14:47] <henrix> apw: this also applies for sparc patches
[14:48] <rtg> apw, if you apply it, then you can also jam a bug number in the commit log
[14:48] <henrix> apw: and then... there's stable mailing list.  people keep sending requests for stable inclusion (CVE fixes, fixes that missed the stable tag, etc)
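The primary source henrix describes is mechanical: patches destined for stable carry a "Cc: stable@vger.kernel.org" tag in the commit message, which the stable maintainers harvest with `git log --grep`. A small sketch of that convention, using a two-commit throwaway repo as a stand-in for linux.git (the commit subjects are invented):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
# wrapper so commits work without global git identity configured
g() { git -c user.name=t -c user.email=t@t.example "$@"; }
# a fix tagged for stable in its message body
g commit -q --allow-empty -m "foo: fix use-after-free on close" \
    -m "Cc: stable@vger.kernel.org"
# an ordinary commit with no stable tag
g commit -q --allow-empty -m "foo: tidy whitespace"
# only the tagged commit shows up in the harvest
git log --grep='stable@vger.kernel.org' --format=%s
```

The network and sparc exceptions he mentions exist precisely because those subsystems skip the tag and send explicit requests to the stable list instead.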
[14:57] <brendand> bjf, hi
[14:58] <brendand> bjf, can you clarify when the kernels are going to be pushed to -proposed? i understand you're holding them because of 12.04.4
[15:18] <rtg> henrix, please have a look at https://bugs.launchpad.net/bugs/1268780 - that commit looks like a candidate for 3.11 stable
[15:18] <ubot2> Launchpad bug 1268780 in linux (Ubuntu) "Please backport the "libata.force disable" patch to 13.10 kernel" [Medium,Triaged]
[15:19] <henrix> rtg: looking
[15:19] <rtg> henrix, ah, nm. you are already on it
[15:19] <henrix> rtg: yep :)
[15:27] <BenC> apw: Working through a few build errors (mostly missing include for of_address stuff that got moved)
[15:39] <apw> BenC, sounds promising
[16:01] <brendand> bjf, around?
[16:01] <bjf> brendand, just
[16:02] <bjf> brendand, well.. no i can't actually. we have not found any regression at this point that would cause a respin so i don't think it will be much longer.
[16:03] <bjf> brendand, and yes i know this messes with everyone's schedule.. we'll just be flexible about it and when the next cycle starts
[16:04] <brendand> bjf, by the end of the week?
[16:05] <bjf> brendand, probably next week
[16:05] <brendand> bjf, ok
[16:06] <brendand> bjf, are all of them being held to the same extent? i would have thought you'd just be concerned with the saucy kernel
[16:08] <bjf> brendand, yeah, we're holding them all right now. we could just hold saucy but we like to maintain a single sru cycle that has them all. if we do them individually it could get messier schedule-wise. (which one am i doing this week?)
[16:08] <brendand> bjf, makes sense
[16:09] <bjf> brendand, honestly, we should have just skipped this entire cycle
[16:09] <bjf> brendand, i sort of screwed this up
[16:25] <rtg> cking, have you had any luck reproducing the 3.13 performance issues on tangerine ?
[16:26] <cking> rtg, I'm still gathering data on some other kit at the mo to get an idea
[16:26] <rtg> cking, cool
[16:26] <cking> it does involve a load of builds to get some idea what's going on
[16:26] <rtg> yup
[16:28] <cking> rtg, perhaps we should also cull a load of unused files on the device just for the sake of cleaning up the drive anyhow
[16:29] <rtg> cking, on tangerine ?
[16:29] <cking> yup
[16:29] <rtg> I do periodically hassle folks. is it full again ?
[16:29] <cking> 75% or so, so its not too bad
[16:30] <cking> rtg, anyhow, I'm still gathering data on some boxes, I may get an idea tomorrow once I've collated the results
[16:30] <rtg> cking, I generally don't start to worry until 95% or so. do you think a 75% full file system will have an impact ?
[16:31] <cking> rtg, most probably not the issues you see
[16:32] <rtg> cking, shall I update the kernel to 3.13-rc8 ? 3.13.0-3
[16:32] <cking> rtg, well, it's another data point I guess
[16:32] <BenC> apw: I think I've fixed them all...letting all 5 builds go before claiming victory
[16:32] <rtg> cking, didn't want to mess with your results or testing
[16:33] <cking> rtg, how about tomorrow, I want to get a spin on it tomorrow morning when it's quiet
[16:33] <rtg> cking, np
[17:19] <lamont> I have a machine booting (root is an lv on swraid1) that keeps telling me "WARNING: There appears to be one or more degraded RAID devices", spews mdstat showing the recovery in progress (eta 137 min), says that it's starting it, complains that CREATE {user_root,group_disk} not found, says that it started the RAID in degraded mode, and then pauses for several seconds and repeats.
[17:20] <lamont> is that the "don't boot with degraded RAID" crap, or something else, I wonder?
[17:22] <rtg> smb, ^^
[17:22] <apw> smb, ^^ ... oh heh
[18:20]  * apw relocates
[19:28] <smb> lamont, "Don't boot" would drop into a busybox shell imo (not that I have had that case)
[19:29] <lamont> ok. sigh
[19:30] <smb> lamont, But maybe it is actually what you say.
[19:30] <lamont> it's also possibly me..
[19:31] <lamont> I don't remember if I rebooted after I did all my shenanigans to partition the new drive and add it to the raid1s
[19:32] <lamont> another 45 min or so, and I can see where I'm at... if this is really hating me, are you around then for me to pester with mdadm questions, or am I reading source?
[19:32] <smb> Can you get the actual /proc/mdstat output 
[19:33] <lamont> it's on the screen in front of me
[19:33] <lamont> recovery = 87.1%.... finish=36.6 min
[19:33] <smb> lamont, Well I could try to be around if you get my attention then
[19:33] <lamont> ta
[19:33] <lamont> hopefully it'll just be to tell you that all is better
[19:34] <smb> Heh, one always can hope
[19:36]  * smb hopes to get enlightened meanwhile about which d-i option finally causes the partitions to be re-done with the preseed
[20:23] <lamont> smb: it was either me, or strange...  fixed by: lvm vgscan; lvm lvdisplay -C (root not active); lvm lvchange -a y ${ROOTlv}
[20:26] <smb> lamont, quite weird. cannot remember ever having to activate an lv manually
[20:27] <lamont> smb: I deactivated it manually after I chrooted into it to fdisk the new drive
[20:27] <lamont> hence "lamont" is the most likely root cause here
[20:29] <smb> Sounds like deep surgery. A bit more than I usually do for my "servers" which actually only serve one person
[20:33] <lamont> well... it was being... difficult
[20:33] <lamont> note to self: do not have the ilo (or equivalent) on your dhcp server use dhcp to get an address. :/
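For reference, the kind of /proc/mdstat output smb asked for looks like the snippet below, rebuilt around the figures lamont quoted (recovery = 87.1%, finish=36.6min). The device names and block counts are illustrative, not from the log; on a real machine you would just `cat /proc/mdstat`:

```shell
# sample mdstat text for a raid1 resyncing onto a freshly added member
mdstat='md0 : active raid1 sdb1[2] sda1[0]
      976630336 blocks super 1.2 [2/1] [U_]
      [=================>...]  recovery = 87.1% (850699648/976630336) finish=36.6min speed=57344K/sec'
# pull out progress and ETA, as one might while watching a resync
echo "$mdstat" | awk '/recovery/ {print $4, $6}'
```

The `[2/1] [U_]` pair is the part the boot-time warning keys off: two configured members, only one up.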
[22:25] <sforshee> hallyn: is ns_capable(&init_user_ns, CAP_SYS_ADMIN) equivalent to capable(CAP_SYS_ADMIN)?
[22:35] <hallyn> sforshee: yes
[22:36] <sforshee> hallyn: thanks