[12:37] <marga> infinity, hey there. The installer is still not matching the latest kernel, I thought it was going to be updated soon... Is there an ETA on that?
[13:14] <caribou> apw: smb: I have just realised that /etc/default/grub.d/kexec-tools.cfg is the script responsible for setting up crashkernel= as a boot argument
[13:14] <caribou> apw: smb: any reason for that ? this is a requirement for kdump-tools, not kexec itself
[13:26] <caribou> thank god we are doing good QA on our flagship software
[13:39] <apw> caribou, sounds like it is misplaced at best
[13:40] <caribou> apw: kdump-tools is not the only reverse depends of kexec-tools so that variable may be defined for things that have nothing to do with crashkernel
[13:41] <caribou> (petitboot & pxe-kexec)
[13:41] <apw> caribou, i suppose... all it does is "make a hole in ram" so it could be used by other things for the same side-effect
[13:42] <caribou> apw: yes, it has minimal impact
[13:42] <caribou> apw: maybe it is worth moving it to kdump-tools package
[13:42] <caribou> apw: plus this is Ubuntu specific
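The drop-in caribou is describing can be sketched roughly as follows. grub-mkconfig sources /etc/default/grub and then every *.cfg under /etc/default/grub.d/, so the snippet only needs to append to the already-set variable. The exact crashkernel= memory range below is an assumption for illustration, not necessarily the value the package ships:

```shell
# Hypothetical sketch of /etc/default/grub.d/kexec-tools.cfg:
# append a crashkernel= reservation to whatever /etc/default/grub already set.
# The 384M-:128M range is an assumed example value.
GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT crashkernel=384M-:128M"
```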
[13:46] <smb> Yeah, guess a misnomer from times when all was mushed together
[13:47] <apw> caribou, presumably debian has to have handling for that cmdline thing as well tho ?
[13:47] <apw> where do they do it?
[13:47] <caribou> apw: nope, it is a manual addition to /etc/default/grub documented in the README
[13:48] <apw> caribou, oh right, so we can move it to whever it makes most sense and then upstream it to debian :)
[13:48] <caribou> apw: there is no linux-crashdump metapackage on debian
[13:48] <caribou> yep
[13:49] <apw> anyhow very likely we put it in kexec tools because it _is_ related to kexec tools ...
[13:49] <apw> so it might make sense there really, because you have kexec --crash or whatever to use it don't you
[13:49] <caribou> apw: true
[13:50] <caribou> apw: I was looking at LP: #1318111
[13:54] <smb> apw, thats what I meant with mushed together...
[13:55] <apw> oh that is very broken, sigh
[14:33] <xnox> caribou, i think there is a fix for that in mdadm package where Laney fixed a "similar" bug.
[14:34] <caribou> xnox: mdadm ??? oh, maybe a more generic fix
[14:34] <caribou> xnox: that would explain why I can no longer reproduce it
[14:34] <caribou> I'll fetch the source
[14:34] <xnox> caribou, well, it's a bit tricky. grub.d hooks can readd things over and over and over again. there was a bug in the way mdadm was doing it.
[14:35] <xnox> there was a feedback loop and it depended on how many kernels one was generating the initramfs for....
[14:35] <xnox> so try to have multiple kernels installed and have update-grub run across them all or some such.
[14:36] <caribou> xnox: yeah, that's what I'm testing atm
[14:37] <caribou> xnox: LP: #1465567
[14:53] <smoser> http://paste.ubuntu.com/13576380/
[14:54] <smoser>  does that stack trace look important ?
[14:54] <smoser> root fs got remounted read-only and i'm just going to reboot.
[14:54] <smoser> the reason i ask if it looks important is that it's quite possible that the underlying disk in this vmware system is foobarred.
[14:55] <smoser> (the provider has done that before, so if it smells like that I won't open a bug).
[14:56] <xnox> [700872.486476] sd 2:0:0:0: [sda] task abort on host 2, ffff88001e469000
[14:56] <xnox> [700882.773096] sd 2:0:0:0: [sda] Failed to get completion for aborted cmd ffff88001e469000
[14:56] <xnox> hardware problem =)
[15:03] <smoser> xnox, thanks.
[19:21] <jderose> Any insight into why 4.2.0-19 wasn't released last week, if it will be released this week?
[20:07] <apw> we do try and avoid releasing on friday so we avoid breaking people at the weekend
[20:22] <jderose> apw: was the reason 4.2.0-19 wasn't released due to regressions found, or because of the holidays? just eager for 4.2.0-19 as it fixes a critical NVMe + suspend issue :)
[21:21] <jderose> apw: BTW, this is the specific fix we're waiting for - http://kernel.ubuntu.com/git/ubuntu/ubuntu-wily.git/commit/?h=master-next&id=babbf7db6d39d809ae132394f1196463ef118ca0
[21:22] <jderose> without it, suspend/resume is broken on UEFI systems with NVMe drives (and UEFI is, AFAIK, required to work with NVMe drives at all)
[22:04] <stgraber> sdeziel: bjf is looking for someone who can test fixes
[22:04] <sdeziel> bjf: I'm ready to test when you are
[22:05] <bjf> sdeziel, thanks, jsalisbury ^
[22:05] <bjf> sdeziel, are we talking Trusty?
[22:06] <sdeziel> bjf: yes, I can test both 3.13 and 3.16 on trusty
[22:06] <bjf> sdeziel, thanks
[22:25] <bjf> sdeziel, we've duplicated it here. i don't think we'll need you
[22:25] <sdeziel> bjf: OK, I'm still available if you think I can help
[22:25] <sdeziel> thanks for looking into this
[22:26] <bjf> sdeziel, we'll probably want you to test as soon as we've identified the bad commit but that might take a bit
[22:26] <sdeziel> bjf: OK, no problem
[22:27] <sdeziel> bjf: I didn't bisect it myself but looking at the changelog, "fib_rules: fix fib rule dumps across multiple skbs" seems like a potential culprit