[08:07] <jtaylor> where should I file bugs for the lts-yakkety from the ppa?
[08:16] <apw> jtaylor, https://bugs.launchpad.net/ubuntu/+source/linux-lts-yakkety/+filebug i guess
[08:16] <apw> jtaylor, but do tell us the bug # here
[08:27] <jtaylor> hm is launchpad broken?
[08:27] <jtaylor> get an oops on submit ...
[08:28] <apw> lovely... try filing it against linux itself, and i'll sort it out afterwards
[08:28] <apw> jtaylor, ^
[08:31] <jtaylor> apw: bug 1631298
[08:34] <apw> jtaylor, can i assume this is installing into an existing working system ...  if so could you pastebin me a lsinitramfs of the working and non-working initrds please
[08:36] <jtaylor> 4.8 http://paste.ubuntu.com/23288094/
[08:36] <jtaylor> 4.4 (working) http://paste.ubuntu.com/23288095/
[08:37] <apw> thanks
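The comparison apw is after boils down to diffing the two initrds' file lists. A minimal sketch of the technique — the real `lsinitramfs` calls are left as comments because the initrd paths are specific to jtaylor's machine, so hypothetical stand-in lists are used to make the diff step itself runnable:

```shell
# On the affected system the lists would come from initramfs-tools:
#   lsinitramfs /boot/initrd.img-4.4.0-xx-generic | sort > /tmp/44.list
#   lsinitramfs /boot/initrd.img-4.8.0-21-generic | sort > /tmp/48.list
# Stand-in lists (hypothetical contents) to demonstrate the comparison:
printf 'conf/modules\nlib/modules/md/raid1.ko\nscripts/local\n' | sort > /tmp/44.list
printf 'conf/modules\nscripts/local\n' | sort > /tmp/48.list
# '-' lines are files present only in the working (4.4) initrd
diff -u /tmp/44.list /tmp/48.list | grep '^-[^-]'
```

Anything that shows up only on the '-' side (here the stand-in raid1 module) is a candidate for why the newer initrd fails to assemble the array.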
[10:08] <apw> jtaylor, could you boot up this test kernel for me so we can find out what the heck the feature flags are set to ... http://people.canonical.com/~apw/lts-backport-yakkety-xenial/
[10:09] <apw> jtaylor, not expecting it to fix anything but should drop an APW: in the dmesg with some info
[10:23] <apw> jtaylor, ok naive testing of raid1s created on 4.4 and booted with 4.8 seem to assemble for me
[10:23] <jtaylor> apw: my raid is probably a lot older
[10:23] <apw> jtaylor, so we need to get that debug output to see what differs on yours to mine
[10:23] <jtaylor> maybe 3.13 or more ;)
[10:23] <apw> jtaylor, probably, and that is a little scary :)
[10:23] <jtaylor> I'll reboot soon
[10:23] <apw> jtaylor, thanks
[10:35] <jtaylor> apw: hm all zeros http://paste.ubuntu.com/23288447/
[10:37] <apw> jtaylor, oh you are right i have the same error, and yet the lv is there, oh but is it running
[10:38] <jtaylor> it is displayed but not active
[10:38] <jtaylor> though I haven't tried activating it manually; I just figured that if it puts me in an emergency shell, it doesn't work
[10:38] <apw> i would doubt it will start
[10:38] <jtaylor> I actually forgot to mention that: the system doesn't boot, despite / and /boot being there
[10:38] <apw> ok i am reproducing in fact, just not seeing what it is saying
[10:46] <apw> jtaylor, i am going to assume the data you have in your raid1 is something you care about (given it is raid1'd)
[10:46] <apw> jtaylor, ie this is not something you can trivially test :/
[10:47] <jtaylor> yes, though I have backups
[10:47] <jtaylor> but destroying it would still be inconvenient
[10:47] <apw> jtaylor, do you have a test environment at all?  obviously i can test a fix here but this is pretty scary code to change
[10:48] <apw> jtaylor, anyhow i have a theory and i am testing it, and if that works i'll ask upstream for safety
[10:49] <jtaylor> no, this is my home setup I only have this one raid1 system
[10:50] <apw> jtaylor, though i guess in theory we could pull off one of the mirrors as a backup, hrm anyhow
[10:50] <apw> let's get to that once we have any idea if this is right
[10:50] <jtaylor> if you are reasonably confident in a fix I can update my backups and test it
[13:56] <om26er> jsalisbury: updated comments on the bug report. The last kernel is good, second-last is bad.
[13:59] <jsalisbury> om26er, great, I'll build the next one.  Only one or two left.
[13:59] <om26er> jsalisbury: ack.
[15:38] <manjo> rtg, something going on with tangerine ? compiling arm64  Error: selected processor does not support `staddlh w0,[x1]'
[15:40] <manjo> rtg, looks like assembler messages from xenial-amd64 chroot 
[15:43] <manjo> bjf, apw ^^ anyone know ? trying to build 4.8.0-21.23
[15:45] <manjo> I am also seeing ==> Error: attempt to move .org backwards
[15:47] <manjo> build works fine with yakkety-amd64 chroot ... something broken with xenial-amd64 chroot ?
[15:56] <om26er> jsalisbury: Hi! that's a good kernel.
[15:56] <jsalisbury> om26er, ack
[15:58] <om26er> jsalisbury: how big is the diff from this stage ? (or number of commits)
[16:01] <jsalisbury> om26er, this next kernel will narrow it down between 5 sched commits:
[16:01] <jsalisbury> 55e16d3 sched/fair: Rework throttle_count sync
[16:01] <jsalisbury> 599b484 sched/core: Fix sched_getaffinity() return value kerneldoc comment
[16:01] <jsalisbury> 8663e24 sched/fair: Reorder cgroup creation code
[16:01] <jsalisbury> 3d30544 sched/fair: Apply more PELT fixes
[16:01] <jsalisbury> 7dc603c sched/fair: Fix PELT integrity for new tasks
[16:02] <om26er> jsalisbury: I am betting on the first one
[16:03] <jsalisbury> om26er, :-)  We shall know shortly
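The narrowing jsalisbury is doing by hand — build a kernel at a midpoint, have om26er report good/bad — is exactly what `git bisect` automates. A self-contained sketch on a toy repo: the commit subjects mirror the five sched commits above, but the repo, ordering, and the `behaviour` file standing in for "does the kernel boot cleanly" are all synthetic:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email bisect@example.com
git config user.name bisect
echo good > behaviour
git add behaviour && git commit -qm 'last known good kernel'
for subj in 'Rework throttle_count sync' \
            'Fix sched_getaffinity() return value kerneldoc comment' \
            'Reorder cgroup creation code' \
            'Apply more PELT fixes' \
            'Fix PELT integrity for new tasks'; do
    # flip the behaviour at the commit we are pretending introduced the bug
    if [ "$subj" = 'Apply more PELT fixes' ]; then echo bad > behaviour; fi
    git add behaviour
    git commit -q --allow-empty -m "sched/fair: $subj"
done
# HEAD is bad, the root commit is good; the grep plays the role of
# booting each candidate kernel and reporting good (0) or bad (non-zero)
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"
git bisect run sh -c 'grep -q good behaviour'
```

`git bisect run` halves the range each step, so five suspect commits need only two or three test builds — which matches the "only one or two left" pacing above.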
[16:45] <jsalisbury> om26er, next kernel is posted
[17:24] <om26er> jsalisbury: that's a bad kernel
[17:30] <om26er> jsalisbury: do we finally have a commit blame ?
[18:20] <jsalisbury> om26er, we have to test only one more.  Then I'll build a test kernel with the reported commit reverted.  The next kernel should be done in a few moments
[18:20] <georgios> hi. i saw grsecurity-related utilities but no grsecurity kernel. how so?
[18:20] <om26er> jsalisbury: great
[18:42] <jsalisbury> om26er, next test kernel is posted
[18:42] <om26er> jsalisbury: thanks, testing it now.
[18:42] <jsalisbury> om26er, ack
[18:49] <om26er> jsalisbury: that's a good one.
[18:49] <jsalisbury> om26er, The bisect reports this as the offending commit:
[18:49] <jsalisbury> commit 3d30544f02120b884bba2a9466c87dba980e3be5
[18:49] <jsalisbury> Author: Peter Zijlstra <peterz@infradead.org>
[18:49] <jsalisbury> Date:   Tue Jun 21 14:27:50 2016 +0200
[18:49] <jsalisbury>     sched/fair: Apply more PELT fixes
[18:50] <jsalisbury> om26er, I'll build a test kernel with it reverted to see if it fixes the bug.
[18:51] <om26er> jsalisbury: you mean 4.8 release minus that commit ?
[18:51] <jsalisbury> om26er, I'll build a yakkety kernel, minus that commit
[18:51] <om26er> jsalisbury: great
[18:54] <om26er> feels like a small diff https://kernel.googlesource.com/pub/scm/linux/kernel/git/tip/tip/+/3d30544f02120b884bba2a9466c87dba980e3be5%5E%21/
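"A yakkety kernel, minus that commit" is a plain `git revert` on top of the tree before rebuilding. A toy-repo sketch of just the revert step — file name and contents are made up; in the real yakkety tree the revert target would be 3d30544f02120b884bba2a9466c87dba980e3be5:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email revert@example.com
git config user.name revert
echo 'util_avg = 0;' > fair.c
git add fair.c && git commit -qm 'sched/fair: baseline'
echo 'util_avg = cap;' > fair.c
git commit -qam 'sched/fair: Apply more PELT fixes'
# "minus that commit": revert it, leaving the rest of the tree intact
git revert --no-edit HEAD
cat fair.c   # prints: util_avg = 0;
```

On the real tree the kernel would then be rebuilt the usual Ubuntu packaging way (roughly `fakeroot debian/rules clean` followed by the binary targets), producing the linux-image and linux-image-extra debs mentioned below.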
[19:19] <jsalisbury> om26er, kernel posted.  Note with this kernel, you need to install both the linux-image and linux-image-extra .deb packages.
[19:30] <om26er> jsalisbury: that's it.
[19:30] <om26er> latest kernel is good
[19:30] <jsalisbury> om26er, good news.  I'm going to ping upstream and the patch author and get their feedback
[19:31] <om26er> jsalisbury: ok, the test case is `stress -c $your_total_cpu_cores`
[19:32] <jsalisbury> om26er, thanks.  Can you post that in the bug?  I'll be pointing upstream there.
[19:34] <om26er> done.
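For reference, om26er's reproducer is one command: `stress -c <total cores>`, from Ubuntu's stress package. If stress isn't installed, a dependency-free stand-in (my sketch, not part of the report) that loads every core for a few seconds looks like this:

```shell
# the reported test case, one CPU worker per core:
#   stress -c "$(nproc)"
# dependency-free equivalent: busy-loop one background shell per core
cores=$(nproc)
pids=
for _ in $(seq "$cores"); do
    while :; do :; done &
    pids="$pids $!"
done
sleep 3
kill $pids
echo "loaded $cores cores"
```

Either form saturates every core, which is what triggered the PELT-related scheduler misbehaviour being bisected above.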