[07:43] <infinity> zequence: Friendly reminder about https://bugs.launchpad.net/ubuntu/+source/linux-lowlatency/+bug/1396193
[11:06] <zequence> infinity: Thanks. Had a busy weekend. All waiting to build in the ppa.
[14:56] <NikTh> apw: Hello, if you have time check this : https://bugs.launchpad.net/linux/+bug/1386695 . Thanks
[15:24] <apw> NikTh, wassup?  the end of that implies you are testing a -25 kernel, but the one in proposed with the fixes applied claims to be a -26, did you get a test with -26 at all ?
[15:53] <NikTh> apw: No. I didn't test with -26. I didn't get the -26 update when I updated the system today. Weird. Proposed is enabled, that's for sure.
[16:10] <NikTh> apw: I just got the -26 update, tested it and unfortunately it does not work.
[16:29] <apw> NikTh, well, say that, and they will rip out the fixes for this update
[16:29] <apw> NikTh, not really sure what one is supposed to do if a bios update changes things
[16:35] <NikTh> apw: This is the latest version of BIOS available. I would update it either way, someday. The thing is your first kernel now works. But not the second. In comment #33 I have listed the kernels I tested.
[16:35] <apw> NikTh, right which is the opposite of what you reported before the update, which is confusing at best
[16:37] <NikTh> apw: correct. Although I didn't see any reference to ACPI or similar in the release notes of the BIOS. Here you can read the latest update (http://www.msi.com/support/mb/880GMAE45.html#down-bios)
[16:46] <apw> NikTh, well the right thing to do is mark it verification-failed, and we'll have to start again
[16:50] <NikTh> apw: start again ? all over ? with this kernel bisecting thing ? Haha, I don't have the time. I have the time to test any kernel you (or anyone else) produce, but to follow this procedure all over again..hmm, a bit difficult. 
[16:51] <apw> NikTh, well i think it is likely the patches we found before will do the trick, but we need this fix removed, as it does not work, then start again on top of the new release
[16:52] <NikTh> apw: OK, but your first kernel works (I have the links in comment #33). Should I remove the #verification-needed-utopic and replace it with the #verification-failed tag ?
[16:53] <apw> NikTh, verification-failed-utopic probably but yes
[16:53] <apw> bjf, ^ looks like we have a verification failure on utopic
[16:54] <NikTh> verification-failed is a ready tag as I can see. But should I remove the other one or leave it as it is ? (both)
[16:55] <bjf> apw, bug #?
[16:56] <apw> https://bugs.launchpad.net/linux/+bug/1386695
[16:56] <apw> seems a bios update has changed the bug to not need the fixes found by the preceding testing
[16:56] <apw> but to need different ones
[16:58] <NikTh> I'll leave @penalver to handle the tags because it seems we edit them at the same time.. a bit confusing. :-)
[17:00] <NikTh> apw: right. But I still believe for such bugs the latest BIOS update should be tested. Not an "outdated" one. 
[17:01] <apw> NikTh, oh indeed, just that it wasn't and has had an effect on the update, which makes all the work we did before wrong and moot
[17:03] <NikTh> apw : Indeed  
[17:03] <apw> and indeed will create a crap-load of work for stable to remove those broken fixes, and then we have to start again once we have a new base, sigh
[17:06] <NikTh> apw: I thought it was easier to merge the patches from the first kernel, the one that now works (with the new BIOS), rather than to start all over again.
[17:06] <NikTh> This one works : http://people.canonical.com/~apw/lp1386695-utopic/
[17:07] <apw> yes, it likely is, but those are on the wrong base so i have to wait for stable to revert and respin their tree, and for that to make it out, then i can start retesting those again on top of that base, as that is where it will need to be
[17:08] <NikTh> Ah, so you will probably release the fix when ? With the happy new year ? :P 
[17:13] <NikTh> apw: I have to go. Sorry for the extra work, but this time we (you) will fix it once and for all. There is no other BIOS update available :-)
[17:15] <bjf> apw, i'm looking at those two commits. do you feel like we should revert them. i'm leaning that way
[17:16] <apw> bjf, as they don't clearly fix the bug, i don't see how we can not revert them
[17:16] <apw> much as i hate to do it
[17:29] <bjf> apw, ack, i'm dealing with it. henrix, looks like i'm respinning utopic and lts-utopic
[17:30] <henrix> bjf: fun!  /me goes read backlog
[21:48] <binBASH> tinoco arges present?
[21:48] <arges> hi
[21:48] <binBASH> hi
[21:48] <binBASH> I'm trying to help with https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1318551
[21:49] <binBASH> So I've read now at https://wiki.ubuntu.com/Kernel/CrashdumpRecipe how to produce a dump
[21:49] <arges> yea I've just started to look at the bug, tinoco has done the most research at this point
[21:50] <binBASH> cat /sys/kernel/kexec_crash_loaded 
[21:50] <binBASH> 1
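The readiness check binBASH runs above can be wrapped in a small script, following the CrashdumpRecipe wiki page. This is only a sketch: the sysfs path is the standard one, and the flag-file parameter exists purely so the function can be exercised against a fake file on a machine without kdump set up.

```shell
# check_kdump: report whether a crash (kexec) kernel is loaded and
# ready to capture a dump. The flag file is parameterised so the
# function can be tried against a fake file; the default is the
# standard sysfs location.
check_kdump() {
    flag="${1:-/sys/kernel/kexec_crash_loaded}"
    if [ -r "$flag" ] && [ "$(cat "$flag")" = "1" ]; then
        echo "crash kernel loaded"
        return 0
    fi
    echo "no crash kernel loaded"
    return 1
}

# On a box with linux-crashdump set up this prints "crash kernel loaded".
check_kdump || true
```

If it reports no crash kernel, install linux-crashdump and reboot before trying to reproduce the panic.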
[21:50] <binBASH> the question now would be how to reproduce that bug :)
[21:50] <tinoco> binBASH: keeping machine idle
[21:50] <binBASH> as it occurs for me mostly at boot
[21:50] <tinoco> :(
[21:50] <tinoco> binBASH: if you are using intel_idle (and you are, by default probably)
[21:51] <tinoco> and HP proliant servers
[21:51] <tinoco> having CPU idle will trigger the problem
[21:52] <tinoco> binBASH: if you are trying to reproduce this
[21:52] <binBASH> does governor matter?
[21:52] <tinoco> binBASH: make sure to test kdump tool before
[21:52] <binBASH> Because I'm using powersave currently
[21:52] <tinoco> binBASH: keeping CPU idle will trigger C-states to go lower and lower
[21:52] <tinoco> P-states will go lower on C1E state.. 
[21:52] <tinoco> (frequency governor)
[21:53] <tinoco> C-states will shutdown parts of the core
[21:53] <tinoco> and that is where the problem is 
[21:53] <binBASH> Ok, so I should probably test this without running X :)
[21:54] <binBASH> tinoco: thing is I saw it also at running servers in my datacenter. But mostly directly during bootup :)
[21:54] <tinoco> binBASH: we might be facing 2 separate things
[21:54] <tinoco> on the same case.. i'll split them if that's the case
[21:55] <tinoco> now im focused on proliant 360/380 panics due to NMI being triggered 
[21:55] <binBASH> Probably, but the question for me is, how to get a crashdump when it crashes at boot. Dunno if this is possible :)
[21:55] <tinoco> (original case description)
[21:55] <tinoco> binBASH: can't u even boot ?
[21:55] <binBASH> yup
[21:56] <arges> binBASH: it depends on when at 'boot' it crashes. I think if you can use serial consoles and add 'debug' to the grub options it may give us more information
[21:57] <tinoco> binBASH: curious though.. this bug is intermittent and usually does not happen @ boot
[21:57] <tinoco> since all cpus are going to be powered on and on C0 or C1E state
[21:58] <tinoco> binBASH: you can use the workaround i provided, to be started as an init script
[21:58] <tinoco> (keeping all cpus under C1E max state)
[21:58] <tinoco> and then.. turn the workaround off
[21:58] <tinoco> and wait for the dump
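tinoco's workaround script itself isn't quoted in the log, but the usual runtime mechanism for capping C-states is the per-state `disable` knob under cpuidle in sysfs. A hedged sketch follows: state indices vary by CPU, so check `/sys/devices/system/cpu/cpu0/cpuidle/state*/name` to see which index is C1E on your hardware, and note the sysfs-root parameter is only there so the function can be tried on a fake tree without root.

```shell
# cap_cstates: disable every cpuidle state deeper than the given state
# index, for all CPUs, by writing 1 to the state's "disable" attribute
# (1 = state disabled, 0 = state allowed). The sysfs root defaults to
# the real /sys but can point at a fake tree for testing.
cap_cstates() {
    max_state="$1"
    sysroot="${2:-/sys}"
    for cpu in "$sysroot"/devices/system/cpu/cpu[0-9]*; do
        for state in "$cpu"/cpuidle/state[0-9]*; do
            idx="${state##*state}"
            if [ "$idx" -gt "$max_state" ]; then
                echo 1 > "$state/disable"   # deeper than the cap: off
            else
                echo 0 > "$state/disable"   # at or below the cap: on
            fi
        done
    done
}
```

Run it (as root) with the C1E index to apply the workaround, then re-allow all states and wait for the dump, as tinoco describes.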
[22:03] <binBASH> Ok, I will look into this the next days. If I can manage to get a trace/dump :)
[22:04] <binBASH> The funny thing is tinoco, if it occurs after boot, probably after some days. There is a message output to the screen every few seconds
[22:04] <binBASH> with some stacktrace :)
[22:06] <binBASH> The boot crashes are more like russian roulette. But with only one bullet taken out :)
[22:06] <tinoco> binBASH: that's what NMIs are all about
[22:06] <tinoco> and CPU general faults
[22:06] <tinoco> if you have a double/triple fault
[22:06] <tinoco> you have a panic
[22:06] <tinoco> binBASH: do you have a stack trace example
[22:06] <tinoco> to show me ?
[22:06] <tinoco> binBASH: i'd like to make sure you are not suffering from the x2apic bug also
[22:07] <tinoco> similar stack trace.. 
[22:07] <binBASH> Not yet, because I don't know how to get it :)
[22:07] <binBASH> I could take camera and make pic :D
[22:07] <tinoco> if you get the beginning of stack trace
[22:08] <tinoco> works for me :)
[22:08] <tinoco> if the stack trace is huge.. you are getting the first lines.. not the latest frames (the one i need)
[22:09] <binBASH> Could make movie as well :)
[22:10] <tinoco> binBASH: lol
[22:10] <binBASH> If you have better idea...
[22:11] <tinoco> binBASH: are u having this on a proliant ?
[22:11] <tinoco> OR on a gigabyte based motherboard ?
[22:11] <tinoco> cause i really think your case is different
[22:11] <tinoco> (we talked before, right ?)
[22:11] <binBASH> I have it on multiple boards
[22:11] <binBASH> not only Gigabyte
[22:11] <binBASH> Wait, I look what is the other
[22:12] <tinoco> binBASH: ALL your machines are worked around by the use of "noautogroup" ?
[22:13] <tinoco> binBASH: can i ask you to open a different bug (if not proliant servers)
[22:13] <tinoco> and tell me the bug #
[22:13] <tinoco> ?
[22:13] <binBASH> Intel S2600CP
[22:13] <tinoco> binBASH: i really think we are dealing with different cases on this bug
[22:13] <binBASH> is the other one
[22:14] <tinoco> binBASH: this way you can open a new bug and attach the core file on it
[22:14] <tinoco> for me to investigate in parallel
[22:14] <arges> binBASH: you can use from the Ubuntu machine 'ubuntu-bug linux' to gather relevant information
[22:15] <binBASH> tinoco: the problem is, I don't know how to get the core file :D
[22:16] <tinoco> binBASH: just a sec
[22:16] <tinoco> i'll paste it to you
[22:16] <tinoco> just made an example
[22:17] <tinoco> binBASH: http://paste.ubuntu.com/9336310/
[22:17] <tinoco> an example for precise..
[22:17] <tinoco> but it's pretty similar if not the same for trusty
[22:18] <tinoco> binBASH: could you open a new bug
[22:18] <tinoco> keep a good description on whats happening
[22:18] <tinoco> and attach the /var/crash/* after the dump was created ?
[22:18] <binBASH> sure
[22:18] <tinoco> then i'll assign myself to it
[22:18] <tinoco> just so i can keep cases separate
[22:19] <binBASH> the problem will be probably, that it occurs before file systems are mounted :)
[22:19] <binBASH> but we'll see...
[22:19] <binBASH> because tinoco I have /var/crash/* stuff
[22:19] <binBASH> but not from those boot crashes
[22:20] <tinoco> binBASH: checking if noautogroup can be changed online somehow
[22:21] <tinoco> inaddy@workstation:~$ sudo sysctl -a | grep autogrou
[22:21] <tinoco> kernel.sched_autogroup_enabled = 1
[22:21] <tinoco> you can boot with "noautogroup"
[22:21] <tinoco> and enable it online
[22:21] <tinoco> so cores are generated (for your case)
[22:22] <tinoco> it might work
[22:22] <tinoco> this commit: https://lkml.org/lkml/2011/2/20/10 enabled it to be a runtime flag
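Spelled out, the sequence tinoco proposes looks like the following configuration sketch. The grub edit is the usual Ubuntu way to add a boot parameter, and the sysctl knob is the one shown in the session above; treat the exact steps as an assumption based on that discussion, not a tested recipe.

```shell
# 1) Boot with autogrouping disabled (the known workaround state):
#    add "noautogroup" to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
#    then: sudo update-grub && sudo reboot

# 2) Once the system is fully up (filesystems mounted, kdump loaded),
#    re-enable autogrouping at runtime so the crash can trigger:
sudo sysctl -w kernel.sched_autogroup_enabled=1

# 3) Check the current value at any time:
sysctl kernel.sched_autogroup_enabled

# To persist the runtime value across boots (what binBASH did):
echo 'kernel.sched_autogroup_enabled = 1' | sudo tee -a /etc/sysctl.conf
```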
[22:22] <binBASH> ok, I've enabled it now
[22:22] <binBASH> though no crash yet :)
[22:22] <tinoco> while true; do ps -ef; sleep 1; done
[22:23] <tinoco> :o) to create some work
[22:24] <binBASH> let's hope it will trigger anything
[22:38] <tinoco> binBASH: i'm leaving now. please let me know when you open the new bug
[22:38] <tinoco> and when/if you could provide core dumps.. 
[22:38] <tinoco> in your case i think core dumps are going to be needed (not only pictures :()
[22:38] <tinoco> tks ;)
[22:38] <binBASH> :-)
[22:39] <binBASH> I really would like to, but I think no way when it crashes during boot and no fs is initialized :)
[22:39] <tinoco> lets see if enabling this runtime works
[22:40] <tinoco> we might trigger some "logic" after some time
[22:40] <tinoco> and for some reason it is being triggered @ the boot
[22:40] <binBASH> Yeah, I've added it now to sysctl.conf so it will be enabled by default here
[22:40] <tinoco> lets see..
[22:40] <tinoco> good
[22:40] <tinoco> i'll be back tomorrow
[22:40] <tinoco> let me know when you've opened the other bug
[22:40] <tinoco> so i can assign myself to it
[22:40] <tinoco> ;)
[22:40] <binBASH> Yup, I will
[22:40] <tinoco> cya guys .. bb tomorrow
[22:40] <binBASH> tinoco: I mean, I'm fine with that noautogroup
[22:40] <binBASH> if it has no negative result.
[22:41] <tinoco> binBASH: yep.. i would like to investigate this
[22:41] <tinoco> if you can help us on reproducing/opening the bug
[22:41] <tinoco> it would be awesome
[22:41] <tinoco> since others can be facing this also
[22:41] <tinoco> and we don't know the depth of this
[22:41] <tinoco> ;)
[22:42] <binBASH> and the question is, why it appeared suddenly with the 3.8 kernel :D
[22:42] <binBASH> anyways, sleep well tinoco and thx for additional hints
[22:42] <tinoco> binBASH: yep.. this is even better in case i have to bisect something for you
[22:42] <tinoco> if you have a "good" vs "no good"
[22:42] <tinoco> i can provide bisection for you to test
[22:42] <tinoco> and give me feedback
[22:43] <tinoco> (maybe 10 kernels to be tested ? :o)
[22:43] <binBASH> would be np
[22:43] <tinoco> great, lets do this then.. if you can't get a core
[22:43] <tinoco> file the bug and let me know
[22:43] <tinoco> i can start a bisection for you
[22:43] <tinoco> and you test the kernels i generate
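The bisection tinoco offers follows the standard git bisect flow: mark a known-good and a known-bad release, test the midpoint, and repeat, which is why roughly 10 test kernels can cover a whole release window. Below is a sketch of the mechanics as a helper function (the function name is hypothetical; for the kernel bug here the per-revision check would be a manual build-boot-idle cycle rather than a command).

```shell
# find_first_bad: automated git bisect between a known-good and a
# known-bad revision. $3 is a per-revision check command run in the
# working tree (exit 0 = good). Prints the hash of the first bad commit.
find_first_bad() {
    good="$1"; bad="$2"; check="$3"
    git bisect start "$bad" "$good" >/dev/null 2>&1
    git bisect run sh -c "$check" >/dev/null 2>&1
    git rev-parse refs/bisect/bad      # bisect leaves this at the culprit
    git bisect reset >/dev/null 2>&1
}
```

For the bug discussed here, with the panics first seen on 3.8, the window would be something like `git bisect start v3.8 v3.7` in a mainline tree, with binBASH booting each suggested build and reporting good/bad back.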
[22:43] <binBASH> Ok, will do it tomorrow evening
[22:43] <tinoco> great ;) 
[22:43] <tinoco> tks binBASH, talk to you tomorrow then
[22:43] <binBASH> sleep well, bye