[06:35] <FourDollars> Hi, I opened a bug https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1502772. Could you help to fix this problem?
[06:35] <ubot5`> Ubuntu bug 1502772 in linux (Ubuntu) "Linux kernel in Ubuntu doesn't provide mmc-modules udeb." [Undecided,New]
[07:29] <Guest15243> infinity
[07:30] <Guest15243> So do you want to do this?
[07:48] <Guest15243> Hi brendand
[07:52] <brendand> Guest15243, hello?
[08:08] <Guest15243> What is going on brendand?
[10:26] <apw> FourDollars, hrm, are they not built in
[11:07] <FourDollars> apw: no
[12:36] <Eduard_Munteanu> What's the relationship between versions in ubuntu/linux.git and those in ubuntu/ubuntu-${release}.git?
[12:37] <Eduard_Munteanu> e.g. 3.16.0.50.41 vs 3.16.7-ckt17
[12:42] <henrix> Eduard_Munteanu: 3.16.7-ckt17 is a stable kernel release (as in "upstream stable"); the kernels in ubuntu-${release}.git are ubuntu kernels.  stable kernel commits are applied into the ubuntu kernels
[12:45] <caribou> apw: hi, got a chance to look at my fix for the crashkernel= bug ?
[12:45] <caribou> apw: bug: #1496317
[12:45] <ubot5`> bug 1496317 in kexec-tools (Ubuntu) "kexec fails with OOM killer with the current crashkernel=128 value" [High,In progress] https://launchpad.net/bugs/1496317
[12:50] <Eduard_Munteanu> henrix, thanks, I see...
[12:58] <Eduard_Munteanu> henrix, is there a place where I can see what patches have been applied in addition to the -ckt series?
[13:04] <henrix> Eduard_Munteanu: i'm afraid the only way is to look at the release git tree and compare both
[13:05] <Eduard_Munteanu> Ah, I see.
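A hedged sketch of the comparison henrix suggests: list commits reachable from the Ubuntu release branch but not from the upstream stable tag. The branch name `master-next` and the helper itself are assumptions for illustration, not part of any documented workflow.

```python
def stable_vs_ubuntu_cmd(stable_tag, ubuntu_branch="master-next"):
    """Build a `git log` invocation that lists commits reachable from
    the Ubuntu branch but not from the upstream stable tag (the
    `A..B` revision-range syntax)."""
    return ["git", "log", "--oneline", f"{stable_tag}..{ubuntu_branch}"]

# e.g. run inside a checkout of ubuntu/ubuntu-${release}.git that has
# the upstream stable tree added as a remote and its tags fetched:
cmd = stable_vs_ubuntu_cmd("v3.16.7-ckt17")
```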
[13:29] <smb> caribou, fwiw it appears to make sense to me. Of course actual trying out will be even better. I can try to play with it 
[13:30] <caribou> smb: let me check I think it's in a ppa somewhere
[13:30] <smb> caribou, Oh I can just use that attached debdiff
[13:30] <caribou> smb: ppa:louis-bouchard/test-makedumpfile
[13:31] <caribou> smb: quicker :)
[13:31] <smb> caribou, ah ok, yeah. Well usually I find that the VM I try to use needs serious updating anyway
[13:31] <smb> :)
[13:42] <smb> caribou, some other interruption. testing might be a bit "delayed". sorry
[13:45] <caribou> smb: np
[15:42] <smoser> smb, around?
[15:42] <smoser> http://paste.ubuntu.com/12690169/
[15:50] <smoser> any other kernel people are welcome to reply.
[15:51] <smoser> the gist is : How reliable is uptime?
[15:51] <smoser> delta1 = (read_reliable_remote_clock() - read_uptime())
[15:51] <smoser> sleep some-time
[15:51] <smoser> delta2 = read_reliable_remote_clock() - read_uptime()
[15:51] <smoser> are 'delta1' and 'delta2' going to be within a reasonable value of each other?
[15:51] <smoser> 'reasonable'  here is < 2 seconds.
[16:16] <apw> smoser, how long is the some-time
[16:17] <apw> you'd expect the clocks to be reasonably stable in terms of progress, on each machine
[16:17] <apw> but if they aren't synced with anything outside there is no guarantee that the particular local clock
[16:17] <apw> ticks with any specific frequency
[16:18] <apw> the crystals they use for clocks tend to always be the same, but not necessarily accurate to what is needed for wall clock
[16:24] <smoser> apw, well, how long is probably < 2 minutes
[16:24] <smoser> most probably < 15
[16:25] <apw> i'd expect the delta to be reasonable in those cases yes
[16:26] <smoser> ok. thanks
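The check smoser pastes above can be sketched in Python. This is a sketch under assumptions: `time.monotonic()` stands in for uptime (on Linux it is CLOCK_MONOTONIC, which ticks like /proc/uptime minus suspend time), and `read_reliable_remote_clock()` is a placeholder — a real version would query NTP or a metadata service rather than the local wall clock.

```python
import time

def read_uptime():
    # uptime-like clock: monotonic seconds since an arbitrary epoch
    # (CLOCK_MONOTONIC on Linux; does not advance during suspend)
    return time.monotonic()

def read_reliable_remote_clock():
    # placeholder: a real implementation would ask a trusted remote
    # source; the local wall clock keeps this sketch self-contained
    return time.time()

delta1 = read_reliable_remote_clock() - read_uptime()
time.sleep(1)          # "sleep some-time"
delta2 = read_reliable_remote_clock() - read_uptime()

# apw's expectation: over minutes, both clocks advance at nearly the
# same rate, so the deltas agree to well under smoser's 2-second bound
drift = abs(delta2 - delta1)
```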
[16:56] <infinity> smoser: Wouldn't it be more sensible to leverage ntpdate/systemd-timed to fix the clock before cloud-init starts making assumptions?
[16:57] <infinity> smoser: Instead of essentially reimplementing ntp's drift tracking?
[16:57] <smoser> well, i'd just pitch it if it seemed wrong.
[16:58] <smoser> i think the issue with use of ntpdate or systemd-timed is there is not a guarantee of access to a time server.
[16:59] <smoser> i don't have to deal with drift per se. oauth doesn't require a perfect clock, but it's not going to allow you to use a time from last month or January of 1970.
[16:59] <infinity> smoser: No, but with no time server, the system probably won't tear the clock out from under you either.
[17:00] <infinity> smoser: ie: just running after those services would mean your clock would probably remain consistent.
[17:00] <smoser> well, at least in upstart world, those services run quite non-deterministically
[17:00] <smoser> and quite annoyingly.
[17:00] <infinity> smoser: Though, I get the "if the clock is totally busted, oauth will explode" issue.  But so will half their system.
[17:01] <infinity> Anyhow, was just an aside.  If you've argued with service ordering and decided it can't solve your issue, your proposed solution seems "sane", just weird. ;)
[17:02] <smoser> well, in upstart ntp would run on ifup and it was backgrounded
[17:02] <smoser> to make sure it didn't block anything.
[17:02] <infinity> Yeah.  In retrospect, that was probably a bug, not a feature, but we ain't fixing it now.
[17:03] <smoser> well, it's hard to block on a given interface if you have 6 interfaces and only one of them is going to get a route that would go to your ntp server
[17:04] <infinity> No, I understand why the bug/feature was implemented, it's just a bit nutty to tear out the system time at a nondeterministic point in boot.
[17:04] <smoser> yes
[17:04] <smoser> quite nutty
[17:04] <smoser> and painful.
[17:04] <smoser> sleep 2. clock backwards 1 month. wake up in 1 month.
[17:05] <infinity> Thankfully, most systems only drift a second or two on boot, not years.
[17:05] <smoser> i'm honestly not sure how it works in systemd.
[17:05] <smoser> right. but if their clock has no battery
[17:05] <infinity> Except, notably, ARM systems without a battery-backed RTC, and idiots who don't use the system->VM RTC bridge in qemu.
[17:05] <smoser> or it is just that bad and the system is off for quite a while.
[17:05] <smoser> right
[17:05] <smoser> yes. arm is the thing :)
[17:06] <smoser> and interestingly, my first experience with sucky clocks is in your favorite arch
[17:06] <infinity> I wouldn't call it my favourite. ;)
[17:06] <smoser> ppc64 systems i had would lose seconds in a day
[17:06] <infinity> Oh.  Yeah, that might be my favourite.
[17:06] <smoser> i never understood why a $4 watch from walmart keeps time to within seconds over multiple years
[17:06] <smoser> but expensive hardware can't do that.
[17:06] <infinity> PPC clocks are notoriously incorrect.  I'm not sure why.
[17:07] <infinity> Perhaps because they were always so "server-oriented" that they couldn't grasp why everyone wouldn't use ntp everywhere.
[17:07] <infinity> And, thus, didn't care.
[17:07] <infinity> While PC clocks kept improving year on year because until vaguely recently, Windows had no concept of ntp without 3rd party software.
[21:19] <leitao> arges, ogasawara: What is going to be the first SRU for 15.10? 3 weeks after the GA?
[21:20] <ogasawara> bjf: ^^
[21:20] <ogasawara> leitao: should be ~3wks following release, but I'll let bjf officially confirm his schedule
[21:20] <leitao> ogasawara, ok, thanks
[21:21] <bjf> leitao, let me look at a calendar ... one sec
[21:21] <leitao> bjf, is the calendar public?
[21:22] <bjf> leitao, it's a "count the number of weeks from today and see where that lands relative to the release" kind of operation :-)
[21:22] <leitao> bjf, it was not clear if the cycle starts counting after the GA or after the kernel freeze.
[21:22] <bjf> leitao, the two are not related at all
[21:23] <leitao> bjf, hmmm
[21:24] <bjf> leitao, the SRU cycle runs for 3 weeks and repeats forever. how a release falls within a given SRU cycle is what i'm looking for
[21:25] <bjf> leitao, the first SRU cycle after the 15.10 release should start on Mon. Nov. 9
[21:26] <leitao> bjf, right.  I am doing the math over here also.
[21:26] <leitao> thank you
[21:26] <bjf> leitao, that's only projected, if there is a delay between now and then it could slip ... but that doesn't happen often
[21:26] <leitao> so, if we miss any patch for 15.10 (kernel freeze this week), we will only have it released end of nov, correct?
[21:27] <bjf> leitao, correct
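bjf's "count the number of weeks" operation is simple enough to sketch. This is a sketch under assumptions: the 2015-10-19 cycle-start anchor below is a guess chosen so the result matches the Nov. 9 date he quotes, and the helper name is made up; 2015-10-22 is the 15.10 release date.

```python
from datetime import date, timedelta

CYCLE = timedelta(weeks=3)  # the SRU cycle runs 3 weeks, forever

def first_cycle_after(release_date, known_cycle_start):
    """Step a known cycle-start Monday forward in 3-week increments
    until it lands strictly after the release date."""
    start = known_cycle_start
    while start <= release_date:
        start += CYCLE
    return start

# 15.10 released 2015-10-22; assumed cycle anchor Mon 2015-10-19
first = first_cycle_after(date(2015, 10, 22), date(2015, 10, 19))
```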
[21:27] <leitao> thank you
[21:38] <bjf> leitao, you may be interested in the "kernel-sru-announce" mailing list
[21:39] <leitao> bjf, definitely, thank you
[23:45] <mamarley> apw: You mentioned before it might be a good idea to add a command-line arg for the intel_pstate module to force HWP on for Skylake hardware.  Since the kernel freeze is coming up, should I go ahead and write such a patch?  Any preference on what the arg should be?
[23:52] <infinity> skylake_hwp?