=== AceLan_ is now known as AceLan
[06:35] Hi, I opened a bug https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1502772. Could you help to fix this problem?
[06:35] Ubuntu bug 1502772 in linux (Ubuntu) "Linux kernel in Ubuntu doesn't provide mmc-modules udeb." [Undecided,New]
=== smb` is now known as smb
[07:29] infinity
[07:30] So do you want to do this?
[07:48] Hi brendand
[07:52] Guest15243, hello?
[08:08] What is going on brendand?
=== Guest15243 is now known as einfinity
[10:26] "We call the police" bug again.
[10:26] FourDollars, hrm, are they not built in
[10:26] Looks like it.
[10:28] safety protocol
[10:29] overactive survival mechanisms
[10:30] machinae niil complex
[10:32] machinae nihil complex
[10:32] cold winter at the 4 seasons
[10:34] biological upper respiratory infection
[10:34] tenure
[10:38] brendand: can you send a software update to my android?
[10:42] I don't like the version.
[10:45] what idiots
[10:57] I pray for you.
[10:58] To be abused.
[10:58] ab-used
[10:58] tools
[10:59] To be used.
[11:07] apw: no
[12:36] What's the relationship between versions in ubuntu/linux.git and those in ubuntu/ubuntu-${release}.git?
[12:37] e.g. 3.16.0.50.41 vs 3.16.7-ckt17
[12:42] Eduard_Munteanu: 3.16.7-ckt17 is a stable kernel release (as in "upstream stable"); the kernels in ubuntu-${release}.git are Ubuntu kernels. Stable kernel commits are applied to the Ubuntu kernels.
[12:45] apw: hi, got a chance to look at my fix for the crashkernel= bug?
[12:45] apw: bug: #1496317
[12:45] bug 1496317 in kexec-tools (Ubuntu) "kexec fails with OOM killer with the current crashkernel=128 value" [High,In progress] https://launchpad.net/bugs/1496317
[12:50] henrix, thanks, I see...
[12:58] henrix, is there a place where I can see what patches have been applied in addition to the -ckt series?
[13:04] Eduard_Munteanu: I'm afraid the only way is to look at the release git tree and compare both
[13:05] Ah, I see.
[13:29] caribou, fwiw it appears to make sense to me. Of course actually trying it out will be even better. I can try to play with it
[13:30] smb: let me check, I think it's in a PPA somewhere
[13:30] caribou, oh, I can just use that attached debdiff
[13:30] smb: ppa:louis-bouchard/test-makedumpfile
[13:31] smb: quicker :)
[13:31] caribou, ah ok, yeah. Well, usually I find that the VM I try to use needs serious updating anyway
[13:31] :)
=== Odd_Blok1 is now known as Odd_Bloke
[13:42] caribou, some other interruption. Testing might be a bit "delayed". Sorry
[13:45] smb: np
=== davmor2_ is now known as davmor2
[15:42] smb, around?
[15:42] http://paste.ubuntu.com/12690169/
[15:50] any other kernel people are welcome to reply.
[15:51] the gist is: how reliable is uptime?
[15:51] delta1 = (read_reliable_remote_clock() - read_uptime())
[15:51] sleep some-time
[15:51] delta2 = read_reliable_remote_clock() - read_uptime()
[15:51] will 'delta1' and 'delta2' be within a reasonable value of each other?
[15:51] 'reasonable' here is < 2 seconds.
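
A minimal sketch of the check smoser outlines above, assuming Python 3 on Linux. The stand-ins are illustrative only: read_uptime() reads /proc/uptime directly, and the local wall clock (time.time()) plays the role of read_reliable_remote_clock(), which only makes sense if that clock is NTP-synced; neither is claimed to be what cloud-init actually does.

    import time

    def read_uptime():
        """Seconds since boot, as reported by the kernel."""
        with open("/proc/uptime") as f:
            return float(f.read().split()[0])

    def read_reliable_remote_clock():
        """Stand-in only: a real check would consult a trusted remote time source."""
        return time.time()

    delta1 = read_reliable_remote_clock() - read_uptime()
    time.sleep(60)  # the "some-time" gap; the discussion below suggests minutes-scale intervals
    delta2 = read_reliable_remote_clock() - read_uptime()

    # If the local clock ticks at roughly the right rate, the two deltas agree;
    # the question above is whether they stay within ~2 seconds of each other.
    print("delta1=%.3f delta2=%.3f drift=%.3f" % (delta1, delta2, abs(delta2 - delta1)))
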
[16:16] smoser, how long is the some-time
[16:17] you'd expect the clocks to be reasonably stable in terms of progress, on each machine
[16:17] but if they aren't synced with anything outside, there is no guarantee that the particular local clock
[16:17] ticks with any specific frequency
[16:18] the crystals they use for clocks tend to always be the same, but not necessarily accurate to what is needed for wall clock
[16:24] apw, well, how long is probably < 2 minutes
[16:24] most probably < 15
[16:25] i'd expect the delta to be reasonable in those cases, yes
[16:26] ok. thanks
[16:56] smoser: Wouldn't it be more sensible to leverage ntpdate/systemd-timed to fix the clock before cloud-init starts making assumptions?
[16:57] smoser: Instead of essentially reimplementing ntp's drift tracking?
[16:57] well, i'd just pitch it if it seemed wrong.
[16:58] i think the issue with use of ntpdate or systemd-timed is there is not a guarantee of access to a time server.
[16:59] i don't have to deal with drift per se. oauth doesn't require a perfect clock, but it's not going to allow you to use a time from last month or January of 1970.
[16:59] smoser: No, but with no time server, the system probably won't tear the clock out from under you either.
[17:00] smoser: ie: just running after those services would mean your clock would probably remain consistent.
[17:00] well, at least in the upstart world, those services run quite non-deterministically
[17:00] and quite annoyingly.
[17:00] smoser: Though, I get the "if the clock is totally busted, oauth will explode" issue. But so will half their system.
[17:01] Anyhow, was just an aside. If you've argued with service ordering and decided it can't solve your issue, your proposed solution seems "sane", just weird. ;)
[17:02] well, in upstart ntp would run on ifup and it was backgrounded
[17:02] to make sure it didn't block anything.
[17:02] Yeah. In retrospect, that was probably a bug, not a feature, but we ain't fixing it now.
[17:03] well, it's hard to block on a given interface if you have 6 interfaces and only one of them is going to get a route that would go to your ntp server
[17:04] No, I understand why the bug/feature was implemented, it's just a bit nutty to tear out the system time at a nondeterministic point in boot.
[17:04] yes
[17:04] quite nutty
[17:04] and painful.
[17:04] sleep 2. clock backwards 1 month. wake up in 1 month.
[17:05] Thankfully, most systems only drift a second or two on boot, not years.
[17:05] i'm honestly not sure how it works in systemd.
[17:05] right. but if their clock has no battery
[17:05] Except, notably, ARM systems without a battery-backed RTC, and idiots who don't use the system->VM RTC bridge in qemu.
[17:05] or is just that bad and the system is off for quite a while.
[17:05] right
[17:05] yes. arm is the thing :)
[17:06] and interestingly, my first experience with sucky clocks was in your favorite arch
[17:06] I wouldn't call it my favourite. ;)
[17:06] ppc64 systems i had would lose seconds in a day
[17:06] Oh. Yeah, that might be my favourite.
[17:06] i never understood why a $4 watch from walmart keeps time to seconds over multiple years
[17:06] but expensive hardware can't do that.
[17:06] PPC clocks are notoriously incorrect. I'm not sure why.
[17:07] Perhaps because they were always so "server-oriented" that they couldn't grasp why everyone wouldn't use ntp everywhere.
[17:07] And, thus, didn't care.
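
As an aside to the clock-stepping discussion above, a small sketch (assuming Python 3 on Linux; not something from the conversation itself) of why interval measurements based on a monotonic clock survive ntpdate or a time service stepping the wall clock: CLOCK_REALTIME can jump backwards or forwards, while CLOCK_MONOTONIC only ever advances.

    import time

    wall_start = time.clock_gettime(time.CLOCK_REALTIME)
    mono_start = time.clock_gettime(time.CLOCK_MONOTONIC)

    time.sleep(2)  # if a time service steps the clock here, only the wall delta is disturbed

    wall_elapsed = time.clock_gettime(time.CLOCK_REALTIME) - wall_start
    mono_elapsed = time.clock_gettime(time.CLOCK_MONOTONIC) - mono_start

    # mono_elapsed is always ~2 s; wall_elapsed can be anything (even negative)
    # if the system clock was adjusted during the sleep.
    print("wall: %.3f s  monotonic: %.3f s" % (wall_elapsed, mono_elapsed))
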
[17:07] While PC clocks kept improving year on year, because until vaguely recently Windows had no concept of ntp without 3rd-party software.
[21:19] arges, ogasawara: What is going to be the first SRU for 15.10? 3 weeks after the GA?
[21:20] bjf: ^^
[21:20] leitao: should be ~3wks following release, but I'll let bjf officially confirm his schedule
[21:20] ogasawara, ok, thanks
[21:21] leitao, let me look at a calendar ... one sec
[21:21] bjf, is the calendar public?
[21:22] leitao, it's a "count the number of weeks from today and see where that lands relative to the release" kind of operation :-)
[21:22] bjf, it was not clear if the cycle starts counting after the GA or after the kernel freeze.
[21:22] leitao, the two are not related at all
[21:23] bjf, hmmm
[21:24] leitao, the SRU cycle runs for 3 weeks and repeats forever. how a release falls within a given SRU cycle is what i'm looking for
[21:25] leitao, the first SRU cycle after the 15.10 release should start on Mon. Nov. 9
[21:26] bjf, right. I am doing the math over here also.
[21:26] thank you
[21:26] leitao, that's only projected; if there is a delay between now and then it could slip ... but that doesn't happen often
[21:26] so, if we miss any patch for 15.10 (kernel freeze this week), we will only have it released at the end of Nov, correct?
[21:27] leitao, correct
[21:27] thank you
[21:38] leitao, you may be interested in the "kernel-sru-announce" mailing list
[21:39] bjf, definitely, thank you
[23:45] apw: You mentioned before that it might be a good idea to add a command-line arg for the intel_pstate module to force HWP on for Skylake hardware. Since the kernel freeze is coming up, should I go ahead and write such a patch? Any preference on what the arg should be?
[23:52] skylake_hwp?
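
A back-of-the-envelope sketch of the "count the weeks" calculation bjf describes earlier in the log, assuming Python 3. The only fixed input is the Mon 9 Nov 2015 cycle start mentioned in the conversation; the helper name next_cycle_start is made up for illustration.

    from datetime import date, timedelta

    CYCLE_LENGTH = timedelta(weeks=3)          # SRU cycles run for 3 weeks and repeat
    KNOWN_CYCLE_START = date(2015, 11, 9)      # first cycle start after the 15.10 release, per bjf

    def next_cycle_start(after: date) -> date:
        """First SRU cycle start falling on or after the given date."""
        offset_days = (after - KNOWN_CYCLE_START).days % CYCLE_LENGTH.days
        if offset_days == 0:
            return after
        return after + timedelta(days=CYCLE_LENGTH.days - offset_days)

    # 15.10 (wily) was released on 2015-10-22; the first cycle start after that:
    print(next_cycle_start(date(2015, 10, 22)))   # -> 2015-11-09
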