[11:39] hi again, I'm wondering if it should be more clear that "cloud-init clean -l" ALSO cleans the cache, not *just* the logs, as indicated in the help output '-l, --logs Remove cloud-init logs.'
[12:00] aw-: clean does that regardless
[12:00] removing the cache is the base function
[12:01] so, '-l, --logs' should be 'Also remove cloud-init logs.'
[14:45] That sounds like a good help text improvement to me.
[14:46] aw-: What is "unbearably slow" in your use case?
[14:46] (If it's mostly Python startup time, then there's a limited amount we can do about it, unfortunately.)
[14:53] Odd_Bloke: yes sorry, I tested other things and it turns out it's only slow on my computer - due to Python startup time
[14:53] aw-: So it is acceptable in your production environment?
[14:55] haven't gotten that far yet ;) but I tested on a t2.medium ec2 instance and it was fine
[14:56] OK, good to hear.
[15:51] 15:46 <@Odd_Bloke> (If it's mostly Python startup time, then there's a limited amount we can do about it, unfortunately.) <= we've done a ton in ioc
[15:51] "ioc"?
[16:30] Odd_Bloke: see https://github.com/bsdci/libioc/pull/184
[16:54] Heya tribaal, I chatted with the team today, related to that 2 min ec2 timeout you were seeing... Is ds-identify running on your platform during image builds? /lib/systemd/system-generators/cloud-init-generator should have run ds-identify out of the gate and limited the datasource_list to what ds-identify detected... results should be in /run/cloud-init/ds-identify.log (which should no longer list Ec2 as a viable platform on Exoscale)
[16:55] I'd expect to have seen ds-identify report "Found single datasource: Exoscale"
[16:56] if not, then we may have to tweak ds-identify to make sure Ec2 isn't a potential match for Exoscale platforms.
[16:56] though I expect ds-identify is just not run for some reason
[17:10] rharper: I think it's your triage day today; I'm following up on https://bugs.launchpad.net/cloud-init/+bug/1855430 ATM.
[17:10] Ubuntu bug 1855430 in cloud-init "Unclear documentation how a complete minimal working configuration could look like for Ubuntu and Apache CloudStack" [Undecided,New]
[17:10] ok
[17:34] blackboxsw: yeah so today I did run through the SRU and couldn't really reproduce what I was seeing last time, so it might be some PEBCAK, or some yet undiscovered weirdness in our build system indeed
[17:35] blackboxsw: so, related to the SRU validation: I did successfully run cloud-init from -proposed on our platform. I'm not sure I should tag with verification-done though. Would you like me to put just a comment on the SRU bug or something?
[17:36] hi all, BTW :P
[17:36] tribaal: o/
[17:36] a comment is good
[17:36] ack, I'll do that now
[17:36] we've got quite a bit more to complete for the SRU
[17:36] yeah I bet
[17:37] thanks for the help!
[17:37] I've not run my soon-to-be sru-verification template, but it should be available for next time :)
[17:37] rharper: welcome!
[17:44] thanks tribaal! yeah, comment is good. thanks again for the test run ++ the verification template work
[17:45] no worries, I'll finish that ASAP - but right now our current template situation needs some fixing for that to be smooth sailing (we don't have an official Disco template for some reason - that needs to happen)
[18:19] hi guys. you saved my a** last week, I'm asking you again: https://pastebin.com/Sev7HE0S that user-data config 'was working before', but now it doesn't apply. what am I missing? let me know if you need more log contents
[18:23] StucKman, have you passed your YAML through a YAML validator?
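A quick local check along those lines might look like the sketch below; it is a minimal example using PyYAML rather than cloud-init's own schema validation, and the 'user-data.yaml' default path is only a placeholder:

    # Minimal user-data sanity check: the file should be valid YAML and should
    # start with a '#cloud-config' header, or cloud-init will not treat it as
    # cloud-config. Requires PyYAML; the default path below is hypothetical.
    import sys
    import yaml

    path = sys.argv[1] if len(sys.argv) > 1 else "user-data.yaml"
    with open(path) as handle:
        text = handle.read()

    if not text.startswith("#cloud-config"):
        print("warning: first line is not '#cloud-config'")

    try:
        config = yaml.safe_load(text)
    except yaml.YAMLError as err:
        sys.exit("invalid YAML: %s" % err)

    print("parsed top-level keys:", sorted(config or {}))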
[18:24] yeah, I just fixed that, let me try with the new version
[18:29] oh, so the YAML I have been writing for ansible is off-spec :(
[18:32] * blackboxsw grabs Oracle for manual SRU validation
[18:35] ok, now I only get errors about lines too long
[18:37] 2019-12-09 18:36:34,274 - __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'runcmd:...'
[18:43] so it doesn't accept yaml anymore?
[18:46] I just added the #cloud-config first line, same thing
[18:50] ok, now it doesn't complain at all, but the config does not apply either
[18:51] ok, running clean and init again worked
[18:55] StucKman: So you're all good?
[18:57] meena: Thanks for the pointer to the ioc stuff. Our medium-term plan is https://github.com/canonical/cloud-init/pull/48, so that we only have to pay start-up costs once.
[18:58] But improving the start-up time of that one instantiation would of course still be valuable!
[18:58] someone has to pay the call to getrandom() once for Python; and it's likely going to be cloud-init on first boot
[19:01] Odd_Bloke: yeah, thanks
[19:02] :)
[19:49] blackboxsw: rharper: powersj: I just wrote up a proposal to modify how we do manual SRU verification a little: https://github.com/cloud-init/ubuntu-sru/issues/69
[19:49] ok
[19:56] +1 Odd_Bloke rharper I'm game for that SRU testing simplicity
[20:26] hrm, what's the best way for folks to find out about new cloud-init SRUs that need validation? I can't actually use Launchpad searches to find the verification-needed tag on them (and people would need to query both the Ubuntu package and potentially the cloud-init upstream project to see when a bug is queued for SRU verification). I'm wondering if the response we should be giving to community members who want to be aware of pending cloud-init SRUs is to have them pull cloud-init from -proposed and, if there is a delta, then they can attempt to test.
[20:27] rharper: Odd_Bloke powersj ^ what should our response be for folks interested in keeping up to date (and participating in SRU verification)?
[20:27] this was a question from one of the datasource authors I pinged in email about this SRU.
[20:28] I was thinking about telling them to just camp on the -proposed pocket, but not sure if there is a better solution
[20:28] blackboxsw: I wonder if, in the creation of the SRU uploads, new-upstream-snapshot filters out the list of bugs being fixed, right?
[20:28] maybe a 'subscribe' button from the pending-sru.html page for a certain package :)
[20:28] could we use that to tag/link to the bug?
[20:29] original bug authors would get an update (could just be a comment pointing to an RTD page on cloud-init SRU testing)?
[20:29] or an email/Discourse page
[20:30] rharper: yeah that works for bugs that are included in this release, and I think that makes sense. This particular case was someone interested in validating every SRU published to ensure their datasource didn't regress
[20:30] so I was trying to figure out if there is a good notification mechanism for interested parties (other than, say, watching the mailing list?)
[20:30] That list is going to be pretty short, could we just keep it in the SRU docs somewhere and manually email them?
[20:31] Or, yeah, the ML is pretty low volume.
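On the Launchpad-search point above, a query like the one sketched below is one possible way an interested party could watch for verification-needed cloud-init bugs; this is an assumption about a workable launchpadlib approach, not an established team workflow, and the 'sru-watch' consumer name is arbitrary:

    # Sketch: list Ubuntu cloud-init bug tasks tagged 'verification-needed'
    # using launchpadlib (anonymous, read-only access).
    from launchpadlib.launchpad import Launchpad

    # 'sru-watch' is an arbitrary consumer name for this example.
    lp = Launchpad.login_anonymously("sru-watch", "production", version="devel")
    cloud_init = lp.distributions["ubuntu"].getSourcePackage(name="cloud-init")

    for task in cloud_init.searchTasks(tags=["verification-needed"]):
        print(task.bug.id, task.status, task.bug.title)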
[20:31] so two things; we've been asked to have a general SRU calendar/cadence, so it's somewhat predictable outside of regressions/hotfixes
[20:31] yeah let's just say mailing list for now (and others can pay attention as they care to)
[20:31] something like our quarterly releases, and in between expect two to three SRUs
[20:32] for notification, I would suggest the cloud-init mailing list; and we can certainly pre-notify (SRU cut date is XXX, like we do for upstream releases)
[20:32] right, that cadence would be good to better establish (we'll barely make the 4 upstream releases this year)
[20:32] as well as, once release branches are uploaded, notify the mailing list again
[20:32] If we're barely making the upstream releases on a cadence, I'm not sure the right move is to add yet more things on a cadence. :p
[20:32] heh, just need to turn the crank faster
[20:33] less vacations :)
[20:33] sooner, I think; long SRU gaps are the troublesome ones...
[20:33] then the list of verifications gets long enough that it's hard to complete it within the 7 day window
[20:35] yeah, 2 short emails (the expected SRU date, and an after-SRU call for testing) should do it. Not too heavyweight.
[20:35] I agree that leaving SRUs for too long is a pain, but I don't know that a prepublished cadence is the way to deal with that.
[20:35] might bake that email into the build-and-push script to send it out once the SRU is complete :)
[20:36] I genuinely don't believe we have the capacity to commit to another cadence, for one. I wasn't being flippant. (Well, not _just_ flippant. :p)
[20:37] at least for me, sending out a single email to the cloud-init mailing list announcing the SRU completion is way cheaper than 13 emails to separate contributors telling them that their commit is under test (and trying to remember to include specific additional DS authors that care to retest)
[20:38] And even if we did have capacity, I think we'd end up off the cadence due to "out of cycle" SRU requests on a regular enough basis that people wouldn't be able to rely on it anyway.
[20:38] Odd_Bloke: I misunderstood
[20:38] I genuinely don't believe we have the capacity to commit to another cadence, for one. (ahh you mean first of all; I agree)
[20:39] Ahhha, right.
[20:39] Yes, that comma is doing a lot of heavy lifting there. :p
[20:40] rharper: I'm also not sure we should be giving people a cut-off to get stuff in for an SRU; part of the reason for the SRU process is that we get testing in the devel release, and if people all land stuff on the day we are cutting the SRU then we lose some of that benefit.
[20:40] yes, and also I think my initial statement about barely making 4 upstream releases this year probably accounts for our team being oversubscribed as it is. There wasn't bandwidth to do a good job on an upstream cut/verification because we didn't have enough hands available. As such, the upstream cut got delayed a bit.
[20:40] we certainly have control over what lands when
[20:42] replace "two" between releases with whatever number it was; it's roughly one to two SRUs between the releases; we're _already_ doing them, and _must_ do them to get the features into the release; the request was to pencil them in on the calendar ahead of time so other teams can schedule time to help verify SRU releases on their platforms
[20:42] and when we get near an SRU event, we'll want to gate/block certain in-flight features/branches that may introduce undue risk.
[20:43] IMO, it's not signing up for anything more than what we already do; it's announcing the rough dates/weeks for when it's likely to happen
[20:44] I think that's fair. emailing when something is going to happen will give concerned devs a chance to contribute, either content that needs to go "in" or testing/verification before publication
[20:44] whatever our cadence of SRUs or upstream releases is.
[20:47] I certainly don't mind giving advance warning ("we're cutting an SRU at the end of next week").
[20:48] But we've seen over the past couple of years that we do slip on SRUs (for various reasons, some good, some not), and I don't think that having a schedule is going to change that.
[20:49] And if we can't _commit_ to a schedule, my view is that it's better to not have one.
[20:55] I'm +1 w/ emailing before/after an event (SRU or upstream release) and not promising a flat-out schedule. The high-level goal that we've advertised is 4 upstream time-based releases per year. I think that level of schedule is hard enough to hit as is. As we reduce the debt/cost of each release, we can better set a schedule for SRUs. As it is, I think we are making good steps in that we are trying to publish to ubuntu/devel more frequently and trying to get an SRU or two in per upstream release. When those SRUs fall doesn't matter to me. Also to note with SRUs, we more often have external commercial interests to squeeze a feature in, which may delay the SRU, making it a bit harder to nail down a release time. (though more frequent SRUs would cut down on that tension)
[21:35] is there any sensible way to automate the SRU testing?
[21:35] have any of you discovered any bugs through that so far?
[21:41] meena, we definitely do find bugs - the testing is valuable in preventing escapes
[21:42] this round I don't think I've seen any yet, but some questions/follow-ups for the next rounds have been pointed out
[21:42] this outlines the cloud-init specific process: https://wiki.ubuntu.com/CloudinitUpdates
[21:43] meena: yes, almost all of our manual testing can be converted to cloud_tests; however, the set of platforms that are currently integrated in cloud_tests isn't wide enough
[21:44] in some cases, we're testing edge scenarios which are not easily included in cloud_tests at this time (some things only happen on upgrade, or with custom networking/subnets, multiple NICs, or platform-specific features like Azure advanced networking); but long term, that's certainly the plan; as we get more platforms enabled in cloud_tests, we can also start requiring a bug fix to include a cloud_tests integration test case to verify the fix
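For a rough sense of what one of those manual verification steps looks like, the sketch below waits for cloud-init to finish on a freshly booted instance and scans its log for warnings or tracebacks; it is an illustrative spot check, not part of the cloud_tests framework, and the log path and keywords are assumptions about a typical run:

    # Illustrative post-boot spot check, run on the instance under test:
    # wait for cloud-init to finish, then scan its log for likely problems.
    import subprocess

    # 'cloud-init status --wait --long' blocks until cloud-init has finished
    # its boot stages and prints the final status.
    subprocess.run(["cloud-init", "status", "--wait", "--long"])

    problems = []
    with open("/var/log/cloud-init.log") as log:
        for line in log:
            if "WARNING" in line or "Traceback" in line:
                problems.append(line.rstrip())

    if problems:
        print("possible issues:")
        print("\n".join(problems))
    else:
        print("no warnings or tracebacks found in /var/log/cloud-init.log")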