=== vrubiolo1 is now known as vrubiolo
[17:52] hi
[17:54] o/
[17:55] I'm trying to build some failsafe checks into our packer pipeline. When 'cloud-init status' is 'running', what's the exit code?
[17:56] An error results in the expected non-zero exit, but I want to make sure my check doesn't fail if it's still running.
[18:01] actually, never mind, I'll just rely on --wait
[18:01] "/usr/bin/cloud-init status --wait || (cat /var/log/cloud-init.log /var/log/cloud-init-output.log; exit 1)"
[18:04] Yep, --wait is your best bet, I think.
[18:05] my problem was that we already used --wait, but if it resulted in errors we never knew what happened: packer would terminate the provisioner and kill everything
[18:06] so by adding the second part I will now have the logs emitted to stdout if something goes wrong
[18:13] You could also consider using --long, if you already capture the logs some other way.
[18:14] That gives you some indication of the outcome, without emitting hundreds of (mostly) irrelevant lines.
[18:38] Hi, I'm looking for a way to load a cloud-init.yaml larger than AWS's 16 kB user-data limit onto an instance. Is there any way I can download the file during startup and then pass it to cloud-init? Maybe somebody can point me in the right direction. Thank you in advance.
[18:44] potoftea: your user-data can use #include http://path/to/your/cloud-config
[18:51] rharper: sadly that is not an option, the download requires an auth access key/secret. But thank you for the suggestion. My plan was to somehow download it from S3, if that's possible at all.
[18:54] Could you configure S3 to allow "unauthed" access to a particular role, which you could then grant to your instances before launch? I'm not sure if that would affect HTTP(S) traffic though, or only S3 API traffic.
[18:55] Not really, the file contains keys (certs), which require limited access.
[18:56] Right, you would limit access using IAM roles.
[18:56] I do have that now, but as far as I know that works only with the S3 API, not through HTTP.
[18:57] Hmm, bummer, OK.
[18:58] I know that Ignition, before it was deprecated, solved this in a nice way: I could load data from remote S3 storage.
[18:59] I was going to suggest a part-handler, but that doesn't give you a way of feeding into the rest of cloud-init's operation: https://cloudinit.readthedocs.io/en/latest/topics/format.html?highlight=%23include#part-handler
[19:03] Can I call cloud-init from cloud-init? Going through the docs didn't give me a clear picture of how to execute cloud-init from the CLI and pass a config as an argument.
[19:03] potoftea: Oh, are you compressing your user-data?
[19:03] cloud-init should detect it as gzip-compressed and uncompress it before processing it.
[19:03] yeah, with compression it's 24 kB and without it around 30 kB
[19:03] Damn.
[19:04] This is a chunky user-data file! :)
[19:05] I've got a bunch of kubernetes certs =D
[19:07] Aha, that would explain why compression doesn't help much.
[19:07] potoftea: cloud-init --file /path/to/config ... so, you can run cloud-init from cloud-init; is there a particular config module you want to run? If so, then cloud-init --file config single --name cc_module --frequency=always will make it happen.
[19:09] I have the modules "apt" and "write_files".
[19:10] you'd need to run single twice, once for each module name ...; how does this help your payload size issue?
[19:10] or is that an unrelated question?
[19:10] blackboxsw: not sure if you saw, but the daily-ppa focal image failed to build again with patches not applying ...
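A minimal sketch of the failsafe check discussed at 17:55-18:14 above, assuming a packer shell provisioner and the log paths quoted at 18:01; the --long summary step is optional (see 18:13):

    #!/bin/sh
    # Block until cloud-init finishes; --wait exits non-zero if cloud-init errored.
    if ! /usr/bin/cloud-init status --wait; then
        # Optionally print a short summary of the outcome.
        cloud-init status --long || true
        # Dump the full logs to stdout so packer captures them before tearing down.
        cat /var/log/cloud-init.log /var/log/cloud-init-output.log
        exit 1
    fi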
[19:10] rharper: It would allow for a runcmd which fetches the large config from elsewhere but still uses cloud-init to apply it.
[19:11] oh, nice, "kubernetes"
[19:11] * meena jumps ship
[19:11] Odd_Bloke: I see
[19:12] (And that runcmd could use s3cmd or whatever to fetch from S3, which we can't/don't do generically for #include.)
[19:12] Odd_Bloke: that'd be neat to do
[19:12] rharper: I would have two cloud-init configs: 1. a bare-bones one that installs awscli, downloads the config from S3, and then calls cloud-init with config 2.
[19:12] Agreed, though we would need to think about how to handle the required dependencies for each object store.
[19:12] potoftea: yeah, got it
[19:12] (And whether they're even in main, from an Ubuntu perspective.)
[19:13] is it in their tools snap?
[19:13] I do wonder if we're also missing a generic way of letting people fetch from $wherever.
[19:13] other than the #include URL?
[19:13] Like a part-handler, but which returns cloud-config instead of just doing some stuff.
[19:13] but delegating the "acquire" step to a specified tool?
[19:13] like #include?
[19:14] I was thinking even more like a part-handler: the user specifies a Python script with a "def fetch(whatever, common, params, make, sense):" and it returns a YAML string (or a parsed dict, perhaps).
[19:15] rharper: yeah, it worked.
[19:16] nice!
[19:16] rharper: Thanks for all these reviews!
[19:16] Now to clog up Travis for the rest of the day landing them. :p
[19:16] Odd_Bloke: it would be nice to have this kind of functionality out of the box, as it seems to me that 16 kB is only enough for simple configurations.
[19:17] Thank you for helping me with this issue.
[19:18] So I do think a lot of people will graduate to config management if they're producing 16 kB of user-data (and then their user-data just needs to configure Chef/Puppet/..., so it's much smaller), but if you have a lot of big files (big in the context of 16 kB, at least :p) to write then I agree that it's a low limit.
[19:18] And particularly if those are certs, because then compression isn't going to be very effective.
[19:22] Odd_Bloke: =)
[19:23] Odd_Bloke: I definitely think S3 seems like the right place to put blobs, and it would be nice for cloud-init to securely get them from within the instance.
[19:27] Do you guys take feature requests? I can create one asking for S3 support, if that makes any sense.
[19:27] Yeah, S3 is right for EC2, but I'd definitely want us to think about how to make it cloud-agnostic (even if we leave the implementation of others until later).
[19:28] potoftea: yeah, https://bugs.launchpad.net/cloud-init/+filebug ; we'll triage it to Wishlist (that's the bucket we have for feature requests)
[19:28] Odd_Bloke: Yeah, object-store fetching ... I suspect most clouds have something like that
[19:29] though I think many of them have S3-like APIs
[19:29] And we also shouldn't tie S3 to EC2 too tightly; plenty of places will deploy across multiple platforms but want to store objects in a single place.
[19:30] right, I think S3 is a fairly generic object-store API ... though I've not dealt with it in enough detail to know if that's accurate; it's just what I've seen mentioned in various places
[19:30] Yeah, S3 support would buy us a decent chunk.
[19:31] Ideally, I think we'd implement this as a generic API that people could implement for other data stores, and ship a concrete implementation for S3 as part of cloud-init.
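A sketch of the two-stage approach potoftea describes at 19:12, assuming the instance has an IAM role allowing s3:GetObject; the bucket, key, and file paths are illustrative, and the single invocations follow rharper's example at 19:07. Note the module names passed to single are cloud-init module names, which can differ from config keys (e.g. the "apt" key is handled by apt-configure); verify them against your cloud-init version:

    #cloud-config
    # Stage 1: bare-bones user-data, well under the 16 kB limit.
    packages:
      - awscli
    runcmd:
      # Fetch the full (stage 2) config from S3 using instance-role credentials.
      - aws s3 cp s3://example-bucket/full-config.yaml /run/full-config.yaml
      # Apply it module by module; 'single' runs one module at a time.
      - cloud-init --file /run/full-config.yaml single --name write_files --frequency=always
      - cloud-init --file /run/full-config.yaml single --name apt-configure --frequency=always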
[19:32] So then we could allow people who want to fetch user-data from an SVN repo that's stored on NFS to write a Python script to do that themselves, but support 90% of people out of the box.
[19:47] I've created the ticket, thank you.
[19:47] Odd_Bloke: is GitHub Actions busy often? I got an error report mentioning "There was a failure in sending the provision message: Unexpected response code from remote provider InternalServerError"
[19:48] Yeah, I hadn't seen that before, but I'm seeing it too.
[19:48] https://www.githubstatus.com/incidents/zdxk6xq21405
[19:48] GitHub are having quite the week.
[19:49] heh
[19:49] maybe it's release week for them as well?
[19:49] Haha, I was about to say, perhaps someone there is having a worse week than you. ;)
[19:49] hehe
[20:03] Oh, actually, we don't require the CLA check to pass, so I can still land my branches. \o/
[20:03] (Maybe we should fix that, though let's do that after I've landed my branches. ;)
=== tds1 is now known as tds
[22:00] rharper: I had to push another commit to fix CI, and it's a bit of an odd one (albeit small), so I'm asking for a re-review of it: https://github.com/canonical/cloud-init/pull/322/commits/315478ba587ef0165d846d45ac0d6407a3e948b9
[22:00] (As we squash-merge, we'll also need to make sure the merge commit has that info.)
[22:37] Odd_Bloke: ok
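A sketch of the hypothetical fetch-hook interface floated at 19:14 and 19:31-19:32; no such hook exists in cloud-init, and the function name, signature, boto3 dependency, and return type are all illustrative of the idea, not an actual API:

    # Hypothetical user-supplied fetch hook (not an existing cloud-init API).
    # The idea at 19:14: cloud-init would call fetch() and treat the returned
    # dict (or YAML string) as cloud-config to merge into its processing.
    import boto3
    import yaml

    def fetch(bucket: str, key: str) -> dict:
        """Fetch a cloud-config document from S3 and return it parsed.

        Credentials come from the instance's IAM role, so no secrets need
        to appear in the size-limited user-data itself.
        """
        body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"].read()
        return yaml.safe_load(body)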