=== vrubiolo1 is now known as vrubiolo | | |
mydog2 | hi | 17:52 |
---|---|---|
Odd_Bloke | o/ | 17:54 |
ananke | I'm trying to build some failsafe checks into our packer pipeline. When 'cloud-init status' is 'running', what's the exit code? | 17:55 |
ananke | error results in expected non-zero, but I want to make sure my check doesn't fail if it's still running | 17:56 |
ananke | actually, nevermind, I'll just rely on --wait | 18:01 |
ananke | "/usr/bin/cloud-init status --wait || (cat /var/log/cloud-init.log /var/log/cloud-init-output.log; exit 1)" | 18:01 |
Odd_Bloke | Yep, --wait is your best bet, I think. | 18:04 |
ananke | my problem was that we already used --wait, but if it resulted in errors we never knew what happened: packer would terminate the provisioner and kill everything | 18:05 |
ananke | so by adding the second part I will now have logs emitted to stdout if something goes wrong | 18:06 |
Odd_Bloke | You could also consider using --long, if you already capture the logs some other way. | 18:13 |
Odd_Bloke | That gives you some indication of the outcome, without emitting 100s of (mostly) irrelevant lines. | 18:14 |
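A sketch combining the two suggestions above: `--wait` to block until cloud-init finishes, `--long` for a short outcome summary, and a log dump only on failure. The log paths match ananke's one-liner; this is a packer shell-provisioner sketch rather than a definitive recipe, since the exact exit-code semantics of `cloud-init status` have varied across versions.

```sh
#!/bin/sh
# Block until cloud-init completes, then print a concise status summary;
# on a non-zero exit, emit the logs to stdout so packer captures them
# before it tears the instance down.
if ! cloud-init status --wait --long; then
    cat /var/log/cloud-init.log /var/log/cloud-init-output.log
    exit 1
fi
```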
potoftea | Hi, I'm looking for a way to load a larger cloud-init.yaml (AWS 16KB limit) onto an instance. Is there any way I can download the file during startup and then pass it to cloud-init? Maybe somebody can point me in the right direction. Thank you in advance | 18:38 |
rharper | potoftea: your user-data can use #include http://path/to/your/cloud-config | 18:44 |
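A minimal example of the `#include` user-data format rharper mentions: the first line is the directive and each subsequent line is a URL to fetch (the URL here is a placeholder). Note this only works when the config is reachable without special credentials, which turns out to be potoftea's sticking point.

```
#include
https://example.com/path/to/big-cloud-config.yaml
```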
potoftea | rharper sadly that is not an option, the download requires an auth access key/secret. But thank you for the suggestion. My plan was to somehow download it from S3, if that's possible at all | 18:51 |
Odd_Bloke | Could you configure S3 to allow "unauthed" access to a particular role, which you could then grant to your instances before launch? I'm not sure if that would affect HTTP(S) traffic though, or only S3 API traffic. | 18:54 |
potoftea | Not really; the file contains keys (certs), which require limited access. | 18:55 |
Odd_Bloke | Right, you would limit access using IAM roles. | 18:56 |
potoftea | I do have that now, but as far as I know that works only with the S3 API, not through HTTP | 18:56 |
Odd_Bloke | Hmm, bummer, OK. | 18:57 |
potoftea | I know that Ignition, before it was deprecated, solved this in a nice way: I could load data from remote S3 storage. | 18:58 |
Odd_Bloke | I was going to suggest a part-handler, but that doesn't give you a way of feeding into the rest of cloud-init's operation: https://cloudinit.readthedocs.io/en/latest/topics/format.html?highlight=%23include#part-handler | 18:59 |
potoftea | Can I call cloud-init from cloud-init? Going through the docs didn't give a clear picture of how I can execute cloud-init from the CLI and pass a config as an argument | 19:03 |
Odd_Bloke | potoftea: Oh, are you compressing your user-data? | 19:03 |
Odd_Bloke | cloud-init should detect it as gzip compressed and uncompress it before processing it. | 19:03 |
potoftea | yeah; with compression it's 24KB, and without it's around 30KB | 19:03 |
Odd_Bloke | Damn. | 19:03 |
Odd_Bloke | This is a chunky user-data file! :) | 19:04 |
potoftea | I've got a bunch of kubernetes certs =D | 19:05 |
Odd_Bloke | Aha, that would explain why compression doesn't help much | 19:07 |
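For anyone hitting the same limit, a quick way to measure what gzip buys before giving up on it; cloud-init detects gzipped user-data and uncompresses it transparently. The filenames are placeholders. As noted above, certs are high-entropy data, so the ratio will be poor.

```sh
# Compare raw vs. gzipped user-data size against the 16KB EC2 limit
gzip -9 -c cloud-init.yaml > user-data.gz
ls -l cloud-init.yaml user-data.gz
```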
rharper | potoftea: cloud-init --file /path/to/config ... so yes, you can run cloud-init from cloud-init; is there a particular config module you want to run? If so, cloud-init --file config single --name cc_module --frequency=always will make it happen | 19:07 |
potoftea | I have the modules "apt" and "write_files" | 19:09 |
rharper | you'd need to run single twice, once for each module name ...; how does this help your payload size issue? | 19:10 |
rharper | or unrelated question | 19:10 |
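Spelled out, rharper's suggestion for the two modules potoftea names would look something like this, assuming the large config has already been fetched to a local path (the path is hypothetical):

```sh
# Apply individual config modules from a locally fetched cloud-config,
# one "single" invocation per module
cloud-init --file /run/big-cloud-config.yaml single --name cc_write_files --frequency=always
cloud-init --file /run/big-cloud-config.yaml single --name cc_apt_configure --frequency=always
```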
rharper | blackboxsw: not sure if you saw, but daily-ppa focal image failed to build again with patches not applying ... | 19:10 |
Odd_Bloke | rharper: It would allow for a runcmd which fetches the large config from elsewhere but still uses cloud-init to apply it. | 19:10 |
meena | oh, nice, "kubernetes" | 19:11 |
* meena jumps ship | | 19:11 |
rharper | Odd_Bloke: I see | 19:11 |
Odd_Bloke | (And that runcmd could use s3cmd or whatever to fetch from S3, which we can't/don't do generically for #include.) | 19:12 |
rharper | Odd_Bloke: that'd be neat to do | 19:12 |
potoftea | rharper I would have 2 cloud-init configs: 1. a bare-bones one that installs awscli, downloads the config from S3, and then calls cloud-init with config 2. | 19:12 |
Odd_Bloke | Agreed, though we would need to think about how to handle the required dependencies for each object store. | 19:12 |
rharper | potoftea: yeah, got it | 19:12 |
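potoftea's two-config plan as a sketch: a bare-bones bootstrap user-data, well under the 16KB limit, that installs awscli, pulls the large config from S3, and applies it with the `single` runs shown above. The bucket and key are placeholders, and the instance needs an IAM role permitting the read (per the earlier discussion).

```yaml
#cloud-config
packages:
  - awscli
runcmd:
  - aws s3 cp s3://example-bucket/big-cloud-config.yaml /run/big-cloud-config.yaml
  - cloud-init --file /run/big-cloud-config.yaml single --name cc_write_files --frequency=always
  - cloud-init --file /run/big-cloud-config.yaml single --name cc_apt_configure --frequency=always
```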
Odd_Bloke | (And whether they're even in main, from an Ubuntu perspective.) | 19:12 |
rharper | is it in their tools snap? | 19:13 |
Odd_Bloke | I do wonder if we're also missing a generic way of letting people fetch from $wherever. | 19:13 |
rharper | other than the #include URL? | 19:13 |
Odd_Bloke | Like a part-handler, but which returns cloud-config instead of just doing some stuff. | 19:13 |
rharper | but delegating the "acquire" step to a specified tool? | 19:13 |
rharper | like #include <cmd> <input>? | 19:13 |
Odd_Bloke | I was thinking even more like a part-handler: user specifies a Python script with a "def fetch(whatever, common, params, make, sense):" and it returns a YAML string (or a parsed dict, perhaps). | 19:14 |
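None of this exists in cloud-init; it's purely a sketch of the interface being brainstormed here, with every name invented. The idea is that the user ships a small Python script, cloud-init calls its fetch hook, and the returned cloud-config feeds into normal processing:

```python
import subprocess

# Hypothetical user-supplied fetcher; neither the function name nor the
# signature is a real cloud-init API.
def fetch(source: str) -> str:
    """Return a cloud-config YAML string for cloud-init to merge and apply.

    Delegates the "acquire" step to a cloud-specific tool, here awscli
    streaming the object to stdout via "aws s3 cp <source> -".
    """
    return subprocess.check_output(["aws", "s3", "cp", source, "-"]).decode()
```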
potoftea | rharper yeah it worked. | 19:15 |
rharper | nice! | 19:16 |
Odd_Bloke | rharper: Thanks for all these reviews! | 19:16 |
Odd_Bloke | Now to clog up Travis for the rest of the day landing them. :p | 19:16 |
potoftea | Odd_Bloke it would be nice to have this kind of functionality out of the box, as to me it seems 16KB is only useful for simple configurations | 19:16 |
potoftea | Thank you for helping me with this issue. | 19:17 |
Odd_Bloke | So I do think a lot of people producing 16KB of user-data will graduate to config management (and then their user-data just needs to configure Chef/Puppet/..., so is much smaller), but if you have a lot of big files (big in the context of 16KB, at least :p) to write then I agree that it's a low limit. | 19:18 |
Odd_Bloke | And particularly if those are certs, because then compression isn't going to be very effective. | 19:18 |
rharper | Odd_Bloke: =) | 19:22 |
rharper | Odd_Bloke: I definitely agree S3 seems like the right place to put blobs, and it would be nice for cloud-init to securely fetch them from within the instance | 19:23 |
potoftea | Do you guys have feature requests? I can create one asking for S3 support, if that makes any sense | 19:27 |
Odd_Bloke | Yeah, S3 is right for EC2, but I'd definitely want us to think about how to make it cloud-agnostic (even if we leave implementing others until later). | 19:27 |
rharper | potoftea: yeah, https://bugs.launchpad.net/cloud-init/+filebug ; we'll triage it to Wishlist (that's the bucket we have for feature requests) | 19:28 |
rharper | Odd_Bloke: Yeah, object-store fetching ... I suspect most clouds have something like that | 19:28 |
rharper | though I think many of them have s3-like apis | 19:29 |
Odd_Bloke | And we also shouldn't tie S3 to EC2 tightly; plenty of places will deploy across multiple platforms but want to store objects in a single place. | 19:29 |
rharper | right, I think s3 is a fairly generic object-store api ... though I've not dealt with it in detail to know if that's accurate; just what I've seen mentioned in various places | 19:30 |
Odd_Bloke | Yeah, S3 support would buy us a decent chunk. | 19:30 |
Odd_Bloke | Ideally, I think we'd implement this as a generic API that people could implement for other data stores, and ship a concrete implementation of S3 as part of cloud-init. | 19:31 |
Odd_Bloke | So then we could allow people who want to fetch userdata from an SVN repo stored on NFS to write a Python script to do that themselves, but support 90% of people out-of-the-box. | 19:32 |
potoftea | I've created the ticket, thank you | 19:47 |
rharper | Odd_Bloke: github actions busy often? I got an error report mentioning "There was a failure in sending the provision message: Unexpected response code from remote provider InternalServerError" | 19:47 |
Odd_Bloke | Yeah, I haven't seen that before, I'm seeing it too. | 19:48 |
Odd_Bloke | https://www.githubstatus.com/incidents/zdxk6xq21405 | 19:48 |
Odd_Bloke | GitHub are having quite the week. | 19:48 |
rharper | heh | 19:49 |
rharper | maybe it's release week for them as well? | 19:49 |
Odd_Bloke | Haha, I was about to say, perhaps someone there is having a worse week than you. ;) | 19:49 |
rharper | hehe | 19:49 |
Odd_Bloke | Oh, actually, we don't require the CLA check to pass, so I can still land my branches. \o/ | 20:03 |
Odd_Bloke | (Maybe we should fix that, though let's do that after I've landed my branches. ;) | 20:03 |
=== tds1 is now known as tds | | |
Odd_Bloke | rharper: I had to push another commit to fix CI, and it's a bit of an odd one (albeit small) so I'm asking for a re-review of it: https://github.com/canonical/cloud-init/pull/322/commits/315478ba587ef0165d846d45ac0d6407a3e948b9 | 22:00 |
Odd_Bloke | (As we squash-merge, we'll also need to make sure the merge commit has that info.) | 22:00 |
rharper | Odd_Bloke: ok | 22:37 |