/srv/irclogs.ubuntu.com/2020/04/23/#cloud-init.txt

=== vrubiolo1 is now known as vrubiolo
17:52 <mydog2> hi
17:54 <Odd_Bloke> o/
17:55 <ananke> I'm trying to build some failsafe checks into our packer pipeline. When 'cloud-init status' is 'running', what's the exit code?
17:56 <ananke> error results in the expected non-zero exit code, but I want to make sure my check doesn't fail if it's still running
18:01 <ananke> actually, never mind, I'll just rely on --wait
18:01 <ananke>     "/usr/bin/cloud-init status --wait || (cat /var/log/cloud-init.log /var/log/cloud-init-output.log; exit 1)"
18:04 <Odd_Bloke> Yep, --wait is your best bet, I think.
18:05 <ananke> my problem was that we already used --wait, but if it resulted in errors we never knew what happened: packer would terminate the provisioner and kill everything
18:06 <ananke> so by adding the second part I will now have the logs emitted to stdout if something goes wrong
18:13 <Odd_Bloke> You could also consider using --long, if you already capture the logs some other way.
18:14 <Odd_Bloke> That gives you some indication of the outcome, without emitting 100s of (mostly) irrelevant lines.
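[Editor's note: the check ananke pastes above can be sketched in the context of a packer template's shell provisioner. The template fragment below is illustrative, not from the log; only the inline command itself is ananke's.]

```json
{
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "/usr/bin/cloud-init status --wait || (cat /var/log/cloud-init.log /var/log/cloud-init-output.log; exit 1)"
      ]
    }
  ]
}
```

`cloud-init status --wait` blocks until cloud-init finishes and exits non-zero on error, so the `|| (...; exit 1)` branch dumps both logs to stdout before failing the build.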
18:38 <potoftea> Hi, I'm looking for a way to load a larger cloud-init.yaml (AWS 16kb limit) into an instance. Is there any way I can download the file during startup and then pass it to cloud-init? Maybe somebody can point me in the right direction. Thank you in advance
18:44 <rharper> potoftea: your user-data can use #include http://path/to/your/cloud-config
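[Editor's note: rharper is referring to cloud-init's include format: a user-data part whose first line is `#include`, followed by one URL per line; cloud-init fetches each URL and processes the content as if it had been supplied directly. A minimal sketch, with a placeholder URL:]

```
#include
https://example.com/path/to/your/cloud-config
```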
potoftearharper sadly that is not an option, download requires auth access/secret. But thank you for suggestion. My plan was to somehow download it from S3 if that's possible at all18:51
Odd_BlokeCould you configure S3 to allow "unauthed" access to a particular role, which you could then grant to your instances before launch?  I'm not sure if that would affect HTTP(S) traffic though, or only S3 API traffic.18:54
potofteaNot really file contains keys (certs), which requires limited access.18:55
Odd_BlokeRight, you would limit access using IAM roles.18:56
potofteaI do have that now, but as far as I know that works only with S3 API, not thought HTTP18:56
Odd_BlokeHmm, bummer, OK.18:57
potofteaI know that ignition before it was deprecated , solved is this in a nice way, where I could load data from S3 remote storage.18:58
Odd_BlokeI was going to suggest a part-handler, but that doesn't give you a way of feeding into the rest of cloud-init's operation: https://cloudinit.readthedocs.io/en/latest/topics/format.html?highlight=%23include#part-handler18:59
potofteaCan I call cloud-init from cloud-init ? Going trough doc didn't gave clear picture, how can I from cli execute cloud-init and pass config as arg19:03
Odd_Blokepotoftea: Oh, are you compressing your user-data?19:03
Odd_Blokecloud-init should detect it as gzip compressed and uncompress it before processing it.19:03
potofteayeah I'm with compresion 24kb and without around 30kb19:03
Odd_BlokeDamn.19:03
Odd_BlokeThis is a chunky user-data file! :)19:04
potofteaI've bunch kubernetes certs '=D19:05
Odd_BlokeAha, that would explain why compression doesn't help much19:07
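[Editor's note: Odd_Bloke's point, that high-entropy data such as key material barely compresses while repetitive YAML boilerplate compresses very well, is easy to demonstrate. This sketch uses random bytes as a stand-in for cert/key material:]

```python
import gzip
import os

# Repetitive text (typical YAML boilerplate) compresses very well ...
yaml_like = b"write_files:\n- path: /etc/foo.conf\n" * 50
# ... while high-entropy data (standing in for key material here)
# barely shrinks at all; gzip just adds a few bytes of overhead.
random_blob = os.urandom(len(yaml_like))

print(len(yaml_like), len(gzip.compress(yaml_like)))      # large reduction
print(len(random_blob), len(gzip.compress(random_blob)))  # roughly unchanged
```

(PEM-encoded certs are base64 text rather than raw random bytes, so they compress slightly better than this worst case, but not by much, which matches potoftea's 30kb-to-24kb result.)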
19:07 <rharper> potoftea: cloud-init --file /path/to/config ... so, you can run cloud-init from cloud-init; is there a particular config module you want to run?  If so, then cloud-init --file config single --name cc_module --frequency=always will make it happen
19:09 <potoftea> I have the modules "apt" and "write_files"
19:10 <rharper> you'd need to run single twice, once for each module name ...  ;  how does this help your payload size issue ?
19:10 <rharper> or is that an unrelated question
19:10 <rharper> blackboxsw: not sure if you saw, but the daily-ppa focal image failed to build again with patches not applying ...
19:10 <Odd_Bloke> rharper: It would allow for a runcmd which fetches the large config from elsewhere but still uses cloud-init to apply it.
19:11 <meena> oh, nice, "kubernetes"
19:11 * meena jumps ship
19:11 <rharper> Odd_Bloke: I see
19:12 <Odd_Bloke> (And that runcmd could use s3cmd or whatever to fetch from S3, which we can't/don't do generically for #include.)
19:12 <rharper> Odd_Bloke: that'd be neat to do
19:12 <potoftea> rharper: I would have 2 cloud-init configs: 1. a bare-bones one that installs awscli, downloads the config from S3, and then calls cloud-init with config 2.
19:12 <Odd_Bloke> Agreed, though we would need to think about how to handle the required dependencies for each object store.
19:12 <rharper> potoftea: yeah, got it
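[Editor's note: potoftea's two-stage approach could look something like the first-stage cloud-config below. The bucket name and paths are placeholders; the `single` invocations follow rharper's description above (one run per module, as he notes), and the instance would need an IAM role granting read access to the bucket.]

```
#cloud-config
packages:
 - awscli
runcmd:
 - aws s3 cp s3://example-bucket/big-config.yaml /run/big-config.yaml
 - cloud-init --file /run/big-config.yaml single --name cc_apt_configure --frequency=always
 - cloud-init --file /run/big-config.yaml single --name cc_write_files --frequency=always
```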
19:12 <Odd_Bloke> (And whether they're even in main, from an Ubuntu perspective.)
19:13 <rharper> is it in their tools snap ?
19:13 <Odd_Bloke> I do wonder if we're also missing a generic way of letting people fetch from $wherever.
19:13 <rharper> other than the #include URL ?
19:13 <Odd_Bloke> Like a part-handler, but which returns cloud-config instead of just doing some stuff.
19:13 <rharper> but delegating the "acquire" to a specified tool ?
19:13 <rharper> like #include <cmd> <input> ?
19:14 <Odd_Bloke> I was thinking even more like a part-handler: the user specifies a Python script with a "def fetch(whatever, common, params, make, sense):" and it returns a YAML string (or a parsed dict, perhaps).
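[Editor's note: nothing like this hook exists in cloud-init; Odd_Bloke is sketching a hypothetical interface, and the signature, the `FAKE_STORE` stand-in, and the driver code below are all invented here for illustration.]

```python
# In-memory stand-in for an object store, so the sketch is runnable.
FAKE_STORE = {"bucket/big-config.yaml": "#cloud-config\npackages:\n - awscli\n"}


def fetch(scheme: str, location: str, credentials: dict) -> str:
    """Return a cloud-config YAML string fetched from `location`.

    A real user-supplied hook would talk to S3 (boto3, `aws s3 cp`, ...);
    here we read from FAKE_STORE so the example runs anywhere.
    """
    if scheme != "s3":
        raise ValueError("unsupported scheme: %s" % scheme)
    return FAKE_STORE[location]


# cloud-init would call the hook and feed the returned YAML into its
# normal user-data processing:
config = fetch("s3", "bucket/big-config.yaml", credentials={})
assert config.startswith("#cloud-config")
```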
19:15 <potoftea> rharper: yeah, it worked.
19:16 <rharper> nice!
19:16 <Odd_Bloke> rharper: Thanks for all these reviews!
19:16 <Odd_Bloke> Now to clog up Travis for the rest of the day landing them. :p
19:16 <potoftea> Odd_Bloke: it would be nice to have this kind of functionality out of the box, as to me it seems 16kb is only useful for simple configurations
19:17 <potoftea> Thank you for helping me with this issue.
19:18 <Odd_Bloke> So I do think a lot of people will graduate to config management if they would be producing 16kb of user-data (and then their user-data just needs to configure Chef/Puppet/..., so is much smaller), but if you have a lot of big files (big in the context of 16kB, at least :p) to write then I agree that it's a low limit.
19:18 <Odd_Bloke> And particularly if those are certs, because then compression isn't going to be very effective.
19:22 <rharper> Odd_Bloke: =)
19:23 <rharper> Odd_Bloke: S3 definitely seems like the right place to put blobs, and it would be nice for cloud-init to securely get them from within the instance;
19:27 <potoftea> Do you guys have feature requests? I can create one where I ask for S3 support, if that makes any sense
19:27 <Odd_Bloke> Yeah, S3 is right for EC2, but I'd definitely want us to think about how to make it cloud-agnostic (even if we leave implementation of others to later).
19:28 <rharper> potoftea: yeah, https://bugs.launchpad.net/cloud-init/+filebug ; we'll triage it to Wishlist (that's the bucket we have for feature requests)
19:28 <rharper> Odd_Bloke: Yeah, object-store fetching ... I suspect most clouds have something like that
19:29 <rharper> though I think many of them have s3-like apis
19:29 <Odd_Bloke> And we also shouldn't tie S3 to EC2 tightly; plenty of places will deploy across multiple platforms but want to store objects in a single place.
19:30 <rharper> right, I think S3 is a fairly generic object-store API ... though I've not dealt with it in enough detail to know if that's accurate; just what I've seen mentioned in various places
19:30 <Odd_Bloke> Yeah, S3 support would buy us a decent chunk.
19:31 <Odd_Bloke> Ideally, I think we'd implement this as a generic API that people could implement for other data stores, and ship a concrete implementation for S3 as part of cloud-init.
19:32 <Odd_Bloke> So then we could allow the people who want to fetch userdata from an SVN repo that's stored on NFS to write a Python script to do that themselves, but support 90% of people out-of-the-box.
19:47 <potoftea> I've created the ticket, thank you
19:47 <rharper> Odd_Bloke: are GitHub Actions busy often?  I got an error report mentioning "There was a failure in sending the provision message: Unexpected response code from remote provider InternalServerError"
19:48 <Odd_Bloke> Yeah, I haven't seen that before, I'm seeing it too.
19:48 <Odd_Bloke> https://www.githubstatus.com/incidents/zdxk6xq21405
19:48 <Odd_Bloke> GitHub are having quite the week.
19:49 <rharper> heh
19:49 <rharper> maybe it's release week for them as well ?
19:49 <Odd_Bloke> Haha, I was about to say, perhaps someone there is having a worse week than you. ;)
19:49 <rharper> hehe
20:03 <Odd_Bloke> Oh, actually, we don't require the CLA check to pass, so I can still land my branches. \o/
20:03 <Odd_Bloke> (Maybe we should fix that, though let's do that after I've landed my branches. ;)
=== tds1 is now known as tds
22:00 <Odd_Bloke> rharper: I had to push another commit to fix CI, and it's a bit of an odd one (albeit small) so I'm asking for a re-review of it: https://github.com/canonical/cloud-init/pull/322/commits/315478ba587ef0165d846d45ac0d6407a3e948b9
22:00 <Odd_Bloke> (As we squash-merge, we'll also need to make sure the merge commit has that info.)
22:37 <rharper> Odd_Bloke: ok

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!