=== kuraudo1 is now known as kuraudo | ||
faiqus | holmanb: i think a restart is necessary. or at the very least we can restart the config and final modes? | 15:35 |
=== dbungert1 is now known as dbungert | ||
holmanb | connor_k: yeah maybe - I didn't bother to ask why you are using vendor-data | 18:13 |
holmanb | faiqus: shouldn't be | 18:14 |
holmanb | faiqus: are you going to support the code that you're using? | 18:15 |
faiqus | holmanb: yeah it's going to be supported by CAPA maintainers/open source community | 18:16 |
holmanb | are you a CAPA maintainer? | 18:16 |
faiqus | reviewer | 18:17 |
holmanb | ah | 18:17 |
faiqus | do you think this approach makes any sense? throwing out/ignoring the idea of restarting it or not. | 18:18 |
holmanb | faiqus: I mean, I wrote the code. What do you think I'm going to say :P | 18:19 |
holmanb | faiqus: restarting is _really_ undesirable for a number of reasons | 18:20 |
holmanb | faiqus: I just don't have the ability to test it | 18:20 |
faiqus | haha, no i meant having this two-part approach where the user-data starts off as a script that fetches some code and replaces the old user-data | 18:21 |
holmanb | ahhh, I see | 18:21 |
holmanb | uhh | 18:21 |
faiqus | that's the part that is confusing me. i think this stuff needs to be executed as something else and not necessarily as user-data? Maybe a boothook. I'm not sure though. | 18:22 |
holmanb | it would be much simpler to just write the whole datasource in Python - this approach was just a hack to prove that it is possible to do without the restart | 18:23 |
holmanb | and the person I was working with tested it | 18:23 |
holmanb | and it apparently worked - maybe something changed or this is an old version | 18:23 |
faiqus | an old version of your data source? | 18:24 |
holmanb | yeah, idk | 18:24 |
holmanb | I don't know if I ever put it in version control | 18:25 |
faiqus | maybe - i tried the same code richard had and didn't have any luck before experimenting on my own. | 18:25 |
faiqus | your code was in version control - let me find it | 18:26 |
holmanb | faiqus: I can walk you through some ideas or give you some changes to gather more info to debug the issue | 18:26 |
holmanb | but I'm a bit busy at the moment, I probably will not have time to dig into it today | 18:26 |
faiqus | https://github.com/canonical/cloud-init/commit/f2796cd8260b8f3f463aecdd19feb6524182aaf3 | 18:26 |
-ubottu:#cloud-init- Commit f2796cd in canonical/cloud-init "feat: add POC datasource for Ec2 / Kubernetes" | 18:26 | |
faiqus | no sweat. thanks for your support. maybe i can make the whole thing work in python. will credentials for AWS be present at datasource time? | 18:27 |
holmanb | faiqus: apparently, yes - the script that gets pulled down has them | 18:28 |
holmanb | faiqus: does the CAPA project have the ability to modify that script? | 18:28 |
faiqus | sure does | 18:28 |
faiqus | if you have an idea for a direction you want to go in please let me know and i can explore it. thanks again for all the guidance, you're helping us get out of a crazy hole | 18:29 |
holmanb | faiqus: if you can modify what gets exposed by the IMDS server, what I'd suggest is to put the credentials that are exposed in that script into a configuration -> json / yaml / whatever | 18:30 |
faiqus | and have the datasource read from that source to fetch the "real" cloud-init user data? | 18:32 |
holmanb | cloud-init uses the python requests library | 18:33 |
holmanb | just query the IMDS, grab the configuration file (which has the credentials and whatever else you need), convert to a dict (json.loads() / yaml.safe_load() / whatever) and then implement the rest of that bash script in python | 18:33 |
holmanb | faiqus: yeah, basically | 18:34 |
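The fetch-and-parse flow described above might look roughly like this. The field names and sample document are invented for illustration, and a real datasource would pull the document from the IMDS via cloud-init's url_helper (which wraps requests) rather than hard-coding a string:

```python
import json

# Stand-in for the body an IMDS query would return. In a real
# datasource you would fetch it, e.g. with cloud-init's url_helper:
#   resp = url_helper.readurl("http://169.254.169.254/...")
# (endpoint path and field names here are hypothetical)
raw = '{"aws_access_key_id": "AKIA...", "cluster_name": "demo"}'

# Convert the fetched document to a dict (yaml.safe_load() works the
# same way for a YAML document), then implement the rest of what the
# bash script did in Python.
cfg = json.loads(raw)
print(cfg["cluster_name"])  # → demo
```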
faiqus | ok that sounds good. i think all of these instances use some sort of instance principal authentication so maybe the credentials will simply be there | 18:35 |
holmanb | calling LOG.info() sends logs to /var/log/cloud-init.log by default (if you don't have a broken logging config) | 18:36 |
holmanb | and there is a helper called log_util.multi_log() if you need stuff to go to the console too (which ends up in /var/log/cloud-init-output.log) | 18:36 |
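As a rough stdlib-only sketch of the logging pattern mentioned above (this is not cloud-init's actual wiring; in a real datasource the handlers are already configured for you and you would just call LOG.info(), with log_util.multi_log() for console output):

```python
import logging
import sys

# One logger, one handler per destination: cloud-init's config routes
# LOG.info() to /var/log/cloud-init.log, and multi_log() additionally
# hits the console (captured in /var/log/cloud-init-output.log).
# Here a stdout StreamHandler stands in for both.
log = logging.getLogger("demo.datasource")
log.setLevel(logging.INFO)
log.addHandler(logging.StreamHandler(sys.stdout))

log.info("fetched instance configuration")  # prints the message
```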
holmanb | > thanks again for all the guidance you're helping us get out of a crazy hole | 18:37 |
holmanb | happy to help | 18:37 |
holmanb | like I said, I don't have a ton of time to contribute but I'd like to see it get resolved and to see cloud-init used in a more sustainable way | 18:38 |
holmanb | faiqus: one more thing | 18:41 |
holmanb | the semantics of the datasource file are non-obvious, but the tl;dr is that when your code runs is defined by the datasources = [...] list | 18:42 |
holmanb | so if you have e.g. | 18:44 |
holmanb | datasources = [(DataSourceFooLocal, (sources.DEP_FILESYSTEM,)), (DataSourceFooNetwork, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK))] | 18:44 |
holmanb | then DataSourceFooLocal will be used during the "local" stage (cloud-init-local.service) | 18:44 |
holmanb | and DataSourceFooNetwork will be used during the "network" stage (cloud-init-network.service) | 18:44 |
holmanb | faiqus: network isn't guaranteed to be available during local stage, but on some platforms a dhcp client is used to bring up a temp network to get the configuration | 18:46 |
holmanb | for more reading: https://docs.cloud-init.io/en/latest/explanation/boot.html | 18:46 |
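A skeleton of the datasource module shape described above might look like this. The Foo class names are the hypothetical ones from the example, and the DEP_* values are stand-ins for the real constants so the sketch is self-contained; in an actual module they come from cloudinit.sources:

```python
# Stand-ins for sources.DEP_FILESYSTEM / sources.DEP_NETWORK.
DEP_FILESYSTEM = "FILESYSTEM"
DEP_NETWORK = "NETWORK"


class DataSourceFooLocal:
    """Runs in the "local" stage (cloud-init-local.service);
    network is not guaranteed to be up yet."""


class DataSourceFooNetwork:
    """Runs in the "network" stage (cloud-init-network.service)."""


# cloud-init reads this list to decide which class runs in which
# stage, keyed on the dependencies each entry declares.
datasources = [
    (DataSourceFooLocal, (DEP_FILESYSTEM,)),
    (DataSourceFooNetwork, (DEP_FILESYSTEM, DEP_NETWORK)),
]
print(len(datasources))  # → 2
```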
dean | Hello! I'm having a problem with cloud-init 22.4.2 on a Debian AArch64 system. I'm pulling my configs from a self-hosted NoCloud provider, both user-data and vendor-data as multi-part MIME. Both configs have files with write_files entries and I've included the merge_how hack in them. However the write_file entries in my vendor-data files are being overridden. How do I fix this? | 20:07 |
dean | Any help would be greatly appreciated. | 20:07 |
minimal | dean: "NoCloud provider", "multi-part MIME" - not sure exactly what you mean | 20:09 |
minimal | are you using an ISO/filesystem to provide the config or HTTP/HTTPS? | 20:09 |
dean | I'm using an HTTP data source. | 20:09 |
minimal | so then each of the configs is pulled separately | 20:09 |
dean | yes | 20:09 |
minimal | so I don't see where the multi-part MIME comes in | 20:09 |
minimal | if they're separately pulled then there is no multi-part | 20:10 |
dean | The user and vendor data configs are compiled from sets of #cloud-config files into multi-part archives. | 20:11 |
dean | https://cloudinit.readthedocs.io/en/latest/explanation/format.html#mime-multi-part-archive | 20:12 |
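For reference, a multi-part archive of the kind dean describes can be assembled with the stdlib email machinery (cloud-init also ships a `make-mime` helper subcommand for this); the part names and #cloud-config contents below are illustrative:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

combined = MIMEMultipart()
for name, content in [
    ("base.yaml", "#cloud-config\nwrite_files:\n- path: /etc/a\n  content: a\n"),
    ("extra.yaml", "#cloud-config\nwrite_files:\n- path: /etc/b\n  content: b\n"),
]:
    # "cloud-config" subtype yields a text/cloud-config part, which is
    # how cloud-init recognizes each piece of the archive.
    part = MIMEText(content, "cloud-config", "utf-8")
    part.add_header("Content-Disposition", f'attachment; filename="{name}"')
    combined.attach(part)

print(combined.get_content_type())  # → multipart/mixed
```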
minimal | ok, haven't used that myself. So being overriden by what? | 20:14 |
dean | The write_files entries in the user-data files are being kept. The write_files entries in the vendor-data files are being dropped. | 20:17 |
dean | In theory, the merge_how hack is supposed to fix that. | 20:17 |
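For context, the merge_how directive dean mentions is a #cloud-config key that controls how cloud-init merges dicts and lists across parts; a commonly cited form (check the merging docs for your version, since behaviour has varied across releases) looks like:

```yaml
#cloud-config
merge_how:
 - name: list
   settings: [append]
 - name: dict
   settings: [no_replace, recurse_list]
```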
minimal | not familiar with the merge_how hack, its behaviour might differ between cloud-init versions. 22.4.2 is not exactly recent, is there no more recent c-i version available to use? | 20:19 |
dean | Unfortunately no. | 20:20 |
minimal | I'd suggest you open a Github Issue and provide logs etc | 20:20 |
dean | Alright. | 20:20 |
faiqus | writing a datasource that uses aws APIs seems... strange. does anyone have examples of datasources that reach out to services on the local cloud provider? i don't really know how i'm supposed to import the aws sdk either | 21:53 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!