[11:14] <amansi26> Hi, I have a question: when we pass network details through openstack, how are they handled by cloud-init?
[11:15] <amansi26> Like, where does that information get processed? I am using v19.1 and am not able to figure out the exact place where it gets processed
[12:05] <sackrebutz> hey again - i have yum_repos enabled but it seems it’s silently ignored by cloud-init when run on the server. Config: https://dpaste.org/xCEg // Log: https://dpaste.org/MafQ
[12:06] <sackrebutz> Also no file is created within /etc/yum.repos.d/
[12:15] <felfert_> sackrebutz: Have a look in /var/lib/cloud/instance/
[12:16] <felfert_> In there are several files which represent your user-data and how it is recognized by cloud-init
[12:18] <felfert_> Your config looks good to me, but there might be yaml errors somewhere else. If there are errors, then /var/lib/cloud/instance/user-data.txt.i and /var/lib/cloud/instance/user-data.txt show the difference between the raw data and how cloud-init has interpreted it
[12:20] <sackrebutz> felfert_: Thanks - i can see that in both files, there is the yum_repos config and it’s just as i have put it into the config
[12:23] <sackrebutz> is there any other log that would tell me what it’s doing ?
[12:24] <sackrebutz> at some point, cloud-init has to make a decision whether it takes the yum_repos config into account or not, i guess?
[12:26] <felfert_> It does not specifically mention that in the logs
[12:36] <sackrebutz> I now copy-pasted the example from https://cloudinit.readthedocs.io/en/latest/topics/examples.html?highlight=yum#adding-a-yum-repository and it's not working either - might be a bug?
[12:37] <felfert_> Well, I use a much older cloud-init (that comes preinstalled with CentOS-7 stock cloud images and *that* one works)
[12:39] <felfert_> Is yours a stock CentOS-8 cloud image? Or did you install cloud-init yourself?
[12:40] <sackrebutz> it’s the centos8 stock
[12:41] <sackrebutz> v18.5
[12:44] <felfert_> The only other difference: my yaml files use 2-space indentation, yours use 4 spaces. Oh, perhaps check if you have accidentally indented with tabs.
[12:48] <felfert_> Do you use multiple yaml files as userdata or just a single yaml?
[12:48] <sackrebutz> I already converted to 2 spaces for the sake of testing, but it didn't make a difference
[12:48] <sackrebutz> only 1 yml file
[12:50] <felfert_> Then to me this looks like a bug.
[12:56] <rharper> sackrebutz: can you check if  'yum-add-repo' is present in your /etc/cloud/cloud.cfg  ; do you have the complete cloud-init.log from your test ?
[12:57] <sackrebutz> rharper: I stumbled over the same on https://bugzilla.redhat.com/show_bug.cgi?id=1027406 - testing it right now
[12:59] <rharper> sackrebutz: ah, ok
[12:59] <sackrebutz> rharper: it is present below 'cloud_config_modules' - is that correct?
[12:59] <rharper> yes
[12:59] <sackrebutz> hm
[13:00] <felfert_> But 1027406 is ancient (and fixed in 0.7.5-1.el6)
[13:00] <rharper> ok, so next, let's look if it ran in cloud-init.log
[13:00] <sackrebutz> yep, one sec
[13:03] <sackrebutz> https://dpaste.org/WctE
[13:03] <rharper> Skipping modules 'yum-add-repo' because they are not verified on distro 'centos'.  To run anyway, add them to 'unverified_modules' in config.
[13:03] <rharper> nasty
[13:03] <rharper> distros = ['fedora', 'rhel']
[13:03] <sackrebutz> ahhh
[13:04] <rharper> sackrebutz: so, you can add to your config,  unverified_modules: [yum-add-repos]   ;  let me confirm that
[13:05] <sackrebutz> yum-add-repo it would be, no?
[13:05] <rharper> yeah, a list with the module name
[13:05] <rharper> yes
[13:06] <rharper> I suspect that's worth an upstream patch
[13:07] <sackrebutz> yes! now it’s adding it correctly
[13:07] <rharper> \o/
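[Editor's note: for anyone hitting the same "Skipping modules 'yum-add-repo' ... not verified on distro 'centos'" message, a minimal sketch of the workaround discussed above; the repo id and baseurl here are illustrative, not from the original paste:]

```yaml
#cloud-config
# Allow yum-add-repo to run on a distro it is not verified for
# (CentOS in cloud-init 18.5); this is the unverified_modules
# mechanism named in the log message above.
unverified_modules: ['yum-add-repo']

yum_repos:
  # Illustrative repo definition; substitute your own id/baseurl.
  my-repo:
    name: My Example Repo
    baseurl: http://example.com/repos/centos8/$basearch
    enabled: true
    gpgcheck: false
```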
[13:08] <sackrebutz> i was already doubting my whitespace-indentation skills, even after 20 years of python
[13:08] <rharper> =(
[13:08] <sackrebutz> once again, thank you very much rharper  felfert_
[13:09] <rharper> I generally use something like python3 -c 'import sys, yaml; print(yaml.dump(yaml.load(open(sys.argv[1]))))' my.cfg
[13:10] <rharper> sackrebutz: sure
[13:14] <rharper> https://github.com/canonical/cloud-init/pull/340
[13:16] <potoftea> Hi, I'm trying to run the single module "runcmd" with the command: "cloud-init --file /root/cloud-init.yaml single --name runcmd --frequency=always". From the logs I see that the commands are being written ("Shellified 7 commands") but not executed. Could this be a bug?
[13:17] <rharper> potoftea: it's a two-step process: runcmd writes its commands to a script file, and later in final, runparts is called on it; let me get the final handler name
[13:19] <rharper> cloud-init single --name scripts_user --frequency=always
[13:20] <rharper> potoftea: ^
[13:20] <potoftea> ohh ok, thank you for quick response and yeah it helped.  rharper Thank you
[13:21] <rharper> sure
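[Editor's note: the two-step behavior rharper describes can be seen with a minimal user-data sketch; the marker path below is illustrative:]

```yaml
#cloud-config
# runcmd (a config-stage module) only *writes* these commands to
# /var/lib/cloud/instance/scripts/runcmd; the scripts_user module,
# which runs in the final stage, is what actually executes them.
# Running "cloud-init single --name runcmd" alone therefore writes
# the script without running it.
runcmd:
  - [ sh, -c, 'echo ran > /tmp/runcmd-ran' ]
```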
[13:48] <amansi26> rharper: Hi, I have a question: when we pass network details through openstack, how are they handled by cloud-init? Like, where does that information get processed? I am using v19.1 and am not able to figure out the exact place where it gets processed
[13:50] <rharper> amansi26: in cloudinit/sources/DataSourceOpenStack.py the  'network_config' property  uses cloudinit/sources/helpers/openstack.py:convert_net_json() which converts network_data.json from openstack into network-config v1 format
[13:53] <amansi26> rharper: but I am using the ConfigDrive datasource
[13:54] <rharper> ConfigDrive has its own network-config property; it reads two network config formats
[13:54] <amansi26> you mean version1 and version2. Right?
[13:54] <rharper> one is network-config, which is a debian /etc/network/interfaces file format; we convert that into v1 config via cloudinit.net.eni.convert_eni_data(). if network_data.json is present in the config drive, then we use the same openstack.convert_net_json
[13:54] <rharper> amansi26: no
[13:59] <amansi26> rharper: The issue I am facing is that when I try deploying a dhcp network, it shows the type as static in netcfg (stages.py) http://paste.openstack.org/show/792946/
[13:59] <rharper> what config files are inside your config drive ?
[14:01] <amansi26> didn't get you rharper
[14:02] <rharper> amansi26: inside the configdrive, which is an iso attached to your instance, there are files that cloud-init reads; in particular, there may be a file 'network_config' or 'networkdata'; they will define what your network config will be
[14:06] <rharper> amansi26:  like so  https://paste.ubuntu.com/p/BGJ7fbM37p/
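[Editor's note: a hedged sketch of what convert_net_json() produces; device name and MAC are illustrative. A DHCP port in openstack's network_data.json maps to a dhcp subnet in v1, while a fixed-ip port maps to type: static, which is where the static-vs-dhcp question above gets decided:]

```yaml
# Hypothetical network-config v1, as generated from network_data.json
# by cloudinit/sources/helpers/openstack.py:convert_net_json()
version: 1
config:
  - type: physical
    name: eth0
    mac_address: "fa:16:3e:00:00:01"
    subnets:
      - type: dhcp
```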
[14:07] <amansi26> So if I need to change something in the netcfg details, it has to be done via openstack commands (like passing a new parameter)?
[14:08] <rharper> amansi26: typically yes; your tenant networks define this - is it a dhcp subnet, or a static ip allocation, etc.
[14:09] <amansi26> rharper: Thanks, that cleared up a lot of underlying concepts
[14:09] <rharper> amansi26: great
[14:19] <Odd_Bloke> blackboxsw: Do we have any automated testing that validates our schemas?
[14:19] <Odd_Bloke> (e.g. ensures that the examples are valid under the schema.)
[14:29] <Odd_Bloke> blackboxsw: https://github.com/canonical/cloud-init/pull/335 <-- ready for your review, I believe
[16:31] <mutantturkey> good morning
[16:35] <blackboxsw> good mornin :)
[16:36] <blackboxsw> Odd_Bloke: approved yet needs rebase https://github.com/canonical/cloud-init/pull/341
[16:37] <blackboxsw> Odd_Bloke: unittest CI runs perform schema validation against all our existing rtd examples to make sure they adhere to the schema we have just defined.
[16:37] <blackboxsw> Odd_Bloke: I think it would be nice to extend that unittest run to also validate each of the schema['examples'] defined in the config modules too
[16:37] <blackboxsw> to validate our docs
[16:38] <blackboxsw> and the schema we claim will work
[16:57] <Odd_Bloke> It sounds like you're drawing a distinction between "our existing rtd examples" and the schema examples: don't the schema examples end up in our generated docs?
[16:57] <Odd_Bloke> (Just making sure I understand. :)
[17:04] <bpo> I wonder if someone here might point me in the right direction to analyze my cloud-init setup. I tried to get started with `cloud-init analyze blame` but that was empty - and `cloud-init analyze dump` returns `[]`. Any hints on how I should proceed? `systemd-analyze blame` shows `cloud-config.service` taking 8.5s, and `cloud-init.service` taking 1.092s. This system is running Amazon Linux 2.
[17:15] <Odd_Bloke> bpo: Is /var/log/cloud-init.log readable by the user you're running these commands as?
[17:15] <Odd_Bloke> (And what version of cloud-init?)
[17:16] <Odd_Bloke> (I'm about to be AFK for a while, so either someone else will answer or you'll have to be patient. ;)
[17:16] <bpo> @Odd_Bloke Yes, /var/log/cloud-init.log is readable (running with sudo). `/usr/bin/cloud-init 19.3-2.amzn2`
[17:16] <bpo> No rush
[17:17] <bpo> (and the log has entries)
[17:19] <blackboxsw> Odd_Bloke: right, I was trying to draw that distinction, but I was misremembering: our unittests actually only validate schema against our integration-test examples in tests/cloud_tests/testcases https://github.com/canonical/cloud-init/blob/master/tests/unittests/test_handler/test_schema.py#L400-L4012
[17:21] <blackboxsw> But I think it would be helpful if we actually extended the unit tests to validate both the rtd examples in doc/examples and each cc_*.py module's schema["examples"]
[17:21] <blackboxsw> as I believe we have found a couple doc errors in the past
[17:21] <blackboxsw> ... or been informed of
[17:22] <blackboxsw> it'd help vet our new schema changes as well because example cloud-config  with top-level keys that are not covered by existing schema, don't error (as we aren't strict at that level).
[17:23] <blackboxsw> let's try proper grammar
[17:24] <blackboxsw> Having unittests attempt schema validation on doc/examples cloud-config as well as any config module schema["examples"] will help validate schema in CI as we add it to each config module.
[17:25] <blackboxsw> Performing schema validation against cloud-config from doc/examples that contains config keys not yet in schema will not warn about unknown top-level keys.
[17:38] <LongLiveCHIEF> is there a way to control the order of module execution during the final 2 boot stages?
[17:44] <blackboxsw> paride: for tomorrow I think we can re-enable unittests on centos in your branch https://github.com/canonical/cloud-init/pull/231  ; a git rebase with those changes and it can land
[17:44] <blackboxsw> LongLiveCHIEF: if you created your own image you can edit the order of modules named in /etc/cloud/cloud.cfg
[17:45] <LongLiveCHIEF> thanks!
[17:46] <blackboxsw> LongLiveCHIEF: or, more generally, add an additional config in /etc/cloud/cloud.cfg.d with the full cloud_config_modules: in whatever order you want, and it will override the defaults in /etc/cloud/cloud.cfg
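[Editor's note: a sketch of that drop-in; the filename is hypothetical, and the module list is illustrative and shortened - an override replaces the whole list, so copy the full list from /etc/cloud/cloud.cfg and reorder it:]

```yaml
# /etc/cloud/cloud.cfg.d/99-reorder-modules.cfg (hypothetical name)
# Overrides the cloud_config_modules list from /etc/cloud/cloud.cfg;
# list every module you still want, in the order you want it run.
cloud_config_modules:
  - runcmd
  - locale
  - set-passwords
  - package-update-upgrade-install
```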
[17:47] <LongLiveCHIEF> what if I put them in a specific order in user-data of the datasource?
[17:49] <LongLiveCHIEF> i'm actually doing this with NoCloud dsmode local
[17:49] <blackboxsw> LongLiveCHIEF: something like this https://paste.ubuntu.com/p/77pqCKjrBB/
[17:50] <blackboxsw> so providing user-data that re-orders modules is trickier
[17:50] <LongLiveCHIEF> makes sense
[17:51] <blackboxsw> hrm, you could use #cloud-config's write_files  to emit that additional config file to /etc/cloud/cloud.cfg.d
[17:51] <blackboxsw> https://cloudinit.readthedocs.io/en/latest/topics/modules.html#write-files
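[Editor's note: a sketch of that write_files idea, assuming the upstream module order where write_files runs in the init stage; the drop-in filename and module list are illustrative:]

```yaml
#cloud-config
write_files:
  - path: /etc/cloud/cloud.cfg.d/99-final-order.cfg
    content: |
      # Written during the init stage, so the later stages of this
      # same boot pick it up; replaces the full final-module list.
      cloud_final_modules:
        - package-update-upgrade-install
        - runcmd
        - final-message
```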
[17:52] <LongLiveCHIEF> i'm working on raspberry pi booting with ubuntu 20.04. Got it working pretty well, but only if i disable and mask systemd-networkd-wait-online, and then run netplan apply
[17:52] <blackboxsw> write_files runs in the init phase, which would write that config out before you get to the later stages
[17:52] <LongLiveCHIEF> which means I can't use user-data modules for easy things and instead have to put them in a script called by runcmd module
[17:54] <blackboxsw> nice work on the rasppi. you mentioned running netplan apply; cloud-init should be doing that for you pre-network-bringup if you provide network configuration to it
[17:54]  * blackboxsw looks up nocloud again to see the mechanism
[17:55] <LongLiveCHIEF> yeah, that's the trick i use for writing out and then running netplan
[17:55] <LongLiveCHIEF> i do, but it never resolves. I have a power_state module usage at the end that reboots, and the second boot is when the network finally resolves
[17:55] <LongLiveCHIEF> (wifis obviously... ethernet works fine)
[17:56] <LongLiveCHIEF> but if I do it the workaround way, i can do it purely with wifi and it works prior to cloud-init finishing
[17:56] <LongLiveCHIEF> so I'm hoping to plug back in for other modules in the final stage
[17:57] <blackboxsw> LongLiveCHIEF: hrm, so your seed directory or seed url contains a network-config file in it?
[17:57] <LongLiveCHIEF> s=/boot/firmware/cloud-init
[17:58] <LongLiveCHIEF> i'm creating the cloud-init directory on the boot partition, which gets mounted to /boot/firmware during initial boot
[17:58] <LongLiveCHIEF> and that /cloud-init directory has just user-data and meta-data
[17:58] <blackboxsw> +1 on that, and /boot/firmware/cloud-init/ contains user-data and meta-data and maybe a network-config?
[17:59] <blackboxsw> ok I *think* you could provide your netplan configuration in the network-config file as network config v2 to cloud-init, to represent your entire network config on the rasppi https://cloudinit.readthedocs.io/en/latest/topics/network-config-format-v2.html#network-config-v2
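[Editor's note: a minimal sketch of that network-config file in v2 format; the SSID and passphrase are placeholders, and the path comes from the seed directory discussed in this conversation:]

```yaml
# /boot/firmware/cloud-init/network-config (seed path from this
# discussion); network config v2 is passed through to netplan
# as /etc/netplan/50-cloud-init.yaml.
version: 2
wifis:
  wlan0:
    dhcp4: true
    access-points:
      "my-ssid":
        password: "my-passphrase"
```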
[17:59] <LongLiveCHIEF> /boot/firmware contains network-config... which winds up merging into 50-cloud-init.conf
[17:59] <blackboxsw> rharper: or Odd_Bloke could correct me if I'm wrong there
[17:59] <blackboxsw> LongLiveCHIEF: right I believe so
[18:00] <LongLiveCHIEF> yes, confirmed, i CAN
[18:00] <LongLiveCHIEF> BUT
[18:00] <blackboxsw> when network config v2 is passed to cloud-init, it should be passed directly through to /etc/netplan/50-cloud-init.yaml
[18:00] <LongLiveCHIEF> if I do, then network won't fully resolve until after reboot, which means all the package modules in the final cloud-init stage will fail
[18:01] <blackboxsw> hrm, that part lost me. why wouldn't it resolve if you define dhcp on eth0 and whatever wireless config you have?
[18:02] <LongLiveCHIEF> but if I run netplan apply using the runcmd module during the final stage, the  network connection gets established
[18:02] <LongLiveCHIEF> no wait, i don't have eth0
[18:03] <LongLiveCHIEF> it's the whole wifis with cloud-init thing
[18:03] <blackboxsw> I was wondering about something like this https://paste.ubuntu.com/p/YrmftwVD9G/
[18:03] <blackboxsw> but I think I'm missing your problem
[18:03] <LongLiveCHIEF> blackboxsw: that's the part I can't figure out either. I admit I haven't spent enough time digging through the logs yet, though.
[18:03] <LongLiveCHIEF> i'm doing something like that, only i've removed eth0 entirely
[18:04] <LongLiveCHIEF> if i leave it in, then cloud init hangs forever
[18:04] <LongLiveCHIEF> if i take it out, then it finishes, but doesn't establish network connection until reboot
[18:04] <LongLiveCHIEF> which means I can't use modules that run in the final stage that require connection
[18:05] <LongLiveCHIEF> and the funny thing is, canonical has these instructions verbatim in their new 20.04 IoT docs, despite literally everyone you see on youtube stating that wifi networking will fail if you do this: https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi#3-wifi-or-ethernet
[18:07] <blackboxsw> LongLiveCHIEF: this reminds me of this bug https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1870346
[18:08] <blackboxsw> is this also what you are experiencing?
[18:09] <blackboxsw> I think it could be.... or it'll be what you hit next if you provide your network-config file.
[18:09]  * blackboxsw has to jump away for a bit. will check later 
[18:10] <LongLiveCHIEF> https://github.com/HackerHappyHour/bootcc/tree/setup-config-options/system-boot
[18:11] <LongLiveCHIEF> yes, that's what I'm experiencing.  As the bug report states, it's not an inconvenience to reboot... however, that means that you can't use many of the cloud-init modules in the final stage that require a network connection
[18:13] <LongLiveCHIEF> i've got it working with the project i linked above so the network will work even on first boot, but I want to make sure I can control the order of the final-stage modules so I can still use them, instead of having to run another script via runcmd that installs packages and does other configs cloud-init already has modules for
[18:15] <LongLiveCHIEF> *working so that even wifi networks will work on first boot
[18:21] <LongLiveCHIEF> It would be cool if the docs for the modules included which stage each module runs in. Some of the modules specify, but most don't
[18:26] <LongLiveCHIEF> https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1874377
[18:27] <LongLiveCHIEF> I think i know how to fix this. I'll work on it. thanks for the help blackboxsw_
[18:31] <LongLiveCHIEF> scratch that, there's already a fix merged! https://github.com/CanonicalLtd/netplan/pull/133
[18:37] <rharper> blackboxsw: we don't netplan apply, specifically because we run before networkd starts; once we netplan generate (which creates the required systemd-networkd files), systemd-networkd naturally finds what it needs.
[18:42] <rharper> LongLiveCHIEF: w.r.t. netplan and wifi, and the linked PR: it's not yet clear to me that the fixed version is actually fixed. As in the bug blackboxsw mentioned (and in your case), we have a valid wifi config and the "fixed" version of netplan, yet we're still not seeing wifi come up; so something is still missing in netplan. That is, the apply fix is too late; whatever needs to happen during apply should happen on the initial boot
[18:42] <rharper> so maybe this cleanup of the other networkd services mentioned needs to be figured out without a netplan apply command
[18:44] <LongLiveCHIEF> yeah, i'm seeing the same thing now that i've gone through the PR
[18:45] <LongLiveCHIEF> my end goal is the same as it would be on a digital-ocean droplet, really: to set up a specified user and packages on first boot/provision
[18:46] <LongLiveCHIEF> using wifi only
[18:46] <rharper> yeah, AFAICT, the only issue is netplan itself;
[18:46] <LongLiveCHIEF> and I can actually do that now using cloud-init, it's just a hackier way of doing it than i prefer
[18:49] <LongLiveCHIEF> just annoying, because I need that netplan apply prior to the start of the config stage of cloud-init (and it seems it would definitely work by that time), but I can't really hook into it until the final stage
[18:49] <rharper> well, you shouldn't
[18:49] <LongLiveCHIEF> haha, why, there's a reason network stage comes before config stage :p
[18:50] <rharper> right
[18:50] <rharper> when we generate, everything should be fine ... I'm going to see what gets written out on generate;  are you in a focal image ?
[18:50] <LongLiveCHIEF> yessir
[18:51] <rharper> k
[18:51] <LongLiveCHIEF> i can confirm that generate creates everything correctly
[18:51] <rharper> that's confusing then ...
[18:52] <LongLiveCHIEF> no kidding!! haha
[18:52] <rharper> we're still missing something if apply fixes things
[18:52] <LongLiveCHIEF> yep
[18:53] <rharper> ok, so 0.99-ubuntu1 writes a /run/netplan/wpa-wlan0.conf (and stuff)
[18:54] <rharper> upgrading to ubuntu2;
[18:54] <rharper> and it still writes it
[18:55] <LongLiveCHIEF> not sure which build i was on, downloaded it early in the day on the 24th, which is the day 0.99 was released
[18:55] <rharper> ah, that's the config
[18:55] <LongLiveCHIEF> so i may have just missed it
[18:55] <rharper> could you try the daily image from today ?
[18:55] <LongLiveCHIEF> point me to the dl?
[18:56] <rharper> https://cloud-images.ubuntu.com/daily/server/server/focal/current/
[19:01] <LongLiveCHIEF> they don't make dailies for the preinstalled arm64
[19:01] <LongLiveCHIEF> not sure how to work with the arm64 cloud root image
[19:06] <rharper> ah, that's a bummer
[19:06] <rharper> well, you can mount up the image, upgrade the package, and try with that
[19:07] <rharper> you can use cloud-init clean --logs
[19:07] <rharper> to clear out the previous run (mostly; some things like package installs and user-adds will remain), but you can test the networking bit
[19:08] <LongLiveCHIEF> yeah, that's what I was going to do
[19:08] <LongLiveCHIEF> i have a hunch that I want to test first
[19:08] <LongLiveCHIEF> i'm telling netplan what IP to assign.  I wonder what will happen if I just let an IP get assigned by DHCP instead
[19:10] <LongLiveCHIEF> might be a bug with managing resolve when using manually assigned IPs
[19:11] <Odd_Bloke> LongLiveCHIEF: (I'm catching up on backlog so apologies if this isn't useful info, but) there are known issues with wifi on Pi that I believe are being worked on.  I'll see if I can find a bug reference.
[19:13] <LongLiveCHIEF> no worries.
[19:14] <LongLiveCHIEF> no rush. I'm just waiting to put everything together for docs for github.com/octoprint/docker for a recommended setup guide for our less tech savvy users
[19:16] <Odd_Bloke> bpo: Hmm, that's strange. I'm not sure what's going on; what does `cloud-init analyze blame -i /var/log/cloud-init.log` give you? (The same, I expect.)
[19:16] <bpo> Yep, same:
[19:16] <bpo> "-- Boot Record 01 -- 1 boot records analyzed"
[19:17] <LongLiveCHIEF> i get 4
[19:17] <LongLiveCHIEF> ah n/m, i've booted that particular one a few times
[19:18] <Odd_Bloke> rharper: The netplan fix only landed 2 days ago, it's a better version of the pre-release wifi fix.  Have we definitely seen runs with that new version (that's only in groovy-proposed currently)?
[19:21] <Odd_Bloke> bpo: Would you be able to pastebin your cloud-init.log?  (Or, alternatively, could you try copying that file onto an updated Ubuntu server, xenial or later, and running `cloud-init analyze blame -i that_file`?)
[19:22] <LongLiveCHIEF> I'm guessing if I change the cloud.cfg.d final_module order using write_files, it wouldn't take the new config in until service restart
[19:22] <Odd_Bloke> LongLiveCHIEF: cloud-init actually executes from scratch for each phase, so I believe later phases would pick up the new configuration.
[19:23] <Odd_Bloke> rharper: (^ this is perhaps something we need to consider for the daemon plans, in terms of reloading configuration?)
[19:23] <LongLiveCHIEF> write_files happens in the final stage though, correct?
[19:23] <LongLiveCHIEF> so it would be too late
[19:23] <Odd_Bloke> LongLiveCHIEF: You can see the order by looking at /etc/cloud/cloud.cfg; write_files runs early in the init phase (i.e. the first one).
[19:24] <Odd_Bloke> (At least in the configuration we ship upstream and in Ubuntu by default. :)
[19:24] <LongLiveCHIEF> cool
[19:26] <LongLiveCHIEF> i have a workaround method that works for me atm, so I'm going to run with that for the next week, but lmk if you guys want me to test anything/share logs with you in the meantime
[19:27] <LongLiveCHIEF> i did test out my dhcp hunch, and it was a no-go. Still failed to resolve when enabling dhcp (both 4 and 6)
[19:28] <LongLiveCHIEF> i've learned a ton about cloud-init as a result of all this though, so at the very least, I'll likely be submitting a few docs contributions over the next 2 weeks
[19:29] <Odd_Bloke> \o/
[19:29] <Odd_Bloke> (That's why we leave the bugs in. ;)
[19:29] <LongLiveCHIEF> i'm also considering writing a cloud-init module that would allow you to do things like set the default pin states and such for pi
[19:30] <Odd_Bloke> Oh, that would be cool!
[19:30] <LongLiveCHIEF> you fix wifi, i'll add gpio. deal?
[19:31] <LongLiveCHIEF> 😏
[19:32] <LongLiveCHIEF> my pipedream goal however, is to enable raspberry-pi imager to be a datasource
[19:33] <Odd_Bloke> LongLiveCHIEF: I'm not familiar with the RPi ecosystem, I'd be interested to hear a little more about how you would see that working.
[19:35] <LongLiveCHIEF> raspberry pi imager is an electron-based application that burns bootable USBs. What that means, though, is that it could also use local network details to look up and generate network-config and user-data at an endpoint accessible to your own internal network, and it could easily add that network endpoint to the ds seed in cmdline.txt
[19:36] <LongLiveCHIEF> so imagine the rpi imager having similar configuration screen as digital-ocean, and in one step it downloads, burns, and configures a bootable microsd for your pi that will install whatever packages you desire
[19:37] <LongLiveCHIEF> many of the options I'm talking about i've organized into a project here: https://github.com/HackerHappyHour/bootcc/projects/1
[19:38] <LongLiveCHIEF> i'm going to be hacking on a prototype for the next 4 days, and then see if it's something that makes sense to contribute to the https://github.com/raspberrypi/rpi-imager project
[19:38] <LongLiveCHIEF> announced here: https://www.raspberrypi.org/blog/raspberry-pi-imager-imaging-utility/
[19:39] <blackboxsw> good point Odd_Bloke on daemon config reload. that'd be a gap vs current implementation
[19:40] <LongLiveCHIEF> i'll ping you guys when i have a simple demo ready. Probably about this time tomorrow
[19:40] <blackboxsw> good deal LongLiveCHIEF.
[19:41] <Odd_Bloke> LongLiveCHIEF: Cool!
[19:42] <bpo> Odd_Bloke: https://pastebin.com/uhJNysgm
[19:43] <bpo> Must be an issue with the distro; that's from a fresh Amazon Linux 2 AMI with no customisations - same behavior.
[19:44] <Odd_Bloke> bpo: That doesn't give me any output with cloud-init master so Something Strange is happening; I'll see if I can figure it out.
[19:47] <bpo> Odd_Bloke: thank you! I'll be curious to hear what you find. I will check back in periodically but will probably be slow to respond.
[19:51] <LongLiveCHIEF> bpo: you guys talking about the wifi/network issue or something else?
[20:03] <Odd_Bloke> Something else. :)
[20:11] <Odd_Bloke> bpo: OK, the issue has something to do with the log format differing between upstream and Amazon Linux (meaning that the upstream code doesn't identify any of the cloud-init.log lines as being cloud-init log lines).
[20:11] <Odd_Bloke> bpo: Could you run `grep _log -R /etc/cloud` on an instance to find any cloud-init logging configuration (and if you find some, pastebin it)?
[20:42] <blackboxsw> Odd_Bloke: https://github.com/canonical/cloud-init/pull/335 is good (needs rebase)
[21:29] <Odd_Bloke> blackboxsw: https://github.com/canonical/cloud-init/pull/329 is ready for your re-review.
[21:36] <blackboxsw> hrm, Odd_Bloke: so lxc images all have different hashes; how do we know if a hash is from a xenial image or a bionic one? and if we've changed from xenial -> bionic across travis runs, aren't we just removing all images and re-downloading the new series?
[21:36] <Odd_Bloke> blackboxsw: We only use one lxd image in our builds, a xenial one.
[21:36] <Odd_Bloke> Let me add that to the explanatory comment.
[21:38] <blackboxsw> ahh right --os-name xenial
[21:38] <blackboxsw> ok I'm good with that then
[21:39] <blackboxsw> but comment welcome for future me
[21:40] <Odd_Bloke> blackboxsw: Pushed as a separate commit for ease of review: https://github.com/canonical/cloud-init/pull/329/commits/32b2ea23d28accf4c4c34f32865c8041712e2480
[22:01] <bpo> Odd_Bloke: Here is /etc/cloud/cloud.cfg.d/05_logging.cfg: https://pastebin.com/x6NZ4NFJ
[22:03] <Odd_Bloke> bpo: Thanks!  I'm just finishing up my day, so I'll take a look in the morning.
[22:04] <bpo> Odd_Bloke: sounds good, thanks