/srv/irclogs.ubuntu.com/2022/08/17/#cloud-init.txt

<MICROburst> holmanb: The apt: config substitutes the variables but offers only one line per source, so you can have only 'deb' or 'deb-src'; tricks like '\n' fail. As a result, stuff like rspamd's setup (https://rspamd.com/downloads.html -- debian tab) has to be put in an extra file. Actually apt: is pretty useless.  05:18
<SDes91> does someone have some example cloud-config snippets for partitioning disks on QEMU?  07:30
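No answer appears in the log, but cloud-init's disk_setup/fs_setup modules cover this case. A minimal sketch of what such a config might look like, assuming the disk shows up as /dev/vdb (the device name, label, and mount point here are illustrative, not from the log):

```yaml
#cloud-config
disk_setup:
  /dev/vdb:
    table_type: gpt
    layout: true        # one partition spanning the whole disk
    overwrite: false    # don't clobber an existing partition table
fs_setup:
  - label: data
    filesystem: ext4
    device: /dev/vdb1
mounts:
  - [/dev/vdb1, /data, ext4, "defaults,nofail"]
```

With QEMU, a virtio disk typically appears as /dev/vdb; adjust the device path to match the guest's actual naming.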
<holmanb> MICROburst: I don't see anything there that demands multiline deb src support, so you'll have to be more specific about why you think it's necessary - it doesn't appear to be necessary to me. That all looks doable with apt:  13:15
<holmanb> ah, they left, nvm  13:18
<cjp256> falcojr: does my last comment on #1667 make more sense?  19:37
<falcojr> cjp256: it does. blackboxsw and I (mostly blackboxsw) have been doing some digging to figure out exactly what's going on and the best way forward  19:39
<blackboxsw> cjp256: sorry, we are actively discussing this :)  19:40
<blackboxsw> and want to hold the 22.3 release on getting this included  19:40
<blackboxsw> ok so, per your question "can you think of a time when we update metadata but wouldn't want to write it back out?"... the only case I can think of is if/when we wanted to provide the ability to check via the cloud-init CLI whether metadata was updated, like a `cloud-init query --no-cache some_key`  19:43
<blackboxsw> falcojr: ^  19:43
<blackboxsw> this was something we talked about a few years ago, but persisting (or not persisting) the obj.pkl updates from a Datasource.get_data() call could easily be a param/switch provided to the get_data method that opts to update the cache or not  19:44
<blackboxsw> "this was something we talked about" == the feature of potentially surfacing a cloud-init CLI query param to allow direct datasource queries without using a cache.. so it implies some sort of ephemeral get_data() run without persisting the updated output to the JSON or obj.pkl files  19:45
<blackboxsw> but both instance-data.json and obj.pkl output would fall into the same boat: you either want to persist all metadata updates to disk via get_data() when the data changes, or avoid it for all files  19:46
<falcojr> yeah, for now at least, I think it makes the most sense to keep the pickle writing with the updated metadata  19:47
<falcojr> basically right below where we write the json  19:47
<falcojr> if we need to move it out later, we'll need to move the json writing out too, but I think it makes more sense to keep them together  19:48
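The pattern being discussed can be sketched in a few lines. This is hypothetical illustration, not cloud-init's actual DataSource code: the class, the `persist_cache` switch, and the `_fetch` helper are invented names standing in for the real Datasource.get_data() machinery, showing how the JSON and pickle writes stay together behind one opt-out flag:

```python
import json
import pickle


class DataSource:
    """Toy datasource: fetches metadata, optionally persists both caches."""

    def __init__(self, json_path, pkl_path):
        self.json_path = json_path  # stand-in for instance-data.json
        self.pkl_path = pkl_path    # stand-in for obj.pkl
        self.metadata = {}

    def _fetch(self):
        # Stand-in for a real metadata-service query.
        return {"instance-id": "i-12345"}

    def get_data(self, persist_cache=True):
        self.metadata = self._fetch()
        if persist_cache:
            # Keep the JSON and pickle writes together, as suggested
            # above: both caches stay in sync, or neither is updated.
            with open(self.json_path, "w") as f:
                json.dump(self.metadata, f)
            with open(self.pkl_path, "wb") as f:
                pickle.dump(self.metadata, f)
        return True
```

An ephemeral `cloud-init query --no-cache`-style run would then just call `get_data(persist_cache=False)`, refreshing in-memory metadata without touching either file on disk.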
<cjp256> makes sense  19:48
<blackboxsw> yeah, works for me  19:48
<blackboxsw> falcojr: I'm cobbling up a 2nd quick patch suggestion for this Azure obj.pkl cache handling. ~5 mins  20:02
<falcojr> blackboxsw: sounds good, thanks!  20:03
<blackboxsw> 5 mins, 500 mins, what's the big diff... unraveling circular imports  20:24
<blackboxsw> ok, PR up for review falcojr cjp256 holmanb. I've migrated pkl_load/pkl_store into datasource/__init__ because it is really only applicable there. https://github.com/canonical/cloud-init/pull/1669  21:00
<ubottu> Pull 1669 in canonical/cloud-init "sources: obj.pkl cache should be written anytime get_data is run" [Open]  21:00
<blackboxsw> I'm going through testing now on a live Azure system  21:00
<blackboxsw> while adding/removing NICs  21:00
<cjp256> thanks :)  21:19

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!