[05:18] holmanb: The apt: stuff substitutes the variable but offers only one line for source. So you can have only 'deb' or 'deb-src'; tricks like '\n' fail. As a result, stuff like rspamd's setup (https://rspamd.com/downloads.html -- debian tab) has to be put in an extra file. Actually apt: is pretty useless.
[07:30] does someone have some example cloud-config for partitioning disks for QEMU?
[13:15] MICROburst: I don't see anything there that demands multiline deb src support, so you'll have to be more specific about why you think it's necessary - it doesn't appear to be necessary to me. That all looks doable with apt:
[13:18] ah, they left, nvm
[19:37] falcojr: does my last comment on #1667 make more sense?
[19:39] cjp256: it does. blackboxsw (mostly) and I have been doing some digging to figure out exactly what's going on and the best way forward
[19:40] cjp256: sorry, we are actively discussing this :)
[19:40] and want to hold the 22.3 release on getting this included
[19:43] ok so, per your question "can you think of a time when we update metadata but wouldn't want to write it back out?"... the only case I can think of is if/when we wanted to provide the ability to check via the cloud-init CLI whether metadata was updated, like a `cloud-init query --no-cache some_key`
[19:43] falcojr: ^
[19:44] this was something we talked about a few years ago, but persisting the obj.pkl updates (or not) due to a Datasource.get_data() call could easily be a param/switch provided to the get_data method that opts to update the cache or not
[19:45] "this was something we talked about" == the feature of potentially surfacing a cloud-init CLI query param to allow direct datasource queries without using a cache, so it implies some sort of ephemeral get_data() run without persisting the updated output to the JSON or obj.pkl files
[19:46] but both instance-data.json and obj.pkl output would fall into the same boat.
you either want to persist all metadata updates to disk via get_data() if the data changes, or avoid it for all files
[19:47] yeah, for now at least, I think it makes the most sense to keep the pickle writing with the metadata update
[19:47] basically right below where we write the json
[19:48] if we need to move it out later, we'll need to move the json writing out too, but I think it makes more sense to keep them together
[19:48] makes sense
[19:48] yeah works for mw
[19:48] yeah works for me
[20:02] falcojr: I'm cobbling up a second quick patch suggestion for this Azure obj.pkl cache handling. ~5 mins
[20:03] blackboxsw: sounds good, thanks!
[20:24] 5 mins, 500 mins, what's the big diff. unraveling circular imports
[21:00] ok, PR up for review falcojr cjp256 holmanb. I've migrated pkl_load/pkl_store into datasource/__init__ because it is really only applicable there. https://github.com/canonical/cloud-init/pull/1669
[21:00] Pull 1669 in canonical/cloud-init "sources: obj.pkl cache should be written anytime get_data is run" [Open]
[21:00] I'm going through testing now on a live Azure system
[21:00] while adding/removing NICs
[21:19] thanks :)
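
On the 05:18 apt: complaint: each `apt.sources` entry does take a single `source` line, but a 'deb' and a 'deb-src' line can be expressed as two separate entries, which seems to be what "doable with apt:" at 13:15 meant. A minimal sketch, assuming the rspamd repository URL from the page linked above (signing-key configuration omitted):

```yaml
#cloud-config
apt:
  sources:
    rspamd:
      source: "deb [arch=amd64] http://rspamd.com/apt-stable/ $RELEASE main"
    rspamd-src:
      source: "deb-src [arch=amd64] http://rspamd.com/apt-stable/ $RELEASE main"
```

cloud-init substitutes `$RELEASE` with the distribution codename, so the same config works across releases.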
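
The 07:30 question about partitioning disks for QEMU went unanswered in the log; a minimal sketch using cloud-init's `disk_setup` and `fs_setup` modules, assuming a virtio data disk attached at `/dev/vdb` (the device path and label are assumptions, not from the log):

```yaml
#cloud-config
disk_setup:
  /dev/vdb:
    table_type: gpt
    layout: true        # one partition spanning the whole disk
    overwrite: false    # don't clobber an existing partition table
fs_setup:
  - label: data
    filesystem: ext4
    device: /dev/vdb
    partition: auto
mounts:
  - [/dev/vdb1, /data, ext4, "defaults,nofail"]
```

The `nofail` mount option keeps the guest booting if the data disk is detached.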
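
The "param/switch provided to the get_data method" idea from the 19:44 discussion could be sketched like this. This is an illustrative sketch only, not cloud-init's actual API: the names `persist_cache`, `_write_json`, and `_pkl_store` are hypothetical, and the on-disk paths are just the conventional cloud-init cache locations mentioned in the log.

```python
# Hypothetical sketch of letting callers run get_data() without
# persisting the refreshed metadata, so something like a future
# `cloud-init query --no-cache` could query the datasource directly.
import json
import pickle


class DataSource:
    def __init__(self):
        self.metadata = {}

    def _get_data(self):
        # Real datasources fetch from IMDS / config drives here.
        self.metadata = {"instance-id": "i-abc123"}
        return True

    def get_data(self, persist_cache=True):
        fetched = self._get_data()
        if fetched and persist_cache:
            # Keep the JSON and obj.pkl writes together, per the
            # 19:46-19:48 discussion: either both caches reflect the
            # update, or neither does.
            self._write_json("/run/cloud-init/instance-data.json")
            self._pkl_store("/var/lib/cloud/instance/obj.pkl")
        return fetched

    def _write_json(self, path):
        with open(path, "w") as f:
            json.dump(self.metadata, f)

    def _pkl_store(self, path):
        with open(path, "wb") as f:
            pickle.dump(self, f)
```

An ephemeral run would then be `ds.get_data(persist_cache=False)`: metadata is refreshed in memory but neither cache file is touched.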