[05:18] <MICROburst> holmanb: The apt: stuff substitutes the variables but offers only one line for source, so you can have only 'deb' or 'deb-src'; tricks like '\n' fail. As a result, stuff like the rspamd setup (https://rspamd.com/downloads.html -- debian tab) has to be put in an extra file. Actually apt: is pretty useless for this.
[07:30] <SDes91> does someone have some example cloud-config for partitioning disks for QEMU?
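(The question went unanswered in-channel. For reference, cloud-init's disk_setup, fs_setup, and mounts modules cover this; below is a minimal, untested sketch for a QEMU guest with a second virtio disk — the device name /dev/vdb and the mount point are assumptions:)

```yaml
#cloud-config
disk_setup:
  /dev/vdb:              # second virtio disk in a QEMU guest (assumed name)
    table_type: gpt
    layout: true         # one partition spanning the whole disk
    overwrite: false     # don't clobber an existing partition table
fs_setup:
  - label: data
    filesystem: ext4
    device: /dev/vdb
    partition: auto
mounts:
  - [/dev/vdb1, /data, ext4, "defaults,nofail"]
```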
[13:15] <holmanb> MICROburst: I don't see anything there that demands multiline deb src support, so you'll have to be more specific about why you think it's necessary - it doesn't appear to be necessary to me. That all looks doable with apt:
[13:18] <holmanb> ah, they left nvm
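(For the record, the rspamd 'deb' plus 'deb-src' pair can likely be expressed as two apt.sources entries that share a filename, one source line each, rather than needing multiline support — a hedged, untested sketch; the repo URL is abbreviated from the rspamd page and key handling is omitted:)

```yaml
#cloud-config
apt:
  sources:
    rspamd:                      # entry names are arbitrary
      filename: rspamd.list      # both entries append to the same file
      source: "deb [arch=amd64] http://rspamd.com/apt-stable/ $RELEASE main"
    rspamd-src:
      filename: rspamd.list
      source: "deb-src [arch=amd64] http://rspamd.com/apt-stable/ $RELEASE main"
```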
[19:37] <cjp256> falcojr: does my last comment on #1667 make more sense?
[19:39] <falcojr> cjp256: it does. Me and (mostly) blackboxsw have been doing some digging to figure out exactly what's going on and the best way forward
[19:40] <blackboxsw> cjp256: sorry we are actively discussing this :)
[19:40] <blackboxsw> and want to hold 22.3 release on getting this included
[19:43] <blackboxsw> ok so, per your question "can you think of a time when we update metadata but wouldn't want to write it back out?"... the only case I can think of is if/when we wanted to provide the ability to check via the cloud-init CLI whether metadata was updated. Like a `cloud-init query --no-cache some_key`
[19:43] <blackboxsw> falcojr: ^
[19:44] <blackboxsw> this was something we talked about a few years ago, but whether to persist the obj.pkl updates from a Datasource.get_data() call could easily be a param/switch provided to the get_data method that opts in or out of updating the cache
[19:45] <blackboxsw> "this was something we talked about" == the feature of potentially surfacing a cloud-init CLI query param to allow for direct datasource queries without using a cache.. so it implies some sort of ephemeral get_data() run without persisting the updated output to the JSON or obj.pkl files
[19:46] <blackboxsw> but both instance-data.json and obj.pkl output would fall into the same boat. you either want to persist all metadata updates to disk via get_data() if the data changes, or avoid it for all files
[19:47] <falcojr> yeah, for now at least, I think it makes the most sense to keep the pickle writing with the update metadata
[19:47] <falcojr> basically right below where we write the json
[19:48] <falcojr> if we need to move it out later, we'll need to move the json writing out too, but I think it makes more sense to keep them together
[19:48] <cjp256> makes sense
[19:48] <blackboxsw> yeah works for me
[20:02] <blackboxsw> falcojr: I'm cobbling together a 2nd quick patch suggestion for this Azure obj.pkl cache handling. ~5 mins
[20:03] <falcojr> blackboxsw: sounds good, thanks!
[20:24] <blackboxsw> 5 mins, 500 mins, what's the big diff... unraveling circular imports
[21:00] <blackboxsw> ok, PR up for review falcojr cjp256 holmanb: I've migrated pkl_load/pkl_store into datasource/__init__ because it is really only applicable there. https://github.com/canonical/cloud-init/pull/1669
[21:00] <blackboxsw> I'm going through testing now on a live Azure system
[21:00] <blackboxsw> while adding/removing NICs
[21:19] <cjp256> thanks :)