[03:49] <holman> meena: 1773 merged
[03:57] <holman> meena: I didn't realize openrc is using a service wrapper. I thought the !systemd branch was just sysv, not the union of sysv and openrc.
[04:16] <holman> meena: yeah scratch my request I don't think it makes sense in light of that information
[11:48] <meena> who's got opinions on versioning of development versions?
[11:49] <meena> I have https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=266847 cloud-init-devel-22.4 (and will be bumping the revision every week or so)
[11:49] -ubottu:#cloud-init- bugs.freebsd.org bug 266847 in Ports & Packages "[newport] net/cloud-init-devel" [Affects Only Me, New]
[11:49] <meena> but, does that make any sense? will there be a cloud-init 22.4? should I just go with 23.1 ?
[12:28] <aciba> meena: There will be a 22.4 soon -> https://discourse.ubuntu.com/t/cloud-init-2022-release-schedule/25413
[13:15] <meena> wow, i just installed alpine into a VM for testing, and it boots faster than some docker containers
[13:30] <meena> git pull, and I'm getting
[13:30] <meena> what did I do??
[13:33] <meena> minimal: 14:15 <meena> wow, i just installed alpine into a VM for testing, and it boots faster than some docker containers
[13:44] <minimal> meena: yeah it's pretty lightweight :-)
[13:49] <minimal> meena: I am working on changes to deal with service enable/disable in distros/alpine.py
[13:49] <meena> minimal: oh cool, so I don't have to :D
[13:50] <minimal> meena: it will just be a change to the existing manage_service function in alpine.py
[13:56] <minimal> meena: doh! getting confused. It will be a change to manage_service in __init__.py
[13:56] <meena> minimal: __init__.py handles the general cases, that's why there's only systemctl and service in there. I put rcctl into openbsd.py, and a specialized service into freebsd.py
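The split meena describes — a generic handler in `__init__.py` with per-distro overrides like `rcctl` in `openbsd.py` — follows a pattern roughly like this (a sketch, not cloud-init's actual code; the class and method names here are illustrative):

```python
# Sketch of the dispatch pattern: the base class builds a generic
# "service"-style command line; init-system-specific subclasses
# override it where the command shape differs.
class Distro:
    init_cmd = ["service"]  # sysvinit/OpenRC-style default

    def service_cmd(self, action, service):
        # e.g. ["service", "ntpd", "start"]
        return [*self.init_cmd, service, action]


class OpenBSDDistro(Distro):
    def service_cmd(self, action, service):
        # OpenBSD's rcctl takes the action first: rcctl <action> <service>
        return ["rcctl", action, service]


print(Distro().service_cmd("start", "ntpd"))         # ['service', 'ntpd', 'start']
print(OpenBSDDistro().service_cmd("start", "ntpd"))  # ['rcctl', 'start', 'ntpd']
```

The point of returning the argv rather than hard-coding strings is that each distro only overrides the part of the command shape that actually differs.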
[13:59] <minimal> meena: coffee effects are wearing off plus I'm in the middle of other non-cloud-init stuff, anyway I'm already working on the Alpine-specific case
[14:08] <minimal> meena: the "complication" for enabling/disabling services is, in which runlevel?
[14:09] <minimal> so I'm only going to do so for the default runlevel as manage_service doesn't pass a runlevel
[14:11] <meena> minimal: yeah, i think it would be overkill to add runlevels
[14:37] <minimal> meena: also am trying to figure out how exactly to handle the "status" option for manage_service
[14:38] <minimal> OpenRC returns different exit codes for "started" (0), "stopped" (3), and no such service(1)
[14:41] <minimal> looks like cc_set_passwords is the only one that calls manage_service("status", ...) and it doesn't distinguish between "stopped" and "no such service"
[14:59] <meena> minimal: subp takes an array of expected return values, IIRC
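If subp does accept a list of expected return codes (as meena recalls), the OpenRC status handling could look roughly like this — a sketch only, with the exit-code mapping taken from minimal's description above:

```python
import subprocess

# OpenRC "rc-service <name> status" exit codes, per the discussion above:
# 0 = started, 3 = stopped, 1 = no such service.
STATUS = {0: "started", 3: "stopped", 1: "no such service"}


def openrc_status(service):
    # Treat all three codes as "expected" rather than raising on a
    # non-zero exit, the way passing rcs=[0, 1, 3] to subp would.
    proc = subprocess.run(
        ["rc-service", service, "status"], capture_output=True
    )
    return STATUS.get(proc.returncode, "unknown")
```

That keeps "stopped" and "no such service" distinguishable for any future caller, even if cc_set_passwords currently doesn't care.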
[17:52] <meena> how can i get pytest to log a useful log file i can share with people to help me figure out why those tests are failing on FreeBSD
[17:52] <meena> also, i wonder now, if OpenBSD / NetBSD have the same or different failures
[17:52] <falcojr> meena: what are you looking for? Default log usually works fine for me. '-s' and '-v' can be added for more verbosity
[17:53] <meena> falcojr: oh
[18:49] <falcojr> blackboxsw and holman: with hotplug enabled, if we have a new interface added with no details other than an interface name, what would you expect to happen?
[18:50] <falcojr> ignore it completely? Render a config that uses dhcp on both interfaces? If we did that, we'll get two default routes which...technically won't actually hurt anything but doesn't make a whole lot of sense either
[19:13] <blackboxsw> hrm, @falcojr would we be able to add a metric: 200 to the config on that new device we add?
[19:14] <blackboxsw> then any dhcp we setup will prioritize the original device that was attached over the hotplugged device
[19:14] <blackboxsw> then any dhcp we setup will prioritize the default routes on original device that was attached over the hotplugged device
[19:15] <blackboxsw> we do the "route-metric" increment by 100 netplan dance for any 'secondary' nics in both Azure and Ec2 based on NIC ordering
[19:16] <blackboxsw> using `dhcp4-overrides`
[19:16] <blackboxsw> though on hotremove, we'd have to somehow reset those metrics
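The "route-metric increment by 100" dance blackboxsw describes renders to netplan roughly like this (device names illustrative; the secondary NIC gets a higher metric so the primary's default route wins):

```yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp4-overrides:
        route-metric: 100
    eth1:
      dhcp4: true
      dhcp4-overrides:
        route-metric: 200
```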
[19:18] <holman> no config provided? I'd expect us not to configure it
[19:19] <holman> Why would we configure it if not given config information?
[19:21] <falcojr> well by default we're not given config for the primary interface either
[19:21] <blackboxsw> I was only thinking: if a cloud-platform's default is dhcp on new devices and their IMDS only gives us a device name as each NIC is added to a vm/container, cloud-init would opt to set up dhcp there.
[19:26] <blackboxsw> But maybe that's an incorrect assumption to make in the absence of specific config data. Note as well, if there are non-dhcp uses in LXD, those launching the platform can specify their passthrough cloud-init.network-config to state otherwise if that is unwanted.
[19:32] <holman> question: do any clouds provide the ability via the cloud api/cli/ui to actually configure a dhcp server?
[19:32] <holman> (not manually via another server on the same subnet)
[19:42] <minimal> holman: MAAS seems to provide a means to configure an external DHCP server instead of builtin MAAS one...
[19:42] <holman> minimal: nice, thanks
[19:43] <holman> Okay, let's say we dhcp by default on every interface - the user passes in (via kernel cli) a config with static IPs and dhcp disabled. What would they expect when they log in?
[19:46]  * holman goes to read code
[19:58] <holman> A few assumptions that I have about this question if @blackboxsw @falcojr or anyone feels different let me know:
[19:58] <holman>  1) If the default case is for cloud dhcp servers to hand out IP information in the same subnet and only a default route, configuring a second interface doesn't make much sense (why add two default routes?)
[19:58] <holman>  2) I think there is a case to be made for assuming dhcp on every interface if nics are to be in separate subnets and that subnet information is disseminated via dhcp (presumably configured via a web console/api/cli).
[19:58] <holman>  3) A user should be able to override the dhcp information with a provided config.
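For point 3, the user-supplied override holman describes would be ordinary network-config; something like this (addresses are illustrative, and older netplan releases spell the default route as `gateway4` instead):

```yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.0.2.10/24]
      routes:
        - to: default
          via: 192.0.2.1
      nameservers:
        addresses: [192.0.2.53]
```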
[20:26] <meena> https://cloud-init.io/ somewhat broken: the frame on the right says: can’t connect to the server at tube.cthd.icu.
[20:28] <holman> meena: I don't see the same thing. Is that the frame next to the text that says "The standard for customising cloud instances"
[20:28] <falcojr> meena: got any extensions installed that might block/redirect youtube? It looks ok on my end
[20:28] <meena> falcojr: ah, yeah, i do
[20:30] <meena> okay, so, where does pytest log into?
[20:31] <meena> I'm not seeing anything… and I'm not seeing anything in .gitignore either
[20:31] <falcojr> should all spit out to stdout/stderr unless you have some other redirection happening
[20:32] <falcojr> there's also flags for specifying log related stuff including --log-file
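Pulling falcojr's suggestions together, a run that produces a shareable log might look like this (the filename and test path are arbitrary):

```shell
# -v: verbose test names; -s: don't capture stdout/stderr;
# --log-file: also write log records to a file you can share.
python3 -m pytest -v -s --log-file=pytest-freebsd.log \
    --log-file-level=DEBUG tests/unittests/
```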
[20:33] <blackboxsw> thanks holman: I think I leaned toward: LXD is using dhcp as the primary setup config for systems, and we could provide easy automation for folks who opt into hotplug by auto-configuring secondary NICs with a deprioritized dhcp route-metric. But, per your point and stgraber's, having multiple NICs with dhcp set up is something that someone could easily configure via lxc config set cloud-init.network-config.
[20:35] <meena> falcojr: ah
[20:35] <blackboxsw> holman: but per our discussion, it sounds like the configuration of the NIC expressed in /dev/lxd/sock: 1.0/devices is more intended for the LXD host than client config, and the consensus is that anyone hotplugging secondary NICs should provide explicit cloud-init.network-config for their use case prior to adding the new NIC, so that cloud-init knows the specific config intent of those devices.
[20:37] <blackboxsw> so our approach for hotplug on LXD is to document setting cloud-init.network-config first, then adding NICs, to ensure cloud-init sets up the network correctly for your needs (as needs will vary)
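The documented flow would then look something like this (instance, bridge, and file names are illustrative; I believe `lxc config set ... -` reads the value from stdin on recent LXD, but treat that as an assumption):

```shell
# 1. Provide explicit network intent before hotplugging:
lxc config set my-vm cloud-init.network-config - < network-config.yaml
# 2. Then attach the extra NIC:
lxc config device add my-vm eth1 nic network=lxdbr0
```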
[20:40] <meena> lots of subp mocks failing… weird.
[20:50] <falcojr> meena: I wouldn't be surprised if we have a number of tests that forgot to mock something that is Linux specific
[20:51] <falcojr> when it "works on my machine", it's hard to know if there's something 5 calls down from what you're testing that needs to be mocked
[21:00] <meena> falcojr: aye
[21:03] <meena> falcojr: here's the output: https://gist.github.com/75c61c84c09a8edbe357713ca3aa1f14
[21:04] <falcojr> oh hmmm...I wouldn't be expecting all of those subp errors. My guess is that there might be a lower level function being used everywhere that has a python version in linux, but shells out on BSD.
[21:05] <falcojr> or we're mocking the linux version, but not the BSD version
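The "forgot to mock something Linux-specific" failure mode falcojr describes is the classic reason to patch the shell-out layer in unit tests. A generic sketch of the pattern (not cloud-init's actual helpers — `ntpd_running` is hypothetical):

```python
import subprocess
from unittest import mock


def ntpd_running():
    # Hypothetical helper that shells out; on the BSDs the binary and
    # arguments differ, so an unmocked call fails in surprising ways.
    return subprocess.run(["service", "ntpd", "status"]).returncode == 0


# Patching subprocess.run means the test never touches the real init
# system, so it behaves identically on Linux and the BSDs.
with mock.patch("subprocess.run") as m_run:
    m_run.return_value = mock.Mock(returncode=0)
    assert ntpd_running()
    m_run.assert_called_once_with(["service", "ntpd", "status"])
```

If the mock is applied five calls up from where the shell-out actually happens, the test still "works on my machine" on Linux and only breaks elsewhere — which matches the symptoms in meena's gist.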
[21:43] <meena> falcojr: can we file this officially as bug ?
[21:43] <falcojr> meena: sure
[21:44] <meena> also, can we add a circle-ci build, too? well, once we got enough of them fixed…
[21:44] <meena> (i don't know any other CI that has not-Linux…)
[21:45] <meena> well, source hut, too
[21:46] <falcojr> that's a bigger discussion I think
[21:47] <falcojr> our upstream CI is currently in Travis, and I think it'd be more work than we'd like to migrate it to another 3rd party service
[21:47] <falcojr> we keep the surface fairly small on CI to keep costs down as well
[21:49] <meena> falcojr: are you paying Travis?
[21:49] <meena> (pretty sure anyone who doesn't, has migrated away from it)
[21:50] <falcojr> yes, Canonical pays for it
[21:58] <blackboxsw> meena: yeah we had a set of hiccups when Travis started rate-limiting open-source projects and we had to start paying for quality of service for CI builds. Some of our test loads are run by Jenkins, as well as GitHub actions/workflows for some projects. So now we are spread across three frameworks for test-related efforts, and in the works is potentially a move to GitHub self-hosted runners
[21:58] <blackboxsw> https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners
[21:59] <meena> oh, i didn't know GitHub had that…
[21:59] <meena> a FLOSS friend looked into GitHub actions and their security model was… shaky… i wonder if they've done anything about that since…
[22:00] <meena> but self-hosted runners would be pretty sweet, if you have an OpenStack (?) fleet with different OSes…
[22:02]  * meena tries to remember what she was trying to do after getting a request to do… something else
[22:02] <meena> at least i'm done with my report writing
[22:09] <meena> ah, right, test my ntp changes
[22:27] <meena> ooh, i need to extend the docs!
[23:04] <meena> cc_ntp works to enable ntpd on FreeBSD. Now to test with chrony and openntpd.
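For reference, the kind of cloud-config being exercised here (servers are illustrative, and whether `openntpd` is an accepted `ntp_client` value may vary by cloud-init release — treat it as an assumption):

```yaml
#cloud-config
ntp:
  enabled: true
  ntp_client: chrony   # or ntp / openntpd, as tested above
  servers:
    - 0.freebsd.pool.ntp.org
    - 1.freebsd.pool.ntp.org
```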
[23:05] <meena> actually, i might leave that until tomorrow, given that I can't quite keep my eyes open…
[23:05] <minimal> meena: working on the manage_service related stuff for alpine, think the cc_ntp tests need to be changed to handle non-systemd
[23:06] <meena> minimal: \o/
[23:06]  * meena should actually be working on tests for her ifconfig parser
[23:07] <minimal> meena: am finding 1 test failure where the ntp testcase is expecting systemd's systemctl to be used when (Alpine's) rc-service is used, in the process of figuring it out
[23:08] <meena> just for my laptop, i should be able to start ntpd with -g
[23:08] <meena> (i hope i won't need that in real cloud environments…)
[23:31] <meena> openntpd on FreeBSD: ✅
[23:31] <meena> Gonna check chrony tomorrow