[15:12] <blackboxsw> aciba: a number of integration test failures on jenkins are "DEPRECATED." vs "DEPRECATED:". https://jenkins.canonical.com/server-team/view/cloud-init/job/cloud-init-integration-bionic-lxd_container/78/testReport/junit/tests.integration_tests.cmd.test_schema/TestSchemaDeprecations/test_clean_log/   You mentioned volunteering to sort that failure with a PR; that'd be great.
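A fix along these lines could normalize both deprecation markers before asserting on the log, so the test passes against either spelling. This is a hypothetical sketch; `normalize_deprecation_markers` is not cloud-init's actual helper:

```python
import re


def normalize_deprecation_markers(log_text: str) -> str:
    """Collapse 'DEPRECATED.' and 'DEPRECATED:' into one spelling so a
    log assertion matches either form (hypothetical helper, not
    cloud-init's API)."""
    return re.sub(r"DEPRECATED[.:]", "DEPRECATED:", log_text)


# Both spellings normalize to the same marker:
old_style = "DEPRECATED. Dropping unsupported key 'foo'"
new_style = "DEPRECATED: Dropping unsupported key 'foo'"
assert normalize_deprecation_markers(old_style) == normalize_deprecation_markers(new_style)
```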
[15:12] <blackboxsw> aciba: I'll keep looking through other jenkins failures now to see if we might be able to release again to Ubuntu kinetic
[15:13] <aciba> blackboxsw: yes, I'll fix them!
[15:13] <blackboxsw> Thanks!
[15:14] <blackboxsw> community-notice: cloud-init upstream plans to cut a quarterly time-based release of version 22.3 in 8 days for tip of main. If there are pull requests, features, or bugs in flight that need attention please raise them here or on the mailing list. We'll send an email about this as well.
[15:14] <blackboxsw> our release schedule is documented in the IRC channel header and points to https://discourse.ubuntu.com/t/cloud-init-2022-release-schedule/25413
[15:17] <blackboxsw> community-notice: the plan for Ubuntu after the upstream release of 22.3 is to start an Ubuntu SRU process to also publish cloud-init 22.3 into 18.04, 20.04, 22.04 and 22.10 about a week after upstream release verification completes.
[15:28] <blackboxsw> wow, it seems paride added the jenkins build log analyzer plugin. So I've started adding regex values for typical intermittent Jenkins errors we've seen (like LXD getting saturated w/ requests). For identified problems we will now see an "identified problems" footer on the build runner page https://jenkins.canonical.com/server-team/view/cloud-init/job/cloud-init-integration-bionic-lxd_container/78/
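For illustration, classifying "identified problems" in a build log boils down to regex scans like the following. These patterns and names are hypothetical examples, not the actual Build Failure Analyzer configuration:

```python
import re

# Example patterns of the sort one might register in the Jenkins Build
# Failure Analyzer for known intermittent errors (hypothetical values).
KNOWN_FAILURES = {
    "lxd-image-missing": re.compile(r"Image not found"),
    "lxd-saturated": re.compile(r"too many requests", re.IGNORECASE),
}


def identify_problems(log_text: str) -> list:
    """Return the names of known intermittent failures seen in a build log."""
    return [name for name, pat in KNOWN_FAILURES.items() if pat.search(log_text)]


assert identify_problems("Error: lxc init failed: Image not found") == ["lxd-image-missing"]
```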
[15:29] <blackboxsw> build 78 hit inability to lxc init ... Image not found....  I'm going to check our jenkins worker nodes to get status of lxd && services
[15:41] <blackboxsw> aciba: did you point me to a pycloudlib PR you thought might have contributed to some jenkins failures?
[15:41] <blackboxsw> I'm re-running a couple of SSH-related integration test failures at the moment on our jenkins node to see if it repeats the errors
[15:41] <blackboxsw> I know we recently added the azure/api changes too
[15:43] <blackboxsw> so that could be causing a minor bump in SDK/API integration w/ azure because our jenkins nodes aren't configured w/ the right version of the azure tooling/libs
[15:49] <blackboxsw> aciba: looks like the rest of the SSH-related integration test failures were all "unable to find image" errors for LXD in https://jenkins.canonical.com/server-team/view/cloud-init/job/cloud-init-integration-bionic-lxd_container/78/   so your PR should solve the LXD failures there and I'll re-kick the job once the daily PPA builds it
[15:50] <blackboxsw> I still have to sort the azure-related jobs and what is causing the pycloudlib problems, I think.
[15:56] <holmanb> blackboxsw: you may have already found it, but this is the link aciba shared: https://github.com/canonical/pycloudlib/pull/209, and the PR where we bumped pycloudlib version: https://github.com/canonical/cloud-init/pull/1604
[15:56] <holmanb> I was reviewer on both of those
[15:57] <blackboxsw> aah thanks holmanb . yeah, not sure exactly what our jenkins node is missing yet, since that seems to work locally. but I bet that's our only issue w/ jenkins job failures at the moment
[15:57] <blackboxsw> well that and https://github.com/canonical/cloud-init/pull/1641/files
[15:58] <holmanb> +1 seems likely
[15:58] <blackboxsw> I think what happened with all the image not found errors on LXD is one of our cleanup jobs may have collided w/ the current active run
[15:58] <blackboxsw> and blew away the test images mid test-run.
[15:59] <blackboxsw> so we might have a timing thing to sort with how our test jobs are striped/time-splayed
[16:00] <aciba> One way to authenticate is using the az-cli, and I provided a PR with the installation. There could be a CLI-less way to log in, but I don't have access to the Jenkins secrets to see how the credentials are defined. But I am pretty sure the non-LXD failures are related to that
[16:06] <blackboxsw> aciba: your integration test PR has landed. I've just kicked off our github-mirror job. I'm testing Azure jobs locally now and from the jenkins worker node
[16:08] <aciba> blackboxsw: holmanb: thanks for merging it. I was waiting for the build to pass. But you did it faster :)
[19:03] <blackboxsw> aciba: holmanb falcojr https://github.com/canonical/pycloudlib/pull/210 should fix jenkins jobs for Azure. If I can get a review on that today it'd be great, then we can unblock a kinetic release for tomorrow
[19:04] <blackboxsw> I'm going to kick off an integration test locally on a vm that has no Azure CLI to confirm behavior now, though spot testing on our jenkins node appeared to work
[20:38] <blackboxsw> found another integration test that failed due to us adding the ansible module. https://jenkins.canonical.com/server-team/view/cloud-init/job/cloud-init-integration-kinetic-ec2/55/testReport/junit/tests.integration_tests.modules.test_ca_certs/TestCaCerts/test_clean_log/   Fix is here: https://github.com/canonical/cloud-init/pull/1643
[20:39] <blackboxsw> and looks like our lxd lvm test using bootcmd may experience intermittent errors holmanb https://jenkins.canonical.com/server-team/view/cloud-init/job/cloud-init-integration-bionic-gce/66/testReport/tests.integration_tests.modules/test_lxd/test_storage_lvm/
[20:39] <blackboxsw> doesn't affect every cloud, or even every run.
[20:39] <blackboxsw> but exit 100 from apt I think generally means another process is calling apt update
[20:39] <blackboxsw> or apt install rather
[20:39] <holmanb> blackboxsw: thanks for the heads up, looking
[20:39] <blackboxsw> and holds the apt locks
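A retry wrapper along these lines would paper over the transient exit-100 case while another process holds the apt/dpkg locks. This is an illustrative sketch, not cloud-init's or the test's actual code, and `run_with_lock_retry` is a hypothetical name:

```python
import subprocess
import time


def run_with_lock_retry(cmd, attempts=5, delay=0.1):
    """Retry a command while it exits with status 100 (apt-get's generic
    error code, commonly seen when another apt/dpkg process holds the
    locks). Returns the attempt number that succeeded."""
    for attempt in range(1, attempts + 1):
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return attempt
        if result.returncode == 100 and attempt < attempts:
            time.sleep(delay)  # wait for the other process to release the locks
            continue
        raise RuntimeError(f"{cmd[0]} failed with exit {result.returncode}")


# e.g. run_with_lock_retry(["apt-get", "install", "-y", "lvm2"])
```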
[20:44] <blackboxsw> I'm also bumping the pycloudlib commit hash in that cloud-init PR to get azure fixes too.
[20:50] <holmanb> blackboxsw: gotcha, was going to ask about that
[20:50] <holmanb> they aren't logically related are they?
[20:50] <blackboxsw> holmanb: I wasn't sure if there'd be one more pycloudlib fix or not
[20:50] <blackboxsw> not at all, so we could separate those commits and just push separately
[20:50] <holmanb> not a big deal for a 2-line commit, up to you
[20:50] <holmanb> just checking 
[20:51] <holmanb> *3-line diff
[20:51] <blackboxsw> yeah, let's save the 5 ¢ on Travis runs and consolidate :)
[20:52] <blackboxsw> really it's that I didn't want to wait 35 mins consecutively for both runs to complete
[20:52] <holmanb> that's where the real $$ savings lies ;)
[20:53] <blackboxsw> you mean you get paid more than 10 ¢ an hour?
[20:53] <blackboxsw> time for me to ask for a raise
[20:53] <holmanb> hehehe
[21:32] <blackboxsw> https://bugs.launchpad.net/cloud-init/+bug/1983516 ok this affects the same type of bigger azure multi-nic systems. The more nics we have, the bigger the race window with the kernel's NIC renaming that happens during the init-local timeframe. Thanks cjp256 for the triage work there!
[21:33] <blackboxsw> I definitely think we should see what we can do in this space for cloud-init 22.3 if we can.
[21:47] <blackboxsw> thanks for the integration test merges. I've sync'd content to Launchpad and kicked off daily recipe builders https://code.launchpad.net/~cloud-init-dev/+recipe/cloud-init-daily-jammy etc
[21:48] <blackboxsw> I expect to kick off jenkins runners once bits are published in ppa:cloud-init-dev/daily
[21:48] <blackboxsw> I'll be spending the rest of the day on azure v5 systems