[16:01] <stanguturi> Hi, is this the correct channel for the bi-weekly meeting for 18.1 or do I need to log in to another channel? Thanks.
[16:01] <blackboxsw> stanguturi: absolutely. probably going to start it in a couple minutes
[16:01] <stanguturi> ok Great. Thanks.
[16:04] <blackboxsw> ok here goes
[16:04] <blackboxsw>  #startmeeting Cloud-init bi-weekly status meeting
[16:04] <blackboxsw> hey folks thanks for joining in to another cloud-init biweekly status meeting
[16:05] <blackboxsw> the early meeting day this week is to avoid hitting the upcoming US holiday on Monday
[16:05] <blackboxsw> This meeting is probably going to be short, but we wanted to generate any discussion around the release we have scheduled for next week. I'll go through the following topics
[16:06] <blackboxsw> Recent changes, In-progress development, Release 18.1 discussion, Office hours (30 mins)
[16:06] <blackboxsw> Without further ado...
[16:06] <blackboxsw> #topic Recent changes
[16:07]  * blackboxsw is sad I don't think meetingology is logging this meeting
[16:07] <nacc> blackboxsw: i saw a leading space in #startmeeting
[16:07] <nacc> not sure if it matters
[16:07] <blackboxsw> ahh nacc I'll try again
[16:08] <blackboxsw> #startmeeting Cloud-init bi-weekly status meeting
[16:08] <meetingology> Meeting started Fri Feb 16 16:08:08 2018 UTC.  The chair is blackboxsw. Information about MeetBot at http://wiki.ubuntu.com/meetingology.
[16:08] <meetingology> Available commands: action commands idea info link nick
[16:08] <blackboxsw> much better thanks nacc
[16:08] <nacc> blackboxsw: yw
[16:08] <blackboxsw> #topic Recent changes
[16:08] <blackboxsw> Cloud-init upstream team has been working on an SRU for Artful and Xenial.
[16:09] <blackboxsw> We discovered a couple of SRU-blocking bugs on EC2 as well as cloud-init subcommands so we've landed a couple of fixes there which are queued for SRU now
[16:09] <blackboxsw> * cloud-init status --wait blocks until all stages complete (LP: #1747965)
[16:09] <blackboxsw> * SRU EC2 upgrade path fix for 'systemctl restart cloud-init.service' (LP: #1748354)
[16:09] <blackboxsw> * Fix ds-identify nocloud detection with bind mounted writable/system-data directory (LP: #1747070)
[16:09] <blackboxsw> * Tests: include missing unit tests in python2.6 environments. Fix py2.6 incompatibilities
[16:11] <blackboxsw> * Fixed centos cloud-init build and test tooling
[16:12] <blackboxsw> * SUSE: Fix groups used for ownership of cloud-init.log [RobertS]
[16:12] <blackboxsw> thanks folks for continuing to push on quality of cloud-init releases.
[16:12] <smoser> o/ thanks for starting blackboxsw
[16:13] <blackboxsw> not sure if I'm missing any other content that has landed in the last week and a half
[16:14] <blackboxsw> I also think powersj rharper may have sorted a couple of issues with storage on our common CI on Jenkins
[16:14] <blackboxsw> #link https://jenkins.ubuntu.com/server/view/cloud-init/
[16:14] <powersj> Yes CI is up and running again, I have more defensive statements in to prevent us from running out of storage
[16:15]  * blackboxsw is not sure, are there rumors we might have more hardware dedicated to jenkins in the future powersj ?
[16:15] <powersj> We do, however it is our jenkins master that runs out of storage :\
[16:15] <blackboxsw> ahh gotcha, SPOF
[16:16] <powersj> yeah
[16:16] <blackboxsw> ok, if no other work is 'complete', let's jump topics
[16:16] <blackboxsw> ahh forgot ryan landed
[16:16] <blackboxsw>     net: accept network-config in netplan format for renaming interfaces
[16:17] <blackboxsw> per LP: #1709715
[16:18] <blackboxsw> #topic In-progress Development
[16:18] <blackboxsw> So we are working toward quality on the 18.1 release for next week.
[16:19] <blackboxsw> Ubuntu specifically is finalizing verification on the cloud-init 17.2.35 update for the Xenial and Artful series (expectation is that this SRU will be public in 1 week). 17.2.35 is a snapshot of tip from a couple days ago
[16:20] <blackboxsw> we've also published tip of cloud-init master to bionic to keep the development release up to date with latest cloud-init
[16:20] <blackboxsw> current ongoing work as always is on our trello board. we tried tidying up the cards a bit
[16:20] <blackboxsw> #link https://trello.com/b/hFtWKUn3/daily-cloud-init-curtin
[16:21] <blackboxsw> we have upcoming branches for a new snap cloud-config module for configuring and maintaining snap packages
[16:21] <rharper> https://bugs.launchpad.net/cloud-init/+bug/1749722
[16:21] <rharper> I'm actively working on that
[16:22] <blackboxsw> this snap work will obsolete snappy and snap_config modules, so expect that they'll be deprecated. in 18.1 and dropped completely in 18.2
[16:22] <smoser> https://code.launchpad.net/~rski/cloud-init/+git/cloud-init/+merge/312284
[16:22] <smoser> i just moved that back into review
[16:22] <blackboxsw> #link https://bugs.launchpad.net/cloud-init/+bug/1749722
[16:22] <smoser> hope to take a look at it today.
[16:22] <blackboxsw> #link https://code.launchpad.net/~rski/cloud-init/+git/cloud-init/+merge/312284
[16:23] <blackboxsw> so per rharper's branch, chrony will be a first-class citizen in cloud-init
[16:26] <blackboxsw> per cards in our trello board TODO lane, any card above the 18.1 release card (and anything in Doing/Review  lane) is something we want to land in the 18.1 release
[16:26] <blackboxsw> ... next topic so we can talk about release
[16:27] <blackboxsw> #topic cloud-init version 18.1 release (2/23/2018)
[16:27] <blackboxsw> next thursday we want to cut tip of cloud-init with any features we want to fold into the 18.1 release
[16:28] <blackboxsw> this point in the meeting is a good opportunity for us to discuss features and bugs that any folks think are a priority for this release
[16:29] <blackboxsw> smoser we saw some talk about archlinux support/updates, do we know whether we've gotten any updates about gaps/needs/bugs there?
[16:29] <stanguturi> @blackboxsw: I have two requests. One for the merge request and one about the bug.
[16:29] <smoser> blackboxsw: i've not seen any more than that developer asked about here in the channel.
[16:30] <stanguturi> @blackboxsw: Let me know if I can post my questions here or discuss them offline.
[16:30] <blackboxsw> stanguturi: please do discuss here. open forum :)
[16:30] <blackboxsw> if it gets too long a discussion, we can take it to your branch or email
[16:31] <blackboxsw> #link https://code.launchpad.net/~sankaraditya/cloud-init/+git/cloud-init/+merge/337736
[16:31] <blackboxsw> for reference right ?
[16:31] <stanguturi> @blackboxsw: Thanks. I have a merge request posted at https://code.launchpad.net/~sankaraditya/cloud-init/+git/cloud-init/+merge/337736
[16:31] <stanguturi> Want this to get into 18.1. It's a low-risk fix. Should not break anything.
[16:32] <stanguturi> Also, found a bug in ds-identify . https://bugs.launchpad.net/cloud-init/+bug/1749980
[16:32] <blackboxsw> ok just glancing at your branch now stanguturi, looks fairly straightforward, and as always I'd like to see some unit tests covering that changeset
[16:33] <stanguturi> @blackboxsw: We already have unit tests for DataSourceOVF. This actually doesn't add any new functionality. The existing test cases should be sufficient.
[16:33] <blackboxsw> we have existing unit tests in tests/unittests/test_ds_identify.py which should be easy to extend for the additional detection
[16:33] <blackboxsw> in ds-identify
[16:34] <blackboxsw> yeah I was thinking more about ds-identify specifically
[16:35] <blackboxsw> all said though, that branch looks low-risk and we can probably get that landed before release.
[16:35] <blackboxsw> I'll add a card to trello for us to shepherd that in.
[16:35] <stanguturi> @blackboxsw: Great. Thanks.
[16:36] <stanguturi> @blackboxsw: Also I have a question about https://bugs.launchpad.net/cloud-init/+bug/1749980 Any inputs will be great.
[16:37] <blackboxsw> #link https://bugs.launchpad.net/cloud-init/+bug/1749980
[16:37] <blackboxsw> looking
[16:38] <blackboxsw> ohh good stanguturi, we'll sort that bug and provide more information on it
[16:38] <blackboxsw> for that bug discussion, let's move it to the "office hours" topic which comes up next
[16:38] <blackboxsw> I'd like smoser rharper to peek at that too
[16:38] <stanguturi> @blackboxsw: Ok. Sure. Thanks
[16:39] <blackboxsw> any other topics, branches or bugs that folks are itching to get in for 18.1 release?
[16:41] <blackboxsw> kpcyrd: any updates or concerns on archlinux that you are aware of currently?
[16:42] <smoser> stanguturi: you can run a command there now ?
[16:42] <blackboxsw> let's transition to office hours now
[16:42] <smoser> 2 things
[16:42] <blackboxsw> #topic Office hours (next ~30 mins)
[16:43] <stanguturi> @smoser: Sorry. Didn't quite get the question.
[16:43] <blackboxsw> And thanks all for joining. Any burning questions, bugs, branches that need discussion can be brought up now.
[16:44] <stanguturi> @smoser: Oh. Are you asking if I can run any commands in my virtual machine right now.? Yeah. Sure.
[16:44] <smoser> stanguturi: can you run stuff in that system?
[16:44] <smoser> a.) cat /run/cloud-init/ds-identify.log
[16:44] <smoser> b.) idstr="http://schemas.dmtf.org/ovf/environment/1"
[16:45] <smoser> grep --quiet --ignore-case "$idstr" /dev/sr0
[16:45] <smoser> grep --quiet --ignore-case "$idstr" /dev/sr0 && echo y || echo n
[16:46] <smoser> stanguturi: basically the 'is_cdrom_ovf' should have gone down the path into that grep of the cdrom block device
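For reference, smoser's probe above can be wrapped up as a small sketch of the OVF-marker check that ds-identify's is_cdrom_ovf path performs (the device path is an example; the real ds-identify does more than this one grep, and the throwaway file below is only so the sketch can be exercised without a real cdrom):

```shell
# Sketch of the OVF environment-marker probe described above.
idstr="http://schemas.dmtf.org/ovf/environment/1"

has_ovf_marker() {
    # $1: block device or file to probe, e.g. /dev/sr0
    grep --quiet --ignore-case "$idstr" "$1" && echo y || echo n
}

# Exercise it against a throwaway file rather than a real cdrom device:
printf 'env %s marker' "$idstr" > /tmp/fake-ovf-env
has_ovf_marker /tmp/fake-ovf-env    # prints: y
```

An I/O error from grep (as stanguturi hit below) means the device itself could not be read, so the check never gets as far as matching the marker.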
[16:48] <stanguturi> @smoser: grep --quiet --ignore-case "$idstr" /dev/sr0 returned "grep: /dev/sr0: Input/output error"
[16:48] <stanguturi> @smoser: grep --quiet --ignore-case "$idstr" /dev/sr0 && echo y || echo n returned "grep: /dev/sr0: Input/output error and then new line and then n'
[16:48] <meetingology> stanguturi: Error: No closing quotation
[16:48] <blackboxsw> heh thanks meetingology
[16:50] <stanguturi> @smoser: Actually, read_fs_info doesn't set DI_ISO9660_DEVS on my system, and because of this, dscheck_OVF returns DS_NOT_FOUND.
[16:51] <smoser> stanguturi: what release are you on ?
[16:51] <stanguturi> Trying it on 17.04 zesty desktop
[16:52] <stanguturi> and tried with top of the tree code in cloud-init.
[16:56] <smoser> stanguturi: could you potentially let me in via ssh ?
[16:57] <stanguturi> @smoser: Sorry. It's on my private network. Will not be able to provide ssh.
[16:58] <stanguturi> @smoser: We can do a webex conference if you want.
[17:03] <smoser> stanguturi: can you ssh out of the node ?
[17:03] <stanguturi> @smoser: Yes.
[17:07] <smoser> ok. /query window
[17:11] <blackboxsw> ok this triage will continue. if there are no other pressing bugs/concerns, we'll close out this meeting and keep pushing toward 18.1 upstream release next thursday
[17:11] <blackboxsw> thanks again for your time folks. I'll post these minutes to the cloud-init github page
[17:12] <blackboxsw> #link https://cloud-init.github.io
[17:17] <blackboxsw> next meeting March 5th, same "bat time" same "bat channel"
[17:18] <blackboxsw> #endmeeting
[17:18] <meetingology> Meeting ended Fri Feb 16 17:18:00 2018 UTC.
[17:18] <meetingology> Minutes:        http://ubottu.com/meetingology/logs/cloud-init/2018/cloud-init.2018-02-16-16.08.moin.txt
[17:22] <rharper> blackboxsw: thanks!
[17:39] <kpcyrd> blackboxsw: I didn't have the time to test it yet. Do you have any recommendations for a test-bed?
[17:40] <kpcyrd> also, if somebody knows somebody at OVH I would be very interested in their custom cloud-init build for archlinux :)
[18:21] <blackboxsw> kpcyrd: not sure exactly what you are looking for but lxc looks to have an archlinux image:
[18:21] <blackboxsw> | archlinux (5 more)            | 03a5245fd014 | yes    | Archlinux current amd64 (20180216_01:27) | x86_64  | 130.49MB | Feb 16, 2018 at 12:00am (UTC) |
[18:21] <blackboxsw> that's from 'lxc image list images:'
[18:22] <blackboxsw> so if you are on ubuntu, 'lxc launch images:archlinux myarch-container' might get you set up
[18:24] <blackboxsw> I'm not sure if cloud-init is installed in that archlinux container image though :/
[21:57] <blackboxsw> rharper: smoser ok, so I have a puzzle to ponder for the snap module
[21:58] <blackboxsw> installing snaps on containers requires that squashfuse get installed. so I thought I'd provide the following config
[21:59] <blackboxsw> https://pastebin.ubuntu.com/p/FzXYSJyX2d/
[21:59] <blackboxsw> I need apt upgrade to run first on an image so I can apt install squashfuse
[22:00] <blackboxsw> but upgrades are run in cloud-init's final stage
[22:00] <blackboxsw> and I have the snap module scheduled at the module config timeframe
[22:01] <blackboxsw> any suggestions? though I could just provide snap: {commands: [apt-get update, apt-get install squashfuse]}
[22:02] <blackboxsw> bah and reminds me I needed to bring up again whether we actually want to try limiting snap:commands to only running snappy commands to avoid potential abuse/bad practices
[22:03] <blackboxsw> or since squashfuse is a known dependency, maybe the snap module should apt update && apt install squashfuse for us if snap configuration is provided
[22:03] <blackboxsw> ... and we know that we are in a container
[22:14] <smoser> blackboxsw: you might not have seen my thread on that.
[22:14] <smoser>  https://bugs.launchpad.net/ubuntu/+source/squashfuse/+bug/1628289
[22:16] <rharper> blackboxsw: yeah; I think it would be OK to install squashfuse if it's not already present and we're in a container
[22:17] <smoser> blackboxsw: its busted. dont work around it in cloud-init.
[22:18] <blackboxsw> "it's busted"  you mean snappy requiring squashfuse to run in a container?
[22:18] <smoser> the container image should have it.
[22:18] <smoser> for unknown reason snapd does not want to fix that.
[22:18] <smoser> https://github.com/snapcore/snapd/pull/3605
[22:19] <smoser> https://github.com/snapcore/snapd/pull/2856
[22:19] <smoser> they are not able to sru it because it would cause issue (due to an apt bug that would hold snapd rather than grab the new recommends)
[22:19] <rharper> let's raise it again
[22:19] <smoser> but that has no bearing against bionic , where they're just ignoring it
[22:19] <smoser> see the second. i asked on jan 17
[22:19] <rharper> to kirkland
[22:19] <smoser> you saw my thread, rharper. i raised to Beret
[22:20] <smoser> blackboxsw: so cloud-init should not try to fix this broken scenario
[22:20] <smoser> i too went down that route which is how i got to know this stuff.
[22:20] <blackboxsw> ok, so cloud-init should expect breakage on lxd platform
[22:21] <rharper> smoser: yes, I'm fine with not working around it, should we at least raise a warning if we detect snap installs and on-container in cloud-init and reference the LP and PRs ?
[22:21] <rharper> I'm sorta of the mind that we shove it in for testing for now; it's just much nicer to test cloud-config with lxd
[22:21] <smoser> what's annoying is that if they do not fix it in bionic, it will be another 2 years (at least) until it's fixed
[22:21] <smoser> how about this
[22:21] <blackboxsw> cloud-init may not need logic to forcibly install squashfuse on a container, but we could present an example doc that says this is how to install snaps on a container.
[22:21] <smoser> a.) do not restrict snap commands to 'snap' ...
[22:22] <blackboxsw> smoser my current branch doesn't restrict
[22:22] <smoser> b.) warn if snap commands have a arg0 other than snap
[22:22] <blackboxsw> sure
[22:22] <smoser> (to discourage as you suggest)
[22:23] <smoser> c.) then we can feed 00_apt: [apt-get, install, -qy, squashfuse]
[22:23] <blackboxsw> and cloudinit module example will list an lxd-supported snap:commands  cloud-config that notes the bug
[22:23] <blackboxsw> yeah
[22:23] <blackboxsw> docs are already in place for this. just wanted to check if we wanted to bake that 'fix' into cloud-init module logic instead of exposing it
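Pulling smoser's (a)-(c) together, the interim cloud-config might look like the sketch below. The snap module schema was still under review at this point, so the `snap:`/`commands:` key names are illustrative; the `00_` prefix relies on commands keys sorting so the apt prep runs first, as smoser describes:

```yaml
#cloud-config
# Illustrative only: the snap module schema was in review at the time.
snap:
  commands:
    # Sorted keys run the apt prep before any snap commands; a non-snap
    # arg0 like this would trigger the warning proposed in (b).
    00_apt: [apt-get, install, -qy, squashfuse]
    01_install: [snap, install, hello-world]
```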
[22:23] <smoser> i guess i'm even open to taking it another step
[22:23] <smoser>  snap/install_squashfs: true
[22:24] <smoser> or
[22:24] <smoser>  snap/squashfuse_in_container: true
[22:24] <blackboxsw> hrm since we know it's a bug, I agree we probably shouldn't bake in a workaround into our new snap module configuration properties. but I can be swayed.
[22:26] <smoser> as rharper suggested, for testing... it's just so useful
[22:27] <smoser> you can document that thing as E_NO_RELY_ON_THIS_TESTING_ONLY
[22:27] <blackboxsw> secondary (unrelated) question about snap module: our old snap_config module ran a "snap managed" check before trying to create a snap user, I'm planning an inline shell conditional example to check snap managed before the "snap known system-user" call to cover this case
[22:27] <smoser> even warn if its used.
[22:27] <blackboxsw> .. planning an *inline shell conditional documentation example* which would showcase check managed before create user
[22:29] <rharper> blackboxsw: I;ve not tried in a while, maybe the create-user does the check now
[22:29] <rharper> that'd be ideal
[22:30] <blackboxsw> agreed , I'll poke at it
[22:30] <blackboxsw> ok, so back to snaps on lxd for testing. rharper/smoser you are of the mind that snap config module just does the lift for us on containers with a warning message?
[22:31] <rharper> blackboxsw: I dunno how I feel about leaving a warning in about brokenness; which we'd need to update;
[22:31] <rharper> maybe in exception handler we could check if incontainer and no squashfuse but that still feels wrong
[22:31] <rharper> we don't really know what container environment we may be in
[22:31] <rharper> some may be privileged and others not
[22:33] <blackboxsw> vsm
[22:33] <blackboxsw> oops
[22:34] <blackboxsw> can't we know what container we're in checking /run/systemd/container
[22:34] <blackboxsw> and installing only if that's "lxc"
[22:35] <rharper> well, it's not the container name
[22:35] <rharper> it's the capabilities
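For reference, the systemd probe blackboxsw mentions can be sketched like this. cloud-init's util.is_container checks more than just this file, and, per rharper's caveat, the file reports the container *type*, not its capabilities or privilege level. The path is parameterized only so the sketch can be exercised against a plain file:

```shell
# Heuristic sketch of the /run/systemd/container probe discussed above.
container_type() {
    f="${1:-/run/systemd/container}"
    if [ -r "$f" ] && [ -s "$f" ]; then
        cat "$f"        # e.g. "lxc" under LXD
    else
        echo none
    fi
}

container_type    # "none" on hosts where the file is absent
```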
[22:37] <smoser> wait now.
[22:38] <smoser> i was not clear
[22:38] <smoser> snap/squashfuse_in_container: false
[22:38] <blackboxsw> ok, hmm. so explicit better than implicit side-effects based on perceived environment I suppose.  Shall we just surface the snap/squashfuse_in_container: true option then?
[22:38] <smoser> default to false
[22:38] <blackboxsw> yeah that's easy enough and easy to deprecate that option when that bug is fixed
[22:38] <smoser> and use util.is_container
[22:39] <blackboxsw> +1 on is_container. BTW I'm adding deprecation warning messages to both snappy and snap_config modules in my snap branch. I wanted to warn existing users that those modules will be removed in 18.2
[22:39] <blackboxsw> sound ok?
[22:39] <smoser> sure.
[22:40]  * smoser has to run.
[22:40] <blackboxsw> what about snapuser config option under users_and_groups?
[22:40] <blackboxsw> have a good one smoser
[22:40] <blackboxsw> can talk about this against my branch when I post it