[06:19] wallyworld: balloons: thumper: it seems Bionic bootstrap broke over the weekend. I have one bug that may have been there for a while, and one where it looks like they dropped an old package from bionic
[06:19] It seems a different package provides what we wanted, but we just always installed it from the old place
[07:04] jam: do you have details?
[07:05] wallyworld: bug #1764264, I have a patch up
[07:05] Bug #1764264: bionic cloud-init 18.2 refuses Juju's 'runcmd' stanza
[07:05] wallyworld: sorry, wrong bug
[07:05] bug #1764267
[07:05] Bug #1764267: python-software-properties not found on bionic
[07:06] wallyworld: https://github.com/juju/juju/pull/8602
[07:06] looking
[07:06] (the former is also an issue on bionic, I'm trying to sort out how big of a deal it is)
[07:08] jam: lgtm, a small change :-)
[07:10] wallyworld: thanks.
=== frankban|afk is now known as frankban
[09:02] wallyworld, jam, so roll both changes into 2.3.6 or ?
[09:02] balloons: what are you doing awake ? :)
[09:03] Good morning! ;)
[09:03] balloons: i still haven't had a chance to look at the potential dry-run regression
[09:04] no commits i can see since 2.3.5 touch that area, but i could be wrong
[09:05] I was thinking perhaps do the 2.4-beta1 first, since we have a queue to do
[09:06] Yea, the dry-run regression is weird. We fixed that
[09:07] jam, will you roll back the juju version in a PR so we're 2.3.6?
[09:07] balloons: yeah, I can roll it back.
[09:08] balloons: though we have a number of actual bugs with real Bionic support I'm finding out, so maybe we don't want to block 2.3.6 on that
[09:08] and instead we just say "not fully supported" yet, and wait for 2.3.7 or something
[09:08] Yea, the goal was to not break on bionic, not so much support it
[09:09] So 2.3.6 is already primed at that commit. Easy to finish
[09:15] balloons: so basic support is just broken on bionic with https://launchpad.net/bugs/1764267
[09:15] Bug #1764267: python-software-properties not found on bionic
[09:15] balloons: at least, I see we try to install a package that just isn't there anymore
[09:19] Yea, that one is pretty straightforward
[09:26] balloons: so, should we roll back to 2.3.6 and then release with the package fix, or just release and say bionic is not supported in 2.3.6?
[09:28] wallyworld: so I'm trying to merge 2.3 into develop, but 'cmd/juju/commands/resolved.go' was deleted in 2.4. it got moved to application/resolved.go
[09:28] jam, I think it's a hard call. If 2.4 is delayed too long, saying "upgrade for support" is harder
[09:28] wallyworld: do you know how your NoRetry fix is supposed to look in 2.4? Did you land the fix directly there?
[09:28] But that was the original intent
[09:28] balloons: well, I've always wanted us to support bionic in 2.3, and netplan support *is* there. Just I found about 4 other bionic support bugs while testing it today
[09:30] Yea, everyone else wanted to stick with xenial, which 2.3.6 as-is does do
[09:31] wallyworld: it looks like your fix was to pass !c.NoRetry instead of c.NoRetry, which looks to already be done in 2.4
[09:32] manadart: just to confirm, if I just resolve the lxd conflicts in favor of the 2.4 code, you're ok with that, right?
[09:35] jam: Yeah, I can interrogate and apply what is required.
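Resolving "in favour of the 2.4 code" amounts to taking HEAD's copy of the conflicted files mid-merge; a rough sketch of the pattern, assuming develop (2.4) is checked out and 2.3 is being merged in (jam quotes his exact shorthand command below, so the file paths here just follow the discussion):

    git checkout develop
    git merge 2.3                 # conflicts in container/lxd/...
    # Keep the develop (2.4) version of the conflicted files outright:
    git checkout HEAD -- container/lxd/lxd.go container/lxd/lxd_test.go
    git add container/lxd/lxd.go container/lxd/lxd_test.go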
[09:37] manadart: container/lxd/lxd_test.go seems like makeManager is a pretty big ball of differences
[09:37] in 2.4 it starts taking a string param, and you changed it to take a baseConfig() param
[09:38] manadart: although, it looks like the name string was always ignored
[09:38] in 2.4
[09:38] ... weird
[09:42] jam: Ignored in 2.3 as well. I think my change should be OK after-the-fact if you resolve in favour of 2.4.
[09:43] From my PR that should leave lxd.go and lxd_test.go untouched.
[09:43] Should still build/pass, yes?
[09:46] manadart: so I'm just doing "git co HEAD lxd.go lxd_test.go" so it forces it specifically to 2.4. merging lxd.go was trying to pull in "Remote" from the 'lxdclient' which wasn't being imported into container/lxd.go anymore, so I'm just punting
[09:47] manadart: export_test.go also needed to be reverted.
[09:48] jam: Ah, yes.
[10:20] manadart: https://github.com/juju/juju/pull/8606 is on its way to being merged. You could base your work off of that if you wanted, or you can wait for it to land
[10:21] but the container/lxd stuff is just upstream/develop so you can probably work from there already
[10:39] jam: Thanks.
[10:52] manadart: it has now landed
[12:07] manadart: heads up (can you review) https://github.com/juju/juju/pull/8609
[12:08] jam: Yep.
[12:08] manadart: it's trying to make a much smaller patch vs upstream lxd, so we can more easily transition.
[12:09] wpk's patch was rejected in favor of a different approach, so eventually we'll have to follow along. But presumably we can't do anything until we can update to master tip of lxd
[12:16] jam: Approved.
[14:25] manadart: https://github.com/juju/replicaset/pull/6 and https://github.com/juju/replicaset/pull/7 if you could be so kind
[14:35] jam: OK. Opened https://github.com/juju/juju/pull/8610 over here too.
=== deanman_ is now known as deanman
[14:53] jam: Approved both. 1 trivial comment.
[16:22] charm build failure alert https://paste.ubuntu.com/p/PfBYVHMFy8/
[16:22] anyone else failing to get the setuptools wheel?
[16:22] https://files.pythonhosted.org/packages/20/d7/04a0b689d3035143e2ff288f4b9ee4bf6ed80585cc121c90bfd85a1a8c2e/setuptools-39.0.1-py2.py3-none-any.whl
[16:47] seems to be working now
[16:50] bdx: so the python community rolled over to the new pypi site today
[16:50] bdx: first redeploy/new software in 10 yrs
[16:50] bdx: so there are going to be some rough spots in python community packaging access today heh
[16:51] rick_h_: ahhh good to know, thanks
[16:51] yea, I know a few things getting bit by the big upgrade today
[16:51] also hitting this in the apt-installed charm package https://github.com/juju/charm/issues/240
[16:51] bdx: hah, yea that packaging is a new upgraded pip version
[16:51] I'll keep that in mind as I go about my way
[16:51] I wonder if that hit as well
[16:52] cory_fu: ^
[16:52] I wonder if the pip is the distro's, from pypi, or something else used there
[16:54] rick_h_, bdx: I hit that today as well when running some tests. Doing a manual `sudo pip install --upgrade pip` to pick up 10.0.0 fixed it for me. We might need to rebuild the charm snap
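A sketch of that workaround end to end (version numbers per the discussion; which pip you are upgrading depends on whether charm came from the distro package or the snap):

    pip --version                    # a pre-10 pip here is the suspect
    sudo pip install --upgrade pip   # picks up 10.0.0 at the time
    pip --version                    # confirm the new version
    charm build                      # retry the build that failed to fetch the wheel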
[16:54] cory_fu: cool, the charm snap doesn't exhibit ^
[16:55] bdx: If you got the error during charm build, then I think it's the version of pip inside the charm snap that's causing the issue
[16:56] ahh, but I only get it with the apt-installed snap
[16:56] geh
[16:56] bdx: Mainly because that's what the venv is seeded with
[16:56] * apt-installed charm
[16:56] Hrm
[16:56] I see
[16:56] So yeah, the apt package of charm is pretty outdated
[16:56] yeah, the snap has the pip wheel statically defined in there
[17:44] so i have a MAAS controller deployed, and i've also manually added an s390x machine to it (let's call that machine 0)
[17:45] in my bundle i have 2 charms, charm-x86 and charm-s390x. i use constraints: arch=s390x for the latter charm
[17:45] when i deploy the bundle, it attempts to request a machine of s390x arch from MAAS, rather than use the manually added machine
[17:45] is this expected behaviour?
=== narindergupta is now known as canonical
[17:50] that behaviour is the same whether i use map-machines=existing or not
[17:50] however, if i just deploy the charm directly, it works.
[17:51] guess i'll log a bug
[17:53] hi ...
[17:53] ejat: Can you repeat the OpenStack credential error you were getting when trying to use your OpenStack with Juju?
[17:53] Please ensure the credentials are correct. A common mistake is
[17:53] to specify the wrong tenant. Use the OpenStack "project" name
[17:53] for tenant-name in your model configuration.
[17:54] admcleod_: You might need to use the --map-machines option to juju deploy to get it to use the pre-created machine
[17:54] admcleod_: Otherwise, requesting a new machine for the bundle is the expected behavior
[17:54] cory_fu: yeah, it doesn't work with the bundle
[17:54] cory_fu: with or without map-machines
[17:55] admcleod_: Odd. I would have expected --map-machines to do what you want
[17:56] cory_fu: same. bug on its way
[17:57] ejat: And, did you add the OpenStack credential using `juju add-credential`, `juju autoload-credentials`, or `conjure-up`?
[17:57] juju add-credential
[17:58] add cloud 1st then add-credential
[18:06] ejat: Hrm. I don't have an OpenStack to test with, but the error message makes it sound like something was entered incorrectly. Are you sure you chose the correct auth-type and typed everything in correctly? You could try using `juju add-credential --replace` and type it in again
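For reference, a sketch of what the stored credential looks like for the userpass auth-type (all values here are placeholders; the field names match the ones that come up in the debugging further down):

    juju credentials --format yaml --show-secrets
    # local-credentials:              # 2.3-era clients prefix the top key with "local-"
    #   myopenstack:
    #     admin:
    #       auth-type: userpass
    #       username: admin
    #       password: sekrit
    #       tenant-name: admin        # the OpenStack *project* name, not its ID
    #       domain-name: admin_domain # keystone v3 domain; see the discussion below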
[18:07] tenant-name == project name?
[18:07] or using the project ID?
[18:08] Yeah, the message said project name, so I'd try that if you used something different previously
[18:08] bug #1543262
[18:08] Bug #1543262: keystone V3 support needed
[18:08] it's almost the same
[18:08] but that bug is fixed
[18:14] ejat: You could check the output of `juju show-cloud openstack` and `juju list-credentials --format=yaml openstack` and see if anything looks odd there
[18:14] ejat: if you've sourced your nova-rc file, juju autoload-credentials sometimes works better for OpenStack
[18:18] ejat: if you are using keystone v3, you need to make sure your novarc is also keystone v3
[18:18] admcleod_: using novarc is fine
[18:20] ejat: what do you mean
[18:22] I *think* he means that juju doesn't look for the same environment variables that are set by the novarc for kv3
[18:22] using autoload-credentials
[18:22] thanks bdx
[18:23] I hit it this weekend too
[18:23] bdx: so how did u counter it?
[18:24] 1) download novarc from horizon, 2) source novarc on local machine, 3) add openstack cloud in juju (`juju add-cloud myopenstack`), 4) run `juju autoload-credentials` and select the openstack cloud
[18:24] ^ something like that I think
[18:25] ejat: I'll try to reproduce ^ in the next day or so and get a bug filed
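Spelled out as commands, bdx's sequence looks roughly like this (cloud name and rc file path are placeholders):

    source ./novarc               # the keystone v3 rc file downloaded from Horizon
    juju add-cloud myopenstack    # define the OpenStack cloud (type, keystone endpoint)
    juju autoload-credentials     # offers credentials found in the sourced OS_* env vars
    juju bootstrap myopenstack    # verify the credentials actually authenticate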
[18:26] owh let me try
[18:29] bdx: result still the same
[18:29] ejat: can you pastebin the novarc without passwords etc?
[18:31] bdx: what is the difference between the env vars set by novarc for kv3 and the novarc contents downloaded from horizon?
[18:31] admcleod_: http://paste.ubuntu.com/p/BGXMWKq6Qr/
[18:32] ejat: ok, thanks, looks fine
[20:29] can anyone give advice?
[20:46] ejat: sure
[20:46] thumper: really much appreciated
[20:47] still can't auth with openstack :(
[20:49] personally I don't know much about openstack, but others here do
[20:49] however you probably need to give more information
[20:49] which openstack
[20:50] how are the creds defined
[20:50] what error are you getting
[20:50] etc
[20:52] i'm using the openstack queens bundle with a little bit of customization to include heat + telemetry
[20:52] using MAAS
[20:52] added the openstack into the cloud list
[20:53] then juju add-credential
[20:54] then tried to bootstrap
[20:55] i got this:
[20:55] ERROR authentication failed.
[20:55] Please ensure the credentials are correct. A common mistake is
[20:55] to specify the wrong tenant. Use the OpenStack "project" name
[20:55] for tenant-name in your model configuration.
[21:32] ejat: are you able to use those same credentials without juju in the mix and have it work?
[21:32] thumper: the openstack cli works fine
[21:36] hml: any ideas around openstack creds and queens?
[21:37] thumper: no.. i was looking at the nova rc file ejat put in a pastebin earlier
[21:39] ejat: can you give me the results of 'juju credentials --format yaml'?
[21:40] ejat: also run with --show-secrets (dash dash) and verify your password
[21:42] hml: https://paste.ubuntu.com/p/cxrydwmtmQ/
[21:46] ejat: it looks good - the only thing i see is that domain-name is also set, which mine doesn't have… should be okay
[21:47] ejat: can you get me the output of juju bootstrap --debug please? (dash dash is auto-correcting on me to —)
[21:51] hml: https://paste.ubuntu.com/p/sGQgNDcf9x/
[21:51] * hml looking
[21:54] ejat: what do you get from 'wget http://172.15.1.102:5000/v3/auth/tokens'?
[21:56] ejat ^^
[21:59] hml: https://paste.ubuntu.com/p/xQTGQXvTMF/
[22:00] ejat: so it looks like there is something wrong with the credentials, as the bootstrap output says… let me try something on my box…
[22:02] ejat: it appears that juju doesn't like domain-name in the credentials… this is a bug. please file one!
[22:03] ejat: i can also give you a workaround until it's fixed
[22:11] ejat: juju credentials --format yaml --show-secrets > /tmp/creds.yaml
[22:12] https://bugs.launchpad.net/juju/+bug/1764550
[22:12] Bug #1764550: can't authenticate juju credential with openstack queens
[22:12] ejat: edit /tmp/creds.yaml to remove 'local-' from the first line - and remove the domain-name line as well
[22:12] can u help comment the workaround on LP?
[22:13] ejat: then juju add-credential --replace -f /tmp/creds.yaml
[22:13] ejat: sure
[22:13] hml: thanks a lot .. maybe bdx can refer to it too
[22:13] i guess he's also facing the same issue
[22:14] ejat: how did you create the juju credentials? that'd be helpful to know in the bug - as well as which novarc you were using, from kv3 or horizon
[22:16] ejat: I can auth to openstack just fine, mine was a totally different thing I was experiencing with autoload-credentials
[22:17] bdx: owh sorry
[22:17] ejat: you can't just curl or wget the endpoint like you are in that bug
[22:17] misunderstood you
[22:17] you have to get the token using the openstack client etc etc
[22:17] it is meant to fail if you try to interact with it like that
[22:17] hml: from horizon
[22:18] ejat: interesting.
[22:18] ejat: did you use autoload-credentials, or add-credential?
[22:23] ejat: i'm EOD for now.. will be online tomorrow though. i tried the cause and workaround myself on an openstack v3 - so hopefully you're good to bootstrap!
[22:24] thanks a lot hml
[22:24] see u tomorrow
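Pulling hml's workaround for bug #1764550 together as a single sketch (the manual edit is shown as comments; the cloud name is a placeholder):

    juju credentials --format yaml --show-secrets > /tmp/creds.yaml
    # Edit /tmp/creds.yaml by hand:
    #   1. change the first line from "local-credentials:" to "credentials:"
    #   2. delete the "domain-name: ..." line under the openstack credential
    juju add-credential --replace -f /tmp/creds.yaml
    juju bootstrap myopenstack    # the bootstrap should now get past authentication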