[06:19] <jam> wallyworld: balloons: thumper: it seems Bionic bootstrap broke over the weekend. I have one bug that may have been there for a while, and one where it looks like they dropped an old package from bionic
[06:19] <jam> It seems a different package provides what we wanted, but we just always installed it from the old place
[07:04] <wallyworld> jam: do you have details?
[07:05] <jam> wallyworld: bug #1764264 I have a patch up
[07:05] <mup> Bug #1764264: bionic cloud-init 18.2 refuses Juju's 'runcmd' stanza <bionic> <juju> <cloud-init:New> <juju:Triaged> <juju 2.3:Triaged> <https://launchpad.net/bugs/1764264>
[07:05] <jam> wallyworld: sorry, wrong bug
[07:05] <jam> bug #1764267
[07:05] <mup> Bug #1764267: python-software-properties not found on bionic <bionic> <bootstrap> <juju:In Progress by jameinel> <juju 2.3:In Progress by jameinel> <https://launchpad.net/bugs/1764267>
[07:06] <jam> wallyworld: https://github.com/juju/juju/pull/8602
[07:06] <wallyworld> looking
[07:06] <jam> (the former is also an issue on bionic, I'm trying to sort out how big of a deal it is)
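For context, bug #1764264 concerns cloud-init rejecting the runcmd stanza in Juju's generated user-data. A minimal illustrative cloud-config with a runcmd stanza (not Juju's actual user-data) looks like:

```shell
# Illustrative only: the kind of cloud-config "runcmd" stanza that
# cloud-init consumes and that bug #1764264 is about. This is NOT the
# actual user-data Juju generates; paths and commands are made up.
cat > /tmp/user-data <<'EOF'
#cloud-config
runcmd:
  - [sh, -c, 'echo bootstrapped > /tmp/juju-ready']
EOF
cat /tmp/user-data
```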
[07:08] <wallyworld> jam: lgtm, a small change :-)
[07:10] <jam> wallyworld: thanks.
[09:02] <balloons> wallyworld, jam, so roll both changes into 2.3.6 or ?
[09:02] <jam> balloons: what are you doing awake ? :)
[09:03] <balloons> Good morning! ;)
[09:03] <wallyworld> balloons: i still haven't had a chance to look at the potential dry-run regression
[09:04] <wallyworld> no commits i can see since 2.3.5 touch that area, but i could be wrong
[09:05] <balloons> I was thinking perhaps do the 2.4-beta1 first, since we have a queue to do
[09:06] <balloons> Yea, the dry run regression is weird. We fixed that
[09:07] <balloons> jam, will you roll back the juju version in a PR so we're 2.3.6?
[09:07] <jam> balloons: yeah, I can roll it back.
[09:08] <jam> balloons: though we have a number of actual bugs with real Bionic support I'm finding out, so maybe we don't want to block 2.3.6 on that
[09:08] <jam> and instead we just say "not fully supported" yet, and wait for 2.3.7 or something
[09:08] <balloons> Yea, the goal was to not break on bionic, not so much support it
[09:09] <balloons> So 2.3.6 is already primed at that commit. Easy to finish
[09:15] <jam> balloons: so basic support is just broken on bionic with https://launchpad.net/bugs/1764267
[09:15] <mup> Bug #1764267: python-software-properties not found on bionic <bionic> <bootstrap> <juju:In Progress by jameinel> <juju 2.3:Fix Committed by jameinel> <https://launchpad.net/bugs/1764267>
[09:15] <jam> balloons: at least, I see we try to install a package that just isn't there anymore
[09:19] <balloons> Yea, that one is pretty straightforward
[09:26] <jam> balloons: so, should we roll back to 2.3.6 and then release with the package fix, or just release and say bionic not supported in 2.3.6?
[09:28] <jam> wallyworld: so I'm trying to merge 2.3 into develop, but 'cmd/juju/commands/resolved.go' was deleted in 2.4. it got moved to application/resolved.go
[09:28] <balloons> jam, I think it's a hard call. If 2.4 is delayed too long, saying upgrade for support is harder
[09:28] <jam> wallyworld: do you know how your NoRetry is supposed to be in 2.4? Did you land the fix directly there?
[09:28] <balloons> But that was the original intent
[09:28] <jam> balloons: well, I've always wanted us to support bionic in 2.3, and netplan support *is* there. It's just that I found about 4 other bionic support bugs while testing it today
[09:30] <balloons> Yea everyone else wanted to stick with xenial, which 2.3.6 as-is does
[09:31] <jam> wallyworld: it looks like your fix was to pass !c.NoRetry instead of c.NoRetry, which looks to already be done in 2.4
[09:32] <jam> manadart: just to confirm, if I just resolve the lxd conflicts in favor of the 2.4 code, you're ok with that, right?
[09:35] <manadart> jam: Yeah, I can interrogate and apply what is required.
[09:37] <jam> manadart: in container/lxd/lxd_test.go it seems makeManager is a pretty big ball of differences
[09:37] <jam> in 2.4 it starts taking a string param, and you changed it to take a baseConfig() param
[09:38] <jam> manadart: although, it looks like the name string was always ignored
[09:38] <jam> in 2.4
[09:38] <jam> ... weird
[09:42] <manadart> jam: Ignored in 2.3 as well. I think my change should be OK after the fact if you resolve in favour of 2.4.
[09:43] <manadart> From my PR that should leave lxd.go and lxd_test.go untouched.
[09:43] <manadart> Should still build/pass yes?
[09:46] <jam> manadart: so I'm just doing "git co HEAD lxd.go lxd_test.go" so it forces it specifically to 2.4. merging lxd.go was trying to pull in "Remote" from the 'lxdclient' which wasn't being imported into container/lxd.go anymore so I'm just punting
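The conflict-resolution move jam describes (forcing specific files wholesale to one side's version with `git checkout HEAD -- <paths>`) can be sketched in a throwaway repo; branch and file names below are illustrative, not the actual juju tree:

```shell
# Sketch: during a conflicted merge, "git checkout HEAD -- <path>"
# discards the merge attempt for that path and keeps our side's copy.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
branch=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
echo base > lxd.go
git add lxd.go; git commit -qm base
git checkout -qb side                     # stands in for the 2.3 branch
echo side-version > lxd.go
git commit -qam side
git checkout -q "$branch"                 # stands in for develop/2.4
echo our-version > lxd.go
git commit -qam ours
git merge side >/dev/null 2>&1 || true    # conflicts in lxd.go
git checkout HEAD -- lxd.go               # resolve wholesale in favour of our side
git add lxd.go
cat lxd.go                                # our-version
```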
[09:47] <jam> manadart: export_test.go also needed to be reverted.
[09:48] <manadart> jam: Ah, yes.
[10:20] <jam> manadart: https://github.com/juju/juju/pull/8606 is on its way to be merged. You could base your work off of that if you wanted, or you can wait for it to land
[10:21] <jam> but the container/lxd stuff is just upstream/develop so you can probably work from there already
[10:39] <manadart> jam: Thanks.
[10:52] <jam> manadart: it has now landed
[12:07] <jam> manadart: heads up (can you review) https://github.com/juju/juju/pull/8609
[12:08] <manadart> jam: Yep.
[12:08] <jam> manadart: it's trying to make a much smaller patch vs upstream lxd, so we can more easily transition.
[12:09] <jam> wpk's patch was rejected in favor of a different approach, so eventually we'll have to follow along. But presumably we can't do anything until we can update to master tip of lxd
[12:16] <manadart> jam: Approved.
[14:25] <jam> manadart: https://github.com/juju/replicaset/pull/6 and https://github.com/juju/replicaset/pull/7 if you could be so kind
[14:35] <manadart> jam: OK. Opened https://github.com/juju/juju/pull/8610 over here too.
[14:53] <manadart> jam: Approved both. 1 trivial comment.
[16:22] <bdx> charm build failure alert https://paste.ubuntu.com/p/PfBYVHMFy8/
[16:22] <bdx> anyone else failing to get the setuptools wheel?
[16:22] <bdx> https://files.pythonhosted.org/packages/20/d7/04a0b689d3035143e2ff288f4b9ee4bf6ed80585cc121c90bfd85a1a8c2e/setuptools-39.0.1-py2.py3-none-any.whl
[16:47] <bdx> seems to be working now
[16:50] <rick_h_> bdx: so python community rolled over to the new pypi site today
[16:50] <rick_h_> bdx: first redeploy/new software in 10yrs
[16:50] <rick_h_> bdx: so there's going to be some rough spots in python community packaging access today heh
[16:51] <bdx> rick_h_: ahhh good to know, thanks
[16:51] <rick_h_> yea, I know a few things getting bit by the big upgrade today
[16:51] <bdx> also hitting this in apt install charm package https://github.com/juju/charm/issues/240
[16:51] <rick_h_> bdx: hah, yea that packaging is a new upgraded pip version
[16:51] <bdx> I'll keep that in mind as I go about my way
[16:51] <rick_h_> I wonder if that hit as well
[16:52] <rick_h_> cory_fu: ^
[16:52] <rick_h_> I wonder if the pip is the distro's, from pypi, or something else used there
[16:54] <cory_fu> rick_h_, bdx: I hit that today as well when running some tests.  Doing a manual `sudo pip install --upgrade pip` to pick up 10.0.0 fixed it for me.  We might need to rebuild the charm snap
[16:54] <bdx> cory_fu: cool, the charm snap doesn't exhibit ^
[16:55] <cory_fu> bdx: If you got the error during charm build, then I think it's the version of pip inside the charm snap that's causing the issue
[16:56] <bdx> ahh, but I only get it with apt installed snap
[16:56] <bdx> geh
[16:56] <cory_fu> bdx: Mainly because that's what the venv is seeded with
[16:56] <bdx> * apt installed charm
[16:56] <cory_fu> Hrm
[16:56] <cory_fu> I see
[16:56] <cory_fu> So yeah, apt package of charm is pretty outdated
[16:56] <bdx> yeah, the snap has the pip wheel statically defined in there
[17:44] <admcleod_> so i have a MAAS controller deployed, and ive also manually added an s390x machine to it (let's call that machine 0)
[17:45] <admcleod_> in my bundle i have 2 charms, charm-x86 and charm-s390x. i use constraints: arch=s390x for the latter charm
[17:45] <admcleod_> when i deploy the bundle, it attempts to request a machine of s390x arch from MAAS, rather than use the manually added machine
[17:45] <admcleod_> is this expected behaviour?
[17:50] <admcleod_> that behaviour is the same whether i use map-machines=existing or not
[17:50] <admcleod_> however, if i just deploy the charm directly, it works.
[17:51] <admcleod_> guess ill log a bug
[17:53] <ejat> hi ...
[17:53] <cory_fu> ejat: Can you repeat the OpenStack credential error you were getting when trying to use your OpenStack with Juju?
[17:53] <ejat> Please ensure the credentials are correct. A common mistake is
[17:53] <ejat> to specify the wrong tenant. Use the OpenStack "project" name
[17:53] <ejat> for tenant-name in your model configuration.
[17:54] <cory_fu> admcleod_: You might need to use the --map-machines option to juju deploy to get it to use the pre-created machine
[17:54] <cory_fu> admcleod_: Otherwise, requesting a new machine for the bundle is the expected behavior
[17:54] <admcleod_> cory_fu: yeah, it doesnt work with the bundle
[17:54] <admcleod_> cory_fu: with or without map-machines
[17:55] <cory_fu> admcleod_: Odd.  I would have expected --map-machines to do what you want
[17:56] <admcleod_> cory_fu: same. bug on its way
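For reference, the bundle shape and flag under discussion; the charm names and machine numbers below are illustrative, and the juju command is left as a comment since it needs a live controller:

```shell
# Sketch of a bundle that pins a charm to a pre-added machine via a
# "machines" section plus --map-machines. Names are made up.
cat > /tmp/bundle.yaml <<'EOF'
machines:
  "0": {}                      # stands in for the manually added s390x box
applications:
  charm-s390x:
    charm: ./charm-s390x
    num_units: 1
    constraints: arch=s390x
    to: ["0"]
EOF
# juju deploy /tmp/bundle.yaml --map-machines=existing,0=0
#   (maps bundle machine 0 onto existing model machine 0 instead of
#    asking MAAS for a fresh allocation)
cat /tmp/bundle.yaml
```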
[17:57] <cory_fu> ejat: And, did you add the OpenStack credential using `juju add-credential`, `juju autoload-credentials`, or `conjure-up`?
[17:57] <ejat> juju add-credential
[17:58] <ejat> add cloud 1st then add-credential
[18:06] <cory_fu> ejat: Hrm.  I don't have an OpenStack to test with, but the error message makes it sound like something was entered incorrectly.  Are you sure you chose the correct auth-type and typed everything in correctly?  You could try using `juju add-credential <cloud> --replace` and type it in again
[18:07] <ejat> tenant-name == project name ?
[18:07] <ejat> or using the project ID ?
[18:08] <cory_fu> Yeah, the message said project name, so I'd try that if you used something different previously
[18:08] <ejat> bugs 1543262
[18:08] <mup> Bug #1543262: keystone V3 support needed <openstack-provider> <uosci> <Go OpenStack Exchange:Fix Committed by wallyworld> <juju:Fix Released by wallyworld> <juju 2.1:Fix Released by wallyworld> <https://launchpad.net/bugs/1543262>
[18:08] <ejat> it's almost the same
[18:08] <ejat> but that bug is fixed
[18:14] <cory_fu> ejat: You could check the output of `juju show-cloud openstack` and `juju list-credentials --format=yaml openstack` and see if anything looks odd there
[18:14] <hml> ejat:  if you’ve sourced your nova-rc file, juju autoload-credentials sometimes works better for OpenStack
[18:18] <admcleod_> ejat: if you are using keystone v3, you need to make sure your novarc is also keystone v3
[18:18] <ejat> admcleod_: using novarc is fine
[18:20] <admcleod_> ejat: what do you mean
[18:22] <bdx> I *think* he means that juju doesn't look for the same environment variables that are set by the novarc for kv3
[18:22] <bdx> using autoload-credentials
[18:22] <ejat> thanks bdx
[18:23] <bdx> I hit it this weekend too
[18:23] <ejat> bdx: so how u counter it
[18:24] <bdx> 1) download novarc from horizon, 2) source novarc on local machine, 3) add openstack cloud in juju (`juju add-cloud myopenstack`), 4) run `juju autoload-credentials` and select the openstack cloud
[18:24] <bdx> ^ something like that I think
[18:25] <bdx> ejat: I'll try to reproduce ^ in the next day or so and get a bug filed
[18:26] <ejat> owh let me try
[18:29] <ejat> bdx: result still the same
[18:29] <admcleod_> ejat: can you pastebin the novarc without passwords etc?
[18:31] <hml> bdx:  what is the difference between the env vars set by novarc for kv3 and the novarc contents downloaded from horizon?
[18:31] <ejat> admcleod_: http://paste.ubuntu.com/p/BGXMWKq6Qr/
[18:32] <admcleod_> ejat: ok, thanks, looks fine
[20:29] <ejat> anyone can give advise?
[20:46] <thumper> ejat: sure
[20:46] <ejat> thumper: really much appreciate
[20:47] <ejat> still can't auth with openstack :(
[20:49] <thumper> personally I don't know much about openstack, but others here do
[20:49] <thumper> however you probably need to give more information
[20:49] <thumper> which openstack
[20:50] <thumper> how are the creds defined
[20:50] <thumper> what error are you getting
[20:50] <thumper> etc
[20:52] <ejat> im using the openstack queens bundle with a little bit of customization to include heat + telemetry
[20:52] <ejat> using MAAS
[20:52] <ejat> added the openstack into the cloud list
[20:53] <ejat> then juju add-credential
[20:54] <ejat> then tried to bootstrap
[20:55] <ejat> i got this :
[20:55] <ejat> ERROR authentication failed.
[20:55] <ejat> Please ensure the credentials are correct. A common mistake is
[20:55] <ejat> to specify the wrong tenant. Use the OpenStack "project" name
[20:55] <ejat> for tenant-name in your model configuration.
[21:32] <thumper> ejat: are you able to use those same credentials without juju in the mix and have it work?
[21:32] <ejat> thumper: for openstack cli works fine
[21:36] <thumper> hml: any ideas around openstack creds and queens?
[21:37] <hml> thumper: no..  i was looking at the nova rc file ejat put it a pastebin earlier
[21:39] <hml> ejat: can you give me the results of 'juju credentials --format yaml <cloudname>'?
[21:40] <hml> ejat:  also run with --show-secrets and verify your password
[21:42] <ejat> hml: https://paste.ubuntu.com/p/cxrydwmtmQ/
[21:46] <hml> ejat: it looks good - the only thing i see is that domain-name is also set, which mine doesn’t have…. should be okay
[21:46] <hml> ejat:  can you get me the output of juju bootstrap --debug please?
[21:51] <ejat> hml: https://paste.ubuntu.com/p/sGQgNDcf9x/
[21:51]  * hml looking
[21:54] <hml> ejat: what do you get from 'wget http://172.15.1.102:5000/v3/auth/tokens'?
[21:56] <hml> ejat ^^
[21:59] <ejat> hml: https://paste.ubuntu.com/p/xQTGQXvTMF/
[22:00] <hml> ejat: so it looks like there is something wrong with the credentials as the bootstrap output says…  let me try something on my box…
[22:02] <hml> ejat:  it appears that juju doesn’t like domain-name in the credentials…. this is a bug.  please file one!
[22:03] <hml> ejat:  i can also give you a work around until it’s fixed
[22:11] <hml> ejat:  juju credentials --format yaml --show-secrets <cloudname> > /tmp/creds.yaml
[22:12] <ejat> https://bugs.launchpad.net/juju/+bug/1764550
[22:12] <mup> Bug #1764550: can't authenticate juju credential with openstack queens <juju:New> <https://launchpad.net/bugs/1764550>
[22:12] <hml> ejat:  edit /tmp/creds.yaml to remove 'local-' from the first line - and remove the domain-name line as well
[22:12] <ejat> can u help to comment the work around on LP ?
[22:13] <hml> ejat: then juju add-credential --replace <cloudname> -f /tmp/creds.yaml
[22:13] <hml> ejat: sure
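hml's workaround, end to end; the credential values below are made up for illustration, and the juju commands are left as comments since they need a live controller:

```shell
# hml's workaround demonstrated on a dummy credentials file. With a real
# controller the file would come from:
#   juju credentials --format yaml --show-secrets <cloudname> > /tmp/creds.yaml
cat > /tmp/creds.yaml <<'EOF'
local-credentials:
  myopenstack:
    admin:
      auth-type: userpass
      domain-name: admin_domain
      password: s3cret
      tenant-name: admin
      username: admin
EOF
# Drop the "local-" prefix from the first line and the domain-name line:
sed -i -e '1s/^local-//' -e '/domain-name:/d' /tmp/creds.yaml
# Then re-import the edited file:
#   juju add-credential --replace <cloudname> -f /tmp/creds.yaml
cat /tmp/creds.yaml
```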
[22:13] <ejat> hml: thanks a lot .. maybe bdx can refer it too
[22:13] <ejat> i guess he also facing the same issue
[22:14] <hml> ejat: how did you create the juju credentials?  that'd be helpful to know in the bug - as well as which novarc you were using?  from keystone v3 or horizon
[22:16] <bdx> ejat: I can auth to openstack just fine, mine was a totally different thing I was experiencing with autoload-credentials
[22:17] <ejat> bdx: owh sorry
[22:17] <bdx> ejat you can't just curl or wget the endpoint like you are in that bug
[22:17] <ejat> misunderstood you
[22:17] <bdx> you have to get the token using the openstack client etc etc
[22:17] <bdx> it is meant to fail if you try to interact with it like you are there
[22:17] <ejat> hml: from horizon
[22:18] <hml> ejat: interesting.
[22:18] <hml> ejat:  did you use autoload-credentials, or add-credentials?
[22:23] <hml> ejat: i'm EOD for now.. will be online tomorrow though.  i reproduced the cause and tried the work around myself on an openstack v3 - so hopefully you're good to bootstrap!
[22:24] <ejat> thanks a lot hml
[22:24] <ejat> see u tomorrow