[01:49] <vino> veebers: i am here
[01:50]  * thumper goes to make a coffee while tests run
[02:42]  * thumper headdesks
[03:10]  * thumper primal screams...
[03:10] <thumper> FFS
[03:10] <thumper> our fake authorizer is SO fake, it was never checking anything
[03:10]  * blahdeblah grabs popcorn
[03:13] <thumper> it came with a hard coded "admin" default user for the dummy provider
[03:13] <thumper> and the fact that the admin permission is called "admin"
[03:13] <thumper> and we have a fake drop through that is a "helper"
[03:13] <thumper> that allows you to name the user based on the permission you'd like it to have
[03:14] <thumper> so the fake authorizer always had the default user with admin permissions
[03:14] <thumper> no matter what you set
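(Editor's aside: Juju itself is written in Go, but the trap thumper describes can be sketched in a few lines of Python. The names `permission_for` and `has_admin` are hypothetical, purely for illustration; this is not Juju's actual authorizer code.)

```python
# Sketch of the fake-authorizer trap: the test "helper" derives a user's
# permission from the user name itself, and the dummy provider's default
# user is the hard-coded name "admin" -- which is also the name of the
# admin permission. The default user therefore always passes an admin
# check, no matter what a test configures.

def permission_for(user: str) -> str:
    """The 'helper' drop-through: 'readUser' -> 'read', 'writeUser' ->
    'write', etc. Any other name (including the hard-coded default
    'admin') is treated as the permission itself."""
    if user.endswith("User") and len(user) > len("User"):
        return user[:-len("User")]
    return user

def has_admin(user: str) -> bool:
    """The check the fake authorizer was effectively performing."""
    return permission_for(user) == "admin"

print(has_admin("admin"))     # the dummy provider's default user: True
print(has_admin("readUser"))  # a deliberately non-admin test user: False
```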
[03:53]  * thumper runs all the tests again after the latest pass of fixing
[03:53]  * thumper nips out to look at a car
[07:01] <wallyworld_> kelvin: if you get a chance, a review of this PR would be great - it adds the ability for k8s to report storage info back to juju for recording in the model https://github.com/juju/juju/pull/8884
[07:02] <kelvin> wallyworld_, looking now
[07:03] <wallyworld_> kelvin: a lot of the code will be new to you - let me know if you have questions
[07:03] <kelvin> wallyworld_, yup, thanks.
[07:27] <manadart> externalreality: Got 5 for a HO?
[08:23] <icey> I have a deployed openstack, and I have different types of hypervisors; I can control the hypervisors that guests land on by using flavors; how can I control which flavors juju will use by default?
[09:39] <rathore_> Hi all, how can I install 2 charms that use ntp as a subordinate on the same machine? They both try to use the same port, which makes some charms fail. This is to support both nova-compute and neutron-gateway
[09:39] <blahdeblah> rathore_: You shouldn't do that.
[09:39] <blahdeblah> You should configure one ntp subordinate on the bare metal and leave it at that.
[09:43] <rathore_> How can I do that? When I specify that ntp should be deployed on specific hosts, neutron-gateway still tries to create its subordinate charms
[09:44] <blahdeblah> rathore_: I'd need to see your configuration to be sure, but likely what you need to do is remove the relation between the neutron-gateway charm and ntp.
[09:49] <rathore_> blahdeblah: http://termbin.com/1gam
[09:49] <rathore_> I am attempting to have 3 servers for control plane. All 3 will have all required services in HA
[09:55] <blahdeblah> rathore_: right, so you are hulk-smashing both nova-compute and neutron-gateway to nodes 0-2.  So removing the ntp relation on neutron-gateway should be safe in that situation.
[09:56] <blahdeblah> incidentally, the same would apply to other similar subordinates like nrpe, telegraf, landscape-client, etc.
[09:56] <blahdeblah> In some of our clouds we get around this by using cs:ubuntu as a bare metal base charm and relating all the machine-specific subordinates to that.
[09:58] <blahdeblah> rathore_: Also, in a 5-node deployment there's no value in enabling auto-peers on the ntp charm; it will only reduce performance due to the way the peer relation is implemented at the moment.
[09:58] <blahdeblah> (reduce performance of relation convergence, that is - it shouldn't have any effect on the actual running system once juju has deployed things and set up the relations)
[09:59] <rathore_> blahdeblah: thanks. Is there an example of implementing this anywhere? I am new to openstack and not sure whether having nova-compute and neutron-gateway together is a bad idea
[10:00] <blahdeblah> I'll leave that for others more qualified to comment further, but we certainly have several clouds where both nova-compute and neutron-gateway are deployed together.
[10:00] <blahdeblah> (on the same bare metal nodes)
[10:00] <rathore_> blahdeblah: Thanks, is there any example of the cs:ubuntu trick anywhere in the wild?
[10:01] <blahdeblah> rathore_: I'm not sure; but it would look very similar to what you have already
[10:01] <blahdeblah> Probably not necessary if you're not adding lots of subordinates
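(Editor's aside: blahdeblah's cs:ubuntu base-charm trick might look roughly like this bundle fragment. This is a sketch only; the application names, machine numbers, and the set of subordinates are illustrative, not taken from rathore_'s actual config.)

```yaml
applications:
  ubuntu-base:
    charm: cs:ubuntu
    num_units: 3
    to: ["0", "1", "2"]
  ntp:
    charm: cs:ntp
relations:
  # Relate machine-wide subordinates (ntp, nrpe, telegraf, ...) to the
  # base charm once, instead of to every principal on those machines.
  - ["ubuntu-base", "ntp"]
```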
[10:03] <rathore_> Is it something like https://github.com/gianfredaa/joid/blob/20af269d65cd053ba29ac8d9701bace4b17520be/ci/config_tpl/bundle_tpl/bundle.yaml
[10:03] <blahdeblah> rathore_: also, #openstack-charms might be a good place to ask about this
[10:05] <rathore_> blahdeblah: Thanks
[10:08] <stickupkid> manadart: I'm about to push my changes for ensuring that the local server is created and cached inside the server factory, but it seems there is an issue with bionic. Is there any chance you can have a look?
[10:09] <manadart> stickupkid: Sure. Just let me polish off this PR.
[11:01] <rick_h_> Morning party people
[11:37] <manadart> jam externalreality: Should you have a moment - https://github.com/juju/juju/pull/8885
[11:50] <stickupkid> anybody know what's changed in mongo recently? we're getting a lot of failures
[11:51] <stickupkid> s/mongo/mongo code/
[11:54] <manadart> stickupkid: What issue did you have? Looks like it's spinning OK here.
[11:54] <stickupkid> manadart: i can look into it more later, but http://ci.jujucharms.com/job/github-check-merge-juju/2054/console
[11:55] <manadart> manadart: I was meaning re the PR from earlier.
[11:55] <stickupkid> ah, sorry
[11:55] <stickupkid> manadart: let me check
[11:56] <stickupkid> manadart: let me just create a new lxc bionic container
[12:04] <rick_h_> externalreality: if you get a sec can you put https://github.com/juju/juju/pull/8885 on your "to look at" list please?
[12:10] <rick_h_> jam: with 2.4.0 out we should be able to do that merge 2.3 to 2.4 now if that's cool
[12:13] <jam> rick_h_: thanks for the reminder.
[12:17] <rick_h_> 2.4.0 release announcement formatted up on discourse. I probably missed some formatting changes so feel free to suggest/edit.  https://discourse.jujucharms.com/t/juju-2-4-0-has-been-released/53
[12:20] <rathore_> hi all, has anyone seen this? "ubuntu@juju-04c90f-0-lxd-1:~$ sudo ls" fails with "sudo: unable to resolve host juju-04c90f-0-lxd-1: Connection timed out"
[12:26] <rathore_> lxd containers and hosts are unable to resolve themselves
[12:29] <rick_h_> rathore_: is this on bionic?
[12:29] <rathore_> xenial
[12:30] <rick_h_> rathore_: is their hostname in /etc/hosts?
[12:30] <rathore_> no it isn't
[12:57] <rathore_> rick_h_: Is this a known issue in bionic?
[12:58] <rick_h_> rathore_: sorry, just checking. I know that systemd does something where it resolves with a local daemon thing vs the older way to do things
[12:59] <rathore_> rick_h_: Thanks, i am also trying bionic now to see if it has same issue
[13:02] <rathore_> rick_h_: If it helps, the bare metal hosts have the same issue. I am using Maas for provisioning
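(Editor's aside: the usual fix for "sudo: unable to resolve host &lt;name&gt;" is to make sure the machine's own hostname appears in /etc/hosts. On Ubuntu the convention is a 127.0.1.1 entry; the hostname below is taken from the log above, for illustration.)

```
127.0.0.1 localhost
127.0.1.1 juju-04c90f-0-lxd-1
```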
[13:10] <jam> manadart: feedback on your PR
[13:11] <manadart> jam: Ta.
[13:23] <rathore_> rick_h_: Cannot deploy bionic. Install fails saying it cannot find image for bionic/amd64. So don't know if it works
[13:25] <rick_h_> rathore_: gotcha, yea you'll have to pull those images down into your MAAS to try it out
[13:25] <rick_h_> rathore_: sorry, on the phone a bit this morning. If you can replicate it please file a bug with notes on the MAAS/Juju version and such.
[13:27] <rathore_> rick_h_: Maas has bionic and the bare metal was provisioned with bionic. It's the lxd container which complains. I will check if it happens again on xenial and file a bug.
[13:55] <rathore_> rick_h_: Just to update, the issue is fixed with Juju 2.4
[14:05] <hml> externalreality: is pr 8864 ready to move from WIP status to review?
[14:09] <rathore_> rick_h_: I take my words back. It seems to be random.
[14:20] <jhobbs> any chamhelpers reviewers around? This one is pretty small... https://github.com/juju/charm-helpers/pull/194
[14:28] <manadart> jam: Fixed it.
[16:05] <maaudet> After a clean bootstrap with Juju 2.4.0 on AWS, and then running juju enable-ha I get the following warning (but no error state): WARNING juju.mongo mongodb connection failed, will retry: dial tcp REDACTED-MACHINE-0-EXT-IP:37017: i/o timeout
[16:05] <maaudet> Is it cause for concern?
[16:05] <maaudet> Both machines 1 and 2 output this error in a loop
[16:47] <rmescandon> Does anybody know if the influxdb and telegraf charms related as influxdb:query - telegraf:influxdb-api work out of the box, or is additional configuration needed? The documentation says so, but I don't see telegraf adding the influxdb output plugin to /etc/telegraf/telegraf.conf
[16:58] <rick_h_> rmescandon: sorry, not sure off the top of my head.
[16:59] <rick_h_> maaudet: hmm, that doesn't seem right. Looks like a timeout trying to get the new controller dbs in sync with the first one? If you can replicate it please file a bug with the debug-log details please.
[17:25] <maaudet> rick_h_: Will do, I replicated it 3 times already, I'm going to put all the details in the bug report
[17:26] <rick_h_> maaudet: ty
[17:54] <kwmonroe> maaudet: just reading the 2.4 release notes, it says this about juju-ha-space bootstrap config: When enabling HA, this value must be set where member machines in the HA set have more than one IP address available for MongoDB use, otherwise an error will be reported.
[17:54] <kwmonroe> maaudet: could it be you're missing that config? https://docs.jujucharms.com/2.4/en/controllers-config
[17:57] <kwmonroe> maaudet: here's the actual section that talks about juju-ha-space in more deats: https://docs.jujucharms.com/2.4/en/controllers-config#controller-related-spaces.  looks like it's only a "must" if your controller has multiple ip addrs. not sure if that applies to you.
[17:58] <maaudet> kwmonroe: I tried it, but when I run juju controller-config juju-ha-space=my-space it says that my machine is not part of that space, but when I check juju show-machine 0 it says that it's in that space
[17:59] <rick_h_> maaudet: kwmonroe so that space stuff is only on MAAS since AWS doesn't know about spaces tbh
[17:59] <maaudet> that's what I figured
[18:00] <kwmonroe> aaaah, thx rick_h_.  i'll go back to my cave now.
[18:00] <rick_h_> kwmonroe: all good
[18:00] <rick_h_> kwmonroe: heads up, juju show thurs (since the holiday wed) to celebrate 2.4
[18:00] <rick_h_> and hopefully some other fun stuff to show
[18:00] <kwmonroe> w00t
[18:00] <rick_h_> bdx: zeestrat ^
[18:01] <bdx> rick_h: good deal, count me in
[18:01] <bdx> congrats on the 2.4 all!!
[18:02] <rick_h_> woooo!
[18:04] <pmatulis> rick_h_, maaudet, kwmonroe: i should add a "MAAS & AWS only" note for that controller-spaces stuff
[18:04] <rick_h_> pmatulis: yea, I was just asking folks about that as it *should* work anywhere we support spaces but seems it's MAAS-only atm
[18:04] <pmatulis> but the main docs do link to the network-space page
[18:04] <pmatulis> which does say MAAS/AWS only
[18:15] <maaudet> I'm on AWS actually
[18:22] <pmatulis> maaudet, in Juju did you create a space for an available AWS subnet?
[18:23] <maaudet> pmatulis: Yes, 2 subnets for 1 space, both subnets were in 2 different az
[18:26] <pmatulis> maaudet, and 'juju spaces' looks fine?
[18:36] <maaudet> pmatulis: I took down the controller for now, but yes, I could see the right ranges in the right space + 2 FAN network ranges
[19:04] <cory_fu> rick_h_: I'm trying to help someone debug the controller returning "could not fetch relations: cannot get remote application "kubeapi-load-balancer": read tcp 172.31.5.119:36590->172.31.5.119:37017: i/o timeout" and came across https://bugs.launchpad.net/juju/+bug/1644011  They're also having `juju status` hang, but checking on the controller they still see a /usr/lib/juju/mongo3.2/bin/mongod process.  Any clues on what could cause that or how to
[19:04] <cory_fu> track it down?
[19:04] <mup> Bug #1644011: juju needs improved error messaging when controller has connection issues <usability> <juju:Triaged> <https://launchpad.net/bugs/1644011>
[19:05] <rick_h_> cory_fu: hmm, so is this a cross model relation?
[19:05] <cory_fu> rick_h_: Nope.  Just a normal relation in a CDK deployment
[19:05] <rick_h_> cory_fu: hmm, someone else was getting an i/o timeout today setting up HA.
[19:05] <rick_h_> cory_fu: I'm trying to see if I can replicate it and see what's up
[19:05] <cory_fu> rick_h_: This was mid-deployment with conjure-up
[19:05] <rick_h_> cory_fu: with 2.4?
[19:05] <cory_fu> Not sure
[19:06] <rick_h_> cory_fu: please check, I'm wondering if there's an issue in 2.4 around this
[19:07] <cory_fu> Asked, waiting on a response
[19:07] <rick_h_> cory_fu: k
[19:19] <zeestrat> rick_h_: thanks for invite. Cruising around Sweden atm so looking forward to watch when back :)
[19:24] <rick_h_> zeestrat: oooh have fun!
[19:28] <cory_fu> rick_h_: Apparently, it's Juju 2.3.8, and the juju-db service seems to still be running.
[19:47] <rick_h_> cory_fu: ok, and 172.31.5.119 is the controller IP address?
[19:47] <cory_fu> Yes, the cloud internal IP address.
[19:48] <cory_fu> rick_h_: Just like in the bug that you filed about reporting db connection errors better, it seems to be the controller timing out talking to the db, but in this case the DB seems to be up and running.
[19:49] <cory_fu> At least, systemctl is reporting it as running, and there were no obvious errors in the log, with update messages from the controller in there after the time of the error
[19:49] <rick_h_> cory_fu: did we try to restart it?
[19:49] <rick_h_> cory_fu: as well as the jujud ?
[19:51] <cory_fu> rick_h_: Unfortunately no.  The person in question apparently wanted to just tear it down and redeploy, since this happened during deployment.  I asked about trying to get more debugging info, but it seemed like they were trying to get EOD before the holiday
[19:51] <cory_fu> rick_h_: They said that, if it happens again, they'll follow up on Thursday
[19:51] <rick_h_> cory_fu: ah ok. Yea sorry. I'd like to get to the bottom of it. Feel free to connect us more directly if we can help
[19:52] <cory_fu> rick_h_: Will do
[20:41] <thumper> babbageclunk: ping
[20:42] <thumper> bug 1779904 did we not test the rc upgrades with the upgrade step?
[20:42] <thumper> also the upgrade steps should be idempotent
[20:42] <mup> Bug #1779904: 2.4.0 upgrade step failed: bootstrapping raft cluster <juju:New> <https://launchpad.net/bugs/1779904>
[21:23] <thumper> rick_h_: re bug 1779897, looks like a potential recent lxd change
[21:23] <mup> Bug #1779897: container already exists <cdo-qa> <foundations-engine> <juju:New> <https://launchpad.net/bugs/1779897>
[21:23] <thumper> care to put a card on your board to track?
[21:24] <rick_h_> thumper: rgr, on it
[21:26] <babbageclunk> thumper: crap
[21:26] <babbageclunk> No, I didn't test upgrading from an rc to 2.4.0
[21:28] <babbageclunk> Ah, and because it wasn't a state upgrade I didn't use the standard check upgraded data test for steps, which would have checked it was idempotent. Bugger.
[21:30] <babbageclunk> thumper: ok, I'll change the upgrade step now
[21:31] <thumper> babbageclunk: thanks
[21:34] <veebers> thumper: does that mean we'll be releasing 2.4.1 sooner than expected?
[22:38] <babbageclunk> thumper: can you review https://github.com/juju/juju/pull/8886 please?
[22:40] <babbageclunk> I'm just testing both cases now
[23:21] <thumper> babbageclunk: reviewed, just wondering if just checking the existence of the directory is sufficient to consider raft bootstrapped?
[23:23] <babbageclunk> thumper: I think so - that directory is created in the process of creating the logstore or snapshot store, immediately before the bootstrapping.
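(Editor's aside: the idempotency fix being discussed — skip the raft bootstrap when the raft directory already exists — can be sketched as below. Juju's real upgrade step is Go; all function names and paths here are illustrative only.)

```python
import os

def bootstrap_raft(data_dir: str) -> None:
    """Illustrative stand-in for the real raft bootstrap: the raft
    directory is created while setting up the log store / snapshot
    store, immediately before writing the initial configuration."""
    os.makedirs(os.path.join(data_dir, "raft"))
    # ... create log store, snapshot store, write initial config ...

def maybe_bootstrap_raft(data_dir: str) -> bool:
    """Idempotent upgrade step. Because the raft directory is created
    as part of bootstrapping, its existence means bootstrap already ran
    (e.g. on an rc controller that already executed this step).
    Returns True if bootstrap was performed, False if skipped."""
    raft_dir = os.path.join(data_dir, "raft")
    if os.path.isdir(raft_dir):
        return False  # already bootstrapped; re-running is a no-op
    bootstrap_raft(data_dir)
    return True
```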
[23:26] <thumper> ok
[23:40] <babbageclunk> I'll put that on the PR for posterity too.