/srv/irclogs.ubuntu.com/2018/07/03/#juju.txt

[01:49] <vino> veebers: i am here
[01:50] * thumper goes to make a coffee while tests run
[02:42] * thumper headdesks
[03:10] * thumper primal screams...
[03:10] <thumper> FFS
[03:10] <thumper> our fake authorizer is SO fake, it was never checking anything
[03:10] * blahdeblah grabs popcorn
[03:13] <thumper> it came with a hard coded "admin" default user for the dummy provider
[03:13] <thumper> and the fact that the admin permission is called "admin"
[03:13] <thumper> and we have a fake drop through that is a "helper"
[03:13] <thumper> that allows you to name the user based on the permission you'd like it to have
[03:14] <thumper> so the fake authorizer always had the default user with admin permissions
[03:14] <thumper> no matter what you set
[03:53] * thumper runs all the tests again after the latest pass of fixing
[03:53] * thumper nips out to look at a car
[07:01] <wallyworld_> kelvin: if you get a chance, a review of this PR would be great - it adds the ability for k8s to report storage info back to juju for recording in the model https://github.com/juju/juju/pull/8884
[07:02] <kelvin> wallyworld_, looking now
[07:03] <wallyworld_> kelvin: a lot of the code will be new to you - let me know if you have questions
[07:03] <kelvin> wallyworld_, yup, thanks.
[07:27] <manadart> externalreality: Got 5 for a HO?
[08:23] <icey> I have a deployed openstack, and I have different types of hypervisors; I can control the hypervisors that guests land on by using flavors; how can I control which flavors juju will use by default?
[09:39] <rathore_> Hi all, how can I install 2 charms that use ntp as a subordinate on the same machine? They all try to use the same port, which makes some charms fail. It is to support both nova-compute and neutron-gateway
[09:39] <blahdeblah> rathore_: You shouldn't do that.
[09:39] <blahdeblah> You should configure one ntp subordinate on the bare metal and leave it at that.
[09:43] <rathore_> How can I do that? Even when I target ntp at specific hosts, neutron-gateway still tries to create its own ntp subordinates
[09:44] <blahdeblah> rathore_: I'd need to see your configuration to be sure, but likely what you need to do is remove the relation between the neutron-gateway charm and ntp.
[09:49] <rathore_> blahdeblah: http://termbin.com/1gam
[09:49] <rathore_> I am attempting to have 3 servers for the control plane. All 3 will have all required services in HA
[09:55] <blahdeblah> rathore_: right, so you are hulk-smashing both nova-compute and neutron-gateway to nodes 0-2.  So removing the ntp relation on neutron-gateway should be safe in that situation.
[09:56] <blahdeblah> incidentally, the same would apply to other similar subordinates like nrpe, telegraf, landscape-client, etc.
[09:56] <blahdeblah> In some of our clouds we get around this by using cs:ubuntu as a bare metal base charm and relating all the machine-specific subordinates to that.
[09:58] <blahdeblah> rathore_: Also, in a 5-node deployment there's no value in enabling auto-peers on the ntp charm; it will only reduce performance due to the way the peer relation is implemented at the moment.
[09:58] <blahdeblah> (reduce performance of relation convergence, that is - it shouldn't have any effect on the actual running system once juju has deployed things and set up the relations)
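For reference, the relation removal suggested above would look roughly like this (a sketch only; it assumes the applications are named neutron-gateway and ntp, as in the standard OpenStack bundles):

    # drop the duplicate ntp subordinate on the gateway units; nova-compute
    # keeps its own ntp relation, so time sync on the hosts is unaffected
    juju remove-relation neutron-gateway ntp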
[09:59] <rathore_> blahdeblah: thanks. Is there an example anywhere of how to implement this? I am new to openstack and not sure if having nova-compute and neutron-gateway together is a bad idea
[10:00] <blahdeblah> I'll leave that for others more qualified to comment further, but we certainly have several clouds where both nova-compute and neutron-gateway are deployed together.
[10:00] <blahdeblah> (on the same bare metal nodes)
[10:00] <rathore_> blahdeblah: Thanks, is there any example of the cs:ubuntu trick anywhere in the wild?
[10:01] <blahdeblah> rathore_: I'm not sure; but it would look very similar to what you have already
[10:01] <blahdeblah> Probably not necessary if you're not adding lots of subordinates
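A rough sketch of what the cs:ubuntu trick could look like here (the machine numbers and the choice of subordinates are illustrative assumptions, not taken from the bundle above):

    # deploy the ubuntu charm onto the existing bare-metal machines as a base
    juju deploy cs:ubuntu ubuntu -n 3 --to 0,1,2
    # relate the machine-wide subordinates to it instead of to the service charms
    juju add-relation ubuntu ntp
    juju add-relation ubuntu nrpe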
[10:03] <rathore_> Is it something like https://github.com/gianfredaa/joid/blob/20af269d65cd053ba29ac8d9701bace4b17520be/ci/config_tpl/bundle_tpl/bundle.yaml
[10:03] <blahdeblah> rathore_: also, #openstack-charms might be a good place to ask about this
[10:05] <rathore_> blahdeblah: Thanks
[10:08] <stickupkid> manadart: I'm about to push my changes for ensuring that the local server is created and cached inside the server factory, but it seems there is an issue with bionic. Is there any chance you can have a look?
[10:09] <manadart> stickupkid: Sure. Just let me polish off this PR.
[11:01] <rick_h_> Morning party people
[11:37] <manadart> jam externalreality: Should you have a moment - https://github.com/juju/juju/pull/8885
[11:50] <stickupkid> anybody know what's changed in mongo recently, we're getting a lot of failures recently
[11:51] <stickupkid> s/mongo/mongo code/
[11:54] <manadart> stickupkid: What issue did you have? Looks like it's spinning OK here.
[11:54] <stickupkid> manadart: i can look into it more later, but http://ci.jujucharms.com/job/github-check-merge-juju/2054/console
[11:55] <manadart> manadart: I was meaning re the PR from earlier.
[11:55] <stickupkid> ah, sorry
[11:55] <stickupkid> manadart: let me check
[11:56] <stickupkid> manadart: let me just create a new lxc bionic container
[12:04] <rick_h_> externalreality: if you get a sec can you put https://github.com/juju/juju/pull/8885 on your "to look at" list please?
[12:10] <rick_h_> jam: with 2.4.0 out we should be able to do that merge 2.3 to 2.4 now if that's cool
[12:13] <jam> rick_h_: thanks for the reminder.
[12:17] <rick_h_> 2.4.0 release announcement formatted up on discourse. I probably missed some formatting changes so feel free to suggest/edit.  https://discourse.jujucharms.com/t/juju-2-4-0-has-been-released/53
[12:20] <rathore_> hi all, has anyone seen this: ubuntu@juju-04c90f-0-lxd-1:~$ sudo ls -> sudo: unable to resolve host juju-04c90f-0-lxd-1: Connection timed out
[12:26] <rathore_> lxd containers and hosts are unable to resolve themselves
[12:29] <rick_h_> rathore_: is this on bionic?
[12:29] <rathore_> xenial
[12:30] <rick_h_> rathore_: is their hostname in /etc/hosts?
[12:30] <rathore_> no it isn't
[12:57] <rathore_> rick_h_: Is this a known issue in bionic?
[12:58] <rick_h_> rathore_: sorry, just checking. I know that systemd resolves hostnames with a local daemon (systemd-resolved) vs the older way of doing things
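The usual quick check and workaround for that symptom is to make sure the hostname maps to a local address (a sketch; the 127.0.1.1 entry is the standard Ubuntu convention, and whether it cures the timeout seen above is not confirmed):

    # see whether the hostname is present in /etc/hosts
    grep "$(hostname)" /etc/hosts
    # if not, add the conventional 127.0.1.1 mapping so sudo can resolve it
    echo "127.0.1.1 $(hostname)" | sudo tee -a /etc/hosts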
[12:59] <rathore_> rick_h_: Thanks, I am also trying bionic now to see if it has the same issue
[13:02] <rathore_> rick_h_: If it helps, the bare metal hosts have the same issue. I am using MAAS for provisioning
=== freyes__ is now known as freyes
[13:10] <jam> manadart: feedback on your PR
[13:11] <manadart> jam: Ta.
[13:23] <rathore_> rick_h_: Cannot deploy bionic. Install fails saying it cannot find an image for bionic/amd64. So I don't know if it works
[13:25] <rick_h_> rathore_: gotcha, yea you'll have to pull those images down into your MAAS to try it out
[13:25] <rick_h_> rathore_: sorry, on the phone a bit this morning. If you can replicate it please file a bug with notes on the MAAS/Juju version and such.
[13:27] <rathore_> rick_h_: MAAS has bionic and the bare metal was provisioned with bionic. It's the lxd that complains. I will check if it happens again on xenial and file a bug.
[13:55] <rathore_> rick_h_: Just to update, the issue is fixed with Juju 2.4
[14:05] <hml> externalreality: is pr 8864 ready to move from WIP status to review?
[14:09] <rathore_> rick_h_: I take my words back. It seems to be random.
[14:20] <jhobbs> any charm-helpers reviewers around? This one is pretty small... https://github.com/juju/charm-helpers/pull/194
[14:28] <manadart> jam: Fixed it.
[16:05] <maaudet> After a clean bootstrap with Juju 2.4.0 on AWS, and then running juju enable-ha, I get the following warning (but no error state): WARNING juju.mongo mongodb connection failed, will retry: dial tcp REDACTED-MACHINE-0-EXT-IP:37017: i/o timeout
[16:05] <maaudet> Is it cause for concern?
[16:05] <maaudet> Both machines 1 and 2 output this error in a loop
[16:47] <rmescandon> Does anybody know if the influxdb and telegraf charms related as influxdb:query - telegraf:influxdb-api work out of the box, or is additional configuration needed? The documentation says so, but I don't see telegraf adding the influxdb output plugin to /etc/telegraf/telegraf.conf
[16:58] <rick_h_> rmescandon: sorry, not sure off the top of my head.
[16:59] <rick_h_> maaudet: hmm, that doesn't seem right. Looks like a timeout trying to get the new controller dbs in sync with the first one? If you can replicate it, please file a bug with the debug-log details.
[17:25] <maaudet> rick_h_: Will do, I replicated it 3 times already, I'm going to put all the details in the bug report
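Capturing the controller-side detail for such a report can be done roughly like this (a sketch; the output file names are arbitrary):

    # dump the controller model's full log history for attachment to the bug
    juju debug-log -m controller --replay --no-tail > controller-debug.log
    # record the controller machines' status as well
    juju status -m controller --format yaml > controller-status.yaml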
[17:26] <rick_h_> maaudet: ty
[17:54] <kwmonroe> maaudet: just reading the 2.4 release notes, it says this about juju-ha-space bootstrap config: When enabling HA, this value must be set where member machines in the HA set have more than one IP address available for MongoDB use, otherwise an error will be reported.
[17:54] <kwmonroe> maaudet: could it be you're missing that config? https://docs.jujucharms.com/2.4/en/controllers-config
[17:57] <kwmonroe> maaudet: here's the actual section that talks about juju-ha-space in more deats: https://docs.jujucharms.com/2.4/en/controllers-config#controller-related-spaces.  looks like it's only a "must" if your controller has multiple ip addrs. not sure if that applies to you.
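For reference, setting that config and retrying HA looks roughly like this (a sketch; the space name is a placeholder, and as noted below it may only help on substrates where Juju spaces are fully supported):

    # point controller/MongoDB traffic at a specific space, then retry HA
    juju controller-config juju-ha-space=my-ha-space
    juju enable-ha -n 3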
[17:58] <maaudet> kwmonroe: I tried it, but when I run juju controller-config juju-ha-space=my-space it says that my machine is not part of that space, but when I check juju show-machine 0 it says that it's in that space
[17:59] <rick_h_> maaudet: kwmonroe so that space stuff is only on MAAS since AWS doesn't know about spaces tbh
[17:59] <maaudet> that's what I figured
[18:00] <kwmonroe> aaaah, thx rick_h_.  i'll go back to my cave now.
[18:00] <rick_h_> kwmonroe: all good
[18:00] <rick_h_> kwmonroe: heads up, juju show thurs (since the holiday wed) to celebrate 2.4
[18:00] <rick_h_> and hopefully some other fun stuff to show
[18:00] <kwmonroe> w00t
[18:00] <rick_h_> bdx: zeestrat ^
[18:01] <bdx> rick_h: good deal, count me in
[18:01] <bdx> congrats on the 2.4 all!!
[18:02] <rick_h_> woooo!
[18:04] <pmatulis> rick_h_, maaudet, kwmonroe: i should add MAAS & AWS only for that stuff on controller spaces
[18:04] <rick_h_> pmatulis: yea, I was just asking folks about that as it *should* work anywhere we support spaces but seems it's MAAS-only atm
[18:04] <pmatulis> but the main docs do link to the network-space page
[18:04] <pmatulis> which does say MAAS/AWS only
[18:15] <maaudet> I'm on AWS actually
[18:22] <pmatulis> maaudet, in Juju did you create a space for an available AWS subnet?
[18:23] <maaudet> pmatulis: Yes, 2 subnets for 1 space, both subnets were in 2 different AZs
[18:26] <pmatulis> maaudet, and 'juju spaces' looks fine?
[18:36] <maaudet> pmatulis: I took down the controller for now, but yes, I could see the right ranges in the right space + 2 FAN network ranges
[19:04] <cory_fu> rick_h_: I'm trying to help someone debug the controller returning "could not fetch relations: cannot get remote application "kubeapi-load-balancer": read tcp 172.31.5.119:36590->172.31.5.119:37017: i/o timeout" and came across https://bugs.launchpad.net/juju/+bug/1644011  They're also having `juju status` hang, but checking on the controller they still see a /usr/lib/juju/mongo3.2/bin/mongod process.  Any clues on what could cause that or how to track it down?
[19:04] <mup> Bug #1644011: juju needs improved error messaging when controller has connection issues <usability> <juju:Triaged> <https://launchpad.net/bugs/1644011>
[19:05] <rick_h_> cory_fu: hmm, so is this a cross-model relation?
[19:05] <cory_fu> rick_h_: Nope.  Just a normal relation in a CDK deployment
[19:05] <rick_h_> cory_fu: hmm, someone else was getting an i/o timeout today setting up HA.
[19:05] <rick_h_> cory_fu: I'm trying to see if I can replicate it and see what's up
[19:05] <cory_fu> rick_h_: This was mid-deployment with conjure-up
[19:05] <rick_h_> cory_fu: with 2.4?
[19:05] <cory_fu> Not sure
[19:06] <rick_h_> cory_fu: please check, I'm wondering if there's an issue in 2.4 around this
[19:07] <cory_fu> Asked, waiting on a response
[19:07] <rick_h_> cory_fu: k
[19:19] <zeestrat> rick_h_: thanks for the invite. Cruising around Sweden atm so looking forward to watching when back :)
[19:24] <rick_h_> zeestrat: oooh have fun!
[19:28] <cory_fu> rick_h_: Apparently, it's Juju 2.3.8, and the juju-db service seems to still be running.
[19:47] <rick_h_> cory_fu: ok, and 172.31.5.119 is the controller IP address?
[19:47] <cory_fu> Yes, the cloud internal IP address.
[19:48] <cory_fu> rick_h_: Just like in the bug that you filed about reporting db connection errors better, it seems to be the controller timing out talking to the db, but in this case the DB seems to be up and running.
[19:49] <cory_fu> At least, systemctl is reporting it as running, and there were no obvious errors in the log, with update messages from the controller in there after the time of the error
[19:49] <rick_h_> cory_fu: did we try to restart it?
[19:49] <rick_h_> cory_fu: as well as the jujud?
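On the controller machine, checking and bouncing those services would look something like this (a sketch; the unit names assume a Juju 2.x controller whose machine agent runs as jujud-machine-0):

    # check the database and machine agent services on the controller
    sudo systemctl status juju-db jujud-machine-0
    # restart the database first, then the agent
    sudo systemctl restart juju-db
    sudo systemctl restart jujud-machine-0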
[19:51] <cory_fu> rick_h_: Unfortunately no.  The person in question apparently wanted to just tear it down and redeploy, since this happened during deployment.  I asked about trying to get more debugging info, but it seemed like they were trying to get to EOD before the holiday
[19:51] <cory_fu> rick_h_: They said that, if it happens again, they'll follow up on Thursday
[19:51] <rick_h_> cory_fu: ah ok. Yea sorry. I'd like to get to the bottom of it. Feel free to connect us more directly if we can help
[19:52] <cory_fu> rick_h_: Will do
[20:41] <thumper> babbageclunk: ping
[20:42] <thumper> bug 1779904 - did we not test the rc upgrades with the upgrade step?
[20:42] <thumper> also the upgrade steps should be idempotent
[20:42] <mup> Bug #1779904: 2.4.0 upgrade step failed: bootstrapping raft cluster <juju:New> <https://launchpad.net/bugs/1779904>
[21:23] <thumper> rick_h_: re bug 1779897, looks like a potential recent lxd change
[21:23] <mup> Bug #1779897: container already exists <cdo-qa> <foundations-engine> <juju:New> <https://launchpad.net/bugs/1779897>
[21:23] <thumper> care to put a card on your board to track?
[21:24] <rick_h_> thumper: rgr, on it
[21:26] <babbageclunk> thumper: crap
[21:26] <babbageclunk> No, I didn't test upgrading from an rc to 2.4.0
[21:28] <babbageclunk> Ah, and because it wasn't a state upgrade I didn't use the standard check-upgraded-data test for steps, which would have checked it was idempotent. Bugger.
[21:30] <babbageclunk> thumper: ok, I'll change the upgrade step now
[21:31] <thumper> babbageclunk: thanks
[21:34] <veebers> thumper: does that mean we'll be releasing 2.4.1 sooner than expected?
[22:38] <babbageclunk> thumper: can you review https://github.com/juju/juju/pull/8886 please/
[22:38] <babbageclunk> ?
[22:40] <babbageclunk> I'm just testing both cases now
[23:21] <thumper> babbageclunk: reviewed, just wondering if checking the existence of the directory is sufficient to consider raft bootstrapped?
[23:23] <babbageclunk> thumper: I think so - that directory is created in the process of creating the logstore or snapshot store, immediately before the bootstrapping.
[23:26] <thumper> ok
[23:40] <babbageclunk> I'll put that on the PR for posterity too.
