/srv/irclogs.ubuntu.com/2020/03/18/#juju.txt

=== narindergupta is now known as narinderguptamac
00:32 <TimM[m]> Hi all, would anyone be interested in collaborating to charm up Jitsi and BigBlueButton?
02:29 <thumper> wallyworld: ping
02:29 <wallyworld> yo
02:30 <thumper> I think you have a bug in the 2.8 upgrade step for the tasks sequence
02:30 <thumper> shouldn't the tasks sequence be bumped for every model?
02:30 <wallyworld> ok, will fix, what's the issue?
02:30 <wallyworld> probs, i'll need to check
02:31 <thumper> do we create the task id from the sequence value +1?
02:31 <thumper> is that why we're bumping the sequence id?
02:32 <wallyworld> the sequence starting value and increment logic changed
02:32 <wallyworld> so to avoid doubling up when using 2.8 on a 2.7 db, the sequence was incremented
02:32 <wallyworld> otherwise the latest sequence number could have been used twice
02:32 <thumper> that didn't answer my question
02:33 <thumper> it doesn't make sense to me why we actually have this upgrade step
02:34 <wallyworld> because we went from calling sequence() to sequenceWithMin()
02:35 <wallyworld> so the starting id is 1, not 0
02:36 <wallyworld> and if you compare the old logic vs new logic, running 2.8 would have reused an existing task id
02:36 <wallyworld> i haven't got the exact code handy
02:37 * thumper is looking at it now
02:37 <wallyworld> the code has moved as well
02:38 <wallyworld> rather, additional use of the task sequence was added
02:39 <thumper> ok
02:39 <wallyworld> action id used to add 1 to sequence()
02:39 <thumper> I do think that the upgrade step needs to be run across all models
02:39 <wallyworld> which started from 0
02:39 <thumper> right now it only does the controller model
02:39 <wallyworld> yeah, it does
02:39 <wallyworld> which is a bug :-(
02:39 * thumper is writing an upgrade step now for unit machine id denormalisation
02:40 <wallyworld> drive by? :-)
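[editor's note] The reuse hazard discussed above can be sketched with a toy model. This is not juju's actual code; the names `sequence` and `sequence_with_min` only mirror the functions mentioned in the conversation, to show why the 2.8 upgrade step must bump the tasks sequence once per model:

```python
# Toy model of the id-allocation change: old callers used sequence() + 1
# (values from 0), new callers use sequence_with_min(..., 1) directly.

class Sequences:
    def __init__(self):
        self._next = {}  # sequence name -> next value to hand out

    def sequence(self, name):
        # Old behaviour: values start at 0 (0, 1, 2, ...).
        v = self._next.get(name, 0)
        self._next[name] = v + 1
        return v

    def sequence_with_min(self, name, minimum):
        # New behaviour: never hand out a value below `minimum`.
        v = max(self._next.get(name, 0), minimum)
        self._next[name] = v + 1
        return v

# 2.7 allocated task ids as sequence() + 1, so ids were 1, 2, 3, ...
seqs = Sequences()
old_ids = [seqs.sequence("tasks") + 1 for _ in range(3)]

# Without an upgrade step, the first 2.8 allocation repeats the last 2.7 id:
reused = seqs.sequence_with_min("tasks", 1)        # == old_ids[-1]

# Bumping the stored sequence once (what the upgrade step does, per model)
# keeps the ids unique:
fixed = Sequences()
for _ in range(3):
    fixed.sequence("tasks")                        # replay the 2.7 history
fixed._next["tasks"] += 1                          # the upgrade step
fixed_first = fixed.sequence_with_min("tasks", 1)  # no collision
```

The bug thumper points out below is that this bump was only applied to the controller model, not to every model in the controller.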
03:50 <TimM[m]> 1st pass at a "juju personas" document https://discourse.jujucharms.com/t/juju-user-personas/2808
07:56 <jam> manadart: ping
07:57 <jam> or maybe stickupkid
=== balloons6 is now known as balloons
10:24 <sdhd-sascha> Hi, i use juju with an lxd cluster. Now i removed one lxd-server from the cluster. Where can i find the configuration for the ip address?
10:24 <sdhd-sascha> I searched and changed all occurrences at `~/.local/share/juju`
10:24 <sdhd-sascha> But i still get this error when i try to remove or add units, or when i try to call `juju storage`
10:24 <sdhd-sascha> `ERROR getting state: getting storage provider registry: Get https://...:8443/1.0: Unable to connect to: ...:8443`
10:26 <stickupkid> sdhd-sascha: is this the same one you signed up to?
10:56 <sdhd-sascha> stickupkid: it was the initial lxd-server from the cluster, where i later bootstrapped juju.
10:58 <stickupkid> sdhd-sascha, yeah, so I believe the issue is that we only know about that ip. To change it will require some mongo surgery. I think it's not unreasonable to make a bug for this.
10:59 <stickupkid> manadart, achilleasa_ that's correct ^
10:59 <stickupkid> ?
10:59 <sdhd-sascha> stickupkid: where can i find the mongodb?
10:59 <manadart> stickupkid: Yep.
11:00 <stickupkid> sdhd-sascha, https://discourse.jujucharms.com/t/login-into-mongodb/309
11:00 <sdhd-sascha> stickupkid: super. Thank you :-)
11:00 <stickupkid> sdhd-sascha, would you mind taking the time to file a bug? https://bugs.launchpad.net/juju/+bugs
11:01 <sdhd-sascha> stickupkid: yes, i will. Maybe i have time to create a patch, too
11:02 <stickupkid> sdhd-sascha, I'm unsure what's the best way to solve this, maybe the lxd provider/environ should help the instancepoller
11:02 <sdhd-sascha> Ah, ok. I will see
11:03 <sdhd-sascha> Thank you
11:39 <stickupkid> manadart, jam you seen this stack trace before? https://paste.ubuntu.com/p/djJKY7dszN/
11:40 <jam> 2020-03-18 11:38:45 INFO juju.cmd supercommand.go:83 running jujud [2.7.5.1 9825e246a9ec70e6551744d033802f19d78cabce gc go1.14]
11:40 <jam> runtime: mlock of signal stack failed: 12
11:40 <jam> runtime: increase the mlock limit (ulimit -l) or
11:40 <jam> runtime: update your kernel to 5.3.15+, 5.4.2+, or 5.5+
11:40 <jam> fatal error: mlock failed
11:40 <jam> stickupkid: ^ I have not seen that before.
11:40 <jam> sounds like a bug in focal if I had to guess
11:40 <stickupkid> jam, neither have i
11:40 <manadart> stickupkid: Nope.
11:41 <stickupkid> jam, just wrapping up my focal PR, I'll see if it happens again
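[editor's note] This fatal error comes from the Go 1.14 runtime, which mlock()s signal stacks to work around a kernel bug present before 5.3.15/5.4.2/5.5; errno 12 (ENOMEM) means the locked-memory limit was exhausted. Besides upgrading the kernel or patch-level Go toolchain, one workaround is raising the limit for the affected service. A sketch, assuming a systemd-managed jujud unit (the drop-in path and unit name are illustrative; adjust to the actual unit, e.g. jujud-machine-0.service):

```ini
# /etc/systemd/system/jujud-machine-0.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

followed by `systemctl daemon-reload` and a restart of the unit. Interactive shells can instead raise the soft limit with `ulimit -l`, as the runtime message suggests.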
12:18 <stickupkid> manadart, CR my changes since thumper last reviewed - https://github.com/juju/juju/pull/11332
12:18 <stickupkid> manadart, also I need help testing if possible
12:19 <manadart> stickupkid: Yep. Gimme a few.
12:22 <hml> stickupkid: 1 line review pls? https://github.com/CanonicalLtd/juju-qa-jenkins/pull/402
13:00 <stickupkid> manadart, whilst you're there - https://github.com/juju/juju/pull/11333/files#diff-e8543713fc0c30ea33131d41edf815e7R16
13:33 <manadart> rick_h_: Were we going to hang on and talk test?
13:35 <rick_h_> manadart: yep my bad
13:35 * rick_h_ got distracted
15:42 <hml> anyone noticed something wrong with changes to “juju model-config logging-config”?
15:43 <hml> i updated the logging-config value and it didn’t take effect in my model, though it showed in the model config.
15:43 <hml> had to bounce jujud for machine and unit agents
15:44 <hml> in develop
15:58 <danboid> Where is the juju user data stored, eg credential/login stuff?
15:59 <danboid> or in other words, how do I move a juju account?
15:59 <rick_h_> danboid: so it's in .local/share/juju
16:00 <rick_h_> danboid: to move an account the best thing is to just juju login to the controller from a new location
16:00 <rick_h_> danboid: but if you need credentials/etc you need the .local/share/juju/credentials.yaml and clouds.yaml
16:01 <danboid> rick_h_, Thanks
16:12 <danboid> rick_h_, I presume a single juju user account can be used from multiple machines then?
16:39 <rick_h_> danboid: yes, normally if you create a juju user and give them a password you can then juju login with that user/pass as long as you know the IP of the controller
16:39 <rick_h_> danboid: so it's pretty easy to have multiple clients against a single controller
16:39 <danboid> Great, thanks
17:54 <stickupkid> rick_h_, my PR landed in 2.7
17:54 <stickupkid> rick_h_, and so has manadart's
17:57 <rick_h_> stickupkid: woot woot, will watch for a ci run with that sha then ty!
17:57 <stickupkid> nps
19:51 <sdhd-sascha> hey, hi,
19:51 <sdhd-sascha> i tried to connect to the mongodb, but cannot find the correct password. Like described here:
19:51 <sdhd-sascha> https://discourse.jujucharms.com/t/login-into-mongodb/309/5?u=sdhd
20:00 <sdhd-sascha> Hmm, i'm just inside a juju controller, but can't find the `init.d` or `systemd` startup of the `mongod`? Who launches this daemon?
20:01 <rick_h_> sdhd-sascha: those are started via systemd as long as you're on ubuntu >= xenial
20:02 <sdhd-sascha> rick_h_: thank you. I'm on 20.04 on the host. And the container is `bionic`
20:02 <rick_h_> sdhd-sascha: this is juju bootstrapped on localhost?
20:02 <sdhd-sascha> yes
20:03 <rick_h_> sdhd-sascha: so to access the controller you do "juju switch controller; juju ssh 0"
20:03 <rick_h_> sdhd-sascha: that puts you on the controller machine which has mongodb and jujud running
20:03 <sdhd-sascha> inside the controller, this command gives no output ... `# systemctl | grep mongo`
20:03 <sdhd-sascha> But `pstree -p` shows that mongod is running...
20:03 <sdhd-sascha> Nope, if i try `juju ssh`, i have this:
20:03 <sdhd-sascha> ```
20:03 <sdhd-sascha> $ juju ssh 0
20:03 <sdhd-sascha> ERROR opening environment: Get https://10.0.0.8:8443/1.0: Unable to connect to: 10.0.0.8:8443
20:04 <rick_h_> sdhd-sascha: what version of Juju?
20:04 <sdhd-sascha> rick_h_: that's because i deleted `10.0.0.8` before from the cluster...
20:04 <sdhd-sascha> wait...
20:04 <sdhd-sascha> 3.22 (13840) on every server
20:04 <timClicks> I believe the service is called juju-db
20:05 <rick_h_> sdhd-sascha: what is the Juju version though? what does the version in juju status show you?
20:05 <rick_h_> timClicks: +1
20:05 <sdhd-sascha> sorry
20:05 <sdhd-sascha> ```
20:05 <sdhd-sascha> juju               2.7.4                       10906  latest/stable     canonical✓  classic
20:05 <sdhd-sascha> lxd                3.22                        13840  latest/stable     canonical✓  -
20:05 <sdhd-sascha> ```
20:06 <rick_h_> sdhd-sascha: is your controller up and running? can you do juju status successfully?
20:06 <sdhd-sascha> rick_h_: yes
20:06 <sdhd-sascha> ```
20:06 <sdhd-sascha> | juju-b1a552-0  | RUNNING | 10.0.2.92 (eth0)       |      | CONTAINER | 0         | mars     |
20:06 <sdhd-sascha> +----------------+---------+------------------------+------+-----------+-----------+----------+
20:06 <sdhd-sascha> | juju-b1a552-1  | RUNNING | 10.0.2.94 (eth0)       |      | CONTAINER | 0         | merkur   |
20:06 <sdhd-sascha> +----------------+---------+------------------------+------+-----------+-----------+----------+
20:06 <sdhd-sascha> | juju-b1a552-2  | RUNNING | 10.0.2.93 (eth0)       |      | CONTAINER | 0         | mars     |
20:06 <sdhd-sascha> ```
20:06 <rick_h_> sdhd-sascha: is it something you can pastebin? https://paste.ubuntu.com/
20:07 <rick_h_> sdhd-sascha: so that's lxc list which is showing you the containers you have running
20:07 <rick_h_> looks like you've got a 3 machine HA cluster going?
20:07 <sdhd-sascha> rick_h_: what exactly should i print?
20:07 <sdhd-sascha> rick_h_: yes, 3 HA
20:07 <rick_h_> sdhd-sascha: `juju status`
20:08 <sdhd-sascha> https://www.irccloud.com/pastebin/qGvYj9Bz/
20:09 <rick_h_> sdhd-sascha: hmm, ok can you try that ssh command again with --debug `juju ssh 0 --debug`?
20:09 <sdhd-sascha> https://www.irccloud.com/pastebin/ahwq5OaF/
20:10 <rick_h_> sdhd-sascha: does that make any sense to you? It's confusing to me as the lxd machines are all 10.0.2.xx and not sure what a tomcat port has to do with anything?
20:11 <sdhd-sascha> rick_h_: today your colleague said i should send a bug report about the failure with the deleted lxd-server.
20:11 <sdhd-sascha> I will do tomorrow.
20:11 <sdhd-sascha> But now i only want to access the mongodb. But i don't have a password...
20:12 <rick_h_> sdhd-sascha: ok, the services are run via /etc/systemd/system/juju-db.service and /etc/systemd/system/jujud-machine-0.service
20:12 <sdhd-sascha> rick_h_: I bootstrapped the lxd-cluster from 10.0.0.8. Then i bootstrapped juju from the same ip. Then i figured out that the machine has not enough RAM. So i deleted it from the cluster... ;-) ...
20:13 <sdhd-sascha> rick_h_: Ah, thank you... i searched for `mongo`
20:13 <sdhd-sascha> :-)
20:13 <rick_h_> sdhd-sascha: the db password will be on the machine in /var/lib/juju/agents/machine-0/agent.conf
20:14 <rick_h_> sdhd-sascha: that post is the right one: https://discourse.jujucharms.com/t/login-into-mongodb/309 but since your juju ssh is wonky you'll have to pull the script apart manually
20:14 <rick_h_> sdhd-sascha: you can skip the juju ssh to the machine since it's lxd and just lxc exec bash on the right instance
20:14 <sdhd-sascha> rick_h_: my agent.conf's didn't have a value `statepassword` like i said before
20:14 <rick_h_> juju-b1a552-0 from the status output
20:15 <rick_h_> sdhd-sascha: it should be there.
20:15 <sdhd-sascha> rick_h_: ah, thank you :-) i found it ;-)
20:15 <rick_h_> sdhd-sascha: coolio
20:15 <sdhd-sascha> :-)
20:17 <sdhd-sascha> login works. great +1
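[editor's note] The procedure worked out above can be sketched as follows. This is a minimal illustration, assuming a stock controller layout as described in the conversation: the password lives under `statepassword` in /var/lib/juju/agents/machine-0/agent.conf, and the juju-db mongod listens on port 37017. The sample conf text and the helper `state_password` are invented here for illustration, not taken from juju:

```python
# Extract `statepassword` from an agent.conf-style file, then log in to
# the controller's mongodb with it (the mongo command is shown as a
# comment; run it inside the controller container).
import re

# Illustrative stand-in for /var/lib/juju/agents/machine-0/agent.conf:
SAMPLE_AGENT_CONF = """\
tag: machine-0
statepassword: s3cr3t-example
apipassword: other-example
"""

def state_password(conf_text):
    """Pull the `statepassword` value out of agent.conf-style YAML."""
    m = re.search(r"^statepassword:\s*(\S+)", conf_text, re.MULTILINE)
    if m is None:
        raise ValueError("no statepassword entry in agent.conf")
    return m.group(1)

pw = state_password(SAMPLE_AGENT_CONF)

# With the password in hand, the discourse post's mongo invocation is
# roughly (details in https://discourse.jujucharms.com/t/login-into-mongodb/309):
#   mongo --ssl --sslAllowInvalidCertificates \
#       --authenticationDatabase admin \
#       -u machine-0 -p "$PW" localhost:37017/juju
```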

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!