=== narindergupta is now known as narinderguptamac
[00:32] Hi all, would anyone be interested in collaborating to charm up Jitsi and BigBlueButton?
[02:29] wallyworld: ping
[02:29] yo
[02:30] I think you have a bug in the 2.8 upgrade step for the tasks sequence
[02:30] shouldn't the tasks sequence be bumped for every model?
[02:30] ok, will fix, what's the issue?
[02:30] probs, I'll need to check
[02:31] do we create the task id from the sequence value +1?
[02:31] is that why we're bumping the sequence id?
[02:32] the sequence starting value and increment logic changed
[02:32] so to avoid doubling up when using 2.8 on a 2.7 db, the sequence was incremented
[02:32] otherwise the latest sequence number could have been used twice
[02:32] that didn't answer my question
[02:33] it doesn't make sense to me why we actually have this upgrade step
[02:34] because we went from calling sequence() to sequenceWithMin()
[02:35] so the starting id is 1, not 0
[02:36] and if you compare the old logic vs the new logic, running 2.8 would have reused an existing task id
[02:36] I haven't got the exact code handy
[02:37] * thumper is looking at it now
[02:37] the code has moved as well
[02:38] rather, additional use of the task sequence was added
[02:39] ok
[02:39] action id used to add 1 to sequence()
[02:39] I do think that the upgrade step needs to be run across all models
[02:39] which started from 0
[02:39] right now it only does the controller model
[02:39] yeah, it does
[02:39] which is a bug :-(
[02:39] * thumper is writing an upgrade step now for unit machine id denormalisation
[02:40] drive by? :-)
[03:50] 1st pass at a "juju personas" document: https://discourse.jujucharms.com/t/juju-user-personas/2808
[07:56] manadart: ping
[07:57] or maybe stickupkid
=== balloons6 is now known as balloons
[10:24] Hi, I use juju with an lxd cluster. I've now removed one lxd-server from the cluster. Where can I find the configuration for the IP address?
[10:24] I searched and changed all occurrences in `~/.local/share/juju`
[10:24] But I still get this error when I try to remove or add units, or when I call `juju storage`:
[10:24] `ERROR getting state: getting storage provider registry: Get https://...:8443/1.0: Unable to connect to: ...:8443`
[10:26] sdhd-sascha: is this the same one you signed up to?
[10:56] stickupkid: it was the initial lxd-server of the cluster, where I later bootstrapped juju.
[10:58] sdhd-sascha, yeah, so I believe the issue is that we only know about that IP. Changing it will require some mongo surgery. I think it's not unreasonable to file a bug for this.
[10:59] manadart, achilleasa_ that's correct ^
[10:59] ?
[10:59] stickupkid: where can I find the mongodb?
[10:59] stickupkid: Yep.
[11:00] sdhd-sascha, https://discourse.jujucharms.com/t/login-into-mongodb/309
[11:00] stickupkid: super. Thank you :-)
[11:00] sdhd-sascha, would you mind taking the time to file a bug? https://bugs.launchpad.net/juju/+bugs
[11:01] stickupkid: yes, I will. Maybe I'll have time to create a patch, too
[11:02] sdhd-sascha, I'm unsure what the best way to solve this is; maybe the lxd provider/environ should help the instancepoller
[11:02] Ah, ok. I will see
[11:03] Thank you
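For reference, here is a minimal sketch of the fix being discussed in the 02:30-02:40 exchange above: a 2.8 upgrade step that bumps the tasks sequence for every model rather than only the controller model. The names below (modelLister, modelSequencer, EnsureSequenceAtLeast) are hypothetical stand-ins used for illustration, not the actual juju/juju state API.
```
package upgrades

// modelSequencer is a hypothetical stand-in for the per-model sequence
// operations in juju's state layer.
type modelSequencer interface {
	// CurrentSequence returns the last value handed out for the named sequence.
	CurrentSequence(name string) (int, error)
	// EnsureSequenceAtLeast bumps the named sequence so the next value handed
	// out is at least min (the sequenceWithMin semantics).
	EnsureSequenceAtLeast(name string, min int) error
}

// modelLister is a hypothetical stand-in for "give me every model hosted by
// this controller", which is what the step above was missing.
type modelLister interface {
	AllModelSequencers() ([]modelSequencer, error)
}

// bumpTasksSequenceForAllModels shows the shape of the step: under 2.7 the
// action id was derived as sequence()+1 (with the sequence starting at 0),
// while 2.8 calls sequenceWithMin(1), so on a 2.7 database the latest id
// could be handed out twice unless the sequence is nudged forward — and that
// has to happen for every model, not just the controller model.
func bumpTasksSequenceForAllModels(models modelLister) error {
	seqs, err := models.AllModelSequencers()
	if err != nil {
		return err
	}
	for _, seq := range seqs {
		current, err := seq.CurrentSequence("tasks") // sequence name from the discussion above
		if err != nil {
			return err
		}
		// Make sure the next id issued is strictly greater than anything a
		// 2.7 controller could already have used.
		if err := seq.EnsureSequenceAtLeast("tasks", current+1); err != nil {
			return err
		}
	}
	return nil
}
```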
[11:39] manadart, jam: have you seen this stack trace before? https://paste.ubuntu.com/p/djJKY7dszN/
[11:40] 2020-03-18 11:38:45 INFO juju.cmd supercommand.go:83 running jujud [2.7.5.1 9825e246a9ec70e6551744d033802f19d78cabce gc go1.14]
[11:40] runtime: mlock of signal stack failed: 12
[11:40] runtime: increase the mlock limit (ulimit -l) or
[11:40] runtime: update your kernel to 5.3.15+, 5.4.2+, or 5.5+
[11:40] fatal error: mlock failed
[11:40] stickupkid: ^ I have not seen that before.
[11:40] sounds like a bug in focal if I had to guess
[11:40] jam, neither have I
[11:40] stickupkid: Nope.
[11:41] jam, just wrapping up my focal PR, I'll see if it happens again
[12:18] manadart, CR my changes since thumper last reviewed - https://github.com/juju/juju/pull/11332
[12:18] manadart, also I need help testing if possible
[12:19] stickupkid: Yep. Gimme a few.
[12:22] stickupkid: 1-line review pls? https://github.com/CanonicalLtd/juju-qa-jenkins/pull/402
[13:00] manadart, whilst you're there - https://github.com/juju/juju/pull/11333/files#diff-e8543713fc0c30ea33131d41edf815e7R16
[13:33] rick_h_: Were we going to hang on and talk test?
[13:35] manadart: yep, my bad
[13:35] * rick_h_ got distracted
[15:42] anyone noticed something wrong with changes to “juju model-config logging-config”?
[15:43] I updated the logging-config value and it didn't take effect in my model, though it's shown in the model config.
[15:43] had to bounce jujud for the machine and unit agents
[15:44] in develop
[15:58] Where is the juju user data stored, e.g. credential/login stuff?
[15:59] or in other words, how do I move a juju account?
[15:59] danboid: so it's in .local/share/juju
[16:00] danboid: to move an account the best thing is to just juju login to the controller from the new location
[16:00] danboid: but if you need credentials/etc you need the .local/share/juju/credentials.yaml and clouds.yaml
[16:01] rick_h_, Thanks
[16:12] rick_h_, I presume a single juju user account can be used from multiple machines then?
[16:39] danboid: yes, normally if you create a juju user and give them a password you can then juju login with that user/pass as long as you know the IP of the controller
[16:39] danboid: so it's pretty easy to have multiple clients against a single controller
[16:39] Great, thanks
[17:54] rick_h_, my PR landed in 2.7
[17:54] rick_h_, and so has manadart's
[17:57] stickupkid: woot woot, will watch for a ci run with that sha then ty!
[17:57] nps
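A note on the 11:40 mlock failure above: Go 1.14 works around a kernel signal-handling bug on affected kernels by mlocking signal stacks, and that mlock fails with errno 12 (ENOMEM) when the locked-memory limit (`ulimit -l`) is too small, hence the runtime's advice to raise the limit or move to a fixed kernel. Below is a standalone probe (not juju code) that trips the same condition; it uses golang.org/x/sys/unix and is only a sketch of the failure mode.
```
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// What `ulimit -l` reports, in bytes.
	var lim unix.Rlimit
	if err := unix.Getrlimit(unix.RLIMIT_MEMLOCK, &lim); err != nil {
		fmt.Println("getrlimit:", err)
		return
	}
	fmt.Printf("RLIMIT_MEMLOCK: soft=%d hard=%d\n", lim.Cur, lim.Max)

	// Probe with a buffer slightly larger than the soft limit, capped so an
	// "unlimited" setting doesn't make us allocate something absurd.
	size := uint64(64 << 20) // 64 MiB default probe
	if lim.Cur != unix.RLIM_INFINITY && lim.Cur+4096 < size {
		size = lim.Cur + 4096
	}
	buf := make([]byte, size)

	// With a small `ulimit -l` this fails with ENOMEM (errno 12), the same
	// errno behind "runtime: mlock of signal stack failed: 12".
	if err := unix.Mlock(buf); err != nil {
		fmt.Println("mlock failed:", err)
		return
	}
	fmt.Println("mlock of the probe buffer succeeded")
	_ = unix.Munlock(buf)
}
```
Raising the limit (for a systemd unit, via `LimitMEMLOCK=` in the service file) or upgrading to one of the kernel versions named in the error is the usual way out.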
[19:51] hey, hi,
[19:51] I tried to connect to the mongodb, but cannot find the correct password. Like described here:
[19:51] https://discourse.jujucharms.com/t/login-into-mongodb/309/5?u=sdhd
[20:00] Hmm, I'm just inside a juju controller, but can't find the `init.d` or `systemd` startup of the `mongod`? Who launches this daemon?
[20:01] sdhd-sascha: those are started via systemd as long as you're on ubuntu >= xenial
[20:02] rick_h_: thank you. I'm on 20.04 on the host. And the container is `bionic`
[20:02] sdhd-sascha: this is juju bootstrapped on localhost?
[20:02] yes
[20:03] sdhd-sascha: so to access the controller you do "juju switch controller; juju ssh 0"
[20:03] sdhd-sascha: that puts you on the controller machine which has mongodb and jujud running
[20:03] inside the controller, this command gives no output ... `# systemctl | grep mongo`
[20:03] But `pstree -p` shows that mongod is running...
[20:03] Nope, if I try `juju ssh`, I get this:
[20:03] $ juju ssh 0
[20:03] ERROR opening environment: Get https://10.0.0.8:8443/1.0: Unable to connect to: 10.0.0.8:8443
[20:04] sdhd-sascha: what version of Juju?
[20:04] rick_h_: that's because I deleted `10.0.0.8` from the cluster earlier...
[20:04] wait...
[20:04] 3.22 (13840) on every server
[20:04] I believe the service is called juju-db
[20:05] sdhd-sascha: what is the Juju version though? what does the version in juju status show you?
[20:05] timClicks: +1
[20:05] sorry
[20:05] ```
[20:05] juju 2.7.4 10906 latest/stable canonical✓ classic
[20:05] lxd 3.22 13840 latest/stable canonical✓ -
[20:05] ```
[20:06] sdhd-sascha: is your controller up and running? can you do juju status successfully?
[20:06] rick_h_: yes
[20:06] ```
[20:06] | juju-b1a552-0 | RUNNING | 10.0.2.92 (eth0) | | CONTAINER | 0 | mars |
[20:06] +----------------+---------+------------------------+------+-----------+-----------+----------+
[20:06] | juju-b1a552-1 | RUNNING | 10.0.2.94 (eth0) | | CONTAINER | 0 | merkur |
[20:06] +----------------+---------+------------------------+------+-----------+-----------+----------+
[20:06] | juju-b1a552-2 | RUNNING | 10.0.2.93 (eth0) | | CONTAINER | 0 | mars |
[20:06] ```
[20:06] sdhd-sascha: is it something you can pastebin? https://paste.ubuntu.com/
[20:07] sdhd-sascha: so that's lxc list, which is showing you the containers you have running
[20:07] looks like you've got a 3-machine HA cluster going?
[20:07] rick_h_: what exactly should I print?
[20:07] rick_h_: yes, 3 HA
[20:07] sdhd-sascha: `juju status`
[20:08] https://www.irccloud.com/pastebin/qGvYj9Bz/
[20:09] sdhd-sascha: hmm, ok, can you try that ssh command again with --debug: `juju ssh 0 --debug`?
[20:09] https://www.irccloud.com/pastebin/ahwq5OaF/
[20:10] sdhd-sascha: does that make any sense to you? It's confusing to me, as the lxd machines are all 10.0.2.xx, and I'm not sure what a tomcat port has to do with anything
[20:11] rick_h_: today your colleague said I should send a bug report about the failure with the deleted lxd-server.
[20:11] I will do that tomorrow.
[20:11] But right now I only want to access the mongodb, and I don't have a password...
[20:12] sdhd-sascha: ok, the services are run via /etc/systemd/system/juju-db.service and /etc/systemd/system/jujud-machine-0.service
[20:12] rick_h_: I bootstrapped the lxd cluster from 10.0.0.8. Then I bootstrapped juju from the same IP. Then I figured out that the machine didn't have enough RAM, so I deleted it from the cluster... ;-)
[20:13] rick_h_: Ah, thank you... I searched for `mongo`
[20:13] :-)
[20:13] sdhd-sascha: the db password will be on the machine in /var/lib/juju/agents/machine-0/agent.conf
[20:14] sdhd-sascha: that post is the right one: https://discourse.jujucharms.com/t/login-into-mongodb/309 but since your juju ssh is wonky you'll have to pull the script apart manually
[20:14] sdhd-sascha: you can skip the juju ssh to the machine since it's lxd and just lxc exec bash on the right instance
[20:14] rick_h_: my agent.conf's didn't have a `statepassword` value, like I said before
[20:14] juju-b1a552-0 from the status output
[20:15] sdhd-sascha: it should be there.
[20:15] rick_h_: ah, thank you :-) I found it ;-)
[20:15] sdhd-sascha: coolio
[20:15] :-)
[20:17] login works. Great, +1
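To round off the password hunt above, here is a stdlib-only sketch of the manual step rick_h_ describes: pulling `statepassword` out of /var/lib/juju/agents/machine-0/agent.conf from inside the controller container (reached via `lxc exec` when `juju ssh` is unavailable). It deliberately skips a YAML parser and assumes the password sits on a single `statepassword: ...` line, which matched this setup but is an assumption about the file layout rather than a documented format.
```
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path from the conversation above; adjust the machine number if the
	// controller is not machine-0.
	const path = "/var/lib/juju/agents/machine-0/agent.conf"

	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Naive line scan: assumes the password is a plain one-line scalar,
	// e.g. "statepassword: abc123".
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if strings.HasPrefix(line, "statepassword:") {
			fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "statepassword:")))
			return
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Fprintln(os.Stderr, "statepassword not found in", path)
	os.Exit(1)
}
```
With the password in hand, the rest of the mongo login steps from the discourse post linked above apply unchanged.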