[01:08] <anastasiamac> wallyworld: decided to propose add-k8s separately (will probably do a pr per command) to avoid mudding the mud
[01:08] <anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10760 - add-k8s changes for ask-or-tell
[01:10] <wallyworld> ok
[01:19] <wallyworld> anastasiamac: lgtm, ty
[01:20] <anastasiamac> oh wallyworld  \o/ re: "cluster"... everywhere else in the command we actually say "k8s cloud"... should i still say "k8s cluster" or "k8s cloud" to be consistent..?
[01:23] <wallyworld> hmmm
[01:23] <wallyworld> k8s cloud
[01:23] <wallyworld> i prefer cluster but i think certain others wanted cloud
[01:24] <anastasiamac> wallyworld: ack
[01:49] <kelvinliu> wallyworld: +1 plz, thanks! https://github.com/juju/juju/pull/10761
[01:53] <wallyworld> ok
[01:54] <wallyworld> kelvinliu: glad that one was caught
[01:55] <kelvinliu> yeah, thanks!
[01:57] <anastasiamac> wallyworld: PTAL next in line - https://github.com/juju/juju/pull/10762 - remove-k8s changes
[02:00] <wallyworld> +1
[02:01] <anastasiamac> \o/\o/\o/
[03:08] <anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10764 - remove-cloud changes :D
[03:08] <wallyworld> in a minute, just doing a critical fix
[03:32] <wallyworld> thumper: https://github.com/juju/juju/pull/10765
[03:32] <wallyworld> still need to figure out the potential dependency issue
[03:36] <wallyworld> anastasiamac: lgtm with a suggestion
[04:26] <anastasiamac> wallyworld: for ur delight PTAL https://github.com/juju/juju/pull/10766 - remove-credential changes :D
[04:47] <kelvinliu> wallyworld: was about to fix the is_primary machine tag issue but found you already got a PR for this. here is a small enhancement, +1 plz thanks! https://github.com/juju/juju/pull/10767
[04:48] <kelvinliu> microk8s test is green now, just enabled the job on CI. I think the gke will be green as well once the k8s version fix has landed.
[05:02] <wallyworld> kelvinliu: jeez, that was a big change
[05:02] <kelvinliu> yes, it is! lol
[05:04] <wallyworld> kelvinliu: i merged directly since jobs are taking upwards of 40 minutes right now and it was an acceptance-test-only Python change
[05:05] <kelvinliu> yep, thanks
[05:07] <kelvinliu> `snap remove microk8s` fixed the 503 health check errors.  last two runs were all green. so merged the PR on qa repo to enable the job.
[07:35] <manadart> wallyworld: I know what the k8s issue is with not gating on the upgrade. My late email on Friday was poorly communicated in that the issue fixed by John wasn't the only outstanding item for my patch.
[07:36] <manadart> I have a couple of patches to put up.
[07:37] <stickupkid> babbageclunk, you around?
[07:42] <stickupkid> babbageclunk, send email instead
[07:58] <wallyworld> manadart: awesome that you have it in hand :-)
[08:00] <stickupkid> wallyworld, it looks like it's returning a complex error from juju rather than a string for pylibjuju
[08:00] <wallyworld> manadart: my PR to address the k8s issue is landing as we speak
[08:00] <wallyworld> stickupkid: you mean the libjuju storage issue?
[08:00] <stickupkid> wallyworld, yeah
[08:01] <wallyworld> i thought it looked like the params not being marshalled properly
[08:01] <wallyworld> ie the deploy storage args (a map) was not converted from a map of string to a map of struct
[08:01] <wallyworld> i didn't raise the issue - just updated the description
[08:02] <stickupkid> wallyworld, yeah, sorry, you're right, it's expecting a param rather than a string
[08:02] <wallyworld> in the bundle, it's a map of string, but in the api, it's a map of struct
[08:02] <wallyworld> but there's code to do it
[08:02] <stickupkid> wallyworld, ah, nice nice
[08:02] <wallyworld>         if storage:
[08:02] <wallyworld>             storage = {
[08:02] <wallyworld>                 k: client.Constraints(**v)
[08:02] <wallyworld>                 for k, v in storage.items()
[08:02] <wallyworld>             }
[08:02] <wallyworld> appears to be *just* for bundles perhaps
[08:03] <wallyworld> so maybe there's a code path in bundle deploy that doesn't invoke that conversion, not sure yet
[08:06] <stickupkid> wallyworld, also it might be an issue where i've pinned the facades too aggressively, so it might be worth checking those out
[08:07] <stickupkid> wallyworld, https://github.com/juju/python-libjuju/blob/master/juju/client/connection.py#L20-L118
[08:07] <wallyworld> stickupkid: that storage functionality was introduced in 1.24 so unlikely to be that
[08:07] <stickupkid> wallyworld, that's good to know
[08:08] <wallyworld> probs a long undiscovered issue in libjuju, but i haven't diagnosed fully
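The dict-comprehension conversion wallyworld pastes above runs only on the bundle path; a minimal sketch of applying it unconditionally on every deploy path might look like the following (`StorageConstraint` here is a hypothetical stand-in for libjuju's `client.Constraints`, not the real class):

```python
from dataclasses import dataclass


@dataclass
class StorageConstraint:
    """Hypothetical stand-in for libjuju's client.Constraints struct."""
    pool: str = ""
    count: int = 0
    size: int = 0


def normalize_storage(storage):
    """Convert a map of plain dicts (as parsed from a bundle or the CLI)
    into a map of structured constraints, which is what the deploy API
    expects; values that are already structured pass through unchanged."""
    if not storage:
        return storage
    return {
        k: v if isinstance(v, StorageConstraint) else StorageConstraint(**v)
        for k, v in storage.items()
    }
```

Running the conversion on both the single-charm and bundle paths would avoid the mismatch discussed above, where the API receives a map of strings instead of a map of structs.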
[09:02] <nammn_de> manadart achilleasa I got a pr review regarding skipping caas on upgrades. https://github.com/juju/juju/pull/10696#discussion_r336822311 Sadly I cannot 100% follow what the difference between applications and the others is. Currently I just skip if it's kube, but there seems to be an easier way I don't know. Does someone have a pointer?
[09:04] <nammn_de> plan is to skip models and controller (just everything) related to kube on upgradesteps
[09:04] <Fallenour> nammn_de, wallyworld stickupkid rick_h Im having an issue with a for loop:
[09:05] <Fallenour> for i in $(seq 1 3); do lxc launch ubuntu:x "saltmaster-00${i}"; done
[09:05] <Fallenour> it's telling me this isn't correct, but it should be.
[09:06] <Fallenour> am I missing something here?
[09:08] <stickupkid> Fallenour, bash or shell, if bash, that should work
[09:09] <stickupkid> Fallenour, https://paste.ubuntu.com/p/hTBvMdJwTq/
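The bash loop in question is valid as written; for reference, the same naming scheme sketched in Python (note the original's literal `00` prefix would yield `saltmaster-0010` once `i` reaches 10, while zero-padding keeps the names three digits wide):

```python
import subprocess


def container_name(prefix: str, i: int) -> str:
    # Zero-padded to three digits: saltmaster-001 ... saltmaster-010.
    # (The bash version concatenates a literal "00" with $i instead.)
    return f"{prefix}-{i:03d}"


def launch_containers(prefix="saltmaster", count=3, image="ubuntu:x"):
    """Launch `count` LXD containers, one `lxc launch` call per name."""
    for i in range(1, count + 1):
        subprocess.run(
            ["lxc", "launch", image, container_name(prefix, i)],
            check=True,  # raise if lxc exits non-zero
        )
```

As Fallenour later works out, the loop itself was fine; the failure came from an explicit path on the command in front of it.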
[09:10] <manadart> nammn_de: thumper is saying you can remove that block at 129, because checking that the agent tag is of type machine satisfies this.
[09:10] <manadart> CAAS agents never have machine tags.
[09:11] <nammn_de> manadart: thanks 🦸‍♂️, ahhh why that? Do we have some kind of doc running around what kind of unit can have what kind of tag? For me thats pretty confusing tbh
[09:13] <manadart> jam, wallyworld: The reason k8s continues even though the DB upgrade worker fails to start is that the lock is returned *unlocked* if we are already on the current version.
[09:13] <manadart> So wallyworld's patch is sufficient.
[09:13] <Fallenour> stickupkid, its bash. I think I know what the issue was. I was using an explicit path for a command in front of it, which is why it was failing. Im testing to see if adding that bin to the PATH env will solve the problem
[09:16] <wallyworld> manadart: ah, yes, that makes sense
[09:17] <Fallenour> hey wallyworld ;) I see you put several lines of code in chat. I see manadart didnt eat your face for more than one line. Im assuming you work for canonical or the juju team o.o
[09:18] <wallyworld> i do
[09:18] <Fallenour> YAAASS *scribbles notes* *adds to birthday cake list*
[09:18] <Fallenour> I think theres at least 8-11 of you in here o.o
[09:19] <Fallenour> but ive only got like...4 :(
[09:19] <Fallenour> you guys do way too much for the community to not at least get birthday cake o.o
[09:19] <wallyworld> \o/
[09:19] <manadart> :)
[09:19] <Fallenour> \o/
[09:19] <Fallenour> 8D
[09:19] <wallyworld> glad you like juju :-)
[09:20] <Fallenour> But yea, I herd from the grape vine that canonical was hiring o.o
[09:20] <Fallenour> Oh, I dont like Juju
[09:21] <Fallenour> Id marry juju if she were about 5'2, 135 lbs. shes beyond amazing personality wise, and runs like a champ. After HA, like...all my problems died. Well, most of them.
[09:21] <Fallenour> its the best damn solution I think ive ever seen cloud wise. I converged it with Saltstack, and am deploying that to production now on live to about 15,000 people in my network.
[09:22] <Fallenour> I plan on doing talks on it all year this year, integrating it in systems, storage, and security.
[09:22] <wallyworld> there's an opening we're interviewing for in the APAC timezone
[09:22] <Fallenour> Id go for it, but I honestly dont think im good enough. You guys are several levels higher than I am in terms of skillset.
[09:22] <wallyworld> if you need help with talks etc we have a developer advocate on the team who would love to help if needed
[09:23] <Fallenour> omg thatd be AMAZING
[09:23] <Fallenour> One of the talks Im looking at doing is a full CI/CD stack deployment with juju to CI/CD in a can kinda concept.
[09:23] <wallyworld> he is a busy person so it depends on the request etc, but feel free to ask and he can do what he can
[09:24] <Fallenour> im building a new out-of-the-box solution that uses LXD containers instead of docker containers for kubernetes with git, which lets security teams be appeased on their security concerns with service containers.
[09:25] <wallyworld> he's in NZ. you can ping him here to ask about stuff. his nic is timclicks
[09:26] <Fallenour> OOOH its TIM! Tim is the bees knees!
[09:26] <wallyworld> we don't necessarily have any pre-canned material but can offer general advice etc
[09:26] <Fallenour> wallyworld, oh thats totally fine! I generally find people dont like pre-canned stuff, so thats actually a good thing
[09:26] <wallyworld> great
[09:27] <Fallenour> I do a lot of international security conference talks already, and one thing Ive noticed is they dont respond well to canned anything. unless its a canned solution for deployment purposes with a custom talk on top
[09:27] <Fallenour> ill hit him up once hes on. For now, I have to figure out why sshing into one system logs me into another one? o.O
[09:28] <Fallenour> I have no idea how thats even possible honestly.
[09:28] <wallyworld> maybe the model you think you're connecting to is not the one you're on
[09:28] <wallyworld> you can use juju models and the one with * is the current
[09:28] <wallyworld> or use the -m to specify explicitly
[09:28] <Fallenour> oh its an ssh issue with ubuntu base, its not a juju issue. juju ssh always works reliably.
[09:28] <wallyworld> ok
[09:29] <Fallenour> btw, while its on my mind. do you guys still do the juju live events?
[09:29] <Fallenour> I think those are the greatest things since elderberry jam
[09:29] <wallyworld> the Juju Show?
[09:29] <Fallenour> yea
[09:29] <Fallenour> id love to be on one of those one day.
[09:29] <wallyworld> yeah, tim (clicks) and rick are working on a new batch
[09:30] <wallyworld> anyone can join in
[09:30] <Fallenour> omg
[09:30] <Fallenour> what?
[09:30] <Fallenour> o.O
[09:30] <Fallenour> how do I sign up?
[09:30] <wallyworld> good question, i'm not totally sure
[09:30] <Fallenour> and is there a subscription thing I can sign up for?
[09:30] <wallyworld> rick_h is the person to ask
[09:30] <Fallenour> 8D
[09:30] <wallyworld> he'll be on irc in a few hours
[09:30] <Fallenour> yaaaaasssss
[09:31] <Fallenour> rick_h, is awesome
[09:31] <wallyworld> i think they normally have a few people logged in and able to ask questions etc
[09:31] <wallyworld> he is
[09:31] <Fallenour> one thing Ive noticed about all canonical employees in general is they are all really happy
[09:31] <wallyworld> we're all peachy
[09:32] <Fallenour> canonical must be a great company or its the drugs in the free water, its gotta be.
[09:32] <Fallenour> only a handful of companies i know that are like that, saltstack, suse, canonical, riot
[09:32] <wallyworld> bit of both
[09:32] <nammn_de> manadart: just to make sure before i press the merge button. I removed the block below: https://github.com/juju/juju/pull/10696/files#diff-8bc810c7809469ea95764da958639d1aR121-R126
[09:37] <manadart> nammn_de: Looks fine.
[09:37] <nammn_de> manadart: 🏄‍♂️
[09:47] <Fallenour> quick question, but what service(s) should be running for lxd/lxc to work? @wallyworld manadart nammn_de
[09:47] <Fallenour> I just built an lxd cluster, and it was in HA and fine, then I built 3 containers, and it died.
[09:47] <Fallenour> all three of them. So much for HA :P
[09:53] <wallyworld> Fallenour: way past my EOD now, i'll leave it to others to answer as i need to get AFK
[09:56] <manadart> Fallenour: I think the daemon, the socket and possibly DNSmasq if the LXD bridge is managed.
[10:23] <Fallenour> manadart, I found out the issue is with the database, likely due to being a snap. Super frustrating that snaps are supposed to be stable, and my general experience is they are anything but. Im having to rebuild all three machines
[11:24] <nammn_de> stickupkid: got a min?
[11:25] <stickupkid> nammn_de, sure
[11:25] <nammn_de> stickupkid: Thinking about changing this function  https://github.com/juju/charm/blob/974f39ea8f706c25616d022f70838c862687d3ca/charmdir.go#L418
[11:25] <nammn_de> that it does not log anymore at all
[11:26] <nammn_de> so that it can be called n times without keep logging
[11:26] <nammn_de> problem: I want to log things at different levels (debug, error and warn)
[11:27] <nammn_de> one way to solve it is to return the log level as well, but this would change the return signature from 3 to 5, which is kind of not cool
[11:27] <stickupkid> nammn_de, that makes sense, maybe pass in a logger, then you can tell it to not log at all if you don't want it to
[11:27] <nammn_de> ahhh good one
[11:27] <stickupkid> nammn_de, or return a list of issues
[11:28] <nammn_de> oh, list of issues are nice too. which "only" makes it to 4. Like both approaches. Gonna try them out
[11:29] <nammn_de> Im going the first approach to let the things stay lean
[11:31] <nammn_de> stickupkid how would you tell a logger not to log before passing it in?
[11:31] <nammn_de> *loggo
[11:31] <stickupkid> nammn_de, is there not a dumb logger
[11:32] <stickupkid> nammn_de, or provide an interface for a logger and then pass an instance of the logging instance to it
[11:32] <stickupkid> nammn_de, similar to how the workers are done (thumper has done work in this area)
[11:33] <nammn_de> stickupkid: thanks gonna take a look
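The injected-logger pattern stickupkid suggests, sketched here in Python rather than the Go of juju/charm (the function and method names are illustrative, not the real charmdir API): the caller supplies the logger, and supplying a no-op one silences the function without widening its return signature.

```python
class NullLogger:
    """Discards everything; exposes the same surface as a real logger."""
    def debugf(self, msg, *args): pass
    def warningf(self, msg, *args): pass
    def errorf(self, msg, *args): pass


def check_charm_dir(path, logger=None):
    """Validate a charm directory, reporting via the injected logger
    instead of a package-level one, and returning the issues found so
    the function can be called repeatedly without re-logging."""
    logger = logger or NullLogger()
    logger.debugf("checking charm dir %s", path)
    issues = []
    if not path:
        issues.append("empty path")
        logger.errorf("charm dir path is empty")
    return issues
```

This also accommodates the "return a list of issues" alternative from the same exchange: callers that want logs pass a real logger, callers that want silence inspect the returned list.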
[11:40] <rick_h> Fallenour:  what's up?
[11:40]  * rick_h yawns
[11:41] <rick_h> Fallenour:  your cluster died?
[11:44] <Fallenour>  rick_h yup. It ate the orange squirrel cable of love, sailed off into the tuskegee, took a short walk on a long pier
[11:48]  * rick_h processes that for a while...
[11:48] <rick_h> Fallenour:  any hint on the issue?
[11:48] <Fallenour> rick_h, database connection issue.
[11:49] <rick_h> Fallenour:  for the lxd db? or juju to the mongodb?
[11:49] <Fallenour> rick_h, likely from snap/deb collision.
[11:49] <Fallenour> rick_h, lxd db, juju isnt at fault here
[11:49] <rick_h> oh hmmm, how did they collide?
[11:49] <Fallenour> rick_h, followed a juju tutorial on bootstrap deployment in ha, but didnt do lxd.migrate because it wasnt in the instructions :P
[11:50] <rick_h> Fallenour:  :(
[11:50] <Fallenour> rick_h, lesson learned, lxd does NOT in fact like to use deb/snap in combination. it is very much so a dinner menu only kinda gal.
[11:50] <rick_h> no no no, agree it's a "pick one and only one"
[11:50] <Fallenour> rick_h, took me 10+ minutes to run systemctl snap.lxd.daemon stop
[11:50] <Fallenour> just to kill the service
[11:51] <rick_h> :/ you have much more patience than me
[11:51] <Fallenour> I think what it was doing is creating a race condition. I think it was passing the for i loop as a command to each node, which caused that node to create a for loop for the next node
[11:51] <Fallenour> so it was infinitely trying to create 3 containers in loop
[11:51] <rick_h> how helpful!
[11:52] <Fallenour> rick_h, I know! It just really wanted to make sure I had enough salt masters :P
[11:52] <Fallenour> I guess it figured 30,000+ salt masters should suffice
[11:53] <Fallenour> I did in fact, not concur, so we had a splitting of ways, ergo a complete rebuild inside maas
[11:53] <rick_h> well for most people that'd be good, but nooooo you have to be all picky and stuff :P
[11:53] <rick_h> ouch, sorry for the trouble
[11:54] <Fallenour> rick_h, yeaaa :( I like my systems like I like my tacos, a little on the light side
[11:54] <rick_h> lol
[11:54] <Fallenour> rick_h, BUT!
[11:54]  * rick_h ducks
[11:55] <Fallenour> rick_h, its ok. because they have a lot more ram now, so this is good.
[11:55] <Fallenour> each has at least 96GB of ram apiece, so I can do the things with them now :)
[11:55] <rick_h> now you're cool with 30k masters?
[11:55] <rick_h> Fallenour:  nice! that's cool
[11:55] <Fallenour> rick_h, no, sadly but I am ok with 3 ^__^
[11:56] <Fallenour> rick_h, the good news is that once this is finally finished, Ill be bootstrapping the juju controllers onto them, and have a cluster in lxd. Ive come to the ultimate realization that I just dont need 3 physical dedicated controllers, they will never use the resources in the current state.
[11:56] <rick_h> ok, glad the world sucks a bit but has a bright side. I'm going to go make some morning coffee
[11:56] <Fallenour> rick_h, morning....coffee?
[11:57] <Fallenour> rick_h, Dont you mean all day coffee? o.O
[11:57] <rick_h> yes, the morning ritual that means it's morning time and the day is starting off ok
[11:57] <rick_h> hah
[11:57] <rick_h> no, cut off after lunch. Have to sleep you know
[11:57] <Fallenour> rick_h, mmmm, *starts looking around for the rick_h service restart command*
[11:58] <Fallenour> can anyone assist? I keep finding errors in my journalctl -u rick_h log files
[11:58] <Fallenour> rick_h, something something morning coffee?
[11:58] <Fallenour> XD
[11:58] <Fallenour> something something sleeping. cant have sleeping services o.o
[11:59] <Fallenour> my morning starts at 4 am EST, runs till 9-10PM EST
[13:16] <danboid> How do I use curtin to set the default DNS servers for all of my MAAS deployments?
[13:21] <rick_h> danboid:  can't you set the dns servers in MAAS itself?
[13:22] <rick_h> danboid:  under "settings, network services, DNS"
[13:24] <danboid> rick_h, That is already configured but it's never worked. For some reason I have to edit the netplan config post deployment and add in the DNS server for it to work
[13:25] <danboid> Maybe bind isn't correctly configured on my maas controller?
[13:28] <danboid> rick_h, Shouldn't MAAS error tho if bind was incorrectly configured?
[13:28] <rick_h> danboid:  I would think, I've not poked at it tbh other than setting it
[13:28] <rick_h> I guess I've never really checked it was set like I did in config
[13:32] <danboid> I'm going to try with DNSSEC explicitly disabled, see if that helps. It was set to auto
[13:49] <danboid> Turning off DNSSEC under MAAS hasn't fixed it
[13:53] <rick_h> danboid:  ok, yea I don't know. It might be good to bug the maas folks. If you want to use curtin to tweak things there is the cloud-init configy stuff you can do in Juju but sounds like a work-around
[13:54] <danboid> The docs give the impression cloud-init is more for one-off configs and curtin is what I want if I want to change the config of all deployments
[13:55] <rick_h> well there's a juju cloud-init thing that runs on any machine juju goes on
[13:56] <rick_h> danboid:  https://discourse.jujucharms.com/t/using-model-config-key-cloudinit-userdata/512
[14:42] <bdx> danboid, rick_h: add the dns server you want to use to the subnet details page
[14:45] <bdx> https://maas.io/docs/networking
[14:47] <danboid> Looks like bind is configured to only listen to localhost, despite it saying otherwise in /etc/bind/maas/named.conf.options.inside.maas
[14:50] <danboid> Yay! Fixed it!
[14:51] <rick_h> oooh, ty bdx
[14:51] <bdx> np
[14:51] <danboid> I just had to comment out the `listen on` line in /etc/bind/named.conf.options
[14:52] <danboid> Then restart bind
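The change danboid describes in /etc/bind/named.conf.options looks roughly like this (the surrounding option values are an assumption, not copied from his config; commenting out a `listen-on` restricted to 127.0.0.1 makes named answer on all interfaces, which is BIND's default when no `listen-on` is set):

```
options {
        directory "/var/cache/bind";

        // Was restricting named to localhost only; commented out so it
        // answers queries from deployed machines on the network:
        // listen-on { 127.0.0.1; };

        dnssec-validation auto;
};
```

After editing, `sudo systemctl restart bind9` (or the equivalent on the MAAS controller) applies the change, as danboid notes.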
[14:54] <nammn_de> stickupkid: still around? If yes might wanna take another review round?
[14:54] <nammn_de> https://github.com/juju/charm/pull/294
[14:59] <stickupkid> nammn_de, looking now
[15:03] <stickupkid> nammn_de, almost, just the naming I think now
[15:04] <achilleasa> hml: I think this bit of code conflates space IDs (defaults) with space names (givenBindings that comes from the client). I will try to fix it as it prevents me from applying the Merge -> merge change
[15:06] <hml> achilleasa:  which bit of code?
[15:06] <nammn_de> stickupkid: thanks, just to make sure. You suggest to extract the extra file and just put it into same `charmdir` file as well?
[15:06] <achilleasa> hml: https://github.com/juju/juju/blob/6e8f551b5305ec2f30d1910a0788db52cc30466d/apiserver/facades/client/application/deploy.go#L134-L162
[15:06] <stickupkid> nammn_de, yeah, along with renaming it from MockLogger
[15:07] <achilleasa> hml: the comment on L135-136 is misleading. DefaultEndpointBindingsForCharm returns space IDs
[15:08] <achilleasa> hml: we can either get everything as space names and pass it to NewBindings or translate givenBindings to spaceID before calling this func. What do you think?
[15:09] <hml> achilleasa:  pondering
[15:09] <achilleasa> hml: Is the model config param for the default space storing a name or a space ID?
[15:09] <achilleasa> nammn_de: ^ ?
[15:10] <hml> achilleasa:  a name
[15:10] <nammn_de> stickupkid: done
[15:10] <hml> the space id will always be 0
[15:11] <achilleasa> hml: I propose we change it to use space names since this is the facade
[15:12] <achilleasa> or more accurately, make DefaultEndpointBindingsForCharm return a Bindings object
[15:13] <nammn_de> stickupkid: while we are at it, we were planning to land https://bugs.launchpad.net/juju/+bug/1846240 time for a chat about it? I can see that you opened the bug as well as the inital PR
[15:13] <hml> achilleasa:  yes to the DefaultEndpointBindingsForCharm
[15:13] <mup> Bug #1846240: Add support for Windows 2019 series  <juju:Triaged> <https://launchpad.net/bugs/1846240>
[15:17] <stickupkid> nammn_de, done
[15:24] <nammn_de> stickupkid: thanks man! still got time today to chat about the windows bug? Or gonna go soon?
[15:33] <stickupkid> nammn_de, give us 15
[15:37] <nammn_de> stickupkid: us? if you mean in 15 min, im flexible :D just ping me if you can
[15:41] <achilleasa> hml: so it turns out fixing that needs much more effort than I thought (lots of validation code that assumes space names). I will just add the fix for the isModified for now and push the PR
[15:41] <achilleasa> hml: I am kinda surprised that deploy --bind worked while running my QA steps...
[15:41] <hml> achilleasa:  rgr
[15:44] <achilleasa> hml: can you take a look? https://github.com/juju/juju/pull/10734 (I am releasing the guimaas nodes ATM)
[15:45] <hml> achilleasa:  yes, already started with the cli pieces… will continue ;-)
[15:46] <achilleasa> hml: if you want we can pair tomorrow on the cleanup for the facade bits...
[15:46] <hml> achilleasa: sounds like a plan
[15:47] <nammn_de> stickupkid: its me again for the related pr in our main repo https://github.com/juju/juju/pull/10744/files
[15:48] <stickupkid> nammn_de, https://media.giphy.com/media/RoajqIorBfSE/giphy.gif
[15:49] <nammn_de> stickupkid: no ones safe :D, btw. ignore the linting stuff, I will fix them by then
[15:49] <stickupkid> nammn_de, looks good to me
[15:50] <stickupkid> nammn_de, it doesn't like your lint issues though :D
[15:50] <stickupkid> nammn_de, got a dep issue going on
[15:51] <nammn_de> stickupkid: yeah i smell that I may have forgotten to add my gopkg lock :D
[22:52] <Fallenour> ughhh
[22:52] <Fallenour> Ive been fighting with this all day
[22:52] <Fallenour> does anyone know why LXD wouldnt see a MAAS API thats working?
[22:52] <Fallenour> Im so tired of issues like these @____@
[23:39] <wallyworld> Fallenour: you mean that from inside the LXD container you cannot reach the MAAS API endpoint?
[23:40] <hpidcock> wallyworld: I've pushed that rebase
[23:41] <wallyworld> ta
[23:41] <hpidcock> time for more coffee