/srv/irclogs.ubuntu.com/2019/10/21/#juju.txt

[01:08] <anastasiamac> wallyworld: decided to propose add-k8s separately (will probably do a pr per command) to avoid mudding the mud
[01:08] <anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10760 - add-k8s changes for ask-or-tell
[01:10] <wallyworld> ok
[01:19] <wallyworld> anastasiamac: lgtm, ty
[01:20] <anastasiamac> oh wallyworld \o/ re: "cluster"... everywhere else in the command we actually say "k8s cloud"... should i still say "k8s cluster" or "k8s cloud" to be consistent..?
[01:23] <wallyworld> hmmm
[01:23] <wallyworld> k8s cloud
[01:23] <wallyworld> i prefer cluster but i think certain others wanted cloud
[01:24] <anastasiamac> wallyworld: ack
[01:49] <kelvinliu> wallyworld: +1 plz, thanks! https://github.com/juju/juju/pull/10761
[01:53] <wallyworld> ok
[01:54] <wallyworld> kelvinliu: glad that one was caught
[01:55] <kelvinliu> yeah, thanks!
[01:57] <anastasiamac> wallyworld: PTAL next in line - https://github.com/juju/juju/pull/10762 - remove-k8s changes
[02:00] <wallyworld> +1
[02:01] <anastasiamac> \o/\o/\o/
[03:08] <anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10764 - remove-cloud changes :D
[03:08] <wallyworld> in a minute, just doing a critical fix
[03:32] <wallyworld> thumper: https://github.com/juju/juju/pull/10765
[03:32] <wallyworld> still need to figure out the potential dependency issue
[03:36] <wallyworld> anastasiamac: lgtm with a suggestion
[04:26] <anastasiamac> wallyworld: for ur delight PTAL https://github.com/juju/juju/pull/10766 - remove-credential changes :D
[04:47] <kelvinliu> wallyworld: was about to fix the is_primary machine tag issue but found you already got a PR for this. here is a small enhancement, +1 plz thanks! https://github.com/juju/juju/pull/10767
[04:48] <kelvinliu> microk8s test is green now, just enabled the job on CI. I think the gke will be green as well once the k8s version fix lands
[05:02] <wallyworld> kelvinliu: jeez, that was a big change
[05:02] <kelvinliu> yes, it is! lol
[05:04] <wallyworld> kelvinliu: i merged directly since jobs are taking upwards of 40 minutes right now and it was an acceptance-test-only Python change
[05:05] <kelvinliu> yep, thanks
[05:07] <kelvinliu> `snap remove microk8s` fixed the 503 health check errors. last two runs were all green. so merged the PR on the qa repo to enable the job.
[07:35] <manadart> wallyworld: I know what the k8s issue is with not gating on the upgrade. My late email on Friday was poorly communicated in that the issue fixed by John wasn't the only outstanding item for my patch.
[07:36] <manadart> I have a couple of patches to put up.
[07:37] <stickupkid> babbageclunk, you around?
[07:42] <stickupkid> babbageclunk, send email instead
[07:58] <wallyworld> manadart: awesome that you have it in hand :-)
[08:00] <stickupkid> wallyworld, it looks like it's returning a complex error from juju rather than a string for pylibjuju
[08:00] <wallyworld> manadart: my PR to address the k8s issue is landing as we speak
[08:00] <wallyworld> stickupkid: you mean the libjuju storage issue?
[08:00] <stickupkid> wallyworld, yeah
[08:01] <wallyworld> i thought it looked like the params not being marshalled properly
[08:01] <wallyworld> ie the deploy storage args (a map) were not converted from a map of string to a map of struct
[08:01] <wallyworld> i didn't raise the issue - just updated the description
[08:02] <stickupkid> wallyworld, yeah, sorry, you're right, it's expecting a param rather than a string
[08:02] <wallyworld> in the bundle, it's a map of string, but in the api, it's a map of struct
[08:02] <wallyworld> but there's code to do it
[08:02] <stickupkid> wallyworld, ah, nice nice
[08:02] <wallyworld>         if storage:
[08:02] <wallyworld>             storage = {
[08:02] <wallyworld>                 k: client.Constraints(**v)
[08:02] <wallyworld>                 for k, v in storage.items()
[08:02] <wallyworld>             }
[08:02] <wallyworld> appears to be *just* for bundles perhaps
[08:03] <wallyworld> so maybe there's a code path in bundle deploy that doesn't invoke that conversion, not sure yet
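The conversion wallyworld pasted can be sketched as a self-contained example. Hedged: `Constraints` below is a stand-in for libjuju's generated `client.Constraints` type, and its field names are illustrative, not the real class definition.

```python
# Stand-in for libjuju's generated client.Constraints type; the real
# class lives in the juju.client package and has many more fields.
class Constraints:
    def __init__(self, pool=None, count=None, size=None):
        self.pool = pool
        self.count = count
        self.size = size


def convert_storage(storage):
    """Mirror the snippet pasted above: turn a bundle-style map of
    dicts into a map of Constraints objects, as the deploy API expects."""
    if storage:
        storage = {
            k: Constraints(**v)
            for k, v in storage.items()
        }
    return storage


converted = convert_storage({"data": {"pool": "rootfs", "count": 1}})
```

If a deploy code path skips this step, the API receives plain strings/dicts where it expects structs, which matches the marshalling symptom described above.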
[08:06] <stickupkid> wallyworld, also it might be an issue where i've pinned the facades too aggressively, so it might be worth checking those out
[08:07] <stickupkid> wallyworld, https://github.com/juju/python-libjuju/blob/master/juju/client/connection.py#L20-L118
[08:07] <wallyworld> stickupkid: that storage functionality was introduced in 1.24 so unlikely to be that
[08:07] <stickupkid> wallyworld, that's good to know
[08:08] <wallyworld> probs a long-undiscovered issue in libjuju, but i haven't diagnosed fully
[09:02] <nammn_de> manadart achilleasa I got a pr review regarding skipping caas on upgrades. https://github.com/juju/juju/pull/10696#discussion_r336822311 Sadly I cannot 100% follow where the difference between applications and the others is. Currently I just skip if it's kube, but there seems to be an easier way I don't know. Someone got a pointer?
[09:04] <nammn_de> plan is to skip models and controller (just everything) related to kube on upgradesteps
[09:04] <Fallenour> nammn_de, wallyworld stickupkid rick_h I'm having an issue with a for loop:
[09:05] <Fallenour> for i in $(seq 1 3); do lxc launch ubuntu:x "saltmaster-00${i}"; done
[09:05] <Fallenour> it's telling me this isn't correct, but it should be.
[09:06] <Fallenour> am I missing something here?
[09:08] <stickupkid> Fallenour, bash or shell? if bash, that should work
[09:09] <stickupkid> Fallenour, https://paste.ubuntu.com/p/hTBvMdJwTq/
[09:10] <manadart> nammn_de: thumper is saying you can remove that block at 129, because checking that the agent tag is of type machine satisfies this.
[09:10] <manadart> CAAS agents never have machine tags.
[09:11] <nammn_de> manadart: thanks 🦸‍♂️, ahhh why is that? Do we have some kind of doc around on what kind of unit can have what kind of tag? For me that's pretty confusing tbh
[09:13] <manadart> jam, wallyworld: The reason k8s continues even though the DB upgrade worker fails to start is that the lock is returned *unlocked* if we are already on the current version.
[09:13] <manadart> So wallyworld's patch is sufficient.
[09:13] <Fallenour> stickupkid, it's bash. I think I know what the issue was. I was using an explicit path for a command in front of it, which is why it was failing. I'm testing to see if adding that bin to the PATH env will solve the problem
[09:16] <wallyworld> manadart: ah, yes, that makes sense
[09:17] <Fallenour> hey wallyworld ;) I see you put several lines of code in chat. I see manadart didn't eat your face for more than one line. I'm assuming you work for canonical or the juju team o.o
[09:18] <wallyworld> i do
[09:18] <Fallenour> YAAASS *scribbles notes* *adds to birthday cake list*
[09:18] <Fallenour> I think there's at least 8-11 of you in here o.o
[09:19] <Fallenour> but i've only got like...4 :(
[09:19] <Fallenour> you guys do way too much for the community to not at least get birthday cake o.o
[09:19] <wallyworld> \o/
[09:19] <manadart> :)
[09:19] <Fallenour> \o/
[09:19] <Fallenour> 8D
[09:19] <wallyworld> glad you like juju :-)
[09:20] <Fallenour> But yea, I heard from the grapevine that canonical was hiring o.o
[09:20] <Fallenour> Oh, I don't like Juju
[09:21] <Fallenour> I'd marry juju if she were about 5'2, 135 lbs. she's beyond amazing personality-wise, and runs like a champ. After HA, like...all my problems died. Well, most of them.
[09:21] <Fallenour> it's the best damn solution I think i've ever seen cloud-wise. I converged it with Saltstack, and am deploying that to production now on live to about 15,000 people in my network.
[09:22] <Fallenour> I plan on doing talks on it all year this year, integrating it in systems, storage, and security.
[09:22] <wallyworld> there's an opening we're interviewing for in the APAC timezone
[09:22] <Fallenour> I'd go for it, but I honestly don't think i'm good enough. You guys are several levels higher than I am in terms of skillset.
[09:22] <wallyworld> if you need help with talks etc we have a developer advocate on the team who would love to help if needed
[09:23] <Fallenour> omg that'd be AMAZING
[09:23] <Fallenour> One of the talks I'm looking at doing is a full CI/CD stack deployment with juju, a CI/CD-in-a-can kinda concept.
[09:23] <wallyworld> he is a busy person so it depends on the requests etc, but feel free to ask and he can do what he can
[09:24] <Fallenour> i'm building a new solution out of the box that uses LXD containers instead of docker containers for kubernetes with git that allows security teams to be appeased from security concerns with service containers.
[09:25] <wallyworld> he's in NZ. you can ping him here to ask about stuff. his nick is timclicks
[09:26] <Fallenour> OOOH it's TIM! Tim is the bees knees!
[09:26] <wallyworld> we don't necessarily have any pre-canned material but can offer general advice etc
[09:26] <Fallenour> wallyworld, oh that's totally fine! I generally find people don't like pre-canned stuff, so that's actually a good thing
[09:26] <wallyworld> great
[09:27] <Fallenour> I do a lot of international security conference talks already, and one thing I've noticed is they don't respond well to canned anything. unless it's a canned solution for deployment purposes with a custom talk on top
[09:27] <Fallenour> i'll hit him up once he's on. For now, I have to figure out why sshing into one system logs me into another one? o.O
[09:28] <Fallenour> I have no idea how that's even possible honestly.
[09:28] <wallyworld> maybe the model you think you're connecting to is not the one you're on
[09:28] <wallyworld> you can use juju models and the one with the * is the current
[09:28] <wallyworld> or use -m to specify explicitly
[09:28] <Fallenour> oh it's an ssh issue with ubuntu base, it's not a juju issue. juju ssh always works reliably.
[09:28] <wallyworld> ok
[09:29] <Fallenour> btw, while it's on my mind. do you guys still do the juju live events?
[09:29] <Fallenour> I think those are the greatest things since elderberry jam
[09:29] <wallyworld> the Juju Show?
[09:29] <Fallenour> yea
[09:29] <Fallenour> i'd love to be on one of those one day.
[09:29] <wallyworld> yeah, tim (clicks) and rick are working on a new batch
[09:30] <wallyworld> anyone can join in
[09:30] <Fallenour> omg
[09:30] <Fallenour> what?
[09:30] <Fallenour> o.O
[09:30] <Fallenour> how do I sign up?
[09:30] <wallyworld> good question, i'm not totally sure
[09:30] <Fallenour> and is there a subscription thing I can sign up for?
[09:30] <wallyworld> rick_h is the person to ask
[09:30] <Fallenour> 8D
[09:30] <wallyworld> he'll be on irc in a few hours
[09:30] <Fallenour> yaaaaasssss
[09:31] <Fallenour> rick_h is awesome
[09:31] <wallyworld> i think they normally have a few people logged in and able to ask questions etc
[09:31] <wallyworld> he is
[09:31] <Fallenour> one thing I've noticed about all canonical employees in general is they are all really happy
[09:31] <wallyworld> we're all peachy
[09:32] <Fallenour> canonical must be a great company or it's the drugs in the free water, it's gotta be.
[09:32] <Fallenour> only a handful of companies i know that are like that: saltstack, suse, canonical, riot
[09:32] <wallyworld> bit of both
[09:32] <nammn_de> manadart: just to make sure before i press the merge button. I removed the block below: https://github.com/juju/juju/pull/10696/files#diff-8bc810c7809469ea95764da958639d1aR121-R126
[09:37] <manadart> nammn_de: Looks fine.
[09:37] <nammn_de> manadart: 🏄‍♂️
[09:47] <Fallenour> quick question, but what service(s) should be running for lxd/lxc to work? @wallyworld manadart nammn_de
[09:47] <Fallenour> I just built an lxd cluster, and it was in HA and fine, then I built 3 containers, and it died.
[09:47] <Fallenour> all three of them. So much for HA :P
[09:53] <wallyworld> Fallenour: way past my EOD now, i'll leave it to others to answer as i need to get AFK
[09:56] <manadart> Fallenour: I think the daemon, the socket and possibly dnsmasq if the LXD bridge is managed.
[10:23] <Fallenour> manadart, I found out the issue is with the database, likely due to being a snap. Super frustrating that snaps are supposed to be stable, and my general experience is they are anything but. I'm having to rebuild all three machines
[11:24] <nammn_de> stickupkid: got a min?
[11:25] <stickupkid> nammn_de, sure
[11:25] <nammn_de> stickupkid: Thinking about changing this function https://github.com/juju/charm/blob/974f39ea8f706c25616d022f70838c862687d3ca/charmdir.go#L418
[11:25] <nammn_de> so that it does not log at all anymore
[11:26] <nammn_de> so that it can be called n times without logging every time
[11:26] <nammn_de> problem: I want to log things at different levels (debug, error and warn)
[11:27] <nammn_de> one way to solve it is to return the log level as well, but this would change the return signature from 3 to 5 values, which is kind of not cool
[11:27] <stickupkid> nammn_de, that makes sense. maybe pass in a logger, then you can tell it to not log at all if you don't want it to
[11:27] <nammn_de> ahhh good one
[11:27] <stickupkid> nammn_de, or return a list of issues
[11:28] <nammn_de> oh, a list of issues is nice too. which "only" makes it 4. Like both approaches. Gonna try them out
[11:29] <nammn_de> i'm going with the first approach to keep things lean
[11:31] <nammn_de> stickupkid: how would you tell a logger not to log before passing it in?
[11:31] <nammn_de> *loggo
[11:31] <stickupkid> nammn_de, is there not a dumb logger?
[11:32] <stickupkid> nammn_de, or provide an interface for a logger and then pass an instance of the logging instance to it
[11:32] <stickupkid> nammn_de, similar to how the workers are done (thumper has done work in this area)
[11:33] <nammn_de> stickupkid: thanks, gonna take a look
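stickupkid's two suggestions (inject a logger with a silent default, and return a list of issues) can be sketched in Python with the stdlib `logging` module. The function and its checks here are purely illustrative, not the actual juju/charm code, which is Go/loggo:

```python
import logging


def validate_name(name, logger=None):
    """Collect issues instead of logging unconditionally; callers that
    want output pass their own logger, otherwise nothing is emitted."""
    if logger is None:
        # A logger wired only to NullHandler discards every record.
        logger = logging.getLogger("charm.validate.null")
        logger.addHandler(logging.NullHandler())
        logger.propagate = False
    issues = []
    if not name:
        issues.append("name is empty")
        logger.error("name is empty")
    elif not name.islower():
        issues.append("name is not lower-case")
        logger.warning("name %r is not lower-case", name)
    return issues
```

Returning the issues list also covers the second suggestion: the caller, not the validator, decides whether and at what level each issue gets logged on repeated calls.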
[11:40] <rick_h> Fallenour: what's up?
[11:40] * rick_h yawns
[11:41] <rick_h> Fallenour: your cluster died?
[11:44] <Fallenour> rick_h yup. It ate the orange squirrel cable of love, sailed off into the tuskegee, took a short walk on a long pier
[11:48] * rick_h processes that for a while...
[11:48] <rick_h> Fallenour: any hint on the issue?
[11:48] <Fallenour> rick_h, database connection issue.
[11:49] <rick_h> Fallenour: for the lxd db? or juju to the mongodb?
[11:49] <Fallenour> rick_h, likely from a snap/deb collision.
[11:49] <Fallenour> rick_h, lxd db, juju isn't at fault here
[11:49] <rick_h> oh hmmm, how did they collide?
[11:49] <Fallenour> rick_h, followed a juju tutorial on bootstrap deployment in ha, but didn't do lxd.migrate because it wasn't in the instructions :P
[11:50] <rick_h> Fallenour: :(
[11:50] <Fallenour> rick_h, lesson learned, lxd does NOT in fact like to use deb/snap in combination. it is very much so a dinner-menu-only kinda gal.
[11:50] <rick_h> no no no, agree it's a "pick one and only one"
[11:50] <Fallenour> rick_h, took me 10+ minutes to run systemctl snap.lxd.daemon stop
[11:50] <Fallenour> just to kill the service
[11:51] <rick_h> :/ you have much more patience than me
[11:51] <Fallenour> I think what it was doing is creating a race condition. I think it was passing the for i loop as a command to each node, which caused that node to create a for loop for the next node
[11:51] <Fallenour> so it was infinitely trying to create 3 containers in a loop
[11:51] <rick_h> how helpful!
[11:52] <Fallenour> rick_h, I know! It just really wanted to make sure I had enough salt masters :P
[11:52] <Fallenour> I guess it figured 30,000+ salt masters should suffice
[11:53] <Fallenour> I did, in fact, not concur, so we had a splitting of ways, ergo a complete rebuild inside maas
[11:53] <rick_h> well for most people that'd be good, but nooooo you have to be all picky and stuff :P
[11:53] <rick_h> ouch, sorry for the trouble
[11:54] <Fallenour> rick_h, yeaaa :( I like my systems like I like my tacos, a little on the light side
[11:54] <rick_h> lol
[11:54] <Fallenour> rick_h, BUT!
[11:54] * rick_h ducks
[11:55] <Fallenour> rick_h, it's ok. because they have a lot more ram now, so this is good.
[11:55] <Fallenour> each has at least 96GB of ram apiece, so I can do the things with them now :)
[11:55] <rick_h> now you're cool with 30k masters?
[11:55] <rick_h> Fallenour: nice! that's cool
[11:55] <Fallenour> rick_h, no, sadly, but I am ok with 3 ^__^
[11:56] <Fallenour> rick_h, the good news is that once this is finally finished, i'll be bootstrapping the juju controllers onto them, and have a cluster in lxd. I've come to the ultimate realization that I just don't need 3 physical dedicated controllers, they will never use the resources in the current state.
[11:56] <rick_h> ok, glad the world sucks a bit but has a bright side. I'm going to go make some morning coffee
[11:56] <Fallenour> rick_h, morning....coffee?
[11:57] <Fallenour> rick_h, Don't you mean all-day coffee? o.O
[11:57] <rick_h> yes, the morning ritual that means it's morning time and the day is starting off ok
[11:57] <rick_h> hah
[11:57] <rick_h> no, cut off after lunch. Have to sleep you know
[11:57] <Fallenour> rick_h, mmmm, *starts looking around for the rick_h service restart command*
[11:58] <Fallenour> can anyone assist? I keep finding errors in my journalctl -u rick_h log files
[11:58] <Fallenour> rick_h, something something morning coffee?
[11:58] <Fallenour> XD
[11:58] <Fallenour> something something sleeping. can't have sleeping services o.o
[11:59] <Fallenour> my morning starts at 4 am EST, runs till 9-10PM EST
[13:16] <danboid> How do I use curtin to set the default DNS servers for all of my MAAS deployments?
[13:21] <rick_h> danboid: can't you set the dns servers in MAAS itself?
[13:22] <rick_h> danboid: under "settings, network services, DNS"
[13:24] <danboid> rick_h, That is already configured but it's never worked. For some reason I have to edit the netplan config post-deployment and add in the DNS server for it to work
[13:25] <danboid> Maybe bind isn't correctly configured on my maas controller?
[13:28] <danboid> rick_h, Shouldn't MAAS error tho if bind was incorrectly configured?
[13:28] <rick_h> danboid: I would think so. I've not poked at it tbh other than setting it
[13:28] <rick_h> I guess I've never really checked it was set like I did in config
[13:32] <danboid> I'm going to try with DNSSEC explicitly disabled, see if that helps. It was set to auto
[13:49] <danboid> Turning off DNSSEC under MAAS hasn't fixed it
[13:53] <rick_h> danboid: ok, yea I don't know. It might be good to bug the maas folks. If you want to use curtin to tweak things there is the cloud-init configy stuff you can do in Juju but that sounds like a work-around
[13:54] <danboid> The docs give the impression cloud-init is more for one-off configs and curtin is what I want if I want to change the config of all deployments
[13:55] <rick_h> well there's a juju cloud-init thing that runs on any machine juju goes on
[13:56] <rick_h> danboid: https://discourse.jujucharms.com/t/using-model-config-key-cloudinit-userdata/512
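The linked post describes attaching extra cloud-init config to every machine via the `cloudinit-userdata` model-config key. A minimal sketch, with the caveat that the DNS address is a placeholder and the exact supported keys/invocation are in the linked post rather than this log:

```yaml
# cloudinit.yaml -- illustrative only; 10.0.0.53 is a placeholder DNS server
cloudinit-userdata: |
  postruncmd:
    - [ "sh", "-c", "echo 'nameserver 10.0.0.53' >> /etc/resolv.conf" ]
```

This would then be applied to the model with `juju model-config` so it runs on each machine Juju provisions.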
[14:42] <bdx> danboid, rick_h: add the dns server you want to use to the subnet details page
[14:45] <bdx> https://maas.io/docs/networking
[14:47] <danboid> Looks like bind is configured to only listen on localhost, despite it saying otherwise in /etc/bind/maas/named.conf.options.inside.maas
[14:50] <danboid> Yay! Fixed it!
[14:51] <rick_h> oooh, ty bdx
[14:51] <bdx> np
[14:51] <danboid> I just had to comment out the `listen on` line in /etc/bind/named.conf.options
[14:52] <danboid> Then restart bind
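For anyone hitting the same symptom, the change danboid describes amounts to something like this in `/etc/bind/named.conf.options` (a sketch, not his exact file; the address is a placeholder, and the precise `listen-on` semantics are in the BIND documentation):

```
options {
        directory "/var/cache/bind";

        // was: listen-on { 127.0.0.1; };   // restricted DNS to localhost only
        // with the line commented out, BIND listens on all interfaces
        // by default, so MAAS-deployed machines can reach it

        dnssec-validation auto;
};
```

Followed by restarting the service, e.g. `sudo systemctl restart bind9`.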
[14:54] <nammn_de> stickupkid: still around? If yes, might wanna take another review round?
[14:54] <nammn_de> https://github.com/juju/charm/pull/294
[14:59] <stickupkid> nammn_de, looking now
[15:03] <stickupkid> nammn_de, almost there, just the naming I think now
[15:04] <achilleasa> hml: I think this bit of code conflates space IDs (defaults) with space names (givenBindings that comes from the client). I will try to fix it as it prevents me from applying the Merge -> merge change
[15:06] <hml> achilleasa: which bit of code?
[15:06] <nammn_de> stickupkid: thanks, just to make sure. You suggest to extract the extra file and just put it into the same `charmdir` file as well?
[15:06] <achilleasa> hml: https://github.com/juju/juju/blob/6e8f551b5305ec2f30d1910a0788db52cc30466d/apiserver/facades/client/application/deploy.go#L134-L162
[15:06] <stickupkid> nammn_de, yeah, along with renaming it from MockLogger
[15:07] <achilleasa> hml: the comment on L135-136 is misleading. DefaultEndpointBindingsForCharm returns space IDs
[15:08] <achilleasa> hml: we can either get everything as space names and pass it to NewBindings or translate givenBindings to space IDs before calling this func. What do you think?
[15:09] <hml> achilleasa: pondering
[15:09] <achilleasa> hml: Is the model config param for the default space storing a name or a space ID?
[15:09] <achilleasa> nammn_de: ^ ?
[15:10] <hml> achilleasa: a name
[15:10] <nammn_de> stickupkid: done
[15:10] <hml> the space id will always be 0
[15:11] <achilleasa> hml: I propose we change it to use space names since this is the facade
[15:12] <achilleasa> or more accurately, make DefaultEndpointBindingsForCharm return a Bindings object
[15:13] <nammn_de> stickupkid: while we are at it, we were planning to land https://bugs.launchpad.net/juju/+bug/1846240 - time for a chat about it? I can see that you opened the bug as well as the initial PR
[15:13] <hml> achilleasa: yes to the DefaultEndpointBindingsForCharm
[15:13] <mup> Bug #1846240: Add support for Windows 2019 series <juju:Triaged> <https://launchpad.net/bugs/1846240>
[15:17] <stickupkid> nammn_de, done
[15:24] <nammn_de> stickupkid: thanks man! still got time today to chat about the windows bot? Or gonna go soon?
[15:33] <stickupkid> nammn_de, give us 15
[15:37] <nammn_de> stickupkid: us? if you mean in 15 min, i'm flexible :D just ping me when you can
[15:41] <achilleasa> hml: so it turns out fixing that needs much more effort than I thought (lots of validation code that assumes space names). I will just add the fix for the isModified for now and push the PR
[15:41] <achilleasa> hml: I am kinda surprised that deploy --bind worked while running my QA steps...
[15:41] <hml> achilleasa: rgr
[15:44] <achilleasa> hml: can you take a look? https://github.com/juju/juju/pull/10734 (I am releasing the guimaas nodes ATM)
[15:45] <hml> achilleasa: yes, already started with the cli pieces… will continue ;-)
[15:46] <achilleasa> hml: if you want we can pair tomorrow on the cleanup for the facade bits...
[15:46] <hml> achilleasa: sounds like a plan
[15:47] <nammn_de> stickupkid: it's me again, for the related pr in our main repo https://github.com/juju/juju/pull/10744/files
[15:48] <stickupkid> nammn_de, https://media.giphy.com/media/RoajqIorBfSE/giphy.gif
[15:49] <nammn_de> stickupkid: no one's safe :D, btw. ignore the linting stuff, I will fix it by then
[15:49] <stickupkid> nammn_de, looks good to me
[15:50] <stickupkid> nammn_de, it doesn't like your lint issues though :D
[15:50] <stickupkid> nammn_de, got a dep issue going on
[15:51] <nammn_de> stickupkid: yeah, i smell that I may have forgotten to add my gopkg lock :D
[22:52] <Fallenour> ughhh
[22:52] <Fallenour> I've been fighting with this all day
[22:52] <Fallenour> does anyone know why LXD wouldn't see a MAAS API that's working?
[22:52] <Fallenour> I'm so tired of issues like these @____@
[23:39] <wallyworld> Fallenour: you mean that from inside the LXD container you cannot reach the MAAS API endpoint?
[23:40] <hpidcock> wallyworld: I've pushed that rebase
[23:41] <wallyworld> ta
[23:41] <hpidcock> time for more coffee

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!