[00:05] anastasiamac: when you have a moment would love a review on that pr :-)
[00:06] veebers: k... but i might collect at some stage :)
[01:32] wallyworld, kelvin: What's the best way to have a k8s cluster to test with? I've had issues with microk8s: unable to tear things down, uninstall needs a reboot as there are resource issues, etc. For now I would prefer something I can deploy, leave, and blow away the namespaces as needed.
[01:32] i use aws
[01:32] or you can use lxd
[01:32] deploy kubernetes-core
[01:33] wallyworld: Ah, using the bundle that kelvin has put locally in our testing repo? (I think he had to touch something to make it work with lxd or our tests)
[01:33] have you tried microk8s.reset
[01:33] yeah, lxd
[01:33] wallyworld: aye, I have
[01:34] i have a version of kubernetes-core in my repo, i just edited it to remove the lxd nesting
[01:35] wallyworld: ack, how many machines does it use out of interest? I might need to reboot at any rate.
[01:35] 2
[01:35] 3
[01:35] 0,1,2
[01:36] lol, ascii art matrix :-) Sweet, I'll use that I think
[02:06] kelvin: i've updated the go k8s sdk dependencies https://github.com/juju/juju/pull/8969
[02:12] wallyworld, that's cool. I will pull dev after this has landed to test my changes.
[02:15] kelvin: can i get you to +1 it?
[02:17] wallyworld, sure.
[02:31] anastasiamac: If you could eyeball the test I added when you have a moment that would be grand
[02:31] veebers: of course
[02:33] veebers: lgtm'ed
[02:33] awesome, thanks anastasiamac o/
[02:33] :)
=== braderhart is now known as Guest13679
[02:35] wallyworld: circling around, this is the bundle yeah? https://github.com/wallyworld/caas/tree/master/bundles/kubernetes-core
[02:36] yeah, it's probably out of date compared to upstream
[02:36] all i did was take upstream and unnest the lxd containers, just used a distince new machine
[02:36] *distinct
[02:40] ok seet
[02:40] sweet even
=== zeus is now known as Guest14217
[03:06] kelvin: hah, deploying that k8s bundle with the lxc profile changes is doing odd things to my machine, seems it's rebooting the usb subsystem or something over and over
[03:07] veebers, it kills my x-org and logs me out. I have to log in and then re-open all the windows.
[03:07] 0_0 oh man, that sounds like a pain /me hopes that doesn't happen :-|
[03:10] veebers, let me know when u get it up and running or get any issues.
[03:10] kelvin: will do, just waiting on charm sw install and cluster dns now
[03:24] kelvin: if you're having an issue building a caas charm, you can either comment out the resource, build, and add it back in and push, or you need to build the charm command from source (there are both py and go parts)
[03:24] Although I've never wired up the py part (I just modified after the build :-p)
[03:26] ok, ic. I did that for device as well
[03:26] kelvin: looks like I'm cooking with gas: https://paste.ubuntu.com/p/TZcBmZHW6C/
[03:26] awesome!
[04:03] I'm seeing this error (tip of develop), is it expected? (seems harmless as it doesn't seem to stop anything) ERROR unable to detect LXC credentials: open /home/leecj2/.config/lxc/config.yml: no such file or directory
[04:37] veebers: looks like there are some failures including https://bugs.launchpad.net/bugs/1783400
[04:37] Bug #1783400: misleading error message "unable to detect LXC credentials" when no credentials are necessary
[04:37] wallyworld: is there a way to get upgrade-juju to use my built jujud in path, and not have it attempt to build it itself?
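(A minimal sketch of the local-agent workflow asked about above; wallyworld answers later in the log that upgrade-juju will use a jujud it finds rather than rebuilding. The build command and PATH setup here are assumptions, not taken from the log.)

    # build jujud once from the juju source tree (GOPATH layout assumed)
    go install github.com/juju/juju/cmd/jujud
    # put the freshly built binary first on PATH so the client can find it
    export PATH="$GOPATH/bin:$PATH"
    # upgrade the model; with a jujud already present it should be picked up
    # instead of being rebuilt from source
    juju upgrade-juju --build-agent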
[04:38] anastasiamac: ah ack, thanks, seems like it's in hand then
[04:39] veebers, i got this error https://paste.ubuntu.com/p/SgxzxszGN2/
[04:39] * veebers looks
[04:41] kelvin: you have a charm in the staging charmstore? can you link me please?
[04:42] veebers, I'm using mariadb at ./caas/charms/
[04:42] kelvin: so a local build of a charm?
[04:42] veebers, yes.
[04:43] kelvin: where can I see the charm code, have you modified it? (Is this in Ian's repo?)
[04:45] veebers, what I did was just comment out the resources:
[04:45] mysql_image: -> build -> add back to the built version of metadata.yaml
[04:45] kelvin: ok, makes sense. The metadata.yaml defines mysql_image as a resource, the make_pod_spec tries to resource-get it, but it's not anywhere so it fails.
[04:46] kelvin: normally you would --resources mysql_image= but that's broken, I'm working on it :-)
[04:46] kelvin: so you can log into the staging charm store, publish your changes there, attach a resource and deploy from there
[04:46] (as a work around for now)
[04:47] ah, ic
[05:05] ah I forgot to use a custom operator image. That's why no logs are happening, duh
[06:08] veebers: if it finds a jujud it will use that
=== frankban|afk is now known as frankban
[11:00] not sure exactly when it landed but thank you for the improvements in machine provisioning observability with the MAAS provider - nice to know what's going on!
=== chrisp_ is now known as chrisp262
[11:37] stickupkid: if you have a few minutes, quick pr review please? https://github.com/go-goose/goose/pull/64
[12:47] hml: looking
[12:47] stickupkid: ty
[12:54] hml: why do you use a httptest.Server?
[12:54] ah never mind, just read the package path
[12:54] "testservices" <---
[12:54] stickupkid_: :-) it’s part of the test double for openstack
[12:56] hml: done
[12:56] stickupkid_: ty
[12:59] I've got an issue where my install for openstack, https://jujucharms.com/openstack-telemetry/ , has been stuck in a loop of "installing packages" since yesterday, which is a bit absurd. Can anyone provide any troubleshooting ideas I can try? I've already checked all the hardware, as well as the systems for connectivity
[13:18] fallenour, ssh to the machine and check logs?
[13:21] pmatulis: any log in particular you think? The only thing I can think of that might be impacting it might be the MAAS proxy for the APT repo, but I would assume the MAAS team would update maas repos to also pull for bionic as well, especially since they added the bionic image into it. Plus I checked its repo list, it's got archive.ubuntu.com listed
[13:26] fallenour, i meant ssh'ing to the actual juju machine that is stuck
[13:54] manadart: for some reason I'm getting a weird error with my host-arch branch, that i need to resolve
[13:58] stickupkid_: What's the problem?
[13:59] https://pastebin.canonical.com/p/wdrDNt3sy5/
[13:59] manadart: check the error out
[13:59] manadart: it's a total lie, it really does support that, but the constraints validator is broken, maybe?
[14:01] stickupkid_: This is the thing I alluded to in the review comment. Let me look
[14:06] manadart: https://github.com/juju/juju/blob/develop/environs/bootstrap/tools.go#L31
[14:06] is this the issue, i.e. we're checking the local machine vs the server machine?
[14:38] They should both be AMD64 though.
[14:38] manadart: `(server: x86_64, local: amd64)`
[14:39] Ah, so we will have to normalise the server's string.
[14:40] This will happen when we are running edge (for whatever branch), because it checks if we can build tools locally for uploading to the target.
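(Context for the mismatch above: the two arch strings come from different sources, so one side has to be normalised before comparing. A quick way to see the discrepancy on any amd64 box; these commands are illustrative and not from the log.)

    uname -m        # the kernel reports the machine arch as "x86_64"
    go env GOARCH   # the Go toolchain (and juju's tools metadata) says "amd64"

Normalising the server-reported "x86_64" to "amd64" before the comparison in environs/bootstrap/tools.go (linked above) is the fix manadart describes.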
[14:45] manadart: fixed it
[14:46] Nice.
[15:00] stickupkid_: do you have a few minutes to play teddy bear on some ca cert stuff?
[15:01] hml: sure
[15:01] stickupkid_: standup ho?
[15:01] yup
[15:20] hml: Approved that PR.
[15:20] manadart: thank you
[15:27] rick_h_: so I got that bad unit to clear finally by upgrading systemd. I see nothing in https://launchpad.net/ubuntu/+source/systemd/229-4ubuntu21.2 that explains it, but it worked
[15:28] rick_h_: on the model with the machine that won't delete with --force though, it's still stuck. One thing I noticed is that the jenkins charm had previously been deployed there, so it has some storage associated with it in juju. It could be getting stuck trying to deal with that?
[15:30] plars: hmm, normally if you want to remove something with storage you have to provide the --destroy-all-storage flag or something to it
[15:31] rick_h_: I don't see that in juju remove-machine. I used --force though, and I also tried to remove it with juju remove-storage. juju destroy-model seems to be waiting on the machine and storage to disappear
[15:32] plars: bah, yea, not sure
[15:32] rick_h_: on a side note - my juju client is at 2.3.7-xenial-amd64 - is it ill-advised to allow upgrade-juju to bump the version in the models to a higher version than that?
[15:32] or is it best to try to be on the latest in every model
[15:33] plars: yea, the big thing is the version on the controller
[15:33] plars: as that does most of the work for things
[15:33] plars: but ideally you'd update the client to the latest, the controller, and then each of the models
[15:34] rick_h_: that's the latest version of the client in xenial, without moving to the ppa I guess. we rely on juju-deployer right now too if that makes a difference. Not sure if compatibility is affected in future versions
[15:35] plars: snaps ftw :)
[15:35] rick_h_: I've thought about moving to the snap version - coming from the packaged version though, is there migration to consider since this is already in production?
[15:36] rick_h_: and does juju-deployer continue to function with the snaps?
[15:36] plars: no, it's only the juju client that's involved
[15:36] plars: oh hmmm, no, deployer is hopefully never used any more
[15:36] plars: It'd be good to hear what you're using it for vs raw juju commands
[15:37] rick_h_: it's just always been a convenient way to describe our environment in a big yaml file. Most of our new instances are copy/pasteable from a similar one with only changing a couple of values
[15:39] rick_h_: you would suggest what instead? bundles?
[15:39] plars: yea, bundles are what does all that and is baked in
[15:40] plars: so the question is what do you use in the deployer vs the raw bundles code
[15:47] rick_h_: good question... is there a good way to convert or export it, or do we just need to go through the entire environment by hand?
[15:47] plars: just try the bundle file you're using with the deployer in a new model with juju deploy, and check out the latest bundle docs to see if there's slight differences in how things are specified
[15:48] plars: we attempted to take the external tool and bake it into Juju, well
[15:48] so it's not exact
[15:49] rick_h_: did that just now actually, and tried it with --dry-run, I got: ERROR invalid charm or bundle provided at "./test-bundle.yaml"
[15:49] so I'll have to pick through them manually and see what I can find, ok
[15:50] plars: k
[15:52] manadart: it seems like the maas we're using is returning weird addresses again... you have any suggestions?
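(Picking up the deployer-to-bundle thread above: a rough sketch of what a minimal native bundle might look like. The application name, charm path, and file contents are placeholders — the real test-bundle.yaml isn't shown in the log. Note that native bundles have no top-level environment name and no "branch:" field; a local charm path stands in for it.)

    # write out a hypothetical minimal bundle and dry-run deploy it
    cat > test-bundle.yaml <<'EOF'
    series: xenial
    applications:               # deployer configs nest this under an env name; very old bundles use "services:"
      foo:                      # placeholder application name
        charm: ./xenial/foo     # local charm path, replacing deployer's "branch: git://..." field
        num_units: 1
        to: ["0"]
    machines:
      "0": {}
    EOF
    juju deploy ./test-bundle.yaml --dry-run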
[15:59] rick_h_: I think one of the main things we used was the ability to specify something like "branch: git://git.launchpad.net/..." I suppose the workaround for that is a script that first pulls in the necessary branches and does charm: ./xenial/foo?
=== frankban is now known as frankban|afk
[20:38] externalreality: where are we on the 2.4.1 release?
[20:38] I see we got the go ahead
[20:39] The multiJob project CI job should be finished in a few minutes. Then moving on to manual steps.
[20:42] I'd say that is about half way
[20:53] thumper, ^^^
[20:59] externalreality: thanks
[22:21] externalreality, wallyworld: it seems somehow we got an invalid version number landed in the 2.4 branch
[22:22] how did this happen?
[22:22] "2.4.2 " <- note the space
[22:22] see http://ci.jujucharms.com/job/github-merge-juju/815/console
[22:22] veebers: does the script autocommit to the branch?
[22:23] I guess we don't have landing test coverage
[22:23] thumper: the release process does, yes
[22:23] wallyworld, thumper: this is a straight push to the repo
[22:23] we need to validate the text before we commit it
[22:23] The release process should validate input
[22:23] yeah, what thumper said
[22:24] I'm going to edit the branch..
[22:24] awesome, thanks thumper
[22:25] thumper, thx, there is indeed a space in the release notes, which I copy pasta'ed into the build params. It's like I can't finish kicking myself in the teeth
[22:26] thumper, thx for fixing
[22:26] externalreality: The release jobs should have done a better job at handling this. On the bright side, your pain today means the next run will be more streamlined ^_^
[22:28] https://github.com/juju/juju/pull/8977/files landing now
[22:46] babbageclunk: https://bugs.launchpad.net/juju/+bug/1782745 might be interesting, and relates to leadership
[22:46] Bug #1782745: Agents are in 'failed' state after restoring from a backup
[22:46] I know we had some changes in 2.4.0 around leadership...
[22:47] thumper: taking a look
[22:57] crap. when you rm a file, where does it go? please tell me it's not "gone"
[22:57] bionic changed my maas deployment, and now the MAAS web interface won't load
[23:23] thumper: I asked him for more logs - the ones in the bug are from the unit, but the error looks like it's coming from the controller (the lease manager has died for some reason).
[23:45] fallenour: normally gone is the answer
[23:53] veebers: do you have a few minutes to chat?
[23:54] thumper: I can try, my machine is under a bit of load so might not work in HO/meetup
[23:54] veebers: are you running tests?
[23:54] thumper: deploying a k8s cluster locally
[23:54] i'll be fine :)
[23:54] 1:1?
[23:54] sure thing, omw
[23:57] kelvin__: i left some comments - the device handling stuff in k8s.go isn't quite right, see if what i say makes sense
[23:58] wallyworld, looking now, thanks
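(On the "2.4.2 " incident above: a sketch of the input validation thumper and externalreality say the release job should have done before committing. The variable name and job wiring are assumptions; real juju versions may also carry suffixes like beta tags.)

    # reject a version string with stray whitespace or other junk before pushing
    VERSION="2.4.2 "   # hypothetical job parameter, pasted with a trailing space
    if [[ "$VERSION" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
        echo "version '$VERSION' looks sane"
    else
        echo "refusing to commit: version '$VERSION' is not a bare x.y.z version" >&2
        exit 1
    fi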