=== ajmitch_ is now known as ajmitch
=== mjs0 is now known as menn0
[07:44] good morning juju world!
[07:57] Have we moved on from charms only supporting LTS releases? I've got an MP here adding zesty and yakkety.
[07:59] It's a subordinate, so probably necessary if other charms support those series.
[08:29] oi
=== frankban|afk is now known as frankban
[12:36] so I tried to bootstrap into aws (having just set up a new account): it got stuck in "Contacting Juju controller at 172.31.10.1 to verify accessibility..." for an hour (even though in the aws console I could see it was up). Ctrl+c says "Interrupt signalled: waiting for bootstrap to exit"
[12:36] and that's it, it's waiting for something? but probably doing nothing - any thoughts?
[12:36] seems like a bug?
[12:36] mattyw: hmm, some issue getting ssh there? that's the normal thing it's waiting for there
[12:36] looks a bit weird
[12:36] mattyw: I hit that with my maas setup when I forget my 'sshuttle'
[12:36] 172 looks internal to me
[12:37] magicaltrout, indeed, it's the internal ip
[12:46] hmm, I was using the london region for the first time ever
[12:47] I wonder if there's something weird about it
[12:47] ah that might very well be the case mattyw
[12:47] there are some hard-ish coded assumptions about available server types and stuff
[12:47] rick_h: ?
[12:47] magicaltrout: yes, there's some hard coded bits on the types of instances but that's not about connecting to the internal address.
[12:48] it would fail earlier in that it couldn't find an instance type
[12:48] oh yeah mattyw said the instance came up
[12:48] good point
[12:51] eu-west-2 certainly appears as a supported region
[12:51] yeah
[12:51] i just bootstrapped in it
[12:51] this is all a totally new aws account I'm setting up
[12:51] try again mattyw and see what happens
[12:51] so there could be any number of things
[12:51] am doing
[12:51] this time with --debug :)
[12:51] 13:51:51 DEBUG juju.api apiclient.go:695 will retry after error dialing websocket: dial tcp 52.56.222.236:17070: getsockopt: connection refused
[12:52] magicaltrout, ^^
[12:52] that's pretty much what the log is saying
[12:52] looks a bit shit
[12:52] weird though how it works for me
[12:56] magicaltrout, looks to be the same thing using us-east-1
[12:56] so it's something to do with it being a new account maybe?
[12:56] don't know what though
[12:56] don't think so
[12:57] aws accounts have worked ootb for me
[12:57] well i've not done it in a while but they just needed a default vpc
[12:59] this is the first time I've tried it with a brand new aws account...
[13:00] what I mean is - an aws account that was created this morning
[13:00] not just that it was new to juju
[13:04] sure, but i did create a new account for juju stuff a while ago
[13:04] and I don't believe I changed anything
[13:05] it's like it's some funky routing weirdness
[13:05] out of interest mattyw if you spin up a small ubuntu server on AWS
[13:05] then bootstrap *from* that does it make it any happier?
[13:06] magicaltrout, you mean using the manual provider?
[13:06] no just bootstrapping from outside of your local network
[13:09] I could do...
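The `dial tcp ...:17070 ... connection refused` line above points at the controller's API port. A quick way to check that port directly, independent of the juju client, is a bash `/dev/tcp` probe. This is a sketch, not part of the conversation: `127.0.0.1` below is a stand-in host, so substitute the controller address printed during `juju bootstrap --debug` (it requires bash, not plain sh, plus coreutils `timeout`).

```shell
#!/bin/bash
# Sketch: probe the Juju controller API port (17070) to separate "instance is
# up" from "apiserver is listening". The host argument is a placeholder;
# use the address shown in the bootstrap debug output.
probe_api() {
  # bash-specific /dev/tcp connect attempt with a 5 second timeout
  if timeout 5 bash -c "exec 3<>/dev/tcp/$1/17070" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}
result=$(probe_api 127.0.0.1)
echo "controller API: $result"
```

If the instance shows as running in the cloud console but this reports "unreachable" from your workstation, the problem is network path or the API server not yet listening, rather than the instance itself.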
[13:09] I'll try with my other credentials - I know they work
[13:09] clearly its not idea, but would also help track down if its AWS end or your end
[13:09] I'll try that first
[13:09] s/idea/ideal
[13:09] indeed
[13:14] magicaltrout, huh - old credentials aren't working either
[13:14] hmmm
[13:16] ha - 2.2-beta3 has been released - I'm on an older ish client - I bet that's it
[13:27] has anyone run into this before when attempting to make a persistent volume claim against a ceph rbd provisioner in k8s: Failed to provision volume with StorageClass "fast": failed to create rbd image: executable file not found in $PATH, command output:
[13:29] magicaltrout, ha! yep
[13:29] all working now
[13:29] in my case k8s is running inside openstack and I am connecting to the same ceph cluster that openstack is using, so I did not juju deploy ceph and add relations from the same juju controller
[13:30] had an older ish client (2 weeks old - built myself from master)
[13:30] good stuff :)
[13:31] btw, I can statically provision claims
[13:32] the error seems to indicate that the rbd binary is missing, I'm just not exactly sure where it needs to go
[13:35] tychicus that's getting released this week
[13:35] tychicus sorry you hit that, but a fix is indeed incoming!
[13:35] lazyPower: thanks, is there a workaround?
[13:35] tychicus that's a regression with our 1.6.1 release, and should be resolved with 1.6.2, we upped the snaps. You can do an in-place configuration change to get the fix before we release the charms
[13:35] regression?! surely not
[13:36] 1 sec while i find which track/channel they are in
[13:36] yes, in place configuration change sounds great!
[13:37] tychicus juju config kubernetes-master channel=1.6/edge
[13:37] tychicus juju config kubernetes-worker channel=1.6/edge
[13:37] magicaltrout may your code be forever bug free and you not give me an edge to raz you ;)
[13:38] magicaltrout in short, bless you.
[13:38] my code is amazing....
[13:38] ly full of bugs
[13:38] story of our lives some days eh? :)
[13:38] wrote some gitlab tests last week
[13:38] it'll be in the review queue this week
[13:40] magicaltrout: if I may ask, what is on the docket for gitlab?
[13:40] o/ juju world
[13:40] lots of amazing stuff
[13:40] dunno tychicus what features are top of your list of requirements?
[13:41] o/ Budgie^Smore
[13:41] those who shout the loudest will likely win the feature request race
[13:41] still killing bugs and taking names layer eh lazyPower?
[13:41] lol
[13:41] Budgie^Smore welllllll
[13:41] there's an open ended question on if i'm writing bugs faster than i can patch them, but that's the general mantra :) yeah
[13:42] well my plan is to: 1. install gitlab into k8s
[13:42] lazyPower at least they're giving credit where credit is due ;-)
[13:42] 2. have all of the ci jobs run in k8s
[13:43] then deploy to k8s
[13:43] but I am sure that there are more useful things that I have not thought about
[13:44] tychicus pretty sure we spoke to this, but there is a multi-executor that knows how to talk to k8s if you give it a kubeconfig
[13:44] hmm well i don't have any plans to deploy gitlab into k8s cause it's general charm stuff. I do want to leverage the ci -> k8s executor and docker repo asap
[13:44] https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/executors/kubernetes.md
[13:45] magicaltrout ^ I'm currently slammed but should see the break soon enough. We can pair on this on a for-fun friday project
[13:45] yep that would be the step 2
[13:46] cool, i'm just getting through yet another stupid week care of NASA but hopefully i can start stabbing away on gitlab amazingness in the next week or two
[13:46] maybe whilst i'm at apachecon
[13:46] magicaltrout sounds good, i'm sure we'll be in touch
[13:47] magicaltrout i'm currently cannibalizing merlijn's work after hours to get a che stood up for my chromium book.
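The kubernetes executor linked above is configured through the runner's config.toml. A minimal sketch, not from the conversation: the URL, token, namespace, and image below are all placeholders to adapt, under the assumption of a runner registered against a self-hosted GitLab.

```toml
# Hypothetical gitlab-ci-multi-runner registration using the kubernetes
# executor. url/token/namespace/image are placeholders.
concurrent = 4

[[runners]]
  name = "k8s-ci-runner"
  url = "https://gitlab.example.com/ci"
  token = "REGISTRATION-TOKEN"
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab"
    image = "ubuntu:16.04"
```

With a kubeconfig available to the runner (or when the runner itself runs in-cluster), CI jobs are then scheduled as pods in the given namespace, which is the "ci -> k8s executor" flow discussed above.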
[13:47] ah yeah, that's pretty sweet
[13:47] well, if it worked
[13:47] lol
[13:47] busted 3/3 times on gce
[13:47] so i have work to do there to figure out why and submit a fix
[13:48] i think it's just an ip binding issue
[13:48] my new guy started today so in a few weeks i'll have him cranking on simple charm tasks
[13:48] oooo
[13:48] send him deep into k8s land
[13:48] i could use the extra hands
[13:49] lazyPower: gizmo__ gizmo__ lazyPower
[13:49] hi
[13:50] i told gizmo__ to test your documentation and stuff to get his head around charm stuff as a starting point lazyPower
[13:50] of course as new stuff comes in and needs testing etc, feel free to prod him. Currently no direct charming experience but we'll be fixing that over the next month or two
[13:51] just don't prod too hard!!
[13:53] lazyPower: forgive my lack of knowledge here "Needs manual upgrade, run the upgrade action"
[13:53] tychicus did you get that from etcd?
[13:53] no kubernetes-worker
[13:54] hmmm
[13:55] oh super cool
[13:56] tychicus i forgot this landed last cycle. We decoupled upgrades from operational code upgrade, so the thought is
[13:56] you can juju upgrade-charm on any of your k8s charms, and it won't affect the workloads. (if you set the boolean toggle, defaulted to on so everyone should get this behavior by default)
[13:57] tychicus which means, when you juju config k8s-worker channel=1.6/edge, it's found a new snap to install, but it won't do anything until you run a manual action, as it *could* introduce downtime in mission critical scenarios
[13:57] tychicus so, juju run-action kubernetes-worker upgrade
[13:57] er juju run-action kubernetes-worker/0 upgrade -- it will need to be repeated once for every unit of the application.
[13:57] this way you can stagger and ensure HA
[13:59] lazyPower that looks like a nice safety measure in case you "forget" to drain the worker first
[13:59] Budgie^Smore that's what we're going for. Value add :)
[14:00] that is super cool, thanks
[14:00] do you know what's super cool?!
[14:00] lazyPower! ;)
[14:01] oh and I was going to say the etymology of conundrum
[14:01] lol
[14:01] nooooo
[14:01] lazyPower I am not sure I would go that far :P as I am all about full automation but definitely a good safety net initially
[14:01] admcleod isn't super cool
[14:01] Budgie^Smore well, we're all for that too. all of those actions can be scripted
[14:01] magicaltrout careful now, you don't want to give lazyPower too much of a big head!
[14:02] hehe
[14:02] wat
[14:02] nothing
[14:02] return to sleeping
[14:02] brexit
=== daniel is now known as Guest47234
[14:02] Budgie^Smore the thing is, juju is a great empowering tool for encapsulating the operations but in terms of guessing your company culture and business logic, it's not very omniscient. So instead of forcing you on rails, we give you the primitives to roll that up with little fuss.
[14:02] that's less insulting than "Theresa May"
[14:03] hahaha
[14:03] magicaltrout: yeah but i forgot her name
[14:03] like most of the uk population
[14:03] magicaltrout: grey cardboard cutout lady
=== Guest47234 is now known as Odd_Bloke
[14:03] it's true though, lazyPower is super cool
[14:04] well thanks all for the votes of confidence. Careful putting me on a pedestal, there's nowhere to go from there but down :)
[14:04] lazyPower totally get that :) I would be just rebuilding the charm code instead of writing yet more code on top ;-)
[14:05] Glad to hear it's helpful :)
[14:05] lazyPower: there are other, much more majestic, gilded pedestals, on which pigeons wearing gem encrusted crowns puff out their chests for no particular reason other than that their feathers are shiny.
[14:05] admcleod what kinda crazy business do you have going on over there in spain that you're putting pigeons on pedestals?
[14:05] sometimes admcleod scares me
[14:05] seems dreadfully wasteful!
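The staggered per-unit upgrade lazyPower describes earlier ("run-action ... repeated once for every unit"), together with the "all of those actions can be scripted" point, can be sketched as a loop. This is a dry run under the assumption of three worker units; the leading `echo` prints each command instead of executing it, so remove it to actually queue the actions.

```shell
#!/bin/sh
# Dry-run sketch of a staggered kubernetes-worker upgrade: one unit at a
# time, so the cluster keeps serving while each worker upgrades.
# Assumption: units kubernetes-worker/0..2 exist.
# Remove the leading `echo` to run the actions for real.
upgraded=0
for n in 0 1 2; do
  echo juju run-action "kubernetes-worker/$n" upgrade
  upgraded=$((upgraded + 1))
done
echo "queued upgrade actions for $upgraded units"
```

Waiting for each unit to report healthy before moving on to the next is the part that encodes your own operational policy, which is exactly the company-culture gap the conversation says juju deliberately leaves to you.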
[14:06] oh they're just there, in the pigeon pedestal dimension
[14:06] well, I don't get the $PATH error any more, but the claim just sticks at pending
[14:06] tychicus and the ceph secret is enlisted?
[14:06] yes
[14:06] hmmm
[14:07] anything in dmesg/journalctl?
[14:07] I can do static claims
[14:07] yeah i modeled the static claim in an action... which remained working. what we broke was the rbd auto provisioner when we moved to snaps. Confinement is a hefty hammer apparently.
[14:08] bunch of apparmor denied messages: apparmor="DENIED" operation="create" profile="snap.cdk-addons.hook.configure" pid=7965 comm="snapctl" family="inet6" sock_type="stream" protocol=6 requested_mask="create" denied_mask="create"
[14:08] tychicus i'm going to need more details. When i tested the patch branch I was able to get rbd autoprovisioning working, but that's been some weeks ago now. I don't have the test results or workload manifests handy anymore to spin one up quickly.
[14:09] that's expected - a bit of a red herring.
[14:09] snap confinement at work
[14:09] ah ok
[14:11] just to confirm, I should be looking at dmesg and journalctl on the master?
[14:11] tychicus correct
[14:11] the kube-controller-manager should be doing that enlistment
[14:14] here is one item from dmesg specific to kube-controller-manager and rbd
[14:14] [ 2034.211833] audit: type=1400 audit(1493732994.271:175): apparmor="DENIED" operation="open" profile="snap.kube-controller-manager.daemon" name="/var/tmp/" pid=6055 comm="rbd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[14:16] hmmm
[14:16] it's attempting to create a tmp resource and apparmor denied it
[14:16] that may be related, but i didn't see that when testing
[14:16] is the pv/pvc still in pending? with no movement in the eventlog of k8s?
[14:17] sorry, I just deleted it, let me re-create it
[14:20] ok
[14:21] no problem, i'm here all day :)
[14:21] and the next day
[14:22] and the next
[14:23] way to make it creepy magicaltrout
[14:23] that's me
[14:23] :) <3
[14:24] glad you can own it there magicaltrout
[14:24] hehe
[14:37] lazyPower: here is what I get from journalctl
[14:38] May 02 14:36:15 juju-0aa679-default-14 snap[5794]: I0502 14:36:15.273296 5794 wrap.go:75] PUT /api/v1/namespaces/default/persistentvolumeclaims/mypvc: (5.581372ms) 200 [[kube-controller-manager/v1.6.2 (linux/amd64) kubernetes/477efc3/persistent-volume-binder] 127.0.0.1:33828]
[14:38] May 02 14:36:15 juju-0aa679-default-14 snap[5794]: I0502 14:36:15.276311 5794 wrap.go:75] GET /api/v1/persistentvolumes/pvc-b23b5e28-2f44-11e7-b9c8-fa163e1e0ce5: (2.139686ms) 404 [[kube-controller-manager/v1.6.2 (linux/amd64) kubernetes/477efc3/persistent-volume-binder] 127.0.0.1:33828]
[14:38] May 02 14:36:15 juju-0aa679-default-14 snap[5794]: I0502 14:36:15.278364 5794 wrap.go:75] GET /api/v1/namespaces/default/secrets/ceph-secret-admin: (1.12258ms) 200 [[kube-controller-manager/v1.6.2 (linux/amd64) kubernetes/477efc3/persistent-volume-binder] 127.0.0.1:33828]
[14:38] May 02 14:36:15 juju-0aa679-default-14 audit[11601]: AVC apparmor="DENIED" operation="open" profile="snap.kube-controller-manager.daemon" name="/var/tmp/" pid=11601 comm="rbd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[14:38] May 02 14:36:15 juju-0aa679-default-14 kernel: audit: type=1400 audit(1493735775.306:336): apparmor="DENIED" operation="open" profile="snap.kube-controller-manager.daemon" name="/var/tmp/" pid=11601 comm="rbd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[14:38] tychicus looks like something has crept in since validation that it's being blocked
[14:38] the only thing creepier than magicaltrout is a thawing moose head.
[14:39] tychicus I'll take a task from this and circle back but it likely won't be until tomorrow.
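Denials like the ones pasted above can be isolated from the rest of the log noise in one pass. A sketch: the heredoc holds the sample dmesg line from this conversation, and in real use you would feed `journalctl -k` (or `dmesg`) into the same grep pipeline instead.

```shell
#!/bin/sh
# Sketch: pull out apparmor denials attributed to the kube-controller-manager
# snap profile. The heredoc is the captured sample line from above; for a
# live system, replace it with:
#   journalctl -k | grep 'apparmor="DENIED"' | grep 'kube-controller-manager'
denials=$(grep 'apparmor="DENIED"' <<'EOF' | grep 'profile="snap.kube-controller-manager.daemon"'
[ 2034.211833] audit: type=1400 audit(1493732994.271:175): apparmor="DENIED" operation="open" profile="snap.kube-controller-manager.daemon" name="/var/tmp/" pid=6055 comm="rbd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
EOF
)
echo "$denials"
```

The `comm=` and `name=` fields identify the denied binary (`rbd`) and path (`/var/tmp/`), which is what points at snap confinement rather than a missing binary here.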
[14:39] thanks again for your help!
[14:39] thanks
[14:40] admcleod and you just made it even creepier
[14:42] there's only one way to make this even creepier than it is.
[15:13] wow the silence is deafening in here ;-)
[15:14] admcleod sucked all the air out of the room :) :)
[15:14] shhh
[15:15] watching pigeons
[15:15] haha
[15:16] and there was me thinking it was the Freenode netsplit / server crash that did that lazyPower... why do you have to pick on poor admcleod like that, it is soooo mean :P
[15:16] we have a weird dynamic
[15:16] we pick on each other and do good work when we pair
[15:16] and I thought it was my bad comcast.
[15:17] ah I know that dynamic well so it ain't that weird :P
[15:17] picking on each other is a safety valve on a pressure cooker
[15:19] it's more like a shallow frying pan with a spritz of margarine spray
[15:19] some of the teflon coating is peeling off and ends up in the eggs so we're slowly poisoning ourselves
[15:20] there is a tiny glimmer of hope. the pigeon overlords may take pity on us.
[15:20] so you're saying it is time for a new frying pan there admcleod
[15:21] perhaps a slow death is more comforting in its certainty than one unknown.
[15:21] * Budgie^Smore admits he is just bored of being a full time job seeker!
[15:23] maybe I should look at the CDK issues page and see if there is "something" I can fix there
[15:23] hello
[15:25] hey thomaszylla o/
[15:25] o/ thomaszylla
[15:26] magicaltrout: have you seen 'theresa may awkwardly eating chips'?
[15:29] wat
[15:30] admcleod I think you could have stopped at "awkward" there
[15:31] * Budgie^Smore is a Brit living in California
[15:31] haha
[15:35] All I have to say about the election is "Go SNP!"
[15:37] fingers crossed with my uk^H^Hscottish passport
[15:37] admcleod right!
[15:42] so to contain the "may awkwardly...", did you see her awkwardly knocking on doors in Aberdeenshire?
[15:43] continue even, coffee needs to hurry up and kick in!
[15:45] Budgie^Smore: haha no, i literally only saw a guardian article about her eating chips
[15:46] admcleod: Daily Mail called it "cringeworthy footage"
[15:50] heh
[15:50] e
[17:27] so I know that juju has a concept of spaces now, can juju be used to add new network interfaces?
[17:31] for instance I used maas + juju to deploy openstack, but now I need to add a new bridge-mapping to neutron-gateway, but to do that I need a new network interface vlan definition and bridge definition
[17:34] tychicus: not yet. It's what we want to get to.
[17:36] rick_h: thanks, so for now my best option is to modify /etc/network/interfaces to add the new interface/vlan definition, then I can add that to the bridge-mappings for neutron-gateway
[17:39] tychicus: hmm, I'm not sure. I'm not sure if there's a way to do that through actions on neutron or what.
[17:39] tychicus: /me isn't an OS guru unfortunately
[17:40] ok, thanks
[17:41] I'm not either, but juju has been very instrumental in helping me to start learning
=== frankban is now known as frankban|afk
[21:23] if I reboot neutron-gateway, juju gives the following error after reboot
[21:23] Services not running that should be: neutron-l3-agent, neutron-metadata-agent, neutron-dhcp-agent
[21:27] dpkg --get-selections lists them as not installed
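tychicus's interim plan above (hand-add a VLAN interface and a bridge in /etc/network/interfaces, then point neutron-gateway's bridge-mappings at the bridge) would look roughly like this in ifupdown syntax. Every name here is a placeholder (eth1 as the raw device, VLAN 100, br-data as the bridge), and the `vlan` and `bridge-utils` packages are assumed to be installed:

```
# Hypothetical /etc/network/interfaces fragment - all names are placeholders
auto eth1.100
iface eth1.100 inet manual
    vlan-raw-device eth1

auto br-data
iface br-data inet manual
    bridge_ports eth1.100
    bridge_stp off
```

After bringing the interfaces up with `ifup eth1.100 br-data`, the bridge could then be referenced from the charm config, e.g. `juju config neutron-gateway bridge-mappings="physnet2:br-data"`, where physnet2 is likewise a placeholder for the provider network label.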