[00:11] ok one last one. can I refer to a local charm or upload one to the controller from my local system? or is canonical's store the best way to do that with locked down ACLs?
=== CyberJacob is now known as zz_CyberJacob
[07:52] Good morning Juju world!
=== frankban|afk is now known as frankban
[10:55] Is an lxc profile named "juju-default" magically applied to all models, or do you have to manually tell juju to apply that profile to a model?
[10:55] oh, is 'default' the model name?
[10:56] so if I do "juju add-model foo" I need a corresponding juju-foo profile?
[11:04] ok, for any future travellers: juju seems to look for an lxc profile called juju-<model name>; if it finds it, it applies it to the containers in that model.
[14:05] Reminder to all: Charmer Summit / config management camp CFPs are due tomorrow!
[14:20] jcastro: is there a CFP template somewhere?
[14:21] http://summit.juju.solutions/ has a link to the form
[14:21] ta
[14:32] anyone having a working environment with openstack mitaka?
[14:41] justicefries: hey, so you'd want to use the existing client interface, that way it just seamlessly integrates
[14:42] justicefries: an elb-proxy-charm is a great idea, we had an early attempt a while ago, but it would just reuse the http interface and take AWS-specific config as charm config and glue the two together
[14:42] hmm nice. ok. might roll up my sleeves today and write some charms.
[14:43] jcastro/marcoceppi: do you have a couple minutes to chat in eco-wx?
[15:00] aisrael: I'm editing mid video, I need a few minutes
[15:03] jcastro: no worries, marco answered my q's <3
[15:04] cool
[15:04] anyone have anything for the crossteam?
[15:04] jcastro: yes, 1 sec
[15:05] jcastro: https://bugs.launchpad.net/bugs/1640242
[15:05] Bug #1640242: debug-hooks doesn't accept a named action
[15:05] That's not a wishlist item, imo, but a usability issue
[15:08] cool, anyone else have a burning bug they'd like to see core address?
[15:09] I'm going to ask about spot instances again
[15:14] lazyPower: just waiting for youtube to finish the edit I did to trim the front of the video and I'll publish it on the YT channel.
[15:15] nice, thanks jcastro
[15:17] marcoceppi: any bugs from you?
[15:17] jcastro: how about that one where you have to have credentials even if you get add-model access to a controller
[15:17] I don't think I've run into that yet?
[15:17] jcastro: it's been around since the summit
[15:17] since rc1
[15:18] I've been gone, remember? Link me up.
[15:19] lazyPower: mbruzek: https://github.com/conjure-up/conjure-up/issues/505
=== scuttle|afk is now known as scuttlemonkey
[15:21] jcastro: I can't find a bug now
[15:22] ok, I can dig around
[15:22] jcastro - yeah he hopped on a hangout with us yesterday and we saw the progress
[15:22] so we've got most of the stuff there, still sorting out system access control issues, but otherwise stokachu made a ton of progress there
[15:22] jcastro: https://bugs.launchpad.net/juju/+bug/1630372
[15:22] Bug #1630372: "ERROR no credential specified" during add-model as non-admin user
[15:23] lazyPower: you guys looking at a release for canonical-kubernetes and kubernetes-core today, or going to try to gamble on a Friday?
[15:24] got it
[15:25] I am confused by the bug work in core lately
[15:25] like, bugs are being closed with no explanation
[15:26] jcastro: example?
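(For reference, a minimal sketch of the per-model profile convention discovered above, using LXD's `lxc profile` commands; the model name "foo" and the nesting tweak are illustrative.)

    # Juju applies an lxc profile named "juju-<model name>" to containers
    # in that model if one exists, so create it before adding the model.
    lxc profile copy default juju-foo
    lxc profile set juju-foo security.nesting true   # example tweak; edit as needed
    juju add-model foo                               # containers in "foo" pick up juju-foo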
[15:27] https://bugs.launchpad.net/juju-core/+bug/945862
[15:27] Bug #945862: Support for AWS "spot" instances
[15:27] arosales - good question, we're more than likely going to push today.
[15:27] arosales - is there something specific you're looking for?
[15:31] lazyPower: generally interested, but was also noticing the only failure for core and canonical k8s on CWR was that pesky lint issue
[15:32] ah, yeah. I didn't see the refactor merge come in yesterday, so i'll circle back on that and we'll get a release made as soon as it's validated
[15:33] closer to EOD, but likely today
[15:34] ref = http://data.vapour.ws/cwr-tests/results/bundle_canonical_kubernetes/ec410f94fa8d4c58b482b9b9d04cf530/report.html and http://data.vapour.ws/cwr-tests/results/bundle_kubernetes_core/b117cfc786174737af81ef32c3372108/report.html
[15:37] lazyPower: thanks
[15:45] marcoceppi: can you explain the use case in more detail in https://bugs.launchpad.net/juju/+bug/1630372
[15:45] Bug #1630372: "ERROR no credential specified" during add-model as non-admin user
[15:45] rick is confused as to what you're actually trying to do
[15:59] jcastro: otp
[16:01] good morning
[16:01] how do I find out what container is running what instance of openstack?
[16:06] I have to control all the openstack components out of juju?
[16:06] i can't just edit config files on the servers because they get overwritten :(
[16:10] lazyPower: mbruzek: how do we look on azure? there's a guy asking in the sigcluster-lifecycle channel about azure
[16:11] jcastro: Last I checked we deploy fine in Azure; I had kwmonroe do it a few times
[16:11] we have good test results on azure deploys in CWR aside from the lint error
[16:12] awesome, good to know
[16:12] I think I'll just respond each time a kops or kargo guy responds to a question
[16:14] lazyPower: dang, so that lint error makes everything look broken?
[16:14] yeah, arosales already reached out about it this morning
[16:14] ack
[16:19] lazyPower: the nginx one is fixed
[16:19] marcoceppi - sorry i lost context, in what regard?
[16:20] lazyPower: the nginx lint errors in kubeapi-load-balancer
[16:20] ah ok
[16:37] mbruzek: any objection to me kicking off a new jujubox build on dockerhub?
[16:37] (last one was 16 days ago)
[16:37] kwmonroe: yes
[16:38] kwmonroe: can you review the 2 pull requests in the repo?
[16:38] I just landed them today
[16:38] Giving you the option to build with a user other than ubuntu
[16:38] but by default it will build with ubuntu
[16:39] If those meet your approval, then I would like to merge them so we can build a new one.
[16:40] kwmonroe: I anticipate problems with charmbox with my changes yesterday and today.
[16:40] But I am committed to fixing those too
[16:58] stokachu: Hey stokachu, I see you have put dokuwiki up for review & promulgation at revision 11, but I also see revision 15 under your namespace. Would you like to update the dokuwiki revision you have up for review?
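(A hedged sketch of the scenario behind Bug #1630372 and its usual workaround, assuming Juju 2.x's grant/add-credential/add-model commands; the user, cloud, and credential names are illustrative.)

    # admin: let a non-admin user create models on the controller
    juju grant bob add-model

    # bob: add-model fails with "ERROR no credential specified" unless bob
    # registers his own cloud credential and names it explicitly
    juju add-credential aws                          # interactive; stores bob's keys
    juju add-model bobs-model aws --credential bobs-creds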
=== zz_CyberJacob is now known as CyberJacob
=== frankban is now known as frankban|afk
[18:10] i've tried to install the juju-gui and it's sitting in an unknown state, and when connecting to the web interface it hangs on "connecting to juju model" (juju-2.0 2.0~rc3-0ubuntu4.16.10.1)
[18:13] hatch: ^
=== tvansteenburgh1 is now known as tvansteenburgh
[18:14] bildz: with Juju 2 you no longer have to deploy the GUI charm
[18:15] the GUI charm is only for Juju 1
[18:15] to access the GUI with juju 2 simply run `juju gui --show-credentials` and it'll open a browser with the GUI and output your credentials to the CLI
[18:15] oh sweet
[18:16] bildz: and - if you've got a long-running controller you can run `juju upgrade-gui` to get the latest gui release. :)
[18:17] thanks, hatch
[18:17] np, anytime, if you run into any issues there just ping me
[18:17] thanks tvansteenburgh
[18:18] appreciate the help!
[18:43] I'm having an issue with Juju trying to connect to MAAS API version 1.0. Version 1.0 is not supported on the version of MAAS I'm using.
[18:43] ERROR cmd supercommand.go:458 new environ: Get http://10.0.96.2:5240/MAAS/api/1.0/version/
[18:43] juju version => 2.0.1-xenial-amd64
[18:43] maas version => 2.1.1+bzr5544-0ubuntu1 (16.04.1)
[18:45] anyone had this problem before?
[19:01] hello everyone: for the last few days when I try to bootstrap a juju controller on maas it fails with the error: "ERROR failed to bootstrap model: bootstrap instance started but did not change to Deployed state: instance "4y3hek" is started but not deployed" Anyone have any ideas? I'm seeing older stuff on google but nothing recently...
[19:01] this command worked fine the week before last, btw
[19:02] any errors output on the console of the machine that was started?
[19:03] quixoten: I hadn't thought of that, gimme a few and I'll see what happens on the console
[19:05] Hi, any chance I can get some help with a wonky bootstrap node? looks like the mongodb config got broken/gone
[19:06] here are the mongodb logs: http://paste.ubuntu.com/23491858/ after restarting juju-db
[19:23] kjackal: yeah, i need to re-review that charm and then i'll push a new review request
[20:08] hatch: I've made changes to the openstack charms and have committed them, but they don't appear to be refreshing the proper changes.
[20:08] bildz: was this on a fresh deploy?
[20:09] yes
[20:09] i did a conjure-up
[20:09] this is absolutely amazing though
[20:09] my mouth dropped
[20:09] :D
[20:09] hackedbellini o/
[20:10] lazyPower: here! :)
[20:10] so, to recap for anyone that comes across this later, we're continuing to investigate running a docker-based workload in lxd
[20:10] lazyPower: so, how can I rebuild the layer of the charm?
[20:10] and you ran into a problem with a really old version of a charm that hasn't been refreshed with the latest layer fixes
[20:10] bildz: so when you click on the application on the canvas, and you go to the configuration settings in the inspector - does it show your changes?
[20:10] I need to restart the nova-cloud-controller and computes
[20:10] checking
[20:10] hackedbellini - first you'll need to clone the layer: https://github.com/chuckbutler/redmine-layer
[20:11] hackedbellini - you'll also need charm-tools installed: with the juju stable ppa enabled, `apt-get install charm-tools`, or you can snap install it: `snap install charm`
[20:11] hatch: yes, the changes are there
[20:11] lazyPower: both done!
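(The Juju 2 GUI commands mentioned above, gathered in one place for future travellers; the flags are as quoted in the conversation.)

    # Juju 2.x ships the GUI with the controller; no charm deploy needed
    juju gui --show-credentials   # open the GUI in a browser and print the login credentials
    juju upgrade-gui              # on a long-running controller, pull in the latest GUI release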
[20:12] hackedbellini - cd into the charm dir, and issue `charm build -r --no-local-layers`
[20:12] bildz: ok then beyond that I'm not familiar with the internals of the openstack charms.
[20:12] this will assemble the charm from its declared layers, and output to a build path. it's likely to put it in $PWD/builds unless you've exported $JUJU_REPOSITORY in your shell
[20:12] after making a change to a charm, is there a way to restart the service from the UI
[20:12] seems when i made the change, openstack went plop
[20:13] bildz: when a configuration option is changed, the 'config-changed' hook in the charm is run. It's up to the charm to do what it does at that point. If you wanted to manually restart you'd have to ssh into the machine and do it manually
[20:14] bildz: I'd imagine that the openstack charms would restart what's needed, but again, outside of my wheelhouse there
[20:14] lazyPower: ok, it worked! Now I move the build to my charms dir?
[20:14] yep, and juju deploy ./redmine
[20:15] lazyPower: should I do a new deploy or change the charm of the one I already deployed?
[20:16] I would recommend a fresh deploy
[20:16] just to ensure we don't have any niggly issues hiding in there that might muddy the results
[20:17] hatch: thanks, I will let you know what i find out
[20:17] hmm. private charms. what's the way to do them? upload them to canonical with only the people I want having ACLs on them? any way to just do it directly from a private git repo, or is it the controller that's pulling the charm?
[20:20] hey bdx, did you submit something for the summit?
[20:20] also got a weird one on the canonical-kubernetes bundle, and I think it has to do with the kubeapi-load-balancer
[20:21] justicefries - we're cycling through an update which should catch the stray error with the api-lb
[20:21] I installed tiller now that helm 2.0 is out, and I think it proxies through kubectl, but I'm getting an upgrade request when forwarding ports.
[20:21] oh, cool.
[20:21] we just published the charms, but the bundle hasn't been revved yet
[20:21] ^ that error too, or the one with the instance bouncing?
[20:21] lazyPower: I have to go home now. Tomorrow I'll ping you to continue (hopefully what we did will be enough)
[20:21] you can set ACLs on your charms in the store, yep
[20:22] thanks for your time! :)
[20:22] so you can use private repos, and then restrict the charms to your team using the charm store ACLs
[20:22] so it's private all the way across
[20:22] ok. any notion of self-hosted stores at this time? not a requirement for me, just curious
[20:23] not that i'm aware of
=== alexisb is now known as alexisb-afk
[20:26] cool. OH! but I can use --local while I'm devving charms, nice.
[20:27] yep
[20:30] hmm ok. if I'm creating an infrastructure charm (say, aws-elb) that doesn't depend on a certain version of ubuntu, what's the right folder structure? is `charms/precise` from the example simply convention, or is it a GOPATH-like requirement?
=== alexisb-afk is now known as alexisb
[20:48] hey all, FYI, I had to build my own MacOS Sierra version of juju off the 2.0 branch because what's on the releases page is still on 1.6. anyway, this works fine when you already have a juju controller, but when you're trying to stand something up it can't find the right agent version (thanks to Sierra being in the version). maybe I should build off the tag
[20:48] instead.
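(A sketch of the build-and-deploy loop walked through above, assuming charm-tools from the juju stable PPA; the output path depends on whether $JUJU_REPOSITORY is exported, as noted in the conversation.)

    sudo add-apt-repository -y ppa:juju/stable
    sudo apt-get update && sudo apt-get install -y charm-tools   # or: snap install charm

    git clone https://github.com/chuckbutler/redmine-layer
    cd redmine-layer
    charm build -r --no-local-layers    # assembles the charm from its declared layers

    # the built charm lands under ./builds (or $JUJU_REPOSITORY/builds if exported)
    juju deploy ./builds/redmine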
[20:48] doesn't matter now because I have a controller
[21:05] justicefries - you can make multi-series charms
[21:05] eg:
[21:05] in metadata.yaml just define `series: [xenial, trusty]`
[21:05] now, you can define series, but you cannot define multiple cross-series, like have centos-6 listed as well as xenial
[21:05] unless that's changed recently
[21:05] hm ok got it.
[21:06] also re-bootstrapping with tools
[21:06] i would poke in #juju-dev about that, they might have some super secret sauce for you there
[21:06] working with non-machine resources as charms overall just feels a little weird.
[21:06] ah nice ok.
[21:06] yeah, i totally understand
[21:06] we call those proxy charms, and they just poke things with a stick to make it do somethin
[21:06] which in itself is kind of odd but it does get the job done.
[21:07] yeah
[21:07] what's nice about them, though, is you can colocate them in lxd on some unit you have running in your infra
[21:07] so it's all nice and isolated and cozy
[21:07] if that's even a concern of infra
[21:07] :)
[21:07] now is that something I'd have to specify in the charm metadata that it can colo with another machine? or do I specify the machine when deploying my unit to make that happen?
[21:07] the rules of when I get a machine vs. when it packs onto an existing one are a little fuzzy.
[21:12] ah, ok.
[21:13] so you can deploy most charms to lxd on a principal unit, eg --to lxc/5 which allocates a container on machine #5, whatever that may be
[21:13] in the instance of bundles, our CDK core bundle uses colocation to squeeze easy-rsa on machine 0
[21:14] https://github.com/juju-solutions/bundle-kubernetes-core/blob/master/bundle.yaml#L27
[21:14] also looks like i botched the syntax, it's now --to lxd:#
[21:19] i think i'm starting to see through the murk. ok. so what I'm going to want next is to make sure I specify my --cloud-provider on the kube-apiserver. there's no way to add a flag as it stands today with another layer, is there?
[21:20] basically to get the setup I want with my bundle, I need that, and I need to make sure my machines get an IAM profile, and I'd like it to create the IAM profile as well just so I have completely repeatable clusters.
[21:21] Correct, you'd need to extend the kubernetes charms to take that --cloud-provider flag to enable the cloud-provider-specific integrations. we don't support that as it encourages bad behavior by not going through juju to request resources.
[21:22] but if that's your end goal to fully integrate with $CLOUD, it's a reasonable expectation to add some extensions to the template logic to enable that, and you'd have to manually provision the IAM role sets.
[21:23] hmm. that's an interesting way to put it.
[21:24] well it's that or open a bug and we can openly talk about it. I know that in our previous planning sessions we explicitly decided to punt on the cloud-provider-specific integrations as it's not portable
[21:25] you provision a workload in kubernetes using an ELB, and then suddenly it doesn't work when you re-deploy on maas because it's a different resource set on the backend
[21:26] yeah. you want to keep it portable. i need to think about the balance I'd want here. obviously I'm used to PVs provisioning on the provider, and services/ingress doing the same.
[21:26] where available.
[21:27] I don't think it's an unreasonable request, just not one we've committed to supporting yet. Ideally we would get some primitives for those in juju and extend kubernetes to talk to juju
[21:27] ergo, i need a load balancer
[21:27] oh, sure.
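(A sketch pulling together the two mechanics described above: a multi-series metadata.yaml and colocating a charm in an LXD container on an existing machine. The charm name, series list, and machine number are illustrative.)

    # metadata.yaml for a charm that supports more than one Ubuntu series
    cat > metadata.yaml <<'EOF'
    name: aws-elb-proxy
    summary: Proxy charm that drives an AWS ELB
    description: Pokes AWS on behalf of related applications.
    series:
      - xenial
      - trusty
    EOF

    # colocate it in an LXD container on machine 5 (current placement syntax)
    juju deploy ./builds/aws-elb-proxy --to lxd:5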
[21:27] it requests a juju deployed haproxy
[21:27] that would be a really nice way to do it
[21:27] my workload wants storage, juju requests up EBS flavors
[21:27] you'd almost need a charms equivalent of resources. "queue this charm up when kubernetes asks for this resource"
[21:28] i've been thinking about how we can extend the worker pool with cloud storage using the existing juju storage feature set, it seems fairly limited, but it may be good enough to work as we can enlist those PVs directly with a simple manifest render after the storage has been attached to the unit.
[21:28] but today we only support ceph RBD as a PV in our k8s stack, with some commitment to extend that in the coming cycles with our other vendors like nexenta.
[21:29] until it gets rescheduled right
[21:29] exactly
[21:29] as workloads move, the PV would be stuck on a different unit
[21:29] so things get wonky in that scenario
[21:29] yup. suddenly you're pinning stuff with node labels :o
[21:30] heh. be nice if I could just attach kubernetes to my model's credentials.
[21:30] and the charm could use that to make decisions.
[21:30] interesting idea
[21:31] what i would really like is the ability to aggregate resources without directly attaching them to a unit, instead allocating them against the charm's definition, and they become floating resources, which would enable those PVs to travel between the units.
[21:31] but that's a pipe dream today as it's a big departure from how it's currently modeled
[21:31] yeah sure
[21:31] you'd almost at that point need kubernetes workloads represented as charms.
[21:31] 10k ideas, 100 hours to complete them
[21:31] go
[21:31] haha yup
[21:34] hmm I can't find the repo for containers/kubernetes-master
[21:35] we're nested deep in the kubernetes repository, 1 sec and i'll get you a direct link
[21:35] https://github.com/juju-solutions/kubernetes/tree/master-node-split/cluster/juju/layers
[21:36] ^ this is our latest work we just published today. We're nested deep in the cluster/juju directory tree of the kubernetes proper repo. We're a bit behind getting our changes upstream to their master branch, but we're actively working towards making that an easy process by submitting our e2e test results on a regular basis
[21:36] which i'm actively working on today
[21:36] ah ha.
[21:37] well maybe I should stop asking questions then. :p
[21:37] nah you're fine :) I'd rather help a user get moving with what they want to do than satisfy bureaucracy, fwiw
[21:58] heh, looking through these charms, I've been doing Go for years, getting used to python again, phew.
[21:58] yeah, duck typed refresher course
[21:58] quack quack
[21:58] i felt the same way coming to ruby/python from .net
[21:59] decorators are sweet though in python 3.
[21:59] aww thanks :D we abuse them like candy
[21:59] yeah I kind of want to check out the new C# and .NET Core 1.1 stuff.
[21:59] @when('this.makes.sense')
[22:04] ah. is the kubernetes resource coming from a `charm attach`?
[22:05] justicefries - correct, our resources are vetted by hand by mbruzek and me. we then attach those resources to the charms in the store during our release management process. If you wish to use your own bins, you can certainly override them with a `juju attach`
[22:05] maybe at some point, right now just feeling it all out.
[22:05] and when i say by hand, i mean we run e2e suites against a deployment and some additional things by hand.
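(A hedged sketch of the resource override just described. `juju attach` takes application and resource=path; the resource name is assumed here to be "kubernetes" and should be checked against the charm's declared resources, and the store URL and filenames are illustrative.)

    # replace the store-provided binaries on a deployed application with your own
    juju attach kubernetes-master kubernetes=./kubernetes-server-linux-amd64.tar.gz

    # store-side equivalent when publishing your own charm revision
    charm attach cs:~myuser/kubernetes-master-0 kubernetes=./kubernetes-server-linux-amd64.tar.gz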
[22:05] but it's mostly automated
[22:05] sure sure
[22:06] i'm going to have to do a bit of similar stuff with the windows CI charm I need to write :|
[22:06] i feel like saying anything is manual is a bad thing in this whole process, it's kind of baffling that just 2 dudes do all this. #thanksjuju
[22:06] well, i was thinking about that
[22:06] is there any reason you couldn't use the .net container as a base for running those?
[22:06] the way I'm putting it to my team is that juju and kubernetes used well lets us punch well above our weight class.
[22:06] that would skinny up the required charm code
[22:07] justicefries - that's a fan-freaking-tastic description. Can i quote you on that?
[22:07] sure. :)
[22:07] <3 love itttt, ta
[22:07] GPU requirements on some of them.
[22:07] * lazyPower heads off to twitter
[22:07] that's the barrier on the containers.
[22:07] on the linux side, sure, there's a lot of good precedent now for GPU containers.
[22:07] https://twitter.com/lazypower/status/799373475819483139
[22:10] Hmmm you're right
[22:10] cuda integration on containers for windows is funky, i just googled and saw the mess they're untangling.
[22:10] * lazyPower retracts his shower thought
[22:10] so it's coming, but it's not here today.
[22:10] yeah. nvidia-docker wrapper is great so you don't get all screwed up on device mounting and driver versions on linux.
[22:10] yup
[22:10] well fortunately, when you're ready for that, i got your back
[22:10] i'll reach out to the cloudbase peeps
[22:11] see if i can get you someone to pair and patch-pilot your first windows charm into the store
[22:11] i wish. :| i'm basically just wrapping it all up into a well isolated thing that I don't need to deal with. that'd be awesome. windows automation just feels so ugly to me.
[22:11] having come from an msdeploy-based background
[22:11] i know the feeling. powershell got a lot better, but it's still not where i would want it to be.
[22:14] yeah. fortunately there's a path there to linux for some of that for me in the next few months. it really affects the resources you're able to throw at the problem when you're constrained to windows for a certain part of the whole thing.
[22:17] we had a large scale deployment for a marketing firm at my last job, and the core component of all of that was mssql server, and at the time there was zero support for running that on linux (which appears to have changed). So i completely understand the frustration there. Having a single mssql backend surrounded by ubuntu was maddening when it was the most finicky component of them all.
[22:17] but i'm also not an mssql admin, so i probably did something wonky in there.
[22:18] all i do know is that WAL files for mssql are nightmare fuel
[22:18] ugh
[22:18] haha, it seems i'm in good company
[22:18] i fortunately haven't been within a ten mile radius of mssql
[22:20] this is for sure nightmare fuel too though. fortunately all of the services and everything else isn't that way
[22:21] * lazyPower nods
[22:21] unfortunately because even though stuff is migrating to linux, a lot of the devs are going to remain on windows, so there's a whole fun gyp infrastructure in place
[22:21] and a rat's nest of linking that that team is maintaining.
[22:24] hackedbellini - hey no problem. sorry it took me forever to see that message. i went scrolling back to touch base with how you were doing and see you went home for the day. Cheers until later today (when you see this) then :)
[22:33] hmm.
[22:33] does it make sense to create a general "aws" charm with interfaces for each type of resource you might want to relate to? could you have a unit with multiple relations to the same interface? say you want 2 EBS volumes or something.
[22:34] maybe not. maybe that'd end up being clunky versus just making two aws-ebs units
[22:34] and adding the relations.
[22:34] i think that having succinct representations for those managed services
[22:34] so 1 charm for rds, 1 charm for ebs storage
[22:34] yeah
[22:34] you can abstract the common bits of that into a base layer
[22:34] like layer-aws-managed-credentials or something
[22:34] probably just an aws base layer that contains boto and stuff
[22:34] so you can plug in your keys and all that, then write shim layers on top using the aws sdk
[22:35] hmm credentials is an interesting one.
[22:36] maybe a sane idea to do vaultproject.io and then have relations (once the vault is unsealed) that ultimately provide the related unit's api key
[22:36] ahh see now you're getting into where we got mired and basically couldn't agree. we wanted to use vault
[22:36] but i don't know enough about it to really use vault effectively
[22:37] it's on my TODO to replace easyrsa with vault for an ssl CA
[22:37] i just wish it wasn't open core. :|
[22:37] oh nice.
[22:37] yep, so expect a pilot of that one in the coming months. we have some vault layers/charms in the wild already as community submissions
[22:37] we're likely to pick that up, polish it, and drop it right into the bundle as a flavor
[22:39] that adds a lot of power to it. does the current charm handle renewing with easyrsa?
[22:39] we want to add that, but it doesn't exist today
[22:39] the idea is to juju run-action easyrsa re-key, and it regenerates and pushes the keys out to anything attached to the CA
[22:39] it's a long-standing issue in kubernetes-proper, how to re-key a k8s installation. We'd like to contribute that back if we can
[22:50] hm. noticed an interesting one
[22:50] kubernetes-master needs socat installed to port forward!
[22:50] really? that seems new
[22:50] it was just using iptables before
[22:51] E1117 15:48:41.963192 46813 portforward.go:329] an error occurred forwarding 49400 -> 44134: error forwarding port 44134 to pod tiller-deploy-2241983194-k4tdu_kube-system, uid : unable to do port forwarding: socat not found.
[22:51] yup
[22:51] good find
[22:51] and easy fix too
[22:52] is that something you'd put in basic or kubernetes-master?
[22:52] kubernetes-master, in the layer.yaml under packages:
[22:52] but you can work around it temporarily by just juju run --application kubernetes-master "apt-get install socat"
[22:52] ah interesting, didn't know there was an option there for it. i was looking in the actual reactive
[22:53] ah sure
[22:53] yeah we'll get that committed for the next release
[22:53] we just bumped the charms today, so it's unlikely to get pushed unless mbruzek tells me i'm being a ninny
[22:53] which he just did, great
[22:53] why did i say anything
[22:54] for context, we're on a hangout. i got it first hand
[22:55] haha fair
[22:55] is api.jujucharms.com down?
[22:56] so that works. the workers need it too.
[22:56] ack, i'll re-tag the bug to target both
[22:56] i had to expose kube api directly, because going through the LB was giving an upgrade error, so something's off in that nginx config.
[22:57] you can replicate by grabbing helm 2.0, doing a `helm init` to install tiller, and then `helm status`
[22:57] vmorris - it doesn't appear to be.
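(The socat workaround and the longer-term fix discussed above, spelled out; the layer.yaml snippet is a sketch of the "packages" mechanism lazyPower refers to, via the basic layer's options.)

    # immediate workaround on a running deployment (the workers need it too)
    juju run --application kubernetes-master "apt-get install -y socat"
    juju run --application kubernetes-worker "apt-get install -y socat"

    # longer-term fix (sketch): declare the package in the charm layer's layer.yaml, e.g.
    #
    #   options:
    #     basic:
    #       packages:
    #         - socat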
[22:57] i'm able to deploy from the store, which is api driven
[22:57] yeah i'm not able to deploy from the store for some reason :(
[22:57] huh, are beta extensions disabled?
[22:58] we don't explicitly disable them... what else have you uncovered justicefries?
[22:58] OH
[22:58] privileged is disabled.
[22:58] yeah, i want to make that a config option
[22:59] which I need for CI agents. though maybe I could just do a LXD CI agent and call it a day.
[22:59] lazyPower can you confirm that api.jujucharms.com is at 162.213.33.122? and is that supposed to be pingable?
[22:59] that way you can expose a smaller subset of workers that need priv. containers
[22:59] 162.213.33.121 is the correct ip vmorris, however i think icmp is disabled
[23:00] okay ty
[23:00] ah that'll probably require some worker labeling
[23:00] right, i can prototype that out real quick, 1 sec
[23:01] i'm probably marching to my own internal k8s bundle and then backfeeding things that can be generalized.
[23:06] justicefries - http://paste.ubuntu.com/23492869/
[23:06] something like this. you can import that into jujucharms.com/demo and visualize it. you get different worker pools for different "roles", per se. and using the tagging/labels you can narrow down how the workloads get scheduled
[23:07] so options will get passed in as flags today? nice.
[23:07] there's a labels config flag for worker
[23:07] you'll see in the coming release (probably the next, actually) that ingress=True will only flag and schedule ingress on the units in that service pool; as it is today, it's an all-or-nothing shot
[23:08] and not every k8s worker should be an ingress, it was a blanket decision early on for expedience, but we're at a point today we can fine-tune those operations to only affect sibling units.
[23:08] s/service/application/
[23:08] man good thing mark isn't looking or i'd be flogged for that.
[23:08] or rick_h for that matter
[23:08] * lazyPower ducks
[23:09] haha
[23:09] actually I had to re-parse it though since ingresses technically do point at services
[23:09] yeah, we're talking about a very mixed set of logical operands. and the more overlapping words, the more confusing the docs will be without illustration
[23:09] and i come with no illustrations
[23:10] https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/135 -- justicefries - your issue for socat, if you want to subscribe
[23:11] horrible issues with spaces on aws today ... just mailed the list
[23:11] no matter what I do, additional units will not deploy to a subnet in my space
[23:12] I've created subnets in each AZ in my region
[23:12] and added them to my space
[23:12] still no luck
[23:13] so bummed
[23:13] sorry to read that bdx :/
[23:13] bdx: spaces and aws aren't fully supported. There was work there that was more of a PoC and so I'm sure it's an uphill battle.
[23:13] ooo man, i think i told bdx otherwise :|
[23:13] bdx: we're working to reset and not make things so provider specific, but it's going to take time to basically rebuild the networking support unfortunately.
[23:13] this is probably my fault
[23:14] I didn't know aws spaces was POC
[23:14] this really throws a stick in my spokes
[23:14] We celebrated spaces support with MAAS a cycle ago, we spent the next cycle making it work properly on MAAS and aws didn't get the same attention. It's something that we're learning hard lessons from right now
[23:15] bdx: I'm sorry, we've not set you up for success here.
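(Going back to the worker-pool sketch lazyPower pasted above: a hedged bundle fragment showing two kubernetes-worker pools distinguished by the worker charm's labels config option. The charm URL, option format, and pool names are illustrative, and the rest of the bundle, master/etcd/relations, is omitted.)

    # fragment to merge into a full kubernetes bundle: two worker pools with
    # different node labels, so workloads can be targeted per pool
    cat > worker-pools.yaml <<'EOF'
    services:                            # newer bundle formats use "applications:"
      kubernetes-worker-general:
        charm: cs:~containers/kubernetes-worker
        num_units: 3
        options:
          labels: "pool=general"
      kubernetes-worker-edge:
        charm: cs:~containers/kubernetes-worker
        num_units: 2
        options:
          labels: "pool=edge"
    EOF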
[23:15] rick_h i apologize for my part in this too :|
[23:16] running off all willy nilly with good news for everyone
[23:16] it's cool .. thanks
[23:16] i mean bdx
[23:16] * rick_h owes bdx beverages next summit
[23:16] I don't know how I should move forward now ... lol .... 100+ subnets created .... all mapped out for each app
[23:17] bdx: jam was looking at the bootstrap subnet work from your email to the list as a first step
[23:17] rick_h: that's great news
[23:17] bdx: and has been mapping out the bits that need to be rebuilt.
[23:18] rick_h: that's awesome, thanks
[23:18] bdx: but it's currently a 2.2 target for end of this cycle to have meaningful improvements.
[23:18] darn ... ok
[23:18] Right now it's very much in the 'spec and build a better path' mode
[23:18] nice
[23:19] rick_h, lazyPower: so, what should I do then, just have a non-prod vpc for all non-prod apps
[23:19] and a prod vpc for prod apps
[23:19] it doesn't feel right
[23:20] because different clients have different users accessing the non-prod env, and if they are all clustered across the same address spaces ....
[23:20] same with production envs
[23:24] bdx - i'm unsure of how to recommend a better path to you at face value that wouldn't require unwinding temporary/workaround style fixes for this.
[23:25] you're looking to gain tenant isolation up and down the stack at every stage right? between units/networking/et al
[23:26] lazyPower: yea ... because we have different clients' users accessing the machines and services across apps/app envs
[23:27] right, and without spaces that's not a juju-native primitive. You could achieve something like that by using another means of sdn, and configuring apps naively to use that sdn - but it's not clean, automated, or easy to rip out once spaces gain the proper support
[23:27] yea
[23:27] thanks for your insight
[23:28] talking non-trivial surgery that would likely yield a redeploy
[23:29] yeah, I mean ... luckily the next production deploy I'm doing is on private infrastructure and I'll be using the manual provider
[23:30] I won't have to spin up any prod on aws till january I think
[23:30] ehhh nix that
[23:30] big aws prod deploy next month
[23:31] I think I should just use a separate vpc for each production app deploy anyway
[23:32] hopefully that will simplify things, though I've never tested adding models in vpcs outside of the one I bootstrapped to
[23:37] on a brighter note, I did get my barbican stack up and publicly accessible on aws
[23:37] http://paste.ubuntu.com/23493006/
[23:37] it was a bear, and required hacking of the barbican charm in multiple areas
[23:38] 20 deploys later
[23:38] W000t
[23:43] bdx: yeah I mean is it worth just using different regions?
[23:46] for the uninitiated like myself: https://wiki.openstack.org/wiki/Barbican
[23:46] bdx - interesting, so does this replace your interest in vault or does it augment it?
[23:47] rick_h: aah, like create my models in different regions?
[23:47] rick_h: then they would be forced to use subnets in the region?
[23:48] errr, then *juju* would be forced to deploy the units to subnets within those regions
[23:48] and disjoint from the other apps in other subnets in other regions?
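(For reference, the spaces workflow bdx describes attempting looks roughly like this on a provider with full spaces support, such as MAAS at the time; on AWS it was still PoC-level, which is exactly the trouble reported above. Space names and CIDRs are illustrative.)

    # define a space and the subnets that belong to it, then check what juju sees
    juju add-space client-a-prod 10.0.10.0/24 10.0.11.0/24
    juju spaces
    juju subnets

    # constrain an application's units to machines with addresses in that space
    juju deploy mysql --constraints spaces=client-a-prod
    # or bind its endpoints to the space
    juju deploy mysql --bind client-a-prod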
[23:48] bdx: just thinking out loud of forcing separation from staging and prod
[23:49] bdx: using regions might be an approach
[23:50] that's a great suggestion
[23:51] rick_h: let me get back to you after I try implementing that
[23:52] lazyPower: in all reality I'm super comfortable interfacing to keystone
[23:53] lazyPower: I was getting stumped around every corner with the intricacies of vault
[23:54] I felt the same way during my discovery session with it
[23:54] but i also thought that was just me being nooby with it, and once we had really flexed it it would become more obvious
[23:54] the fact that I have no experience interfacing with vault, combined with the lack of good (or any) documentation
[23:55] posed a huge road block
[23:55] I spent two weeks learning how to admin and interface to vault .... with 100+ clients, and each client with many users
[23:56] yeah that sounds like nightmare fuel as a learning curve
[23:56] I just don't see myself having the bandwidth to facilitate being the admin for it across a horde of clients/users/apps/envs
[23:57] keystone/barbican on the other hand
[23:58] I'm a big +1 for using the applications you're comfortable with. that's 100% the reason kube-api-loadbalancer is nginx based today. I consciously thought to myself: If i have to go into a customer site and debug this deployment, i know nginx. I barely know haproxy. I can either spend the time learning that or use what i know and go from there.
[23:58] exactly
[23:58] and off-topic, but you're likely to be interested in this too bdx - https://twitter.com/lazypower/status/799401051300372480
[23:59] no way!
[23:59] that's awesome!
[23:59] yeah man stokachu just wrapped that POC up today
[23:59] i think it's too early to say it's "supported" as he does in the blog post, but hey, with that herculean task complete, he can sell it as whatever he wants :D
[23:59] wow ... there are some people around my company who have been waiting for that so they can start playing with local deploys
[23:59] haha
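(A minimal sketch of the region-per-environment split rick_h suggests above, assuming Juju 2.x's add-model accepting a cloud/region argument and per-model access grants; the model, region, and user names are illustrative.)

    # keep staging and prod on disjoint provider networks by putting them in
    # different regions (on some 2.x releases the argument is just the region name)
    juju add-model client-a-staging aws/us-west-2
    juju add-model client-a-prod aws/us-east-1

    # give a client's users access only to their own model
    juju grant alice write client-a-staging
    juju models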