[11:31] gnuoy, do you have a few minutes for https://code.launchpad.net/~james-page/charm-helpers/optimize-dkms-headers/+merge/282927 ?
[11:31] gnuoy, will help towards https://git.launchpad.net/~james-page/charms/+source/openstack-on-lxd-bundle/tree/
[11:32] looking
[11:32] gnuoy, btw I have an entire openstack cloud running on my laptop under LXD
[11:32] including overlay networks, multiple nics and nested KVM...
[11:32] inc the gateway ?
[11:32] yikes!
[11:32] fmd
[11:32] that's fantastic
[11:33] yeah I said something similar
[11:33] gnuoy, a few oddities - I'm using the ZFS backend for LXD - but ceph doesn't like that with direct io enabled
[11:34] the two wise Chris' will have that fixed in two shakes of a lamb's tail I'm sure
[11:35] gnuoy, yeah
[11:35] gnuoy, anyways...
[11:35] approved
[11:35] gnuoy, awesome
[11:47] gnuoy, how are we looking for end-of-the-month?
[11:48] jamespage, early to say. We are feature frozen and tracking here: https://docs.google.com/spreadsheets/d/1K1iR2-HlVEePsG_wOo6_zlL0d4ZDiBh3VaiLsb7qbXc/edit
[14:49] jamespage, is it possible to poke around the dellstack setup? I wanted to understand the dns-nameserver snafu we saw in /etc/network/interfaces. We ran into this again last week, but on 1.25, which has since been fixed, but I cannot repro this on my local setup.
[14:49] frobware, possibly, but thedac did tear everything down
[14:50] frobware, the only diff we could see was that I built on top of 14.04 and he was building on 15.10 - which would mean different go versions....
[14:51] jamespage, difficult to understand why go would make the difference unless a later go (15.10) uses its own resolver library/implementation...
[14:51] frobware, any compiler version conditional logic in the codebase?
[14:52] jamespage, not sure. but if this happens with a maas node deploy, that takes go/juju out of the equation. curtin/maas version combo?
[14:54] jamespage, but I will try a maas 1.9.0 install on 15.10 for completeness.
[14:54] frobware, well that could be quite possible
[14:54] frobware, oh, the MAAS install was on 14.04 always
[14:55] just the juju build was different
[14:55] jamespage, oh. ok.
[14:55] jamespage, that's still worth trying anyway. a little quicker for me than maas on 15.10
[15:00] jamespage, do you know if the bootstrap node was trusty or wily?
[15:07] frobware, trusty
[15:07] jamespage, thx
=== med_ is now known as Guest51217
[17:09] mbruzek, lazyPower: I want to run something by you concerning layer-tls ..... After giving a lot of thought to layer-tls, I feel like I might have initially taken the wrong impression about how it could be used and implemented ...
[17:10] mbruzek, lazyPower: initially when looking over k8s.py I was struck by the idea of using layer-tls as a supporting layer for web applications to help generate certs and keys, and get them in the right places
[17:14] mbruzek, lazyPower: I am now feeling like layer-tls would/could be better/more efficiently used if it was deployed as a standalone charm, and other services/charms that need certs could relate to it and get their cert/keys generated and passed to them via unitdata
[17:16] mbruzek, lazyPower: I feel like the latter is how you were intending it to be used ... what are your thoughts on this?
[17:45] bdx: That is an interesting idea that I had not thought of. The current layer-tls charm does not pass around the private keys, so I like that part. If you had a separate charm, the keys would have to be passed back to the units and in theory could be intercepted
[17:46] bdx: We have 2 charms consuming the tls layer and that seems to work, but if it does not work for your case we can iterate on the design.
[17:58] mbruzek, no, it works, I just feel like having a centralized ca that issues certs out to requesters would be far more lightweight and more easily consumable. Is this not what you were going for?
=== tinwood is now known as tinwood_
=== tinwood_ is now known as tinwood__
=== tinwood__ is now known as tinwood
[18:06] mbruzek: other critical data is passed via relation ...
[18:11] bdx That makes sense in a lot of ways
[18:11] bdx And I had considered that, paring this down and co-locating the CAs with the model controllers
[18:12] bdx using relations to the CA pki on the mc, you could then relate all over your model, and get self-signed certificates supporting whatever endpoints react to the interface tls.available, I think it warrants some later investigation
[18:13] bdx as it stands, we only have a single relation, which is the peering relationship to exchange the csr/keys. It would be cool to see that logic teased apart into a provides/requires role and then refactor peering to use the common code (lib?)
[18:16] lazyPower: my thoughts exactly .... I can't help but think that it would be overkill to include/install that layer on every openstack service, or every webapp .... just seems like a centralized ca would really sweeten the deal here
[18:16] bdx: we <3 contributions - you can start with just the tls layer and the tls interface - all the breadcrumbs you really need are right there
[18:17] bdx: bonus points if you bundle up some examples once you have the CA working as a standalone service
[18:17] we're about to move into the next focus area of log infrastructure and backups
[18:19] lazyPower, mbruzek: I am currently trying to redesign our webapp deployment process to be juju deployed, we have a bunch of django/rails webapps with nginx front ends that all need certs and keys, the tls-layer will be a crucial part of this, so I think I can justify the time
[18:19] lazyPower: ^^ Nice, that's definitely needed
[18:20] lazyPower, mbruzek: thanks for your insight on this
[18:20] bdx anytime :) we'll happily lend a hand where we can and do some CR along the way
[18:28] kwmonroe: happy new year
[18:28] lazypower: you too :)
[18:28] asanjar \o/
[18:28] happy new year asanjar
[18:29] not to immutable mbruzek
[18:29] he's gone to immutable lunch
[18:30] then immutable bathroom run
[18:31] oi
[18:33] lazypower: I will send you a meeting invite for sometime next week, I'd like to discuss the work you have done with Juju and Docker
[18:33] asanjar that's crunch week for me, final prep for belgium
[18:34] but I can squeeze in 30 minutes or so on tuesday
[18:34] lazypower: then I'll see you after Belgium
[18:35] ok, what's the focus area?
[18:35] launching big data containers?
[18:37] lazypower: yeap.. trying to convince the bigtop community to use Juju instead of vagrant and puppet to manage their docker containers
[18:37] asanjar - If their containers are already baked, and you have say a docker-compose file - send me that and I'll send you a charm to deploy their stack
[18:39] lazypower: will do,
[19:00] * aisrael waves to asanjar
[19:26] Hey asanjar I just got back from lunch. How are you doing?
[20:05] hi aisrael Happy New Year, how are you my friend
[20:06] mbruzek: miss me :)
[20:10] asanjar: Very good, thanks, and you?
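A rough sketch of how a consuming web-app charm (e.g. one of the nginx-fronted django/rails apps mentioned above) might pick up certificates from a standalone CA charm over a relation. Only the 'tls.available' state comes from the conversation; the endpoint accessors, file paths, and the 'webapp.ssl.configured' state are illustrative assumptions, not a published interface.

```python
# Hypothetical consumer-side handler: assumes a CA charm/interface layer
# that sets 'tls.available' once a signed cert/key pair is ready for us.
from charms.reactive import when, when_not, set_state
from charmhelpers.core import hookenv
from charmhelpers.core.host import write_file


@when('tls.available')
@when_not('webapp.ssl.configured')
def install_certificates(tls):
    cert = tls.get_server_cert()   # illustrative accessor on the endpoint
    key = tls.get_server_key()     # illustrative accessor on the endpoint
    if not (cert and key):
        return
    # Drop the material where nginx expects it; the paths are just examples.
    write_file('/etc/ssl/certs/webapp.crt', cert.encode('utf-8'), perms=0o644)
    write_file('/etc/ssl/private/webapp.key', key.encode('utf-8'), perms=0o600)
    hookenv.status_set('active', 'TLS certificate and key installed')
    set_state('webapp.ssl.configured')
```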
=== mwhudson is now known as Guest86296
=== Guest86296 is now known as mwhudson
=== mwhudson is now known as Guest65934
=== Guest65934 is now known as mwhudson
[20:31] mbruzek, lazyPower: am I on the right track here -> https://github.com/jamesbeedy/interface-tls/blob/add_provides/provides.py ?
[20:31] bdx looking
[20:38] bdx: It looks like a great start! I like this approach too.
[20:39] bdx: I think you might have the term "service" overloaded here. "We expect multiple, separate services to be related.
[20:39] No two services should share the same key, cert."
[20:41] Unless I don't understand what you meant, this would be units. $ juju deploy -n 3 mysql
[20:41] mbruzek: exactly, and we would want all 3 units to have the same client key right?
[20:41] mysql/0, mysql/1, mysql/2 <- Those are units, "mysql" is the service
[20:42] mbruzek, exactly
[20:42] bdx: Yes, the units _could_ have the same client credentials, but what if each unit needs its own cert, key
[20:43] mbruzek: yea, I've been going back and forth examining use cases ...
[20:43] service = hookenv.remote_service() ...
[20:43] key = '{0}_signed_certificate'.format(service)
[20:44] This would give you one signed certificate per _service_; I think you want one per _unit_
[20:45] mbruzek: take for instance, 3 nginx frontends behind an haproxy, if nginx[0] dies, we don't want to have to re-add the cert to our cache to be able to access the site again
[20:46] bdx: That is true, but in the Kubernetes case each unit has its own api server that needs a key and cert. I *think* they have to (should) be different certs and keys than the other servers
[20:47] haproxy is a special case, in that case would haproxy request the cert/key ?
[20:47] in the context of per-unit key,cert pairs nginx[1] would need to prompt users to re-auth against the different cert,key pair
[20:47] ok
[20:48] mbruzek: I'm not sure, a use case I was just looking at involves all web front ends having the same client key,cert
[20:48] behind haproxy
[20:49] I'm sure there are other ways
[20:49] bdx: In that scenario I would have haproxy request the key/cert. In the swarm case different cert/keys are generated for each unit.
[20:49] mbruzek: entirely
[20:50] mbruzek: ok, I'll revise my work per ^
[20:51] mbruzek: this will highly impact how tls is implemented/used by the openstack service charms
[20:51] I am happy to iterate on this. We could build the meaty methods that do the generation and discuss use cases for certs and keys later?
[20:51] bdx: Yes, if we get a good solution everyone will want to use it.
[20:52] mbruzek: totally, but in the instance of HA openstack service endpoints and proxying to them
[20:53] mbruzek: oooh, nm, it looks like an individual (key,cert) is generated per unit for the same service if using ssl
[20:54] currently
[21:16] bdx what about 2 interfaces
[21:16] bdx one that distributes a "shared cert/key" and one that does per-unit cert/key?
[21:17] lazyPower: great idea
[21:23] lazyPower, mbruzek: I feel it would mimic what the current layer-tls does with 'tls.server.certificate', to 'tls.client.certificate', and 'tls.client.key' ...?
[21:23] errr, current interface
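A small sketch of the per-unit naming mbruzek is pointing at, following the '{0}_signed_certificate' snippet quoted above but keyed on the remote unit instead of the service. hookenv.remote_unit() is the standard charmhelpers call; the helper function name is illustrative.

```python
# Sketch: key certificate data per *unit* rather than per *service*.
# hookenv.remote_unit() returns the remote unit for the current relation
# hook, e.g. 'mysql/0'.
from charmhelpers.core import hookenv


def signed_certificate_key():
    unit = hookenv.remote_unit()                    # e.g. 'mysql/0'
    # Yields 'mysql_0_signed_certificate' instead of a single shared
    # 'mysql_signed_certificate' for the whole service.
    return '{0}_signed_certificate'.format(unit.replace('/', '_'))
```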
[21:23] Yep, same events signalled; the difference is what keypair it's generating/syndicating
[21:24] lazyPower: entirely
[21:27] lazyPower, mbruzek: this would then allow layer-tls to be extended to set the client.{cert,key} in unitdata, and other relating charms to be able to get it
[21:27] bdx that sounds awesome
[21:30] totally, props to lazyPower for the needed bump in the right direction!
[21:38] I'm just operating the rudder :)
[21:39] Yes.... a lazy, yet powerful position. very fitting
[23:44] lazyPower, mbruzek: it's rough, but getting close I think
[23:44] lazyPower, mbruzek: here is interface-tls-gen -> https://github.com/jamesbeedy/interface-tls-gen/blob/master/provides.py
[23:45] lazyPower, mbruzek: and here is how I am looking at integrating it into layer-tls -> https://github.com/jamesbeedy/layer-tls/blob/add_tls_gen/reactive/tls.py#L224-L266
[23:46] I'm not sure the duplication of this method body is required
[23:46] * lazypower will need to look closer later
[23:46] oh.... so that's what I had a question about ....
[23:47] I'm not sure I need to wrap the create_certificates in the @when('tls-gen.key.cert.requested')
[23:47] ok, thanks
[23:48] bdx it may be required, I'm not super focused on the review
[23:48] * lazypower will look in the am
[23:48] I'm currently teasing apart some interface code from the ETCD charm, and refactoring
[23:50] np, thanks
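A loose sketch of the idea discussed above: layer-tls reacting to a request from the tls-gen interface, generating a shared client keypair, and parking it in unitdata so other handlers in the charm can retrieve it. The 'tls-gen.key.cert.requested' state and the 'tls.client.certificate'/'tls.client.key' key names come from the conversation and linked branches; create_client_certificate() and the 'tls.client.available' state are placeholders, not the real implementation.

```python
# Hypothetical layer-tls extension: handle a request signalled by the
# tls-gen interface and stash the resulting client keypair in unitdata.
from charms.reactive import when, set_state
from charmhelpers.core import unitdata


@when('tls-gen.key.cert.requested')
def generate_client_keypair(tls_gen):
    # tls_gen is the interface endpoint instance (unused in this sketch).
    cert, key = create_client_certificate()         # placeholder generator
    db = unitdata.kv()
    db.set('tls.client.certificate', cert)
    db.set('tls.client.key', key)
    set_state('tls.client.available')                # illustrative state name


def create_client_certificate():
    # Placeholder: the real layer would call out to its certificate
    # generation tooling (e.g. easy-rsa/openssl) to produce cert and key.
    raise NotImplementedError('certificate generation omitted in this sketch')
```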