[11:31] <jamespage> gnuoy, do you have a few minutes for https://code.launchpad.net/~james-page/charm-helpers/optimize-dkms-headers/+merge/282927 ?
[11:31] <jamespage> gnuoy, will help towards https://git.launchpad.net/~james-page/charms/+source/openstack-on-lxd-bundle/tree/
[11:32] <gnuoy> looking
[11:32] <jamespage> gnuoy, btw I have an entire openstack cloud running on my laptop under LXD
[11:32] <jamespage> including overlay networks, multiple nics and nested KVM...
[11:32] <gnuoy> inc the gateway ?
[11:32] <jamespage> yikes!
[11:32] <gnuoy> fmd
[11:32] <gnuoy> that's fantastic
[11:33] <jamespage> yeah I said something similar
[11:33] <jamespage> gnuoy, a few oddities - I'm using the ZFS backend for LXD - but ceph no like that with direct io enabled
[11:34] <gnuoy> the two wise Chrises will have that fixed in two shakes of a lamb's tail, I'm sure
[11:35] <jamespage> gnuoy, yeah
[11:35] <jamespage> gnuoy, anyways...
[11:35] <gnuoy> approved
[11:35] <jamespage> gnuoy, awesome
[11:47] <jamespage> gnuoy, how are we looking for end-of-the-month?
[11:48] <gnuoy> jamespage, early to say. We are feature frozen and tracking here: https://docs.google.com/spreadsheets/d/1K1iR2-HlVEePsG_wOo6_zlL0d4ZDiBh3VaiLsb7qbXc/edit
[14:49] <frobware> jamespage, is it possible to poke around the dellstack setup? I wanted to understand the dns-nameserver snafu we saw in /etc/network/interfaces. We ran into this again last week, but on 1.25, which has since been fixed, but I cannot repro this on my local setup.
[14:49] <jamespage> frobware, possibly but thedac did tear everything down
[14:50] <jamespage> frobware, the only diff we could see was that I built on top of 14.04 and he was building on 15.10 - which would mean different go versions....
[14:51] <frobware> jamespage, difficult to understand why go would make the difference unless a later go (15.10) uses its own resolver library/implementation...
[14:51] <jamespage> frobware, any compiler version conditional logic in the codebase?
[14:52] <frobware> jamespage, not sure. but, if this happens with a maas node deploy that takes go/juju out of the equation. curtin/maas version combo?
[14:54] <frobware> jamespage, but I will try a maas 1.9.0 install on 15.10 for completeness.
[14:54] <jamespage> frobware, well that could be quite possible
[14:54] <jamespage> frobware, oh the MAAS install was on 14.04 always
[14:55] <jamespage> just the juju build was different
[14:55] <frobware> jamespage, oh. ok.
[14:55] <frobware> jamespage, that's still worth trying anyway. a little quicker for me than maas on 15.10
[15:00] <frobware> jamespage, do you know if the bootstrap node was trusty or wily?
[15:07] <jamespage> frobware, trusty
[15:07] <frobware> jamespage, thx
[17:09] <bdx> mbruzek, lazyPower: I want to run something by you concerning layer-tls ..... After giving a lot of thought to layer-tls, I feel like I might have initially taken the wrong impression about how it could be used and implemented ...
[17:10] <bdx> mbruzek, lazyPower: initially when looking over k8s.py I was struck with the idea to use layer-tls as a supporting layer to web applications to help generate certs and keys, and get them in the right places
[17:14] <bdx> mbruzek, lazyPower: I am now feeling like layer-tls would/could be better/more efficiently used if it was deployed as a standalone charm, and other services/charms that need certs could relate to it and get their cert/keys generated and passed to them via unitdata
[17:16] <bdx> mbruzek, lazyPower: I feel like the latter is how you were intending it to be used ... what are your thoughts on this?
[17:45] <mbruzek> bdx: That is an interesting idea that I had not thought of.  The current layer-tls charm does not pass around the private keys, so I like that part.  If you had a separate charm the keys would have to be passed back to the units and in theory could be intercepted
[17:46] <mbruzek> bdx: We have 2 charms consuming the tls layer and that seems to work, but if it does not work for your case we can iterate on the design.
[17:58] <bdx> mbruzek, no, it works, I just feel like having a centralized ca that issued the certs out to requesters would be far more lightweight and more easily consumable. Is this not what you were going for?
[18:06] <bdx> mbruzek: other critical data is passed via relation ...
[18:11] <lazypower> bdx That makes sense in a lot of ways
[18:11] <lazypower> bdx And I had considered that, paring this down and co-locating the CA's with the model controllers
[18:12] <lazypower> bdx using relations to the CA pki on the mc, you could then relate all over your model, and get self signed certificates supporting whatever endpoints react to the interface tls.available, i think it warrants some later investigation
[18:13] <lazypower> bdx as it stands, we only have a single relation, which is the peering relationship to exchange the csr/keys. It would be cool to see that logic teased apart into a provides/requires role and then refactor peering to use the common code (lib?)
[18:16] <bdx> lazyPower: my thoughts exactly .... I can't help but think that it would be overkill to include/install that layer on every openstack service, or every webapp .... just seems like a centralized ca would really sweeten the deal here
[18:16] <lazypower> bdx: we <3 contributions - you can start with just the tls layer and the tls interface - all the breadcrumbs you really need are right there
[18:17] <lazypower> bdx: bonus points if you bundle up some examples once you have the CA working as a stand alone service
[18:17] <lazypower> we're about to move into the next focus area of log infrastructure and backups
[18:19] <bdx> lazyPower, mbruzek: I am currently trying to redesign our webapp deployment process to be juju deployed, we have a bunch of django/rails webapps with nginx front ends that all need certs and keys, the tls-layer will be a crucial part of this, so I think I can justify the time
[18:19] <bdx> lazyPower: ^^Nice, thats definitely needed
[18:20] <bdx> lazyPower, mbruzek: thanks for your insight on this
[18:20] <lazypower> bdx anytime :) we'll happily lend a hand where we can and do some CR along the way
[18:28] <asanjar> kwmonroe: happy new year
[18:28] <asanjar> lazypower: you too :)
[18:28] <lazypower> asanjar \o/
[18:28] <lazypower> happy new year asanjar
[18:29] <asanjar> not to immutable mbruzek
[18:29] <lazypower> he's gone to immutable lunch
[18:30] <asanjar> then immutable bathroom run
[18:31] <lazypower> oi
[18:33] <asanjar> lazypower: I will send you a meeting invite for sometime next week, I'd like to discuss the work you have done with Juju and Docker
[18:33] <lazypower> asanjar thats crunch week for me, final prep for belgium
[18:34] <lazypower> but i can squeeze in 30 minutes or so on tuesday
[18:34] <asanjar> lazypower: then I'll see after Belgium
[18:35] <lazypower> ok, whats the focus area?
[18:35] <lazypower> launching big data containers?
[18:37] <asanjar> lazypower: yeah.. trying to convince the bigtop community to use Juju instead of vagrant and puppet to manage their docker containers
[18:37] <lazypower> asanjar - If their containers are already baked, and you have say a docker-compose file - send me that and i'll send you a charm to deploy their stack
[18:39] <asanjar> lazypower: will do,
[19:00]  * aisrael waves to asanjar 
[19:26] <mbruzek> Hey asanjar I just got back from lunch.  How are you doing?
[20:05] <asanjar> hi aisrael Happy New Year, how are you my friend
[20:06] <asanjar> mbruzek: miss me :)
[20:10] <aisrael> asanjar: Very good, thanks, and you?
[20:31] <bdx> mbruzek, lazyPower: am I on the right track here -> https://github.com/jamesbeedy/interface-tls/blob/add_provides/provides.py ?
[20:31] <mbruzek> bdx looking
[20:38] <mbruzek> bdx: It looks like a great start!  I like this approach too.
[20:39] <mbruzek> bdx: I think you might have the term "service" overloaded here.  "We expect multiple, separate services to be related.
[20:39] <mbruzek> No two services should share the same key, cert."
[20:41] <mbruzek> Unless I don't understand what you meant, this would be units.  $ juju deploy -n 3 mysql
[20:41] <bdx> mbruzek: exactly, and we would want all 3 units to have the same client key right?
[20:41] <mbruzek> mysql/0, mysql/1, mysql/2  <- Those are units, "mysql" is the service
[20:42] <bdx> mbruzek, exactly
[20:42] <mbruzek> bdx: Yes the units _could_ have the same client credentials, but what if each unit needs their own cert, key
[20:43] <bdx> mbruzek: yea, I've been going back and forth examining use cases ...
[20:43] <mbruzek> service = hookenv.remote_service_name() ...
[20:43] <mbruzek> key = '{0}_signed_certificate'.format(service)
[20:44] <mbruzek> This would give you one signed certificate per _service_ I think you want one per _unit_
[20:45] <bdx> mbruzek: take for instance, 3 nginx frontends behind an haproxy, if nginx[0] dies, we don't want to have to re-add the cert to our cache to be able to access the site again
[20:46] <mbruzek> bdx: That is true, but in the Kubernetes case each unit has its own api server that needs a key and cert.  I *think* they have to (should) be different certs and keys than the other servers
[20:47] <mbruzek> haproxy is a special case, in that case would haproxy request the cert/key ?
[20:47] <bdx> in the context of per unit key,cert pairs nginx[1] would need to prompt users to re-auth against the different cert,key pair
[20:47] <bdx> ok
[20:48] <bdx> mbruzek: I'm not sure, a use case I was just looking at involves all web front ends having the same client key,certs
[20:48] <bdx> behind haproxy
[20:49] <bdx> I'm sure there are other ways
[20:49] <mbruzek> bdx: In that scenario I would have haproxy request the key/cert. In the swarm case different cert/keys are generated for each unit.
[20:49] <bdx> mbruzek: entirely
[20:50] <bdx> mbruzek: ok, I'll revise my work per ^
[20:51] <bdx> mbruzek: this will highly impact how tls is implemented/used by the openstack service charms
[20:51] <mbruzek> I am happy to iterate on this.  We could build the meaty methods that do the generation and discuss use cases for certs and keys later?
[20:51] <mbruzek> bdx: Yes if we get a good solution everyone will want to use it.
[20:52] <bdx> mbruzek: totally, but in the instance of HA openstack service endpoints and proxying to them
[20:53] <bdx> mbruzek: oooh, nm, it looks like individual (key,cert) is generated per unit for the same service if using ssl
[20:54] <bdx> currently
[21:16] <lazypower> bdx what about 2 interfaces
[21:16] <lazypower> bdx one that distributes "shared cert/key" and one that does per-unit cert/key?
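[Editor's note: the two-interface split lazypower suggests can be modeled as two issuing policies over the same CA. A minimal sketch in plain Python; the function and CN format are hypothetical, not part of layer-tls.]

```python
def issue(ca_name, units, shared=False):
    """Map each related unit to the certificate identity it should receive.

    shared=True models the "shared cert/key" interface (haproxy-style
    frontends all presenting one identity); shared=False models the
    per-unit interface (kubernetes/swarm-style, one keypair per unit).
    """
    if shared:
        service = units[0].split('/')[0]
        cn = '{0}.issued-by.{1}'.format(service, ca_name)
        return {u: cn for u in units}
    return {u: '{0}.issued-by.{1}'.format(u.replace('/', '_'), ca_name)
            for u in units}
```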
[21:17] <bdx> lazyPower: great idea
[21:23] <bdx> lazyPower, mbruzek: I feel it would mimic what the current layer-tls does with 'tls.server.certificate', to 'tls.client.certificate', and 'tls.client.key' ...?
[21:23] <bdx> errr current interface
[21:23] <lazypower> Yep, same events signalled - the difference is which keypair it's generating/syndicating
[21:24] <bdx> lazyPower: entirely
[21:27] <bdx> lazyPower, mbruzek: this would then allow layer-tls to be extended to set the client.{cert,key} in unitdata, and other relating charms to be able to get it
[21:27] <mbruzek> bdx that sounds awesome
[21:30] <bdx> totally, props to lazyPower for the needed bump in the right direction!
[21:38] <lazypower> I'm just operating the rudder :)
[21:39] <bdx> Yes.... a lazy, yet powerful position. very fitting
[23:44] <bdx> lazyPower, mbruzek: its rough, but getting close I think
[23:44] <bdx> lazyPower, mbruzek: here is interface-tls-gen -> https://github.com/jamesbeedy/interface-tls-gen/blob/master/provides.py
[23:45] <bdx> lazyPower, mbruzek: and here is how I am looking at integrating it into layer-tls -> https://github.com/jamesbeedy/layer-tls/blob/add_tls_gen/reactive/tls.py#L224-L266
[23:46] <lazypower> I'm not sure the duplication of this method body is required
[23:46]  * lazypower will need to look closer later
[23:46] <bdx> oh.... so thats what I had a question about ....
[23:47] <bdx> I'm not sure I need to wrap the create_certificates in the @when('tls-gen.key.cert.requested')
[23:47] <bdx> ok, thanks
[23:48] <lazypower> bdx it may be required i'm not super focused on the review
[23:48]  * lazypower will look in the am
[23:48] <lazypower> i'm currently teasing apart some interface code from the ETCD charm, and refactoring
[23:50] <bdx> np, thanks