[06:18] <pitti> hello all
[06:19] <pitti> I read https://juju-docs.readthedocs.org/en/latest/expose-services.html and https://jujucharms.com/docs/stable/authors-intro and two existing charms, but I can't figure this out:
[06:19] <pitti> how/where do I declare the ports that "juju expose" should open?
[06:19] <pitti> with my current charm, juju expose doesn't do anything
[06:20] <pitti> but nowhere is it documented where/how to tell it what to open
[06:22] <pitti> also, how is that actually implemented? my rabbitmq-server/0 has "open-ports: 5672/tcp" in juju status, but that's in none of its nova secgroups
[06:23] <pitti> err, sorry, I didn't expose it; it's in the juju-<env>-<machinenumber> rules as expected
[06:24] <pitti> so that still leaves the question how/where to declare that?
[06:33] <pitti> ah, I found it -- https://jujucharms.com/docs/stable/authors-hook-environment#open-port
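For context on the answer pitti found: a charm declares its ports imperatively, by calling the `open-port` hook tool from a hook such as `start` or `config-changed`; `juju expose` then opens whatever the charm has declared. A minimal sketch of such a hook follows (the stub exists only so the sketch runs outside a real Juju hook context, where `open-port` is provided by Juju itself):

```shell
#!/bin/bash
# Sketch of a charm "hooks/start" script. Inside a real hook execution,
# Juju puts the open-port tool on the PATH; the stub below only makes
# this sketch runnable standalone and simply echoes what it would do.
if ! command -v open-port >/dev/null 2>&1; then
    function open-port { echo "open-port $1"; }
fi

# Declare the ports that "juju expose" should open for this service.
open-port 5672/tcp
```

Ports declared this way show up under `open-ports:` in `juju status`, matching the `5672/tcp` pitti saw on rabbitmq-server/0.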
[12:58] <gnuoy> coreycb, I'll take a look at   https://pastebin.canonical.com/133930/ after the call
[12:58] <coreycb> gnuoy, thanks
[15:18] <Syed_A> Hello folks, can anyone point out to me how the quantum-gateway charm interacts with OpenStack Keystone? I don't see any relation between the two.
[15:37] <Syed_A> In my OpenStack environment, instances are failing to get metadata from nova-api-metadata, and apparently the reason is that the metadata agent is trying to contact Keystone on localhost instead of AUTH_URL.
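The symptom Syed_A describes usually traces back to the metadata agent's configuration on the gateway node: on Icehouse-era deployments, `metadata_agent.ini` needs an explicit Keystone `auth_url` rather than the localhost default. A hypothetical fragment, with all hostnames and credentials illustrative:

```ini
; /etc/neutron/metadata_agent.ini (illustrative values, not from the log)
[DEFAULT]
; Point at the real Keystone endpoint, not localhost
auth_url = http://keystone.example.com:35357/v2.0
auth_region = RegionOne
admin_tenant_name = services
admin_user = neutron
admin_password = secret
; Where the agent proxies metadata requests to
nova_metadata_ip = nova-api.example.com
```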
[16:21] <beisner> gnuoy, still around?
[16:21] <gnuoy> I am
[16:23] <beisner> gnuoy, i know you're working on some amulet tests;  looking for your input on a set of MPs -- 2 reasons -- avoiding potential collisions, and getting your feedback.   fyi, coreycb will be doing a review this week on those, and i'm sure we'd both take your input as well.  ;-)  please & thanks
[16:23] <beisner> https://code.launchpad.net/~1chb1n/charm-helpers/amulet-ceph-cinder-updates/+merge/262013
[16:23] <gnuoy> sure
[16:23] <beisner> gnuoy, the related charm MPs are linked on the c-h MP
[16:23] <beisner> gnuoy, appreciate it!
[16:29] <beisner> coreycb, ah heck.  your mp amulet test failed re: the inflight rename of quantum-gateway to neutron-gateway  https://code.launchpad.net/~corey.bryant/charms/trusty/neutron-gateway/proxy-none/+merge/262892    i'll update those tests shortly.
[16:31] <coreycb> beisner, ok
[17:33] <Bialogs> lazyPower: Hey question about the Kubernetes bundle: Is there a reason why the master/etcd charms deploy in the root container instead of lxc? Is there a problem with deploying lxc with those charms?
[17:41] <dpm> hi all, I'd like to jujufy a personal Django project. What's the best way to get started? I looked at this a while ago, and there still seem to be several different django charms on the store. Is python-django (trusty) the one to use? And why is the django bundle still on precise?
[18:10] <dweaver> should I expect to be able to upgrade juju agents using upgrade-juju from 1.23.3 to 1.24.0?
[18:23] <lazyPower> Bialogs: ETCD has to be reachable from the other nodes, LXC networking is still a bit hinky
[18:24] <lazyPower> Bialogs: however it's safe to co-locate etcd on bootstrap unless you require additional resources/nodes
[18:25] <Bialogs> lazyPower: what is it about additional resources that would cause a problem with putting etcd on the bootstrap machine?
[18:25] <lazyPower> Bialogs: it doesn't necessarily make sense to deploy a cluster of 3 etcd machines on the same node
[18:26] <lazyPower> defeats the HA model
[18:36] <Bialogs> lazyPower: Thanks for the info
[18:37] <lazyPower> np Bialogs
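On the placement point lazyPower raises: a bundle can pin a service to a specific machine (or into a container) with a `to:` placement directive, which is how etcd would be co-located on the bootstrap node rather than scattered across containers on one host. A hypothetical bundle fragment, with the charm URL and unit count illustrative:

```yaml
# Hypothetical bundle fragment: co-locate a single etcd unit on the
# bootstrap node (machine 0). Running a 3-unit etcd "cluster" as lxc
# containers on the same physical node would defeat the HA model.
services:
  etcd:
    charm: cs:trusty/etcd
    num_units: 1
    to:
      - "0"
```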
[18:51] <beisner> gnuoy, coreycb - holding back the cinder-ceph amulet test mp, still some subordinate foo to resolve.
[18:56] <coreycb> beisner, ok
[19:00] <beisner> coreycb, the others are still re-testing, anticipate a-ok on those.
[19:17] <Bialogs> lazyPower: The bundle deployed correctly except for one thing: it's using the FQDN instead of the IP address, and for whatever reason the machine that has the master node can't be looked up by name
[19:17] <lazyPower> Bialogs: maas provider?
[19:17] <Bialogs> error: couldn't read version from server: Get http://ajunta-pall.maas:8080/api: dial tcp: lookup ajunta-pall.maas: no such host
[19:18] <Bialogs> lazyPower: Also, it created a new machine when there was one available in MAAS, and it got a "no available machine" error. I had to manually delete that machine and service
[19:27] <Bialogs> lazyPower: although this error is occurring repeatedly, I killed it and continued with the instructions... created a pod and ran it
[19:27] <Bialogs> It still worked
[19:54] <beisner> coreycb, neutron-gateway tests update for charm name change @ https://code.launchpad.net/~1chb1n/charms/trusty/neutron-gateway/next-amulet-update-rename/+merge/263006
[20:01] <coreycb> beisner, I just had one comment
[20:03] <beisner> coreycb, i wondered about that, but it looks like the oldest supported release (precise-icehouse) has python-neutronclient in the cloud archive, no?
[20:03] <beisner> coreycb, looking at http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/precise-updates/icehouse/main/binary-amd64/Packages
[20:03] <coreycb> beisner, ah ok so that must be from pre-icehouse
[20:03] <coreycb> beisner, lgtm then
[20:04] <beisner> coreycb, yep i think we're finally clear of that  \o/
[20:05] <beisner> coreycb, thanks!
[20:09] <beisner> coreycb, ok so the others re-tested a-ok and are more officially on to you.  tia!
[20:10] <coreycb> beisner, ok cool. I'll probably have to look at those tomorrow.
[20:10] <beisner> coreycb, totally not something you wanna start near eod
[20:10] <coreycb> beisner, lol
[21:24] <Odd_Bloke> aisrael: Thanks for looking at the ubuntu-repository-cache stuff!
[21:24] <aisrael> Odd_Bloke: Happy to help!
[21:25] <Odd_Bloke> aisrael: I think rcj knows something about the testing problems we've seen (and he's in the same room as me, as we're sprinting), so I'll get back to you on that.
[21:25] <aisrael> Odd_Bloke: Excellent, thanks! I'd love to track that down and make sure it's fixed, if it's an amulet issue.
[21:26] <Odd_Bloke> aisrael: Our response might just be "yeah, amulet is broken". ;)
[21:26] <aisrael> Odd_Bloke: A perfectly reasonable response. :D If so, the good news is that there's a good example of how to recreate the failure.
[21:33] <Odd_Bloke> aisrael: We need to test single units with sync-on-start; it's only really optional for multi-unit deployments.
[21:36] <aisrael> Odd_Bloke: Understood.