[04:03] <redelmann_> mh... is it safe to use block-storage-broker from charms in trusty?
[04:03] <redelmann_> easy as cloning to /trusty/ and deploying?
[04:05] <redelmann_> well, there are a lot of forks of the official block-storage-broker
[04:06] <redelmann_> I'll assume that it's safe
[06:18] <stub> redelmann: bsb works fine in trusty. I don't know if there is a reason it hasn't been promulgated.
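The "clone into trusty/ and deploy" approach mentioned above can be sketched as follows. This is a hedged illustration, not confirmed from the conversation: the local repository path is made up, and the branch URL assumes the promulgated charm lives at `lp:charms/block-storage-broker`.

```shell
# Illustrative layout for deploying a charm locally under the trusty series.
# REPO is a hypothetical local charm repository path.
REPO="$PWD/charms"
mkdir -p "$REPO/trusty"

# Clone the charm into the trusty series directory, then deploy from the
# local repository. Guarded so this is a no-op where bzr/juju are absent.
command -v bzr >/dev/null && \
    bzr branch lp:charms/block-storage-broker "$REPO/trusty/block-storage-broker" || true
command -v juju >/dev/null && \
    juju deploy --repository="$REPO" local:trusty/block-storage-broker || true
```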
[06:26] <gnuoy`> jamespage, I have 3 mps to support dvr if you get a sec http://paste.ubuntu.com/10716710/
[06:56] <gnuoy`> jamespage, and another unrelated small one if you have time https://code.launchpad.net/~gnuoy/charms/trusty/keystone/token-expiry/+merge/254870
[07:10] <jamespage> gnuoy`, keystone reviewed and landed
[07:10] <jamespage> I also synced up the kilo template with ed's pki stuff
[07:10] <gnuoy`> jamespage, thanks!
[07:11] <jamespage> gnuoy`, nice to have a unit test for the charm-helper change
[07:11] <jamespage> ?
[07:12] <gnuoy`> jamespage, sure, I'll do that now
[07:13] <jam> hey jamespage. I had a "quick" question for you
[07:13] <jamespage> jam: fire away
[07:13] <jam> elmo had mentioned that he'd really like cgroup QoS for the Openstack charms
[07:13] <jam> is that something you're part of ?
[07:14] <jam> jamespage: I think he CC'd you on the last email I had with him.
[07:15] <jamespage> jam: he did
[07:32] <jamespage> gnuoy`, neutron-openvswitch has lint and test errors/failures
[07:32] <gnuoy`> urgh, sorry about that. I could have sworn I fixed those up
[07:39] <jamespage> gnuoy`, one bigish comment on your neutron-api change
[07:39] <jamespage> it looks too complex
[07:39] <jamespage> (see MP for details)
[07:39] <jamespage> all you want to do is pass the data from keystone down to neutron-openvswitch right?
[07:39] <jamespage> so the remapping looks surplus and inefficient
[07:40] <gnuoy`> Well, I've had mps bounced before for not being explicit about what I'm expecting and setting and acting as a blind proxy
[07:41] <gnuoy`> jamespage, I'll take a look. I've updated the charm helpers mp
[07:41] <jamespage> gnuoy`, oh wait I see
[07:43] <gnuoy`> jamespage, I think your way would leave the keystone settings in place even if the keystone <-> neutron-api relation was removed
[07:43] <gnuoy`> which I think is wrong
[07:43] <jamespage> gnuoy`, you're right
[07:44] <jamespage> gnuoy`, also the context does some remapping from what keystone provides
[07:44] <jamespage> specifically tenant, username and password
[07:44] <gnuoy`> jamespage, hmm, that may not be useful tbh
[07:44] <gnuoy`> the remapping
[07:44] <jamespage> so to re-use the same context in openvswitch, you need to do what you've done
[07:44] <gnuoy`> ah, yes
[07:45] <gnuoy`> that's why
[07:45] <gnuoy`> yesterday was sooo long ago
[07:45] <jamespage> gnuoy`, can you add auth_protocol to the list as well please
[07:45] <jamespage> just in case we need that sometime
[07:45] <gnuoy`> jamespage, sure. I need to step out for 30mins but will do it when I get back
[07:45] <jamespage> (I have it in my head that's required for kilo but I'm prob wrong)
[07:45] <jamespage> gnuoy`, ack
[09:00] <gnuoy`> ah, sure
[09:35] <jamespage> gnuoy`, https://code.launchpad.net/~james-page/charms/trusty/rabbitmq-server/bug-1439085/+merge/254882
[09:41] <gnuoy`> jamespage, approved
[10:01] <philip_stoev> What is the canonical way for obtaining the information about a juju interface endpoint? For example, if I have a mysql charm, how can I see, using the command line, the username and password that have been generated?
[10:02] <philip_stoev> The only solution I could find was to use juju debug-hooks, but it does not work for me -- relation-get is not found
[10:19] <stub> philip_stoev: You can use 'juju run --unit=foo/0 relation-ids', 'juju run --unit=foo/0 relation-get'
[10:20] <stub> philip_stoev: Or if your fingers get tired, install the juju-relinfo package from ppa:stub/juju to give yourself the 'juju relation-get' and 'juju relation-set' commands
[10:37] <philip_stoev> thank you!
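stub's `juju run` approach can be sketched like this. The unit name and the `db` relation name are illustrative assumptions (they depend on the actual mysql deployment), and the `relation-get -r <id> - <unit>` form dumps all settings the named unit has set on that relation.

```shell
# Hypothetical unit name for a deployed mysql service.
UNIT="mysql/0"

# Look up a relation id for the (assumed) "db" relation, then read the
# settings the unit published on it. Guarded: no-op where juju is absent.
command -v juju >/dev/null && {
    RID=$(juju run --unit="$UNIT" "relation-ids db" | head -n1)
    juju run --unit="$UNIT" "relation-get -r $RID - $UNIT"
} || true
```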
[11:29] <gnuoy`> jamespage, http://paste.ubuntu.com/10716710/ are ready for review if you have a moment
[11:29] <jamespage> gnuoy`, going to have lunch - will do them when I get back
[11:29] <jamespage> gnuoy`, zmq is testing well now
[11:29] <gnuoy`> \o/
[11:30] <jamespage> gnuoy`, just polishing things
[11:30] <jamespage> gnuoy`, well for nova and neutron
[11:30] <jamespage> cinder appears quite broken
[11:31] <jamespage> gnuoy`, nested kvm appears a little happier now as well
[11:31] <jamespage> no drops at all this morning
[11:32] <gnuoy`> jamespage, same here, I've done multiple big deploys this morning and no drops
[11:32] <jamespage> gnuoy`, did you see my comment re removing the downed rmq broker?
[11:32] <jamespage> it was first in all the lists so was causing lag
[11:33] <gnuoy`> jamespage, could we have tuned the timeout value?
[11:33] <jamespage> maybe
[11:33]  * jamespage looks
[11:34] <gnuoy`> dosaboy, I was planning on getting some of the le branches merged to next this afternoon. Are you working on them atm?
[11:47] <dosaboy> gnuoy: lets discuss ;)
[12:44] <jamespage> gnuoy, charmhelper merged - looking at charms now
[12:54] <jamespage> gnuoy, all done and merged - thanks!
[12:56] <gnuoy> jamespage, fantastic. thanks for all your review/merge efforts
[12:56] <jamespage> gnuoy, I'm pretty happy with this lot - http://paste.ubuntu.com/10718125/
[12:57] <jamespage> I need to look at glance and cinder still
[12:57] <jamespage> + ceilometer
[12:57] <jamespage> but nova/neutron is all good for 0mq
[12:57] <gnuoy> jamespage, ack, I'll take a look
[12:57] <jamespage> gnuoy, ta
[12:58] <jamespage> gnuoy, you can run that against the kilo staging ppa
[12:58] <jamespage> ppa:ubuntu-cloud-archive/kilo-staging
[12:58] <jamespage> but keystone needs a hack right now with the next branch as it's not at b3 yet in the archive (waiting on MIRs)
[13:08] <gnuoy> jamespage, the neutron-ovs one needs rebasing
[13:09] <gnuoy> I would guess compute, api and gateway probably will too
[13:12] <jamespage> urgh - Ok I'll check
[13:12] <jamespage> I suspect I just hosed myself by landing your bits first....
[13:21] <jcastro> Odd_Bloke, any luck with your problem?
[14:21] <jamespage> gnuoy, they should all be good now
[14:21] <gnuoy> jamespage, thanks. Do you have a bundle I can prod at?
[14:23] <jamespage> gnuoy, oct bundles/0mq
[14:23] <gnuoy> ack
[14:28] <Odd_Bloke> jcastro: I haven't really looked at it; I was just playing around in downtime during test runs.
[14:28]  * jcastro nods
[14:54] <lazyPower> If anyone missed the charmers meeting and wanted to catch up - the video is here: https://www.youtube.com/watch?v=99iiQCypEGI
[15:00] <apuimedo> cool having the meetings recorded :-)
[15:01] <mbruzek> yeah +1 on the recorded meetings
[15:11] <mbruzek> tvansteenburgh: If I use the jenkins test runner will I get the new isolated containers on Jenkins?
[15:12] <tvansteenburgh> yes
[15:12] <mbruzek> across all clouds?
[15:12] <tvansteenburgh> yes
[15:12] <mbruzek> thanks
[15:29] <gnuoy> jamespage, I've landed your 0mq branches. Thanks!
[15:37] <jamespage> gnuoy, awesome - thank you
[16:12] <ennoble> Does the action feature flag work with 1.22? I can't seem to get actions to work. I have JUJU_DEV_FEATURE_FLAG="action" set, destroyed my environment, re-created the environment, and then juju help action or juju action both return "ERROR unknown command or topic for action"
[16:14] <jw4> ennoble: yes it's in 1.22
[16:15] <jw4> ennoble: the envar is plural... JUJU_DEV_FEATURE_FLAGS
[16:15] <ennoble> any idea why I can't quite get it to work... the user feature flag works (jes) and enables juju help user
[16:16] <ennoble> jw4: is setting the environment variable enough or do I need to destroy and re-create the environment too?
[16:17] <jw4> ennoble: just setting the feature flag should be sufficient
[16:17] <ennoble> jw4: thanks, it's working; I appreciate it
[16:17] <jw4> ennoble: you're welcome :)
[16:18] <ennoble> jw4: it will become the default in 1.23?
[16:18] <jw4> ennoble: yes
[16:52] <marcoceppi_> ennoble: 1.23-beta1 is out in the ppa:juju/devel repository if you wanted to try it out
[16:53] <marcoceppi_> ennoble: there are also docker containers that have the dev release in them if you want to try them out in isolation
[17:45] <ennoble> Is there a way to run the juju orchestration server on the same system that's running MaaS? I don't want to use one of the machines MaaS is managing for that functionality, but I assume if I create another non-MaaS environment that won't work.
[17:55] <marcoceppi_> ennoble: you can create a KVM on the MAAS machine and enlist it in maas then have the juju bootstrap node deployed to it
[17:59] <ennoble> thanks marcoceppi_, that will probably do the trick
[18:55] <skay> is it possible to use docker and mount both my .juju directory and a local charm that I want to test? https://github.com/juju-solutions/charmbox/blob/master/README.md#using-charmbox-with-existing-juju
[18:56] <skay> it looks like I can only do one or the other
[18:58]  * lazyPower reads scrollback
[18:59] <lazyPower> skay: oh you sure can!
[18:59] <cory_fu> skay: You can.  You'll want to mount the trusty directory under /home/ubuntu/trusty
[18:59] <lazyPower> skay: you pass the -v to volume mount
[18:59] <lazyPower> let me fetch the readme  1 sec
[18:59] <skay> lazyPower: I can pass more than one -v?
[18:59] <lazyPower> https://github.com/whitmo/jujubox/blob/master/charmbox.md
[18:59] <lazyPower> sure can!
[18:59] <cory_fu> You just can't mount the whole JUJU_REPOSITORY directory directly, since it will overwrite the .juju folder
[18:59] <skay> lazyPower: I linked the readme up there!
[19:00] <lazyPower> skay: the charmbox has an illustration of 2 -v mounts
[19:00] <lazyPower> slightly different than the base jujubox
[19:00] <skay> lazyPower: okiedokie. I am pretty excited if I can get this to work.
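The two `-v` mounts lazyPower and cory_fu describe can be sketched as below. The image name (`jujusolutions/charmbox`) and the host paths are assumptions; the point, per cory_fu, is to mount the series directory at `/home/ubuntu/trusty` rather than the whole JUJU_REPOSITORY, so the container's `.juju` isn't shadowed.

```shell
# Build the docker invocation with one -v per mount: ~/.juju for juju
# state, and the trusty series dir for the charm under test.
CMD="docker run -it -v $HOME/.juju:/home/ubuntu/.juju -v $HOME/charms/trusty:/home/ubuntu/trusty jujusolutions/charmbox"

# Echoed rather than executed so this stays a dry run; drop the echo
# to actually start the container.
echo "$CMD"
```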
[19:03] <skay> lazyPower: which version of docker does the readme assume? -f isn't a valid flag
[19:03] <skay> I've got 1.2.0 build fa7b24f
[19:03] <lazyPower> 1.3 plus
[19:03] <lazyPower> I'm personally running 1.5 provided from the docker PPA
[19:04] <lazyPower> 1.2.x is really old... I believe we're offering 1.3.1 in distro in trusty
[19:04] <skay> I thought I was using the ppa, but I guess I screwed it up somehow
[19:04] <lazyPower> thanks for pointing that out though, i'll open a bug about that
[19:04] <skay> and I updated from trusty to utopic a few weeks ago
[19:06] <lazyPower> https://github.com/whitmo/jujubox/issues/17
[19:16] <skay> I've fallen down a weird rabbit hole
[19:17] <lazyPower> skay: uh oh - whats going on?
[19:17] <skay> lazyPower: it's not any of you, it's me. I can't figure out how to upgrade docker. I must have installed it in some bizarro way
[19:18] <skay> lazyPower: I tried uninstalling it with apt-get, then reinstalling, and the system thinks I have 1.5.x installed, but when I run it, it still tells me it's the other version
[19:19] <skay> I'm quite sure once I get that settled I'll be able to do things. I ran a test with the other box successfully
[19:23] <skay> lazyPower: which ppa are you using?
[19:24] <lazyPower> skay: deb https://get.docker.com/ubuntu docker main
[19:25] <lazyPower> skay: have you tried 'which docker' to see where the bin in $PATH is overriding the ppa provided docker binary?
[19:26] <skay> lazyPower: I did, and I wasn't sure which package installed it. One of my friends walked me through using dpkg -S against the binary to figure out which package it belonged to. culprit was docker.io
[19:26] <lazyPower> thats the distro package, lxc-docker is the ppa package
[19:26] <lazyPower> yeah, those two don't play nice together, at all
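The diagnosis and fix from this exchange can be sketched as follows. The repo line is the one lazyPower pasted; the apt-get steps are left as comments (they need root and network) and the package names come straight from the conversation (`docker.io` is the distro package, `lxc-docker` the docker.com one).

```shell
# The apt source lazyPower is using, as quoted above; it would be
# written to a file such as /etc/apt/sources.list.d/docker.list.
SOURCES_LINE="deb https://get.docker.com/ubuntu docker main"
echo "$SOURCES_LINE"

# The conflicting packages don't coexist, so remove one first:
# sudo apt-get remove docker.io
# sudo apt-get update && sudo apt-get install lxc-docker
#
# skay's debugging trick: find which package owns the binary in $PATH.
# dpkg -S "$(which docker)"
```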
[19:32] <ennoble> is there a way to prevent juju from freeing a MaaS machine after you destroy a service running on it?
[19:32] <ennoble> Juju seems overly eager to clean up and it's extremely annoying to have to wait 8 minutes for MaaS to re-provision the machine
[19:35] <ennoble> it seems like I should have to run juju destroy-machine or destroy-environment for the machine to be freed based on the documentation, but in practice it just happens immediately after the service is destroyed
[19:42] <skay> lazyPower: thanks, btw. I have it installed properly and was able to build the charmbox
[19:42] <lazyPower> \o/
[20:03] <skay> alrighty, I can run tests woohoo.
[20:06] <lazyPower> skay: glad we got you sorted :)
[20:09] <skay> today's goal is to add basenode support to block-storage-broker. I see that charmhelpers has an execd_preinstall helper for that, but I also see a lot of charms doing this in very many ways
[20:09] <skay> any reason for me NOT to use execd_preinstall?
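For context on `execd_preinstall`: the charm-helpers convention is that it runs any executable named `charm-pre-install` found under `exec.d/*` in the charm directory, which is how basenode hooks in. The sketch below sets up that layout; the charm path and script contents are illustrative, and the charm's install hook would call `charmhelpers.payload.execd.execd_preinstall()` to trigger it.

```shell
# Hypothetical charm directory with the exec.d layout execd_preinstall
# consumes: exec.d/<anything>/charm-pre-install, executable.
CHARM="$PWD/example-charm"
mkdir -p "$CHARM/exec.d/basenode"

cat > "$CHARM/exec.d/basenode/charm-pre-install" <<'EOF'
#!/bin/sh
# basenode setup would run here, before the install hook proper.
echo "basenode pre-install"
EOF
chmod +x "$CHARM/exec.d/basenode/charm-pre-install"
```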
[20:14] <blr> skay: the b-s-b charm already supports basenode
[20:14] <skay> blr: it does? I did something wrong then. it wasn't working in my environment
[20:15] <skay> blr: I don't see where it happens in the source. could you show me?
[20:47] <skay> this adds basenode support to block-storage-broker. https://code.launchpad.net/~codersquid/charms/precise/block-storage-broker/basenode-support
[20:47] <skay> I'm not sure it's good for a merge request
[21:29] <lazyPower> tvansteenburgh: for tomorrow - https://github.com/juju-solutions/bundletester/issues/15