[04:03] hm... is it safe to use block-storage-broker from charms in trusty?
[04:03] as easy as cloning to /trusty/ and deploying?
[04:05] well, there are a lot of forks of the official block-storage-broker
[04:06] I'll assume that it's safe
[06:18] redelmann: bsb works fine in trusty. I don't know if there is a reason it hasn't been promulgated.
[06:26] jamespage, I have 3 MPs to support DVR if you get a sec http://paste.ubuntu.com/10716710/
[06:56] jamespage, and another unrelated small one if you have time https://code.launchpad.net/~gnuoy/charms/trusty/keystone/token-expiry/+merge/254870
[07:10] gnuoy`, keystone reviewed and landed
[07:10] I also synced up the kilo template with ed's PKI stuff
[07:10] jamespage, thanks!
[07:11] gnuoy`, it would be nice to have a unit test for the charm-helper change
[07:11] ?
[07:12] jamespage, sure, I'll do that now
[07:13] hey jamespage. I had a "quick" question for you
[07:13] jam: fire away
[07:13] elmo had mentioned that he'd really like cgroup QoS for the OpenStack charms
[07:13] is that something you're part of?
[07:14] jamespage: I think he CC'd you on the last email I had with him.
[07:15] jam: he did
[07:32] gnuoy`, neutron-openvswitch has lint and test errors/failures
[07:32] urgh, sorry about that. I could have sworn I fixed those up
[07:39] gnuoy`, one biggish comment on your neutron-api change
[07:39] it looks too complex
[07:39] (see MP for details)
[07:39] all you want to do is pass the data from keystone down to neutron-openvswitch, right?
[07:39] so the remapping looks surplus and inefficient
[07:40] well, I've had MPs bounced before for not being explicit about what I'm expecting and setting, and for acting as a blind proxy
[07:41] jamespage, I'll take a look. I've updated the charm-helpers MP
[07:41] gnuoy`, oh wait, I see
[07:43] jamespage, I think your way would leave the keystone settings in place even if the keystone <-> neutron-api relation was removed
[07:43] which I think is wrong
[07:43] gnuoy`, you're right
[07:44] gnuoy`, also the context does some remapping from what keystone provides
[07:44] specifically tenant, username and password
[07:44] jamespage, hmm, that may not be useful tbh
[07:44] the remapping
[07:44] so to re-use the same context in openvswitch, you need to do what you've done
[07:44] ah, yes
[07:45] that's why
[07:45] yesterday was sooo long ago
[07:45] gnuoy`, can you add auth_protocol to the list as well please?
[07:45] just in case we need that sometime
[07:45] jamespage, sure. I need to step out for 30 mins but will do it when I get back
[07:45] (I have it in my head that it's required for kilo but I'm prob wrong)
[07:45] gnuoy`, ack
[09:35] gnuoy`, https://code.launchpad.net/~james-page/charms/trusty/rabbitmq-server/bug-1439085/+merge/254882
[09:41] jamespage, approved
=== natefinch-dinner is now known as natefinch
[10:01] What is the canonical way to obtain information about a juju interface endpoint? For example, if I have a mysql charm, how can I see, using the command line, the username and password that have been generated?
[10:02] The only solution I could find was to use juju debug-hooks, but it does not work for me -- relation-get is not found
[10:19] philip_stoev: You can use 'juju run --unit=foo/0 relation-ids' and 'juju run --unit=foo/0 relation-get'
[10:20] philip_stoev: Or if your fingers get tired, install the juju-relinfo package from ppa:stub/juju to give yourself the 'juju relation-get' and 'juju relation-set' commands
[10:37] thank you!
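[ed: For reference, a minimal sketch of the relation-inspection approach stub describes above, assuming a mysql service related to a wordpress service over a 'db' relation -- the service, unit, and relation names are illustrative:]

    # List the relation ids for mysql/0's 'db' relation (prints e.g. db:0)
    juju run --unit=mysql/0 'relation-ids db'

    # List the remote units participating in that relation
    juju run --unit=mysql/0 'relation-list -r db:0'

    # Dump everything mysql/0 published on the relation, including any
    # generated username/password -- read from the remote unit's side
    juju run --unit=wordpress/0 'relation-get -r db:0 - mysql/0'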
[11:29] jamespage, http://paste.ubuntu.com/10716710/ are ready for review if you have a moment
[11:29] gnuoy`, going to have lunch - will do them when I get back
[11:29] gnuoy`, zmq is testing well now
[11:29] \o/
[11:30] gnuoy`, just polishing things
[11:30] gnuoy`, well, for nova and neutron
[11:30] cinder appears quite broken
[11:31] gnuoy`, nested kvm appears a little happier now as well
[11:31] no drops at all this morning
[11:32] jamespage, same here, I've done multiple big deploys this morning and no drops
[11:32] gnuoy`, did you see my comment re removing the downed rmq broker?
[11:32] it was first in all the lists so was causing lag
[11:33] jamespage, could we have tuned the timeout value?
[11:33] maybe
[11:33] * jamespage looks
[11:34] dosaboy, I was planning on getting some of the le branches merged to next this afternoon. Are you working on them atm?
=== gnuoy` is now known as gnuoy
[11:47] gnuoy: let's discuss ;)
[12:44] gnuoy, charmhelpers merged - looking at charms now
[12:54] gnuoy, all done and merged - thanks!
[12:56] jamespage, fantastic. thanks for all your review/merge efforts
[12:56] gnuoy, I'm pretty happy with this lot - http://paste.ubuntu.com/10718125/
[12:57] I need to look at glance and cinder still
[12:57] + ceilometer
[12:57] but nova/neutron is all good for 0mq
[12:57] jamespage, ack, I'll take a look
[12:57] gnuoy, ta
[12:58] gnuoy, you can run that against the kilo staging ppa
[12:58] ppa:ubuntu-cloud-archive/kilo-staging
[12:58] but keystone needs a hack right now with the next branch as it's not at b3 yet in the archive (waiting on MIRs)
[13:08] jamespage, the neutron-ovs one needs rebasing
[13:09] I would guess compute, api and gateway probably will too
[13:12] urgh - OK, I'll check
[13:12] I suspect I just hosed myself by landing your bits first....
[13:21] Odd_Bloke, any luck with your problem?
[14:21] gnuoy, they should all be good now
[14:21] jamespage, thanks. Do you have a bundle I can prod at?
[14:23] gnuoy, oct bundles/0mq
[14:23] ack
[14:28] jcastro: I haven't really looked at it; I was just playing around in downtime during test runs.
[14:28] * jcastro nods
[14:54] If anyone missed the charmers meeting and wanted to catch up - the video is here: https://www.youtube.com/watch?v=99iiQCypEGI
[15:00] cool having the meetings recorded :-)
[15:01] yeah, +1 on the recorded meetings
[15:11] tvansteenburgh: If I use the jenkins test runner will I get the new isolated containers on Jenkins?
[15:12] yes
[15:12] across all clouds?
[15:12] yes
[15:12] thanks
[15:29] jamespage, I've landed your 0mq branches. Thanks!
[15:37] gnuoy, awesome - thank you
[16:12] Does the action feature flag work with 1.22? I can't seem to get actions to work. I have JUJU_DEV_FEATURE_FLAG="action", destroyed my environment, re-created the environment, and then juju help action or juju action both return "ERROR unknown command or topic for action"
[16:14] ennoble: yes, it's in 1.22
[16:15] ennoble: the env var is plural... JUJU_DEV_FEATURE_FLAGS
[16:15] any idea why I can't quite get it to work... the user feature flag works (jes) and enables help user
[16:16] jw4: is setting the environment variable enough or do I need to destroy and re-create the environment too?
[16:17] ennoble: just setting the feature flag should be sufficient
[16:17] jw4: thanks, it's working; I appreciate it
[16:17] ennoble: you're welcome :)
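[ed: A minimal sketch of the fix jw4 points out above -- the variable name is plural, and per the exchange there is no need to destroy and re-create the environment; the flag value is as given in the discussion:]

    # Wrong: singular name, silently ignored by juju
    # export JUJU_DEV_FEATURE_FLAG="action"

    # Right: plural name; takes effect without recreating the environment
    export JUJU_DEV_FEATURE_FLAGS="action"
    juju help action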
[16:18] jw4: it will become the default in 1.23?
[16:18] ennoble: yes
[16:52] ennoble: 1.23-beta1 is out in the ppa:juju/devel repository if you wanted to try it out
[16:53] ennoble: there are also docker containers that have the dev release in them if you want to try them out in isolation
[17:45] Is there a way to run the juju orchestration server on the same system that's running MaaS? I don't want to use one of the machines MaaS is managing for that functionality, but I assume if I create another non-MaaS environment that won't work.
[17:55] ennoble: you can create a KVM on the MAAS machine and enlist it in MAAS, then have the juju bootstrap node deployed to it
[17:59] thanks macroeppi_, that will probably do the trick
[18:55] is it possible to use docker and mount both my .juju directory and a local charm that I want to test? https://github.com/juju-solutions/charmbox/blob/master/README.md#using-charmbox-with-existing-juju
[18:56] it looks like I can only do one or the other
[18:58] * lazyPower reads scrollback
[18:59] skay: oh, you sure can!
[18:59] skay: You can. You'll want to mount the trusty directory under /home/ubuntu/trusty
[18:59] skay: you pass -v to volume mount
[18:59] let me fetch the readme, 1 sec
[18:59] lazyPower: I can pass more than one -v?
[18:59] https://github.com/whitmo/jujubox/blob/master/charmbox.md
[18:59] sure can!
[18:59] You just can't mount the whole JUJU_REPOSITORY directory directly, since it will overwrite the .juju folder
[18:59] lazyPower: I linked the readme up there!
[19:00] skay: the charmbox has an illustration of 2 -v mounts
[19:00] slightly different from the base jujubox
[19:00] lazyPower: okiedokie. I'll be pretty excited if I can get this to work.
[19:03] lazyPower: which version of docker does the readme assume? -f isn't a valid flag
[19:03] I've got 1.2.0 build fa7b24f
[19:03] 1.3 plus
[19:03] I'm personally running 1.5 provided from the docker PPA
[19:04] 1.2.x is really old... I believe we're offering 1.3.1 in the distro in trusty
[19:04] I thought I was using the ppa, but I guess I screwed it up somehow
[19:04] thanks for pointing that out though, I'll open a bug about that
[19:04] and I updated from trusty to utopic a few weeks ago
[19:06] https://github.com/whitmo/jujubox/issues/17
[19:16] I've fallen down a weird rabbit hole
[19:17] skay: uh oh - what's going on?
[19:17] lazyPower: it's not any of you, it's me. I can't figure out how to upgrade docker. I must have installed it in some bizarro way
[19:18] lazyPower: I tried sudo apt-get uninstalling it and reinstalling it, and the system thinks I have 1.5.x installed, but when I run it, it still tells me it's the other version
[19:19] I'm quite sure once I get that settled I'll be able to do things. I ran a test with the other box successfully
[19:23] lazyPower: which ppa are you using?
[19:24] skay: deb https://get.docker.com/ubuntu docker main
[19:25] skay: have you tried 'which docker' to see where a bin in $PATH is overriding the PPA-provided docker binary?
[19:26] lazyPower: I did, and I wasn't sure which package installed it. One of my friends walked me through using dpkg -S against the binary to figure out which package it belonged to. The culprit was docker.io
[19:26] that's the distro package; lxc-docker is the PPA package
[19:26] yeah, those two don't play nice together, at all
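[ed: A sketch of the two -v mounts lazyPower describes above, following the pattern in the linked charmbox readme -- the host paths and image name are illustrative:]

    # Mount .juju and the charm series directory as two separate volumes;
    # mounting the whole JUJU_REPOSITORY at once would overwrite .juju
    docker run -ti --rm \
        -v ~/.juju:/home/ubuntu/.juju \
        -v ~/charms/trusty:/home/ubuntu/trusty \
        jujusolutions/charmbox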
[19:32] is there a way to prevent juju from freeing a MaaS machine after you destroy a service running on it?
[19:32] Juju seems overly eager to clean up and it's extremely annoying to have to wait 8 minutes for MaaS to re-provision the machine
[19:35] it seems like I should have to run juju destroy-machine or destroy-environment for the machine to be freed, based on the documentation, but in practice it just happens immediately after the service is destroyed
[19:42] lazyPower: thanks, btw. I have it installed properly and was able to build the charmbox
[19:42] \o/
[20:03] alrighty, I can run tests, woohoo.
[20:06] skay: glad we got you sorted :)
[20:09] today's goal is to add basenode support to block-storage-broker. I see that charmhelpers has an execd_preinstall helper for that, but I also see a lot of charms doing this in many different ways
[20:09] any reason for me NOT to use execd_preinstall?
[20:14] skay: the b-s-b charm already supports basenode
[20:14] blr: it does? I did something wrong then. It wasn't working in my environment
[20:15] blr: I don't see where it happens in the source. Could you show me?
[20:47] this adds basenode support to block-storage-broker: https://code.launchpad.net/~codersquid/charms/precise/block-storage-broker/basenode-support
[20:47] I'm not sure it's good for a merge request
=== natefinch is now known as natefinch-dinner
[21:29] tvansteenburgh: for tomorrow - https://github.com/juju-solutions/bundletester/issues/15
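[ed: On the execd_preinstall discussion above: the charmhelpers helper (charmhelpers.payload.execd.execd_preinstall) runs any executables named charm-pre-install found under exec.d/*/ in the charm directory, before the install hook proper. A minimal sketch of laying that out from the shell -- the subdirectory name and script body are illustrative:]

    # Run from inside the charm directory
    mkdir -p exec.d/basenode
    cat > exec.d/basenode/charm-pre-install <<'EOF'
    #!/bin/sh
    # Site-specific setup that runs before the charm's install hook
    echo "running basenode pre-install"
    EOF
    chmod +x exec.d/basenode/charm-pre-install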