[00:11] thedac, thx I pushed swift-proxy with tests fixed up
[02:03] coreycb: did that get pushed up? I don't see it.
[04:02] thedac, it's pushed up now, along with swift-storage and ceilometer-agent
[12:12] jamespage: answered your comments in https://code.launchpad.net/~celebdor/charms/trusty/nova-cloud-controller/midonet/+merge/273717
[12:16] gnuoy`, I pushed to the swift-storage status mp again if you want to take a look
[12:17] coreycb, will do, thanks
[12:18] gnuoy`, swift-proxy is also ready for re-review
[12:18] gnuoy`, thanks
[12:19] gnuoy`, morning
[12:20] hey apuimedo - will look shortly
[12:20] thanks jamespage
[12:29] https://code.launchpad.net/~celebdor/charms/trusty/neutron-api/midonet/+merge/273772 as well
[12:35] apuimedo, erm I can seen any response on either of those two merges?
[12:35] can't rather
[12:38] mmm
[12:38] let me check
[12:39] jamespage: sorry about that, forgot to press save. They should be visible now
[12:39] lol
[12:41] I'm a launchpad noob :P
[12:55] is there a debugging guide for juju? Something I can follow when juju won't deploy new services or machines and the debug-log output seems to end 10 hours ago?
[14:00] jamespage: I submitted the fix to charm-helpers to add the missing relation (neutron-api/next will need to be synced again)
[14:02] apuimedo, does MidonetContext get used elsewhere? otherwise let's avoid a ch-sync task and just drop it into the contexts for neutron-api itself
[14:03] jamespage: at the moment only in neutron-api
[14:03] apuimedo, direct into neutron-api then is fine
[14:03] jamespage: right. I thought about that too
[14:03] so I'll put it there :P
[14:03] thanks
[14:03] jamespage: anything about the other comments I made?
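The MidonetContext being dropped into neutron-api follows the standard charmhelpers context pattern: a class whose __call__() returns the dict a config template is rendered with. A minimal sketch of that pattern, with the class name taken from the merge proposal but the config key purely illustrative, and hookenv.config() replaced by a plain dict so the snippet stands alone:

```python
class MidonetContext(object):
    """Builds the template context for the Midonet plugin.

    A real implementation would subclass
    charmhelpers.contrib.openstack.context.OSContextGenerator and read
    values with hookenv.config(); config is injected here so the sketch
    is self-contained. The 'midonet-api-url' key is hypothetical.
    """

    def __init__(self, config):
        self.config = config  # stand-in for hookenv.config()

    def __call__(self):
        midonet_api = self.config.get('midonet-api-url')
        if not midonet_api:
            # Incomplete context: charmhelpers skips rendering in this case.
            return {}
        return {'midonet_api_url': midonet_api}
```

Keeping such a class local to neutron-api's contexts (rather than in charm-helpers) avoids the extra ch-sync step discussed above, at the cost of not being reusable by other charms.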
[14:03] apuimedo, still looking
[14:03] good, thanks
[14:08] apuimedo, responded on nova-cc - I don't think config gets passed to all templates
[14:09] mmm, I tested this one like a month ago
[14:09] apuimedo, NovaConfigContext
[14:09] but I'll test it again
[14:09] does the same thing that you'll need
[14:27] jamespage: https://code.launchpad.net/~celebdor/charms/trusty/neutron-api/midonet/+merge/273772 should address your comments
[14:30] apuimedo, I'm not sure the relative pathing for key file open calls will work
[14:30] I'll try it
[14:30] hooks run from the top level of the charm
[14:30] apuimedo, it might be better to use the charm_dir function to fully specify it
[14:31] that's in hookenv I think
[14:31] let's see
[14:31] I'll read up on that method
[14:31] os.path.join(charm_dir(), 'files/midonet.key')
[14:31] ok
[14:31] I didn't know about that one ;-)
[14:32] apuimedo, a unit test to cover the changes in neutron_api_utils.py would help validate that one way or the other
[14:32] that should be pretty trivial
[14:33] what should it cover? installation of the source?
[14:34] apuimedo, it should validate that given plugin == midonet and source being one of mem/midonet, the right calls are made to add_source
[14:34] just exercise the code paths appropriately
[14:34] apuimedo, also my comment on the neutron.conf changes was wrong
[14:35] the change to the juno file disables loadbalancer and firewall for anything other than midonet
[14:35] I think it needs to reflect the way you did it in the kilo neutron.conf file
[14:35] if midonet
[14:35] else
[14:35] ....
[14:37] jamespage: it should just be != "midonet"
[14:37] probably a typo
[14:38] apuimedo, not sure, but right now it will regress other plugins
[14:38] yes
[14:38] I'll push the fix now
[14:38] apuimedo, I'm assuming that
[14:38] +service_provider = LOADBALANCER:Midonet:midonet.neutron.services.loadbalancer.driver.MidonetLoadbalancerDriver:default
[14:38] 531
[14:38] does not have juno context then?
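The charm_dir() fix jamespage suggests above can be sketched as follows. charm_dir() is a real helper in charmhelpers.core.hookenv; it is re-created here from the CHARM_DIR environment variable, which juju exports for every hook invocation, so the snippet runs standalone:

```python
import os


def charm_dir():
    """Return the charm's root directory.

    This mirrors what charmhelpers.core.hookenv.charm_dir() does:
    juju sets CHARM_DIR in the environment of every hook run.
    """
    return os.environ.get('CHARM_DIR', '')


def midonet_key_path():
    """Absolute path to the bundled key file.

    Hooks do run from the top level of the charm, but an absolute path
    is more robust than opening the relative 'files/midonet.key'.
    """
    return os.path.join(charm_dir(), 'files', 'midonet.key')
```

In the real charm this is just the one-liner jamespage pasted, with charm_dir imported from hookenv rather than redefined.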
[14:39] I think we did not have our own load balancer then
[14:39] I'll double check
[14:49] jamespage: fix pushed
[15:00] apuimedo, I really would like to see unit tests for the code additions to this charm - the new context and the source configuration specifically
[15:00] apuimedo, also can I ask why the username and password are not passed over the midonet relation, and are provided by config instead?
[15:00] I may have already asked that at some point in the past..
[15:00] can't remember
[15:01] jamespage: it used to be fetched from a charm called midonet-repository that you told me to remove, making the repo configuration config and not a relation
[15:02] apuimedo, oh - sorry, being dumb - that's not the password to access midonet services - just the repos...
[15:02] midonet-api is not the owner of the repo information
[15:02] apuimedo, I woke up too early
[15:02] yes
[15:02] sorry
[15:02] no problem
[15:02] * jamespage <- jetlagged
[15:03] where did you fly?
[15:10] Seattle
[15:12] nice
=== apuimedo is now known as apuimedo|away
[15:58] Is there any way to set Juju's default placement? So I don't need to say --to lxc:0 all the time with the manual provider, for example?
=== lborda is now known as lborda-sprint
[17:58] jcastro: ping
[17:59] is there a way to say that the other side of this relation has to be finished before running?
[18:04] Icey: could you give a little more context?
[18:04] working on a charm that uses the elasticsearch relation
[18:04] the elasticsearch charm manages UFW
[18:05] our charm is trying to create an index in Elasticsearch but sometimes fails if its relation-changed hook runs before elasticsearch's has opened up the ports
[18:05] probably because there's no way for your charm to reach elasticsearch?
[18:05] try adding a check to see if the port is open before running
[18:09] yeah, was just hoping to have more than an infinite loop to wait on the port
[18:10] Icey: would you mind doing a paste of your relation-changed hook?
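The unit test jamespage asks for above can be sketched with unittest.mock. The function name, repo URLs, and mapping below are illustrative stand-ins (the real code lives in neutron_api_utils.py and calls charmhelpers.fetch.add_source); the point is just to exercise the plugin == 'midonet' paths and assert on the add_source calls:

```python
from unittest import mock

# Hypothetical stand-in for the source-configuration code under test;
# the repo URLs are placeholders, not real archives.
MIDONET_REPOS = {
    'mem':     'deb http://mem.example.com/apt stable main',
    'midonet': 'deb http://repo.example.org/midonet stable main',
}


def configure_midonet_source(plugin, source, add_source):
    """add_source is injected so tests can pass a Mock; the real charm
    calls charmhelpers.fetch.add_source directly (patched in tests)."""
    if plugin != 'midonet':
        return
    if source in MIDONET_REPOS:
        add_source(MIDONET_REPOS[source])


def test_mem_source_added():
    add_source = mock.Mock()
    configure_midonet_source('midonet', 'mem', add_source)
    add_source.assert_called_once_with(MIDONET_REPOS['mem'])


def test_other_plugins_do_not_touch_sources():
    add_source = mock.Mock()
    configure_midonet_source('ovs', 'midonet', add_source)
    add_source.assert_not_called()
```

In the actual charm tests, add_source would be patched with mock.patch on the neutron_api_utils module rather than injected as a parameter.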
I wanna check something real quick
[18:14] https://github.com/cholcombe973/ceph-metrics-collector/blob/master/hooks/hooks.py#L192
[18:14] the part that's failing is in https://github.com/cholcombe973/ceph-metrics-collector/blob/master/hooks/hooks.py#L181
[18:14] / https://github.com/cholcombe973/ceph-metrics-collector/blob/master/hooks/hooks.py#L127
[18:32] Icey: I think what you need to do there is check if elasticsearch has already set values for the relation, don't just run. your charm is running even though ES is not ready
[18:42] jose, for relation_get, is the unit the ip addr of the ES unit?
[18:43] huh?
[18:43] jose, I'm also working on Icey's problem
[18:43] I know, but what did you mean with your last question?
[18:43] well, you said grab the values that ES is setting with relation_get
[18:44] yes. elasticsearch will set relation values once the unit is ready for a relaiton
[18:44] relation*
[18:45] jose, ok great
[18:45] tbh, I don't know what those values are since I don't know the interface, but they should be documented somewhere
[18:45] otherwise, checking the charm source will do
[18:45] yeah, I'm looking
[18:48] jose, if that relation value isn't set, can I return 0 and expect to have my changed hook get called again, or is it a one-shot deal?
[18:48] cholcombe: that's right! when the values change it will count as a relation change, meaning that the relation-changed hook will run again
[18:48] jose, ok cool
[18:48] I wonder why juju is calling me before the relation values are set
[18:49] because a relation-joined is followed by a relation-changed
[18:49] i see
[19:36] How's it going everyone?? Are there any plans in the works for an official repo for layers and interfaces?
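The advice above (guard on the relation data first, then check the port with a bounded wait instead of failing or looping forever) can be sketched like this. relation_get comes from charmhelpers and is not shown; wait_for_port is a hypothetical plain-socket helper:

```python
import socket
import time


def wait_for_port(host, port, timeout=60.0, interval=2.0):
    """Poll until the TCP port accepts connections.

    Returns True on success, False once the timeout expires, so the
    caller gets a bounded wait rather than an infinite loop.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

In the relation-changed hook the pattern would be: read host and port with relation_get(); if either is unset, exit 0, since (as noted above) juju fires relation-changed again when the remote unit sets its data; otherwise call wait_for_port() before trying to create the index.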
....If not, an official general location for interfaces and layers would be sweet, and would also make the development and usage workflow much nicer :-)
[19:40] juju-solutions, core: I am getting this error http://paste.ubuntu.com/12726889/ when trying to login with launchpad to http://interfaces.juju.solutions/
[19:43] I assume it is because I am not in a group with the correct permissions
[21:00] does anyone know what this error means http://paste.ubuntu.com/12727715/
=== apuimedo|away is now known as apuimedo
[21:15] asanjar, o/
[21:15] asanjar, that looks like something went awry with the storage manager. what version of juju?
[21:23] hi lazypower, juju --version ==> 1.24.6-vivid-ppc64el
[21:24] asanjar, storage got a massive revamp in 1.25. If you're not opposed to running from the devel ppa, that error message *should* go away.
[21:26] okay will do
=== zerick_ is now known as zerick