[02:44] charmers, openstack-charmers: for 50 rupees ..... what is the purpose of the ext-port and data-port params in neutron-openvswitch?
[03:14] charmers, openstack-charmers: From what I can gather, a hypothetical multi-provider l2/l3 configuration might look like: floating-pool-ext-net-1 --> network-node.eth0, floating-pool-ext-net-2 --> network-node.eth1 ----- then we might expect our bridge_mappings to look like: physnet1:br-ex1, physnet2:br-ex2 ----- and finally the data-port param could be: br-ex1:eth0 br-ex2:eth1 ? <------- this makes sense to me:
[03:14] eth0 --> br-ex1 --> physnet1, and eth1 --> br-ex2 --> physnet2. We could then create our floating pool nets on their respective interfaces by specifying --provider:physical_network=physnet1, --provider:physical_network=physnet2 and implement multi-provider...... am I on the right track here, as far as the data-port param is concerned?
[03:23] charmers, openstack-charmers: I guess where I am confused is why neutron-openvswitch should be concerned with a configuration of this nature. From what I can gather it only affects the neutron and neutron plugin conf files on nova-compute nodes. Is it common practice to connect your floating pool provider networks to your compute nodes as well as your network nodes? <-- for dvr this makes sense... but should
[03:23] it have any implication elsewhere?
[03:24] charmers, openstack-charmers: I have a feeling I'm missing something here... could anyone help me out?
[03:25] jamespage: Could you give any insight on this?
[10:14] jamespage, +1 for https://code.launchpad.net/~james-page/charms/trusty/neutron-api/kilo-dvr/+merge/258370
[10:14] Happy for me to land it?
[10:14] It'd be good to get it into stable too
[10:14] * gnuoy lands it into next
[10:27] gnuoy, ta
[11:24] jamespage: ping
[11:24] apuimedo, o/
[11:25] jamespage: how's it going?
[11:25] good thanks - and you?
[11:25] here, doing some changes to our charms for better interoperability :-)
[11:25] apuimedo, awesome!
[11:26] jamespage: I found that it is better that I use the identity-service keystone relation for midonet-api
[11:26] the problem I encountered trying to use it is two-fold though
[11:26] apuimedo, for registering the endpoints and getting credentials?
[11:26] yes
[11:26] apuimedo, def the way to go
[11:26] midonet-api needs to get the admin-token
[11:26] so it should be a valid service
[11:26] you'll probably need to update keystone to understand the midonet service type
[11:26] so it should be in valid_services
[11:27] apuimedo, yah - you got it :-)
[11:27] could I submit a patch to keystone stable about it?
[11:27] *the keystone charm stable
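[editor's note: at the time, teaching the keystone charm about a new endpoint type meant adding it to the valid_services dict in hooks/keystone_utils.py, which is what jamespage is suggesting above. A minimal sketch of such an entry follows; the "desc" key is taken from the traceback quoted below, but the "type" key, the wording, and the surrounding entries are assumptions for illustration, not the charm's actual code.]

```python
# Hypothetical sketch only: what a midonet entry in the keystone charm's
# valid_services dict might look like. "desc" appears in the charm traceback
# discussed in this log; the other keys and values are assumed.
valid_services = {
    # ... existing entries (nova, glance, etc.) ...
    "midonet": {
        "type": "midonet",              # service type recorded in the keystone catalog
        "desc": "MidoNet API service",  # human-readable description for the endpoint
    },
}
```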
[11:29] the other issue, which does not directly affect me since I need to be a valid service, is that for non-valid services I think I stumbled upon a bug
[11:29] 2015-05-08 11:14:42 INFO identity-service-relation-changed File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 1521, in add_endpoint
[11:29] 2015-05-08 11:14:42 INFO identity-service-relation-changed desc = valid_services[service]["desc"]
[11:30] you are accessing the desc field without checking if the service is in valid_services
[11:30] I mean, you call ensure_valid_service to remove the admin token
[11:30] but then in add_endpoint
[11:31] you unconditionally try to access the fields
[11:31] -> Kaboom :-)
[11:31] uncaught exception
[11:31] jamespage: ^^
[11:32] apuimedo, interesting
[11:32] :-)
[11:32] apuimedo, please do submit a merge proposal for keystone for the new service types you need - that's 100% appropriate
[11:32] I think it warrants an amulet test for adding non-valid services :P
[11:32] apuimedo, well a unit test at least :-)
[11:33] at the very least, yes :P
[11:33] jamespage: do you need me to file the launchpad bug or will you take care of it?
[11:34] apuimedo, I guess the handling could be better - the identity-service relation expects endpoints with valid types to be presented - if that's not the case it errors - not sure whether that is better than handling it with a log message or not
[11:34] apuimedo, please file a bug :-)
[11:34] jamespage: well, the fact that there's specific code for removing the admin token from the returned data makes me think that the original intention was to allow arbitrary non-privileged endpoint additions
[11:35] so the fix, for keystone/next
[11:35] would be to either remove that capability or to make sure that it adds the endpoint ;-)
[11:35] apuimedo, it might be - I did not write much of that code tbh so I'd have to go grok and learn
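[editor's note: the more forgiving handling being debated above could look something like the sketch below — check membership in valid_services before dereferencing, and log rather than raise. This is an illustration only: the function signature and surrounding logic of add_endpoint are assumed from the traceback, and the charm team may well prefer a different fix, such as keeping the hard error.]

```python
# Sketch, not the actual keystone charm code. valid_services is the
# module-level dict in keystone_utils.py; the parameters below are assumed.
from charmhelpers.core.hookenv import log, WARNING

valid_services = {}  # placeholder here; defined at module level in keystone_utils.py

def add_endpoint(region, service, publicurl, adminurl, internalurl):
    if service not in valid_services:
        # The reported bug: this lookup happened unconditionally, so an
        # unrecognised service type raised an uncaught KeyError here.
        log("Unknown service type '%s'; not registering an endpoint" % service,
            level=WARNING)
        return
    desc = valid_services[service]["desc"]
    # ... existing logic that creates the service and endpoint records ...
```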
[11:36] who is ivok?
[11:38] apuimedo, ivok?
[11:39] I see him as the author of this code in the bzr blame
[11:48] oh, I see, Ante Karamatic
=== rogpeppe2 is now known as rogpeppe
[13:19] gnuoy: Nothing says thank you like a reciprocal review -> https://code.launchpad.net/~stub/charm-helpers/integration/+merge/257748
=== anthonyf is now known as Guest11644
[13:35] jamespage: https://code.launchpad.net/~celebdor/charms/trusty/keystone/midonet_valid_service/+merge/258624
[13:37] stub, thanks for the review. I'm not keen on having the exits in status_set tbh, I was planning to run status_set multiple times in the same hook.
[16:28] I've got a weird problem while using Juju + MAAS
[16:31] I have 8 nodes in status "ready" in MAAS. When I use the "juju add-machine" command, it deploys 3 nodes at the same time, but actually I only want one.
[16:33] Before, it would just request one machine, but it suddenly switched to 3 after I ran "juju add-machine" with several extra arguments, like: juju add-machine -n 5, juju add-machine zone=production.
[16:33] But when I switch back to the default command "juju add-machine" it always requests 3 machines .....
[16:34] Has anybody had this kind of experience before? thx
[17:22] anacapa: that's interesting behavior and the first time I've seen it reported
[17:22] anacapa: which version of Juju are you using?
[17:38] natefinch, mgz: can either of you review http://reviews.vapour.ws/r/1633/
=== mhall119 is now known as mhall119|afk
[19:04] lazyPower_: 1.23.2-trusty-amd64
[19:04] sorry, I went to the data center just now
[19:06] sorry, the version which has the described problem is: juju-1.22.1-trusty-amd64
[19:06] I just upgraded it to 1.23.2
=== anthonyf is now known as Guest82825
[19:43] the problem appears again, even when I use the command "juju add-machine -n 1". It asks for 3 nodes from MAAS, and shows me an error like: "106":
[19:43] agent-state-info: 'cannot run instances: cannot run instances: gomaasapi: got
[19:43] error back from server: 409 CONFLICT (No available node matches constraints:
[19:43] zone=default)'
[19:43] instance-id: pending
[19:43] series: trusty
=== mhall119|afk is now known as mhall119
[20:17] Juju version 1.23.2 does not appear to queue actions. I have a script that bootstraps, adds two machines, deploys a service to each machine, adds a relation between the two services, and then runs an action. That action occurs before the relation-changed hook has fired.
[20:17] Am I expecting too much or doing something wrong?
[20:18] I have a one second sleep between each juju command.
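[editor's note: on the last question — Juju 1.x has no built-in way to know that relation hooks have finished, so a fixed one-second sleep between commands is indeed racy. A rough workaround is to poll juju status until the deployment looks settled before running the action. The sketch below is an assumption-laden illustration: it only checks that every unit agent reports agent-state: started (the 1.x status field), which avoids racing provisioning but still does not guarantee that relation-changed has already run.]

```python
# Sketch of a "wait before running the action" helper for Juju 1.x.
# Assumptions: the 1.x YAML status layout (services -> units -> agent-state)
# and that PyYAML is available. This is a heuristic, not a completeness check.
import subprocess
import time

import yaml

def wait_for_started(timeout=900, interval=10):
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.check_output(["juju", "status", "--format=yaml"])
        status = yaml.safe_load(out)
        units = [unit
                 for svc in status.get("services", {}).values()
                 for unit in svc.get("units", {}).values()]
        if units and all(u.get("agent-state") == "started" for u in units):
            return
        time.sleep(interval)
    raise RuntimeError("timed out waiting for all units to reach agent-state: started")

wait_for_started()
# only now run the action, e.g. with the 1.23-era "juju action do <unit> <action>" CLI
```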