[02:44] <bdx> charmers, openstack-charmers: for 50 rupees ..... what is the purpose of the ext-port and data-port params in neutron-openvswitch?
[03:14] <bdx> charmers, openstack-charmers: From what I can gather, a hypothetical multi-provider l2/l3 configuration might look like: floating-pool-ext-net-1 --> network-node.eth0, floating-pool-ext-net-2 --> network-node.eth1 ----- then we might expect our bridge_mappings to look like: physnet1:br-ex1, physnet2:br-ex2 ----- AND FINALLY THE data-port param could be: br-ex1:eth0 br-ex2:eth1 ? <------- this makes sense to me:
[03:14] <bdx> eth0 --> br-ex1 --> physnet1, and eth1 --> br-ex2 --> physnet2. We could then create our floating pool nets on their respective interfaces by specifying --provider:physical_network=physnet1, --provider:physical_network=physnet2 and implement multi-provider...... am I on the right track here...as far as the data-port param is concerned?
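(For reference, a hedged sketch of the mapping bdx describes, using the neutron-openvswitch charm's bridge-mappings and data-port options; the network names, bridges, and NICs here are hypothetical, not taken from the chat:)
    # one external bridge per physical network, one NIC per bridge
    juju set neutron-openvswitch bridge-mappings="physnet1:br-ex1 physnet2:br-ex2" \
        data-port="br-ex1:eth0 br-ex2:eth1"
    # then one floating-pool network per provider network
    neutron net-create ext-net-1 --router:external --provider:network_type flat \
        --provider:physical_network physnet1
    neutron net-create ext-net-2 --router:external --provider:network_type flat \
        --provider:physical_network physnet2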
[03:23] <bdx> charmers, openstack-charmers: I guess where I am confused is why neutron-openvswitch should be concerned with a configuration of this nature....from what I can gather it only affects the neutron and neutron plugin conf files on nova-compute nodes....is this a common practice to connect your floating pool provider networks to your compute nodes and network nodes too? <-- for dvr this makes sense... but should
[03:23] <bdx> it have any implication elsewhere?
[03:24] <bdx> charmers, openstack-charmers: I have a feeling I'm missing something here....could anyone help me out?
[03:25] <bdx> jamespage: Could you give any insight on this?
[10:14] <gnuoy> jamespage, +1 for https://code.launchpad.net/~james-page/charms/trusty/neutron-api/kilo-dvr/+merge/258370
[10:14] <gnuoy> Happy for me to land it?
[10:14] <gnuoy> It'd be good to get it into stable too
[10:14]  * gnuoy lands it into next
[10:27] <jamespage> gnuoy, ta
[11:24] <apuimedo> jamespage: ping
[11:24] <jamespage> apuimedo, o/
[11:25] <apuimedo> jamespage: how's it going?
[11:25] <jamespage> good thanks - and you?
[11:25] <apuimedo> here, doing some changes to our charms for better interoperability :-)
[11:25] <jamespage> apuimedo, awesome!
[11:26] <apuimedo> jamespage: I found that it's better to use the identity-service keystone relation for midonet-api
[11:26] <apuimedo> the problem I encountered trying to use it is two-fold though
[11:26] <jamespage> apuimedo, for registering the endpoints and getting credentials?
[11:26] <apuimedo> yes
[11:26] <jamespage> apuimedo, def the way togo
[11:26] <apuimedo> midonet-api needs to get the admin-token
[11:26] <apuimedo> so it should be a valid-service
[11:26] <jamespage> you'll probably need to update keystone to understand the midonet service type
[11:26] <apuimedo> so it should be in valid_services
[11:27] <jamespage> apuimedo, yah - you got it :-)
[11:27] <apuimedo> could I submit a patch to keystone stable about it?
[11:27] <apuimedo> *the keystone charm stable
[11:29] <apuimedo> the other issue, which doesn't directly affect me since I need to be a valid service, is that for non-valid services I think I stumbled upon a bug
[11:29] <apuimedo> 2015-05-08 11:14:42 INFO identity-service-relation-changed   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 1521, in add_endpoint
[11:29] <apuimedo> 2015-05-08 11:14:42 INFO identity-service-relation-changed     desc = valid_services[service]["desc"]
[11:30] <apuimedo> you are accessing the desc field without checking if it is in valid services
[11:30] <apuimedo> I mean, you call ensure_valid_service to remove the admin token
[11:30] <apuimedo> but then in add_endpoint
[11:31] <apuimedo> you unconditionally try to access the fields
[11:31] <apuimedo> -> Kaboom :-)
[11:31] <apuimedo> uncaught exception
[11:31] <apuimedo> jamespage: ^^
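(A minimal sketch of the guard apuimedo is suggesting, reconstructed from the traceback above; the add_endpoint signature and the log call are assumptions, not the keystone charm's actual code:)
    # keystone_utils.py (hypothetical): add_endpoint currently runs
    #     desc = valid_services[service]["desc"]
    # unconditionally, so an unknown service type raises an uncaught KeyError.
    def add_endpoint(region, service, publicurl, adminurl, internalurl):
        if service not in valid_services:
            # skip (or at least log) unknown service types instead of erroring
            log("Unknown service type '%s'; not adding endpoint" % service)
            return
        desc = valid_services[service]["desc"]
        # ... rest of the original function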
[11:32] <jamespage> apuimedo, interesting
[11:32] <apuimedo> :-)
[11:32] <jamespage> apuimedo, please do submit a merge proposal for keystone for the new service types you need - that's 100% appropriate
[11:32] <apuimedo> I think it warrants an amulet test for adding non-valid services :P
[11:32] <jamespage> apuimedo, well a unit test at least :-)
[11:33] <apuimedo> at the very least, yes :P
[11:33] <apuimedo> jamespage: do you need me to file the launchpad bug or you take care of it?
[11:34] <jamespage> apuimedo, I guess the handling could be better - the identity-service relation expects endpoints with valid types to be presented - if that's not the case it errors - not sure whether that is better than handling with a log message or not
[11:34] <jamespage> apuimedo, please file a bug :-)
[11:34] <apuimedo> jamespage: well, the fact that there's specific code for removing the admin token from the returned data makes me think that the original intention was to allow arbitrary non-privileged endpoint additions
[11:35] <apuimedo> so the fix, for keystone/next
[11:35] <apuimedo> would be to either remove that capability or to make sure that it adds the endpoint ;-)
[11:35] <jamespage> apuimedo, it might be - I did not write much of that code tbh so I'd have to go grok and learn
[11:36] <apuimedo> who is ivok?
[11:38] <jamespage> apuimedo, ivok?
[11:39] <apuimedo> I see him as the author of this code in the bzr blame
[11:48] <apuimedo> oh, I see, Ante Karamatic
[13:19] <stub> gnuoy: Nothing says thank you like a reciprocal review -> https://code.launchpad.net/~stub/charm-helpers/integration/+merge/257748
[13:35] <apuimedo> jamespage: https://code.launchpad.net/~celebdor/charms/trusty/keystone/midonet_valid_service/+merge/258624
[13:37] <gnuoy> stub, thanks for the review.  I'm not keen on having the exits in status_set tbh, I was planning to run status_set multiple times in the same hook.
[16:28] <anacapa> got a weird problem while using Juju + MAAS
[16:31] <anacapa> I have 8 nodes in "ready" status in MAAS. When I use the "juju add-machine" command, it deploys 3 nodes at the same time, but actually I only want one.
[16:33] <anacapa> Before, it would just request one machine, but it suddenly switched to 3 after I ran "juju add-machine" with several extra arguments, like: juju add-machine -n 5, juju add-machine zone=production.
[16:33] <anacapa> but when I switch back to the default "juju add-machine" command, it always requests 3 machines.....
[16:34] <anacapa> has anybody run into this kind of thing before? thx
[17:22] <lazyPower_> anacapa: that's interesting behavior and the first time i've seen this reported
[17:22] <lazyPower_> anacapa: which version of Juju are you using?
[17:38] <sinzui> natefinch, mgz: can either of you review http://reviews.vapour.ws/r/1633/
[19:04] <anacapa> lazyPower_ : 1.23.2-trusty-amd64
[19:04] <anacapa> sorry, went to the data center just now
[19:06] <anacapa> sorry, the version which has the described problem is: juju-1.22.1-trusty-amd64
[19:06] <anacapa> I just upgrade it to 1.23.2
[19:43] <anacapa> the problem appears again, even when I use the command "juju add-machine -n 1". It requests 3 nodes from MAAS, and shows me an error like: "106":
[19:43] <anacapa>     agent-state-info: 'cannot run instances: cannot run instances: gomaasapi: got
[19:43] <anacapa>       error back from server: 409 CONFLICT (No available node matches constraints:
[19:43] <anacapa>       zone=default)'
[19:43] <anacapa>     instance-id: pending
[19:43] <anacapa>     series: trusty
[20:17] <nodtkn> juju version 1.23.2 does not appear to queue actions.  I have a script that bootstraps, adds two machines, deploys a service to each machine, adds a relation between the two services, and then runs an action.  That action occurs before the relation-changed hook is fired.
[20:17] <nodtkn> Am I expecting too much or doing something wrong?
[20:18] <nodtkn> I have a one second sleep between each juju command.
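(A hedged sketch of the sequence nodtkn describes; the charm and action names are hypothetical, and the sleeps are the one-second pauses mentioned above:)
    juju bootstrap
    sleep 1
    juju add-machine              # machine 1
    sleep 1
    juju add-machine              # machine 2
    sleep 1
    juju deploy svc-a --to 1      # hypothetical charms
    sleep 1
    juju deploy svc-b --to 2
    sleep 1
    juju add-relation svc-a svc-b
    sleep 1
    # juju 1.23 action syntax; this can run before svc-a's
    # relation-changed hook for the new relation has fired
    juju action do svc-a/0 some-action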