/srv/irclogs.ubuntu.com/2015/05/08/#juju.txt

[02:44] <bdx> charmers, openstack-charmers: for 50 rupees ..... what is the purpose of the ext-port and data-port params in neutron-openvswitch?
[03:14] <bdx> charmers, openstack-charmers: From what I can gather, a hypothetical multi-provider l2/l3 configuration might look like: floating-pool-ext-net-1 --> network-node.eth0, floating-pool-ext-net-2 --> network-node.eth1 ----- then we might expect our bridge_mappings to look like: physnet1:br-ex1, physnet2:br-ex2 ----- AND FINALLY THE data-port param could be: br-ex1:eth0 br-ex2:eth1 ? <------- this makes sense to me:
[03:14] <bdx> eth0 --> br-ex1 --> physnet1, and eth1 --> br-ex2 --> physnet2. We could then create our floating pool nets on their respective interfaces by specifying --provider:physical_network=physnet1, --provider:physical_network=physnet2 and implement multi-provider...... am I on the right track here, as far as the data-port param is concerned?
[03:23] <bdx> charmers, openstack-charmers: I guess where I am confused is why neutron-openvswitch should be concerned with a configuration of this nature.... from what I can gather it only affects the neutron and neutron plugin conf files on nova-compute nodes.... is it common practice to connect your floating pool provider networks to your compute nodes and network nodes too? <-- for dvr this makes sense... but should
[03:23] <bdx> it have any implication elsewhere?
[03:24] <bdx> charmers, openstack-charmers: I have a feeling I'm missing something here.... could anyone help me out?
[03:25] <bdx> jamespage: could you give any insight on this?
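For readers following along, the mapping bdx describes could be written out as a deployment config fragment. This is a sketch assembled from the discussion above only — the interface, bridge, and physnet names are bdx's example, and the option names should be checked against the charm's own config.yaml before use:

```yaml
# Sketch only: names taken from bdx's hypothetical example above;
# verify option names against the neutron-openvswitch charm config.
neutron-openvswitch:
  options:
    bridge-mappings: "physnet1:br-ex1 physnet2:br-ex2"
    data-port: "br-ex1:eth0 br-ex2:eth1"
```

Each floating-pool network would then be created against its physnet, e.g. by passing --provider:physical_network=physnet1 when creating the first external network.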
[10:14] <gnuoy> jamespage, +1 for https://code.launchpad.net/~james-page/charms/trusty/neutron-api/kilo-dvr/+merge/258370
[10:14] <gnuoy> Happy for me to land it?
[10:14] <gnuoy> It'd be good to get it into stable too
[10:14] * gnuoy lands it into next
[10:27] <jamespage> gnuoy, ta
[11:24] <apuimedo> jamespage: ping
[11:24] <jamespage> apuimedo, o/
[11:25] <apuimedo> jamespage: how's it going?
[11:25] <jamespage> good thanks - and you?
[11:25] <apuimedo> here, doing some changes to our charms for better interoperability :-)
[11:25] <jamespage> apuimedo, awesome!
[11:26] <apuimedo> jamespage: I found that it is better that I use the identity-service keystone relation for midonet-api
[11:26] <apuimedo> the problem I encountered trying to use it is two-fold though
[11:26] <jamespage> apuimedo, for registering the endpoints and getting credentials?
[11:26] <apuimedo> yes
[11:26] <jamespage> apuimedo, def the way to go
[11:26] <apuimedo> midonet-api needs to get the admin-token
[11:26] <apuimedo> so it should be a valid-service
[11:26] <jamespage> you'll probably need to update keystone to understand the midonet service type
[11:26] <apuimedo> so it should be in valid_services
[11:27] <jamespage> apuimedo, yah - you got it :-)
[11:27] <apuimedo> could I submit a patch to keystone stable about it?
[11:27] <apuimedo> *the keystone charm stable
[11:29] <apuimedo> the other issue, which does not directly affect me since I need to be a valid service, is that for a non-valid service I think I stumbled upon a bug
[11:29] <apuimedo> 2015-05-08 11:14:42 INFO identity-service-relation-changed   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 1521, in add_endpoint
[11:29] <apuimedo> 2015-05-08 11:14:42 INFO identity-service-relation-changed     desc = valid_services[service]["desc"]
[11:30] <apuimedo> you are accessing the desc field without checking if the service is in valid_services
[11:30] <apuimedo> I mean, you call ensure_valid_service to remove the admin token
[11:30] <apuimedo> but then in add_endpoint
[11:31] <apuimedo> you unconditionally try to access the fields
[11:31] <apuimedo> -> Kaboom :-)
[11:31] <apuimedo> uncaught exception
[11:31] <apuimedo> jamespage: ^^
[11:32] <jamespage> apuimedo, interesting
[11:32] <apuimedo> :-)
[11:32] <jamespage> apuimedo, please do submit a merge proposal for keystone for the new service types you need - that's 100% appropriate
[11:32] <apuimedo> I think it warrants an amulet test for adding non-valid services :P
[11:32] <jamespage> apuimedo, well a unit test at least :-)
[11:33] <apuimedo> at the very least, yes :P
[11:33] <apuimedo> jamespage: do you need me to file the launchpad bug or will you take care of it?
[11:34] <jamespage> apuimedo, I guess the handling could be better - the identity-service relation expects endpoints with valid types to be presented - if that's not the case, it errors - not sure whether that is better than handling it with a log message or not
[11:34] <jamespage> apuimedo, please file a bug :-)
[11:34] <apuimedo> jamespage: well, the fact that there's specific code for removing the admin token from the returned data makes me think that the original intention was to allow arbitrary non-privileged endpoint additions
[11:35] <apuimedo> so the fix, for keystone/next
[11:35] <apuimedo> would be to either remove that capability or to make sure that it adds the endpoint ;-)
[11:35] <jamespage> apuimedo, it might be - I did not write much of that code tbh so I'd have to go grok and learn
[11:36] <apuimedo> who is ivok?
[11:38] <jamespage> apuimedo, ivok?
[11:39] <apuimedo> I see him as the author of this code in the bzr blame
[11:48] <apuimedo> oh, I see, Ante Karamatic
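The guarded lookup being suggested might look something like the sketch below. This is a hedged illustration with made-up service data, not the actual keystone charm code — the real function lives in hooks/keystone_utils.py and differs in detail:

```python
# Hypothetical sketch of the guard apuimedo is proposing; the dict
# contents and function shape are illustrative only.
valid_services = {
    "keystone": {"type": "identity", "desc": "Keystone Identity Service"},
    "midonet": {"type": "midonet", "desc": "MidoNet API Service"},
}

def add_endpoint(service, endpoints):
    """Register an endpoint, skipping unknown service types instead of
    raising an uncaught KeyError on valid_services[service]."""
    meta = valid_services.get(service)
    if meta is None:
        # The reported bug: valid_services[service]["desc"] raised
        # KeyError here for services not in valid_services.
        return None
    endpoints[service] = {"desc": meta["desc"], "type": meta["type"]}
    return endpoints[service]
```

The alternative fix mentioned above would be the opposite: reject unknown services loudly at relation time rather than silently skipping them.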
=== rogpeppe2 is now known as rogpeppe
[13:19] <stub> gnuoy: Nothing says thank you like a reciprocal review -> https://code.launchpad.net/~stub/charm-helpers/integration/+merge/257748
=== anthonyf is now known as Guest11644
[13:35] <apuimedo> jamespage: https://code.launchpad.net/~celebdor/charms/trusty/keystone/midonet_valid_service/+merge/258624
[13:37] <gnuoy> stub, thanks for the review. I'm not keen on having the exits in status_set tbh, I was planning to run status_set multiple times in the same hook.
[16:28] <anacapa> got a weird problem while using Juju + MAAS
[16:31] <anacapa> I have 8 nodes in "Ready" status in MAAS. When I use the "juju add-machine" command, it deploys 3 nodes at the same time, but I actually only want one.
[16:33] <anacapa> Before, it would just request one machine, but it suddenly switched to 3 after I ran "juju add-machine" with several arguments, like: juju add-machine -n 5, juju add-machine zone=production.
[16:33] <anacapa> but when I switch back to the default command "juju add-machine", it always requests 3 machines...
[16:34] <anacapa> does anybody have experience with this kind of problem? thx
[17:22] <lazyPower_> anacapa: that's interesting behavior and the first time I've seen this reported
[17:22] <lazyPower_> anacapa: which version of Juju are you using?
[17:38] <sinzui> natefinch, mgz: can either of you review http://reviews.vapour.ws/r/1633/
=== mhall119 is now known as mhall119|afk
[19:04] <anacapa> lazyPower_: 1.23.2-trusty-amd64
[19:04] <anacapa> sorry, went to the data center just now
[19:06] <anacapa> sorry, the version that has the described problem is: juju-1.22.1-trusty-amd64
[19:06] <anacapa> I just upgraded it to 1.23.2
=== anthonyf is now known as Guest82825
[19:43] <anacapa> the problem appears again, even when I use the command "juju add-machine -n 1". It asks for 3 nodes from MAAS, and shows me an error like this for machine "106":
[19:43] <anacapa>     agent-state-info: 'cannot run instances: cannot run instances: gomaasapi: got
[19:43] <anacapa>       error back from server: 409 CONFLICT (No available node matches constraints:
[19:43] <anacapa>       zone=default)'
[19:43] <anacapa>     instance-id: pending
[19:43] <anacapa>     series: trusty
=== mhall119|afk is now known as mhall119
[20:17] <nodtkn> Juju version 1.23.2 does not appear to queue actions. I have a script that bootstraps, adds two machines, deploys a service to each machine, adds a relation between the two services, and then runs an action. That action occurs before the relation-changed hook is fired.
[20:17] <nodtkn> Am I expecting too much, or doing something wrong?
[20:18] <nodtkn> I have a one-second sleep between each juju command.
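A common workaround for the sequencing problem nodtkn describes is to poll for convergence from the driving script instead of sleeping for a fixed interval. A minimal sketch, with a hypothetical service name and a deliberately naive status check — the exact `juju status` output to look for varies by version:

```python
# Sketch: poll until a condition holds before running the next juju
# command, rather than relying on fixed sleeps.
import subprocess
import time

def wait_until(check, timeout=300, interval=5):
    """Poll check() until it returns True or timeout seconds elapse."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

def relation_present(service="wordpress"):
    # Hypothetical check: a real one would parse the YAML and inspect
    # the service's relations section for the expected relation.
    out = subprocess.check_output(
        ["juju", "status", service, "--format=yaml"])
    return b"relations:" in out

# wait_until(relation_present)  # block until the relation shows up,
#                               # then run the action from the script
```

The polling helper is generic; only the check function needs to know about juju.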

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!