=== Guest80711 is now known as med_ [02:35] jamespage: Hi, I want to get a config option value from another charm. I use config('ext-port') but can't get the value from the charm neutron-gateway. How can I get this value "ext-port" from neutron-gateway? Thanks === yuanyou_ is now known as yuanyou === coreycb` is now known as coreycb [09:26] yuanyou, hey - config is always scoped to a specific charm - it's possible to distribute that across relations between charms, but you'd have to do that explicitly - what's your use case? [09:35] jamespage, did you see my question from yesterday? I might have missed your reply as my client got disconnected apparently for a bit [10:24] Hello, trying to run juju inside an ubuntu wily64 VM behind a proxy [10:25] I've configured the proxy inside environments.yaml and when trying to use the local provider to deploy a simple redis service I get an error "DEBUG httpbakery client.go:226 } -> error ". Any hints? [10:28] nagyz, hey - my irc was on and off whilst travelling - ask me again :-) === axino` is now known as axino [10:34] jamespage: are you back from the travel? [10:35] apuimedo|away, I am yes [10:35] :-) [10:36] jamespage: https://code.launchpad.net/~celebdor/charms/trusty/nova-cloud-controller/liberty/+merge/283709 reminder :P [10:36] apuimedo|away, looking [10:36] apuimedo|away, did you see my comments in midonet-api on the review bug? [10:37] not yet, I'll check it after I finish the current meeting [10:38] apuimedo|away, oh - right - that merge includes shared-secret... [10:39] jamespage, I'd like to use juju to deploy openstack but I already have a ceph cluster up and running which was not done using juju - is there a way to deploy the ceph charms and point them to the existing installation? [10:40] nagyz, no - sorry - that's not possible [10:40] I guess it's also not possible to deploy a new availability zone but instead of deploying keystone and horizon point them to an already existing one? [10:40] it's a popular request and it's probably not that much work to figure out a proxy charm that implements the client interface of the ceph charm [10:40] but it's not been written by anyone yet... [10:41] nagyz, az in which context? nova? [10:41] yeah [10:41] it's possible to do multiple regions with the charms with a single keystone and horizon [10:41] az is adjunct to that concept - it's just a grouping of servers within a region... [10:41] what I meant is that we already have openstack up and running and we want to add a new az that I wanted to deploy using juju [10:42] nagyz, do you mean region or az? [10:43] ah, sorry, right, I meant region. [10:43] the only shared components between regions are keystone and horizon, right? [10:44] so is it possible to deploy everything except keystone and horizon with juju, and for those just point them to the existing installation? [10:44] nagyz, ok - so all of the charms have a 'region' setting; you have to deploy new instances of all charms with the region set to the new region name - horizon just relates to keystone so that should just dtrt; keystone you can specify multiple regions [10:44] nagyz, oh right - so the proxy question again in a different context - no that's not possible [10:44] right, but when I add a keystone relation so the different charms can register in the endpoints, they need to proxy [10:44] ah, right. [10:44] ok, so juju is only for greenfield deployments, I see :) [10:45] nagyz, yes [10:45] sorry [10:45] not a retrofit... [10:45] so I'd need to figure out how to migrate the current data over...
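To make the point above about distributing config across relations concrete, here is a minimal sketch using juju's shell hook tools (config-get, relation-set, relation-get); the 'ext-net' relation name and hook file names are hypothetical and not part of the real neutron-gateway charm:

    #!/bin/bash
    # hooks/ext-net-relation-joined on the charm that owns the option
    # (publishes its own config value onto the relation)
    relation-set ext-port="$(config-get ext-port)"

    #!/bin/bash
    # hooks/ext-net-relation-changed on the consuming charm
    # (reads back whatever the remote unit published)
    ext_port=$(relation-get ext-port)
    [ -n "$ext_port" ] && juju-log "remote unit published ext-port=$ext_port"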
[10:45] which is not going to happen on the ceph side (100TB+) [10:45] so there goes juju for me I assume :( :) [10:46] nagyz, yah - that's quite a bit of data... [10:46] nagyz, lemme check on the proxy charm thing [10:46] the new one we're about to deploy is ~2PB which I expect the pesky users to fill up quickly :-) [10:46] icey, cholcombe: ^^ I know we discussed proxying an existing ceph deployment into a juju deployed openstack cloud - have either of you done any work in that area? [10:46] or I guess we could write the proxy charm ourselves. [10:47] nagyz, that's def possible [10:47] jamespage nagyz: I haven't done any work on that yet [10:47] nagyz, we'd love to support you in that effort if you decide to go that route... [10:47] seems like every project I touch I end up writing code for - same happened to OpenStack itself. [10:47] nagyz, welcome to open source! [10:47] hah [10:48] with the current ceph charms is it possible to deploy the mons separately? [10:48] nagyz, yes [10:48] in fact we have a bit of a redesign in flight for that [10:48] I know there is ceph-osd which only adds osds to existing clusters but looked to me like the ceph charm installs both the mon and the osd code [10:48] nagyz, the ceph charm is a superset of ceph-osd - but you can run it with no osd-devices configuration, so it just does the mon [10:48] and sets it up [10:48] ah, got it [10:48] and then just keep adding ceph-osds [10:49] nagyz, icey has been working on a new ceph-mon charm, chopping out the osd support from ceph and simplifying the charm [10:49] are you aware that if I deploy using 'osd-devices: a' and then change it to 'osd-devices: a b' then it doesn't work? :-) [10:49] it wants to re-run ceph-deploy on a, which fails [10:49] nagyz, osd-devices is quite blunt [10:49] so I cannot add drives to an osd once deployed? [10:49] nagyz, oh wait - that should not happen - the charm should detect that a is in use and just skip it [10:49] nagyz, please file a bug for that [10:50] def a regression if that is the case... [10:50] ok, in my -limited- testing, this didn't work. I'll retest and see [10:50] * jamespage wonders if icey broke my code... [10:50] for quick setup and teardown I'm using juju now to test stuff [10:50] jamespage: I don't think so... [10:50] but of course juju itself has problems with my bonded maas 1.9 network [10:50] icey, just kidding ;-) [10:50] 50% of deployments fail [10:50] yeah yeah [10:51] thanks jamespage icey for the info [10:51] nagyz, oh that sounds nasty - again if you're hitting specific issues please raise bugs on juju as well - you sound like you're right on the edge of feature support and we're working towards 16.04 in three months... [10:51] so any feedback on latest juju + maas 1.9 with network bonding, vlans etc... is super useful right now [10:51] yeah we're going to stick to 14.04 for the next 6+ months after 16.04 is released tho :P [10:52] right I've been opening maas bugs left and right [10:52] nagyz, ack - you and a lot of people [10:52] nagyz, I think most users take the new LTS at release into testing and then go to production in that type of timescale... [10:52] agreed - we'll do the same [10:52] testtesttesttesttest [10:53] * jamespage apologises for the mantra...
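A rough sketch of the mon/OSD split described above, assuming juju 1.x CLI syntax and placeholder values in ceph.yaml; check the charms' own config.yaml for the authoritative option names:

    # ceph.yaml (placeholder values):
    #   ceph:
    #     fsid: <some uuid>
    #     monitor-secret: <ceph key>
    #     monitor-count: 3        # note: no osd-devices, so these units run mons only
    #   ceph-osd:
    #     osd-devices: /dev/sdb /dev/sdc
    juju deploy -n 3 --config ceph.yaml ceph
    juju deploy -n 3 --config ceph.yaml ceph-osd
    juju add-relation ceph ceph-osd
    juju add-unit ceph-osd        # later: grow the cluster by adding more ceph-osd units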
[10:53] nagyz, if you have other questions either ask here or on the juju ML [10:53] will do [10:53] I try to watch both ;-) [10:54] apuimedo|away, hey - so I have most of midonet deployed apart from the host-agents bit [10:54] apuimedo|away, I was surprised that my instance got a dhcp IP address even without that - does midolman do something clever on the compute node? [10:55] jamespage: you mean apart from the neutron-agents-midonet? [10:55] apuimedo|away, yeah === apuimedo|away is now known as apuimedo [10:56] jamespage: MidoNet dhcp driver only does a noop and sets a namespace for the metadata driver [10:56] jamespage: MidoNet serves dhcp itself :P [10:56] apuimedo, ah! [10:56] it's much comfier [10:56] so that comes from midonet-api or midolman? [10:56] one more question on the network side now that you guys are talking about it: let's say I have eth0 deployed on 10.1.0.0/16 and eth1 deployed on 10.2.0.0/16 - is it possible to tell juju to use one subnet for exposing networks and the other just for spinning up containers for example? [10:56] and with the new release, we also provide metadata from the agent [10:56] midolman [10:56] is there a juju network doc that I could read about the maas integration? [10:57] nagyz, not quite... [10:57] it's coming with juju 2.0 [10:57] but juju will still take a while to catch up with the new MidoNet release [10:57] 'network spaces' [10:57] lemme dig you out a reference [10:57] apuimedo, hmm ok [10:57] jamespage, maas introduced fabric and space but it's very confusing even for someone with good network experience [10:57] jamespage: ryotagami will be the one adding the v5 support [10:57] jamespage, the wording is not very exact in the maas docs [10:57] nagyz, I'll provide them that feedback [10:57] hmm - none in channel [10:58] nagyz, fabric [10:58] nagyz, sorry - I know there are some documentation updates in flight - MAAS 1.9 is still fairly new [10:58] nagyz, broadly [10:58] I guess one is L2 the other is L3 separation [10:58] subnet >---- space [10:59] so multiple subnets in a space [10:59] a space is a collection of subnets (l2 network segments) with equivalent security profile [10:59] jamespage: which public-address will you get then? [10:59] so think DMZ, APPS, PCI for spaces [10:59] and which private-address [10:59] apuimedo, oh - not got that far yet :-) [10:59] in hookenv [10:59] jamespage: I was thinking hard about that problem before going to bed [11:00] sorry, too many conversations [11:00] * jamespage reads backscroll [11:00] jamespage: I was just joining your conversation with nagyz [11:00] apuimedo, well private-address almost becomes obsolete [11:00] because it will prolly affect openstack charms deployment [11:00] it still exists as really the IP address on the network with the default route [11:00] you'll probably have a management network [11:01] apuimedo, oh it will [11:01] and a data network [11:01] apuimedo, most charms support that with config [11:01] right [11:01] so if I want to future-proof [11:01] apuimedo, we're working on network space support across the charms in the run up to 16.04 [11:01] I need to get from my charms the ip on a specific maas network [11:01] so how is it currently done? can I declare the different subnets and have MAAS manage DHCP+DNS on them and juju use it? [11:01] when that feature GA's in juju [11:01] apuimedo, ok - so there will be some additional hook tooling for this [11:01] nagyz: so does maas already provide dnsmasq for more than a net?
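As an illustration of "most charms support that with config" for nagyz's two-NIC question: several OpenStack charms take a CIDR that selects which local network carries tenant/data traffic (os-data-network is one such option). This is a hedged sketch - verify the option exists in the charm revisions you actually deploy:

    # keep tenant overlay traffic on 10.2.0.0/16 while juju/management stays on 10.1.0.0/16
    juju set nova-compute os-data-network=10.2.0.0/16
    juju set neutron-gateway os-data-network=10.2.0.0/16
    juju get nova-compute | grep -A 3 os-data-network    # confirm the option exists and took effect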
[11:02] apuimedo, and probably some extra metadata stanzas - that's still in flux [11:02] jamespage: good. I'll be looking forward to seeing it then [11:02] apuimedo, good question actually - it still could be buggy. [11:02] thanks [11:02] apuimedo, even with one subnet, maas dns is broken for me [11:02] (I promised to open bugs) [11:02] nagyz: in which way? [11:02] apuimedo, basically it's 'network-get -r --primary-address' [11:03] think binding == management network or data network for tenant traffic [11:03] well, alai did mention yesterday that on some lxc containers she was not getting "search " in /etc/resolv.conf [11:03] when doing the initial enlistment the node gets a DHCP IP which is registered to DNS, but then when a new node wants to enlist it gets the same IP which already has a DNS record so it gets back a bad request for the enlistment rest call [11:03] but services are bound to spaces, so if units end up on different subnets within a space, they still get the right IP for the local subnet [11:03] which breaks enlistment so breaks maas totally [11:03] so the DHCP record <> DNS record sync is quite flaky [11:04] nagyz: that sounds very strange. It would be more of a dnsmasq bug, and those don't come often [11:04] why would it be dnsmasq? [11:04] nagyz: I once got the setting screwed up [11:04] nagyz, hmm - that sounds like a bug or some sort of configuration issue - I've not seen that problem in our lab [11:04] so I had to destroy the environment and recreate it [11:04] apuimedo, it's the isc-dhcp server in MAAS (not dnsmasq) [11:04] I'd LOVE to use maas's built-in DNS instead of writing a manual designate syncer script [11:04] I've found an isc-dhcp-server bug when using multiple subnets in trusty... [11:04] oh, then I misremember badly [11:04] it's pending confirmed, triaged [11:05] so I'm down to one subnet. [11:05] I had such a funny one. I did juju ssh foo [11:05] and juju was taking me to the `bar` machine [11:05] but even then if I right now flip it from DHCP to DHCP+DNS, it just kills my environment fully - no more commissioning or deploy possible [11:05] and I don't want to reenlist the 100+ nodes :) [11:05] (vm snapshots ftw) [11:06] nagyz, suffice to say there are lots of improvements in managing DNS across multiple subnets using MAAS - if there are isc-dhcp bugs, we're fortunate that there is lots of expertise both in the MAAS team and across the ubuntu distro to resolve that... [11:06] jamespage, right I think the bug only affects the version shipped in trusty [11:06] I can dig out the bug if you want. :) [11:07] https://bugs.launchpad.net/maas/+bug/1521618 [11:07] Bug #1521618: wrong subnet in DHCP answer when multiple networks are present [11:07] here it is [11:07] basically I can't even do PXE boot as it mixes up the subnet info [11:07] nagyz, please raise bugs and mail the juju and MAAS mailing lists with any probs - feedback is a hugely valuable part of our development process. [11:07] a lot of times I open bugs then I get an invalid and blake explains to me that it's not the right way to hold it ;-) [11:07] 'affects me too' with a comment is always good to help with priorities on existing bugs.... [11:08] nagyz, hehe - ok - sometimes a post to the ML first is a nice idea [11:08] is the ML active for maas?
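A prototype-era sketch of the hook tooling jamespage quotes above; the 'data' binding name is made up, and the exact network-get flags were still in flux at the time (the form quoted in the chat uses -r, while later juju releases take a binding name as shown here):

    #!/bin/bash
    # resolve the local address for a space-bound endpoint inside a charm hook
    addr=$(network-get data --primary-address)
    juju-log "binding 'data' resolves to ${addr}"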
[11:08] I saw the dev ML archive that had like 50 mails in a year [11:08] nagyz, but I'd always rather have the bug and explain that no [11:08] nagyz, meh - just email on the juju one - that has the right eyes on it [11:09] :D [11:09] lol [11:09] no/not [11:09] so would doing a keystone proxy charm be considerably harder than the ceph proxy? [11:09] (is "proxy charm" an official term?) [11:09] nagyz, well proxy charm is in my head really ;-) [11:09] found some google reference to it, might have been from you [11:10] but I think it makes a lot of sense [11:10] nagyz, but that's what it's doing - implementing the required interfaces, but just proxying out to a different backend service. [11:10] although our corporate overlords need to give me an OK before I could opensource any of the code I write which takes months... [11:10] rather than running them itself... [11:11] it can skip all installation phases and just has to implement the "answer" part of a relationship I guess [11:11] (haven't looked at juju internals) [11:12] nagyz, basically yes [11:12] one last question before I leave to grab a bite [11:12] nagyz, ok [11:12] let's say I deploy openstack now on trusty from the liberty cloud archive [11:12] ack [11:12] once mitaka comes out, what would the upgrade look like using juju? [11:12] nagyz, oh right [11:12] two options [11:12] either - "all units at once" [11:13] juju set glance openstack-origin=cloud:trusty-mitaka [11:13] or [11:13] juju set glance openstack-origin=cloud:trusty-mitaka action-managed-upgrade=True [11:13] and then do [11:13] juju action do glance/0 openstack-upgrade [11:13] just changing the origin wouldn't actually have the charm change the config options (if it got renamed from a to b, for example) [11:13] ah [11:13] that's the openstack-upgrade part :) [11:13] so I can right now test this going from kilo to liberty for example, right? [11:14] should that work? [11:14] yes [11:14] we've had this since folsom :-) [11:14] cool [11:14] I'd like to like juju. [11:14] help me like it. [11:14] :) [11:14] nagyz, look for 'openstack-origin' for that behaviour [11:14] so I could just go, deploy icehouse without the origin parameter, add the parameter to kilo, upgrade, change it to liberty, upgrade... [11:14] and shouldn't break a thing? [11:14] 'source' is less well specified - some charms will do an upgrade (rabbitmq-server) - some won't (ceph) [11:15] nagyz, that's the idea and what we test yes [11:15] nagyz, let me dig you out a link [11:15] are these covered in internal CD/CI tests? [11:16] nagyz, https://www.openstack.org/summit/tokyo-2015/videos/presentation/canonical-amazing-operations-201-in-place-upgrades-of-openstack-with-running-workloads [11:16] nagyz, yes [11:17] ah cool will watch later [11:17] we also re-verify before every charm release (3-month cadence) [11:17] I was actually in tokyo but missed this [11:17] nagyz, wolsen is awesome... [11:17] nagyz, watch out for godzilla ;) [11:18] the food in austin will be better [11:18] :) [11:18] hah [11:18] nagyz, I'm hoping to have our charm development upstream under openstack soon [11:18] nagyz, so we might get some design session time for discussing roadmap for the next 6 months for the charms [11:18] you mean under openstack's github? [11:19] nagyz, yeah - git/gerrit workflow...
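The two upgrade paths jamespage describes, gathered into one place (glance is just the example service):

    # option 1: upgrade every unit of the service at once
    juju set glance openstack-origin=cloud:trusty-mitaka

    # option 2: stage the new origin, then upgrade one unit at a time via an action
    juju set glance action-managed-upgrade=True openstack-origin=cloud:trusty-mitaka
    juju action do glance/0 openstack-upgrade
    juju action do glance/1 openstack-upgrade    # and so on, unit by unit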
[11:19] that would actually mean I could contribute code without doing any legal paperwork [11:19] as we're cleared for all-openstack stuff [11:19] nagyz, better make that happen quick then :) [11:19] I need to have this region up by the end of the month [11:20] nagyz, we sprinted on making the transition smooth last week which was really the final blocker; I just need to complete the review for the infra team and make sure everything is ready to go [11:20] and all devs know where to go once we migrate! [11:20] cool [11:20] looking forward to that [11:20] yeah me too [11:21] we need to do the stress test on our ceph first but I can get someone on my team to look into charm internals to assess the proxy [11:21] so if no work is done yet, we have a fresh start [11:22] ok, really off to grab a bite - talk to you later and thanks for all the info! [11:22] nagyz, tbh you can do your development where you like [11:22] nagyz, ack - ttfn === jesse_ is now known as randleman [11:31] apuimedo, right, back to reviewing midonet [11:31] very well [11:32] jamespage: I'm doing a bugfix for midonet-api midonet-agent interaction [11:32] apuimedo, ok [11:32] because the juju-info relation is giving me hostname.domain [11:32] and the matching was on just hostname [11:33] apuimedo, is neutron-agents-midonet usable yet? [11:33] so if you have a domain and dns configured in maas, that is problematic [11:33] apuimedo, oh I'm testing on my laptop under LXD [11:33] jamespage: the one that runs on bare-metal and then neutron-api goes inside lxc? Should be [11:33] it's quicker for reviews.... [11:34] jamespage: so no dns in your setup, right? [11:34] nope - everything in LXD containers - it's work in flight right now... [11:34] apuimedo, simple dns [11:34] ip/host forward reverse lookup only [11:34] ok [11:34] well, it should probably work then [11:35] I haven't tried the lxd provider, do you have a link to it? [11:35] apuimedo, midonet-api and midonet-agent are working ok [11:35] just to verify, jamespage [11:35] apuimedo, kinda - right now it's xenial development + juju built from source with a patch... [11:35] juju ssh midonet-api/0 [11:35] FOO=`sudo midonet-cli tunnel-zone list` [11:36] crap [11:36] wrong command [11:36] hehe [11:36] sudo -i [11:36] midonet-cli -e tunnel-zone list [11:36] it will give you a uuid [11:36] then [11:37] midonet-cli -e tunnel-zone uuid_you_got member list [11:37] if you have some members, we should be fine [11:37] I don't have a tunnel zone afaict [11:37] otherwise you have the same bug I do with maas [11:37] $ tunnel-zone list [11:37] tzone tzone0 name default_tz type gre [11:37] is that right? [11:37] yes [11:37] oh [11:38] tzone0? [11:38] then tunnel-zone tzone0 member list [11:38] tzone0 is an alias [11:38] zone tzone0 host host0 address 10.0.3.93 [11:38] ok, so that should be the ip of the compute machine [11:38] that appears ok [11:38] yeah - it is [11:38] good [11:38] host list [11:38] host host0 name juju-2bb878e7-d45e-4422-890e-8778b0aff37c-machine-8 alive true addresses /10.0.3.93,/ffff:ffff:fe80:0:ffff:ffff:fef7:3fff,/127.0.0.1,/0:0:0:0:0:0:0:1,/192.168.122.1 [11:38] that matches [11:38] you'll get another member when you add neutron-agents-midonet related to midonet-agent [11:39] ok [11:39] doing so now...
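The verification steps from the exchange above, collected into one sequence (names like tzone0 are whatever your deployment reports):

    # on the midonet-api unit: confirm each midolman host has joined the tunnel zone
    juju ssh midonet-api/0
    sudo -i
    midonet-cli -e tunnel-zone list                  # e.g. "tzone tzone0 name default_tz type gre"
    midonet-cli -e tunnel-zone tzone0 member list    # expect one member per compute/gateway host
    midonet-cli -e host list                         # hosts known to midonet, with their addresses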
[11:39] for the metadata [11:39] otherwise the nova instances are not gonna get metadata for ssh keys and such [11:41] apuimedo, yeah I observed that already [11:41] :-) [11:41] apuimedo, but was expecting that :-) [11:41] jamespage: you're one step ahead [11:41] ip but not metadata [11:42] in case you want to have external connectivity on the instances, I can tell you how to manually add an edge to one of the machines running midolman [11:42] I helped alai do it yesterday [11:42] apuimedo, pls [11:42] that would be good before we get the gateway charm [11:42] :-) [11:43] is the floating range 200.200.200.0/24 good for you? [11:43] I can work with that [11:43] okey dokey [11:44] so create an external network in neutron [11:44] that should automagically create a midonet "provider router" which you can see with midonet-cli -e router list [11:44] then [11:44] go to the compute node [11:44] and do [11:45] sudo ip link add type veth [11:45] sudo ip link set dev veth0 up [11:45] same for veth1 [11:45] enable sysctl forwarding [11:46] sysctl -w net.ipv4.ip_forward=1 [11:46] sudo iptables -t nat -I POSTROUTING -s 0.0.0.0/0 -d 200.200.200.0/24 -j MASQUERADE [11:47] sudo iptables -t nat -I POSTROUTING -s 200.200.200.0/24 -d 0.0.0.0/0 -j MASQUERADE [11:47] jamespage: did you get the uuid of the provider router? [11:49] I wonder if doing veth creation on lxc containers is problematic though [11:55] jamespage: going out for lunch [11:56] I'll be back in an hour or so === apuimedo is now known as apuimedo|lunch [12:19] apuimedo|lunch, ack [12:19] apuimedo|lunch, agents running, host registered - just not part of the tunnel zone yet... [12:19] * jamespage pokes more [12:43] apuimedo|lunch, doing lunch myself now [12:43] some bits on -agents-midonet [12:43] the template for /etc/neutron/neutron.conf is missing: [12:43] [agent] [12:43] root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf [12:43] [oslo_concurrency] [12:43] lock_path = /var/lock/neutron [12:44] without which the dhcp agent is not able to create and manage namespaces [12:44] the unit running -agents-midonet is listed in 'host list' but is not registered into the tunnel-zone [12:44] I'm running with kilo (and not mem) [12:47] apuimedo|lunch, exported bundle from my env - http://paste.ubuntu.com/14865744/ [13:13] that looks small [13:13] jamespage: no config? [13:14] ah yes, there is on some [13:32] jamespage: oh, I added that to the neutron.conf template on friday, I may have forgotten to push [13:38] that's odd [13:39] jamespage: can you send me the logs for the midonet-api unit? [13:45] apuimedo|lunch, yup [13:47] apuimedo|lunch, http://paste.ubuntu.com/14866135/ === apuimedo|lunch is now known as apuimedo [13:57] thanks [14:02] odd, I only see one ADD_MEMEBER [14:02] *MEMBER [14:03] and one nice failure [14:05] it seems zookeeper went down and then recovered [14:56] marcoceppi_: busy atm? [15:01] apuimedo, what do I need to poke to get things functional? [15:02] jamespage: well, we can manually add the neutron-agents-midonet host to the tunnel zone [15:03] midonet-cli -e tunnel-zone uuid_of_the_tunnel_zone add member host uuid_of_the_host address ip_address_of_the_host [15:04] did you get to do the iptables and veth creation I told you about before?
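apuimedo's manual edge preparation from above, as a single sketch; 200.200.200.0/24 is just the floating range picked in this session, and the UUIDs are whatever midonet-cli reports in your deployment:

    # on the compute node running midolman: veth pair, forwarding and NAT for the floating range
    sudo ip link add type veth                 # creates veth0/veth1
    sudo ip link set dev veth0 up
    sudo ip link set dev veth1 up
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -I POSTROUTING -s 0.0.0.0/0 -d 200.200.200.0/24 -j MASQUERADE
    sudo iptables -t nat -I POSTROUTING -s 200.200.200.0/24 -d 0.0.0.0/0 -j MASQUERADE

    # if a host shows up in 'host list' but not in the tunnel zone, add it by hand:
    midonet-cli -e tunnel-zone <tunnel_zone_uuid> add member host <host_uuid> address <host_ip>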
[15:06] apuimedo, just getting to that now [15:08] ok [15:20] apuimedo, veth in containers is fine - it's all namespaced OK [15:20] apuimedo, done those setup steps [15:23] cool [15:24] apuimedo, I'm basically doing this - https://www.rdoproject.org/networking/midonet-integration/ [15:24] ? [15:24] without the bridge [15:24] since it is not necessary [15:24] did you create an external neutron network? [15:25] apuimedo, yes - I can see the router in midonet-cli, but not in neutron - is that right? [15:25] it is :-) [15:25] it's like a parent router [15:25] okay [15:26] midonet-cli -e router uuid_of_the_provider_router add port address 200.200.200.2 net 200.200.200.0/24 [15:26] this will give you a uui [15:26] *uuid [15:26] apuimedo, done [15:27] midonet-cli -e host uuid_of_the_compute_node_in_host_list add binding port router router_uuid port previous_uuid interface veth1 [15:27] after this command, from nova-compute/0, you should be able to ping 200.200.200.2 [15:36] apuimedo, hmm - I may have an issue with cassandra - one second [15:37] ok [15:43] apuimedo, hmm so midonet-agent is not configuring the connection to cassandra with credentials [15:44] no [15:44] apuimedo, I can turn auth off on cassandra [15:45] it's on by default [15:45] one sec [15:45] Didn't I send you the charm config for Cassandra?! [15:47] apuimedo, erm no [15:47] at least I don't think so [15:47] mmm [15:47] let me check [16:44] apuimedo, ok re-deployed - still not using mem - but the config in midonet-cli looked good and I can see midolman logging port events on the edges... [16:44] that said no cigar on the ping yet [16:44] mmm [16:45] ip -4 route on the host where there's the veths [16:46] default via 10.0.3.1 dev eth0 [16:46] 10.0.3.0/24 dev eth0 proto kernel scope link src 10.0.3.5 [16:46] 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 [16:46] apuimedo, ^^ [16:46] apuimedo, I saw: [16:46] 2016.02.03 16:46:05.339 INFO [midolman-akka.actor.default-dispatcher-5] datapath-control - Datapath port veth1 added [16:46] 2016.02.03 16:46:05.339 INFO [midolman-akka.actor.default-dispatcher-5] datapath-control - Port 4/veth1/a8d30ec9-e44d-42fa-9d90-0d02203581cf became active [16:47] jamespage: you are missing a link scope route for 200.200.200.1 [16:47] did you do [16:47] ip addr add 200.200.200.1/24 dev veth0? [16:48] apuimedo, erm I have now [16:48] missed that - apologies [16:48] no problem ;-) [16:49] oh, add a subnet with 200.200.200.0/24 to the public net [16:49] apuimedo, already done [16:50] can you ping 200.200.200.2 [16:51] apuimedo, nope [16:51] should I be able to see that bound anywhere on the server? [16:51] mmm [16:51] on midonet-cli [16:52] create a router [16:52] in neutron [16:53] apuimedo, plugged into external and internal networks? [16:53] yes [16:54] but even so, we should have ping already; yesterday with alai we already had it at this step [16:54] I feel like we probably missed something [16:54] iptables -n -L -t nat [16:55] jamespage: oh, and [16:55] yes, thanks apuimedo, I think we are getting very close [16:55] midonet-cli router provider_router_uuid port list [16:55] maybe we forgot to bind the right address [16:57] apuimedo, how do I unbind the previously bound router port? [16:57] why? doesn't the current one have an address, jamespage?
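And the remaining wiring discussed above, as one sequence (the UUIDs and 200.200.200.x addresses are specific to this session):

    # create a port on the MidoNet provider router and bind it to veth1 on the compute host
    midonet-cli -e router <provider_router_uuid> add port address 200.200.200.2 net 200.200.200.0/24
    midonet-cli -e host <compute_host_uuid> add binding port router <provider_router_uuid> port <new_port_uuid> interface veth1

    # give the host side of the veth pair an address so there is a link-scope route to the edge
    sudo ip addr add 200.200.200.1/24 dev veth0
    ping -c 3 200.200.200.2                    # should answer once the binding is in place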
[16:58] apuimedo, so I created the router in neutron, and plugged it in [16:58] router router0 name MidoNet Provider Router state up [16:58] router router1 name public-router state up infilter chain6 outfilter chain7 [16:58] that's good [16:58] I now have extra ports for the neutron-created router [16:59] can you show me the output of the provider router port list? [16:59] apuimedo, [16:59] port port8 device router1 state up plugged no mac ac:ca:ba:72:6f:7c address 200.200.200.2 net 169.254.255.0/30 peer router0:port2 [16:59] port port10 device router1 state up plugged no mac ac:ca:ba:7b:6c:76 address 192.168.21.1 net 192.168.21.0/24 peer bridge1:port0 [16:59] port port9 device bridge1 state up plugged no peer router1:port1 [17:00] apuimedo, erm maybe [17:00] mmm [17:00] apuimedo, I'm not finding midonet-cli very discoverable [17:01] apuimedo, ah I also have to dial into another call... [17:01] biab [17:01] jamespage: yes, midonet-cli takes a while to get used to [17:01] ok, we can continue later/tomorrow [19:08] Is it possible to get the list of parameters to a Juju action? [19:09] The “juju action defined” command lists the actions, but not the parameters. [19:09] ubuntu@juju:~/mwc16charms/trusty/clearwater-juju$ juju action defined ims-a [19:09] create-user: Create a user. [19:09] delete-user: Delete a user. [19:33] agunturu: juju action defined --schema [19:35] Hi marcoceppi_. Thanks, that works [19:36] agunturu: cheers! === rcj` is now known as rcj [22:02] lazypower|summit, you around? [22:55] firl: we're GMT+1 atm, he might be in bed [22:56] marcoceppi_: gotcha thanks!
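For completeness, the action workflow marcoceppi_ points at; the parameter names below are made up - the real ones come from the charm's actions.yaml:

    juju action defined --schema ims-a                # full parameter schema for each action
    juju action do ims-a/0 create-user username=alice password=s3cret   # hypothetical parameters
    juju action fetch <action-id>                     # retrieve the action's result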