[02:35] <yuanyou> jamespage:  Hi, I want to get a config option value from another charm. I use config('ext-port') but can't get the value from the neutron-gateway charm. How can I get this "ext-port" value from neutron-gateway? Thanks
[09:26] <jamespage> yuanyou, hey - config is always scoped to a specific charm - it's possible to distribute that across relations between charms, but you'd have to do that explicitly - what's your use case?
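A minimal sketch of what jamespage describes, assuming a custom relation between the two charms (here called 'ext-config', which is hypothetical); relation-set, relation-get and config-get are the standard hook tools:

    # neutron-gateway side: hooks/ext-config-relation-joined
    relation-set ext-port="$(config-get ext-port)"

    # consuming charm side: hooks/ext-config-relation-changed
    EXT_PORT=$(relation-get ext-port)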
[09:35] <nagyz> jamespage, did you see my question from yesterday? I might have missed your reply as my client got disconnected apparently for a bit
[10:24] <deanman> Hello trying to run juju inside an ubuntu wily64 VM behind proxy
[10:25] <deanman> I've configured the proxy inside environments.yaml and when trying to use the local provider to deploy a simple redis service I get an error  "DEBUG httpbakery client.go:226 } -> error <nil>". Any hints?
[10:28] <jamespage> nagyz, hey - my irc was on and off whilst travelling - ask me again :-)
[10:34] <apuimedo|away> jamespage: are you back from the travel?
[10:35] <jamespage> apuimedo|away, I am yes
[10:35] <apuimedo|away> :-)
[10:36] <apuimedo|away> jamespage: https://code.launchpad.net/~celebdor/charms/trusty/nova-cloud-controller/liberty/+merge/283709 reminder :P
[10:36] <jamespage> apuimedo|away, looking
[10:36] <jamespage> apuimedo|away, did you see my comments in midonet-api on the review bug?
[10:37] <apuimedo|away> not yet, I'll check it after I finish the current meeting
[10:38] <jamespage> apuimedo|away, oh - right - that merge includes shared-secret...
[10:39] <nagyz> jamespage, I'd like to use juju to deploy openstack but I already have a ceph cluster up and running which was not done using juju - is there a way to deploy the ceph charms and point them to the existing installation?
[10:40] <jamespage> nagyz, no - sorry - that's not possible
[10:40] <nagyz> I guess it's also not possible to deploy a new availability zone and, instead of deploying keystone and horizon, point them at an already existing one?
[10:40] <jamespage> it's a popular request and it's probably not that much work to figure out a proxy charm that implements the client interface of the ceph charm
[10:40] <jamespage> but it's not been written by anyone yet...
[10:41] <jamespage> nagyz, az in which context? nova?
[10:41] <nagyz> yeah
[10:41] <jamespage> it's possible to do multiple regions with the charms with a single keystone and horizon
[10:41] <jamespage> az is adjunct to that concept - it's just a grouping of servers within a region...
[10:41] <nagyz> what I meant is that we already have openstack up and running and we want to add a new az that I wanted to deploy using juju
[10:42] <jamespage> nagyz, do you mean region or az?
[10:43] <nagyz> ah, sorry, right, I meant region.
[10:43] <nagyz> the only shared components between regions are keystone and horizon, right?
[10:44] <nagyz> so is it possible to deploy everything except keystone and horizon with juju, and for those just point them to the existing installation?
[10:44] <jamespage> nagyz, ok - so all of the charms have a 'region' setting; you have to deploy new instances of all charms with the region set to the new region name - horizon just relates to keystone so that should just DTRT; keystone can have multiple regions specified
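A rough sketch of deploying a second region alongside the existing one, in the juju 1.x syntax used elsewhere in this log; 'RegionTwo' and the service aliases are placeholders, and the 'region' option name is assumed from the OpenStack charms (other relations omitted):

    juju deploy nova-cloud-controller ncc-region2
    juju set ncc-region2 region=RegionTwo
    juju add-relation ncc-region2 keystone   # keystone registers the new region's endpoints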
[10:44] <jamespage> nagyz, oh right - so the proxy question again in a different context - no that's not possible
[10:44] <nagyz> right, but when I add a keystone relation so the different charms can register in the endpoints, they need to proxy
[10:44] <nagyz> ah, right.
[10:44] <nagyz> ok, so juju is only for greenfield deployments, I see :)
[10:45] <jamespage> nagyz, yes
[10:45] <jamespage> sorry
[10:45] <jamespage> not a retrofit...
[10:45] <nagyz> so I'd need to figure out how to migrate the current data over...
[10:45] <nagyz> which is not going to happen on the ceph side (100TB+)
[10:45] <nagyz> so there goes juju for me I assume :( :)
[10:46] <jamespage> nagyz, yah - that's quite a bit of data...
[10:46] <jamespage> nagyz, lemme check on the proxy charm thing
[10:46] <nagyz> the new one we're about to deploy is ~2PB which I expect the pesky users to fill up quickly :-)
[10:46] <jamespage> icey, cholcombe: ^^ I know we discussed proxying an existing ceph deployment into a juju deployed openstack cloud - have either of you done any work in that area?
[10:46] <nagyz> or I guess we could write the proxy charm ourselves.
[10:47] <jamespage> nagyz, that's def possible
[10:47] <icey> jamespage nagyz: I haven't done any work on that yet
[10:47] <jamespage> nagyz, we'd love to support you in that effort if you decide to go that route...
[10:47] <nagyz> seems like every project I touch I end up writing code for - same happened to OpenStack itself.
[10:47] <jamespage> nagyz, welcome to open source!
[10:47] <nagyz> hah
[10:48] <nagyz> with the current ceph charms is it possible to deploy the mons separately?
[10:48] <jamespage> nagyz, yes
[10:48] <jamespage> in fact we have a bit of a redesign in flight for that
[10:48] <nagyz> I know there is ceph-osd which only adds osds to existing clusters but looked to me like the ceph charm installs both the mon and the osd code
[10:48] <jamespage> nagyz, the ceph charm is a superset of ceph-osd - but you can run it with no osd-devices configuration, so it just does the mon
[10:48] <nagyz> and sets it up
[10:48] <nagyz> ah, got it
[10:48] <nagyz> and then just keep adding ceph-osds
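A minimal sketch of that mon-only layout, assuming the monitor-count and osd-devices option names from the ceph and ceph-osd charms (required options such as fsid and monitor-secret are omitted here):

    # leave osd-devices unset on the ceph charm so it only runs monitors
    juju deploy -n 3 ceph
    juju set ceph monitor-count=3
    # storage lives in ceph-osd units, which you keep adding as needed
    juju deploy ceph-osd
    juju set ceph-osd osd-devices="/dev/sdb"
    juju add-relation ceph ceph-osd
    juju add-unit ceph-osd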
[10:49] <jamespage> nagyz, icey has been working on a new ceph-mon charm, chopping out the osd support from ceph and simplifying the charm
[10:49] <nagyz> are you aware that if I deploy using 'osd-devices: a' and then change it to 'osd-devices: a b' then it doesn't work? :-)
[10:49] <nagyz> it wants to re-run ceph-deploy on a, which fails
[10:49] <jamespage> nagyz, osd-devices is quite blunt
[10:49] <nagyz> so I cannot add drives inside an osd once deployed?
[10:49] <jamespage> nagyz, oh wait - that should not happen - the charm should detect that a is in use and just skip it
[10:49] <jamespage> nagyz, please file a bug for that
[10:50] <jamespage> def a regression if that is the case...
[10:50] <nagyz> ok, in my -limited- testing, this didn't work. I'll retest and see
[10:50]  * jamespage wonders if icey broke my code...
[10:50] <nagyz> for quick setup and teardown I'm using juju now to test stuff
[10:50] <icey> jamespage: I don't think so...
[10:50] <nagyz> but of course juju itself has problems with my bonded maas 1.9 network
[10:50] <jamespage> icey, just kidding ;-)
[10:50] <nagyz> 50% of deployments fail
[10:50] <icey> yeah yeah
[10:51] <nagyz> thanks jamespage icey for the info
[10:51] <jamespage> nagyz, oh that sounds nasty - again if you're hitting specific issues please raise bugs on juju as well - you sound like you're right on the edge of feature support and we're working towards 16.04 in three months...
[10:51] <jamespage> so any feedback on latest juju + maas 1.9 with network bonding vlans etc... is super useful right now
[10:51] <nagyz> yeah we're going to stick to 14.04 for the next 6+ months after 16.04 is released tho :P
[10:52] <nagyz> right I've been opening maas bugs left and right
[10:52] <jamespage> nagyz, ack - you and a lot of people
[10:52] <jamespage> nagyz, I think most users take the new LTS at release into testing and then go to production on that type of timescale...
[10:52] <nagyz> agreed - we'll do the same
[10:52] <jamespage> testtesttesttesttest
[10:53]  * jamespage apologies for the mantra...
[10:53] <jamespage> nagyz, if you have other questions either ask here or on the juju ML
[10:53] <nagyz> will do
[10:53] <jamespage> I try to watch both ;-)
[10:54] <jamespage> apuimedo|away, hey - so I have most of midonet deployed apart from the host-agents bit
[10:54] <jamespage> apuimedo|away, I was surprised that my instance got a dhcp IP address even without that - does midolman do something clever on the compute node?
[10:55] <apuimedo|away> jamespage: you mean apart from the neutron-agents-midonet?
[10:55] <jamespage> apuimedo|away, yeah
[10:56] <apuimedo> jamespage: MidoNet dhcp driver only does a noop and sets a namespace for the metadata driver
[10:56] <apuimedo> jamespage: MidoNet serves dhcp itself :P
[10:56] <jamespage> apuimedo, ah!
[10:56] <apuimedo> it's much comfier
[10:56] <jamespage> so that comes from midonet-api or midolman?
[10:56] <nagyz> one more question on the network side now that you guys are talking about it: let's say I have eth0 deployed on 10.1.0.0/16 and eth1 deployed on 10.2.0.0/16 - is it possible to tell juju to use one subnet for exposing networks and the other just for spinning up containers for example?
[10:56] <apuimedo> and with the new release, we also provide metadata from the agent
[10:56] <apuimedo> midolman
[10:56] <nagyz> is there a juju network doc that I could read about the maas integration?
[10:57] <jamespage> nagyz, not quite...
[10:57] <jamespage> its coming with juju 2.0
[10:57] <apuimedo> but juju will still take a while to catch up with the new MidoNet release
[10:57] <jamespage> 'network spaces'
[10:57] <jamespage> lemme dig you out a reference
[10:57] <jamespage> apuimedo, hmm ok
[10:57] <nagyz> jamespage, maas introduced fabric and space but it's very confusing even for someone with good network experience
[10:57] <apuimedo> jamespage: ryotagami will be the one adding the v5 support
[10:57] <nagyz> jamespage, the wording is not very exact in the maas docs
[10:57] <jamespage> nagyz, I'll provide them that feedback
[10:57] <jamespage> hmm - none in channel
[10:58] <jamespage> nagyz, fabric
[10:58] <jamespage> nagyz, sorry - I know there are some documentation updates in flight - MAAS 1.9 is still fairly new
[10:58] <jamespage> nagyz, broadly
[10:58] <nagyz> I guess one is L2 the other is L3 separation
[10:58] <jamespage> subnet >---- space
[10:59] <jamespage> so multiple subnets in a space
[10:59] <jamespage> a space is a collection of subnets (l2 network segments) with equivalent security profile
[10:59] <apuimedo> jamespage: which public-address will get you then?
[10:59] <jamespage> so think DMZ, APPS, PCI for spaces
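For illustration only, a sketch of how a space might eventually be consumed at deploy time; the spaces constraint syntax below is assumed from the network-spaces work jamespage mentions, and 'apps' is a placeholder space name:

    # place the service's units on machines with an address in the 'apps' space
    juju deploy mysql --constraints spaces=apps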
[10:59] <apuimedo> and which private-address
[10:59] <jamespage> apuimedo, oh - not got that far yet :-)
[10:59] <apuimedo> in hookenv
[10:59] <apuimedo> jamespage: I was thinking hard about that problem before going to bed
[11:00] <jamespage> sorry, too many conversations
[11:00]  * jamespage reads backscroll
[11:00] <apuimedo> jamespage: I was just joining your conversation with nagyz
[11:00] <jamespage> apuimedo, well private-address almost becomes obsolete
[11:00] <apuimedo> because it will prolly affect openstack charms deployment
[11:00] <jamespage> it still exists as really the IP address on the network with the default route
[11:00] <apuimedo> you'll probably have a management network
[11:01] <jamespage> apuimedo, oh it will
[11:01] <apuimedo> and a data network
[11:01] <jamespage> apuimedo, most charms support that with config
[11:01] <nagyz> right
[11:01] <apuimedo> so if I want to future proof
[11:01] <jamespage> apuimedo, we're working on network space support across the charms in the run up to 16.04
[11:01] <apuimedo> I need to get from my charms the ip on a specific maas network
[11:01] <nagyz> so how is it currently done? can I declare the different subnets and have MAAS manage DHCP+DNS on them and juju use it?
[11:01] <jamespage> when that feature GA's in juju
[11:01] <jamespage> apuimedo, ok - so there will be some additional hook tooling for this
[11:01] <apuimedo> nagyz: so does maas already provide dnsmasq for more than one net?
[11:02] <jamespage> apuimedo, and probably some extra metadata stanzas - that's still in flux
[11:02] <apuimedo> jamespage: good. I'll be looking forward to seeing it then
[11:02] <nagyz> apuimedo, good question actually - it still could be buggy.
[11:02] <apuimedo> thanks
[11:02] <nagyz> apuimedo, even with one subnet, maas dns is broken for me
[11:02] <nagyz> (I promised to open bugs)
[11:02] <apuimedo> nagyz: in which way?
[11:02] <jamespage> apuimedo, basically it's 'network-get -r <binding> --primary-address'
[11:03] <jamespage> think binding == management network or data network for tenant traffic
[11:03] <apuimedo> well, alai did mention yesterday that on some lxc containers she was not getting "search " in /etc/resolv.conf
[11:03] <nagyz> when doing the initial enlistment the node gets a DHCP IP which is registered in DNS, but then when a new node wants to enlist it gets the same IP, which already has a DNS record, so it gets back a bad request for the enlistment REST call
[11:03] <jamespage> but services are bound to spaces, so if units end up on different subnets within a space, they still get the right IP for the local subnet
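A sketch of how a charm hook might use that tooling once it lands; the flags were still in flux at the time, so the invocation below just echoes what jamespage quotes, and the 'internal' binding and config file are hypothetical:

    # address of this unit on whatever subnet backs the 'internal' binding
    ADDR=$(network-get -r internal --primary-address)
    echo "bind_host = $ADDR" >> /etc/myservice/myservice.conf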
[11:03] <nagyz> which breaks enlistment so breaks maas totally
[11:03] <nagyz> so the DHCP record <> DNS record sync is quite flaky
[11:04] <apuimedo> nagyz: that sounds very strange. It would be more of a dnsmasq bug, and those don't come often
[11:04] <nagyz> why would it be dnsmasq?
[11:04] <apuimedo> nagyz: I once got the setting screwed up
[11:04] <jamespage> nagyz, hmm - that sounds like a bug or some sort of configuration issue - I've not seen that problem in our lab
[11:04] <apuimedo> so I had to destroy the environment and recreate it
[11:04] <jamespage> apuimedo, it's the isc-dhcp server in MAAS (not dnsmasq)
[11:04] <nagyz> I'd LOVE to use maas's built in DNS instead of writing a manual designate syncer script
[11:04] <nagyz> I've found an isc-dhcp-server bug when using multiple subnets in trusty...
[11:04] <apuimedo> oh, then I misremember badly
[11:04] <nagyz> it's pending confirmed, triaged
[11:05] <nagyz> so I'm down to one subnet.
[11:05] <apuimedo> I had such a funny one. I did juju ssh foo
[11:05] <apuimedo> and juju was taking me to the `bar` machine
[11:05] <nagyz> but even then if I right now flip it from DHCP to DHCP+DNS, it just kills my environment fully - no more commissioning or deploy possible
[11:05] <nagyz> and I don't want to reenlist the 100+ nodes :)
[11:05] <nagyz> (vm snapshots ftw)
[11:06] <jamespage> nagyz, suffice to say lots of improvements in managing DNS across multiple subnets using MAAS - if there are isc-dhcp bugs, we're fortunate that there is lots of expertise both in the MAAS team and across the ubuntu distro to resolve that...
[11:06] <nagyz> jamespage, right I think the bug only affects the version shipped in trusty
[11:06] <nagyz> I can dig out the bug if you want. :)
[11:07] <nagyz> https://bugs.launchpad.net/maas/+bug/1521618
[11:07] <mup> Bug #1521618: wrong subnet in DHCP answer when multiple networks are present <MAAS:Triaged> <isc-dhcp (Ubuntu):Confirmed> <https://launchpad.net/bugs/1521618>
[11:07] <nagyz> here it is
[11:07] <nagyz> basically I can't even do PXE boot as it mixes up the subnet infos
[11:07] <jamespage> nagyz, please raise bugs and mail the juju and MAAS mailing lists with any probs - feedback is a hugely valuable part of our development process.
[11:07] <nagyz> a lot of times I open bugs then I get an invalid and blake explains to me that it's not the right way to hold it ;-)
[11:07] <jamespage> 'affects me too' with a comment is always good to help with priorities on existing bugs....
[11:08] <jamespage> nagyz, hehe - ok - sometimes a post to the ML first is a nice idea
[11:08] <nagyz> is the ML active for maas?
[11:08] <nagyz> I saw the dev ML archive that had like 50 mails in a year
[11:08] <jamespage> nagyz, but I'd always rather have the bug and explain, than not
[11:08] <jamespage> nagyz, meh - just email on the juju one - that has the right eyes on it
[11:09] <nagyz> :D
[11:09] <nagyz> lol
[11:09] <nagyz> so would doing a keystone proxy charm be considerably harder than the ceph proxy?
[11:09] <nagyz> (is "proxy charm" an official term?)
[11:09] <jamespage> nagyz, well proxy charm is in my head really ;-)
[11:09] <nagyz> found some google reference to it, might have been from you
[11:10] <nagyz> but I think it makes a lot of sense
[11:10] <jamespage> nagyz, but that's what it's doing - implementing the required interfaces, but just proxying out to a different backend service.
[11:10] <nagyz> although our corporate overlords need to give me an OK before I can open-source any of the code I write, which takes months...
[11:10] <jamespage> rather than running them itself...
[11:11] <nagyz> it can skip all installation phases and just has to implement the "answer" part of a relationship I guess
[11:11] <nagyz> (haven't looked at juju internals)
[11:12] <jamespage> nagyz, basically yes
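As an illustration of that 'answer' side, a minimal sketch of the hook a ceph proxy charm might carry; the relation keys (auth, key, ceph-public-address) are assumed from what the existing ceph charm publishes to clients, and the config option names are hypothetical:

    #!/bin/bash
    # hooks/client-relation-joined - hand out the details of an externally managed cluster
    relation-set auth="cephx" \
                 key="$(config-get admin-key)" \
                 ceph-public-address="$(config-get monitor-hosts)"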
[11:12] <nagyz> one last question before I leave to grab a bite
[11:12] <jamespage> nagyz, ok
[11:12] <nagyz> let's say I deploy OpenStack now on trusty from the liberty cloud archive
[11:12] <jamespage> ack
[11:12] <nagyz> once mitaka comes out, how would the upgrade look like using juju?
[11:12] <jamespage> nagyz, oh right
[11:12] <jamespage> two options
[11:12] <jamespage> either - "all units at once"
[11:13] <jamespage> juju set glance openstack-origin=cloud:trusty-mitaka
[11:13] <jamespage> or
[11:13] <jamespage> juju set glance openstack-origin=cloud:trusty-mitaka action-managed-upgrade=True
[11:13] <jamespage> and then do
[11:13] <jamespage> juju action do glance/0 openstack-upgrade
[11:13] <nagyz> just changing the origin wouldn't actually have the charm change the config options (if it got renamed from a to b, for example)
[11:13] <nagyz> ah
[11:13] <nagyz> that's the openstack-upgrade part :)
[11:13] <nagyz> so I can right now test this going from kilo to liberty for example, right?
[11:14] <nagyz> should that work?
[11:14] <jamespage> yes
[11:14] <jamespage> we've had this since folsom :-)
[11:14] <nagyz> cool
[11:14] <nagyz> I'd like to like juju.
[11:14] <nagyz> help me like it.
[11:14] <nagyz> :)
[11:14] <jamespage> nagyz, look for 'openstack-origin' for that behaviour
[11:14] <nagyz> so I could just go, deploy icehouse without the origin parameter, add the parameter to kilo, upgrade, change it to liberty, upgrade...
[11:14] <nagyz> and shouldn't break a thing?
[11:14] <jamespage> 'source' is less well specified - some charms will do an upgrade (rabbitmq-server) - some won't (ceph)
[11:15] <jamespage> nagyz, that's the idea and what we test yes
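A sketch of that step-by-step walk using the openstack-origin values for the trusty cloud archive (glance shown as the example service; each service gets the same treatment):

    # trusty ships icehouse, so the first deploy needs no openstack-origin
    juju set glance openstack-origin=cloud:trusty-kilo      # icehouse -> kilo
    juju set glance openstack-origin=cloud:trusty-liberty   # kilo -> liberty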
[11:15] <jamespage> nagyz, let me dig you out a link
[11:15] <nagyz> are these covered in internal CD/CI tests?
[11:16] <jamespage> nagyz, https://www.openstack.org/summit/tokyo-2015/videos/presentation/canonical-amazing-operations-201-in-place-upgrades-of-openstack-with-running-workloads
[11:16] <jamespage> nagyz, yes
[11:17] <nagyz> ah cool will watch later
[11:17] <jamespage> we also re-verify before every charm release (3 month cadence)
[11:17] <nagyz> I was actually in tokyo but missed this
[11:17] <jamespage> nagyz, wolsen is awesome...
[11:17] <jamespage> nagyz, watch out for godzilla ;)
[11:18] <nagyz> the food in austin will be better
[11:18] <nagyz> :)
[11:18] <jamespage> hah
[11:18] <jamespage> nagyz, I'm hoping to have our charm development upstream under openstack soon
[11:18] <jamespage> nagyz, so we might get some design session time for discussing roadmap for the next 6 months for the charms
[11:18] <nagyz> you mean under openstack's github?
[11:19] <jamespage> nagyz, yeah - git/gerrit workflow...
[11:19] <nagyz> that would actually mean I could contribute code without doing any legal paperwork
[11:19] <nagyz> as we're cleared for all-openstack stuff
[11:19] <jamespage> nagyz, better make that happen quick then :)
[11:19] <nagyz> I need to have this region up by the end of the month
[11:20] <jamespage> nagyz, we sprinted on making the transition smooth last week which was really the final blocker; I just need to complete the review for the infra team and make sure everything is ready to go
[11:20] <jamespage> and all devs know where to go once we migrate!
[11:20] <nagyz> cool
[11:20] <nagyz> looking forward to that
[11:20] <jamespage> yeah me too
[11:21] <nagyz> we need to do the stress test on our ceph first but I can get someone on my team to look into charm internals to assess the proxy
[11:21] <nagyz> so if no work is done yet, we have a fresh start
[11:22] <nagyz> ok, really off to grab a bite - talk to you later and thanks for all the infos!
[11:22] <jamespage> nagyz, tbh you can do your development where you like
[11:22] <jamespage> nagyz, ack -  ttfn
[11:31] <jamespage> apuimedo, right back to reviewing midonet
[11:31] <apuimedo> very well
[11:32] <apuimedo> jamespage: I'm doing a bugfix for midonet-api midonet-agent interaction
[11:32] <jamespage> apuimedo, ok
[11:32] <apuimedo> because the juju-info relation is giving me hostname.domain
[11:32] <apuimedo> and the matching was on just hostname
[11:33] <jamespage> apuimedo, is neutron-agents-midonet usable yet?
[11:33] <apuimedo> so if you have a domain and DNS configured in MAAS, that is problematic
[11:33] <jamespage> apuimedo, oh I'm testing on my laptop under LXD
[11:33] <apuimedo> jamespage: the one that runs on bare-metal and then neutron-api goes inside lxc? Should be
[11:33] <jamespage> it's quicker for reviews....
[11:34] <apuimedo> jamespage: so no dns in your setup, right?
[11:34] <jamespage> nope - everything in LXD containers - it's work in flight right now...
[11:34] <jamespage> apuimedo, simple dns
[11:34] <jamespage> ip/host forward reverse lookup only
[11:34] <apuimedo> ok
[11:34] <apuimedo> well, it should probably work then
[11:35] <apuimedo> I haven't tried the lxd provider, do you have a link to it?
[11:35] <jamespage> apuimedo, midonet-api and midonet-agent are working ok
[11:35] <apuimedo> just to verify, jamespage
[11:35] <jamespage> apuimedo, kinda - right now it's xenial development + juju built from source with a patch...
[11:35] <apuimedo> juju ssh midonet-api/0
[11:35] <apuimedo> FOO=`sudo midonet-cli tunnel-zone list`
[11:36] <apuimedo> crap
[11:36] <apuimedo> wrong command
[11:36] <jamespage> hehe
[11:36] <apuimedo> sudo -i
[11:36] <apuimedo> midonet-cli -e tunnel-zone list
[11:36] <apuimedo> it will give you a uuid
[11:36] <apuimedo> then
[11:37] <apuimedo> midonet-cli -e tunnel-zone uuid_you_got member list
[11:37] <apuimedo> if you have some member, we should be fine
[11:37] <jamespage> I don't have a tunnel zone afaict
[11:37] <apuimedo> otherwise you have the same bug I do with maas
[11:37] <jamespage> $ tunnel-zone list
[11:37] <jamespage> tzone tzone0 name default_tz type gre
[11:37] <jamespage> is that right?
[11:37] <apuimedo> yes
[11:37] <jamespage> oh
[11:38] <jamespage> tzone0?
[11:38] <apuimedo> then tunnel-zone tzone0 member list
[11:38] <apuimedo> tzone0 is an alias
[11:38] <jamespage> zone tzone0 host host0 address 10.0.3.93
[11:38] <apuimedo> ok, so that should be the ip of the compute machine
[11:38] <jamespage> that appears ok
[11:38] <jamespage> yeah - it is
[11:38] <apuimedo> good
[11:38] <jamespage> host list
[11:38] <jamespage> host host0 name juju-2bb878e7-d45e-4422-890e-8778b0aff37c-machine-8 alive true addresses /10.0.3.93,/ffff:ffff:fe80:0:ffff:ffff:fef7:3fff,/127.0.0.1,/0:0:0:0:0:0:0:1,/192.168.122.1
[11:38] <jamespage> that matches
[11:38] <apuimedo> you'll get another member when you add neutron-agents-midonet related to midonet-agent
[11:39] <jamespage> ok
[11:39] <jamespage> doing so now...
[11:39] <apuimedo> for the metadata
[11:39] <apuimedo> otherwise the nova instances are not gonna get metadata for ssh keys and such
[11:41] <jamespage> apuimedo, yeah I observed that already
[11:41] <apuimedo> :-)
[11:41] <jamespage> apuimedo, but was expecting that :-)
[11:41] <apuimedo> jamespage: you're one step ahead
[11:41] <jamespage> ip but not metadata
[11:42] <apuimedo> in case you want to have external connectivity on the instances, I can tell you how to manually add an edge to one of the machines running midolman
[11:42] <apuimedo> I helped alai do it yesterday
[11:42] <jamespage> apuimedo, pls
[11:42] <jamespage> that would be good before we get the gateway charm
[11:42] <apuimedo> :-)
[11:43] <apuimedo> is the floating range 200.200.200.0/24 good for you?
[11:43] <jamespage> i can work with that
[11:43] <apuimedo> okey dokey
[11:44] <apuimedo> so create an external network in neutron
[11:44] <apuimedo> that should automagically create a midonet "provider router" which you can see with midonet-cli -e router list
[11:44] <apuimedo> then
[11:44] <apuimedo> go to the compute node
[11:44] <apuimedo> and do
[11:45] <apuimedo> sudo ip link add type veth
[11:45] <apuimedo> sudo ip link set dev veth0 up
[11:45] <apuimedo> same for veth1
[11:45] <apuimedo> enable sysctl forwarding
[11:46] <apuimedo> sudo sysctl -w net.ipv4.ip_forward=1
[11:46] <apuimedo> sudo iptables -t nat -I POSTROUTING -s 0.0.0.0/0 -d 200.200.200.0/24 -j MASQUERADE
[11:47] <apuimedo> sudo iptables -t nat -I POSTROUTING -s 200.200.200.0/24 -d 0.0.0.0/0 -j MASQUERADE
[11:47] <apuimedo> jamespage: did you get the uuid of the provider router?
[11:49] <apuimedo> I wonder if doing veth creation on lxc containers is problematic though
[11:55] <apuimedo> jamespage: going out for lunch
[11:56] <apuimedo> I'll be back in an hour or so
[12:19] <jamespage> apuimedo|lunch, ack
[12:19] <jamespage> apuimedo|lunch, agents running, host registered - just not part of the tunnel zone yet...
[12:19]  * jamespage pokes more
[12:43] <jamespage> apuimedo|lunch, doing lunch myself now
[12:43] <jamespage> some bits on -agents-midonet
[12:43] <jamespage> the template for /etc/neutron/neutron.conf is missing:
[12:43] <jamespage> [agent]
[12:43] <jamespage> root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[12:43] <jamespage> [oslo_concurrency]
[12:43] <jamespage> lock_path = /var/lock/neutron
[12:44] <jamespage> without which the dhcp agent is not able to create and manage namespaces
[12:44] <jamespage> the unit running -agents-midonet is listed in 'host list' but is not registered in to the tunnel-zone
[12:44] <jamespage> I'm running with kilo (and not mem)
[12:47] <jamespage> apuimedo|lunch, exported bundle from my env - http://paste.ubuntu.com/14865744/
[13:13] <apuimedo|lunch> that looks small
[13:13] <apuimedo|lunch> jamespage: no config?
[13:14] <apuimedo|lunch> ah yes, there is on some
[13:32] <apuimedo|lunch> jamespage: oh, I added that to the neutron.conf template on friday, I may have forgotten to push
[13:38] <apuimedo|lunch> that's odd
[13:39] <apuimedo|lunch> jamespage: can you send me the logs for the midonet-api unit?
[13:45] <jamespage> apuimedo|lunch, yup
[13:47] <jamespage> apuimedo|lunch, http://paste.ubuntu.com/14866135/
[13:57] <apuimedo> thanks
[14:02] <apuimedo> odd, I only see one ADD_MEMBER
[14:03] <apuimedo> and one nice failure
[14:05] <apuimedo> it seems zookeeper went down and then recovered
[14:56] <jose> marcoceppi_: busy atm?
[15:01] <jamespage> apuimedo, what do I need to poke to get things functional?
[15:02] <apuimedo> jamespage: well, we can manually add the neutron-agents-midonet host to the tunnel zone
[15:03] <apuimedo> midonet-cli -e tunnel-zone uuid_of_the_tunnel_zone add member host uuid_of_the_host address ip_address_of_the_host
[15:04] <apuimedo> did you get to do the iptables and veth creation I told you about before?
[15:06] <jamespage> apuimedo, just getting to that now
[15:08] <apuimedo> ok
[15:20] <jamespage> apuimedo, veth in containers is fine - it's all namespaced OK
[15:20] <jamespage> apuimedo, done those setup steps
[15:23] <apuimedo> cool
[15:24] <jamespage> apuimedo, I'm basically doing this - https://www.rdoproject.org/networking/midonet-integration/
[15:24] <jamespage> ?
[15:24] <apuimedo> without the bridge
[15:24] <apuimedo> since it is not necessary
[15:24] <apuimedo> did you create an external neutron network?
[15:25] <jamespage> apuimedo, yes - I can see the router in midonet-cli, but not in neutron - is that right?
[15:25] <apuimedo> it is :-)
[15:25] <apuimedo> it's like a parent router
[15:25] <jamespage> okay
[15:26] <apuimedo> midonet-cli -e router uuid_of_the_provider_router add port address 200.200.200.2 net 200.200.200.0/24
[15:26] <apuimedo> this will give you a uuid
[15:26] <jamespage> apuimedo, done
[15:27] <apuimedo> midonet-cli -e host uuid_of_the_compute_node_in_host_list add binding port router router_uuid port previous_uuid interface veth1
[15:27] <apuimedo> after this command, from nova-compute/0, you should be able to ping 200.200.200.2
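Pulling the manual edge setup from this conversation together, roughly; the 200.200.200.0/24 range, veth names and UUID placeholders are just the values used above, and this is only a stop-gap sketch until the gateway charm exists:

    # on the compute/midolman host: veth pair, forwarding and NAT
    sudo ip link add type veth
    sudo ip link set dev veth0 up
    sudo ip link set dev veth1 up
    sudo ip addr add 200.200.200.1/24 dev veth0
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -I POSTROUTING -s 0.0.0.0/0 -d 200.200.200.0/24 -j MASQUERADE
    sudo iptables -t nat -I POSTROUTING -s 200.200.200.0/24 -d 0.0.0.0/0 -j MASQUERADE
    # on the midonet-api unit: port on the provider router, bound to veth1
    midonet-cli -e router <provider_router_uuid> add port address 200.200.200.2 net 200.200.200.0/24
    midonet-cli -e host <compute_host_uuid> add binding port router <provider_router_uuid> port <new_port_uuid> interface veth1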
[15:36] <jamespage> apuimedo, hmm - I may have an issue with cassandra - one second
[15:37] <apuimedo> ok
[15:43] <jamespage> apuimedo, hmm so midonet-agent is not configuring the connection to cassandra with credentials
[15:44] <apuimedo> no
[15:44] <jamespage> apuimedo, I can turn auth off on cassandra
[15:45] <jamespage> its on by default
[15:45] <jamespage> one sec
[15:45] <apuimedo> Didn't I send you the charm config for Cassandra?!
[15:47] <jamespage> apuimedo, erm no
[15:47] <jamespage> at least I don't think so
[15:47] <apuimedo> mmm
[15:47] <apuimedo> let me check
[16:44] <jamespage> apuimedo, ok re-deployed - still not using mem - but the config in midonet-cli looked good and I can see midolman logging port events on the edges...
[16:44] <jamespage> that said no cigar on the ping yet
[16:44] <apuimedo> mmm
[16:45] <apuimedo> ip -4 route on the host where there's the veths
[16:46] <jamespage> default via 10.0.3.1 dev eth0
[16:46] <jamespage> 10.0.3.0/24 dev eth0  proto kernel  scope link  src 10.0.3.5
[16:46] <jamespage> 192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1
[16:46] <jamespage> apuimedo, ^^
[16:46] <jamespage> apuimedo, I saw:
[16:46] <jamespage> 2016.02.03 16:46:05.339 INFO  [midolman-akka.actor.default-dispatcher-5] datapath-control -  Datapath port veth1 added
[16:46] <jamespage> 2016.02.03 16:46:05.339 INFO  [midolman-akka.actor.default-dispatcher-5] datapath-control -  Port 4/veth1/a8d30ec9-e44d-42fa-9d90-0d02203581cf became active
[16:47] <apuimedo> jamespage: you are missing a link scope route for 200.200.200.1
[16:47] <apuimedo> did you do
[16:47] <apuimedo> ip addr add 200.200.200.1/24 dev veth0?
[16:48] <jamespage> apuimedo, erm I have now
[16:48] <jamespage> missed that - apologies
[16:48] <apuimedo> no problem ;-)
[16:49] <apuimedo> oh, add a subnet with 200.200.200.0/24 to the public net
[16:49] <jamespage> apuimedo, already done
[16:50] <apuimedo> can you ping 200.200.200.2
[16:51] <jamespage> apuimedo, nope
[16:51] <jamespage> should I be able to see that bound anywhere on the server?
[16:51] <apuimedo> mmm
[16:51] <apuimedo> on midonet-cli
[16:52] <apuimedo> create a router
[16:52] <apuimedo> in neutron
[16:53] <jamespage> apuimedo, plugged into external and internal networks?
[16:53] <apuimedo> yes
[16:54] <apuimedo> but even so, we should have ping already; yesterday with alai we already had it at this step
[16:54] <apuimedo> I feel like we probably missed something
[16:54] <apuimedo> iptables -n -L -t nat
[16:55] <apuimedo> jamespage: oh, and
[16:55] <alai> yes thanks apuimedo, I think we are getting very close
[16:55] <apuimedo> midonet-cli router provider_router_uuid port list
[16:55] <apuimedo> maybe we forgot to bind the right address
[16:57] <jamespage> apuimedo, how do I unbind the previous bound router port?
[16:57] <apuimedo> why, the current one doesn't have an address, jamespage?
[16:58] <jamespage> apuimedo, so i created the router in neutron, and plugged it in
[16:58] <jamespage> router router0 name MidoNet Provider Router state up
[16:58] <jamespage> router router1 name public-router state up infilter chain6 outfilter chain7
[16:58] <apuimedo> that's good
[16:58] <jamespage> I now have extra ports for the neutron created router
[16:59] <apuimedo> can you show me the output of the provider router port list?
[16:59] <jamespage> apuimedo,
[16:59] <jamespage> port port8 device router1 state up plugged no mac ac:ca:ba:72:6f:7c address 200.200.200.2 net 169.254.255.0/30 peer router0:port2
[16:59] <jamespage> port port10 device router1 state up plugged no mac ac:ca:ba:7b:6c:76 address 192.168.21.1 net 192.168.21.0/24 peer bridge1:port0
[16:59] <jamespage> port port9 device bridge1 state up plugged no peer router1:port1
[17:00] <jamespage> apuimedo, erm maybe
[17:00] <apuimedo> mmm
[17:00] <jamespage> apuimedo, I'm not finding midonet-cli very discoverable
[17:01] <jamespage> apuimedo, ah I also have to dial into another call...
[17:01] <jamespage> biab
[17:01] <apuimedo> jamespage: yes, midonet-cli takes a while to get used to
[17:01] <apuimedo> ok, we can continue later/tomorrow
[19:08] <agunturu> Is it possible to get the list of parameters to a JUJU action?
[19:09] <agunturu> The “juju action defined” is listing the actions, but not the parameters.
[19:09] <agunturu> ubuntu@juju:~/mwc16charms/trusty/clearwater-juju$ juju action defined ims-a
[19:09] <agunturu> create-user: Create a user.
[19:09] <agunturu> delete-user: Delete a user.
[19:33] <marcoceppi_> agunturu: juju action defined --schema
[19:35] <agunturu> Hi marcoceppi_. Thanks that works
[19:36] <marcoceppi_> agunturu: cheers!
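For completeness, a sketch of the workflow described above; the create-user parameters shown are purely hypothetical, since --schema reports whatever the charm actually defines:

    juju action defined ims-a --schema                                  # list actions with their parameters
    juju action do ims-a/0 create-user username=alice password=secret   # hypothetical parameter names
    juju action fetch <action-id>                                       # retrieve the result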
[22:02] <firl> lazypower|summit you around?
[22:55] <marcoceppi_> firl: we're GMT+1 atm, he might be in bed
[22:56] <firl> marcoceppi_: gotcha thanks!