/srv/irclogs.ubuntu.com/2016/02/03/#juju.txt

=== Guest80711 is now known as med_
yuanyoujamespage:  Hi, I want to get a config option value from another charm. I use config('ext-port') but can't get the value from the neutron-gateway charm. How can I get this "ext-port" value from neutron-gateway? Thanks02:35
=== yuanyou_ is now known as yuanyou
=== coreycb` is now known as coreycb
jamespageyuanyou, hey - config is always scoped to a specific charm - it's possible to distribute that across relations between charms, but you'd have to do that explicitly - what's your use case?09:26
nagyzjamespage, did you see my question from yesterday? I might have missed your reply as my client got disconnected apparently for a bit09:35
deanmanHello trying to run juju inside an ubuntu wily64 VM behind proxy10:24
deanmanI've configured the proxy inside environments.yaml and when trying to use the local provider to deploy a simple redis service I get an error  "DEBUG httpbakery client.go:226 } -> error <nil>". Any hints?10:25
jamespagenagyz, hey - my irc was on and off whilst travelling - ask me again :-)10:28
=== axino` is now known as axino
apuimedo|awayjamespage: are you back from the travel?10:34
jamespageapuimedo|away, I am yes10:35
apuimedo|away:-)10:35
apuimedo|awayjamespage: https://code.launchpad.net/~celebdor/charms/trusty/nova-cloud-controller/liberty/+merge/283709 reminder :P10:36
jamespageapuimedo|away, looking10:36
jamespageapuimedo|away, did you see my comments in midonet-api on the review bug?10:36
apuimedo|awaynot yet, I'll check it after I finish the current meeting10:37
jamespageapuimedo|away, oh - right - that merge includes shared-secret...10:38
nagyzjamespage, I'd like to use juju to deploy openstack but I already have a ceph cluster up and running which was not done using juju - is there a way to deploy the ceph charms and point them to the existing installation?10:39
jamespagenagyz, no - sorry - that's not possible10:40
nagyzI guess it's also not possible to deploy a new availability zone but instead of deploying keystone and horizon point them to an already existing one?10:40
jamespageits a popular request and its probably not that much work to figure out a proxy charm that implements the client interface of the ceph charm10:40
jamespagebut its not been written by anyone yet...10:40
jamespagenagyz, az in which context? nova?10:41
nagyzyeah10:41
jamespageit's possible to do multiple regions with the charms with a single keystone and horizon10:41
jamespageaz is adjunct to that concept - its just a grouping of servers within a region...10:41
nagyzwhat I meant is that we already have openstack up and running and we want to add a new az that I wanted to deploy using juju10:41
jamespagenagyz, do you mean region or az?10:42
nagyzah, sorry, right, I meant region.10:43
nagyzthe only shared components between regions are keystone and horizon, right?10:43
nagyzso is it possible to deploy everything except keystone and horizon with juju, and for those just point them to the existing installation?10:44
jamespagenagyz, ok - so all of the charms have a 'region' setting; you have to deploy new instances of all charms with the region set to the new region name - horizon just relates to keystone so that should just dtrt; keystone you can specify multiple regions10:44
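The per-region deployment described above can be sketched as a few juju 1.x commands. This is a hypothetical illustration, not a tested recipe: the service aliases and region name are invented, and only two of the charms that would need redeploying are shown.

```shell
# Hypothetical sketch of adding a second region against a shared keystone
# (juju 1.x syntax; service aliases and the region name are examples):
juju deploy nova-cloud-controller ncc-region2
juju deploy glance glance-region2
juju set ncc-region2 region=RegionTwo
juju set glance-region2 region=RegionTwo
# relate the new region's services to the single existing keystone,
# which registers their endpoints under the new region name
juju add-relation ncc-region2 keystone
juju add-relation glance-region2 keystone
```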
jamespagenagyz, oh right - so the proxy question again in a different context - no that's not possible10:44
nagyzright, but when I add a keystone relation so the different charms can register in the endpoints, they need to proxy10:44
nagyzah, right.10:44
nagyzok, so juju is only for greenfield deployments, I see :)10:44
jamespagenagyz, yes10:45
jamespagesorry10:45
jamespagenot a retro fit...10:45
nagyzso I'd need to figure out how to migrate the current data over...10:45
nagyzwhich is not going to happen on the ceph side (100TB+)10:45
nagyzso there goes juju for me I assume :( :)10:45
jamespagenagyz, yah - that's quite a bit of data...10:46
jamespagenagyz, lemme check on the proxy charm thing10:46
nagyzthe new one we're about to deploy is ~2PB which I expect the pesky users to fill up quickly :-)10:46
jamespageicey, cholcombe: ^^ I know we discussed proxying an existing ceph deployment into a juju deployed openstack cloud - have either of you done any work in that area?10:46
nagyzor I guess we could write the proxy charm ourselves.10:46
jamespagenagyz, that's def possible10:47
iceyjamespage nagyz: I haven't done any work on that yet10:47
jamespagenagyz, we'd love to support you in that effort if you decide to go that route...10:47
nagyzseems like every project I touch I end up writing code for - same happened to OpenStack itself.10:47
jamespagenagyz, welcome to open source!10:47
nagyzhah10:47
nagyzwith the current ceph charms is it possible to deploy the mons separately?10:48
jamespagenagyz, yes10:48
jamespagein fact we have a bit of a redesign in flight for that10:48
nagyzI know there is ceph-osd which only adds osds to existing clusters but looked to me like the ceph charm installs both the mon and the osd code10:48
jamespagenagyz, the ceph charm is a superset of ceph-osd - but you can run it with no osd-devices configuration, so it just does the mon10:48
nagyzand sets it up10:48
nagyzah, got it10:48
nagyzand then just keep adding ceph-osds10:48
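The mon-only pattern just described can be sketched as follows. This is an illustrative outline only: required ceph charm options such as fsid and monitor-secret are omitted for brevity, and the device path is an example.

```shell
# Sketch: ceph charm with no osd-devices runs only the mons;
# ceph-osd units then provide the storage (juju 1.x syntax).
# Other required ceph options (fsid, monitor-secret) omitted here.
juju deploy -n 3 ceph            # no osd-devices set: mon-only
juju deploy ceph-osd
juju set ceph-osd osd-devices="/dev/sdb"   # example device
juju add-relation ceph-osd ceph
juju add-unit ceph-osd           # keep adding OSD units as needed
```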
jamespagenagyz, icey has been working on a new ceph-mon charm, chopping out the osd support from ceph and simplifying the charm10:49
nagyzare you aware that if I deploy using 'osd-devices: a' and then change it to 'osd-devices: a b' then it doesn't work? :-)10:49
nagyzit wants to re-run ceph-deploy on a, which fails10:49
jamespagenagyz, osd-devices is quite blunt10:49
nagyzso I cannot add drives inside an osd once deployed?10:49
jamespagenagyz, oh wait - that should not happen - the charm should detect that a is in use and just skip it10:49
jamespagenagyz, please file a bug for that10:49
jamespagedef a regression if that is the case...10:50
nagyzok, in my -limited- testing, this didn't work. I'll retest and see10:50
* jamespage wonders if icey broke my code...10:50
nagyzfor quick setup and teardown I'm using juju now to test stuff10:50
iceyjamespage: I don't think so...10:50
nagyzbut ofcourse juju itself has problems with my bonded maas 1.9 network10:50
jamespageicey, just kidding ;-)10:50
nagyz50% of deployments fail10:50
iceyyeah yeah10:50
nagyzthanks jamespage icey for the info10:51
jamespagenagyz, oh that sounds nasty - again if you're hitting specific issues please raise bugs on juju as well - you sound like you're right on the edge of feature support and we're working towards 16.04 in three months...10:51
jamespageso any feedback on latest juju + maas 1.9 with network bonding vlans etc... is super useful right now10:51
nagyzyeah we're going to stick to 14.04 for the next 6+ months after 16.04 is released tho :P10:51
nagyzright I've been opening maas bugs left and right10:52
jamespagenagyz, ack - you and alot of people10:52
jamespagenagyz, i think most users take the new lts at release into testing and then go to production in that type of timescale...10:52
nagyzagreed - we'll do the same10:52
jamespagetesttesttesttesttest10:52
* jamespage apologises for the mantra...10:53
jamespagenagyz, if you have other question either ask here or on the juju ML10:53
nagyzwill do10:53
jamespageI try to watch both ;-)10:53
jamespageapuimedo|away, hey - so I have most of midonet deployed apart from the host-agents bit10:54
jamespageapuimedo|away, I was surprised that my instance got a dhcp IP address even without that - does midolman do something clever on the compute node?10:54
apuimedo|awayjamespage: you mean apart of the neutron-agents-midonet?10:55
jamespageapuimedo|away, yeah10:55
=== apuimedo|away is now known as apuimedo
apuimedojamespage: MidoNet dhcp driver only does a noop and sets a namespace for the metadata driver10:56
apuimedojamespage: MidoNet serves dhcp itself :P10:56
jamespageapuimedo, ah!10:56
apuimedoit's much comfier10:56
jamespageso that comes from midonet-api or midolman?10:56
nagyzone more question on the network side now that you guys are talking about it: let's say I have eth0 deployed on 10.1.0.0/16 and eth1 deployed on 10.2.0.0/16 - is it possible to tell juju to use one subnet for exposing networks and the other just for spinning up containers for example?10:56
apuimedoand with the new release, we also provide metadata from the agent10:56
apuimedomidolman10:56
nagyzis there a juju network doc that I could read about the maas integration?10:56
jamespagenagyz, not quite...10:57
jamespageits coming with juju 2.010:57
apuimedobut juju will still take a while to catch up with the new MidoNet release10:57
jamespage'network spaces'10:57
jamespagelemme dig you out a reference10:57
jamespageapuimedo, hmm ok10:57
nagyzjamespage, maas introduced fabric and space but it's very confusing even for someone with good network experience10:57
apuimedojamespage: ryotagami will be the one adding the v5 support10:57
nagyzjamespage, the wording is not very exact in the maas docs10:57
jamespagenagyz, I'll provide them that feedback10:57
jamespagehmm - none in channel10:57
jamespagenagyz, fabric10:58
jamespagenagyz, sorry - I know there are some documentation updates in flight - MAAS 1.9 is still fairly new10:58
jamespagenagyz, broadly10:58
nagyzI guess one is L2 the other is L3 separation10:58
jamespagesubnet >---- space10:58
jamespageso multiple subnets in a space10:59
jamespagea space is a collection of subnets (l2 network segments) with equivalent security profile10:59
apuimedojamespage: which public-address will get you then?10:59
jamespageso thing DMZ, APPS, PCI for space10:59
apuimedoand which private-address10:59
jamespageapuimedo, oh - not got that far yet :-)10:59
apuimedoin hookenv10:59
apuimedojamespage: I was thinking hard about that problem before going to bed10:59
jamespagesorry, too many conversations11:00
* jamespage reads backscroll11:00
apuimedojamespage: I was just joining your conversation with nagyz11:00
jamespageapuimedo, well private-address almost becomes obsolete11:00
apuimedobecause it will prolly affect openstack charms deployment11:00
jamespageit still exists as really the IP address on the network with the default route11:00
apuimedoyou'll probably have a management network11:00
jamespageapuimedo, oh it will11:01
apuimedoand a data network11:01
jamespageapuimedo, most charms support that with config11:01
nagyzright11:01
apuimedoso if I want to future proof11:01
jamespageapuimedo, we're working on network space support across the charms in the run up to 16.0411:01
apuimedoI need to get from my charms the ip on a specific maas network11:01
nagyzso how is it currently done? can I declare the different subnets and have MAAS manage DHCP+DNS on them and juju use it?11:01
jamespagewhen that feature GA's in juju11:01
jamespageapuimedo, ok - so there will be some additional hook tooling for this11:01
apuimedonagyz: so does maas already provide dnsmasq for more than a net?11:01
jamespageapuimedo, and probably some extra metadata stanzas - that's still in flux11:02
apuimedojamespage: good. I'll be looking forward to see it then11:02
nagyzapuimedo, good question actually - it still could be buggy.11:02
apuimedothanks11:02
nagyzapuimedo, even with one subnet, maas dns is broken for me11:02
nagyz(I promised to open bugs)11:02
apuimedonagyz: in which way?11:02
jamespageapuimedo, basically its 'network-get -r <binding> --primary-address'11:02
jamespagethink binding == management network or data network for tenant traffic11:03
apuimedowell, alai did mention yesterday that on some lxc containers she was not getting "search " in /etc/resolv.conf11:03
nagyzwhen doing the initial enlistment the node gets a DHCP IP which is registered to DNS, but then when a new node wants to enlist it gets the same IP which already has a DNS record so it gets back a bad request for the enlistment rest call11:03
jamespagebut services are bound to spaces, so if units end up on different subnets within a space, they still get the right IP for the local subnet11:03
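The hook tooling quoted above can be sketched in a hook as below. Note the syntax was explicitly "still in flux" at the time, and the binding name 'internal' is an invented example.

```shell
# Sketch of the then-proposed network spaces hook tooling
# ('internal' is a hypothetical binding name, syntax was in flux):
ADDR=$(network-get -r internal --primary-address)
# a charm would then advertise that space-local address on relations:
relation-set private-address="$ADDR"
```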
nagyzwhich breaks enlistment so breaks maas totally11:03
nagyzso the DHCP record <> DNS record sync is quite flaky11:03
apuimedonagyz: that sounds very strange. It would be more of a dnsmasq bug, and those don't come often11:04
nagyzwhy would it be dnsmasq?11:04
apuimedonagyz: I got once the setting screwed up11:04
jamespagenagyz, hmm - that sounds like a bug or some sort of configuration issue - I've not seen that problem in our lab11:04
apuimedoso I had to destroy the environment and recreate it11:04
jamespageapuimedo, its isc-dhcp server in MAAS (not dnsmasq)11:04
nagyzI'd LOVE to use maas's built in DNS instead of writing a manual designate syncer script11:04
nagyzI've found an isc-dhcp-server bug when using multiple subnets in trusty...11:04
apuimedooh, then I misremember badly11:04
nagyzit's pending confirmed, triaged11:04
nagyzso I'm down to one subnet.11:05
apuimedoI had such a funny one. I did juju ssh foo11:05
apuimedoand juju was taking me to the `bar` machine11:05
nagyzbut even then if I right now flip it from DHCP to DHCP+DNS, it just kills my environment fully - no more commissioning or deploy possible11:05
nagyzand I don't want to reenlist the 100+ nodes :)11:05
nagyz(vm snapshots ftw)11:05
jamespagenagyz, suffice to say lots of improvements in managing DNS across multiple subnets using MAAS - if there are isc-dhcp bugs, we're fortunate that there is lots of expertise both in the MAAS team and across the ubuntu distro to resolve that...11:06
nagyzjamespage, right I think the bug only affects the version shipped in trusty11:06
nagyzI can dig out the bug if you want. :)11:06
nagyzhttps://bugs.launchpad.net/maas/+bug/152161811:07
mupBug #1521618: wrong subnet in DHCP answer when multiple networks are present <MAAS:Triaged> <isc-dhcp (Ubuntu):Confirmed> <https://launchpad.net/bugs/1521618>11:07
nagyzhere it is11:07
nagyzbasically I can't even do PXE boot as it mixes up the subnet infos11:07
jamespagenagyz, please raise bugs and mail the juju and MAAS mailing lists with any probs - feedback is a hugely valuable part of our development process.11:07
nagyza lot of times I open bugs then I get an invalid and blake explains to me that it's not the right way to hold it ;-)11:07
jamespage'affects me too' with a comment is always good to help with priorities on existing bugs....11:07
jamespagenagyz, hehe - ok - sometimes a post to the ML first is a nice idea11:08
nagyzis the ML active for maas?11:08
nagyzI saw the dev ML archive that had like 50 mails in a year11:08
jamespagenagyz, but I'd always rather have the bug and explain that no11:08
jamespagenagyz, meh - just email on the juju one - that has the right eyes on it11:08
nagyz:D11:09
nagyzlol11:09
jamespageno/not11:09
nagyzso would doing a keystone proxy charm be considerably harder than the ceph proxy?11:09
nagyz(is "proxy charm" an official term?)11:09
jamespagenagyz, well proxy charm is in my head really ;-)11:09
nagyzfound some google reference to it, might have been from you11:09
nagyzbut I think it makes a lot of sense11:10
jamespagenagyz, but that's what its doing - implementing the required interfaces, but just proxying out to a different backend service.11:10
nagyzalthough our corporate overlords need to give me an OK before I can opensource any of the code I write, which takes months...11:10
jamespagerather than running them itself...11:10
nagyzit can skip all installation phases and just has to implement the "answer" part of a relationship I guess11:11
nagyz(haven't looked at juju internals)11:11
jamespagenagyz, basically yes11:12
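The "answer part of a relationship" idea can be sketched as a single hook script. This is a hypothetical illustration of a ceph proxy charm hook: the relation name, config option names, and data keys are invented for the example; only `config-get` and `relation-set` are real juju hook tools.

```shell
#!/bin/bash
# hooks/client-relation-joined -- hypothetical ceph "proxy charm" hook:
# no ceph is installed; the charm just hands out connection details of
# an existing external cluster, taken from its own charm config.
# (Option and key names below are illustrative, not the real interface.)
relation-set auth="$(config-get auth-supported)" \
             key="$(config-get admin-key)" \
             ceph-public-address="$(config-get monitor-hosts)"
```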
nagyzone last question before I leave to grab a bite11:12
jamespagenagyz, ok11:12
nagyzlet's say I deploy ubuntu now on trusty from the liberty cloud archive11:12
jamespageack11:12
nagyzonce mitaka comes out, how would the upgrade look like using juju?11:12
jamespagenagyz, oh right11:12
jamespagetwo options11:12
jamespageeither - "all units at once"11:12
jamespagejuju set glance openstack-origin=cloud:trusty-mitaka11:13
jamespageor11:13
jamespagejuju set glance openstack-origin=cloud:trusty-mitaka action-managed-upgrade=True11:13
jamespageand then do11:13
jamespagejuju action do glance/0 openstack-upgrade11:13
nagyzjust changing the origin wouldn't actually have the charm change the config options (if it got renamed from a to b, for example)11:13
nagyzah11:13
nagyzthat's the openstack-upgrade part :)11:13
nagyzso I can right now test this going from kilo to liberty for example, right?11:13
nagyzshould that work?11:14
jamespageyes11:14
jamespagewe've had this since folsom :-)11:14
nagyzcool11:14
nagyzI'd like to like juju.11:14
nagyzhelp me like it.11:14
nagyz:)11:14
jamespagenagyz, look for 'openstack-origin' for that behaviour11:14
nagyzso I could just go, deploy icehouse without the origin parameter, add the parameter to kilo, upgrade, change it to liberty, upgrade...11:14
nagyzand shouldn't break a thing?11:14
jamespage'source' is less well specified - some charms will do an upgrade (rabbitmq-server) - some won't (ceph)11:14
jamespagenagyz, that's the idea and what we test yes11:15
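The two upgrade styles from the exchange above, consolidated into one stepped path (glance as the example service, repeated per charm; this mirrors the commands jamespage quoted rather than adding anything new):

```shell
# Stepped cloud-archive upgrades, one release at a time (juju 1.x):
juju set glance openstack-origin=cloud:trusty-kilo      # icehouse -> kilo
# wait for hooks to settle and verify, then:
juju set glance openstack-origin=cloud:trusty-liberty   # kilo -> liberty

# or, for unit-by-unit operator control:
juju set glance action-managed-upgrade=True
juju set glance openstack-origin=cloud:trusty-liberty
juju action do glance/0 openstack-upgrade
```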
jamespagenagyz, let me dig you out a link11:15
nagyzare these covered in internal CD/CI tests?11:15
jamespagenagyz, https://www.openstack.org/summit/tokyo-2015/videos/presentation/canonical-amazing-operations-201-in-place-upgrades-of-openstack-with-running-workloads11:16
jamespagenagyz, yes11:16
nagyzah cool will watch later11:17
jamespagewe also re-verify before every charm release (3 month cadence)11:17
nagyzI was actually in tokyo but missed this11:17
jamespagenagyz, wolsen is awesome...11:17
jamespagenagyz, watch out for godzilla ;)11:17
nagyzthe food in austin will be better11:18
nagyz:)11:18
jamespagehah11:18
jamespagenagyz, I'm hoping to have our charm development upstream under openstack soon11:18
jamespagenagyz, so we might get some design session time for discussing roadmap for the next 6 months for the charms11:18
nagyzyou mean under openstack's github?11:18
jamespagenagyz, yeah - git/gerrit workflow...11:19
nagyzthat would actually mean I could contribute code without doing any legal paperwork11:19
nagyzas we're cleared for all-openstack stuff11:19
jamespagenagyz, better make that happen quick then :)11:19
nagyzI need to have this region up by the end of the month11:19
jamespagenagyz, we sprinted on making the transition smooth last week which was really the final blocker; I just need to complete the review for the infra team and make sure everything is ready to go11:20
jamespageand all devs know where to go once we migrate!11:20
nagyzcool11:20
nagyzlooking forward to that11:20
jamespageyeah me too11:20
nagyzwe need to do the stress test on our ceph first but I can get someone on my team to look into charm internals to assess the proxy11:21
nagyzso if no work is done yet, we have a fresh start11:21
nagyzok, really off to grab a bite - talk to you later and thanks for all the infos!11:22
jamespagenagyz, tbh you can do your development where you like11:22
jamespagenagyz, ack -  ttfn11:22
=== jesse_ is now known as randleman
jamespageapuimedo, right back to reviewing midonet11:31
apuimedovery well11:31
apuimedojamespage: I'm doing a bugfix for midonet-api midonet-agent interaction11:32
jamespageapuimedo, ok11:32
apuimedobecause the juju-info relation is giving me hostname.domain11:32
apuimedoand the matching was on just hostname11:32
jamespageapuimedo, is neutron-agents-midonet usable yet?11:33
apuimedoso if you have a domain and dns configured maas, that is problematic11:33
jamespageapuimedo, oh I'm testing on my laptop under LXD11:33
apuimedojamespage: the one that runs on bare-metal and then neutron-api goes inside lxc? Should be11:33
jamespageits quicker for reviews....11:33
apuimedojamespage: so no dns in your setup, right?11:34
jamespagenope - everything in LXD containers - its work inflight right now...11:34
jamespageapuimedo, simple dns11:34
jamespageip/host forward reverse lookup only11:34
apuimedook11:34
apuimedowell, it should probably work then11:34
apuimedoI haven't tried the lxd provider, do you have a link to it?11:35
jamespageapuimedo, midonet-api and midonet-agent are working ok11:35
apuimedojust to verify, jamespage11:35
jamespageapuimedo, kinda - right now its xenial development + juju built from source with a patch...11:35
apuimedojuju ssh midonet-api/011:35
apuimedoFOO=`sudo midonet-cli tunnel-zone list`11:35
apuimedocrap11:36
apuimedowrong command11:36
jamespagehehe11:36
apuimedosudo -i11:36
apuimedomidonet-cli -e tunnel-zone list11:36
apuimedoit will give you a uuid11:36
apuimedothen11:36
apuimedomidonet-cli -e tunnel-zone uuid_you_got member list11:37
apuimedoif you have some member, we should be fine11:37
jamespageI don't have a tunnel zone afaict11:37
apuimedootherwise you have the same bug I do with maas11:37
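The verification steps apuimedo dictated, consolidated (run on the midonet-api unit; the UUID placeholder comes from the first command's output):

```shell
# Check that the compute host registered into the tunnel zone:
sudo midonet-cli -e tunnel-zone list
# substitute the UUID (or alias like tzone0) reported above:
sudo midonet-cli -e tunnel-zone $TZONE_UUID member list
# an empty member list means the host never joined the zone
```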
jamespage$ tunnel-zone list11:37
jamespagetzone tzone0 name default_tz type gre11:37
jamespageis that right?11:37
apuimedoyes11:37
jamespageoh11:37
jamespagetzone0?11:38
apuimedothen tunnel-zone tzone0 member list11:38
apuimedotzone0 is an alias11:38
jamespagezone tzone0 host host0 address 10.0.3.9311:38
apuimedook, so that should be the ip of the compute machine11:38
jamespagethat appears ok11:38
jamespageyeah - it is11:38
apuimedogood11:38
jamespagehost list11:38
jamespagehost host0 name juju-2bb878e7-d45e-4422-890e-8778b0aff37c-machine-8 alive true addresses /10.0.3.93,/ffff:ffff:fe80:0:ffff:ffff:fef7:3fff,/127.0.0.1,/0:0:0:0:0:0:0:1,/192.168.122.111:38
jamespagethat matches11:38
apuimedoyou'll get another member when you add neutron-agents-midonet related to midoent-agent11:38
jamespageok11:39
jamespagedoing so now...11:39
apuimedofor the metadata11:39
apuimedootherwise the nova instances are not gonna get metadata for ssh keys and such11:39
jamespageapuimedo, yeah I observed that already11:41
apuimedo:-)11:41
jamespageapuimedo, but was expecting that :-)11:41
apuimedojamespage: you're one step ahead11:41
jamespageip but not metadata11:41
apuimedoin case you want to have external connectivity on the instances, I can tell you how to manually add an edge to one of the machines running midolman11:42
apuimedoI helped alai do it yesterday11:42
jamespageapuimedo, pls11:42
jamespagethat would be good before we get the gateway charm11:42
apuimedo:-)11:42
apuimedois the floating range 200.200.200.0/24 good for you?11:43
jamespagei can work with that11:43
apuimedookey dokey11:43
apuimedoso create an external network in neutron11:44
apuimedothat should automagically create a midonet "provider router" which you can see with midonet-cli -e router list11:44
apuimedothen11:44
apuimedogo to the compute node11:44
apuimedoand do11:44
apuimedosudo ip link add type veth11:45
apuimedosudo ip link set dev veth0 up11:45
apuimedosame for veth111:45
apuimedoenable sysctl forwarding11:45
apuimedosysctl -w net.ipv4.ip_forward11:46
apuimedosudo iptables -t nat -I POSTROUTING -s 0.0.0.0/0 -d 200.200.200.0/24 -j MASQUERADE11:46
apuimedosudo iptables -t nat -I POSTROUTING -s 200.200.200.0/24 -d 0.0.0.0/0 -j MASQUERADE11:47
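The edge-setup steps above, consolidated for reference (run on the machine hosting midolman; 200.200.200.0/24 is the example floating range used here). Note the transcript's sysctl line lacked the `=1`, without which forwarding is read rather than enabled; the address assignment to veth0 only comes up later in the conversation.

```shell
# Manual edge setup on the midolman host (consolidated from above):
sudo ip link add type veth                    # creates veth0 <-> veth1
sudo ip link set dev veth0 up
sudo ip link set dev veth1 up
sudo ip addr add 200.200.200.1/24 dev veth0   # added later in the log
sudo sysctl -w net.ipv4.ip_forward=1          # transcript omitted '=1'
sudo iptables -t nat -I POSTROUTING -s 0.0.0.0/0 -d 200.200.200.0/24 -j MASQUERADE
sudo iptables -t nat -I POSTROUTING -s 200.200.200.0/24 -d 0.0.0.0/0 -j MASQUERADE
```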
apuimedojamespage: did you get the uuid of the provider router?11:47
apuimedoi wonder if doing veth creation on lxc containers is problematic though11:49
apuimedojamespage: going out for lunch11:55
apuimedoI'll be back in an hour or so11:56
=== apuimedo is now known as apuimedo|lunch
jamespageapuimedo|lunch, ack12:19
jamespageapuimedo|lunch, agents running, host registered - just not part of the tunnel zone yet...12:19
* jamespage pokes more12:19
jamespageapuimedo|lunch, doing lunch myself now12:43
jamespagesome bits on -agents-midonet12:43
jamespagethe template for /etc/neutron/neutron.conf is missing:12:43
jamespage[agent]12:43
jamespageroot_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf12:43
jamespage[oslo_concurrency]12:43
jamespagelock_path = /var/lock/neutron12:43
jamespagewithout which the dhcp agent is not able to create and manage namespaces12:44
jamespagethe unit running -agents-midonet is listed in 'host list' but is not registered in to the tunnel-zone12:44
jamespageI'm running with kilo (and not mem)12:44
jamespageapuimedo|lunch, exported bundle from my env - http://paste.ubuntu.com/14865744/12:47
apuimedo|lunchthat looks small13:13
apuimedo|lunchjamespage: no config?13:13
apuimedo|lunchah yes, there is on some13:14
apuimedo|lunchjamespage: oh, I added that to the neutron.conf template on friday, I may have forgotten to push13:32
apuimedo|lunchthat's odd13:38
apuimedo|lunchjamespage: can you send me the logs for the midonet-api unit?13:39
jamespageapuimedo|lunch, yup13:45
jamespageapuimedo|lunch, http://paste.ubuntu.com/14866135/13:47
=== apuimedo|lunch is now known as apuimedo
apuimedothanks13:57
apuimedoodd, I only see one ADD_MEMEBER14:02
apuimedo*MEMBER14:02
apuimedoand one nice failure14:03
apuimedoit seems zookeeper went down and then recovered14:05
josemarcoceppi_: busy atm?14:56
jamespageapuimedo, what do I need to poke to get things functional?15:01
apuimedojamespage: well, we can manually add the neutron-agents-midonet host to the tunnel zone15:02
apuimedomidonet-cli -e tunnel-zone uuid_of_the_tunnel_zone add member host uuid_of_the_host address ip_address_of_the_host15:03
apuimedodid you get to do the iptables and veth creation I told you about before?15:04
jamespageapuimedo, just getting to that now15:06
apuimedook15:08
jamespageapuimedo, veth in containers is fine - it's all namespaced OK15:20
jamespageapuimedo, done those setup steps15:20
apuimedocool15:23
jamespageapuimedo, I'm basically doing this - https://www.rdoproject.org/networking/midonet-integration/15:24
jamespage?15:24
apuimedowithout the bridge15:24
apuimedosince it it not necessary15:24
apuimedodid you create an external neutron network?15:24
jamespageapuimedo, yes - I can see the router in midonet-cli, but not in neutron - is that right?15:25
apuimedoit is :-)15:25
apuimedoit's like a parent router15:25
jamespageokay15:25
apuimedomidonet-cli -e router uuid_of_the_provider_router add port address 200.200.200.2 net 200.200.200.0/2415:26
apuimedothis will give you a uui15:26
apuimedo*uuid15:26
jamespageapuimedo, done15:26
apuimedomidonet-cli -e host uuid_of_the_compute_node_in_host_list add binding port router router_uuid port previous_uuid interface veth115:27
apuimedoafter this command, from nova-compute/0, you should be able to ping 200.200.200.215:27
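The provider-router wiring just dictated, consolidated with placeholder variables (UUIDs are substituted from each preceding command's output; this mirrors the transcript rather than adding new steps):

```shell
# Bind the provider router to the veth pair (run where midolman runs):
sudo midonet-cli -e router list               # find the provider router UUID
sudo midonet-cli -e router $ROUTER_UUID add port \
    address 200.200.200.2 net 200.200.200.0/24   # prints a port UUID
sudo midonet-cli -e host $HOST_UUID add binding \
    port router $ROUTER_UUID port $PORT_UUID interface veth1
# afterwards, 200.200.200.2 should answer pings from the compute node
```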
jamespageapuimedo, hmm - I may have an issue with cassandra - one second15:36
apuimedook15:37
jamespageapuimedo, hmm so midonet-agent is not configuring the connection to cassandra with credentials15:43
apuimedono15:44
jamespageapuimedo, I can turn auth off on cassandra15:44
jamespageits on by default15:45
jamespageone sec15:45
apuimedoDidn't I send you the charm config for Cassandra?!15:45
jamespageapuimedo, erm no15:47
jamespageat least I don't think so15:47
apuimedommm15:47
apuimedolet me check15:47
jamespageapuimedo, ok re-deployed - still not using mem - but the config in midonet-cli looked good and I can see midolman logging port events on the edges...16:44
jamespagethat said no cigar on the ping yet16:44
apuimedommm16:44
apuimedoip -4 route on the host where there's the veths16:45
jamespagedefault via 10.0.3.1 dev eth016:46
jamespage10.0.3.0/24 dev eth0  proto kernel  scope link  src 10.0.3.516:46
jamespage192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.116:46
jamespageapuimedo, ^^16:46
jamespageapuimedo, I saw:16:46
jamespage2016.02.03 16:46:05.339 INFO  [midolman-akka.actor.default-dispatcher-5] datapath-control -  Datapath port veth1 added16:46
jamespage2016.02.03 16:46:05.339 INFO  [midolman-akka.actor.default-dispatcher-5] datapath-control -  Port 4/veth1/a8d30ec9-e44d-42fa-9d90-0d02203581cf became active16:46
apuimedojamespage: you are missing a link scope route for 200.200.200.116:47
apuimedodid you do16:47
apuimedoip addr add 200.200.200.1/24 dev veth0?16:47
jamespageapuimedo, erm I have now16:48
jamespagemissed that - apologies16:48
apuimedono problem ;-)16:48
apuimedooh, add a subnet with 200.200.200.0/24 to the public net16:49
jamespageapuimedo, already done16:49
apuimedocan you ping 200.200.200.216:50
jamespageapuimedo, nope16:51
jamespageshould i be able to see that bound anywhere on the server?16:51
apuimedommm16:51
apuimedoon midonet-cli16:51
apuimedocreate a router16:52
apuimedoin neutron16:52
jamespageapuimedo, plugged into external and internal networks?16:53
apuimedoyes16:53
apuimedobut even so, we should have ping already, yesterday with alai we already had it at this step16:54
apuimedoI feel like we probably missed something16:54
apuimedoiptables -n -L -t nat16:54
apuimedojamespage: oh, and16:55
alaiyes thanks apuimedo , i think we are getting very close16:55
apuimedomidonet-cli router provider_router_uuid port list16:55
apuimedomaybe we forgot to bind the right address16:55
jamespageapuimedo, how do I unbind the previous bound router port?16:57
apuimedowhy, the current one doesn't have address jamespage ?16:57
jamespageapuimedo, so i created the router in neutron, and plugged it in16:58
jamespagerouter router0 name MidoNet Provider Router state up16:58
jamespagerouter router1 name public-router state up infilter chain6 outfilter chain716:58
apuimedothat's good16:58
jamespageI now have extra ports for the neutron created router16:58
apuimedocan you show me the output of the provier router port list?16:59
jamespageapuimedo,16:59
jamespageport port8 device router1 state up plugged no mac ac:ca:ba:72:6f:7c address 200.200.200.2 net 169.254.255.0/30 peer router0:port216:59
jamespageport port10 device router1 state up plugged no mac ac:ca:ba:7b:6c:76 address 192.168.21.1 net 192.168.21.0/24 peer bridge1:port016:59
jamespageport port9 device bridge1 state up plugged no peer router1:port116:59
jamespageapuimedo, erm maybe17:00
apuimedommm17:00
jamespageapuimedo, I'm not finding midonet-cli very discoverable17:00
jamespageapuimedo, ah I also have to dial into another call...17:01
jamespagebiab17:01
apuimedojamespage: yes, midonet-cli takes a while to get used to17:01
apuimedook, we can continue later/tomorrow17:01
agunturuIs it possible to get the list of parameters to a JUJU action?19:08
agunturuThe “juju action defined” command lists the actions, but not the parameters.19:09
agunturuubuntu@juju:~/mwc16charms/trusty/clearwater-juju$ juju action defined ims-a19:09
agunturucreate-user: Create a user.19:09
agunturudelete-user: Delete a user.19:09
marcoceppi_agunturu: juju action defined --schema19:33
agunturuHi marcoceppi_. Thanks that works19:35
marcoceppi_agunturu: cheers!19:36
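The resolution of the question above, in one line (juju 1.x syntax; the service name is the one from the log):

```shell
# List actions together with their full parameter schemas:
juju action defined ims-a --schema
```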
=== rcj` is now known as rcj
firllazypower|summit you around?22:02
marcoceppi_firl: we're GMT+1 atm, he might be in bed22:55
firlmarcoceppi_: gotcha thanks!22:56

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!