=== Guest5314 is now known as med_
=== catbus1 is now known as catbus1-afk
[02:50] jamespage, i totally missed your msg re: dellstack / charmstore bundle. it's all clear, done with metal stuff, and paused those jobs so they don't clobber ya.
[07:51] beisner, ta
[07:52] jamespage: morning. we've started on fixing the code for bundle v3 to v4 migration. could you point me or rogpeppe1 to the final openstack bundle.yaml you and Makyo came up with yesterday, please. it'll serve as a basis. ty
[07:53] urulama, jamespage: i'd prefer to get the bundles.yaml (v3 format) so i can use it as part of the test corpus
[07:53] rogpeppe1, urulama: has both - lp:~james-page/charms/bundles/openstack-base/bundle
[07:54] jamespage: thanks
[08:00] jamespage, beisner net split deployed for Trusty/Icehouse and guest successfully booted and accessed
[09:16] beisner, I think https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ha/+merge/266239 is ready to land now if you get any time for a review
[09:23] rogpeppe1, urulama: I think my v4 bundle.yaml is good now - do I just need to drop the v3 version and push to the official charmers branch to magically make everything work again?
[09:24] jamespage: it looks to me as if the new v4 support does not support your placement
[09:24] jamespage: :-\
[09:25] rogpeppe1, I just tested it with the latest juju-deployer and it all looks OK to me
[09:25] jamespage: oh, that's great then!
[09:25] rogpeppe1, I am of course making the assumption that v4 support in deployer == v4 support elsewhere.
[09:26] jamespage: i think that if v4 bundle.yaml is present, the v3 bundle is not taken for migration, so it doesn't matter if it's there or not
[09:26] jamespage: i can't quite see *how* it works, because AFAICS there's explicit logic to rule out placements of the form "lxc:ceph/2"
[09:27] rogpeppe1, http://paste.ubuntu.com/12005743/
[09:27] jamespage: you're using deployer revision 151, right?
[09:28] rogpeppe1, I'm using 0.5.0 as of pypi yesterday
[09:28] jamespage: yup, seems like the one
[09:28] jamespage: interesting
[09:28] rogpeppe1, if you see that in my bundle, you don't have the latest copy btw
[09:29] I had to switch / -> =
[09:32] jamespage: in the v4 bundle?
[09:32] rogpeppe1, yes
[09:32] jamespage: hmm, that shouldn't work
[09:33] rogpeppe1, that's what Makyo told me to do last night
[09:34] * rogpeppe1 has a look
[09:34] jamespage: ok, i see what's happening
=== rogpeppe1 is now known as rogpeppe
[09:35] jamespage: the deployer thinks it's a v3 bundle
[09:35] jamespage: ... but that doesn't make sense either, because it hasn't got top-level bundles; but maybe it has heuristics for that
[09:38] jamespage: your bundle is missing a machines section (all machines mentioned in the placement must be declared)
[09:38] jamespage: if you put that in, i think the deployer will recognise it as a v4 bundle
[09:38] jamespage: ... and then the deployment will fail as i expected
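(For reference: a minimal sketch, not jamespage's actual openstack-base bundle, of the v4-format layout rogpeppe describes above, where every machine referenced in a placement must be declared in a top-level machines section. Service names, charm URLs and machine numbers are illustrative; the check assumes PyYAML is available.)

```python
# Sketch only: an illustrative v4-style bundle (not the real openstack-base one).
# In the v4 format, every machine used in a "to:" placement must appear under
# the top-level "machines" key. Service-based placements (e.g. "lxc:mysql=1")
# are deliberately not handled by this simplified check.
import yaml  # PyYAML assumed installed

V4_BUNDLE = """
series: trusty
machines:
  "0": {}
  "1": {}
services:
  mysql:
    charm: cs:trusty/mysql
    num_units: 1
    to: ["lxc:0"]
  keystone:
    charm: cs:trusty/keystone
    num_units: 1
    to: ["lxc:1"]
relations:
  - ["keystone:shared-db", "mysql:shared-db"]
"""

bundle = yaml.safe_load(V4_BUNDLE)
declared = set(bundle["machines"])
for name, svc in bundle["services"].items():
    for placement in svc.get("to", []):
        # placements like "lxc:0" or "0" must point at a declared machine
        machine = placement.split(":")[-1]
        assert machine in declared, "%s placed on undeclared machine %s" % (name, machine)
print("all placements reference declared machines")
```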
[09:39] jamespage: so if you try uploading the bundle to the charm store, it will fail because it's not in valid v4 syntax
[09:39] jamespage: (i see "invalid placement syntax "lxc:ceph=1" (and 9 more errors)" when i try parsing your bundle)
[09:40] rogpeppe, Makyo gave me this to validate things yesterday:
[09:40] ./juju-bundlelib/devenv/bin/getchangeset bundle.yaml
[09:40] that generates a changeset afaict
[09:41] * rogpeppe fetches juju-bundlelib
[09:42] jamespage: this is what i was using to validate: http://paste.ubuntu.com/12005806/
[09:43] jamespage: that's using the same logic that the charm store will use to validate the bundle (except that the charm store also verifies that the charms exist in the store)
[09:47] jamespage: so, line 297 of jujubundlelib/validation.py:
[09:47] is_legacy_bundle = machines is None
[10:54] yooooo
[11:14] jose: We just hit https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/fix-full-disk-formatting/+merge/267011 with a partner, if you wouldn't mind merging. :)
[11:23] too many options
[11:23] juju / kubernetes / aws elastic beanstalk
[11:23] how to find the right choice :(
[12:07] beisner, I'd really like to add tempest to the mojo tests, did anyone write a charm off the back of the spec you were cooking up?
[12:10] gnuoy, not yet. and yep i want to add that as well. i have a local wip for that (basically to do what we do on the other uosci runs, until a tempest charm exists).
[12:11] gnuoy, we've just gathered use cases and wishlists from stakeholders, which i think gives a pretty good view into what we want the charm to do.
[14:00] Odd_Bloke: just woke up. will take a look in a few mins and test!
[14:00] jose: Thanks!
[14:01] jose: I've patched the partner in situ, so it's not burning hot.
[14:01] oh, great.
[14:01] jose: So get breakfast and a coffee. ;)
[14:01] does chocolate milk work? :P
[14:11] jamespage, gnuoy - ok, metal deploys are underway with juju/proposed 1.24.4
[14:13] beisner, awesome-o
[14:13] jamespage, gnuoy - we can now flip that bit in uosci for mojo runs (juju ppa stable|devel|proposed)
[14:26] gnuoy, +1 on https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ha/+merge/266239
[14:26] \o/ thanks
[14:27] yw, thank you too!
[14:27] gnuoy, which will lead to a rebase and merge on uosci's temp fork @ https://code.launchpad.net/~1chb1n/ubuntu-openstack-ci/mojo-runner-enhance/+merge/265726
[14:27] gnuoy, but i just did a merge test of yours into mine, and there were no conflicts, so i should have that ready in short order.
[14:27] oops wrong link on mine up there
[14:28] https://code.launchpad.net/~1chb1n/openstack-mojo-specs/net-id-ext-port-fix
[14:28] ^ that'll be the one to land after yours, pending lint test...
[14:28] oh heck, it's already merged lol.
[14:29] ha! where was i on 7/31?
[14:29] * beisner points uosci back at os mojo spec trunk
[14:29] gnuoy, er umm, thanks for the merge ;-)
[14:29] np :)
[15:02] jamespage, gnuoy - T-K/next + 1.24.4 bare metal 7-machine smoosh a-ok; there are more U:OS version combos queued up behind that, but I'd say +1 from the container standpoint. fyi @ http://paste.ubuntu.com/12007371
[15:03] tip top
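(Side note on the v3/v4 detection question from earlier in the morning: the heuristic rogpeppe quotes from jujubundlelib/validation.py boils down to "no machines section means legacy". A simplified paraphrase, not the library's actual code:)

```python
# Simplified paraphrase of the legacy-bundle heuristic rogpeppe quotes
# ("is_legacy_bundle = machines is None" in jujubundlelib/validation.py),
# not the library's real implementation.
def looks_like_legacy_bundle(bundle):
    """Treat a bundle with no top-level 'machines' section as v3 (legacy)."""
    return bundle.get("machines") is None

v3_style = {"services": {"ceph": {"charm": "cs:trusty/ceph", "num_units": 3}}}
v4_style = dict(v3_style, machines={"0": {}})

print(looks_like_legacy_bundle(v3_style))  # True  -> treated under v3 rules
print(looks_like_legacy_bundle(v4_style))  # False -> validated as a v4 bundle
```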
[15:39] lazyPower: can a subordinate charm have another subordinate?
[15:39] apuimedo: negative
[15:39] mmm
[15:39] subordinates can be related over relation, but not stacked
[15:39] that's a bit of a problem
[15:40] ok, I'll think of some way to work around it
[15:41] thanks
[15:53] apuimedo: what are you trying to achieve?
[15:54] well, I was working on a subordinate charm for neutron-server that provides neutron-metadata-agent
[15:54] but that one needs the midonet-agent charm in the same scope as well
[15:55] marcoceppi: because it's the midonet-agent who proxies the call
[15:55] *calls
[15:57] since it is not possible
[15:58] I'll just modify neutron-server
[15:58] so that it relates to midonet-api (for the plugin config) as jamespage told me in the previous review
[15:58] I hear my name
[15:58] and it will also have a midonet-host relation with container scope
[15:58] that will pull the midonet-agent charm
[15:59] and when neutron-plugin is midonet
[15:59] it will configure and run the neutron-metadata-agent
[15:59] apuimedo, hmm - does that require any kernel level magic?
[15:59] jamespage: what does?
[15:59] midonet-agent?
[15:59] the neutron-metadata-agent
[16:00] oh
[16:00] let me check
[16:00] apuimedo, there are benefits to having the neutron-api charm (which hosts neutron-server) containerizable
[16:01] which is why we run dhcp/l3/metadata agents on the neutron-gateway charm, which is definitely not containerizable
[16:01] jamespage: our reference architecture has a network controller machine
[16:01] that runs just neutron-server, neutron-dhcp and neutron-metadata agent
[16:01] (well, and midonet-agent, of course)
[16:01] we do not need l3 agent
[16:02] apuimedo, well we have use cases for the gateway charm that do much the same thing
[16:02] apuimedo, nsx for example just uses it for dhcp (and maybe metadata - can't remember)
[16:03] I'm not sure I see the point of deploying an extra charm for just the metadata and the dhcp agents
[16:03] apuimedo, by splitting out tenant instance facing services, you can scale differently
[16:03] which need practically the same configuration and relations as neutron-server
[16:04] well you have the midonet-agent bit already - that can be reused with the neutron-gateway charm
[16:04] jamespage: our gateways scale differently
[16:05] I'd have to have the neutron-gateway charm just run neutron-metadata-agent
[16:05] and dhcp?
[16:05] and point to nova-api for the metadata service
[16:05] sorry, yes, dhcp too :P
[16:05] apuimedo, oh - wait - the neutron-gateway charm also runs the nova-api-metadata service
[16:06] it's pretty self contained
[16:06] yes
[16:06] exactly
[16:06] the backend comms are over rpc to the nova-conductors
[16:06] I don't think we need that
[16:06] apuimedo, so your intent is to run dhcp and metadata services under the neutron-api charm?
[16:06] the metadata proxying goes through midonet
[16:07] and the next release won't even have a metadata agent
[16:07] that is what matches best our reference architecture
[16:07] apuimedo, so it will just communicate with the nova-api-metadata directly?
[16:07] we usually do it like that
[16:07] well, with nova-api
[16:07] * jamespage nods
[16:07] yes
[16:08] (that means adding a "shared-secret" config to nova-cloud-controller)
[16:08] apuimedo, got something I can look at with regards to your reference architecture?
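(Background on the subordinate question that opened this exchange: a subordinate charm declares subordinate: true in its metadata.yaml and is colocated with its principal through a container-scoped relation; subordinates can relate to each other, but a subordinate cannot pull in a further subordinate. A rough sketch of such metadata, with the charm and interface names borrowed from the discussion purely as illustration, assuming PyYAML:)

```python
# Rough sketch of subordinate charm metadata with a container-scoped relation.
# The charm and interface names ("midonet-agent", "midonet-host") are taken
# from the conversation above as illustration, not from a published charm.
import yaml  # PyYAML assumed installed

SUBORDINATE_METADATA = """
name: midonet-agent
subordinate: true
requires:
  host:
    interface: midonet-host
    scope: container
"""

meta = yaml.safe_load(SUBORDINATE_METADATA)
assert meta["subordinate"] is True
assert meta["requires"]["host"]["scope"] == "container"
print("container-scoped subordinate relation declared")
```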
[16:08] or not configuring either
[16:08] jamespage: well, we have the deployment docs
[16:09] jamespage: http://docs.midokura.com/docs/latest/quick-start-guide/ubuntu-1404_kilo/content/_architecture.html
[16:09] sure
[16:09] http://docs.midokura.com/docs/latest/quick-start-guide/ubuntu-1404_kilo/content/_hosts_and_services.html
[16:10] I'm not going to bundle everything on the controller node
[16:10] obviously
[16:10] we also do HA and stuff, this is just a basic setup
[16:10] but we pretty much always keep the Neutron unit as listed
[16:11] so, for Juju, I'd do something like https://api.jujucharms.com/charmstore/v4/bundle/openstack-base-34/archive/bundles.yaml.orig
[16:11] but with neutron getting its own machine
[16:13] and getting midonet-api and midonet-agent pulled in the machine as subordinates with scope:container
[16:13] jamespage: ^^
[16:13] apuimedo, I see
[16:13] apuimedo, you're intending on using containers?
[16:14] apuimedo, midonet-api fronts to neutron-api right?
[16:14] apuimedo, (lots going on - and rrd brain sometimes)
[16:14] jamespage: you mean lxc for some services?
[16:14] jamespage: midonet-api is what the neutron plugin talks to
[16:14] sort of a backend to neutron
[16:14] requests go
[16:14] apuimedo, ack - so in the general approach we take for Ubuntu OpenStack, that would be deployed in its own LXC container
[16:15] neutron -> midonet-plugin -> midonet-api -> Zookeeper
[16:15] by fragmenting into containers, you get the ability to scale each layer independently
[16:15] as sizing requires
[16:15] jamespage: but then it would not be reachable from other machines, would it?
[16:16] I remember there was some limitation on lxc communication
[16:16] apuimedo, hrm - yes it would
[16:16] maybe it was between lxc on different machines?
[16:16] apuimedo, Juju LXC containers are directly network addressable
[16:16] across machines
[16:16] so I don't remember what it was
[16:17] well, so you mean putting midonet-api in an lxc container
[16:17] neutron-api and midonet-agent I'd still prefer to run on the metal
[16:17] apuimedo, juju deploy --to lxc:3 midonet-api
[16:17] +1
[16:17] running ovs-like bridges on lxc makes me uneasy
[16:17] apuimedo, that's exactly the point I'm making
[16:17] apuimedo, neutron-api is currently containerizable in all use-cases
[16:18] apuimedo, neutron-gateway has all the code you need to do dhcp/metadata etc...
[16:18] and is designed to go on the bare-metal
[16:18] juju / kubernetes / aws elastic beanstalk
[16:18] how to choose
=== catbus1-afk is now known as catbus1
[16:19] mnk0, well that first one is pretty nice imho
[16:19] ;-)
[16:19] mnk0, you know juju can deploy kubernetes right?
[16:19] yeah i want to use juju but i'm getting confused about how to actually use it
[16:19] :/
[16:19] jamespage: would you approve of a neutron-gateway that does not run nova-api-metadata but that instead goes to nova-cloud-controller and pulls midonet-agent as subordinate?
[16:20] I'm not sure how many things I'll have to disable
[16:20] yeah i've found some interesting information about juju for kubernetes
[16:20] apuimedo, midonet-agent as a sub - no problemo
[16:20] but again still newbie
[16:20] it seems a bit more troublesome than just adding a couple of services to neutron-server
[16:20] apuimedo, I don't see the need to use nova-cc for the api-metadata service tho?
[16:20] apuimedo, trust me - it's minimal - I'll even work a diff for that if you like :-)
[16:21] jamespage: ok, I'll take another look at it
[16:21] the nova-cc thing is for my sanity
[16:21] it's what we always have in the field
[16:21] apuimedo, actually I have something in flight for something similar - let me dig it out
[16:22] apuimedo, https://code.launchpad.net/~sdn-charmers/charms/trusty/neutron-gateway/ovs-odl/+merge/265237
[16:22] that SDN option still makes use of l3 and other bits, but that's a typical impact on the gateway charm
[16:23] including unit tests to validate
[16:25] apuimedo, you would need to trim down the list of packages and config files, so the diff should be even more minimal
[16:25] jamespage: alright, I'll give it a shot
[16:25] I'll let you know later ;-)
[16:25] jamespage: against neutron-gateway/next, right?
[16:27] yah
[16:35] jamespage, re: rmq. what is the minimal scenario in which i can expect rmq to form a cluster? (i'm reworking the amulet tests)
[16:36] jamespage, the cluster-relation-joined hook is where that seems to happen, but just deploying multiple rmq units doesn't seem to trigger that.
[16:36] hmm
[16:36] I'd expect just multiple units to form a cluster
[16:37] jamespage, that's how the rmq amulet test is written, but it's failing those tests because two rmq units are two separate rmq clusters. cluster_status on each unit shows a 1-node cluster.
[16:38] urph
[16:38] that sounds bad
[16:38] jamespage, but if cluster-relation-joined|changed hooks fire (ie when pulling hacluster or ceph into the picture), rmq forms a cluster
[16:41] beisner, current stable charm is ok
[16:44] jamespage, so wolsen and i have been troubleshooting those rmq tests in next (the tests have logic errors in the cluster status checks: they just check for exit 0 from cluster_status, instead of actually checking that each unit is in the cluster)
[16:45] jamespage, and in that process, have decided a test rewrite a la the other os-charm tests is in order.
[16:45] beisner, ok - so I grabbed /next and did a 3 unit deploy
[16:46] beisner, looks ok to me
[16:47] jamespage, tests consistently show this. @L261, 283 each unit has its own 1-node cluster http://paste.ubuntu.com/12008073/
[16:47] jamespage, just trying to determine broken test vs broken charm, suspect the former.
[16:48] beisner, do the tests use hacluster and ceph?
[16:48] jamespage, jstat as of the moment of fail: http://paste.ubuntu.com/12008087/
[16:49] jamespage, jstat long version http://paste.ubuntu.com/12008092/
[16:50] Odd_Bloke: had to leave for university, but I'll be back home in a couple hours. I'll check by then. Sorry about the delay!
[16:50] jose: Longest breakfast and coffee ever. ;)
[16:51] jose: (No worries, there's no urgency on it ATM)
[16:51] hehe
[16:52] beisner, oh - wait, in that configuration we don't form a native cluster
[16:54] jamespage, i eventually got that test to pass by adding some waits. but this scenario fails even if i wait forever:
[16:54] http://paste.ubuntu.com/12008126/
[16:55] ie. cluster_status on each unit shows that 1-node cluster.
[17:00] urgh
[17:00] beisner, the dreaded wait
[17:00] anyway I really need to eod - ttfn
[17:00] jamespage, ack thanks. o/
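(On the rmq test question above: the existing amulet check only looked at the exit code of rabbitmqctl cluster_status, so two independent 1-node clusters still passed. A stricter check has to confirm every deployed unit appears in the running-nodes list on every unit. A hedged sketch of that idea; the node names and the way the command output is collected are illustrative, not the actual test code:)

```python
# Sketch of a stricter rabbitmq clustering check: rather than trusting the exit
# code of "rabbitmqctl cluster_status", verify that every expected node name
# shows up in the running_nodes part of the output on each unit. How the output
# is collected (amulet, juju run, ssh) is left out; the node names are made up.
def cluster_contains_all_nodes(cluster_status_output, expected_nodes):
    """Crude substring check against the Erlang-term output of cluster_status."""
    running = cluster_status_output.split("running_nodes", 1)[-1]
    return all(node in running for node in expected_nodes)

expected = ["rabbit@juju-machine-1", "rabbit@juju-machine-2", "rabbit@juju-machine-3"]

# A partitioned unit reports only its own node as running -> check fails.
lone_unit_output = "{running_nodes,['rabbit@juju-machine-1']}"
print(cluster_contains_all_nodes(lone_unit_output, expected))  # False

clustered_output = ("{running_nodes,['rabbit@juju-machine-1',"
                    "'rabbit@juju-machine-2','rabbit@juju-machine-3']}")
print(cluster_contains_all_nodes(clustered_output, expected))  # True
```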
[18:34] (so that it is available when some charm is installed) [18:34] This is for testing the neutron-api charm deployment while I still don't have one package in Ubuntu repos [18:34] lazyPower: ^^ [18:35] apuimedo: why not add, adding the repository to teh charm? [18:35] that way it adds the repo, updates teh apt cache consistently until it makes it into distro [18:35] lazyPower: the charm does not currently have an option to add a repo [18:35] hmm, i'm not following [18:35] is this a charm thats outside your control? [18:36] it belongs to the openstack-charmers team [18:36] I'm not sure how they feel about adding a config option to add repos [18:36] jamespage: gnuoy ^^ [18:36] ah, typically i fork and publish to my namespace, use that until its depreciated [18:36] but for the moment I can add it [18:36] cool [18:37] that's what I was thinking on doing [18:37] ;-) [18:37] oops [18:37] Gotta run to catch the last bus [18:37] talk to you tomorrow [18:37] thanks lazyPower [18:37] cheers apuimedo [18:53] jamespage, i know you're past eod - just observed that with next and stable, rmq x 3, cluster happens as expected. test code just needs love. [22:12] marcoceppi: are you still around? [22:12] lazyPower: I am [22:13] 1 sec, let me create a multi-file pastebin. i need your eyes for a second on a deployer bug that i cant seem to track down [22:14] https://gist.github.com/chuckbutler/7b5d724eee5d4b5b6c08 [22:14] do you see anything obvious with the bundle that i've missed? [22:18] lazyPower: otp, 2 mins [22:22] marcoceppi: i think i found it actually. missing charm in the store API that's referenced in this bundle [22:27] wait no, its there [22:38] marcoceppi: yeah i'm stumped, if you have any ideas i'm open to them
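(Back to the earlier repo/ppa question: lazyPower's suggestion of adding the repository in the charm usually amounts to a couple of charm-helpers calls in the install hook. A sketch assuming the charm vendors charm-helpers and grows a "source" config option; the option name, example PPA and package name are placeholders, not options the neutron-api charm actually has:)

```python
# Sketch of adding a configurable repo/PPA from a charm's install hook, assuming
# the charm vendors charm-helpers. The "source" config option and the example
# package name are placeholders for illustration only.
from charmhelpers.core.hookenv import config
from charmhelpers.fetch import add_source, apt_update, apt_install

def install():
    source = config("source")            # e.g. a "ppa:..." or deb line set by the operator
    if source:
        add_source(source)               # register the PPA / apt source
        apt_update(fatal=True)           # refresh the cache before installing
    apt_install(["neutron-plugin-midonet"], fatal=True)  # package name illustrative
```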