/srv/irclogs.ubuntu.com/2015/08/05/#juju.txt

=== Guest5314 is now known as med_
=== catbus1 is now known as catbus1-afk
[02:50] <beisner> jamespage, i totally missed your msg re: dellstack / charmstore bundle.  it's all clear, done with metal stuff, and paused those jobs so they don't clobber ya.
=== CyberJacob is now known as zz_CyberJacob
=== zz_CyberJacob is now known as CyberJacob
=== CyberJacob is now known as zz_CyberJacob
[07:51] <jamespage> beisner, ta
[07:52] <urulama> jamespage: morning. we've started on fixing the code for bundle v3 to v4 migration. could you point me or rogpeppe1 to the final openstack bundle.yaml you and Makyo came up with yesterday, please. it'll serve as a basis. ty
[07:53] <rogpeppe1> urulama, jamespage: i'd prefer to get the bundles.yaml (v3 format) so i can use it as part of the test corpus
[07:53] <jamespage> rogpeppe1, urulama: has both - lp:~james-page/charms/bundles/openstack-base/bundle
[07:54] <rogpeppe1> jamespage: thanks
[08:00] <gnuoy> jamespage, beisner net split deployed for Trusty/Icehouse and guest successfully booted and accessed
[09:16] <gnuoy> beisner, I think https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ha/+merge/266239 is ready to land now if you get any time for a review
[09:23] <jamespage> rogpeppe1, urulama: I think my v4 bundle.yaml is good now - do I just need to drop the v3 version and push to the official charmers branch to magically make everything work again?
[09:24] <rogpeppe1> jamespage: it looks to me as if the new v4 support does not support your placement
[09:24] <rogpeppe1> jamespage: :-\
[09:25] <jamespage> rogpeppe1, I just tested it with the latest juju-deployer and it all looks OK to me
[09:25] <rogpeppe1> jamespage: oh, that's great then!
[09:25] <jamespage> rogpeppe1, I am of course making the assumption that v4 support in deployer == v4 support elsewhere.
[09:26] <urulama> jamespage: i think that if v4 bundle.yaml is present, the v3 bundle is not taken for migration, so it doesn't matter if it's there or not
[09:26] <rogpeppe1> jamespage: i can't quite see *how* it works, because AFAICS there's explicit logic to rule out placements of the form "lxc:ceph/2"
[09:27] <jamespage> rogpeppe1, http://paste.ubuntu.com/12005743/
[09:27] <rogpeppe1> jamespage: you're using deployer revision 151, right?
[09:28] <jamespage> rogpeppe1, I'm using 0.5.0 as of pypi yesterday
[09:28] <rogpeppe1> jamespage: yup, seems like the one
[09:28] <rogpeppe1> jamespage: interesting
[09:28] <jamespage> rogpeppe1, if you see that in my bundle, you don't have the latest copy btw
[09:29] <jamespage> I had to switch / -> =
=== zz_CyberJacob is now known as CyberJacob
[09:32] <rogpeppe1> jamespage: in the v4 bundle?
[09:32] <jamespage> rogpeppe1, yes
[09:32] <rogpeppe1> jamespage: hmm, that shouldn't work
[09:33] <jamespage> rogpeppe1, that's what Makyo told me to do last night
[09:34] * rogpeppe1 has a look
[09:34] <rogpeppe1> jamespage: ok, i see what's happening
=== rogpeppe1 is now known as rogpeppe
[09:35] <rogpeppe> jamespage: the deployer thinks it's a v3 bundle
[09:35] <rogpeppe> jamespage: ... but that doesn't make sense either, because it hasn't got top level bundles; but maybe it has heuristics for that
[09:38] <rogpeppe> jamespage: your bundle is missing a machines section (all machines mentioned in the placement must be declared)
[09:38] <rogpeppe> jamespage: if you put that in, i think the deployer will recognise it as a v4 bundle
[09:38] <rogpeppe> jamespage: ... and then the deployment will fail as i expected
[09:39] <rogpeppe> jamespage: so if you try uploading the bundle to the charm store, it will fail because it's not in valid v4 syntax
[09:39] <rogpeppe> jamespage: (i see "invalid placement syntax "lxc:ceph=1" (and 9 more errors)" when i try parsing your bundle)
[09:40] <jamespage> rogpeppe, Makyo gave me this to validate things yesterday:
[09:40] <jamespage> ./juju-bundlelib/devenv/bin/getchangeset  bundle.yaml
[09:40] <jamespage> that generates a changeset afaict
[09:41] * rogpeppe fetches juju-bundlelib
[09:42] <rogpeppe> jamespage: this is what i was using to validate: http://paste.ubuntu.com/12005806/
[09:43] <rogpeppe> jamespage: that's using the same logic that the charm store will use to validate the bundle (except that the charm store also verifies that the charms exist in the store)
[09:47] <rogpeppe> jamespage: so, line 297 of jujubundlelib/validation.py:
[09:47] <rogpeppe>     is_legacy_bundle = machines is None
=== CyberJacob is now known as zz_CyberJacob
=== zz_CyberJacob is now known as CyberJacob
=== CyberJacob is now known as zz_CyberJacob
=== zz_CyberJacob is now known as CyberJacob
[10:54] <mnk0> yooooo
[11:14] <Odd_Bloke> jose: We just hit https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/fix-full-disk-formatting/+merge/267011 with a partner, if you wouldn't mind merging. :)
[11:23] <mnk0> too many options
[11:23] <mnk0> juju / kubernetes / aws elastic beanstalk
[11:23] <mnk0> how to find the right choice :(
[12:07] <gnuoy> beisner, I'd really like to add tempest to the mojo tests, did anyone write a charm off the back of the spec you were cooking up?
[12:10] <beisner> gnuoy, not yet.   and yep i want to add that as well.  i have a local wip for that (basically to do what we do on the other uosci runs, until a tempest charm exists).
[12:11] <beisner> gnuoy, we've just gathered use cases and wishlists from stakeholders, which i think gives a pretty good view into what we want the charm to do.
[14:00] <jose> Odd_Bloke: just woke up. will take a look in a few mins and test!
[14:00] <Odd_Bloke> jose: Thanks!
[14:01] <Odd_Bloke> jose: I've patched the partner in situ, so it's not burning hot.
[14:01] <jose> oh, great.
[14:01] <Odd_Bloke> jose: So get breakfast and a coffee. ;)
[14:01] <jose> does chocolate milk work? :P
[14:11] <beisner> jamespage, gnuoy - ok, metal deploys are underway with juju/proposed 1.24.4
[14:13] <jamespage> beisner, awesome-o
[14:13] <beisner> jamespage, gnuoy - we can now flip that bit in uosci for mojo runs (juju ppa stable|devel|proposed)
[14:26] <beisner> gnuoy, +1 on https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ha/+merge/266239
[14:26] <gnuoy> \o/ thanks
[14:27] <beisner> yw, thank you too!
[14:27] <beisner> gnuoy, which will lead to a rebase and merge on uosci's temp fork @ https://code.launchpad.net/~1chb1n/ubuntu-openstack-ci/mojo-runner-enhance/+merge/265726
[14:27] <beisner> gnuoy, but i just did a merge test of yours into mine, and there were no conflicts, so i should have that ready in short order.
[14:27] <beisner> oops wrong link on mine up there
[14:28] <beisner> https://code.launchpad.net/~1chb1n/openstack-mojo-specs/net-id-ext-port-fix
[14:28] <beisner> ^ that'll be the one to land after yours, pending lint test...
[14:28] <beisner> oh heck, it's already merged lol.
[14:29] <beisner> ha!  where was on on 7/31?
[14:29] * beisner points uosci back at os mojo spec trunk
[14:29] <beisner> gnuoy, er umm, thanks for the merge ;-)
[14:29] <gnuoy> np :)
[15:02] <beisner> jamespage, gnuoy - T-K/next + 1.24.4  bare metal 7-machine smoosh a-ok;  there are more U:OS version combos queued up behind that, but I'd say +1 from the container standpoint.   fyi @ http://paste.ubuntu.com/12007371
[15:03] <gnuoy> tip top
[15:39] <apuimedo> lazyPower: can a subordinate charm have another subordinate?
[15:39] <lazyPower> apuimedo: negative
[15:39] <apuimedo> mmm
[15:39] <lazyPower> subordinates can be related over relation, but not stacked
[15:39] <apuimedo> that's a bit of a problem
[15:40] <apuimedo> ok, I'll think of some way to work around it
[15:41] <apuimedo> thanks
[15:53] <marcoceppi> apuimedo: what are you trying to achieve?
[15:54] <apuimedo> well, I was working on a subordinate charm for neutron-server that provides neutron-metadata-agent
[15:54] <apuimedo> but that one needs the midonet-agent charm in the same scope as well
[15:55] <apuimedo> marcoceppi: because it's the midonet-agent who proxies the call
[15:55] <apuimedo> *calls
[15:57] <apuimedo> since it is not possible
[15:58] <apuimedo> I'll just modify neutron-server
[15:58] <apuimedo> so that it relates to midonet-api (for the plugin config) as jamespage told me in the previous review
[15:58] <jamespage> I hear my name
[15:58] <apuimedo> and it will also have a midonet-host relation with container scope
[15:58] <apuimedo> that will pull the midonet-agent charm
[15:59] <apuimedo> and when neutron-plugin is midonet
[15:59] <apuimedo> it will configure and run the neutron-metadata-agent
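For reference, the container-scoped relation apuimedo describes is declared in the subordinate charm's metadata.yaml; the fragment below is an illustrative sketch (the charm, relation, and interface names are guesses, not the real MidoNet charm metadata):

```yaml
# Hypothetical metadata.yaml fragment for a subordinate charm.
# "subordinate: true" plus at least one container-scoped relation
# causes the charm to be deployed onto the same machine (or
# container) as the principal unit it is related to.
name: midonet-agent
subordinate: true
requires:
  host:                      # illustrative relation name
    interface: midonet-host  # illustrative interface name
    scope: container         # co-locates with the principal unit
```

This is also why subordinates cannot be stacked as lazyPower says: a subordinate attaches to a principal's machine through a container-scoped relation, and a subordinate has no machine of its own for another subordinate to attach to.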
[15:59] <jamespage> apuimedo, hmm - does that require any kernel level magic?
[15:59] <apuimedo> jamespage: what does?
[15:59] <apuimedo> midonet-agent?
[15:59] <jamespage> the neutron-metadata-agent
[16:00] <apuimedo> oh
[16:00] <apuimedo> let me check
[16:00] <jamespage> apuimedo, there are benefits to having the neutron-api charm (which hosts neutron-server) containerizable
[16:01] <jamespage> which is why we run dhcp/l3/metadata agents on the neutron-gateway charm, which is definitely not containerizable
[16:01] <apuimedo> jamespage: our reference architecture has a network controller machine
[16:01] <apuimedo> that runs just neutron-server, neutron-dhcp and neutron-metadata agent
[16:01] <apuimedo> (well, and midonet-agent, of course)
[16:01] <apuimedo> we do not need l3 agent
[16:02] <jamespage> apuimedo, well we have use cases for the gateway charm that do much the same thing
[16:02] <jamespage> apuimedo, nsx for example just uses it for dhcp (and maybe metadata - can't remember)
[16:03] <apuimedo> I'm not sure I see the point of deploying an extra charm for just the metadata and the dhcp agents
[16:03] <jamespage> apuimedo, by splitting out tenant instance facing services, you can scale differently
[16:03] <apuimedo> which need practically the same configuration and relations as neutron-server
[16:04] <jamespage> well you have the midonet-agent bit already - that can be reused with the neutron-gateway charm
[16:04] <apuimedo> jamespage: our gateways scale differently
[16:05] <apuimedo> I'd have to have the neutron-gateway charm just run neutron-metadata-agent
[16:05] <jamespage> and dhcp?
[16:05] <apuimedo> and point to nova-api for the metadata service
[16:05] <apuimedo> sorry, yes, dhcp too :P
[16:05] <jamespage> apuimedo, oh - wait - the neutron-gateway charm also runs the nova-api-metadata service
[16:06] <jamespage> it's pretty self contained
[16:06] <apuimedo> yes
[16:06] <apuimedo> exactly
[16:06] <jamespage> the backend comms is over rpc to the nova-conductors
[16:06] <apuimedo> I don't think we need that
[16:06] <jamespage> apuimedo, so your intent is to run dhcp and metadata services under the neutron-api charm?
[16:06] <apuimedo> the metadata proxying goes through midonet
[16:07] <apuimedo> and the next release won't even have a metadata agent
[16:07] <apuimedo> that is what matches best our reference architecture
[16:07] <jamespage> apuimedo, so it will just communicate with the nova-api-metadata directly?
[16:07] <apuimedo> we usually do it like that
[16:07] <apuimedo> well, with nova-api
[16:07] * jamespage nods
[16:07] <apuimedo> yes
[16:08] <apuimedo> (that means adding a "shared-secret" config to nova-cloud-controller
[16:08] <jamespage> apuimedo, got something I can look at with regards to your reference architecture?
[16:08] <apuimedo> or not configuring either
[16:08] <apuimedo> jamespage: well, we have the deployment docs
[16:09] <apuimedo> jamespage: http://docs.midokura.com/docs/latest/quick-start-guide/ubuntu-1404_kilo/content/_architecture.html
[16:09] <jamespage> sure
[16:09] <apuimedo> http://docs.midokura.com/docs/latest/quick-start-guide/ubuntu-1404_kilo/content/_hosts_and_services.html
[16:10] <apuimedo> I'm not going to bundle everything on the controller node
[16:10] <apuimedo> obviously
[16:10] <apuimedo> we also do HA and stuff, this is just a basic setup
[16:10] <apuimedo> but we pretty much always keep the Neutron unit as listed
[16:11] <apuimedo> so, for Juju, I'd do something like https://api.jujucharms.com/charmstore/v4/bundle/openstack-base-34/archive/bundles.yaml.orig
[16:11] <apuimedo> but with neutron getting its own machine
[16:13] <apuimedo> and getting midonet-api and midonet-agent pulled in the machine as subordinates with scope:container
[16:13] <apuimedo> jamespage: ^^
[16:13] <jamespage> apuimedo, I see
[16:13] <jamespage> apuimedo, you're intending on using containers?
[16:14] <jamespage> apuimedo, midonet-api fronts to neutron-api right?
[16:14] <jamespage> apuimedo, (lots going on - and rrd brain sometimes)
[16:14] <apuimedo> jamespage: you mean lxc for some services?
[16:14] <apuimedo> jamespage: midonet-api is what the neutron plugin talks to
[16:14] <apuimedo> sort of a backend to neutron
[16:14] <apuimedo> requests go
[16:14] <jamespage> apuimedo, ack - so in the general approach we take for Ubuntu OpenStack, that would be deployed in its own LXC container
[16:15] <apuimedo> neutron -> midonet-plugin -> midonet-api -> Zookeeper
[16:15] <jamespage> by fragmenting into containers, you get the ability to scale each layer independently
[16:15] <jamespage> as sizing requires
[16:15] <apuimedo> jamespage: but then it would not be reachable from other machines, would it?
[16:16] <apuimedo> I remember there was some limitation on lxc communication
[16:16] <jamespage> apuimedo, hrm - yes it would
[16:16] <apuimedo> maybe it was between lxc on different machines?
[16:16] <jamespage> apuimedo, Juju LXC containers are directly network addressable
[16:16] <jamespage> across machines
[16:16] <apuimedo> so I don't remember what it was
[16:17] <apuimedo> well, so you mean putting midonet-api in an lxc container
[16:17] <apuimedo> neutron-api and midonet-agent I'd still prefer to run on the metal
[16:17] <jamespage> apuimedo, juju deploy --to lxc:3 midonet-api
[16:17] <jamespage> +1
[16:17] <apuimedo> running ovs-like bridges on lxc makes me uneasy
[16:17] <jamespage> apuimedo, that's exactly the point I'm making
[16:17] <jamespage> apuimedo, neutron-api is currently containerizable in all use-cases
[16:18] <jamespage> apuimedo, neutron-gateway has all the code you need to do dhcp/metadata etc...
[16:18] <jamespage> and is designed to go on the bare-metal
[16:18] <mnk0> juju / kubernetes / aws elastic beanstalk
[16:18] <mnk0> how to choose
=== catbus1-afk is now known as catbus1
[16:19] <jamespage> mnk0, well that first one is pretty nice imho
[16:19] <jamespage> ;-)
[16:19] <jamespage> mnk0, you know juju can deploy kubernetes right?
[16:19] <mnk0> yeah i want to use juju but im getting confused about how to actually use it
[16:19] <mnk0> :/
[16:19] <apuimedo> jamespage: would you approve of a neutron-gateway that does not run nova-api-metadata but that instead goes to nova-cloud-controller and pulls midonet-agent as subordinate?
[16:20] <apuimedo> I'm not sure how many things I'll have to disable
[16:20] <mnk0> yeah ive found some interesting information about juju for kubernetes
[16:20] <jamespage> apuimedo, midonet-agent as a sub - no problemo
[16:20] <mnk0> but again still newbie
[16:20] <apuimedo> it seems a bit more troublesome than just adding a couple of services to neutron-server
[16:20] <jamespage> apuimedo, I don't see the need to use nova-cc for the api-metadata service tho?
[16:20] <jamespage> apuimedo, trust me - it's minimal - I'll even work a diff for that if you like :-)
[16:21] <apuimedo> jamespage: ok, I'll take another look at it
[16:21] <apuimedo> the nova-cc thing is for my sanity
[16:21] <apuimedo> it's what we always have in the field
[16:21] <jamespage> apuimedo, actually I have an inflight for something similar - let me dig it out
[16:22] <jamespage> apuimedo, https://code.launchpad.net/~sdn-charmers/charms/trusty/neutron-gateway/ovs-odl/+merge/265237
[16:22] <jamespage> that SDN option still makes use of l3 and other bits, but that's a typical impact on the gateway charm
[16:23] <jamespage> including unit tests to validate
[16:25] <jamespage> apuimedo, you would need to trim down the list of packages and config files, so the diff should be even more minimal
[16:25] <apuimedo> jamespage: alright, I'll give it a shot
[16:25] <apuimedo> I'll let you know later ;-)
[16:25] <apuimedo> jamespage: against neutron-gateway/next, right?
[16:27] <jamespage> yah
[16:35] <beisner> jamespage, re: rmq.  what is the minimal scenario in which i can expect rmq to form a cluster?   (i'm reworking the amulet tests)
[16:36] <beisner> jamespage, cluster-relation-joined hook is where that seems to happen, but just deploying multiple rmq units doesn't seem to trigger that.
[16:36] <jamespage> hmm
[16:36] <jamespage> I'd expect just multiple units to form a cluster
[16:37] <beisner> jamespage, that's how the rmq amulet test is written, but it's failing those tests because two rmq units are two separate rmq clusters.  cluster_status on each unit shows a 1-node cluster.
[16:38] <jamespage> urph
[16:38] <jamespage> that sounds bad
[16:38] <beisner> jamespage, but if cluster-relation-joined|changed hooks fire (ie when pulling hacluster or ceph into the picture), rmq forms a cluster
[16:41] <jamespage> beisner, current stable charm is ok
[16:44] <beisner> jamespage, so wolsen and i have been t-shooting those rmq tests in next (the tests have logic errors in cluster status checks in that they just check for exit 0 on the cluster_status check, instead of actually checking that each unit is in the cluster)
[16:45] <beisner> jamespage, and in that process, have decided a test rewrite a la the other os-charm tests is in order.
[16:45] <jamespage> beisner, ok - so I grabbed /next and did a 3 unit deploy
[16:46] <jamespage> beisner, looks ok to me
[16:47] <beisner> jamespage, tests consistently show this.  @L261, 283  each unit has its own 1-node cluster  http://paste.ubuntu.com/12008073/
[16:47] <beisner> jamespage, just trying to determine broken test vs broken charm, suspect the former.
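The stricter check beisner describes, parsing each unit's `rabbitmqctl cluster_status` output and asserting that every expected node appears in it rather than only checking for exit 0, might be sketched like this (the sample output is invented for illustration; this is not the actual amulet test code):

```python
import re

def running_nodes(cluster_status_output):
    """Extract the node names listed under running_nodes in the
    Erlang-term output of `rabbitmqctl cluster_status`."""
    m = re.search(r"\{running_nodes,\[(.*?)\]\}", cluster_status_output, re.S)
    return re.findall(r"'([^']+)'", m.group(1)) if m else []

def cluster_is_formed(unit_outputs, expected_nodes):
    """True only if every unit's status lists all expected nodes,
    i.e. the units form one cluster rather than N 1-node clusters."""
    return all(set(expected_nodes) <= set(running_nodes(out))
               for out in unit_outputs)

# Invented sample output for a healthy 2-node cluster.
sample = """Cluster status of node 'rabbit@juju-machine-1' ...
[{nodes,[{disc,['rabbit@juju-machine-1','rabbit@juju-machine-2']}]},
 {running_nodes,['rabbit@juju-machine-1','rabbit@juju-machine-2']}]
"""

print(running_nodes(sample))
# ['rabbit@juju-machine-1', 'rabbit@juju-machine-2']
```

A check like this distinguishes the failure mode in the paste (each unit reporting only itself) from a genuinely formed cluster, which an exit-code check cannot.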
[16:48] <jamespage> beisner, do the tests use hacluster and ceph?
[16:48] <beisner> jamespage, jstat as of moment of fail:  http://paste.ubuntu.com/12008087/
[16:49] <beisner> jamespage, jstat long version http://paste.ubuntu.com/12008092/
[16:50] <jose> Odd_Bloke: had to leave for university, but I'll be back home in a couple hours. I'll check by then. Sorry about the delay!
[16:50] <Odd_Bloke> jose: Longest breakfast and coffee ever. ;)
[16:51] <Odd_Bloke> jose: (No worries, there's no urgency on it ATM)
[16:51] <jose> hehe
[16:52] <jamespage> beisner, oh - wait, in that configuration, we don't form a native cluster
[16:54] <beisner> jamespage, i eventually got that test to pass by adding some waits.   but this scenario fails even if i wait forever:
[16:54] <beisner> http://paste.ubuntu.com/12008126/
[16:55] <beisner> ie. cluster_status on each unit shows that 1-node cluster.
[17:00] <jamespage> urgh
[17:00] <jamespage> beisner, the dreaded wait
[17:00] <jamespage> anyway I really need to eod - ttfn
[17:00] <beisner> jamespage, ack thanks.  o/
[18:21] <apuimedo> lazyPower: which is the best way to add a repo/ppa to my juju/maas environment?
[18:34] <lazyPower> apuimedo: add-apt-repository is how i generally do it
[18:34] <apuimedo> on which machine?
[18:34] <apuimedo> (so that it is available when some charm is installed)
[18:34] <apuimedo> This is for testing the neutron-api charm deployment while I still don't have the package in Ubuntu repos
[18:34] <apuimedo> lazyPower: ^^
[18:35] <lazyPower> apuimedo: why not add the repository to the charm?
[18:35] <lazyPower> that way it adds the repo and updates the apt cache consistently until it makes it into distro
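A minimal sketch of lazyPower's suggestion: have the charm's install hook add the extra source before installing packages, until the package lands in the archive. The PPA and package names below are hypothetical placeholders, and real charms would typically use charmhelpers' fetch helpers (e.g. add_source) rather than shelling out directly.

```python
import subprocess

def add_source_cmd(source):
    """Build the add-apt-repository invocation for a ppa:/deb source.
    (Pure function so the command construction is easy to test.)"""
    return ["add-apt-repository", "--yes", source]

def install_hook():
    """Sketch of an install hook: add the PPA, refresh the apt cache,
    then install the charm's packages.  Names are placeholders."""
    subprocess.check_call(add_source_cmd("ppa:example/midonet"))
    subprocess.check_call(["apt-get", "update"])
    subprocess.check_call(["apt-get", "install", "-y", "midonet-agent"])
```

Once the package is in the Ubuntu archive, the add-apt-repository step can simply be dropped, which is the "until it makes it into distro" part of the suggestion.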
[18:35] <apuimedo> lazyPower: the charm does not currently have an option to add a repo
[18:35] <lazyPower> hmm, i'm not following
[18:35] <lazyPower> is this a charm that's outside your control?
[18:36] <apuimedo> it belongs to the openstack-charmers team
[18:36] <apuimedo> I'm not sure how they feel about adding a config option to add repos
[18:36] <apuimedo> jamespage: gnuoy ^^
[18:36] <lazyPower> ah, typically i fork and publish to my namespace, use that until it's deprecated
[18:36] <apuimedo> but for the moment I can add it
[18:36] <apuimedo> cool
[18:37] <apuimedo> that's what I was thinking of doing
[18:37] <apuimedo> ;-)
[18:37] <apuimedo> oops
[18:37] <apuimedo> Gotta run to catch the last bus
[18:37] <apuimedo> talk to you tomorrow
[18:37] <apuimedo> thanks lazyPower
[18:37] <lazyPower> cheers apuimedo
[18:53] <beisner> jamespage, i know you're past eod - just observed that with next and stable, rmq x 3, cluster happens as expected.  test code just needs love.
[22:12] <lazyPower> marcoceppi: are you still around?
[22:12] <marcoceppi> lazyPower: I am
[22:13] <lazyPower> 1 sec, let me create a multi-file pastebin. i need your eyes for a second on a deployer bug that i can't seem to track down
[22:14] <lazyPower> https://gist.github.com/chuckbutler/7b5d724eee5d4b5b6c08
[22:14] <lazyPower> do you see anything obvious with the bundle that i've missed?
[22:18] <marcoceppi> lazyPower: otp, 2 mins
[22:22] <lazyPower> marcoceppi: i think i found it actually. missing charm in the store API that's referenced in this bundle
[22:27] <lazyPower> wait no, it's there
[22:38] <lazyPower> marcoceppi: yeah i'm stumped, if you have any ideas i'm open to them

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!