/srv/irclogs.ubuntu.com/2015/08/04/#juju.txt

03:34 <beisner> jamespage, marcoceppi - ugh.  so that merge fixed our use case, but broke all other use cases.  please review @ https://code.launchpad.net/~1chb1n/charms/trusty/cinder/next.ephem-key-error/+merge/266826
=== moqq_ is now known as moqq
07:47 <jamespage> beisner, landed
=== Spads_ is now known as Spads
09:17 <jamespage> ddellav, reviewed - couple of niggles - also amulet tests are failing - not had time to dig yet.
10:51 <apuimedo> jamespage: Hi
10:51 <jamespage> apuimedo, hello
10:51 <apuimedo> I was looking at the next version of nova-compute
10:52 * jamespage nods
10:52 <apuimedo> did I get this right, that it will run nova-api-metadata on each machine if neutron-openvswitch sends it the shared metadata secret
10:52 <apuimedo> ?
10:53 <jamespage> apuimedo, yes - that was added to support neutron dvr for ml2/ovs
10:53 <jamespage> in fact I think that's in the stable charm as well
10:53 <jamespage> that was a 15.04 feature
10:54 <apuimedo> aha
10:55 <jamespage> apuimedo, neutron-openvswitch will also set up and configure the l3-agent and metadata-agent in that particular configuration
10:55 <apuimedo> jamespage: and the neutron-metadata-agent still runs on neutron-gateway pointing to nova-cloud-controller, right?
10:55 <apuimedo> oh, neutron's metadata-agent will run on each compute host?
10:55 <jamespage> apuimedo, neutron-metadata-agent on the gateway points to itself - it runs a nova-api-metadata as well
10:55 <jamespage> apuimedo, that's correct
10:55 <apuimedo> interesting
10:55 <apuimedo> thanks
10:55 <apuimedo> :-)
10:56 <jamespage> apuimedo, it only services requests for instances located on the same hypervisor
10:56 <apuimedo> that should help with scalability ;-)
10:56 <apuimedo> and I guess it gets the data from the rabbitmq server
10:56 <jamespage> apuimedo, yah
10:57 <jamespage> apuimedo, nova-api-metadata <-> conductor
10:57 <jamespage> that's a potential bottleneck still but it is horizontally scalable
10:57 <jamespage> I'd been considering whether we should support 'roles' for nova-cc
10:57 <apuimedo> yeah
10:57 <apuimedo> roles?
10:57 <jamespage> so you could have a scale-out conductor/scheduler pool
10:58 <jamespage> with a smaller public-facing set of API services
10:58 <jamespage> apuimedo, we have something like that in cinder atm
10:58 <jamespage> the cinder charm can do all roles, or just some of them, allowing this type of split
10:58 <jamespage> cinder-api/cinder-scheduler/cinder-volume
10:58 <jamespage> esp important for when using iscsi volumes
10:59 <jamespage> you want a big volume backend, with a smaller set of schedulers and api servers
10:59 <apuimedo> I see
11:00 <apuimedo> jamespage: sounds like you'll end up with a similar split to the one Kolla has
11:00 <apuimedo> where almost everything is a separate unit
11:00 <jamespage> apuimedo, not that familiar with kolla - sounds like I need to read
11:01 <jamespage> oh - openstack/dockers
11:01 <jamespage> apuimedo, well I guess the bonus with juju charms is we could do a kolla-like single-process approach, or you can opt to do more than one thing in the same container
11:01 * jamespage likes flexibility
11:02 <apuimedo> ;-)
=== apuimedo is now known as apuimedo|lunch
11:23 <beisner> o/ jamespage, gnuoy
11:23 <beisner> fyi, in for just a few min before preschool registration, then back again.
11:25 <beisner> gnuoy, i see a pile of error instances on serverstack belonging to you and to me
11:28 <beisner> jamespage, thanks for the merge
=== CyberJacob is now known as zz_CyberJacob
=== zz_CyberJacob is now known as CyberJacob
13:33 <ddellav> jamespage, hmm, weird that amulet tests are failing, i didn't change that much.
13:33 <jamespage> ddellav, that might be serverstack tbh - we're having some troubles
13:33 <ddellav> ah yea, ok
13:51 <ddellav> yea, right at the bottom of the amulet output you can see it failed due to rabbitmq: http://paste.ubuntu.com/12000161/
14:02 <beisner> jamespage - raised this before we forget about it: bug 1481362   < also fyi wolsen dosaboy gnuoy coreycb
14:02 <mup> Bug #1481362: pxc server 5.6 on vivid does not create /var/lib/mysql <amulet> <openstack> <uosci> <percona-xtradb-cluster-5.6 (Ubuntu):New> <percona-cluster (Juju Charms Collection):New> <https://launchpad.net/bugs/1481362>
14:06 <jamespage> urgh - I think I just put ovs in a spin on all the compute hosts - sorry folks
14:06 <jamespage> fixing now
14:10 <beisner> jamespage, yeah, connectivity lost to bastions.  it's ok.  you'll make it all shiny and new, i know you will.
14:11 <beisner> also just raised this against pxc re: deprecation warn on > vivid:  bug 1481367
14:11 <mup> Bug #1481367: 'dataset-size' has been deprecated, please use innodb_buffer_pool_size option instead <amulet> <openstack> <uosci> <percona-cluster (Juju Charms Collection):New> <https://launchpad.net/bugs/1481367>
14:30 <apuimedo|lunch> jamespage: is there some minimal openstack bundle that groups some charms in the same machine, like nova-cloud-controller and neutron-api?
14:31 <jamespage> apuimedo|lunch, yes
14:31 <apuimedo|lunch> I only saw https://jujucharms.com/openstack-base/34
14:31 <apuimedo|lunch> and I don't have 17 machines around :P
14:31 <jamespage> apuimedo|lunch, oh - I was about to point to that
=== apuimedo|lunch is now known as apuimedo
14:31 <jamespage> apuimedo|lunch, that only needs 4 servers
14:31 <jamespage> (check the readme)
14:31 <jamespage> it's 17 units, 4 physical machines
14:32 <apuimedo> ah, I must have misread the bundle file
14:32 <apuimedo> I don't see any lxc reference
14:32 <apuimedo> in the bundle.yaml
14:33 <apuimedo> oh!
14:33 <apuimedo> jamespage: why is there a bundle.yaml and a bundle.yaml.orig?
14:33 <apuimedo> only the latter has the lxc references
14:33 <jamespage> apuimedo, it's an artifact of the charm store ingestion
14:34 <apuimedo> so the one that should be used is the .orig, right?
14:34 <jamespage> apuimedo, although I don't believe it should scrub machine placement like that
14:34 <jamespage> apuimedo, yes
14:34 <jamespage> for now
14:35 <jamespage> rick_h_, ^^ is that right?  https://jujucharms.com/openstack-base/  - the bundle.yaml has lost the machine placement data?
14:35 <rick_h_> jamespage: looking
14:35 <jamespage> rick_h_, that's def changed - it used to be fewer machines than units, but not any longer
14:36 <rick_h_> urulama: rogpeppe1 ^ looks like lxc got lost in the transition from v3 to v4?
14:36 * rogpeppe1 reads back
14:38 <rick_h_> rogpeppe1: basically https://api.jujucharms.com/charmstore/v4/bundle/openstack-base-34/archive/bundles.yaml.orig vs https://api.jujucharms.com/charmstore/v4/bundle/openstack-base-34/archive/bundle.yaml it lost the placement
14:38 <rogpeppe1> rick_h_: yeah, i see that now. just investigating
14:39 <rick_h_> jamespage: with the deployer release tvansteenburgh is doing today we can do true machine placement in the new bundle. We'll look into the bug, but the best way forward, once that deployer is out there, is to check out the 'machines' part in https://github.com/juju/charmstore/blob/v5-unstable/docs/bundles.md
14:39 <rogpeppe1> rick_h_, jamespage: i see what has happened
14:39 <rick_h_> jamespage: and get rid of bundles.yaml and go to bundle.yaml as the only file in the bundle.
14:39 <rogpeppe1> jamespage: i wasn't aware (and i don't think it was documented) that the "to:" field in a legacy bundle could be a list as well as a string
14:40 <jamespage> eeek!
14:40 <rick_h_> rogpeppe1: urulama that's a bit :( as openstack is preparing a new release and their bundles are kind of busted atm
14:42 <beisner> coreycb, thanks for the merge :)
14:43 <rogpeppe1> i'm quite surprised that the goyaml package allowed unmarshaling of a list into a string (it seems to have just ignored it)
14:43 <urulama> rick_h_: i'd say +1 on the quick fix of using bundle.yaml until we come up with a solution
14:43 <rick_h_> jamespage: can the to: be represented as a string and repushed to get through a fix until the charmstore can be updated?
14:44 <coreycb> beisner, you're welcome, thanks for the updates
14:44 <rick_h_> jamespage: hmm, looks like not with ceph/nova-compute
14:45 <rick_h_> urulama: the problem is that it's not supported in the deployer yet until the release is out and folks get it. Kind of rock/hard place atm. /me wonders if you can do ceph, ceph, ceph for the nova-compute one...where's that doc
14:46 <rick_h_> rogpeppe1: http://pythonhosted.org/juju-deployer/config.html#placement is the docs for that and the wordpress example has the lists.
14:49 <rogpeppe1> rick_h_: it's possible i skipped over that syntax because there were no bundles around that actually used it (that I could find - there are none in the corpus used for testing migratebundle)
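For context, the legacy (v3) deployer placement discussed here allows "to:" to be either a plain string or a list. A minimal sketch of both forms, with illustrative service names and the "=" unit-index separator as shown in the juju-deployer docs linked above (not a fragment of the actual openstack-base bundle):

```yaml
# Hypothetical v3-style bundle fragment showing both legacy "to:" forms.
openstack-mini:
  series: trusty
  services:
    ceph:
      charm: cs:trusty/ceph
      num_units: 3
    mysql:
      charm: cs:trusty/percona-cluster
      num_units: 1
      to: "lxc:ceph=0"                     # string form: a single directive
    nova-compute:
      charm: cs:trusty/nova-compute
      num_units: 3
      to: ["ceph=0", "ceph=1", "ceph=2"]   # list form: one directive per unit
```

It is the list form that the v3-to-v4 migration code silently dropped, which is why the generated bundle.yaml lost its placement data.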
14:50 <apuimedo> is it possible in bundles to define a config that applies to all the charms that share the setting?
14:50 <apuimedo> like openstack-origin ?
14:51 <lazyPower> apuimedo: certainly. there's an overrides: key that you can use
14:51 <apuimedo> lazyPower: at the same level as "services" ?
14:52 <lazyPower> apuimedo: it's a parent-level key, at the same level as "services"
14:52 <apuimedo> good
14:52 <apuimedo> thanks
14:53 <rogpeppe1> jamespage: in the new format, the syntax would be:
14:53 <rogpeppe1> to: ["lxc:ceph/1"]
14:53 <apuimedo> I wonder why the bundle.yaml.orig doesn't use it
14:53 <lazyPower> apuimedo: something like this - http://paste.ubuntu.com/12000543/
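The overrides key lazyPower describes might look roughly like this in a v3-style bundle; this is a sketch with hypothetical service names, where the option is applied to every service whose charm exposes it:

```yaml
# Hypothetical v3-style bundle: 'overrides' sits at the same level as
# 'services' and applies openstack-origin to every charm that exposes it.
openstack-mini:
  series: trusty
  overrides:
    openstack-origin: cloud:trusty-kilo
  services:
    nova-cloud-controller:
      charm: cs:trusty/nova-cloud-controller
      num_units: 1
    neutron-api:
      charm: cs:trusty/neutron-api
      num_units: 1
```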
14:53 <rogpeppe1> jamespage: (the unit syntax being the same as juju's usual unit syntax)
14:53 <apuimedo> I seem to remember that it was in some previous versions
14:53 <apuimedo> rogpeppe1: is this new syntax on stable?
14:55 <rogpeppe1> apuimedo: "on stable" ?
14:55 <rogpeppe1> apuimedo: the new bundle syntax is considerably stripped down from the old syntax (and somewhat more general in places too)
14:58 <apuimedo> I mean the stable bundle deployer
14:58 <apuimedo> when you install juju on trusty
14:59 <rick_h_> apuimedo: it's been in trunk for a long time. tvansteenburgh is working on a release. It's not in the current trusty :(
14:59 <jamespage> beisner, is there a bit of dellstack I could use to refresh the charm-store bundle?
14:59 <apuimedo> ok
15:00 <apuimedo> rick_h_: does that mean that bundles will have to be updated? Is there some degree of backwards compatibility?
15:00 <rick_h_> apuimedo: yes, it'll support both but we'll be working on deprecating v3 and disallowing new v3 ones in the charmstore
15:00 <rick_h_> apuimedo: the format is only slightly different. It's mostly the same, just removes the top key from the file
15:01 <rick_h_> most bundles will work with a delete of the first line and a dedent
15:01 <apuimedo> ok
15:02 <apuimedo> rick_h_: does the "to" allow you to put two charms in the same container scope, or only when one of them is subordinate?
15:02 <rogpeppe1> apuimedo, rick_h_: but placement is the thing that has changed most
15:04 <rogpeppe1> apuimedo: the format does. i'm not sure about the deployer implementation.
15:06 <jamespage> rick_h_, urulama, rogpeppe1: working on moving to bundle.yaml now
15:07 <rick_h_> jamespage: Makyo can help with that
15:07 <apuimedo> lazyPower: the bundles that have "0" in a bundle are deployed?
15:07 <apuimedo> the one for openstack has a few charms like that
15:08 <lazyPower> apuimedo: i'm sorry i dont follow.
15:08 <jamespage> rick_h_, Makyo: first cut here - lp:~james-page/charms/bundles/openstack-base/bundle
15:08 <jamespage> rick_h_, is there a nice way I can programmatically query for the latest charm revisions?
15:09 <lazyPower> apuimedo: can i get a link to the bundle you are looking at?
15:09 <rick_h_> jamespage: sure thing https://api.jujucharms.com/v4/ceph/meta/any
15:09 <jamespage> rick_h_, ta muchly
15:09 <apuimedo> sure
15:10 <apuimedo> lazyPower: https://api.jujucharms.com/charmstore/v4/bundle/openstack-base-34/archive/bundles.yaml.orig
15:10 <urulama_> jamespage: if you need just the revision, you can use this as well https://api.jujucharms.com/v4/ceph/meta/id-revision
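A small script in the spirit of the parser jamespage mentions later could hit that endpoint directly. This is a sketch, not his actual code; it assumes the id-revision endpoint returns JSON shaped like `{"Revision": N}`:

```python
import json
from urllib.request import urlopen

API = "https://api.jujucharms.com/v4"


def revision_url(charm_name):
    # Build the meta/id-revision endpoint URL for a charm.
    return "{}/{}/meta/id-revision".format(API, charm_name)


def parse_revision(payload):
    # Extract the revision number, assuming a {"Revision": N} response body.
    return payload["Revision"]


def latest_revision(charm_name):
    # Fetch and parse the latest published revision for a charm.
    with urlopen(revision_url(charm_name)) as resp:
        return parse_revision(json.load(resp))


if __name__ == "__main__":
    print(latest_revision("ceph"))
```

A loop over the charm names in a bundle's services section would then be enough to bump every revision in one pass.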
15:11 <apuimedo> you'll see that there's quite a few with units: 0
15:11 <lazyPower> apuimedo: in the case of subordinates, that's a requirement
15:11 <rogpeppe1> jamespage: that first line looks potentially spurious
15:11 <rogpeppe1> jamespage: did you mean "series: trusty" there?
15:11 <apuimedo> lazyPower: ah, good :-)
15:12 <lazyPower> however i'm not sure why neutron-openvswitch has num_units: 0
15:12 <lazyPower> it's not a subordinate that i can see
15:13 <lazyPower> i lied, line 2 - it sure is
15:13 <lazyPower> so, yeah - everything in here with num_units: 0 is due to the charm being a subordinate apuimedo
15:13 <apuimedo> it is
15:13 <apuimedo> ;-)
15:13 <apuimedo> thanks lazyPower
15:14 <lazyPower> np
15:14 <apuimedo> mbruzek: I was in your ancestral town on Sunday
15:14 <apuimedo> took a panorama picture from the bus
15:14 <mbruzek> apuimedo: wow!
15:14 <mbruzek> apuimedo: Cool!
15:15 <Makyo> jamespage, First glance: first line should be 'series: trusty' and last line should be removed, but it looks okay otherwise.
15:15 <apuimedo> I wish they would have stopped so I could buy pastries
15:15 <Makyo> Er, sorry.  First line should be removed entirely, last line can stay.
15:15 <Makyo> jamespage, ^
15:15 <jamespage> got it
15:17 <jamespage> Makyo, how do I actually test this?
15:18 <Makyo> jamespage, one sec, spinning up a GUI with the latest code; dragging the yaml there will run it through the bundle validation.
15:19 <Makyo> jamespage, Actually, you should be able to use demo.jujucharms.com - dragging the bundle.yaml onto the canvas should validate the bundle.
15:20 <Makyo> jamespage, (recent updates to the GUI affect committing uncommitted bundles, which is unrelated)
15:22 <jamespage> Makyo, error - console?
15:22 <jamespage> I'm such a web browser numpty - apologies
15:25 <Makyo> jamespage, oops, hmm, that's partly our fault.  That message should be changed for that site.  Running it locally, pastebin in a sec.
15:29 <mbruzek> apuimedo: please send me the picture if you would.
15:32 <apuimedo> It's a bit blurry
15:36 <Makyo> jamespage, Here's a working v4 bundle http://paste.ubuntu.com/12000801/ (v4 bundles need a machine spec, even if it's empty, to allow placement directives like "lxc:ceph/0")
15:37 <Makyo> jamespage, I validated that with https://github.com/juju/juju-bundlelib (git clone; make; devenv/bin/getchangeset bundle.yaml)
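Putting Makyo's points together, the overall shape of a v4 bundle comes out roughly like the sketch below (charm names and placement are illustrative, not the actual openstack-base content): no top-level bundle name, a machines section so placement directives can resolve, and list-valued "to":

```yaml
# Sketch of the v4 bundle layout discussed above.
series: trusty
machines:
  "0":
    series: trusty
services:
  ceph:
    charm: cs:trusty/ceph
    num_units: 1
    to: ["0"]            # place on the declared machine
  mysql:
    charm: cs:trusty/percona-cluster
    num_units: 1
    to: ["lxc:ceph/0"]   # lxc container alongside the first ceph unit
```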
15:58 <Makyo> jamespage, The only thing that might be a problem is that I had to include juju-gui in order for one placement to work, which may not fly when deploying via the gui.
15:59 <jamespage> Makyo, updated again
16:00 <jamespage> (I wrote a small parser to query the charmstore api and update revisions)
16:12 <Makyo> jamespage, Here's one that passes validation: http://paste.ubuntu.com/12000995/ We have to use the placement directives from the older style of bundles to reference the bootstrap node (cc rick_h_ urulama )
16:12 <Makyo> jamespage, we also no longer have a top-level YAML node ('openstack-base') in the charmstore
16:13 <rick_h_> Makyo: oh, yea, the bootstrap node is a no-no
16:14 <Makyo> rick_h_, Meaning I should take it out?
16:14 <rick_h_> Makyo: no, meaning that I expect that to not be pretty
16:14 <Makyo> rick_h_, ah, alright.  Yeah, if we're referencing the bootstrap node, we have to use v3-style placement directives, otherwise it gets lost looking for a machine named "0"
16:17 <jamespage> Makyo, ok - that's validating now
16:18 <Makyo> jamespage, Awesome
=== Guest47499 is now known as anthonyf
=== anthonyf is now known as Guest410
=== CyberJacob is now known as zz_CyberJacob
=== zz_CyberJacob is now known as CyberJacob
17:51 <beisner> jamespage or gnuoy - almost forgot this puppy.  please see/review/land:   https://code.launchpad.net/~billy-olsen/charms/trusty/rabbitmq-server/ch-sync-cli-fix/+merge/266619
17:51 <beisner> wolsen, fyi ^
17:52 <wolsen> beisner, last vote is that amulet still fails for it - though it does fix the import issue
17:52 <beisner> wolsen, ack.  my vote is to merge as-is, on the basis of fixing an import issue.  and address the pre-existing failure of that 1 test separately.
17:52 <wolsen> beisner, but I would be +1 on removing the import - yeah
17:53 <beisner> wolsen, that may or may not still happen, but as it's synced into all of the other os-charms, that is the state of things.
17:54 <beisner> wolsen, ie.  we won't very well be able to t-shoot the functional test with the import error.
17:54 <wolsen> beisner, if jamespage, gnuoy, or dosaboy don't respond in short order I'll land it
17:54 <beisner> wolsen, ack, tyvm
18:09 <beisner> dosaboy, wolsen - thanks fellas
=== Guest410 is now known as anthonyf
=== anthonyf is now known as Guest26675
19:10 <beisner> coreycb, fyi, dashboard 051-basic-trusty-juno-git is failing atm http://paste.ubuntu.com/12002116/   so far, that's the only *git* test failing in today's pre-release checks.
19:13 <coreycb> beisner, ok I'll take a look
19:14 <beisner> coreycb, ta
19:43 <beisner> wolsen, ping
19:43 <wolsen> beisner, pong
19:43 <beisner> so rmq passes locally, because the deployed instances and the bastion/host running tests are in the same ip space.   it fails in uosci, because the bastion host is on a different /24 than the deployed units.
19:44 <beisner> and...  the undercloud (serverstack) sec group looks like this:   http://paste.ubuntu.com/12002355/
19:44 <beisner> ie. packets just don't arrive
19:44 <beisner> so is my theory
19:45 <beisner> this hasn't been an issue, because no other os-charm tests attempt communication from the machine that's executing tests.
19:46 <beisner> it's all done via charm hooks and juju run, etc., which puts traffic on the same side of the fence, so to speak.
19:46 * wolsen looks at the code again
19:47 <wolsen> beisner, seems like a perfectly plausible scenario
19:47 <wolsen> what you are describing
19:53 <beisner> wolsen, well, maybe not.  the first 2 make it and check ok.  the 3rd check, which is to send to one rmq unit, then check the other rmq unit for that message, is what fails.
19:53 <beisner> idea, 1 sec..
19:56 <wolsen> beisner, yeah it shouldn't be the port issue since rabbitmq-server is exposed
20:01 <wolsen> beisner, agree with your idea - I think it's likely that there's a timing thing going on
20:06 <beisner> wolsen, well no dice, even with a 2 min wait.
20:07 <wolsen> beisner, hmm, may want to get some queue information to see what the ha policy is on the queue
20:07 <wolsen> make sure it is what we think it is
20:08 <beisner> wolsen, i think the test is fine. i think rmq or its cluster is not ok.
20:09 <beisner> i've got the enviro stood up, can send msg and chk msg from the same unit ok.   but when i send one manually to one unit, it never arrives at the 2nd.
20:10 <wolsen> beisner, well actually if you look at the "cluster status", it's only reporting itself in the cluster (each of them, actually)
20:10 <beisner> indeed wolsen
20:12 <beisner> oh wait, this thing is hard-coding a vip @ 192.168.77.11
20:12 <beisner> wolsen, ^
20:13 <jose> I need haaaaalp! anyone know a list of *all* (or at least most) ports that Juju uses?
20:14 <wolsen> beisner, that sounds like a bug as well, but I'm not sure it's _the_ problem
20:14 <wolsen> beisner, as the clustering doesn't rely on a vip iirc
20:14 <beisner> wolsen, right, this test isn't checking based on that, but when the tests are extended, they will need to consume the vip env var as pxc and hacluster do.
20:16 <beisner> wolsen, so the earlier tests should actually fail out on this.  ie.      # Verify that the rabbitmq cluster status is correct.
20:16 <wolsen> jose, 22, 17017, and 8040 for the storage port I believe (though check your ~/.juju/environments.yaml for the storage-port)
20:17 <wolsen> beisner, well that would be dashing if it did fail on that
20:17 <beisner> lol yep wolsen
20:19 <jose> thanks wolsen
20:23 <beisner> wolsen, so this is my first dive into the rmq test.
20:23 <beisner> so if i deploy 2 rmq units, do they just know to cluster together, like bunnies?
20:27 <wolsen> beisner, well if 2 bunnies got together you'd have a lot more than a cluster :P
20:27 <wolsen> beisner, but essentially, that's the theory
20:27 <wolsen> iirc, there will be an exchange of erlang cookies and the configuration files updated
20:30 <wolsen> beisner, do you have hook logs?
20:31 <beisner> yeah, but destroying, redeploying ...
20:36 <beisner> wolsen, i mean yah, we have a dozen or more jenkins jobs, all with full logs and etc pulls.
20:36 <wolsen> beisner, duh, should've looked there, thx
20:38 <beisner> wolsen, lol my proposal with the delay just freakin passed test 20.  http://10.245.162.77:8080/view/Dashboards/view/Amulet/job/charm_amulet_test/5639/console
20:38 <beisner> live view ^
20:39 <wolsen> beisner, with the 2 minute delay?
20:39 <beisner> wolsen, 30s
20:39 <beisner> wolsen, the 2 min was me doing it manually
20:39 <wolsen> beisner, well most importantly - it clustered
20:40 <beisner> wolsen, i really want to refactor these tests.   we're doing cluster tests inside one named relation check, etc.
20:40 <wolsen> beisner, +1
20:40 <beisner> wolsen, and check for the presence of all units in the cluster check, instead of just checking that the command succeeds.
20:41 <wolsen> yep
20:47 <beisner> bump bummmm.  30 passes
20:54 <beisner> dang. #40 failed
21:00 <sebas5384> ping jose
21:07 <sebas5384> jcastro: ping
21:17 <beisner> wolsen, for test 40 (just 2 rmq units), the cluster-relation-changed hook is never triggered (which looks to be by design).  we get an install hook, and a config-changed hook.  so afaict, it should not be expected to form a cluster.
21:23 <beisner> cluster-relation-joined, rather
21:51 <moqq> 1.24 doesn't use different ports, does it? i ran an upgrade from 1.23 to 1.24 and all the machines worked as expected except one, and it's stuck on 1.23 and when i start the agent it fails to connect to the master on 17070
21:52 <moqq> ah
21:52 <moqq> nevermind
21:52 * moqq facepalm
22:00 <mbruzek> hello marcoceppi.  There have been several changes to charm-helpers that I am interested in, can you make a release to pypi?
22:00 <mbruzek> or authorize me to do such a thing
22:17 <marcoceppi> moqq, are you still having high CPU issues on 1.24?
22:30 <elopio> Hello.
22:30 <elopio> I need to install a deb in my tarmac machine, but the tarmac charm doesn't allow for this.
22:30 <elopio> should I extend the charm to get deb urls to install, or just remember to install it manually in case I have to redeploy it?
22:33 <marcoceppi> elopio: extending the charm is one way; another way, if it's just a deb that needs to be added, is to create a subordinate charm that only has an install hook which installs that deb
22:34 <marcoceppi> that way you don't have to fork the main charm and you can pack any customizations into that subordinate charm
22:34 <elopio> marcoceppi: I'll try sending my change upstream. If they don't apply it soon, the subordinate seems good.
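The subordinate approach marcoceppi suggests needs very little: a metadata.yaml declaring subordinate: true with a container-scoped relation, plus an executable install hook. A minimal scaffold sketch; the charm name "deb-installer" and the package "mypackage" are placeholders, not an existing charm:

```shell
# Scaffold a hypothetical subordinate charm whose only job is to install
# one extra package alongside whatever principal it is related to.
mkdir -p deb-installer/hooks

cat > deb-installer/metadata.yaml <<'EOF'
name: deb-installer
summary: Install an extra deb alongside a principal charm
description: Hypothetical subordinate that installs one package at install time.
subordinate: true
requires:
  juju-info:
    interface: juju-info
    scope: container
EOF

cat > deb-installer/hooks/install <<'EOF'
#!/bin/sh
set -e
# Install the extra package on the principal's machine;
# 'mypackage' is a placeholder for the deb you actually need.
apt-get update
apt-get install -y mypackage
EOF
chmod +x deb-installer/hooks/install
```

Relating it to the principal service (e.g. `juju add-relation tarmac deb-installer`) places one unit of the subordinate in each principal unit's container, so the package gets installed on every redeploy without forking the main charm.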
22:42 <jcastro> sebas5384: yo!

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!