/srv/irclogs.ubuntu.com/2015/06/26/#juju-dev.txt

davecheneyda fuq ?00:01
thumperdavecheney: yeah, this seems to be the fundamental problem behind the lxc containers not upgrading00:06
* thumper is still digging00:06
* thumper tries to ignore work for a bit and go to lunch00:26
=== natefinch-afk is now known as natefinch
menn0thumper: based on circumstantial evidence only it looks like a stuck lease/leadership worker is behind bug 146656501:10
mupBug #1466565: Upgraded juju to 1.24 dies shortly after starting <cts> <landscape> <sts> <upgrade-juju> <juju-core:Triaged by menno.smits> <juju-core 1.24:Triaged by menno.smits> <https://launchpad.net/bugs/1466565>01:10
menn0thumper: based on a log message indicating that a watcher fired due to a change in the leases collection long after just about everything else was dead01:11
* menn0 goes to try a quick repro01:11
davecheneythumper: http://paste.ubuntu.com/11776424/01:18
davecheney5 races, including the obscure apiserver one01:18
davecheneythat we talked about in the standup01:18
natefinchdave, always playing the race card01:20
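(The race card, for context: go test -race instruments memory accesses and reports unsynchronized concurrent ones, which is what produced the five reports in the paste above. A minimal illustration of the kind of bug it flags - a hypothetical example, not one of davecheney's five:)

    // race_test.go - run with "go test -race" to see the report.
    package demo

    import "testing"

    func TestRace(t *testing.T) {
        n := 0
        done := make(chan struct{})
        go func() {
            n++ // write from a second goroutine
            close(done)
        }()
        n++ // unsynchronized write from the test goroutine: -race flags this pair
        <-done
    }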
natefinchericsnow: you around?01:27
=== kadams54 is now known as kadams54-away
davecheneythumper: who maintains gomaasapi ?01:49
davecheneyhttps://bugs.launchpad.net/juju-core/+bug/146897201:49
mupBug #1468972: provider/maas: race in launchpad.net/gomaasapi <juju-core:New> <https://launchpad.net/bugs/1468972>01:49
mupBug #1468972 opened: provider/maas: race in launchpad.net/gomaasapi <juju-core:New> <https://launchpad.net/bugs/1468972>01:51
menn0thumper: bingo... able to repro01:52
menn0wallyworld: we're never doing another 1.23 release again are we?01:54
wallyworldno01:55
wallyworldthat's the plan01:55
menn0cool01:55
menn0wallyworld: the reason I ask is that I'm looking at a problem upgrading out of a 1.23 env which seems fairly easy to hit (almost certainly due to the lease/leadership workers not exiting)01:56
wallyworldhmmm01:57
menn0wallyworld: adam c has hit it and I can repro it pretty easily01:57
menn0wallyworld: seems like anyone who ended up on 1.23 could have trouble getting off it01:57
wallyworldi guess we could do another release then01:57
menn0wallyworld: that wouldn't help01:57
wallyworldor have to01:57
=== kadams54-away is now known as kadams54
menn0wallyworld: they wouldn't be able to upgrade to that either01:57
wallyworldah yeah01:58
menn0wallyworld: the issue prevents the agent from exiting to restart into the new version01:58
wallyworldis there a workaround we can document?01:58
menn0wallyworld: it should be possible to work around it by manually setting the symlink01:58
axwmenn0: I *think* killing the jujud process would fix it01:58
wallyworldthat will have to be what we do then i guess01:58
axwit just deadlocks when shutting down01:59
axwif it's the bug I fixed01:59
menn0axw: no that doesn't help because the symlink gets changed as one of the very last things that jujud does b4 it exits01:59
axwah right01:59
menn0axw: and b/c some workers aren't finishing it's not getting to that01:59
* axw nods01:59
axwmenn0: btw, reviewed your branches. sorry for not doing so yesterday02:00
menn0adam gets a minute or so of working Juju before it wants to restart and then gets stuck02:00
menn0axw: thanks. no worries.02:00
menn0axw: good catches for both of the problems you noticed02:02
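(What a stuck worker means here, concretely: the agent only repoints the tools symlink and restarts into the new version once every worker has obeyed its kill request, so a worker loop that never selects on its dying channel wedges the whole upgrade - hence the manual-symlink workaround above. A minimal sketch of the shape the loop needs, assuming the gopkg.in/tomb.v1 lifecycle API juju workers used at the time; the lease logic itself is elided:)

    package lease

    import (
        "time"

        "gopkg.in/tomb.v1" // assumption: the tomb v1 API (Kill/Dying/Done/Wait)
    )

    type worker struct {
        tomb tomb.Tomb
    }

    func (w *worker) loop() {
        defer w.tomb.Done()
        for {
            select {
            case <-w.tomb.Dying():
                // Honouring the kill request is what lets Wait unblock,
                // the agent finish its final upgrade steps, and the
                // process restart into the new tools.
                return
            case <-time.After(30 * time.Second):
                // ... extend or expire leases here ...
            }
        }
    }

    func (w *worker) Kill()       { w.tomb.Kill(nil) }
    func (w *worker) Wait() error { return w.tomb.Wait() }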
=== anthonyf is now known as Guest78303
thumperdavecheney: technically we maintain gomaasapi02:12
thumpermenn0: which repro are you talking about?02:13
davecheneylaunchpad.net/gomaasapi02:13
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
menn0thumper: bug 146656502:44
mupBug #1466565: Upgraded juju to 1.24 dies shortly after starting <cts> <landscape> <sts> <upgrade-juju> <juju-core:Triaged by menno.smits> <juju-core 1.24:Triaged by menno.smits> <https://launchpad.net/bugs/1466565>02:44
thumpermenn0: yes?02:45
menn0thumper: this is pretty serious actually... anyone who upgraded to 1.23 is likely to have a hard time getting off it02:45
menn0thumper: see the ticket for trivial repro steps02:45
menn0thumper: manual steps are required to upgrade02:45
menn0thumper: the culprit appears to be the lease worker not honouring kill requests02:46
* thumper nods02:46
axwwallyworld: I'm playing around with the Azure portal, which looks like it's using the new model... and putting machines in the same AS still forces them to the same domain-name/IP02:58
axw:(02:58
wallyworldoh :-(02:58
wallyworldcan you email the ms guys we have been talking to and ask about it?02:58
axwwallyworld: ok03:00
wallyworldty, may not be the answer we want but at least they may be able to explain why etc03:00
mupBug #1466565 changed: Upgraded juju to 1.24 dies shortly after starting <cts> <landscape> <sts> <upgrade-juju> <juju-core:Triaged by menno.smits> <juju-core 1.24:Triaged by menno.smits> <https://launchpad.net/bugs/1466565>03:09
wallyworldaxw: there's a blue card in the Next lane - binding volumes/filesystems. That one has actually been done as part of the volume deletion work03:13
axwwallyworld: yes, apart from UI to change binding03:14
axwwallyworld: so I'll change it to just the missing bits03:14
wallyworldaxw: so i reckon we should add an unplanned card worth 5 or 8 for the work done03:14
axwwallyworld: it was part of the persistent volume deletion03:15
axwwhich was just woefully underestimated03:15
wallyworldyep, i underestimated the resources card too :-(03:15
wallyworldaxw: also, if/when you get a chance ptal at the resources pr again  :-)03:15
axwwallyworld: sure, just writing this email to guy03:16
wallyworldnp03:16
mupBug #1466565 opened: Upgraded juju to 1.24 dies shortly after starting <cts> <landscape> <sts> <upgrade-juju> <juju-core 1.23:Won't Fix by menno.smits> <juju-core 1.24:Invalid by menno.smits> <https://launchpad.net/bugs/1466565>03:21
axwwallyworld: sorry, dunno why I thought you were storing the URL now. I think I saw the params struct and thought that's what you were storing in state03:27
wallyworldnp03:28
axwwallyworld: LGTM03:31
wallyworldyay, ty03:31
mupBug #1466565 changed: Upgraded juju to 1.24 dies shortly after starting <cts> <landscape> <sts> <upgrade-juju> <juju-core 1.23:Won't Fix by menno.smits> <juju-core 1.24:Invalid by menno.smits> <https://launchpad.net/bugs/1466565>03:33
menn0omg so much fail04:00
menn0you pull a string and broken stuff appears everywhere04:01
thumperwallyworld, axw: can you join a hangout plxz?04:12
thumperhttps://plus.google.com/hangouts/_/canonical.com/onyx-standup04:12
axwthumper: omw04:13
axwthumper: are you in? just says "trying to join the call"04:15
thumperaxw: I had that earlier today too...04:15
* thumper tries a direct invite04:15
=== kadams54 is now known as kadams54-away
thumperaxw: when did this commit land BTW?04:28
axwthumper: 1.2404:28
thumperI'm wondering if we should pull 1.24.104:29
thumperbecause this problem will stop any non-state server upgrading I think04:29
axwthumper: probably not a bad idea. how come this got through CI? is it only affecting things that don't support KVM?04:29
thumperno idea04:30
thumpermaybe...04:30
thumperthere is an open issue though about CI around upgrades04:30
thumperas we have found so many upgrade problems04:30
thumperwhich CI didn't catch04:30
axwthumper: got the OK from OIL too I think, though not sure if they do upgrade or clean install04:30
thumperI assigned you to the wrong bug04:31
thumperhang on04:31
axwthumper: ta04:32
thumperbug 146696904:36
mupBug #1466969: Upgrading 1.20.14 -> 1.24.0 fails <canonical-bootstack> <upgrade-juju> <juju-core:Triaged> <juju-core 1.24:Triaged by axwalk> <https://launchpad.net/bugs/1466969>04:36
axwhurngh, can't test because I have vivid04:36
axwshould fail from 1.23 I guess04:36
thumper1.23 is terrible04:36
thumperyou can't upgrade from 1.23 due to lease / leadership issues04:36
thumpertry 1.22 or 1.2004:37
thumperI have some 1.20.14 binaries if you want them :)04:37
axwthumper: I can build them, juju 1.20 doesn't work on vivid04:37
axwno systemd04:37
thumperugh04:37
thumpergeez04:38
axwnever mind, I'll work something out04:38
thumperaxw: you could reproduce in ec204:38
axwyep. I think I have a VM anyway04:38
thumperaxw: by deploying ubuntu into a container04:38
thumperok04:38
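(A hedged sketch of that repro with the 1.x-era CLI - environment name and versions are placeholders taken from the discussion, not verified steps:)

    juju bootstrap -e ec2                # bootstrap with the old tools, e.g. 1.20.14
    juju deploy ubuntu --to lxc:0        # put a non-state-server agent inside a container
    juju upgrade-juju --version 1.24.0   # then watch the container's agent fail to upgrade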
thumperaxw: was this for 1.24.1 or 1.24.0?04:39
thumperaxw: because there is another bug about failing to upgrade from 1.24.0 to 1.24.104:39
axwthumper: pretty sure 1.24, I'll double check04:39
axw.0 I mean04:40
* thumper wouldn't be surprised if it is a different bug04:40
thumperso many bugs04:40
thumper:-(04:40
axwthumper: yep, 1.24.004:41
thumperok... so this other upgrade problem is something else04:41
* thumper takes a deep breath04:41
axwthumper: how do I work around this syslog upgrade issue?04:42
axw      upgrade to 1.24.2.1 failed (will retry): move syslog config from LogDir to DataDir: error(s) while moving old syslog config files: invalid argument04:42
thumperha04:42
thumperI build from the 1.24.1 tag04:42
axwI see, that was only broken in 1.24.2 ?04:43
thumperor mkdir /etc/juju-<namespace>04:43
thumperyep04:43
thumperit is the commit after updating the version to 1.24.204:43
axwokey dokey04:43
axwI'll try that04:43
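(The mkdir workaround suggests the failing step assumed the target directory already existed. A defensive sketch of such a move step - a hypothetical helper, not juju's actual upgrade code:)

    package upgrades

    import (
        "os"
        "path/filepath"
    )

    // moveConfFile moves one old syslog config file into newDir,
    // creating newDir first - the guard the failing step evidently lacked.
    func moveConfFile(oldPath, newDir string) error {
        if err := os.MkdirAll(newDir, 0755); err != nil {
            return err
        }
        return os.Rename(oldPath, filepath.Join(newDir, filepath.Base(oldPath)))
    }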
mupBug #1468994 opened: Multi-env unsafe leadership documents written to settings collection <juju-core:Triaged by menno.smits> <juju-core 1.24:In Progress by menno.smits> <https://launchpad.net/bugs/1468994>05:06
menn0thumper: digging into the leadership settings issue... the _id field was being prefixed correctly05:17
menn0thumper: but the env-uuid field wasn't being added05:17
menn0thumper: so there's no cross-env leakage issues, but the upgrade step definitely gets confused05:18
* menn0 updates ticket05:18
menn0axw: can I get a quick review of http://reviews.vapour.ws/r/2036/ please05:28
menn0it's a one-liner :)05:28
axwmenn0: sure05:28
axwmenn0: is there a minimal test you can add for it? or is that coming later?05:29
menn0axw: i'll have a look... i didn't have to change any tests when making this change05:30
axwmenn0: right, but we had missing test coverage right?05:30
axwmenn0: maybe not worthwhile. I'll LGTM and leave it to your discretion05:31
menn0axw: thinking about it, a test at this layer doesn't make sense since it's actually the responsibility of a lower level to add the env-uuid field05:32
axwmenn0: fair enough05:32
menn0axw: the fact that the lower layer didn't blow up when given a doc like this will be fixed in a later PR05:32
menn0axw: and tested there05:33
axwmenn0: SGTM05:33
axwshipit05:33
menn0axw: cheers05:33
menn0thumper: https://github.com/juju/juju/pull/2662 and https://github.com/juju/juju/pull/2661 are merging now. they're the minimum fixes for the leaderships settings doc env-uuid issue for 1.24 and master. More to come to avoid this kind of thing in the future of course.05:36
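(The shape of the bug, for reference: a multi-environment state document needs both a uuid-prefixed _id and a separate env-uuid field, and the leadership settings docs were getting only the former. An illustrative struct with mgo-style bson tags, not the exact juju one:)

    package state

    type leadershipSettingsDoc struct {
        DocID   string `bson:"_id"`      // was prefixed correctly, e.g. "<env-uuid>:leadership#<service>"
        EnvUUID string `bson:"env-uuid"` // this field was not being added
        // ... settings keys elided ...
    }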
axwthumper: seems there's another problem too :/    2015-06-26 06:22:31 ERROR juju.worker runner.go:218 exited "api": login for "machine-1" blocked because upgrade in progress06:25
axwthumper: (machine-1 hasn't upgraded yet)06:26
dimiternvoidspace, dooferlad, hey guys, since you're on call reviewers today, along with fwereade, please review any non-reviewed PRs with priority08:17
fwereadedimitern, am so doing :)08:17
dimiternfwereade, cheers :)08:24
dooferladdimitern: on it.08:30
dimiterndooferlad, ta!08:30
voidspacecool08:30
dooferladdimitern: the other topic for the day seems to be bootstack related. Should we sync up with Peter now?08:30
dimiterndooferlad, I'm talking to him in #juju @c08:31
dooferladdimitern: ah, I was expecting it on a different channel.08:31
dimiterndooferlad, standup?09:02
mupBug #1469077 opened: Leadership claims, document larger than capped size <landscape> <leadership> <juju-core:New> <https://launchpad.net/bugs/1469077>09:58
Syed_AHello !11:30
Syed_Asubmitted two bugs last night.11:30
Syed_A[1] https://bugs.launchpad.net/charms/+source/quantum-gateway/+bug/146893911:30
mupBug #1468939: Instances fail to get metadata: The 'service_metadata_proxy' option must be enabled. <quantum-gateway (Juju Charms Collection):New> <https://launchpad.net/bugs/1468939>11:30
Syed_Ahttps://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1468918/11:31
mupBug #1468918: neutron-server fails to start; python-neutron-vpnaas and python-neutron-lbaas packages are missing. <nova-cloud-controller (Juju Charms Collection):New> <https://launchpad.net/bugs/1468918>11:31
Syed_Ajamespage: Hello11:32
mupBug #1469130 opened: tools migration fails when upgrading 1.20.14 to 1.24.1 on ec2 <ec2-provider> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1469130>12:01
fwereademattyw, would you close http://reviews.vapour.ws/r/1460/ one way or the other? looks like it has ship-its12:02
mattywfwereade, oooh, had forgotten about this12:03
fwereademattyw, cheers12:03
mattywfwereade, the comments seem controversial - care to make a casting vote - land or just close?12:03
fwereademattyw, I'm inclined to trust dave and andrew's apparent approval; nobody's complained, so land it12:04
mattywfwereade, landing, thanks very much12:05
mattywfwereade, thanks for noticing, had totally forgotten about this12:06
fwereadeniedbalski, niedbalski_: so, I'm sorry, I don't know what happened with your patches http://reviews.vapour.ws/r/1698/ and http://reviews.vapour.ws/r/1717/ ; it seems they got ship-its but never landed? if you check whether they need updating, and let me know their status, I will make sure they get landed12:15
fwereadedimitern, http://reviews.vapour.ws/r/1403/ ?12:16
dimiternfwereade, looking12:24
dimiternfwereade, that needs to land yes, it's been a while12:25
dimiternfwereade, I'll fix/respond to the current reviews and ask you for a final stamp12:26
fwereadedimitern, cool12:26
jamespageSyed_A, which openstack release?12:38
Syed_Ajamespage: Kilo12:38
jamespageSyed_A, for that second bug, neutron-server is not supported on nova-cloud-controller - you have to use the neutron-api charm12:38
jamespagethat applies for >= kilo12:38
jamespageSyed_A, can you make sure that your quantum-gateway charm is up-to-date - the kilo template should have the right things set12:40
Syed_Ajamespage: Ok, so if i deploy neutron-api charm i wouldn't need to install vpnaas or lbass ?12:40
jamespageSyed_A, the neutron-api charm knows how to deploy those things for >= kilo12:40
Syed_Ajamespage: Roger that.12:40
jamespageit will enable them - nova-cloud-controller only supported 'embedded neutron-server' up to juno I think12:41
Syed_Ajamespage: This may be a silly question but how can i make sure that quantum-gateway charm is up-to-date ?12:41
dimiternfwereade, updated http://reviews.vapour.ws/r/1403/ PTAL12:41
jamespageSyed_A, are you deployed from branches or from the juju charm store?12:41
Syed_Ajamespage: juju charm store.12:49
jamespageSyed_A, which version does 'juju status' say you have deployed then12:49
jamespageSyed_A, version 16 has the required templates:12:50
jamespagehttps://api.jujucharms.com/charmstore/v4/trusty/quantum-gateway-16/archive/templates/kilo/nova.conf12:50
Syed_AOk... checking ...12:50
Syed_Ajamespage: charm: cs:trusty/quantum-gateway-1612:53
fwereadedimitern, LGTM12:54
jamespageSyed_A, what's your openstack-origin configuration?12:54
Syed_Ajamespage: Unfortunately, in this setup openstack-origin is not present but there is an ansible variable which specifies the openstack release, which is set to kilo.12:58
Syed_Ajamespage: The variable is used to set this repository, repo="deb http://ubuntu-cloud.archive.canonical.com/ubuntu {{ ansible_lsb.codename }}-updates/{{ openstack_release }} main"12:58
jamespageSyed_A, I need to understand what the charm thinks it should be doing12:59
jamespageif openstack-origin is not set correctly, it won't use the right templates12:59
Syed_Ajamespage: Ok, i am going to set openstack_origin in the config right now.12:59
jamespageirrespective of what you put in sources :)12:59
jamespageSyed_A, this may have worked in the past, but for the last release we switched how we determine openstack series to support the deploy from source feature in the charms13:00
jamespageSyed_A, my statement about openstack-origin will apply across all of the openstack charms btw13:01
jamespagethe template loader is constructed based on that configuration13:01
jamespageso it will assume a default of icehouse on trusty for example13:01
Syed_Ajamespage: Ohhh, i got it, so this might be the reason why this charm, which used to work fine, now fails.13:03
jamespageSyed_A, that's quite possible13:03
jamespageSyed_A, before, we determined the version based on the packages installed - however, for deploy from source, there aren't any openstack packages installed :-)13:04
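(What jamespage is asking for, as charm config - a hedged example, where cloud:trusty-kilo is the Ubuntu Cloud Archive pocket for kilo on trusty:)

    quantum-gateway:
      openstack-origin: cloud:trusty-kilo
    nova-cloud-controller:
      openstack-origin: cloud:trusty-kilo

(On an already-deployed service the 1.x CLI equivalent would be juju set quantum-gateway openstack-origin=cloud:trusty-kilo.)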
dimiternfwereade, last look? http://reviews.vapour.ws/r/1403/13:14
fwereadedimitern, if that's all you changed just land it :)13:15
dimiternfwereade, cheers :) will do13:15
Syed_Ajamespage: I am deploying a fresh setup with these configs. [1] http://paste.ubuntu.com/11778630/ && [2] http://paste.ubuntu.com/11778641/13:42
jamespageSyed_A, openstack-dashboard needs openstack-origin as well13:43
jamespagebut looks much better13:43
jamespageSyed_A, I must introduce you to bundles :-)13:43
Syed_Ajamespage: bundles ? :)13:44
jamespageSyed_A, hmm - you're doing a lot of --to=X to the same machines ?13:44
Syed_Ajamespage: Yes, specifying exactly where a service should go. Isn't that good practice?13:45
jamespageSyed_A, bundles - https://jujucharms.com/openstack-base/13:45
jamespageSyed_A, pushing multiple services onto the same machines without using containers won't work13:46
jamespageSyed_A, https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ProviderColocationSupport13:47
Syed_Ajamespage: This is why i was working on an lxc based OpenStack deployment. But for now we are just deploying nova-compute and quantum-gateway on separate machines, which used to work in the past.13:50
Syed_Ajamespage: Our lxc based bits are also ready just need to patch the lxc-ubuntu-cloud template for our 3 nics per container requirement.13:50
jamespageSyed_A, I thought you were - good13:51
jamespageyour pastebin confused me13:51
Syed_Ajamespage: Sorry about that. alice(controller) is 1, bob(compute) is 2 and charlie(quantum-gateway) is 3. :)13:52
jamespageSyed_A, but you are going to use lxc containers right?13:52
Syed_Ajamespage: No, not in this setup.13:53
jamespageSyed_A, most of the controller services won't work13:53
jamespageSyed_A, they assume control over the filesystem, so are not safe to deploy without containers13:53
Syed_Ajamespage: ohhh that would be a problem. :/13:55
jamespageSyed_A, yeah - I know they will all at-least conflict on haproxy configuration13:56
jamespageSyed_A, we enable that by default now13:56
Syed_Ajamespage: for haproxy, we have a customized haproxy.cfg which fixes the issue13:57
* fwereade was up until 2 last night, taking an extended break, may or may not be back at a reasonable time14:02
jamespageSyed_A, you guys are terrifying me - all I can say is ymmv14:03
mbruzekHas anyone seen a problem with the GCE provider today?  The juju bootstrap command is giving this error: ERROR failed to bootstrap environment: cannot start bootstrap instance: no "trusty" images in us-central1 with arches [amd64 arm64 armhf i386 ppc64el]14:04
sinzuimbruzek: I am in #cloudware. I haven’t gotten any answers14:07
sinzuimbruzek: there are NO images for gce http://cloud-images.ubuntu.com/releases/streams/v1/com.ubuntu.cloud:released:gce.sjson14:08
Syed_Ajamespage: Our goal is to eventually move towards an lxc based openstack deployment as suggested by the community. Right now i am only trying to fix this issue for the time being. We have every intention of following the process suggested on the ubuntu wiki.14:08
mbruzeksinzui: strange that this worked before, I am just seeing this error today14:08
sinzuimbruzek: CI tests gce, we saw the failure about 15 hours ago.14:09
mbruzeksinzui: Did you file a bug that I can contribute to?14:10
sinzuimbruzek: no, because this is an ops issue. I am not aware of a project for gce images14:10
sinzuimbruzek: I am crafting a email asking for someone with power to explain the situation14:11
Syed_Ajamespage: You were right about the conflict at haproxy, neutron-api failed to install and logs this: INFO install error: cannot open 9696/tcp (unit "neutron-api/0"): conflicts with existing 9696/tcp (unit "nova-cloud-controller/0")14:17
Syed_Ajamespage: Looks like nova-cloud-controller and neutron-api are both installing neutron-server.14:20
jamespageSyed_A, yes14:21
jamespageSyed_A, hmm - yes - that won't work well on a single unit14:21
jamespageSyed_A, there is a huge assumption in the charms that they 'own' the unit14:21
Syed_Ajamespage: Ok, so how can i stop nova-cloud-controller from installing neutron-server.14:25
Syed_Ajamespage: Will it work if i deploy the neutron-api unit on the quantum-gateway node ?14:29
jamespageSyed_A, nope - neutron-api will trample all over the gateway charms config files14:30
Syed_Ajamespage: compute node then ?14:30
jamespageSyed_A, nova-cc decides to stop managing neutron-server - but not straight away14:30
jamespageSyed_A, same problem - but this time neutron-openvswitch's config files14:30
jamespageSyed_A, the charms are just not designed for this type of use14:30
mupBug #1469184 opened: listSuite teardown fails <ci> <intermittent-failure> <unit-tests> <juju-core:Incomplete> <juju-core 1.24:New> <https://launchpad.net/bugs/1469184>14:32
mupBug #1469186 opened: ContextRelationSuite teardown fails <ci> <intermittent-failure> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1469186>14:32
Syed_Ajamespage: Don't you think charms should be able to deploy a standalone controller node, say a VM?14:32
jamespageSyed_A, I'm averse to changing the design principle each charm has in that it 'owns' the unit filesystem14:34
jamespageSyed_A, LXC containers give us a lightweight way to manage this, without having to have a lot of complexity in the charms to deal with this problem14:35
Syed_Ajamespage: I am inclined to agree with you. LXC works better, but somebody wanting to deploy an openstack controller node without using lxc is a valid use case.14:37
jamespageSyed_A, I don't disagree with that - just saying maybe the charms are not the right way to fulfil that14:38
natefinchfwereade: why did we write our own RPC implementation when there's one in the stdlib?14:45
mupBug #1469189 opened: unitUpgraderSuite teardown panic <ci> <intermittent-failure> <unit-tests> <juju-core:Incomplete> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1469189>15:02
mupBug #1469193 opened: juju selects wrong address for API <sts> <juju-core:New> <https://launchpad.net/bugs/1469193>15:02
mupBug #1469196 opened: runlistener nil pointer / invalid address <ci> <intermittent-failure> <unit-tests> <juju-core:Incomplete> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1469196>15:02
Syed_Ajamespage: Ok let's say i fix the neutron-server manually but what about the instance metadata not working ?15:02
jamespageSyed_A, that should be fixed by correctly specifying openstack-origin15:02
Syed_Ajamespage: testing ...15:06
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
Syed_Ajamespage: Ok so instance metadata is working.15:33
fwereadenatefinch, I can't remember what it does that the stdlib one didn't, but I know it was something :/15:33
Syed_Ajamespage: As per your suggestion correctly specifying openstack-origin fixed the issue.15:33
fwereadenatefinch, rogpeppe would remember15:33
rogpeppenatefinch: there were a few reasons15:34
rogpeppenatefinch: the main one is that with the stdlib version you don't get to have per-connection context15:34
natefinchrogpeppe: ahh, interesting, yeah15:35
rogpeppenatefinch: also, the way you have to phrase the stdlib methods is awkward15:35
Syed_Ajamespage: If somebody is deploying openstack on a public cloud and they cannot use lxc, the suggestion here would be to start a new vm and install neutron-api as a standalone unit there ?15:38
mupBug #1469199 opened: State server seems to have died <cloud-install-failure> <juju-core:New> <https://launchpad.net/bugs/1469199>15:41
natefinchrogpeppe: yeah, the stdlib way is kind of annoying, I'm surprised they didn't do it the way ours does... (traditional val, error return)... but I'm sure there was a reason at the time15:41
rogpeppenatefinch: it's simpler to implement the way they did it15:41
rogpeppenatefinch: but my reasoning was we were going to be writing lots of API entry points, so the additional complexity in the rpc package was worth it15:42
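(The difference in method shape rogpeppe is describing, side by side - illustrative types, not juju's actual facades:)

    package rpcdemo

    type Args struct{ Name string }
    type Reply struct{ Found bool }

    // net/rpc forces this shape: a reply out-parameter and an error
    // return, with no place to hang per-connection state such as the
    // authenticated entity.
    type StdlibService struct{}

    func (s *StdlibService) Lookup(args Args, reply *Reply) error {
        reply.Found = args.Name != ""
        return nil
    }

    // A facade in the juju style returns plain values, and the facade
    // itself can be constructed per connection, carrying context.
    type Facade struct {
        authTag string // per-connection context lives here
    }

    func (f *Facade) Lookup(args Args) (Reply, error) {
        return Reply{Found: args.Name != ""}, nil
    }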
voidspacemgz: ping15:46
mgzvoidspace: hey15:49
voidspacemgz: it's alright, I think I've sorted it15:49
voidspacemgz: had a question about gomaasapi which you seem to have touched15:50
mgzvoidspace: okay, I shall remain in the dark15:50
voidspacemgz: heh15:51
voidspacemgz: I hate creating JSON maps in Go :-/15:53
mgzvoidspace: it is not the most fun15:53
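(The sort of thing voidspace means - nested map[string]interface{} literals to build a JSON request body. A generic sketch, not an actual gomaasapi call:)

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Every level of nesting needs its own map or slice literal,
        // with interface{} everywhere and no compile-time checking.
        body := map[string]interface{}{
            "name": "eth0",
            "links": []map[string]interface{}{
                {"mode": "auto", "subnet": 42},
            },
        }
        b, err := json.Marshal(body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
    }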
jamespageSyed_A, yes - but that is very much an edge case15:56
jamespagemost clouds are deployed on metal :-)15:57
jamespageSyed_A, in fact what you suggest is exactly how we test the openstack charms - we have a small QA cloud (5 compute nodes) which we can stand up a full openstack cloud on top of15:57
jamespagewe can run ~15 clouds in parallel15:57
jamespageand do things like test HA etc...15:57
Syed_Ajamespage: Correct, most clouds are deployed on metal. But with the latest charms neutron-api and nova-cloud-controller cannot be installed on the same physical machine ?15:58
jamespageSyed_A, that is absolutely the case - and you will hit issues with other conflicts as well16:00
jamespageSyed_A, which is why we have https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ProviderColocationSupport16:00
Syed_Ajamespage: We also have a small setup where we test openstack. I set up an HA LXC openstack setup last week. Which was fun :)16:00
jamespage:-)16:00
jamespageSyed_A, it's neat - the qa cloud i refer to is juju deployed, and has an HA control plane under lxc as well16:00
Syed_Ajamespage: Cool !16:02
ericsnownatefinch: regarding RB, did you mean the GH integration isn't working or something else?16:18
=== kadams54 is now known as kadams54-away
sinzuimbruzek: gce streams are back16:27
mbruzeksinzui: thank you16:27
natefinchericsnow: yes, the GH integration... like, I made a PR vs. juju-process-docker and no review was created on RB16:55
natefinchericsnow: I probably just missed a step16:56
ericsnownatefinch: yeah, the repo did not have the web hook set up (I've added it)17:06
=== kadams54-away is now known as kadams54
natefinchericsnow: can you document the steps in the wiki?17:08
ericsnownatefinch: sure17:09
natefinchericsnow: so, process server api in process/api/server.go?18:03
ericsnownatefinch: how about process/api/server/uniter.go18:04
ericsnownatefinch: params would live in process/api/params.go18:04
natefinchericsnow: is there a reason to split out the params, server, and client stuff?  if each one is fairly simple and probably fits in a single file...18:05
ericsnownatefinch: my expectation is that it won't fit well in a single file18:06
natefinchericsnow: ok18:06
=== kadams54 is now known as kadams54-away
natefinchericsnow: when are those state functions getting merged into the feature branch?19:43
ericsnownatefinch: likely not before Monday19:44
natefinchericsnow: ok19:44
natefinchthis whole "duplicate every single struct in the API" thing gets really tiresome20:21
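(The tiresome pattern, in miniature - hypothetical names loosely following the process/api layout discussed earlier. Every internal struct gets a wire-format twin plus conversion code, the price of keeping the API contract independent of internal types:)

    package procdemo

    // Internal domain type.
    type ProcessInfo struct {
        Name   string
        Status string
    }

    // Wire-format twin in the params layer, duplicated field for field.
    type ProcessInfoParams struct {
        Name   string `json:"name"`
        Status string `json:"status"`
    }

    // And the inevitable conversion boilerplate.
    func toParams(p ProcessInfo) ProcessInfoParams {
        return ProcessInfoParams{Name: p.Name, Status: p.Status}
    }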
mupBug #1469318 opened: apitserver: TestAgentConnectionsShutDownWhenStateDies takes > 30 seconds to run <juju-core:New> <https://launchpad.net/bugs/1469318>21:51

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!