[00:01] <cherylj> alexisb: "cannot determine machine endpoint bindings: endpoint bindings
[00:01] <cherylj>       for "s#dummy-source" not found"
[00:01] <cherylj> alexisb: not sure what that means in terms of their code change, though
[00:02] <alexisb> ok
[00:50] <davecheney> 2016-02-02 00:47:47 ERROR juju.worker runner.go:229 exited "toolsversionchecker": cannot update tools information: cannot get latest version: cannot find available tools: cannot read product data, invalid URL "https://juju-dist.s3.amazonaws.com/tools/streams/v1/com.ubuntu.juju:released:tools.sjson" not found
[00:51] <davecheney> 2016-02-02 00:47:55 ERROR juju.worker runner.go:229 exited "toolsversionchecker": cannot update tools information: cannot get latest version: cannot find available tools: cannot read product data, invalid URL "https://juju-dist.s3.amazonaws.com/tools/streams/v1/com.ubuntu.juju:released:tools.sjson" not found
[00:51] <davecheney> is it _really_ necessary to poll for new tools every 8 seconds
[00:55] <cherylj> juju has OCD
[01:17] <davecheney> honestly, checking once an hour would be overkill
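The 8-second cadence above comes from the worker runner restarting the failed toolsversionchecker, not from a deliberate poll. A hedged sketch (in Python for brevity; juju's workers are Go, and the function name here is invented) of the exponential-backoff retry cadence being asked for:

```python
def backoff_delays(base=8.0, cap=3600.0):
    """Yield retry delays in seconds, doubling from `base` and
    capping at `cap` (one hour, per the complaint above), instead
    of retrying a failing check at a flat 8-second interval."""
    delay = base
    while True:
        yield delay
        delay = min(delay * 2, cap)
```

So a checker that keeps failing backs off 8s, 16s, 32s, ... and settles at one retry per hour rather than hammering the tools metadata URL.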
[01:40] <davecheney> thumper: https://github.com/juju/juju/pull/4259
[01:40] <davecheney> pprof PR for review
[01:45] <davecheney> i wish the juju deployer hadn't stopped working
[01:45] <davecheney> i know people tell me that i'm using an outdated version
[01:45] <davecheney> but frankly, if we (juju) have to live in a world where we have to keep making the version in trusty work for the next 3 years
[01:45] <davecheney> then the same should apply to the deployer
[01:51] <mwhudson> axw: do you know things about azure?
[01:55] <davecheney>   File "/usr/lib/python2.7/dist-packages/deployer/env/go.py", line 96, in get_cli_status
[01:55] <davecheney>     status = super(GoEnvironment, self).get_cli_status()
[01:56] <davecheney> cherylj: thumper who changed the default juju status to be _not_ yaml
[01:56] <davecheney> that has broken the juju-deployer shipped in trusty
[01:58] <davecheney> yup, that's the problem
[01:58] <davecheney> cherylj: where should I log this bug ? against juju
[01:58] <davecheney> or against the deployer that it needs to call juju status with an explicit formatter ?
[02:00] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1540697
[02:00] <mup> Bug #1540697: Recent change to juju status has broken compatibility with trusty juju-deployer <juju-core:New> <https://launchpad.net/bugs/1540697>
[02:05] <axw> mwhudson: I do, sorry, missed your message
[02:05] <mwhudson> axw: heh you answered the thread anyway
[02:07] <axw> davecheney: the (informal) plan for 2.0 is that anyone who needs structured output needs to explicitly specify the format
[02:07] <axw> davecheney: otherwise you get tabular, and its format might change over time
[02:08] <natefinch> wallyworld, axw: git blame says you guys might know about the implementation of deploying bundles with juju core?
[02:08] <axw> so long as that holds, deployer needs to be changed
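The fix being discussed is essentially a one-liner in the deployer: request structured output explicitly instead of relying on the old YAML default. A sketch of the idea (the function name comes from the traceback above; the injectable `run` parameter and the rest of the body are assumptions for illustration, not the deployer's actual code):

```python
import json
import subprocess

def get_cli_status(env_name, run=subprocess.check_output):
    """Fetch `juju status` as structured data. Passing --format
    explicitly means a future change to the default output format
    (now tabular) cannot break parsing again."""
    out = run(["juju", "status", "-e", env_name, "--format", "json"])
    return json.loads(out)
```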
[02:08] <axw> natefinch: I know some, I added storage support
[02:08] <mwhudson> i guess deployer in trusty needs to be sru-ed then?
[02:08] <wallyworld> natefinch: francesco did the work, we tweaked as needed
[02:09] <natefinch> wallyworld: is it me, or do we just drop all the flags (except storage) on the floor if you target a bundle?
[02:09] <mup> Bug #1540697 opened: Recent change to juju status has broken compatibility with trusty juju-deployer <juju-core:New> <https://launchpad.net/bugs/1540697>
[02:09] <wallyworld> davecheney: deployer in trusty will work with 1.25
[02:09] <wallyworld> that is our mandate
[02:10] <axw> natefinch: which flags?
[02:10] <wallyworld> 2.0 will be co-installable with 1.25 and 1.25 will be supported for 2 years
[02:10] <natefinch> axw: juju deploy flags
[02:10] <wallyworld> 2.0 will not be backwards compatible
[02:10] <wallyworld> we will support juju in trusty via 1.25
[02:11] <natefinch> axw: like --constraints -n --networks etc ... looking at the code, it seems like we would totally let you 'juju deploy cs:/bundle/foo/bar --constraints=blah' and not complain, but completely ignore the constraints
[02:12] <natefinch> axw: it's the "not complain" part that bothers me
[02:12] <axw> natefinch: looks a bit like it. I'd say because it kind of changes the meaning of deploy entirely. not one service, but multiple
[02:12] <axw> natefinch: yeah
[02:12] <axw> we should error if you specify them
[02:12] <natefinch> axw: exactly my thinking
[02:13] <axw> natefinch: I added a bundle-specific syntax to the --storage command that allows you to specify for services in a bundle. we probably should do that for other things, but erroring would be a good start
[02:14] <natefinch> axw: yep.  ok, just wanted to make sure I wasn't missing anything obvious before filing a bug
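The error natefinch and axw agree on could be a simple guard before dispatch. A hedged Python sketch (juju's CLI is Go; `is_bundle` is a crude stand-in for real charm-URL parsing, and the flag names come from the conversation above):

```python
# --storage is exempt because axw added a bundle-aware syntax for it.
BUNDLE_COMPATIBLE_FLAGS = {"storage"}

def is_bundle(target):
    # Crude illustration only: treat charm-store URLs with a "bundle"
    # series as bundles, e.g. "cs:bundle/mediawiki-single".
    return "bundle/" in target

def check_deploy_flags(target, flags):
    """Raise instead of silently ignoring single-service flags
    (--constraints, -n, --networks, ...) on a bundle deploy."""
    if not is_bundle(target):
        return
    bad = sorted(set(flags) - BUNDLE_COMPATIBLE_FLAGS)
    if bad:
        raise ValueError(
            "flags not supported when deploying a bundle: " + ", ".join(bad))
```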
[02:25] <davecheney> wallyworld: ahh, ok
[02:25] <davecheney> thumper: http://paste.ubuntu.com/14854594/
[02:26] <wallyworld> davecheney: we should try and make deployer work though if possible i think. 2.0 has deployer built in but i don't think it is yet fully functionally complete
[02:26] <davecheney> this is on a modest sized deployment
[02:26] <davecheney> wallyworld: it's a trivial change to the deployer
[02:26] <davecheney> but that has to be backported to trusty
[02:26] <davecheney> which is not trivial
[02:26] <thumper> davecheney: nice
[02:26] <wallyworld> davecheney: it is - my other concern is that in 2.0 old yaml fields like agent-state are gone
[02:26] <davecheney> thumper: i'm going to commit this and backport it
[02:26] <davecheney> so we can investigate that failure
[02:26] <thumper> ack
[02:27] <wallyworld> so deployer may need subsequent tweaking
[02:29] <davecheney> precise environments still have two mongodbs installed ...
[02:29] <davecheney> root@ip-10-250-24-28:/tmp# ps auxww | grep mongo
[02:29] <davecheney> mongodb   5462  0.6  0.9 372940 38264 ?        Ssl  01:48   0:15 /usr/bin/mongod --config /etc/mongodb.conf
[02:29] <davecheney> root      6410  3.2  1.2 3026932 46808 ?       Ssl  02:28   0:01 /usr/bin/mongod --auth --dbpath /var/lib/juju/db --sslOnNormalPorts --sslPEMKeyFile /var/lib/juju/server.pem --sslPEMKeyPassword xxxxxxx --port 37017 --noprealloc --syslog --smallfiles --journal --keyFile /var/lib/juju/shared-secret --replSet juju --ipv6 --oplogSize 512
[02:29] <davecheney> root      6511  0.0  0.0   8100   932 pts/1    S+   02:29   0:00 grep --color=auto mongo
[02:29] <davecheney> our mongodb does not restart if the process dies for any reason :(
[02:30] <thumper> really?
[02:30] <thumper> isn't that what upstart or systemd does?
[02:30] <davecheney> kill 6410
[02:30] <davecheney> process does not restart
[02:32] <davecheney> root@ip-10-250-24-28:/tmp# initctl list | grep juju
[02:32] <davecheney> juju-db stop/waiting
[02:32] <davecheney> juju-clean-shutdown stop/waiting
[02:32] <davecheney> jujud-machine-0 start/running, process 5548
[02:32] <davecheney> process does not restart
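For reference, upstart only restarts a job that declares it. A minimal skeleton of a respawning job (illustrative only; whether the real juju-db job actually omits `respawn` isn't shown in this log):

```
# /etc/init/juju-db.conf -- illustrative skeleton, not juju's actual job
description "juju state database"
start on runlevel [2345]
stop on runlevel [!2345]
# Restart the process if it exits for any reason, but give up
# after 10 respawns within 5 seconds.
respawn
respawn limit 10 5
exec /usr/bin/mongod --config /etc/mongodb.conf
```

Without the `respawn` stanza, upstart treats an exit as a normal stop, which matches the `kill 6410` behaviour davecheney observed.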
[02:39] <mup> Bug #1512718 changed: Leader election fails with "leadership failure: leadership manager stopped" <arm64> <landscape> <sts> <juju-core:Triaged> <juju-core 1.24:Triaged> <juju-core 1.25:Incomplete> <https://launchpad.net/bugs/1512718>
[02:39] <mup> Bug #1540697 changed: Recent change to juju status has broken compatibility with trusty juju-deployer <juju-core:Invalid> <juju-deployer:New> <https://launchpad.net/bugs/1540697>
[02:39] <mup> Bug #1540701 opened: juju deploy <bundle> silently ignores most CLI flags <bundles> <deploy> <juju-core:New> <https://launchpad.net/bugs/1540701>
[02:40] <thumper> wallyworld: under which situation will a service not have a status doc?
[02:40] <wallyworld> thumper: never; there ws a bug in an early 1.24(?) release where that could have been the case
[02:41] <thumper> wallyworld: so I can assume that there will always be a status doc for services?
[02:41] <thumper> for migration?
[02:41] <wallyworld> yep
[02:41] <thumper> awesome
[02:41] <thumper> ta
[02:41] <wallyworld> the status doc is written first
[02:41] <wallyworld> in the []txn.Ops slice
[02:42] <thumper> kk
[02:55] <davecheney> thumper: master is blocked
[02:55] <davecheney> can I have your permission to land this change ?
[03:12] <davecheney> thumper: an environment with 11 machines has 51 firewall watchers running
[03:12] <thumper> 51?
[03:12] <thumper> wow
[03:12] <davecheney> no, sorry,
[03:12] <davecheney> 20 @ 0x437c2a 0x446482 0x445472 0xa3f46f 0xa40234 0xd53b36 0x464e41
[03:12] <davecheney> #	0xa3f46f	github.com/juju/juju/worker/firewaller.(*machineData).watchLoop+0x43f		/home/dfc/src/github.com/juju/juju/worker/firewaller/firewaller.go:767
[03:12] <davecheney> #	0xa40234	github.com/juju/juju/worker/firewaller.(*Firewaller).startMachine.func1+0x44	/home/dfc/src/github.com/juju/juju/worker/firewaller/firewaller.go:243
[03:12] <davecheney> #	0xd53b36	github.com/juju/juju/worker/catacomb.Invoke.func3+0x76				/home/dfc/src/github.com/juju/juju/worker/catacomb/catacomb.go:115
[03:12] <davecheney> 20
[03:13] <davecheney> we do have 58 mongodb connections in use
[03:13] <davecheney> http://paste.ubuntu.com/14854643/
[03:13] <davecheney> it's just a bloodbath
[03:13] <davecheney> and this environment is idle
[03:15] <thumper> davecheney: I've added my mark on the github PR
[03:16] <thumper> davecheney: try to land it now
[03:34] <davecheney> ta
[03:42] <davecheney> cmd/pprof/pprof.go:61:2: cannot find package "runtime/trace" in any of: /usr/lib/go/src/pkg/runtime/trace (from $GOROOT) /mnt/jenkins/workspace/github-merge-juju/tmp.f4uCbWbBZu/RELEASE/src/runtime/trace (from $GOPATH)
[03:42] <davecheney> shakes fist at Go 1.2
[03:43]  * davecheney deletes things til go 1.2 stops complaining
[04:03] <mup> Bug #1540316 changed: provider/maas: bridgescript should support vlans <maas-provider> <juju-core:New> <https://launchpad.net/bugs/1540316>
[04:06] <mup> Bug #1540316 opened: provider/maas: bridgescript should support vlans <maas-provider> <juju-core:New> <https://launchpad.net/bugs/1540316>
[04:15] <mup> Bug #1540316 changed: provider/maas: bridgescript should support vlans <maas-provider> <juju-core:New> <https://launchpad.net/bugs/1540316>
[04:25] <axw> wallyworld: not quite as big as your diffs, but sorry all the same: http://reviews.vapour.ws/r/3698/diff/#
[04:26] <wallyworld> revenge is sweet
[04:26] <axw> on a positive note, net reduction of ~1600 lines
[04:49] <wallyworld> axw: reviewed, i love seeing code deleted
[04:49] <axw> wallyworld: thanks
[04:53] <axw> wallyworld: there's a card for creating the second model already, is a TODO necessary?
[04:53] <wallyworld> nah
[05:26] <davecheney> thumper, each client of a 1.25 api server consumes 50 goroutines
[06:17] <axw> wallyworld: how do you feel about this? https://github.com/juju/juju/compare/cloud-credentials...axw:cloud-credentials-detect-regions
[06:23] <anastasiamac> axw: love the name! detector :D
[06:24] <wallyworld> axw: looking
[06:29] <wallyworld> axw: it does give a way to avoid having a personal clouds file for openstack, if standard env vars are set via sourcing a credentials file or whatever
[06:29] <wallyworld> i think it makes sense
[06:29] <wallyworld> and we can use an lxd env var for the lxd host
[06:30] <axw> wallyworld: might not even need to, you could list the remotes
[06:30] <axw> wallyworld: e.g. "lxc remote list"
[06:30] <axw> wallyworld: and report each of those as a region
[06:30] <wallyworld> true
[06:31] <axw> wallyworld: I'll tidy this up and propose then. it's a bit less hacky than what was there, and supports openstack. we can revisit if there's dissent
[06:33] <wallyworld> yup
[09:09] <voidspace> dimitern: thanks for the review, I replied to all your comments.
[09:10] <voidspace> dimitern: is your WIP branch ready yet?
[09:11] <dimitern> voidspace, it is ready - the ci run last night revealed the main issue, which I'm fixing now: don't assume bindings always exist
[09:12] <dimitern> voidspace, so I'll be pushing a few more commits on top to fix that and re-enable the tests I disabled
[09:12] <voidspace> dimitern: cool
[09:28] <dimitern> voidspace, responded to your replies
[09:33] <voidspace> dimitern: I think you're actually wrong about the test channel - the test really is the owner as it closes the channel.
[09:33] <voidspace> dimitern: and the "code path" is still the same.
[09:33] <voidspace> dimitern: the other two issues - ok.
[09:35] <dimitern> voidspace, I don't mind that much if you leave the chan creation outside of the patched func
[10:02] <frobware> dimitern, standup
[10:08] <voidspace> frobware: dimitern: dooferlad: PR for merge of latest master changes back to maas-spaces. http://reviews.vapour.ws/r/3703/
[10:12] <dimitern> voidspace, LGTM - you'll likely manage to land this before my branch which also includes it, so there shouldn't be a conflict
[10:13] <mup> Bug #1540832 opened: Hard wired bridge prevents the use of the fan overlay network <juju-core:New> <https://launchpad.net/bugs/1540832>
[10:16] <mup> Bug #1540832 changed: Hard wired bridge prevents the use of the fan overlay network <juju-core:New> <https://launchpad.net/bugs/1540832>
[10:19] <mup> Bug #1540832 opened: Hard wired bridge prevents the use of the fan overlay network <juju-core:New> <https://launchpad.net/bugs/1540832>
[10:51] <frobware> dimitern, dooferlad, voidspace, jam: the trusty/series issue we talked about in the standup is being tracked here atm: https://bugs.launchpad.net/juju-core/+bug/1540771
[10:51] <mup> Bug #1540771: Bootstrap fails with trusty is not a valid distro_series <bootstrap> <juju-core:Invalid> <juju-core maas-spaces:Triaged> <https://launchpad.net/bugs/1540771>
[10:53] <frobware> mgz, looking for help with bug 1540771. dimitern plans to update his branch again (RSN) and we'll want another CI run but little point if we keep running into said bug.
[10:53] <mup> Bug #1540771: Bootstrap fails with trusty is not a valid distro_series <bootstrap> <juju-core:Invalid> <juju-core maas-spaces:Triaged> <https://launchpad.net/bugs/1540771>
[11:22] <dooferlad> frobware/dimitern/voidspace: Updated bind syntax: http://reviews.vapour.ws/r/3569/diff
[11:23] <dimitern> dooferlad, cheers, looking
[11:34] <dimitern> dooferlad, reviewed
[12:07] <dooferlad> dimitern: http://reviews.vapour.ws/r/3569 addressed those issues.
[12:28] <dimitern> dooferlad, thanks! let's get it in then I guess
[12:38] <mup> Bug #1540900 opened: juju deploy ignores model default-series <juju-core:New> <https://launchpad.net/bugs/1540900>
[12:46] <jam> fwereade_: ping
[12:47] <mup> Bug #1540900 changed: juju deploy ignores model default-series <juju-core:New> <https://launchpad.net/bugs/1540900>
[12:47] <mfoord> frobware: what's the release notes link again please
[12:48] <mfoord> frobware: never mind, found it
[12:48] <frobware> mfoord, thought so as you show up as already viewing. :)
[12:53] <mfoord> frobware: heh
[12:56] <mup> Bug #1540900 opened: juju deploy ignores model default-series <juju-core:New> <https://launchpad.net/bugs/1540900>
[12:56] <jam> fwereade_: I had a question about "connects", and another one about some lxd configuration stuff
[13:17] <fwereade_> jam, heyhey
[14:01] <voidspace> cherylj:  dimitern:  frobware: alexisb: given that 2.0 is dropping maas 1.8 support we should probably not mention improved support for maas 1.8 in the release notes (there's a maas 1.8 compatibility section)
[14:01] <voidspace> bug 1534636
[14:01] <mup> Bug #1534636: Destroying a hosted model in the local provider leaves the controller unusable <juju-release-support> <juju-core:Triaged> <https://launchpad.net/bugs/1534636>
[14:01] <frobware> jam: saw your doc, but punting until I get the maas-spaces unit tests to work on ppc64
[14:02] <voidspace> cherylj:  dimitern:  frobware: alexisb: we can probably also drop the mention of bug 1534636 as a known issue, as the local provider won't be there in 2.0
[14:02]  * voidspace lunches
[14:02]  * frobware follows suit
[14:06] <dimitern> cherylj, frobware, alexisb, voidspace, I think that section was copied from 1.25 rel. notes, so we should indeed drop it for the 2.0 notes
[14:16] <jam> fwereade_: hey, still around?
[14:36] <perrito666> bbl
[14:41] <fwereade_> jam, ha, yes, but apparently notification-blind
[14:42] <mattyw> jam, any reason why we shouldn't do $$merge$$ on this https://github.com/juju/juju/pull/4131 ?
[14:44] <cherylj> mattyw: yes, we are trying to get some branches landed and we'd rather not have master be a moving target
[14:44] <mattyw> cherylj, ok cool thanks
[14:45] <cherylj> frobware: I see now that maas-spaces has some new commits.  What's the expectation now for testing?  are we still testing dimitern's branch?
[14:47] <mattyw> cherylj, ok - judging by the comment lxd provider is broken in xenial so it probably needs to land before the release - but I don't really know
[14:47] <mattyw> cherylj, I'm not involved - I'm just an interested party
[14:48] <cherylj> mattyw: we can look at getting it into alpha2, but really, if it misses, alpha3 is next week
[14:48] <mattyw> cherylj, ack
[14:59] <frobware> cherylj, yes, new commits for blocking the API and bootstrap whilst discovery is taking place (this is the one we talked about last night), the bind syntax change from '@' to '=' which came out of CT, and a merge of current master into maas-spaces.
[14:59] <frobware> cherylj, dimitern is updating and fixing tests based on the CI run.
[14:59] <cherylj> frobware: is dimitern's branch up to date with maas-spaces again?
[14:59] <frobware> cherylj, dimitern: do you plan to rebase maas-spaces onto your branch?
[15:00] <cherylj> otherwise we need to get a bless on dimitern's, then merge into maas-spaces, then get a bless on maas-spaces before it can all land in master
[15:00] <frobware> cherylj, ack.
[15:00] <frobware> cherylj, I would advocate that dimiter rebases so we only test once
[15:01] <cherylj> frobware: agreed
[15:01] <cherylj> and don't forget to pull in master too.  There were some changes that came in
[15:01] <frobware> cherylj, we did yesterday - need to check if we're talking about the same change.
[15:01] <cherylj> frobware: probably same one.  the pprof thing
[15:02] <frobware> cherylj, yep
[15:02] <frobware> cherylj, 3 things to fix in dimiter's branch: CI test failures surrounding bindings, skipped unit tests (though for expediency we could leave these skipped), and ppc64 unit test failures.
[15:02] <cherylj> frobware: wallyworld has a PR up for a different bug that's only been seen on his branch (although the underlying problem exists on master too).  I think it should go into his branch rather than master so we can demonstrate in CI that it fixes the problem
[15:03] <cherylj> frobware: what's the confidence level in getting a bless on dimitern's branch today
[15:05] <frobware> cherylj, ppc64 might still fail. looking at that now. need to hear from dimiter about progress on the other problems. also bug 1540771 might cause a cursed build again.
[15:05] <mup> Bug #1540771: Bootstrap fails with trusty is not a valid distro_series <bootstrap> <juju-core:Invalid> <juju-core maas-spaces:Triaged> <https://launchpad.net/bugs/1540771>
[15:06] <cherylj> frobware: is that bug a problem with the MAAS setup?  or a problem on the branch?
[15:07] <frobware> cherylj, I would say MAAS setup and/or CI setup.
[15:07] <cherylj> frobware: is there someone who can work with jog_ to track it down so we don't hit it again?
[15:09] <frobware> cherylj, put me down, though I have a hard commitment 7:30pm.
[15:11] <cherylj> frobware: ok, jog_ should be in before then
[15:11] <frobware> cherylj, I'm not entirely sure how quick a CI run can be but I would advocate we have a stripped down run that initially only runs the failures; we need to iterate faster
[15:12] <cherylj> frobware: yes, but your changes could cause some of the previously passing tests to fail, so we need to get a complete run before a merge.  We could see about doing some initial tests for verification before a complete run
[15:12] <cherylj> sinzui: is there a way we could just run selected tests for maas-spaces?
[15:12] <cherylj> mgz: ^^
[15:12] <frobware> cherylj, agreed.
[15:13] <frobware> cherylj, I just don't want to wait $N hours to find out we haven't fixed the existing failures
[15:14] <cherylj> frobware: yeah, I understand
[15:18] <sinzui> cherylj: we can re-run selected tests for previously tested revisions. We can do this because we saved the built packages and streams. We cannot run tests with a branch.
[15:21] <frobware> cherylj, do you know if we have access to ppc64?
[15:22] <frobware> I'm futzing trying to fix this unit test failure
[15:22] <frobware> I have a qemu ppc64 running but it's not terribly efficient or productive.
[15:22] <frobware> actually, I cannot build juju atm (on ppc64)
[15:24] <cherylj> mgz, sinzui, is it possible to get frobware access to the CI ppc machines?
[15:26] <sinzui> cherylj: yes, though I am informed that ports.ubuntu.com is down and that limits what we can do. No installs
[15:26] <cherylj> sinzui: should be okay, it's just a unit test run
[15:26] <frobware> sinzui, cherylj: ah, that explains some of my problems over the last 20 mins.
[15:26] <cherylj> d'oh
[15:34] <cherylj> frobware, dimitern, I've copied over the charms that CI uses for the basic deploy tests here:  https://private-fileshare.canonical.com/~cherylj/dummy-charms/
[15:34] <cherylj> there's also a txt file with the steps used to deploy
[15:35] <cherylj> you can start with testing on AWS, since that's failing on the maas-spaces-controller-space-config
[15:37] <frobware> dooferlad, if you're between tasks please could you help out on the test failures ^^
[15:37] <dooferlad> frobware: I am not quite, but will be soon.
[15:40] <natefinch> ericsnow: it occurs to me that we need to have the resource download code on the server regardless, to support the GUI
[15:41] <ericsnow> natefinch: I wasn't aware of any GUI-related requirements
[15:42] <natefinch> ericsnow: I think rick_h__ would probably be upset if we couldn't deploy charms from the gui that used resources in the charmstore
[15:42] <ericsnow> natefinch: regardless, you make a good point about what the GUI might need
[15:42] <natefinch> ericsnow: but you're right that there was nothing in the spec about it
[15:42] <cherylj> fwereade_: would you be able to review https://github.com/juju/juju/pull/4266 ?  It's related to that bug I talked to you about yesterday
[15:42] <ericsnow> natefinch: I can follow up with the GUI team
[15:45] <rick_h__> natefinch: can you email the context there?
[15:47] <natefinch> rick_h__: sure
[15:49] <ericsnow> rick_h__, natefinch: I just sent an email
[15:49] <rick_h__> natefinch: ty, on my phone at the conference and want to better follow what's up
[15:51] <fwereade_> cherylj, alexisb: ack
[15:53] <ericsnow> natefinch: let's get back together some time after lunch to put together the demo for last iteration
[15:53] <natefinch> ericsnow: ok
[16:13] <jog> hi frobware, I'm here if you have questions about bug 1540771
[16:13] <mup> Bug #1540771: Bootstrap fails with trusty is not a valid distro_series <bootstrap> <juju-core:Invalid> <juju-core maas-spaces:Triaged> <https://launchpad.net/bugs/1540771>
[16:15] <frobware> jog, immediate question is whether, if we run again, this will just repeat. is this related to not running against MAAS 1.7 or 1.8?
[16:18] <jog> frobware, it failed several times in a row in the same way. Master had run before maas-spaces and passed. The bug is open for failures against MAAS 1.9
[16:19] <frobware> jog, but new against 1.9?
[16:19] <jog> frobware, not sure what you mean?
[16:22] <frobware> jog, trying to understand whether it is our branch that causes the failure or a separate 1.9 issue
[16:24] <jog> well I just reran the last revision of master (1cb8f03) and don't see the failure there. I'm running the maas-spaces revision that failed again now, just to be sure there was no other issue.
[16:27] <frobware> dimitern, ^^
[16:28] <frobware> jog, thx - will keep an eye out for the result.
[16:35] <frobware> jog, is your "reran" http://reports.vapour.ws/releases/3564
[16:35] <frobware> jog, because that shows as cursed
[16:36] <frobware> jog, ah, but only for ppc64.
[17:01] <natefinch> man, the function that actually does charm deployments from the client is a hairy mess
[17:02] <natefinch> 100 lines of if statements
[17:03] <perrito666> how sad
[17:04] <natefinch> and weird implicit meanings to error values... good lord
[17:07] <natefinch> perrito666: https://github.com/juju/juju/blob/master/cmd/juju/commands/deploy.go#L249
[17:08] <perrito666> deploy charm Or bundle, just in case
[17:08] <perrito666> is there a reason why that is not the at least 3 methods it should be?
[17:09] <natefinch> perrito666: the problem is that our deploy command does way too much and has way too much encoded into the string we pass it
[17:09] <perrito666> "we might have been given a bundle" <-- that is one.  " if not a bundle then maybe a local charm" <- there is another,
[17:10] <natefinch> the fact that this code is a nightmare is just an indication that the UX is a nightmare...
[17:10] <natefinch> I mean, the code could be better even given the bad UX
[17:10] <perrito666> that is not true, code can be bad and ux be decent
[17:11] <perrito666> I have  glanced at that function and already know it can be split
[17:11] <perrito666> I do have the "fresh pair of eyes" advantage
[17:11] <natefinch> yeah, it absolutely should be refactored
[17:11] <perrito666> but c'mon, a bit of good will might make things better
[17:11] <natefinch> I think someone added bundles and just didn't want to refactor what was there
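The split perrito666 describes is essentially a chain of "try this interpretation" steps. A Python sketch of the shape (deploy.go is Go; every name here is invented for illustration, not taken from the juju codebase):

```python
def resolve_deploy_target(target, resolvers):
    """Try each interpretation of the target in order. Each resolver
    returns a deployment plan, or None to mean "not mine" -- making the
    "not a bundle, so maybe a local charm" chain explicit instead of
    encoding it in the meaning of intermediate error values."""
    for kind, resolve in resolvers:
        plan = resolve(target)
        if plan is not None:
            return kind, plan
    raise ValueError("unrecognised deploy target: %r" % (target,))
```

Each branch of the 100-line if-run becomes one small resolver, and adding a new target kind is an append rather than another nested conditional.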
[17:12] <perrito666> well mcdonalds lunch is not the best alternative to stay awake after lunch
[17:12] <natefinch> heh
[17:21] <perrito666> bbl
[17:24] <natefinch> oh how nice, we're throwing away an error and assuming we know what was in it
[17:25] <natefinch> oh hey, someone fixed it on master... ok, I gotta rebase.
[18:08] <voidspace> dimitern: we have 19 code branches checking for the AddressAllocation feature flag
[18:16] <dimitern> voidspace, so next step is to prepare a PR that drops them I guess
[18:24] <voidspace> dimitern: did we ever get devices by default on master?
[18:24] <voidspace> dimitern: it looks to me, from skimming the code, that if we lose AddressAllocation we lose containers as devices too
[18:25] <voidspace> dimitern: and we go back to leaking DHCP leases
[18:39] <perrito666> bq
[18:53] <dimitern> voidspace, that's ok - we'll redo it properly anyway
[19:23] <natefinch> gah, why do we have a client api function that no one calls? (ServiceDeployWithNetworks)
[19:25] <natefinch> .....maybe we keep it so the tests can test the server's backwards compatibility
[19:26] <natefinch> hmm, in which case it should be in test code only.
[19:27] <mup> Bug #1540469 changed: [ARM64][LXD Provider][2.0-alpha1-0ubuntu1~16.04.1~juju1] juju run command ERROR fork/exec /usr/bin/ssh: cannot allocate memory <run> <juju-core:New> <https://launchpad.net/bugs/1540469>
[19:30] <mup> Bug #1540469 opened: [ARM64][LXD Provider][2.0-alpha1-0ubuntu1~16.04.1~juju1] juju run command ERROR fork/exec /usr/bin/ssh: cannot allocate memory <run> <juju-core:New> <https://launchpad.net/bugs/1540469>
[19:32] <perrito666> natefinch: are you sure no one calls it?
[19:32] <natefinch> perrito666: grep says it's only called in tests
[19:32] <perrito666> I do recall that there is a cascade of calls for service deploy
[19:32] <perrito666> ServiceDeploy -> serviceDeployWithX -> ServiceDeployWithY and so on
[19:33] <perrito666> natefinch: take a look a few versions back, perhaps its legacy and asking to go
[19:34] <natefinch> perrito666: yeah, almost certainly... like I said, my guess is that it needs to stick around for testing that we keep backwards compatibility with old clients... but that means it should be moved to test code, so it's more obviously something you should let rot in a corner
[19:34] <perrito666> actually, if you are working in 2, that is a death sentence
[19:36] <mup> Bug #1540469 changed: [ARM64][LXD Provider][2.0-alpha1-0ubuntu1~16.04.1~juju1] juju run command ERROR fork/exec /usr/bin/ssh: cannot allocate memory <run> <juju-core:New> <https://launchpad.net/bugs/1540469>
[19:38] <natefinch> perrito666: *shrug* I'll file a bug
[19:39] <mup> Bug #1540469 opened: [ARM64][LXD Provider][2.0-alpha1-0ubuntu1~16.04.1~juju1] juju run command ERROR fork/exec /usr/bin/ssh: cannot allocate memory <run> <juju-core:New> <https://launchpad.net/bugs/1540469>
[19:42] <mup> Bug #1540469 changed: [ARM64][LXD Provider][2.0-alpha1-0ubuntu1~16.04.1~juju1] juju run command ERROR fork/exec /usr/bin/ssh: cannot allocate memory <run> <juju-core:New> <https://launchpad.net/bugs/1540469>
[20:03] <natefinch> dammit... something is throttling the hell out of my laptop lately
[20:03] <perrito666> natefinch: use htop?
[20:04] <perrito666> aghh, sms spam, just what I needed
[20:05] <natefinch> perrito666: htop is very pretty and I have no idea what it means
[20:05] <perrito666> natefinch: use the man page
[20:06] <perrito666> if the upper graphs for the processors (or the ram one) are full, something is behaving wrongly; you can use F6 iirc to sort things by cpu or ram
[20:06] <perrito666> and that would put the culprit at the top
[20:09] <natefinch> perrito666: I'm running tests, and expect my processors to all be pegged, but they're all at like 50%, and my tests are taking a lot longer than usual to run
[20:10] <perrito666> natefinch: try iotop then
[20:10] <perrito666> perhaps something is hammering your hd
[20:13] <natefinch> hmm disk read is always 0, I presume that's a bug, or my computer would not be working ;)
[20:28] <natefinch> that weird feeling when you change the signature of like 3 very important functions.... and no tests fail
[20:28] <natefinch> ...I'm sure it's fine
[20:29] <perrito666> natefinch: http://www.quickmeme.com/img/a7/a7eb19ae524e1d066fae16e5e7c0438c86fa321468cd3e3d0764731cf86f869c.jpg
[20:29] <natefinch> lol indeed
[21:04] <davechen1y> no menn0 today ?
[21:04] <davechen1y> paging the on call reviewer: http://reviews.vapour.ws/r/3700/
[21:04] <davechen1y> to the OR, STAT!
[21:05] <perrito666> that is me
[21:05]  * perrito666 reviews
[21:05] <perrito666> stat?
[21:06] <natefinch> perrito666: means "right now"
[21:06] <natefinch> perrito666: I believe it was backronymed to Sooner Than Already There
[21:07]  * natefinch makes perrito666 google backronym instead of stat
[21:07] <perrito666> natefinch: it's ok, for me davechen1y's now is 11 hours away so technically I can pull it
[21:09] <natefinch> perrito666: https://www.youtube.com/watch?v=gNIwlRClHsQ
[21:13] <natefinch> ericsnow: I gotta run a little early today (will be back later as usual).  Want to talk demo?
[21:14] <ericsnow> natefinch: sure
[21:14] <ericsnow> natefinch: moonstone
[21:18] <perrito666> davechen1y: there you go, a bunch of fixmes :p and all more than 10 hours before the time you asked
[21:18] <menn0> waigani: ship it on the auth worker PR
[21:20] <waigani> menn0: thanks
[21:43]  * perrito666 makes coffee before a long long review
[21:58] <natefinch> bbl
[22:21] <perrito666> I wish we had un-shipit as an option
[22:53] <menn0> waigani: deployer review done
[23:06] <menn0> wallyworld: http://reviews.vapour.ws/r/3711/ doesn't need a review right?
[23:07] <wallyworld> menn0: nope, is a straight merge of a feature branch
[23:07] <wallyworld> a huge one at that
[23:07] <menn0> wallyworld: I noticed so I'm glad to hear I don't need to review it :)
[23:08] <wallyworld> all the terminology changes etc
[23:08] <wallyworld> as per your earlier email to dev list
[23:08] <wallyworld> model model model
[23:12] <wallyworld> cherylj: i wonder if the add-machine "regression" is just a timebomb in 1.25, say, waiting to show up like the leadership issue did, and we just haven't seen it yet
[23:13] <wallyworld> you'd think it would show up in more places
[23:13] <perrito666> menn0: tx for the rev
[23:18] <rick_h__> just dropping as an FYI: https://twitter.com/mitechie/status/694506568000344064
[23:19] <perrito666> neat
[23:20] <rick_h__> yea, really good attendance here at the summit
[23:23]  * perrito666 wanted to go
[23:24] <rick_h__> we'll have to hit up the next one. We had a couple of core folks here
[23:29] <davechen1y> cherylj: when will the next 1.25 build come out ?
[23:30] <davechen1y> ie, if I want to get my profiling stuff into a 1.25 build
[23:30] <davechen1y> what deadline should I focus on ?
[23:41] <cherylj> davechen1y: I'm targeting late next week
[23:41] <cherylj> for 1.25.4
[23:42] <cherylj> hooray!  api-command-rename has LANDED!
[23:43]  * rick_h__ does a giant happy dance
[23:43] <cherylj> master will stay blocked until we get a bless
[23:43] <cherylj> then we shall open up the flood gates