[00:01] <menn0_> anyone able to do a (likely quick) review: https://codereview.appspot.com/100620046/
[00:12] <perrito666> menn0: this is a purely personal thought, but decodeyaml could escalate that error to decodeyamlfrostdout which could escalate it up, and so the actual error assertion would be in the test using it
[00:13] <menn0> perrito666: the reason for doing it the way I did is that it avoids boilerplate in each test method
[00:14] <menn0> perrito666: there's no backtrace in the failure output is there?
[00:14] <perrito666> menn0: iirc no
[00:14] <menn0> perrito666: my Python background is showing :)
[00:15] <menn0> perrito666: I'll change the tests according to your suggestion.
[00:15] <perrito666> oh, I am as pythonish as you, worry not
[00:27] <perrito666> menn0: you might want to get a better review, I am relatively new here too
[00:28] <menn0> perrito666: understood. thanks for looking.
[01:04] <wallyworld> morning axw
[01:05] <axw> hey wallyworld
[01:05] <wallyworld> sadly jenkins is not very happy, but the failures are seemingly random, one off things
[01:07] <axw> :(
[01:07] <wallyworld> for now though, there's a couple of bugs that need looking at for a 1.19.3 release. i plan to grab one after i run some gccgo tests, so if you are at a loose end, feel free to grab one also
[01:07] <axw> sure
[01:07] <axw> I was going to fix that utopic one first
[01:08] <axw> then look at gccgo things
[01:08] <wallyworld> sure ok
[01:08] <axw> wallyworld: turns out 1.18 *does* use juju-mongodb on local, but *only* on trusty
[01:08] <axw> should be trusty onwards
[01:08] <wallyworld> lol ok
[01:10] <wallyworld> axw: feel free then to look at gccgo stuff after the mongo fix - first thing we need to do is fully understand how many test failures are left to fix
[01:10] <axw> nps
[01:12] <wallyworld> hopefully it won't be too much effort to ensure that bugs in lp reflect the situation - one bug tagged ppc64el / test-failure per issue
[01:12] <wallyworld> then we can drive that count down to 0
[01:47] <waigani> wallyworld: ping
[01:48] <waigani> wallyworld: JujuConnSuite already calls dummy.Reset() in its TearDownTest. That was basically my fix :(
[01:48] <wallyworld> ah ok
[01:49] <wallyworld> well there's still something wrong sadly :-(
[01:49] <waigani> yep
[01:50] <wallyworld> we can see how often it comes up
[01:50] <waigani> okay, you know where to find me
[02:01] <axw> wallyworld: https://codereview.appspot.com/99410053/ please
[02:01] <wallyworld> looking
[03:09] <bodie_> to check whether my dependencies.tsv is suitable for use with gccgo, I can just "make" in the juju-core root directory, right?
[03:11] <jam> bodie_: are you wanting to check that the dependencies themselves are compatible with gccgo?
[03:12] <jam> bodie_: AFAICT the Makefile knows nothing about dependencies.tsv, though we should probably fix that
[03:13] <bodie_> well, I figured if I've aligned deps with godeps, then if I make with gccgo, I should be able to tell whether the build is broken or not
[03:13] <bodie_> I just don't know how to make with gccgo, which I'm sure is something simple :) just not seeing it
[03:13] <jam> bodie_: so "make" will only switch to gccgo if you aren't on x86, armel or armhf
[03:13] <bodie_> yeah, I was just noticing that
[03:13] <jam> go build -compiler=gccgo launchpad.net/juju-core/...
[03:14] <bodie_> okay, thanks
[03:14] <jam> the "..." means recursively
[03:14] <bodie_> that's simpler than I expected
[03:34] <davecheney> jam: bodie_ that is an accurate summary
[03:35] <davecheney> iff you are using trusty you can test with both compilers on your amd64 machine
[03:35] <davecheney> just use the -compiler=gccgo flag to switch to gccgo
[03:35] <davecheney> if that passes, that is all that we expect
[03:37] <bodie_> hmmm
[03:37] <bodie_> I thought I was on 14.04, but I'm on saucy
[03:37] <bodie_> I'll have to try on my pc
[03:41] <davecheney> bodie_: we can probably get the compiler in a backport ppa
[03:41] <davecheney> but you probably want to upgrade to trusty pretty soon
[03:41] <davecheney> a. it'll be easier
[03:41] <davecheney> b. saucy support expires at the end of July? I think
[03:42] <davecheney> c. it's the smoothest upgrade i've had, it's very polished from saucy -> trusty
[03:42] <bodie_> makes sense :) there was some reason I'd set up our dev remote as saucy... I think there was some kind of weird build issue when we were coming onboard
[03:42] <davecheney> and this laptop has gone from P -> Q -> R -> S
[03:42] <bodie_> nice
[03:42] <davecheney> -> T
[04:15] <davecheney> axw: did you take https://bugs.launchpad.net/juju-core/+bug/1321492 ?
[04:15] <_mup_> Bug #1321492: provider/openstack: gccgo test failures <gccgo> <ppc64el> <test-failure> <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1321492>
[04:18] <axw> davecheney: I did
[04:19] <axw> davecheney: working through gccgo bugs now
[04:19] <davecheney> sorry, just proposed a fix
[04:19] <axw> no worries, was trivial anyway
[04:24] <davecheney> axw: i'm going to fix all the trivials in the ppc tests today
[04:24] <davecheney> then we'll be able to see the underlying problems better
[04:24] <davecheney> there are, roughly
[04:24] <davecheney> 3
[04:24] <davecheney> one tools related failure
[04:24] <davecheney> a timeout with the joyent tests
[04:25] <davecheney> and a runtime or compiler crash
[04:25] <davecheney> i want to expose those as the only failures
[04:25] <davecheney> https://codereview.appspot.com/95550045
[04:25] <axw> ok, cool
[04:25] <davecheney> the rest are just noise
[04:25] <axw> davecheney: the one in worker may not be trivial, if that's the runtime/compiler crash you're referring to
[04:25] <davecheney> yup
[04:25] <axw> looks like it's related to the receive loop in init()
[04:26] <davecheney> axw: you take a look at https://bugs.launchpad.net/juju-core/+bug/1303583
[04:26] <_mup_> Bug #1303583: provider/azure: new test failure <gccgo> <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1303583>
[04:26] <davecheney> i see you've snagged it
[04:26] <axw> davecheney: already proposed
[04:27] <davecheney> link ?
[04:27] <axw> https://codereview.appspot.com/93520047/
[04:27] <davecheney> axw: do you use lbox propose -bug=NNN
[04:28] <davecheney> then it links the bug to the branch
[04:28] <axw> davecheney: I don't because of the milestone thing
[04:28] <davecheney> so when you get an email that the branch is merged
[04:28] <axw> I usually link manually, but sometimes forget
[04:28] <davecheney> you can click through to the bug
[04:28] <davecheney> fix the milestone and mark it fix committed
[04:30] <davecheney> axw: fwiw i tag all these bugs as gccgo and ppc64el
[04:31] <axw> davecheney: yup, have been looking at the gccgo tag so far
[04:31] <davecheney> i consider them to be synonyms, but others like to track them independently
[04:44] <bodie_> I think my MR is mostly ready to roll, but I'm having some trouble when I try to build with GCCGo.  I'm not sure if I'm doing something wrong.  Can anyone else verify?
[04:44] <bodie_> https://codereview.appspot.com/94540044/
[04:44] <bodie_> it adds a few dependencies and one of them doesn't seem to be happy.
[04:45] <davecheney> bodie_: which one
[04:45] <davecheney> i'll try
[04:45] <bodie_> github.com/sigu-399/gojsonpointer
[04:45] <davecheney> % cover
[04:45] <davecheney> PASS
[04:45] <davecheney> coverage: 66.7% of statements
[04:45] <davecheney> hmmm ...
[04:46] <davecheney>  % go test -compiler=gccgo
[04:46] <davecheney> PASS
[04:46] <davecheney> ok      github.com/sigu-399/gojsonpointer       0.047s
[04:46] <davecheney> lucky(~/src/github.com/sigu-399/gojsonpointer) %
[04:46] <davecheney> works for me
[04:46] <davecheney> bodie_: if you are using saucy
[04:46] <davecheney> you will have gccgo-4.8
[04:46] <davecheney> which is, sadly, not up to the task
[04:46] <bodie_> I'm on Trusty on my PC here
[04:47] <davecheney> bodie_: can you show what you see, use paste.ubuntu.com
[04:47] <davecheney> or install pastebinit
[04:47] <bodie_> yeah
[04:48] <bodie_> http://paste.ubuntu.com/7500153/
[04:48] <bodie_> perhaps it's because I'm using zsh...
[04:49] <davecheney> oh
[04:49] <bodie_> negative
[04:49] <bodie_> same issue using bash
[04:49] <davecheney> that is a different problem
[04:49] <davecheney> here is what's happened
[04:50] <davecheney> 1. you did go get -u -v launchpad.net/juju-core/...
[04:50] <davecheney> which is going to fetch all the _current_ juju dependencies
[04:50] <bodie_> right
[04:50] <davecheney> then you switch to your branch
[04:50] <davecheney> which essentially has new dependencies
[04:50] <davecheney> which is why godeps is whinging
[04:50] <davecheney> the simplest solution for today would be
[04:50] <davecheney> go get -u -v github.com/sigu-399/gojsonpointer
[04:50] <bodie_> I figured I should switch before doing a godeps so that I get the same version I had in my code
[04:50] <davecheney> and the same for the other two new deps
[04:50] <davecheney> then godeps should be happy
[04:51] <bodie_> oh.... but it's complaining because my code is an older version of core...?
[04:51] <bodie_> wouldn't ... oh
[04:51] <bodie_> I see
[04:51] <bodie_> could I go get lp:~binary132/juju-core/charm-actions ?
[04:53] <axw> davecheney: I got to the bottom of the issue in worker
[04:53] <axw> davecheney: seems that closing a channel twice in gccgo will abort the process, even if you recover the error
[04:53] <axw> panic*
[04:54] <davecheney> axw: hmm
[04:54] <davecheney> is that worth a bug upstream ?
[04:54] <davecheney> one nice thing about gccgo is gdb actuially works
[04:54] <axw> davecheney: http://play.golang.org/p/eUJw-e1GRJ
[04:54] <axw> davecheney: yeah I think so
[04:55] <davecheney> axw: two secs
[04:55] <davecheney> bodie_: are you going ok for the moment ?
[04:55] <waigani> how do I patch name, username and id of the current user for testing? I've tried s.PatchEnvironment("USER", "admin001"), but in the test switch still prints "jesse"
[04:55] <davecheney> axw: what happens if you make the call stack a bit longer
[04:56] <davecheney> i suspect that the main function, or main goroutine might be a bit special in gccgo
[04:56] <bodie_> yes, I'm about to head to bed actually (UTC-4)
[04:56] <axw> davecheney: it wasn't in the main function in the test, that's just a minimal repro
[04:56] <bodie_> would the right way to do things from the beginning be to fetch via lp:~binary132/juju-core/charm-actions directly?
[04:56] <davecheney> axw: http://play.golang.org/p/VNezaP7ADp
[04:56] <davecheney> bang on
[04:57] <axw> davecheney: FYI it's in worker/simpler.go, the Kill method
[04:57] <axw> we can work around it, but an upstream bug is warranted
[04:57] <bodie_> hrm, gccgo not found on path on trusty either
[04:57] <davecheney> axw: do you want to raise it ?
[04:57] <davecheney> or i can do it
[04:58] <axw> I can
[04:58] <davecheney> cool, excellent
[04:58] <davecheney> surely double closing a channel is a bug
[04:58] <davecheney> and using recover to cover for that is a horrid smell
[04:58] <axw> yeah
[04:58] <davecheney> axw: raise the bug on golang.org/issue
[04:59] <axw> will do
[04:59] <davecheney> i don't think the issue tracker on the gofrontend project is used as much
[04:59] <bodie_> can I just apt-get install gccgo?
[04:59] <bodie_> will that be suitable?
[04:59] <davecheney> bodie_: yes
[04:59] <bodie_> okay, that's nice
[04:59] <davecheney> bodie_: actually, you'll have to do
[04:59] <davecheney> apt-get install gccgo-4.9
[05:00] <bodie_> uh oh, already started build.  do I need to clean
[05:00] <bodie_> ?
[05:00] <bodie_> er, it looks like I may have installed gccgo-4.9 by default (since trusty)
[05:01] <davecheney> gccgo -v
[05:01] <davecheney> % gccgo -v 2>&1 | tail -n1
[05:01] <davecheney> gcc version 4.9.0 20140405 (experimental) [trunk revision 209157] (Ubuntu 4.9-20140406-0ubuntu1)
[05:02] <bodie_> http://paste.ubuntu.com/7500172/
[05:02] <davecheney> bodie_: run godeps -u dependencies.tsv
[05:02] <davecheney> if it whinges, fix the problem and run it again
[05:02] <axw> davecheney: filed https://code.google.com/p/go/issues/detail?id=8070
[05:03] <axw> davecheney: I can fix it in our code, unless you're already doing that
[05:03] <davecheney> added some code
[05:03] <davecheney> sorry, tags
[05:03] <davecheney> axw: if you're there have a crack
[05:03] <davecheney> i'm doing another ppc test run to find some more issues
[05:03] <bodie_> all righty
[05:04] <bodie_> there we go, seems to be working
[05:04] <bodie_> thanks
[05:04] <bodie_> and built!
[05:04] <davecheney> win
[05:04] <davecheney> is the bot running ?
[05:04] <davecheney> it's been a long time since I marked that change as accepted
[05:04] <davecheney> approved
[05:04] <bodie_> bot?
[05:05] <davecheney> bodie_: the commit landing bot
[05:05] <davecheney> aww crap
[05:05] <davecheney> i see the problem
[05:07] <bodie_> I need to lbox propose again, right?
[05:08] <davecheney> always
[05:08] <davecheney> propose early, propose often
[05:08] <jcw4_> if in doubt.... propose again
[05:08] <bodie_> then propose s'more!
[05:09] <bodie_> yeah, my last propose was -wip
[05:11] <bodie_> and there it should be
[05:11] <bodie_> https://codereview.appspot.com/94540044
[05:12] <davecheney> wallyworld: axw what are we going to do about the joyent tests ?
[05:12] <wallyworld> which bit?
[05:12] <wallyworld> the time taken to run them?
[05:13] <wallyworld> i've got pull requests waiting to be merged by upstream
[05:13] <wallyworld> that will fix the execution time
[05:13] <wallyworld> is there anything else?
[05:15] <davecheney> that's basically it
[05:15] <davecheney> they run so slowly on ppc they always timeout
[05:15] <wallyworld> yeah. i did fix it over a week ago
[05:15] <wallyworld> just need to get my changes to upstream libs merged in, hopefully this week
[05:16] <wallyworld> davecheney: axw : so, do you have a feel for how long we need to get the tests running under gccgo and ppc (assuming joyent ones are fixed)? 1 week? 2 weeks?
[05:17] <davecheney> wallyworld: today i feel confident that we can close this off in a week
[05:17] <wallyworld> o/
[05:17] <wallyworld> \o/
[05:17] <davecheney> ask me tomorrow, we might have a different answer
[05:17] <wallyworld> lol ok
[05:17] <davecheney> but it's looking pretty good so far
[05:18] <wallyworld> is there anything outside our control? what's the exposure to compiler issues?
[05:18] <davecheney> ask axw, i think it's manageable
[05:18] <axw> wallyworld: so far seems fine
[05:18] <wallyworld> compiler issues?
[05:18] <axw> wallyworld: only one gccgo-specific issue, but that's because our code is a bit crap
[05:18] <axw> brb
[05:19] <wallyworld> ok, so we can fix our code hopefully
[05:19] <axw> yes I am fixing that one atm
[05:20] <wallyworld> axw: looking at your address polling branch - it seems that DNSName is not really used with the changes made
[05:20] <davecheney> http://paste.ubuntu.com/7500164/
[05:20] <davecheney> some of these have fixes waiting to be landed by the bot
[05:20] <axw> wallyworld: it's not meant to be anymore
[05:21] <wallyworld> axw: yeah, i had heard that i think, just wanted to confirm. so we should follow up and remove it i guess
[05:21] <wallyworld> davecheney: that pastebin doesn't look too bad
[05:21] <axw> wallyworld: oh sorry, misunderstood
[05:21] <axw> wallyworld: if it's not used anywhere, definitely
[05:22] <axw> wallyworld: I thought it might still be used in status
[05:22] <wallyworld> axw: it doesn't *seem* to be at first glance. not sure about status
[05:22] <axw> but it wouldn't... it uses machine's public address now
[05:22] <wallyworld> yeah
[05:22] <axw> wallyworld: I will follow up and wipe it out
[05:22] <wallyworld> only issue is we would want to give people the dns name to connect to
[05:22] <wallyworld> rather than an ip address
[05:22] <axw> we can do that still, with Addresses
[05:23] <axw> there's an address type
[05:23] <wallyworld> ah yes, true
[05:23] <bodie_> 'night all
[05:23] <axw> night
[05:23] <wallyworld> seeya
[05:23] <wallyworld> axw: ok, i'll get a 2nd opinion on getting rid of dnsname and we can nuke it
[05:24] <axw> cool
[05:30] <axw> davecheney: https://codereview.appspot.com/98480046 fixes worker
[05:31] <davecheney> man, that is a nasty code smell
[05:31] <davecheney> why isn't this code using a tomb ?
[05:31] <davecheney> this is exactly what a tomb is for
[05:32]  * axw shrugs
[05:33] <axw> tomb seems a little heavy here, it's pretty trivial code
[05:33] <davecheney> lol
[05:47] <jcw4> wallyworld: dumb question... jc.DeepEquals is not the same as gc.DeepEquals right?
[05:47] <jcw4> wallyworld: if I want to use jc.SameContents what import is that?
[05:47] <jam> jcw4: IIRC it differs based on whether []slice(nil) == []slice()
[05:47] <wallyworld> jcw4: same end result, but jc.DeepEquals gives far better errors, so use that
[05:47] <jam> is the empty slice the same as a nil slice
[05:48] <wallyworld> plus the slice difference, yeah
[05:48] <jcw4> jam, wallyworld cool thanks
[05:48] <wallyworld> gc.DeepEquals is considered deprecated
[05:48] <jcw4> what is the import for jc.*
[05:48] <jcw4> juju-core?
[05:48] <wallyworld> launchpad.net/juju-core/checkers i think
[05:48] <jcw4> k, tx!
[05:49] <wallyworld> no
[05:49] <wallyworld> bad memory
[05:49] <wallyworld> github.com/juju/testing/checkers
[05:50] <jcw4> wallyworld: I see it now... it's in a handful of tests already
[05:50] <wallyworld> yeah
[06:00] <jcw4_> hmm;  gc.DeepEquals is okay for maps, but use jc.SameContents for slices?
[06:01] <jcw4_> it looks like even gc.DeepEquals is intended for slices...
[06:01] <jcw4_> is there any similar comparison tool for maps?
[06:01] <jcw4_> Or do I need to compare the keys and values separately?
[06:02] <dimitern> jam, hey
[06:02] <jam> morning dimitern
[06:03] <dimitern> jam, so everybody other than niemeyer is chickening out of giving me an lgtm for this goamz branch, 3rd day in a row, and i'm starting to get frustrated.. https://codereview.appspot.com/98430044/
[06:03] <wallyworld> jcw4: DeepEquals works for slices sure, but it fails if order is different
[06:04] <dimitern> and he even gave me a not lgtm, but i fixed what we discussed, and i'll be waiting for him to appear later today and hopefully approve it
[06:04] <wallyworld> SameContents doesn't care about order, so you can think of it as treating a slice like a set
[06:04] <wallyworld> well, except sets can't have dupes
[06:04] <jcw4> wallyworld: I'm getting an error using SameContents with two maps
[06:05] <wallyworld> use DeepEquals
[06:05] <jcw4> wallyworld: ok
[06:05] <jcw4> wallyworld: tx!
[06:05] <wallyworld> np
[06:21] <davecheney> http://paste.ubuntu.com/7500334/
[06:21] <davecheney> axw: getting closer
[06:23] <axw> nice
[06:23] <jcw4> mgz, fwereade : I think this is the last go round: https://codereview.appspot.com/98260043
[06:28] <jam> dimitern: I have some comments, but adding reviews just adds more chefs to the pot :). I still feel like you're in the best position to be comfortable with it, and if you are, LGTM
[06:29] <dimitern> jam, thank you!
[06:52] <rogpeppe> mornin' all
[06:59] <dimitern> hey rogpeppe
[06:59] <rogpeppe> dimitern: yo!
[07:00] <dimitern> rogpeppe, just back from holiday? or you're hanging in other channels usually?
[07:00] <dimitern> :)
[07:00] <rogpeppe> dimitern: just back from holiday
[07:00] <rogpeppe> dimitern: was in Cyprus for two weeks
[07:00] <dimitern> rogpeppe, excellent!
[07:01] <rogpeppe> dimitern: yeah, it was!
[07:06] <davecheney> night all
[07:06] <dimitern> 'night davecheney
[07:10] <dimitern> jam, so no standups today, just the x-team meeting @14.30 utc?
[07:11] <wallyworld_> axw: i will add another test before landing - you're right, it's messy to do
[07:11] <axw> wallyworld_: okey dokey
[07:11] <wallyworld_> i was being lazy :-)
[07:12] <axw> wallyworld_: I just noticed there's a card "TestEnsureAdminUser is still broken"
[07:12] <axw> wallyworld_: did CI fail again?
[07:12] <wallyworld_> yeah, the latest test run has it failing, and also several over the past few days
[07:12] <wallyworld_> not every time
[07:12] <wallyworld_> but often enough
[07:12] <wallyworld_> the other failures all seem to be one off
[07:12] <axw> wallyworld_: latest run of "walk tests" on trusty is buggered, the machine is out of space
[07:13] <wallyworld_> this is the one i looked at just before
[07:13] <wallyworld_> http://juju-ci.vapour.ws:8080/job/walk-unit-tests-amd64-trusty/330/console
[07:13] <axw> ok
[07:13] <wallyworld_> plus a few others over the past few days
[07:13] <wallyworld_> it seems much better but not quite fully fixed
[07:14]  * wallyworld_ -> soccer
[07:14] <axw> later
[07:50] <jam> dimitern: right now we didn't have the team standup because of the x-team meeting, I might reintroduce it, but we hadn't talked about it, so I'll skip it today
[07:52] <dimitern> jam, ok, do any of us need to be at the x-team?
[07:52] <jam> dimitern: the goal is that if you *can* be there you should, as we'd like people to go to 2 out of 3 of the meetings.
[07:53] <jam> I will not be able to be there tonight, as it is my 10-year anniversary weekend
[07:53] <jam> but I'll probably be going to most of them.
[07:53] <dimitern> jam, right, I'll try to attend then - it is one of the poor-quality-voip-conference calls right?
[07:54] <TheMue> morning
[07:54] <jam> dimitern: ah sorry, this is the whole team juju meeting at 18:00 UTC
[07:54] <jam> dimitern: sorry, the x-team meeting at 14:30UTC is not one you have to be at
[07:54] <jam> we need a smattering of people from Juju to be there.
[07:55] <dimitern> jam, ah, ok
[07:55] <jam> dimitern: the one at 18:00UTC is the replacement for the weekly whole team meeting that we used to have at 10:00UTC on Thursdays
[07:55]  * TheMue is looking forward to the 1800UTC meeting, see some of the new US colleagues.
[07:55] <dimitern> jam, it'll be more difficult to attend the one at 18 utc - it is the only one this week and will rotate next week at another time iirc?
[07:56] <jam> I think 18:00 is around 20:00 for you, right?
[07:56] <jam> dimitern: right, next week it will be 8 hours later
[07:56] <jam> (16 hours earlier, depends on POV)
[07:56] <TheMue> jam: it is 20 CEST, yes
[07:56] <dimitern> jam, :) so +8h each week?
[07:56] <jam> dimitern: right
[07:56] <dimitern> hey TheMue
[07:56] <jam> we'd like people to make it to 2 out of 3
[07:56] <TheMue> dimitern: heya
[07:57] <dimitern> jam, ok, so 18 utc, 2 utc and 10 utc
[07:57] <TheMue> jam: but don’t expect me to join the 200UTC meeting :D only if I cannot sleep
[07:57] <TheMue> dimitern: yep
[07:57] <jam> TheMue: yep.
[07:57] <jam> It is at 6am here, so I can *just* make it.
[07:57] <dimitern> jam, why 2 out of 3 ?
[07:58] <jam> dimitern: it is an approximate, but hopefully we can make that work
[07:58] <TheMue> 6am is hard enough *yawn*
[07:58] <jam> we want people to see each other as much as we can
[07:59] <jam> dimitern: it isn't hard and fast, but we used to see everyone every week, but can't actually do that with 21 people
[07:59] <dimitern> jam, i see - that way at least for one of the meetings everyone will make an effort to attend at an inconvenient time
[08:00] <jam> it is possible that we could make it at the start of and end of some people's work day
[08:00] <jam> but it is too hard to actually try to do the numbers to actually figure out what time of day that would be
[08:00] <jam> this way at least 1 per 3 weeks should definitely be in your normal TZ
[08:01] <dimitern> or perhaps make 4 meetings instead of 3, that way the intended goal is easier to achieve i think
[08:01] <dimitern> each one 6h apart
[08:02] <dimitern> jam, was that an option?
[08:03] <jam> it wasn't ever brought up, it would be something to consider
[08:34] <voidspace> morning all
[09:09] <jam> dimitern: the next patch in my API versioning saga: https://codereview.appspot.com/97630045
[09:09] <dimitern> jam, looking
[09:11]  * jam takes his dog to the kennel for the weekend, bb < 1hr
[09:15] <perrito666> morning
[09:18] <voidspace> perrito666: morning
[09:47]  * jam is back
[09:47] <jam> morning perrito666 and voidspace
[09:52] <voidspace> jam: morning
[10:01] <mgz> have I actually got internet right now...
[10:02] <wallyworld_> mgz: yes you have :-) you ok for 1:1?
[10:03] <mgz> now google is responding, I'm there
[10:10] <voidspace> gah, the srvRoot authorizer methods aren't tested
[10:10] <voidspace> I've added a new one
[10:10] <voidspace> so testing it means working out how to test those methods at all
[10:12] <voidspace> unless they're tested elsewhere
[10:20] <voidspace> jam: ping
[10:20] <jam> voidspace: ?
[10:20] <voidspace> jam: state/apiserver/root.go
[10:20] <voidspace> jam: srvRoot represents a client's connection
[10:20] <jam> well, it represents the API
[10:21] <voidspace> jam: and it is the main implementation of the Authenticator interface
[10:21] <jam> for clients or agents
[10:21] <voidspace> jam: the Authenticator methods on it aren't tested directly that I can find
[10:21] <voidspace> and in fact most tests use the FakeAuthorizer
[10:21] <jam> voidspace: I'm pretty sure they are all tested indirectly, but yes, there are no direct tests of srvRoot behavior
[10:21] <voidspace> I've added a (simple) method to this, and the only test is against the FakeAuthorizer (which also now has this method)
[10:22] <jam> voidspace: so things that test the APIServer directly use FakeAuthorizer, but there are a bunch of client-side tests that go end-to-end that have a real srvRoot in the middle.
[10:22] <jam> voidspace: but if you can write some direct tests, I'd be happier to see them.
[10:22] <voidspace> jam: so the only way to make that work would be to create a client that is neither representing a MachineAgent nor a UnitAgent
[10:23] <voidspace> jam: and we struggled to do that as it seemed the way to get the API was "OpenAPIAsMachine"
[10:23] <jam> voidspace: we have 3 types of entities that access the API, Client, MachineAgent and UnitAgent
[10:23] <voidspace> right, but how to get a test client entity wasn't obvious
[10:23] <jam> voidspace: so opening the API as the admin user is not a machine or unit agent
[10:23] <voidspace> ah, ok
[10:24] <jam> voidspace: things that access the "Client" facade are not machine or units either.
[10:24] <dimitern> jam, reviewed
[10:24] <voidspace> if a direct test is preferable (srvRoot is a private type) can you suggest how I would do that or point me at something that gets at it for testing
[10:24] <jam> voidspace: I know of nothing that tests it directly, and I'm working on that right now for some API versioning tests
[10:25] <voidspace> ah right
[10:25] <jam> *I* like to have direct Unit tests of each layer
[10:25] <jam> not everyone felt the same
[10:25] <jam> namely, the people who implemented this the first time
[10:25] <voidspace> yep, me too - it seems wrong to only indirectly test
[10:25] <jam> felt that it was better to test from the actual client code
[10:25] <voidspace> because those tests can be changed or removed as it's not obvious what they're testing
[10:25] <jam> dimitern: thanks
[10:25] <voidspace> unless you add a note
[10:25] <voidspace> it's also not obvious where to go to find them / update them
[10:26] <fwereade> voidspace, yeah, +1 to direct tests
[10:26] <voidspace> well, +1 in theory
[10:26] <fwereade> voidspace, it helps with interface sanity more than anything else I'm aware of
[10:26] <voidspace> private types that are hard to construct make it difficult
[10:26] <voidspace> obviously not built with testability in mind
[10:26] <fwereade> voidspace, +1 also to tiny weeny little packages that don't have internal layers that really need tests but don't have them
[10:30] <jam> voidspace: fwiw I'm currently overhauling srvRoot a *lot* so you may not want to poke at it too much.
[10:32] <voidspace> jam: ok, it's literally a one line method
[10:33] <voidspace> jam: maybe it's not warranted as there's only one use for it - if I see the need again I'll add it
[10:33] <voidspace> jam: and by then it should be easier to test due to your work
[10:47] <jam> sgtm
[10:55] <jam> fwereade: don't forget the team leads call in 5min. If you're available, I have a quick question wrt API that I'd like to run by you (if you can join early)
[11:00] <voidspace> fwereade: in terms of making GetRsyslogConfig bulky, it takes no params
[11:00] <voidspace> fwereade: so is it just a question of renaming to Configs and returning a slice of results?
[11:01] <fwereade> voidspace, shouldn't we be asking for the rsyslog config for ourselves, though,much as we watch the config for ourselves?
[11:01] <jam> dimitern: https://codereview.appspot.com/97570051 implements the "Does this Facade return the exact type I expected"
[11:01] <voidspace> fwereade: no, units and state servers all need to log to "all the state servers"
[11:02] <voidspace> fwereade: so we want "the global config for everything" in all cases
[11:02] <jam> voidspace: fwiw ^^ removes some of the coupling so you can create one with apiserver.TestingSrvRoot(State) which would let you write the tests you wanted. (once all this stuff lands)
[11:02] <voidspace> jam: ah, cool
[11:02] <fwereade> voidspace, the answers for "where should machine 7 log to" and "where should machine 9 log to" may happen to always be the same
[11:03] <fwereade> voidspace, but I think they're different questions
[11:03] <fwereade> voidspace, and I think it's a bit nicer to keep the object of the sentence implied by the API call explicit rather than implicit in the connection
[11:03] <fwereade> voidspace, any evidence of sanity there?
[11:03] <voidspace> fwereade: it sounds like unnecessary complexity in the name of trying to be consistent
[11:04] <voidspace> fwereade: the *explicit task* we are achieving is "make sure everything logs to all the state servers"
[11:04] <voidspace> that's the goal of HA - present a single world view with redundancy
[11:05] <voidspace> so to provide an api that looks like we could have multiple world views actually breaks that model
[11:05] <voidspace> (conceptually)
[11:05] <fwereade> voidspace, it's more about (1), yes, consistency, but (2) trying to make it easy to build the API out of easily composable chunks: and methods with implicit params don't really help there
[11:05] <fwereade> voidspace, I may have to think about that for a mo, but I'm not sure I agree
[11:06] <voidspace> I defer to you of course, but I'm a bit wary of the "make all API calls bulk calls even when not appropriate" approach
[11:06] <voidspace> I'd rather we think about whether it makes sense for each endpoint (and yes bearing the future in mind because API churn is a pain)
[11:06] <voidspace> but I still defer to you :-)
[11:07] <voidspace> but we *want* a single logging endpoint for the user - and we want HA to provide redundancy for that *behind the scenes*
[11:07] <dimitern> jam, thanks, will look in a bit
[11:07] <voidspace> so it seems to me that the model here is a single call
[11:07] <fwereade> voidspace, it is certainly less compelling than it originally was, because at long last we're getting api versioning, so the cost and complexity of changing an API is much smaller
[11:07] <fwereade> voidspace, there are 2 questions in play though
[11:08] <voidspace> versioning isn't a silver bullet - you still have to maintain the obsolete version, which can be a real nuisance
[11:08] <voidspace> so better to get it right :-)
[11:08] <fwereade> voidspace, one is, should inform the server who's sending the logs when we ask where they should be sent, and I think that's a yes
[11:09] <voidspace> "should (?) inform the server" ?
[11:09] <fwereade> voidspace, "we" or "the client" or whoever
[11:10] <fwereade> voidspace, "where do all logs go always" != "where do $entity's logs go", even if the answers are currently the same
[11:10] <voidspace> that's the one I'm disputing I think - we explicitly want all logs to go to the same place
[11:10] <voidspace> doorbell - back in 2
[11:11] <wwitzel3_> hello
[11:11] <voidspace> wwitzel3: hello
[11:12] <fwereade> voidspace, the other is "is the cost/benefit of mandating all-bulk calls likely to pay off" and I *think* they actually are: the consistency argument is stronger than you might think, because people do what they see us doing already; and IMO enough calls are worth bulking that it's a win to provide a consistent API that always allows it even if we don't always use it
[11:12] <voidspace> fwereade: if we start sending different logs to different places, we'll be breaking the HA model - which is, all logs are always available and it doesn't matter which state server you contact
[11:12] <voidspace> fwereade: right, I certainly don't want to fight strongly on that point
[11:13] <fwereade> voidspace, strawman: additional external logging targets for units of particular services
[11:13] <voidspace> fwereade: so, the args should be a slice of entities and the result a slice of configs?
[11:13] <fwereade> voidspace, yes please
[11:14] <voidspace> fwereade: hmmm...
[11:14] <voidspace> fwereade: ok
[11:14] <fwereade> voidspace, my argument is not that it always makes sense; it's that it often does, and that the bonus from consistency pushes it past the line to just-always-do-this
[11:15] <voidspace> fwereade: ok
[11:53] <voidspace> fwereade: for bulk calls, is the convention that, assuming no "global error" (causing the whole call to fail) happens, you return "result, nil" - where result is a collection (.Results) with an Error field for individual errors
[11:54] <rogpeppe> voidspace: yes
[11:54] <voidspace> so even where this is one call with one error, you return nil for the error - but result.Results[0].Error has the real error
[11:54] <voidspace> cool
[11:55] <rogpeppe> jam, fwereade: hiya, i was talking earlier with frankban about moving store out of core, and we think that it's a reasonable principle that nothing outside juju-core imports juju-core's TestBase. does that seem right to you?
[11:56] <fwereade> rogpeppe, agreed; but ideally we'd be moving stuff to github.com/juju/testing rather than duping or  dropping functionality
[11:56] <rogpeppe> fwereade: yup
[11:56] <fwereade> rogpeppe, perfect
[11:56] <rogpeppe> fwereade: but TestBase in particular has very core-specific functionality
[11:57] <rogpeppe> fwereade: but packages like filetesting will move too
[11:57] <fwereade> rogpeppe, yep, +1 to that
[11:58] <jam> rogpeppe: in general we'd rather have things not depend on code inside juju-core, and have things nicely pulled out into smaller modules. I'd be willing to be a bit pragmatic about it
[11:59] <rogpeppe> jam: i hope we can be good about it. in particular, i very much hope that we can make a strictly non-cyclic dependency graph between repositories.
[12:07] <jam> fwereade: https://codereview.appspot.com/100460045/ is the one patch along the way that didn't actually get an LGTM, I think I've finished the work on it that I wanted to do.
[12:07] <jam> dimitern: ^^ if you want to give that a look as well
[12:08] <jam> it is *mostly* a mechanical application of the previous patches.
[12:08] <voidspace> rogpeppe: I didn't notice that reply came from you
[12:08] <voidspace> rogpeppe: thanks, and hi
[12:08] <rogpeppe> voidspace: hiya :-)
[12:09] <perrito666> hey rogpeppe wb
[12:09] <rogpeppe> perrito666: ta!
[12:10] <jam> fwereade: https://codereview.appspot.com/97630045/ is a followup that Dimiter reviewed, and https://codereview.appspot.com/97570051/ is one that needs review
[12:18] <dimitern> jam, reviewed https://codereview.appspot.com/97570051/
[12:18] <jam> dimitern: thanks
[12:19]  * perrito666 wonders why ctrl+w doesnt work on the screen he is looking at instead of the one with focus
[12:19] <voidspace> perrito666: heh, I do that all the time
[12:38] <voidspace> hah, I wondered why I suddenly had all these failures on our rsyslog branch
[12:38] <voidspace> wwitzel3: ping
[12:39] <voidspace> wwitzel3: revision 2755 (most recent one) of your rsyslog branch merges a branch of mine that was a dead end
[12:39] <voidspace> wwitzel3: I restarted the work in a different branch
[12:39] <voidspace> wwitzel3: can you back out that revision?
[12:42] <voidspace> wwitzel3: I have further commits so it's harder for me to do
[12:44] <voidspace> in the meantime
[12:44]  * voidspace lunches
[13:04] <jam> voidspace: "bzr merge -r 2755..2754" should do what you want, fwiw
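jam's one-liner works because bzr treats a reversed revision range as a cherrypick of the inverse change. A sketch of the full back-out sequence, using the revision numbers from the conversation (the trailing `.` makes bzr merge from the current branch rather than its parent):

```shell
# In a checkout of the affected branch:
bzr merge -r 2755..2754 .   # apply the inverse of r2755 to the working tree
bzr status                  # inspect what the reversal touches
bzr commit -m "Back out r2755, which merged a dead-end branch"
bzr push                    # publish the reversal
```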
[13:24] <wwitzel3> voidspace: was eating breakfast and getting Jessa off to work, ping me when you're back, I don't even see that revision on my branch of rsyslog-api
[13:32] <tasdomas> fwereade, hi
[13:33] <tasdomas> fwereade, I've started working on moving juju-core/cmd to a separate package (github.com/juju/cmd probably) - what is the process of creating a new juju repo on github?
[13:51] <fwereade> tasdomas, I'm a bit surprised by cmd... oh, yeah, the store commands use it now
[13:51] <fwereade> tasdomas, are you in the team on github?
[13:57] <voidspace> wwitzel3: http://bazaar.launchpad.net/~wwitzel3/juju-core/009-ha-rsyslog-api/revision/2755
[13:58] <voidspace> wwitzel3: is that not your branch? https://code.launchpad.net/~wwitzel3/juju-core/009-ha-rsyslog-api
[13:58] <wwitzel3> voidspace: huh, I just don't see it locally I guess
[13:58] <voidspace> heh
[13:59] <voidspace> wwitzel3: the command that jam gave is the one you should run
[13:59] <wwitzel3> voidspace: it is indeed my branch, but locally when I look at the log, I see 2747 as the latest
[13:59] <voidspace> it would be more painful for me
[13:59] <voidspace> hah
[14:01] <wwitzel3> voidspace: ok, try now? ..
[14:01] <wwitzel3> not sure if I should cross my fingers or plug my ears
[14:01] <wwitzel3> both? ..
[14:02] <voidspace> hah
[14:03] <voidspace> wwitzel3: now I see the latest revision as 2747 (!?) but with the offending revision still in it
[14:03] <voidspace> I believe
[14:03] <voidspace> when I merged I got no changes anyway
[14:03] <wwitzel3> ok what is the offending revision # now?
[14:03] <voidspace> 2747
[14:05] <wwitzel3> voidspace: ok, pushed up the revert of that (I hope)
[14:22] <voidspace> wwitzel3: ah, I see what happened I think
[14:23] <voidspace> wwitzel3: you intended to merge in my changes removing the obsolete check that Port and CACert had changed
[14:23] <voidspace> wwitzel3: this was the branch that should have been merged
[14:23] <voidspace> https://code.launchpad.net/~mfoord/juju-core/ha-rsyslog-shortcut-removal/+merge/220105
[14:24] <voidspace> wwitzel3:  lp:~mfoord/juju-core/ha-rsyslog-shortcut-removal
[14:26] <wwitzel3> ok, merged and pushed
[14:29] <voidspace> wwitzel3: and rsyslog worker tests are passing again for me!
[14:29] <voidspace> and I don't think we lost anything...
[14:30] <wwitzel3> great :)
[14:30] <voidspace> hmmm...
[14:30] <voidspace> except this on your branch
[14:30] <voidspace> wwitzel3:
[14:30] <voidspace>     Diff: 12160 lines (+2221/-2600) 262 files modified (has conflicts)
[14:30] <wwitzel3> great
[14:30] <voidspace> wwitzel3: you may need to merge trunk and resolve conflicts
[14:31] <voidspace> :-o
[14:31] <wwitzel3> yep, doing that now
[14:32] <wwitzel3> 11 conflicts encountered.
[14:32] <wwitzel3> awesome
[14:32] <voidspace> weird
[14:32] <voidspace> wwitzel3: files you've not touched you can resolve with --take-other
[14:33] <wwitzel3> well that is handy :)
[14:33] <voidspace> wwitzel3: and in fact, the conflicts are pretty much all in files we've not touched
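The `--take-other` shortcut voidspace mentions is an action option of `bzr resolve`: for a file the branch never touched, it keeps the merged-in (trunk) version. A sketch, with a placeholder file path:

```shell
# After a merge reports conflicts, keep the other branch's version of
# files this branch never modified:
bzr resolve --take-other path/to/untouched_file.go
# then review whatever remains by hand:
bzr conflicts
```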
[14:33] <wwitzel3> voidspace: pushed
[14:33] <voidspace> wwitzel3: thanks
[14:34] <wwitzel3> voidspace: we should pair after this call
[14:34] <voidspace> wwitzel3: ok
[14:37] <voidspace> anyone recognise this error: "missing agent version in environment config"
[14:37] <voidspace> in provider/dummy
[14:38] <voidspace> actually comes from testing.AssertStartInstance
[14:38] <perrito666> voidspace: your config file is being generated without version ?
[14:38] <voidspace> perrito666: sure, but it's from a test
[14:38] <voidspace> sorry, yeah - test failure error
[14:38] <voidspace> ah, never mind
[14:39] <voidspace> I have build failures
[14:39] <wwitzel3> hah
[14:39] <wwitzel3> priorities
[14:39] <wwitzel3> :P
[14:39] <voidspace> in the openstack provider
[14:40] <voidspace> I wonder if dependencies.tsv wasn't updated on trunk
[14:41] <voidspace> if I merge from trunk it says nothing to do
[14:42] <voidspace> but if I switch to trunk and run godeps it *does* update some dependencies
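The godeps run voidspace describes would look roughly like this (godeps here is rogpeppe's `launchpad.net/godeps` tool, which pins each dependency to the revision listed in `dependencies.tsv`):

```shell
# From the root of a juju-core checkout, sync the local GOPATH to the
# revisions pinned in trunk's dependencies.tsv:
go get -v launchpad.net/godeps
godeps -u dependencies.tsv
```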
[14:42] <voidspace> wwitzel3: you still have an 11000 line diff
[14:42] <voidspace> wwitzel3: I think you somehow collapsed a bunch of changes into that single revision
[14:43] <voidspace> or something
[14:43] <voidspace> somehow our branch has undone a load of changes
[14:43] <voidspace> goddammit
[14:44] <wwitzel3> well, I can undo the merge I did earlier
[14:44] <wwitzel3> where I jumped back a revision
[14:44] <voidspace> right, that would bring back the unwanted changes from my branch
[14:45] <voidspace> let's look at the revision history and see if we can work out where we want to go back to
[14:46] <voidspace> wwitzel3: it seems like you want to go back to 2746 (which I thought was what you'd done previously)
[14:46] <voidspace> and then merge trunk in
[14:46] <voidspace> which should leave just our changes
[14:49] <voidspace> ah
[14:50] <voidspace> so the problem is that trunk has already been merged into your branch
[14:50] <voidspace> so the reverse merge *undoes* trunk revisions
[14:50] <voidspace> merging trunk again says "Nothing to do" because it sees those revisions in the merge history
[14:50] <voidspace> mgz: ping
[14:51] <dimitern> niemeyer, hey
[14:51] <niemeyer> dimitern: Heya
[14:51] <dimitern> niemeyer, I fixed what we discussed in https://codereview.appspot.com/98430044/, can you please take a look if it's good to land?
[14:52] <niemeyer> dimitern: Will do, thanks
[14:54] <bodie_> fwereade / others, looks like all the libs in question are apache2-licensed in their source code but LICEN(S|C)E[.txt] isn't included in the two deps for gojsonschema.  should I open MRs or is it sufficient to have the license in the files themselves?
[14:54] <voidspace> wwitzel3: I'm going to try and fix it by branching trunk and merging from revision 2747 of your branch
[14:54] <wwitzel3> voidspace: ok
[14:54] <voidspace> wwitzel3: that *seems* to have worked
[14:55] <voidspace> wwitzel3: we may need to abandon your one - the history seems to be irrevocably screwed now (?)
[14:56] <voidspace> wwitzel3: I'm in moonstone by the way
[14:56] <niemeyer> dimitern: Does it make sense to have the same domain name for different private IPs?
[14:57] <niemeyer> dimitern: Ah, nevermind
[14:57] <niemeyer> dimitern: The loop breaks out on the first entry
[14:57] <dimitern> niemeyer, right
[14:57] <niemeyer> dimitern: Okay, LGTM
[14:57] <dimitern> niemeyer, thanks!
[14:59] <voidspace> wwitzel3: https://code.launchpad.net/~mfoord/juju-core/ha-rsyslog-good/+merge/220667
[15:06] <voidspace> wwitzel3: https://code.launchpad.net/~mfoord/juju-core/ha-rsyslog-good/+merge/220669
[15:07] <wwitzel3> voidspace: second link looks right
[15:09] <bodie_> someone help me understand what fwereade means by "d" in these comments? https://codereview.appspot.com/94540044/diff/60001/charm/actions_test.go
[15:10] <perrito666> bodie_: delete
[15:15] <voidspace> wwitzel3: https://code.launchpad.net/~mfoord/juju-core/ha-rsyslog-2/+merge/220603
[15:17] <mgz> voidspace: hey, sorry missed ping earlier
[15:17] <bodie_> o/ mgz
[15:17] <mgz> hey bodie
[15:18] <voidspace> mgz: we have a screwed bazaar branch that we can't recover
[15:18] <voidspace> mgz: we've mostly worked round it now and are abandoning the branch
[15:18] <voidspace> mgz: but I thought you might be able to help
[15:18] <mgz> voidspace: fallback is just pull out the changes in the tree into a fresh branch normally
[15:19] <voidspace> mgz: right, that's what I'm doing
[15:19] <voidspace> mgz: except it's changes from three different branches we had consolidated
[15:19] <voidspace> and it was the consolidation that screwed us
[15:19] <mgz> what fun
[15:19] <voidspace> and then unfortunately there was a merge back the other way - making it hard to back out changes without also reverting a load of trunk changes
[15:19] <voidspace> which is what we did
[15:19] <voidspace> and why we're screwed
[15:20] <voidspace> we reverted 11000 lines of changes from trunk
[15:20] <voidspace> and the history shows those changes as already merged - so we can't bring them back again
[15:20] <wwitzel3> they probably weren't needed anyway
[15:20] <voidspace> yeah, probably
[15:24] <voidspace> wwitzel3: this is now the full consolidated branch
[15:24] <voidspace> https://code.launchpad.net/~mfoord/juju-core/ha-rsyslog-good/+merge/220669
[15:26] <mgz> voidspace: losing the history shouldn't matter much, as we're going to be losing history anyway next week
[15:26] <voidspace> mgz: really, we're just dumping into github rather than converting the repo?
[15:26] <hazmat> mgz, losing history?
[15:26] <hazmat> what?
[15:27] <hazmat> easy to convert with history
[15:27] <mgz> voidspace: we're converting, but that's a lossy conversion
[15:27] <voidspace> oh, ok
[15:27] <voidspace> we'll just pretend it's git's fault
[15:27] <voidspace> I mean, I'm sure nothing like this will ever happen once we're using git
[15:28] <voidspace> git is not *at all* renowned for mangling repos into completely unusable states
[15:28] <perrito666> voidspace: nope, git users are though
[15:28] <sinzui> voidspace, perrito666, bodie_ is anyone planning to land a branch in the next hour? I want to take CI offline to upgrade how it builds test packages
[15:29] <perrito666> sinzui: nope
[15:29] <jcw4_> sinzui I'm hoping to in a couple hours
[15:29] <bodie_> hoping to get one in asap, but struggling a little with tests right now.  I can push it back
[15:29] <voidspace> perrito666: git will happily chainsaw your repo into an unusable mess
[15:30] <voidspace> and smile whilst doing it
[15:30] <voidspace> sinzui: an hour should be fine
[15:30] <perrito666> voidspace: but will blame me
[15:30] <perrito666> so technically its my fault
[15:30] <voidspace> sure...
[15:30] <voidspace> I'll happily blame you too if you like :-D
[15:31] <sinzui> bodie_, does the landing bot hate you?
[15:31] <perrito666> voidspace: usually when I use git, which I like, I feel like I'm barely above using diff+patch+ftp
[15:31] <bodie_> I assume I would know if it did
[15:31] <voidspace> heh
[15:31] <bodie_> and I don't, so it must not
[15:31] <voidspace> fwereade: we've created a new CL as we managed to "break" the other one
[15:31] <voidspace> fwereade: https://codereview.appspot.com/91630045/
[15:32] <voidspace> fwereade: that addresses your previous review comments as discussed (I believe)
[15:48] <bodie_> jcw4, mgz, I'm not certain I'll be able to have a voice session at 1600 -- michelle will be getting home any minute now and i failed to communicate the plan to her so I might have to do text this time
[15:49] <jcw4_> I think we're preferring IRC right now anyway
[15:49] <mgz> that's fine
[15:49] <jcw4_> #jujuskunkworks to reduce the noise here though?
[15:49] <mgz> lets
[16:48] <jcw4> fwereade: I'm assuming this is what you wanted with the err handling and return of newActionID: http://bazaar.launchpad.net/~johnweldon4/juju-core/action/view/head:/state/unit.go#L1343
[16:49] <voidspace> mgz: ping
[16:52] <fwereade> jcw4, yeah, looks good
[16:52] <jcw4> fwereade: ta
[16:53] <voidspace> fwereade: so I assume that when we release 1.20, what is currently "upgrades/steps118.go" will be changed to "upgrades/steps120.go"
[16:54] <bodie_> fwereade, https://codereview.appspot.com/94540044
[16:54] <bodie_> hopefully this is satisfactory :)
[16:55] <bodie_> I made a few tweaks to the regexp that I'd not caught until I added the param name tests you'd requested
[16:55] <fwereade> voidspace, no? those are needed for anything pre-1.18 to be valid for 1.18; they don't need to be run again from 1.18 to 1.20; others might, and they'd be 1.20 ones (or I guess 1.19 ones, I hope we'd discover them earlier than 1.20...)
[16:56] <mgz> voidspace: hey
[16:57] <voidspace> fwereade: but we don't have steps116.go, I thought we only supported 1.18  -> 1.20
[16:57] <voidspace> mgz: hi, I was going to ask what I'm asking will
[16:57] <bodie_> right -- so the deps I added with gojsonschema have apache v2 license in the source files but not repos themselves
[16:58] <voidspace> I'd better my changes
[16:58] <bodie_> do I need to get the author to put LICENSE.txt in the repos or are we good?
[16:58] <voidspace> *revert my changes
[17:10] <fwereade> voidspace, the idea is that we *should* be able to support arbitrary upgrades, and that we run them from src->dst in order
[17:10] <coreycb> I have a local provider deployment of mysql to kvm that hangs in pending state.  any idea if that is fixed by bug 1317197?
[17:10] <_mup_> Bug #1317197: juju deployed services to lxc containers stuck in pending <oil> <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1317197>
[17:10] <voidspace> fwereade: ok, I'll leave the syslog port upgrade step in place
[17:11] <coreycb> sinzui, maybe you know ^
[17:13] <sinzui> coreycb, sorry, the bug is unrelated. That bug is about a lock dir used when working with lxc templates
[17:15] <coreycb> sinzui, ok, need a bug for my issue?
[17:17] <sinzui> coreycb, a common reason for localhost machines stuck in pending is a bad cloud image cache. I don't know how kvm manages that
[17:17] <sinzui> coreycb, I think we do need a bug https://bugs.launchpad.net/juju-core/+bugs?field.tag=kvm doesn't describe the issue
[17:30] <jcw4> fwereade, or mgz : if you get a chance to review -- https://codereview.appspot.com/98260043/
[17:30] <coreycb> sinzui, thanks, I opened bug 1322281
[17:30] <_mup_> Bug #1322281:  local provider deployment of mysql to kvm hangs in pending state <juju-core:New> <https://launchpad.net/bugs/1322281>
[17:30] <sinzui> thank you
[17:37] <natefinch> Ahh, there's nothing like starting the day at 4am with a puking 11 month old.
[17:38] <rick_h_> natefinch: all uphill from here
[17:38] <natefinch> haha, yeah
[17:40] <wwitzel3> I've ended the night with a puking 30 year old before, if that makes you feel any better :P
[17:40] <rick_h_> wwitzel3: yourself doesn't count :P
[17:41] <wwitzel3> I'd puke, thats fiscally irresponsible
[17:41] <wwitzel3> I don't
[17:41] <jcw4> wwitzel3: I don't know... with a 30 yro you can laugh at them
[17:42] <jcw4> and... walk away
[17:42] <wwitzel3> I feel like you can laugh at an 11 month old too, doubt they would remember
[17:42] <jcw4> haha
[17:42] <wwitzel3> oh, yeah, well the walk away part .. not so much
[17:43] <natefinch> At least I didn't have to catch her puke in my hands this time, that was nice.
[17:44] <wwitzel3> I see as a parent your definition of nice has to change slightly
[17:44] <natefinch> lol yup
[17:44] <jcw4> haha yep
[17:51] <voidspace> natefinch: morning
[17:52] <natefinch> voidspace: morning :)
[17:54] <voidspace> hmmm... so adding an attribute to the slice of immutableAttributes in config.go isn't sufficient to make it actually immutable
[17:54] <voidspace> I thought that was a bit hopeful
[17:54] <voidspace> ah, wait
[17:54] <voidspace> I didn't go install first
[17:55] <voidspace> maybe it still is
[17:55] <voidspace> and it looks like it is
[17:56] <voidspace> ERROR cannot change syslog-port from 6514 to 3030
[17:56] <voidspace> that's what I wanted :-)
[17:56] <natefinch> nice
[17:56] <voidspace> ship it!
[17:57] <voidspace> I should probably test it
[17:57] <bodie_> rogpeppe, you around?
[17:57] <voidspace> but that can wait until the morning
[17:57] <voidspace> g'night all
[17:57] <voidspace> EOD
[17:57] <bodie_> o/
[18:56] <alexisb> natefinch, ping
[18:56] <natefinch> alexisb: howdy
[18:56] <alexisb> given you are the lead that is awake :)
[18:57] <natefinch> heh
[18:57] <alexisb> can you take a look at 2 bugs real fast and get them to triaged state
[18:57] <alexisb> most likely they are both critical given they are blocking teams but I want someone to take a quick look first
[18:58] <alexisb> 1322302 and 1322281
[18:58] <natefinch> ok
[18:58] <alexisb> thanks
[18:58] <bodie_> natefinch, does that mean you can give some input on whether we should use a dep of relatively unknown quality from github?  rogpeppe indicated some concern
[18:58] <natefinch> bodie_:
[18:58] <natefinch> bodie_: er yep
[18:58] <natefinch> bodie_: what's the repo?
[18:59] <bodie_> https://github.com/xeipuuv/gojsonschema
[18:59] <bodie_> AFAIK, it's the go-to JSON-Schema implementation for Go -- the only other github repo for gojsonschema is a fork of it which was recently merged
[18:59] <bodie_> I could loop you into the MR email thread if you'd like
[18:59] <natefinch> bodie_: I think I saw part of it at least
[19:00] <bodie_> about licensing, right
[19:00] <bodie_> I should let you take care of what alexisb wanted, but let me know if you have any input
[19:00] <natefinch> bodie_: I'll take a look at it.
[19:20] <natefinch> alexisb: I made the first one high, since slow is not the same as "doesn't work"
[19:20] <natefinch> alexisb: the second one is "doesn't work" so I made it critical
[19:20] <natefinch> alexisb: both need investigation by dev or QA to confirm it's reproducible, though
[19:21] <alexisb> natefinch, thank you, can you make sure to comment on any needs that block forward progress from juju-core in the bug?
[19:28] <natefinch> alexisb: sure
[19:59] <perrito666> natefinch: here is the everybody is happy patch https://codereview.appspot.com/92320043
[20:01] <natefinch> perrito666: nice
[21:22] <waigani> morning all
[21:23] <waigani> how do you find the current juju user?
[21:34] <arosales> wallyworld_: et al., are folks seeing issues on HP Cloud
[21:34] <arosales> mbruzek: is seeing hex output in juju status
[21:34] <wallyworld_> oh really? 1.18.3?
[21:35] <arosales> wallyworld_:  1.19.3 I think
[21:35] <mbruzek> I am seeing some very strange output
[21:35] <mbruzek> http://paste.ubuntu.com/7503161/
[21:35] <mbruzek> juju status
[21:36] <wallyworld_> ok, so crappy error message :-( looks like the security group quota may have been exceeded perhaps
[21:36] <wallyworld_> let me have a look to see if i can figure it out
[21:40] <wallyworld_> mbruzek: so, looking at the hp console, it seems your machine 1 reports an " Error(spawning) "
[21:40] <wallyworld_> so at first glance it looks like juju tried to start the machine and got an error back from hp cloud
[21:40] <wallyworld_> and reported the error in juju status but in a crappy way
[21:41] <wallyworld_> you could try destroying the machine in juju
[21:41] <mbruzek> wallyworld_, I will do that now
[21:41] <wallyworld_> and possibly manually deleting from hp cloud
[21:41] <wallyworld_> perhaps file a bug about the poor error message
[21:42] <mbruzek> wallyworld_, this is my second time seeing these problems, I believe they will persist after I destroy-environment and re-bootstrap
[21:43] <wallyworld_> mbruzek: it seem though that some machines start ok?
[21:43] <mbruzek> correct, but there always seems to be one that does not.
[21:43] <wallyworld_> i tried clicking on the console log in the hp cloud admin page but it just hung
[21:43] <mbruzek> $ juju destroy-machine 1
[21:43] <mbruzek> ERROR no machines were destroyed: machine 1 has unit "cassandra/0" assigned
[21:43] <mbruzek> Do I need to --force?
[21:44] <wallyworld_> you either need to destroy the unit/service first or i think there's now a --force option?
[21:44] <wallyworld_> perhaps the all-machine.log on the start server will give clues
[21:45] <wallyworld_> you could juju debug-log in another terminal window as you try and deploy charms
[21:45] <wallyworld_> and then see what it says if the machine provisioning on the cloud fails
[21:45] <wallyworld_> not start server, state server
[21:46] <wallyworld_> arosales: you meeting with the joyent folks today?
[21:47] <mbruzek> wallyworld_, I already had juju debug-log running in another window!
[21:47] <wallyworld_> what did it say as machine 1 failed to start?
[21:47] <wallyworld_> pastebin it if you want
[21:50] <arosales> wallyworld_: they defered till next wed
[21:50] <mbruzek> Is there somewhere all-machine.log is on my system?  I am cut and pasting this to pastebin and it is not very fast
[21:50] <arosales> wallyworld_: any traction on your pull requests?
[21:50] <wallyworld_> arosales: no :-(, still pending. we really need them merged
[21:50] <wallyworld_> mbruzek: the log file is on the state server
[21:51] <mbruzek> wallyworld_, Here is the top of the juju debug-log command after I started the deployment
[21:51] <mbruzek> http://pastebin.ubuntu.com/7503212/
[21:51] <arosales> wallyworld_: ugh. ok I'll ping again. Really they shouldn't have to meet with me to get the pull request merged. I thought they were taking a look.
[21:51] <mbruzek> wallyworld_, machine-0: 2014-05-22 19:48:53 ERROR juju.provisioner provisioner_task.go:417 cannot start instance for machine "1": cannot run instance: failed to run a server with nova.RunServerOpts
[21:52] <wallyworld_> arosales: part of the issue is that the ppc tests keep timing out because of the issue, and we are under pressure to get those sorted out
[21:52] <wallyworld_> arosales: maybe they are looking at them :-) i'll check later today to see if they're still pending
[21:53] <wallyworld_> mbruzek: so, it's saying that the cloud has reported an error starting the instance but doesn't really say what the error is (eg error code)
[21:55] <arosales> wallyworld_: ok I'll add that into my ping.
[21:55] <wallyworld_> thank you :-)
[21:57] <wallyworld_> mbruzek: did you just destroy the environment?
[21:57] <mbruzek> wallyworld_, Yes I wanted to try a simpler case.
[21:58] <wallyworld_> the console shows just a state server machine now which is green, but also an old machine 1
[21:58] <wallyworld_> and i think that's the issue
[21:58] <wallyworld_> the old machine 1 isn't getting removed because it's broken
[21:58] <mbruzek> I am on the hpconsole but not seeing that information what page are you on?
[21:58] <wallyworld_> https://console.hpcloud.com/compute/az-1_region-a_geo-1/servers
[21:58] <wallyworld_> if the old machine 1 is there, juju can't start another one with the same name
[21:59] <mbruzek> OK I see
[21:59] <wallyworld_> that would be a logical explanation for the error
[21:59] <mbruzek> Yes
[21:59] <mbruzek> I just bootstrapped so I should only have 1 server out there.  Let me destroy and clean that one up manually
[21:59] <wallyworld_> yep, i reckon that will solve your issue (i hope)
[22:00] <arosales> wallyworld_: pinged again re joyent pull requests
[22:00] <wallyworld_> awesome, thanks
[22:07] <mbruzek> wallyworld_, Terminating juju-hp-mbruzek-machine-1 seems to have done the trick.  My deployment is looking to be in better shape.
[22:08] <wallyworld_> great :-)
[22:08] <wallyworld_> my guess is there was an error starting it in the first place and hence a subsequent destroy env couldn't kill it
[22:08] <mbruzek> I agree
[22:13] <wallyworld_> hazmat: you around?
[22:16] <hazmat> wallyworld_, am now
[22:16] <hazmat> waigani, what's up
[22:16] <waigani> hazmat: hey :)
[22:16] <hazmat> doh.
[22:16] <hazmat> :-)
[22:17] <waigani> ah you wanted wallyworld_
[22:17] <wallyworld_> hazmat: with dan's bug report about lxc staying in pending a long time - that's just because the first machine has to download the lxc image
[22:17] <hazmat> wallyworld_, hmm..
[22:17] <wallyworld_> we added cloning for them so the next one started fast
[22:17] <hazmat> wallyworld_, so let me pose it differently
[22:17] <hazmat> i agree though that's possibly the primary issue there
[22:18] <wallyworld_> we could do something like allow juju to copy an image across at bootstrap
[22:18] <hazmat> wallyworld_, if i have containers already running (via manual).. it still takes up to a minute for them to show as started
[22:18] <hazmat> during which they go through various odd permutations in status agent-state: (started)
[22:18] <wallyworld_> ok, that's a bit different
[22:18] <hazmat> it feels like the pinger/heartbeat is a bit schizo there
[22:19] <wallyworld_> that seems like a separate but legitimate issue
[22:19] <hazmat> wallyworld_, the fix there would be lxc template hooks respecting proxy settings
[22:19] <hazmat> re dan's issue
[22:19] <hazmat> that would decrease the time
[22:19] <hazmat> alternatively downloading the image while giving feedback to the user
[22:19] <wallyworld_> adding feedback is on the todo list
[22:20] <hazmat> ie.. we don't call out 'install' hook running.. we just leave things in pending
[22:20] <wallyworld_> i'm not sure if fetching the template honours the proxy settings
[22:20] <wallyworld_> you think it doesn't?
[22:20] <hazmat> its just doing a wget, but not sure the env variables propagate
[22:20] <hazmat> from juju's invocation
[22:20] <wallyworld_> ok, we can check that
[22:21] <wallyworld_> there's no good short term solution though sadly
[22:21] <wallyworld_> apart from the proxy thing
[22:22] <hazmat> wallyworld_, any feedback to the user via status would be helpful to users re long running ops
[22:22] <wallyworld_> agree +100, but it's not necessarily trivial, it's on the todo list for sure
[22:22] <hazmat> ack
[22:23] <wallyworld_> i hope that dan's group can bear the pain of the initial download and use cloning for speedier deployment for the thers
[22:23] <wallyworld_> others
[22:23] <wallyworld_> i'll comment on the bug
[22:24] <wallyworld_> hazmat: seems also bug 1322281 which was just raised is the same thing
[22:24] <_mup_> Bug #1322281:  local provider deployment of mysql to kvm hangs in pending state <cloud-installer> <juju-core:Triaged> <https://launchpad.net/bugs/1322281>
[22:24] <wallyworld_> "hangs" might mean slow template download
[22:25] <wallyworld_> or no http egress except via proxy perhaps
[22:25] <wallyworld_> so i think if we check the proxy thing, that may be what we can do short term
[22:25] <wallyworld_> i'm sorta hoping we do not currently use the proxy settings so we have something to fix
[22:28] <hazmat> wallyworld_, its more than proxy.. its respecting a cache
[22:28]  * wallyworld_ -> breakfast bbiab
[22:28] <hazmat> although typically the same
[22:28] <hazmat> most of thse are orange boxes
[22:28] <wallyworld_> which cache?
[22:28] <wallyworld_> a squid cache?
[22:28] <wallyworld_> which i guess the proxy would point to
[22:28] <hazmat> wallyworld_, yeah.. squid-deb-proxy cache.. also orange boxes have full archive mirrors
[22:29] <hazmat> which we don't configure/use
[22:29] <wallyworld_> ok, i think we can/should add in proxy support for getting template
[22:29] <wallyworld_> that should be a simple fix
[22:29] <hazmat> sounds good
[22:29] <wallyworld_> thanks for the input
[22:30] <wallyworld_> now i am really off to eat and caffeinate myself
[23:20] <wallyworld_> hazmat: i've done some checking. the lxc-create which calls wget is invoked via golang's exec.Command(). this does pass through env vars like http_proxy. so ostensibly, if they were to set http-proxy and/or apt-http-proxy in their env config, those should be passed through and used when the template is fetched
[23:21] <wallyworld_> do you know if they set http-proxy in their env?
[23:23] <hazmat> wallyworld_, talking to kirkland atm about setting up a transparent proxy on the orangebox, so everything hits the proxy cache.
[23:23] <hazmat> wallyworld_, i've heard different reports from different folks,  i don't have a solid baseline for analysis
[23:23] <wallyworld_> hazmat: great, so i hope/expect that will solve the bug
[23:23] <wallyworld_> can you let me know how you get on?
[23:23] <wallyworld_> so i can schedule juju work if needed
[23:24] <wallyworld_> even without a transparent cache, setting http-proxy in env config should work