[00:17] <waigani> thumper: using Entity Tag now: https://github.com/juju/juju/pull/22
[00:18] <waigani> my branches were all out of sync, had to pull, godeps, make check - back on track now
[01:15] <wallyworld> axw: morning. would you have any time today or tomorrow to look at bug 1325830?
[01:15] <_mup_> Bug #1325830: Can't destroy MAAS environment with LXCs <destroy-environment> <landscape> <lxc> <maas-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1325830>
[01:15] <axw> wallyworld: yep, I can take a look
[01:15] <wallyworld> \o/
[01:16] <wallyworld> axw: i have to disappear for a couple of hours to have lunch with a friend who has had a death in the family. back a bit later
[01:16] <axw> wallyworld: :(  no worries, ttyl
[01:16] <wallyworld> yeah, car crash :-(
[01:33] <thumper> waigani: ack, will look
[01:45] <davecheney> sinzui: ping
[01:53] <davecheney> does anyone know where the recipes for the precise builds of juju are ?
[01:53] <davecheney> wallyworld: axw  ?
[01:54] <axw> nope, sorry
[01:54] <davecheney> damn
[01:54] <davecheney> wanna get the precise build up to go 1.2
[01:54] <davecheney> so we can resolve all the dependency issues
[02:04] <davecheney> does anyone know how juju even gets built for precise
[02:04] <davecheney> I cannot find anything on the project page
[02:04] <davecheney> there is no linkage from any milestone/series to a ubuntu series
[02:14] <thumper> davecheney: I'm guessing *magic*
[02:14] <rick_h_> thumper: took my answer
[02:14]  * thumper high fives rick_h_
[02:15] <davecheney> thumper: found it
[02:15] <davecheney> wondering how I can copy the trusty package into this ppa
[02:15] <davecheney> https://launchpad.net/~juju/+archive/golang
[02:18] <thumper> I apologise in advance for this massive branch
[02:22] <thumper> hmm...
[02:22]  * thumper tries something
[02:22] <davecheney> gave up, sent email
[02:23] <thumper> wtf is github.com/bmizerany/pat ?
[02:23] <rick_h_> heh, someone else brought that up today
[02:23] <rick_h_> missed what the result of that was
[02:23] <davecheney> yes
[02:23] <davecheney> who landed that
[02:24] <davecheney> 'twas jam in dae6b348
[02:26] <thumper> what is the command that does fetch and pull ?
[02:26] <davecheney> git ?
[02:26] <davecheney> or go ?
[02:27] <rick_h_> fetch and merge is done with pull
[02:28] <thumper> git
[02:28] <thumper> rick_h_: but I don't want to merge
[02:28] <thumper> I just want to update master
[02:28] <rick_h_> ok, then don't use pull
[02:28] <rick_h_> git pull upstream master
[02:28] <thumper> rick_h_: so what do I use?
[02:28]  * rick_h_ thinks you guys are using upstream to refer to the official juju repo
[02:29] <davecheney> yup
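[editor's note] A minimal sketch of the workflow thumper is after: update local master from the shared repo without merging anything into the current branch. The "upstream" remote name follows the convention mentioned above; paths and commit messages are invented for the demo.

```shell
# Sketch: fast-forward local master from "upstream" while staying on a
# feature branch. Repo layout here is a throwaway stand-in.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the shared juju/juju repository.
git -c init.defaultBranch=master init -q upstream
git -C upstream -c user.email=a@b -c user.name=a commit -q --allow-empty -m "initial"

# Developer clone, working on a feature branch.
git clone -q upstream work
cd work
git remote rename origin upstream
git checkout -q -b feature

# Upstream moves on while we work.
git -C ../upstream -c user.email=a@b -c user.name=a commit -q --allow-empty -m "new upstream work"

# The one-step answer: fetch and fast-forward local master, without
# checking it out and without merging anything into the feature branch.
git fetch -q upstream master:master

git log --format=%s -1 master   # shows the new upstream commit
```

The `src:dst` refspec form (`master:master`) only fast-forwards; it refuses to clobber a diverged local master, which is exactly the safety thumper wants from "just update master".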
[02:29] <thumper> rick_h_: ok, new question, interactively merge parts of another branch into the current one
[02:29] <rick_h_> thumper: git cherry-pick $commit? or you want per code block in the diff?
[02:29] <thumper> per code block
[02:30] <rick_h_> interesting question
[02:30] <thumper> ah... here I was hoping that you'd know
[02:31]  * thumper falls back on the push the entire massive branch up for review
[02:32] <rick_h_> thumper: http://stackoverflow.com/questions/449541/how-do-you-merge-selective-files-with-git-merge
[02:33] <rick_h_> "Git also has great support for doing “reverse” squashes where a single commit is split into multiple patches. Below is an example of how to split a commit that has multiple unrelated changes in the same file. "
[02:33] <rick_h_> http://magazine.redhat.com/2008/05/02/shipping-quality-code-with-git/
[02:33] <thumper> fuck that
[02:33] <rick_h_> lol
[02:33] <thumper> sorry reviewer
[02:34]  * thumper needs to rebase
[02:35] <rick_h_> git cherry-pick --no-commit might also be useful
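[editor's note] The selective-merge options discussed above, sketched end to end. File names and branch names are invented; the runnable part takes one file's changes from a branch, and the commented lines show the per-hunk and per-commit variants rick_h_ mentions.

```shell
# Sketch: take part of a feature branch onto master without a merge.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git -c init.defaultBranch=master init -q .
commit() { git -c user.email=a@b -c user.name=a commit -q "$@"; }

echo base > keep.txt
echo base > skip.txt
git add .
commit -m base

git checkout -q -b feature
echo wanted > keep.txt
echo unwanted > skip.txt
git add .
commit -m "mixed changes"

git checkout -q master
# Take only keep.txt's state from the feature branch (no merge commit,
# no history brought over, skip.txt untouched):
git checkout -q feature -- keep.txt
# Per-hunk control within a file is the interactive variant:
#   git checkout -p feature -- keep.txt
# And to apply a whole commit's changes without committing:
#   git cherry-pick --no-commit <sha>
cat keep.txt skip.txt
```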
[02:41]  * thumper is in conflict resolution hell
[02:45] <thumper> if this test run has no problem, fair dinkum it'll be a miricle
[02:45]  * thumper sighs
[02:45] <thumper> spelling was never a strong suit
[02:47] <axw> oh fun, we've got an ICE on the bot trying to build the ec2 tests
[02:49] <davecheney> ice ?
[02:49] <axw> internal compiler error
[02:49] <davecheney> go ?
[02:49] <axw> provider/ec2/live_test.go:1: internal compiler error: dgcsym: off=8589934928, size=589934736, type struct { overflow *struct { overflow *struct { overflow *struct { overflow *struct { overflow *<...>; keys [8]string; values [8][]instance.Id }; keys [8]string; values [8][]instance.Id }; keys [8]string; values [8][]instance.Id }; keys [8]string; values [8][]instance.Id }; keys [8]string; values [8][]instance.Id
[02:49] <davecheney> wow
[02:49] <davecheney> never seen that one
[02:50] <davecheney> off and size are AFU
[03:01] <thumper> wow, that's weird
[03:01] <thumper> after merging trunk, I get a test failure in the logging worker
[03:01] <thumper> futz
[03:02] <thumper> I bet it isn't isolating the env var
[03:04] <thumper> yup
[03:04] <sinzui> davecheney, lp recipes don't work with go. I upload source packages to the ppa. I backported 1.2 to the juju-packagers devel ppa a few weeks ago. 1.19.3 precise and saucy was built with it
[03:05] <thumper> in someones infinite wisdom of test refactoring, they have broken isolation
[03:05] <sinzui> davecheney, as there are no reports of badness. I intend to build the 1.20 with it. I need to work with foundations though to discuss backporting 1.2 to ctools
[03:05] <davecheney> sinzui:  right
[03:06] <davecheney> could you please reply to that thread and tell jamespage not to do anything then
[03:06] <davecheney> i was obviously looking in the wrong
[03:06] <davecheney> place
[03:06] <sinzui> okay
[03:06] <davecheney> but that also means that mgz can land his branch to the code which uses go.crypto/ssh
[03:06] <davecheney> and then juju builds from the trunk of all the source
[03:06] <davecheney> \o/
[03:08] <davecheney> https://bugs.launchpad.net/bugs/1312940
[03:08] <_mup_> Bug #1312940: Update to use gosshnew from go.crypto <ssh> <juju-core:In Progress by gz> <https://launchpad.net/bugs/1312940>
[03:11]  * thumper running daughter down to hockey
[03:12] <davecheney> axw: where are the instructions for setting up origin and upstream branches
[03:12] <davecheney> ?
[03:12] <axw> CONTRIBUTING.md
[03:12] <davecheney> i thought they were in CONTRIBUTING.md
[03:12] <axw> they're not?
[03:12] <davecheney> i must be blind
[03:12] <davecheney> maybe not on the branch I have
[03:12]  * axw looks
[03:13] <axw> davecheney: https://github.com/juju/juju/blob/master/CONTRIBUTING.md#fork
[03:13] <davecheney> ta
[03:13] <davecheney> i must be looking at an old branch
[03:14] <sinzui> bugger. the arm64 machine fell off the net. I am removing it from the list of packages and tools to build to unblock CI from testing the current revision
[03:17] <axw> sinzui: is that go 1.2 PPA safe to add to our github merge job's setup already? we've got a compiler error that's likely to disappear if I upgrade it now
[03:18] <axw> the next question being where is the PPA?
[03:19] <sinzui> The PPA is not used to test...we cannot wait hours or days to build packages to test
[03:19] <axw> how will the unit test jobs get the 1.2 compiler then?
[03:20] <axw> the precise ones anyway
[03:20] <sinzui> axw jamespage setup a private ppa that we build into. It is private because we need to ensure people do not download from it before I publish tools
[03:21] <sinzui> axw. We can copy the built packages to a public testing ppa. I can update the run-unit-test job to add the ppa for saucy and precise.
[03:22] <axw> sinzui: I'm talking about run-unit-tests-precise-amd64 for example, we'll need to update the go compiler on there
[03:22] <sinzui> axw. I can do that tomorrow when I am truly awake
[03:22] <axw> ok
[03:23] <sinzui> axw, good point about that test I need to make the same change to the non-revision test
[03:23]  * thumper sighs
[03:23] <thumper> How do I squash my commits?
[03:23] <thumper> rebased with master already
[03:23] <axw> thumper: are you doing it interactively?
[03:23] <mwhudson> you can do it in rebase -i i think
[03:24] <axw> git rebase -i
[03:24] <thumper> it says I'm up to date
[03:24] <rick_h_> did you merge master or rebase master? always rebase
[03:24] <mwhudson> git rebase -i master
[03:24] <sinzui> thumper, I git rebase -i --autosquash master. Change all but the first commit to s (squash)
[03:25] <sinzui> thumper, git will let you revise the commit message in the next screen
[03:25] <thumper> sinzui: I tried that, it didn't work
[03:25] <sinzui> git hates you
[03:25] <thumper> it does
[03:25] <sinzui> it knows who you are
[03:25] <axw> ah hrm, our merge job is running trusty and hence go 1.2.1
[03:25]  * thumper tries again
[03:25] <axw> crapola
[03:25] <sinzui> hack on git-bzr and bzr-git and then only use bzr command
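[editor's note] The squash thumper is fighting with, sketched. `git rebase -i master` only offers commits that are not already in master, which is why an already-rebased, already-merged branch reports "up to date". A non-interactive route to the same end state (shown here with invented branch and file names) is a soft reset to the merge base followed by a single commit.

```shell
# Sketch: squash a branch's commits against master without the editor.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git -c init.defaultBranch=master init -q .
commit() { git -c user.email=a@b -c user.name=a commit -q "$@"; }
commit --allow-empty -m base

git checkout -q -b big-branch
for n in 1 2 3; do
    echo "$n" > "f$n"
    git add "f$n"
    commit -m "wip $n"
done

# Collapse the three wip commits into one. --soft moves HEAD but leaves
# the index and working tree alone, so the final tree is identical.
git reset -q --soft "$(git merge-base master HEAD)"
commit -m "one squashed commit"
git log --format=%s master..HEAD   # a single commit remains
```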
[03:27] <thumper> how do I set the editor git uses?
[03:28] <davecheney> $EDITOR
[03:28] <thumper> it obviously doesn't honour EDITOR
[03:28] <davecheney> it really does
[03:29] <mwhudson> unless you've set core.editor
[03:29] <sinzui> thumper, It honours vi/vim, probably non-gui emacs
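[editor's note] mwhudson has it right: git resolves its editor as GIT_EDITOR > core.editor > VISUAL > EDITOR > built-in default, so `$EDITOR` is silently ignored whenever `core.editor` is set. `git var GIT_EDITOR` shows what git would actually launch; the editor names below are arbitrary.

```shell
# Sketch of git's editor-selection precedence.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
unset VISUAL GIT_EDITOR   # clear higher-precedence settings for the demo

git config core.editor nano
EDITOR=vim git var GIT_EDITOR   # prints "nano": config beats $EDITOR

git config --unset core.editor
EDITOR=vim git var GIT_EDITOR   # prints "vim": $EDITOR is honoured again
```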
[03:29]  * davecheney jarring chord
[03:30] <davecheney> axw: thanks for the review
[03:30] <axw> np
[03:30] <davecheney> i should be able to land this
[03:30] <davecheney> and if all the bits are in place
[03:30] <davecheney> it'll work
[03:30] <davecheney> if not, it won't
[03:30] <axw> the landing bot is 1.2
[03:31]  * davecheney fires forward missiles
[03:31] <axw> easy enough to back out if we need to
[03:32] <sinzui> axw, because landing bot is trusty right?
[03:32] <axw> sinzui: yep
[03:33] <axw> we did talk about making it precise, but haven't yet
[03:34] <davecheney> fail
[03:35] <davecheney> sure that bot is running 1.2 ?
[03:35] <axw> Get:90 http://us-east-1.archive.ubuntu.com/ubuntu/ trusty/universe golang-go amd64 2:1.2.1-2ubuntu1 [8,104 kB]
[03:35] <davecheney> hmmm
[03:35] <sinzui> axw I can start the copy of the packages to https://launchpad.net/~juju/+archive/experimental now. they could be there in a minute.
[03:35]  * sinzui looks at the test that needs it
[03:36] <axw> sinzui: thanks, no rush. the test failure I've got is failing with 1.2 :(
[03:37] <axw> davecheney: I think that something may be running earlier on with go 1
[03:37] <axw> davecheney: the bit that creates the tarball I guess
[03:38] <davecheney> maybe
[03:38] <davecheney> i'm not sure a go 1.3 dependency didn't leak into go.crypto/ssh
[03:43] <davecheney> axw: i'm investigating
[03:43] <axw> okey dokey
[03:43] <davecheney> not quite sure what is going on
[03:43] <axw> I will continue with my ICE picking
[03:43] <davecheney> ok
[03:43] <davecheney> shitload of compiler errors from 1.2 are fixed in 1.3
[03:44] <sinzui> all hell is breaking loose. I cannot ssh to the CI machines in lcy02
[03:44] <sinzui> Why couldn't this fail during my work hours
[03:45] <davecheney> sinzui: you know how these things work
[03:46] <sinzui> I think fate is pushing me to move all canonistack testing to lcy01... I setup a slave to test kvm there saturday
[03:48] <sinzui> and the ppc slave just went offline because the gateway is gone
[03:49]  * davecheney sound of air being let out of a baloon
[03:49] <thumper> wallyworld, menn0: this is a big one... https://github.com/juju/juju/pull/68
[03:49] <thumper> sorry about that
[03:49] <davecheney> thumper: you did all this in one commit :)
[03:50] <menn0> queue jokes about thumper's big branch...
[03:50] <menn0> thumper: reviewing now
[03:50] <thumper> davecheney: squished it
[03:51] <thumper> I'd prefer to work out my pipeline workflow with git
[03:51] <thumper> then it would have been much nicer
[03:51] <thumper> as big as it is, it should be fairly straightforward
[03:52] <menn0> it might be worth looking at Stacked Git. I haven't used it before though.
[03:53] <thumper> menn0: I did look at it, not really suitable for what I want
[03:53] <menn0> can someone tell me what Juju uses for the mongo db username and password? I'm trying to connect using mongo shell
[03:54] <davecheney> menn0: admin user ?
[03:55] <davecheney> ^ note, guess
[03:55] <menn0> that's what I figured too but that doesn't seem to work
[03:55] <davecheney> :sadface:
[03:55] <davecheney> axw: still investigating
[03:55] <menn0> here's what I'm doing: mongo 127.0.0.1:37017/juju-db --ssl --username admin --password <env's admin password from .jenv>
[03:56] <davecheney> my trusty vm is installing the internet
[03:56] <menn0> I get "login failed"
[03:56]  * menn0 hunts through the code
[03:57] <davecheney> login failed tells you it's working
[03:57] <menn0> ?
[03:57] <davecheney> sorry, that wasn't helpful
[03:58] <davecheney> got 660, 92% complete
[03:58] <davecheney> large update is large
[03:59] <axw> bleh. changed from using a function literal to a struct+method and it goes away...
[04:00]  * davecheney sobs
[04:02] <axw> davecheney: btw yeah, whatever it is is fixed in 1.3 - I'm using rc1
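[editor's note] The shape of axw's workaround above, for the curious: moving captured state out of a function literal (closure) into a struct with a method sidestepped the go 1.2 internal compiler error. The types and names below are invented; only the before/after shape of the change is drawn from the log.

```go
package main

import "fmt"

// Before: behaviour and captured state live in a closure.
func closureVersion(ids []string) func() int {
	return func() int { return len(ids) }
}

// After: the same state held in a struct, behaviour on a method.
// Semantically equivalent, but it avoids the closure code path that
// tripped the 1.2 compiler.
type counter struct {
	ids []string
}

func (c counter) count() int { return len(c.ids) }

func main() {
	ids := []string{"i-1", "i-2"}
	fmt.Println(closureVersion(ids)(), counter{ids: ids}.count())
}
```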
[04:10] <davecheney> axw: i cannot reproduce that failure on a clean machine
[04:10] <davecheney> dfc@trusty:~$ go get -u -v code.google.com/p/go.crypto/ssh
[04:10] <davecheney> code.google.com/p/go.crypto (download)
[04:10] <davecheney> code.google.com/p/go.crypto/ssh
[04:10] <davecheney> dfc@trusty:~$ go version
[04:10] <davecheney> go version go1.2.1 linux/amd64
[04:10] <axw> davecheney: tried with go1? pretty sure the tarball creation is done on precise
[04:11] <davecheney> this won't work on go1
[04:11] <davecheney> axw: the bot should be doing
[04:11] <davecheney> go get -d -v
[04:12] <davecheney> is it possible to make it do that ?
[04:12]  * axw looks what it's doing
[04:12] <davecheney> (it won't be using -d)
[04:12] <axw> this part is part of common CI, so I need to check if/how to change it
[04:13] <davecheney> or, can you apply the go 1.2 deb ?
[04:13] <davecheney> that will also solve the problem
[04:13] <davecheney> and we get precise and trusty build coverage to boot
[04:17] <axw> davecheney: heh, the problem is that it can't build godeps
[04:18] <axw> re 1.2 on the box, not my call. it's sinzui's machine. he said he'd update it tomorrow I think
[04:18] <davecheney> ok
[04:18] <davecheney> thanks
[04:18] <sinzui> I just copied the packages to the ~juju/experimental ppa. The tests will use them tomorrow
[04:19] <davecheney> sinzui: what about updating the version of Go on the jenkins host ?
[04:19] <sinzui> davecheney, We don't compile there or run unit tests on it
[04:20] <davecheney> sinzui: compilation happens
[04:20] <davecheney> http://juju-ci.vapour.ws:8080/job/github-merge-juju/93/console
[04:20] <sinzui> davecheney, not my job
[04:20] <davecheney> this is building because go get will compile anything it downloads
[04:20] <sinzui> And last I saw, that job ran in an instance, not in jenkins
[04:21] <davecheney> Started by remote host 54.86.142.177
[04:21] <davecheney> Building on master
[04:21] <davecheney> http://juju-ci.vapour.ws:8080/computer/(master)/
[04:21] <davecheney> master jenkins node
[04:21] <axw> sinzui: it's building the tarball
[04:21] <axw> sinzui: that needs godeps, and it's having problems doing that it seems
[04:21] <davecheney> sinzui: there are two solutions
[04:21] <thumper> heh, I'm sure the non-americans here would appreciate that the A0 paper size is by definition one square meter
[04:21] <davecheney> 1. use go get -d, which will skip building anything it go get's
[04:22] <davecheney> or upgrade to go 1.2
[04:22] <axw> hmm actually it mustn't be to do with godeps, it was building that fine before
[04:22]  * axw looks again
[04:23] <axw> it's building them on there. we could just change it to do this all on the lander
[04:23] <sinzui> davecheney, That job builds the tarball locally. the actual building and testing happens in a ubuntu instance provisioned by the test. I think that ami is trusty
[04:23] <sinzui> mgz knows the details
[04:23] <axw> sinzui: the tarball gets built on the jenkins host, but we can change that
[04:24] <axw> and part of building the tarball seems to be to run "go build ./..."
[04:24] <davecheney> axw: i think it's running go get launchpad.net/juju-core/... to fetch all the dependencies
[04:24] <davecheney> then running godeps -u ... to switch their revisions
[04:25] <axw> davecheney: I can show you the script if you like. it first does "go get -d <stuff>", then builds/runs godeps, then runs "go build ./..."
[04:25] <sinzui> Yes, but those are stripped out of the packaging, and the test rebuilds with the local compiler. The cargo-culted test runs on many series and arches, and the build is redone with the proper compiler
[04:25] <davecheney> axw: ok, if it's doing go build ./...
[04:26] <davecheney> then the machine running that needs to run go 1.2
[04:26] <axw> yeah, we'll just fix the lander script to do all this on the lander
[04:26] <sinzui> I advised mgz to switch to run-unit-tests, which I can change to use a specific golang.
[04:26] <davecheney> m'kay
[04:27] <axw> sinzui: no worries, we know what the issue is so we can deal with it now
[04:27] <davecheney> thanks sinzui
[04:27] <davecheney> thanks axw
[04:27] <thumper> menn0: did you want to chat about the review, or is it all good?
[04:28] <menn0> thumper: still looking
[04:28] <menn0> thumper: all ok so far. just some minor suggestions so far
[04:29] <menn0> thumper: one thing. github.com/juju/juju/state/factory is just for testing right?
[04:29] <thumper> menn0: yes
[04:29] <thumper> menn0: it imports gocheck
[04:29] <menn0> the module name doesn't really indicate that
[04:29] <thumper> menn0: I wanted a name that wasn't "testing", and moved it out of "state/testing" for that reason
[04:30] <thumper> we have a proliferation of testing packages...
[04:30] <thumper> could move it to be under a testing package...
[04:30] <thumper> that may make it more obvious
[04:30] <thumper> testing/factory maybe
[04:30] <menn0> that might be worthwhile.  github.com/juju/juju/state/testing/factory
[04:31] <menn0>  github.com/juju/juju/testing/factory is good too
[04:32] <menn0> thumper: also, given that all the factory does is make stuff what about User() instead of MakeUser(). Some of the places that are now using the factory are much longer than they used to be.
[04:33] <thumper> menn0: longer yes, but clearer IMO
[04:33] <thumper> re: User, maybe...
[04:33] <thumper> I'd like to get wallyworld's input
[04:33] <thumper> and maybe axw
[04:34] <thumper> wallyworld because he understands the idiom from launchpad
[04:34] <menn0> this is a definite win: -	cs.State.AddUser(state.AdminUser, "", "pass")
[04:34] <menn0>  +	cs.State.AddAdminUser("pass")
[04:34] <menn0> but i'm not sure that this is:
[04:34] <menn0> -	_, err = s.State.AddUser("arble", "", "pass")
[04:34] <menn0>  +	s.factory.MakeUser(factory.UserParams{Username: "arble"})
[04:34] <thumper> I do kind of like the verb-noun aspect of MakeUser
[04:35] <menn0> I see that
[04:35] <thumper> menn0: the clear bit of the second is that you don't have to modify all the call sites when you change how you make users
[04:35] <thumper> like I did by changing params to state.AddUser
[04:35] <menn0> completely agree with that
[04:35] <thumper> also
[04:36] <thumper> don't need to check err
[04:36] <thumper> as the factory does that
[04:36] <menn0> totally agree that that's better too
[04:36] <thumper> I feel that the slightly longer line is better for clarity of intent
[04:36] <menn0> my only (minor) gripe is that the lines are bit harder to grok now
[04:36] <menn0> overall it's a win I suppose
[04:37]  * menn0 misses keyword args
[04:37] <thumper> well, I'd prefer: s.factory.MakeUser(username="arble")
[04:37] <thumper> but we can't do that
[04:37]  * menn0 nods
[04:37] <menn0> keep it the way it is
[04:37] <thumper> this is my attempt to take a good python testing pattern and use it here
[04:37] <menn0> it's definitely better than before
[04:38] <thumper> I think so
[04:38] <thumper> I think also as the factory grows other methods
[04:38] <thumper> like MakeMachine, MakeService...
[04:38] <thumper> etc
[04:38] <thumper> MakeUnit would MakeService and MakeMachine by default...
[04:38] <thumper> that type of pattern
[04:38] <menn0> yep that all sounds great
[04:39] <menn0> of course the methods could just be Machine, Service, Unit :)
[04:39] <thumper> but if you have a service called Machine...
[04:39] <menn0> so you get: s.factory.Unit(...), s.factory.Service(...)
[04:39] <thumper> it doesn't say what
[04:39] <thumper> meh...
[04:39] <menn0> I actually don't care that much
[04:40] <thumper> just being devil's advocate?
[04:40] <menn0> yes
[04:40] <menn0> I usually prefer verb-noun myself
[04:41] <menn0> idea: you could call the factory field itself "make" so the calls look like: s.make.User(...), s.make.Unit(...) etc :)
[04:41] <menn0> probably too confusing for the uninitiated
[04:41] <menn0> but more concise
[04:44] <thumper> haha
[04:44] <thumper> hmm...
[04:45] <thumper> I do expect that we'll end up with some form of DSL for tests, but this may be taking it a step too far :)
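[editor's note] The pattern thumper and menn0 are circling, sketched: a params struct emulates keyword arguments, zero-value fields get defaults, and the factory asserts success itself so call sites need no error check. `Factory`, `UserParams`, and `MakeUser` follow the names in the discussion, but this implementation is illustrative, not juju's.

```go
package main

import "fmt"

type User struct {
	Name     string
	Password string
}

// UserParams stands in for keyword args: callers set only the fields
// they care about; everything else defaults.
type UserParams struct {
	Username string
	Password string
}

type Factory struct {
	counter int // for generating unique default names
}

// MakeUser fills in omitted params, so call sites don't break when a
// new field is added (thumper's point about not touching every caller
// when user creation changes).
func (f *Factory) MakeUser(p UserParams) *User {
	f.counter++
	if p.Username == "" {
		p.Username = fmt.Sprintf("user-%d", f.counter)
	}
	if p.Password == "" {
		p.Password = "password"
	}
	return &User{Name: p.Username, Password: p.Password}
}

func main() {
	f := &Factory{}
	u1 := f.MakeUser(UserParams{Username: "arble"}) // override one field
	u2 := f.MakeUser(UserParams{})                  // all defaults
	fmt.Println(u1.Name, u2.Name)
}
```

The "MakeAnyUser" idea reduces to `MakeUser(UserParams{})` here: the empty struct literal is the "give me any user" call.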
[04:46]  * thumper hasn't got any lxc fixes done yet...
[04:46] <thumper> bad thumper
[04:50] <menn0> thumper: review done. I've included the relevant bits from our discussion here.
[04:50] <thumper> ok, ta
[04:51] <menn0> thumper: it should have been about PRs right? :)
[04:51] <menn0> about 3 even
[04:51] <thumper> yeah...
[04:51] <thumper> if I was using bzr, it totally would have been
[04:52] <thumper> but I don't yet know how to do this easily with git
[04:52] <thumper> menn0: I'm thinking about the MakeAnyUser...
[04:52] <thumper> may well be worth it
[04:52] <thumper> for the general use case
[04:52] <menn0> thumper: it's a minor thing but it's going to get used a lot
[04:53] <thumper> that way the caller doesn't need to import the package to get a random thing
[04:53] <thumper> if the factory is in the base suite
[04:53]  * thumper nods
[04:53] <thumper> I agree with making the default thing very easy
[04:53] <menn0> yep that alone makes it worthwhile
[04:53]  * thumper does that
[04:54] <menn0> completely unrelated question... how do you figure out which state server is master?
[04:55] <waigani_> just created a tag from a string in order to get that string from the tag
[04:55] <waigani_> there's a bit of code I'm pretty proud of!
[04:58] <thumper> no idea sorry menn0
[04:58] <menn0> I've found a manual way to do it
[04:58] <menn0> connect using mongo shell until the prompt says "PRIMARY" :)
[04:59] <menn0> (trying each machine)
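[editor's note] menn0's "try each machine until the prompt says PRIMARY" can be scripted, and the mongo shell's `db.isMaster().ismaster` avoids reading prompts at all. Host addresses below are invented, and `run_mongo` is a stub standing in for the real client so the loop itself is runnable; the real invocation is shown in the comment.

```shell
# Sketch: find the replica-set primary by asking each state server.
# Real form per host (port and auth per menn0's command above):
#   mongo "$host:37017/admin" --ssl --quiet --eval 'db.isMaster().ismaster'
run_mongo() {
    # Stub: pretend 10.0.0.3 is the primary.
    case "$1" in
        10.0.0.3) echo true ;;
        *)        echo false ;;
    esac
}

master=""
for host in 10.0.0.1 10.0.0.2 10.0.0.3; do
    if [ "$(run_mongo "$host")" = "true" ]; then
        master="$host"
        break
    fi
done
echo "primary: $master"
```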
[05:03] <thumper> wallyworld,axw: is the bot wedged? there seem to be many pending merges
[05:03]  * wallyworld looks
[05:04] <axw> it's currently idle
[05:04] <axw> my build passed not long ago
[05:04] <wallyworld> last run was 30 mins ago
[05:06] <thumper> ok
[05:07] <wallyworld> thumper: i just ran the lander by hand, it claims no pull requests
[05:07] <thumper> ok
[05:08] <wallyworld> thumper: JujuConnSuite is kinda evil but convenient
[05:08] <davecheney> thumper: where is your $$merge$$ comment
[05:08] <davecheney> ?
[05:08] <wallyworld> should the factory be a fixture?
[05:09] <wallyworld> thumper: also, one of the cts guys has volunteered to fix some juju bugs. i could get him to look at local provider usability - improved status etc, or whatever else
[05:10] <wallyworld> that way you're  not on the hook for it
[05:13] <waigani_> thumper: your factory looks great :)
[05:14] <thumper> wallyworld: I think so
[05:14] <wallyworld> which question?
[05:14] <thumper> davecheney: almost there...
[05:14] <thumper> just fixing something
[05:14] <davecheney> ok
[05:14] <thumper> wallyworld: what do you mean by "should the factory be a fixture" ?
[05:15]  * thumper is in and out of the office to help make dinner
[05:15] <wallyworld> don't add it to JujuConnSuite  - introduce a new Suite with the factory
[05:15] <wallyworld> like the FakeHome fixtures
[05:16] <wallyworld> JujuConnSuite will hopefully die at some point
[05:16] <wallyworld> but it's convenient i know
[05:17] <wallyworld> if you had some specific local provider issues logged as bugs, let me know. progress reporting for the initial template image delay would be one
[05:20] <thumper> wallyworld: it needs a state connection
[05:20] <thumper> wallyworld: so, awkward
[05:20] <wallyworld> yes it does, you are right
[05:21] <wallyworld> i guess JujuConnSuite is the right place then for now
[05:21] <thumper> wallyworld: I did a bare minimal state thing for the factory_test
[05:22] <thumper> it would be good to get a base test suite that isn't as bloated as the JujuConnSuite
[05:22] <thumper> ok, merge is accepted, I'm done for the day
[05:22] <thumper> later ...
[05:24] <wallyworld> axw: will your change to make stop-instances ignore containers break the local provider?
[05:24] <axw> wallyworld: no, because they're not containers as far as state is concerned
[05:25] <wallyworld> np, just thought i'd check :-)
[05:25] <axw> they're top-level machines that just happen to be implemented as containers
[05:25] <wallyworld> yup, sounds good. can't be too careful :-)
[06:54] <dimitern> morning all
[07:45] <rogpeppe1> ha, you can't add review comments on github unless they're actually part of the diff
[07:45] <rogpeppe1> that's so bogus
[08:09] <TheMue> hmm, my branch doesn’t get merged, despite the $$merge$$
[08:10] <axw> TheMue: is your membership public?
[08:10] <TheMue> axw: eh, where can I check this?
[08:11] <axw> https://github.com/orgs/juju/members
[08:11] <axw> the answer is no
[08:11] <axw> make it public, and your $$merge$$ comment will count
[08:11] <TheMue> *sniff* I’m not public
[08:11] <TheMue> axw: thx, will change
[08:11] <axw> np
[08:41] <voidspace> morning all
[08:43] <mattyw> voidspace, morning
[08:45] <TheMue> axw: now it tells me that the PR is not mergeable, any idea? it only contains a new document, nothing else.
[08:46] <axw> TheMue: that's a strange transient error from GitHub we've seen a bunch
[08:46] <axw> TheMue: you'll have to $$merge$$ again I'm afraid
[08:47] <mattyw> axw, have you got a moment to talk about the feedback here: https://github.com/juju/juju/pull/64?
[08:47] <mattyw> axw, and by that I mean options for making the output better
[08:47] <axw> mattyw: briefly, I've got a standup to go to in 10m or so
[08:47] <TheMue> axw: ok, thx again
[08:48] <axw> mattyw: I made that comment after seeing what the output looks like from the test case you wrote
[08:48] <axw> it's all sorta glommed together and hard to read
[08:48] <axw> mattyw: by which I mean the tools in the format a;b;c;d;e;f
[08:49] <mattyw> axw, ok, I could just split the context.tools up by the different tools and print each on a new line, it might look weird in the log, but to console it might looks ok
[08:49] <axw> hm, forgot about the log...
[08:50] <axw> mattyw: maybe just leave it for now, if people are actually bothered we can change it
[08:50] <mattyw> a tiny part of me wonders if the whole command could output in yaml or json (using cmd.Output) when it's in dry mode, but that feels like overkill - and doesn't really solve our problem either I think
[08:50] <axw> yeah I certainly wouldn't bother going that far
[08:51] <mattyw> I could also split the tools up and log each line - there's the chance the tools could get split up in the log
[08:51] <mattyw> but it would probably be fine
[08:51] <axw> mattyw: I reckon either leave it as is, or change to ", " separator
[08:52] <axw> I'm not *that* fussed, it just looked a bit funky
[08:53] <wallyworld> fwereade: hiya
[08:53] <fwereade> wallyworld, heyhey
[08:54] <fwereade> wallyworld, still chugging through that giant review, I'm afraid
[08:54] <wallyworld> no problem, sorry
[08:54] <wallyworld> fwereade: i started out to fix some of our intermittent CI bot failures and the branch ended up refactoring a bit of mongo stuff
[08:54] <fwereade> wallyworld, I guess you could have created the new package before using it, but nbd
[08:55] <fwereade> wallyworld, despite my cornucopia of nitpicks it's a fantastic change
[08:55] <wallyworld> fwereade: you mean the txn stuff?
[08:55] <wallyworld> yes, i could have created it but... git
[08:55] <fwereade> wallyworld, yeah, I'm not far enough up the page to see what else you did
[08:55] <wallyworld> too hard to break stuff up
[08:55] <fwereade> wallyworld, indeed :)
[08:56] <wallyworld> fwereade: would you hate me to take a look at another change referenced above?
[08:56] <wallyworld> fwereade: i start out to fix our
[08:56] <wallyworld> bah
[08:56] <wallyworld> https://github.com/juju/juju/pull/72
[08:56] <wallyworld> i have a theory as to why we are getting timeout errors in tests
[08:56] <wallyworld> on the bot for various mongo operations
[08:57] <mattyw> axw, thanks for the feedback, I'll see if I can come up with something better without changing too much
[08:57] <fwereade> wallyworld, I would be delighted to, especially if it's fixing that stuff :)
[08:57] <wallyworld> but it ended up turning into refactoring - getting a bunch of mongo stuff out of state
[08:57] <fwereade> wallyworld, I think I'll try to finish this one first
[08:57] <wallyworld> and moved from agent/mongo into top level mongo
[08:57] <wallyworld> np, just thought you'd want an opinion on it
[08:57] <axw> mattyw: cheers
[08:57] <fwereade> wallyworld, that also sounds pretty awesome
[08:58] <axw> mattyw: btw did you end up looking into the port range bug?
[08:58] <wallyworld> fwereade: i need someone to double check - i removed the deprecated AddUser and replaced with UpsertUser
[08:58] <wallyworld> anyways, only when you get time
[08:58] <fwereade> wallyworld, txn with both insert and update?
[08:58] <wallyworld> ?
[08:59] <fwereade> wallyworld, was asking what you meant by upsert
[08:59] <axw> it's an mgo thing
[08:59] <wallyworld> oh, that's a mongo api call
[08:59] <axw> insert or update I think
[08:59] <wallyworld> yup
[08:59] <fwereade> wallyworld, axw, I know what it is normally
[08:59] <fwereade> wallyworld, axw, not clear how it integrates with mgo/txn
[08:59] <mattyw> axw, I looked into it a little, not done anything yet. I have some "ideas" I'd like to discuss with an adult before I do anything
[09:00] <wallyworld> fwereade: the mgo driver docs say to use Upsert from 2.4 onwards
[09:00] <fwereade> wallyworld, axw, and would kinda prefer to keep mongoisms out of the state interface as much as possible
[09:00] <axw> mattyw: heh :)  okay. I'm interested in it too, would like to hear your thoughts some time
[09:00] <wallyworld> fwereade: it's a direct replacement mgo.AddUser becomes mgo.UpsertUser
[09:00] <fwereade> wallyworld, ah!
[09:00] <wallyworld> state calls it
[09:00] <wallyworld> sorry
[09:00] <wallyworld> bad comms
[09:00] <fwereade> wallyworld, sorry, I see, it's for actual*mongo* users
[09:00] <fwereade> wallyworld, that sounds fine
[09:00] <wallyworld> yeah, sorry
[09:01] <wallyworld> Upsert needs roles defined
[09:01] <wallyworld> not just readonly=true/false like adduser
[09:01] <wallyworld> so i need my choice of roles checked
[09:02] <mattyw> axw, there was a comment on the bug by fwereade that I need to ask about as I don't understand the implication: https://bugs.launchpad.net/juju-core/+bug/1216644/comments/2
[09:02] <_mup_> Bug #1216644: allow open-port to expose several ports <addressability> <improvement> <strategy> <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1216644>
[09:03] <wallyworld> mgz: axw: quick standup?
[09:03] <axw> wallyworld: be there in a sec
[09:03] <axw> mattyw: not entirely sure, but we use a security group for each instance to firewall ports unless explicitly opened ("firewall-mode: global" changes this behaviour)
[09:04] <axw> mattyw: we'll probably want to change things around to use iptables on each machine, so they manage their own firewalls
[09:04] <axw> mattyw: then nuke the security groups
[09:06] <fwereade> mattyw, ah, are you working on that bug?
[09:06] <fwereade> mattyw, let's chat in 5 mins?
[09:06] <mattyw> fwereade, I'm not, but I started looking at it yesterday to see if I could work on it
[09:07] <fwereade> mattyw, it's pretty massive I'm afraid
[09:07] <mattyw> fwereade, I think I have enough to keep me busy for this week, but it might be good to discuss it so we can get a plan for it?
[09:08] <fwereade> mattyw, ok, let's have a hangout and I will braindump issues at you until you cry ;p
[09:10] <mattyw> fwereade, just ping me when you're ready
[09:21] <rogpeppe1> this PR removes the charm package from juju-core. reviews appreciated (it's all mechanical change) https://github.com/juju/juju/pull/74
[09:21] <rogpeppe1> dimitern, fwereade, axw: ^
[09:24] <rogpeppe1> fwereade, jam: just about to create a new repo for the charm store. wondering about names. thinking of "charmstore" rather than "store". what do you think?
[09:24] <fwereade> rogpeppe1, +1
[09:24] <jam> rogpeppe1: sgtm
[09:25] <rogpeppe1> fwereade, jam: ok, will do, thanks
[09:25] <rogpeppe1> i think it's ok even though it will also be used to store bundles
[09:25] <dimitern> rogpeppe1, looking
[09:27] <dimitern> rogpeppe1, reviewed +1 question
[09:28] <rogpeppe1> dimitern: thanks
[09:51] <wallyworld> fwereade: thanks for wading through my txn bits and bobs, i'll start addressing the concerns but won't be done till tomorrow. i got sidetracked today on fixing more CI bot failures (mgo timeouts)
[09:52] <voidspace> rogpeppe1: ping
[09:52] <rogpeppe1> voidspace: pong
[09:53] <voidspace> rogpeppe: do you have a minute or two to chat?
[09:53] <rogpeppe> voidspace: sure.
[09:55] <jam> fwereade: dimitern: so I'm slowly resolving the conflicts and bringing my API versioning stuff over to git. Unfortunately it touches a lot of stuff, and we're moving code all over the place, so it is a bit slow going. However the first step of having the RPC code support versioned requests has been done.
[09:55] <jam> Would you like to review it now? I took the opportunity to do it in a "more compatible" way with the existing infrastructure, since I have to work my way up the stack anyway
[09:56] <jam> It isn't *much* different than before, but it means we do expect to know about the type when we lookup what method to call, but we don't expect a concrete object until we actually make the call.
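The lookup-vs-call split jam describes could be sketched roughly like this (a toy illustration only; all names here are invented, this is not the juju rpc code): the registry resolves (type, version, method) up front, but the concrete facade object is only constructed when the call is actually made.

```go
package main

import "fmt"

// methodCaller is what lookup resolves to: the method is known, but the
// concrete object is built lazily, only at call time.
type methodCaller struct {
	newRoot func() interface{}
	call    func(root interface{}, arg string) (string, error)
}

var registry = map[string]methodCaller{}

func register(typeName string, version int, method string, c methodCaller) {
	registry[fmt.Sprintf("%s.v%d.%s", typeName, version, method)] = c
}

func lookup(typeName string, version int, method string) (methodCaller, bool) {
	c, ok := registry[fmt.Sprintf("%s.v%d.%s", typeName, version, method)]
	return c, ok
}

type client struct{ greeting string }

func init() {
	register("Client", 1, "Hello", methodCaller{
		newRoot: func() interface{} { return &client{greeting: "hello"} },
		call: func(root interface{}, arg string) (string, error) {
			return root.(*client).greeting + " " + arg, nil
		},
	})
}

func main() {
	// Lookup succeeds knowing only the type, version and method name...
	caller, ok := lookup("Client", 1, "Hello")
	fmt.Println(ok)
	// ...the concrete object only exists once the call is made.
	out, _ := caller.call(caller.newRoot(), "world")
	fmt.Println(out)
}
```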
[09:57] <dimitern> jam, I'll definitely have a look - what's the link?
[09:57] <jam> dimitern: I haven't created a pull request yet but https://github.com/jameinel/juju/tree/api-versioning
[09:58] <jam> I wasn't going to do a PR for something in progress that nobody wanted to review. but I can do one now
[10:01] <jam> dimitern: fwereade: https://github.com/juju/juju/pull/75
[10:01] <dimitern> jam, cheers
[10:01] <fwereade> jam, I will try to get to it as I can, I am slowly grinding through the reviews while not otherwise engaged
[10:02] <jam> fwereade: understood
[10:02] <jam> you just looked at the code before, so I thought it might be "faster" for you to look at the new version
[10:48] <jam> vladk: standup ?
[11:21] <rogpeppe1> fwereade: how would you feel about moving cmd/{charmd,charmload,charm-admin}into github.com/juju/charmstore ?
[11:21] <rogpeppe1> fwereade: then there's nothing in juju-core that depends on charmstore
[11:21] <rogpeppe1> jam: ^
[11:22] <jam> rogpeppe1: aren't those the ones that just exist to build the charm tools ?
[11:22] <jam> I'm trying to remember what they actually do
[11:22] <jam> (clearly I'm *very* attached to where they are today :)
[11:23] <rogpeppe1> jam: they're commands that talk to the store
[11:31] <rogpeppe1> mgz: ping
[11:32] <mgz> rogpeppe1: hey
[11:33] <rogpeppe1> mgz: when you fetch dependencies in the 'bot, do you use "go get -t" ?
[11:33] <rogpeppe1> mgz: to fetch testing dependencies too
[11:33] <mgz> nope
[11:34] <rogpeppe1> mgz: hmm, i wonder if godeps -t should fetch only testing dependencies of the initially mentioned packages, and not recursively
[11:34] <mgz> we don't have any test-only deps at present right?
[11:34] <rogpeppe1> mgz: yeah, we do
[11:35] <rogpeppe1> mgz: github.com/bmizerany/pat imports github.com/bmizerany/assert, but only for its tests
[11:35] <mgz> ah, I saw those got added, but not the context
[11:35] <rogpeppe1> mgz: so when i did "godeps -t ./... > dependencies.tsv", that package (and a couple more) showed up
[11:35] <rogpeppe1> mgz: but the 'bot isn't fetching them, so the PR failed
[11:37] <mgz> rogpeppe1: can we even use either of those branches?
[11:38] <rogpeppe1> mgz: how do you mean?
[11:38] <mgz> oh, I guess he's sort of got it licenced
[11:38] <mgz> the first one has it at the bottom of the readme and the other says, but does not include "MIT licence"
[11:39] <mgz> don't seem like the best deps ever though
[11:39] <rogpeppe1> mgz: i'm not that happy about their inclusion, tbh
[11:39] <mgz> what's requiring them?
[11:40] <mgz> assert at least seems totally redundant with what we have anyway
[11:40] <rogpeppe1> mgz: the new dispatch code in state/apiserver/apiserver.go
[11:40] <rogpeppe1> mgz: this is only for the tests in bmizerany/pat
[11:40] <rogpeppe1> mgz: so we don't actually need those deps, as long as we don't run those tests
[11:41] <rogpeppe1> mgz: i think i'm best off just excluding secondary testing deps from godeps output
[11:41] <rogpeppe1> mgz: the godep tool does that, it seems
[11:42] <mgz> why is it the charm splitting pr...
[11:43] <mgz> ah, it's unrelated cleanup? I'm generally confused
[11:44] <rogpeppe1> mgz: it has always been supposed to be standard practice to generate dependencies.tsv by doing "godeps -t ./... > dependencies.tsv"
[11:44] <rogpeppe1> mgz: so i did that, and ended up with these new deps
[11:45] <mgz> yeah, and then you get the blame on unrelated breakage
[11:45] <rogpeppe1> mgz: yeah
[11:45] <rogpeppe1> mgz: it's ok, i'm removing the deps explicitly
[11:45] <mgz> rogpeppe1: can you un-change the dependencies.tsv for now, land your branch, and post to the list about this?
[11:45] <rogpeppe1> mgz: doing that
[11:46] <rogpeppe1> mgz: we'll see if it lands ok now
[11:46] <mgz> rogpeppe1: thanks
[11:54] <rogpeppe1> mgz: i've fixed the godeps tool
[11:55] <mgz> I shall pullet
[11:55] <mgz> just chicken out the branch now
[12:08] <bodie_> https://github.com/GoogleCloudPlatform/kubernetes
[12:09] <bodie_> this is going to make some waves
[12:09] <bodie_> methinks
[12:09] <bodie_> morning!
[12:13] <fwereade> rogpeppe1, oops, sorry I missed you: yes, move the store cmds into store
[12:13] <rogpeppe1> fwereade: cool
[12:15] <rogpeppe1> fwereade: that will require factoring out cmd too. do you think github.com/juju/cmd is a reasonable name?
[12:15] <fwereade> rogpeppe1, yes, that's fine by me
[12:15] <bodie_> oh, fwereade regarding the schema validation -- Jeremy raised the point that proper json-schemas don't look much like the samples we're working with in our tests
[12:15] <fwereade> rogpeppe1, I'm trying to remember who else was talking about doing that
[12:15] <bodie_> https://github.com/juju/docs/pull/117#commitcomment-6601483
[12:15] <fwereade> rogpeppe1, whoever it was, you should talk to them
[12:15] <bodie_> actually, I'll move this content to skunkworks
[12:15] <rogpeppe1> bodie_: hiya
[12:16]  * fwereade congratulates self on providing clear and useful direction :/
[12:16] <rogpeppe1> bodie_: did you find out what was causing that infinite recursion?
[12:16] <bodie_> no, I just got up
[12:16] <rogpeppe1> bodie_: (FWIW, i'm concerned about that - it's a definite possible DOS attack)
[12:16] <rogpeppe1> bodie_: np
[12:16] <bodie_> yeah, it's concerning
[12:17] <rogpeppe1> i'm worried that we're taking on board this whole huge json-schema standard when we really don't need anything nearly so complex
[12:17] <bodie_> right.  our original PR actually didn't use json-schema at all
[12:18] <rick_h_> rogpeppe1: just keep in mind that while it starts out in actions, it's on the roadmap to use it for charm/bundle config
[12:18] <bodie_> mostly because the google doc where Mark was indicating his interest in json-schema hadn't been shared with me
[12:18] <rogpeppe1> rick_h_: i realise that. it still seems like overkill though.
[12:18] <rogpeppe1> rick_h_: we could define something really simple that would probably be sufficient and easy enough to implement in a short amount of time.
[12:19] <rogpeppe1> the json-schema thing seems mostly like "here's a standard that we can point to; then all our problems will go away"
[12:20] <rick_h_> rogpeppe1: no doubt. However there's merit in getting on board with a published standard and help us with some of the lack of definition we hit with juju models.
[12:20] <rick_h_> rogpeppe1: but yea, I'd not heard about it until Mark S linked it and got us investigating on it
[12:20] <bodie_> anyway -- (also fwereade), when I had it parsing out of a file, it wasn't validating these "simplistic" schemas anyway
[12:20] <rogpeppe1> rick_h_: i'd heard about it, but wrote it off when i actually looked at it
[12:21] <rick_h_> rogpeppe1: I guess we're also less resistant as we've got a good set of tools on the JS end currently and json fits our life great anyway
[12:21] <bodie_> wasn't *invalidating* them
[12:21] <bodie_> honestly, rogpeppe1, I believe that if we run with it a bit it won't be that bad, mostly because we've cleared the hurdles around it
[12:21] <rogpeppe1> rick_h_: what kind of thing are you thinking of when you say "the lack of definition we hit with juju models" ?
[12:22] <bodie_> now that it's in State, it should be a simple matter of x := gojsonschema.NewJsonSchemaDocument(Actions()["snapshot"].Params)
[12:22] <bodie_> x.Validate(incomingJsonBlob)
[12:22] <rick_h_> rogpeppe1: 'what can I put in here?' in config, actions will need to be very clear, relations are painful for new people to work with as they're not clear.
[12:22] <bodie_> for errorValue := range x.Errors() or whatnot
[12:22] <rick_h_> rogpeppe1: it's the 'with great flexibility comes a lack of clear immediate understanding' kind of thing
[12:22] <bodie_> and, that's pretty much the whole kit n' kaboodle
[12:31]  * fwereade needs to be away to laura's school for a while, back later
[12:33] <rogpeppe1> rick_h_: it would help if relations config attributes were at least documented *somewhere* :-)
[12:33] <rick_h_> rogpeppe1: yea, just playing a bit of devils advocate for the idea of using jsonschema :)
[12:34] <rogpeppe1> rick_h_: i'm definitely not against using some kind of schema
[12:35] <rogpeppe1> rick_h_: i just think that json-schema is yet another example of a W3C-style bloated and unnecessarily complex standard.
[12:35] <rogpeppe1> rick_h_: which is a shame, because there's a possibility for something really nice in this space
[12:37] <bodie_> on a side note, I think I'm going to discard the meta-validation branch
[12:37] <rogpeppe1> bodie_: sgtm for the time being
[12:37] <bodie_> it's not doing anything for us as far as I can tell (it wasn't discarding simplistic schemas) and it's turning into a more complicated problem than I had hoped, it was really more of a simple experiment
[12:37] <rogpeppe1> bodie_: but i'd really like to get to the bottom of why it went infinitely recursive
[12:38] <bodie_> yeah
[12:39] <rogpeppe1> bodie_: it's something to do with the {"$ref": "#"}, i think
[12:39] <rogpeppe1> bodie_: at least that's one portion of the infinite recursion
[12:40] <rogpeppe1> bodie_: i don't know what that's supposed to mean though
[12:40] <bodie_> well # is just a URL fragment with no path
[12:40] <bodie_> so, it's a json-reference, I think
[12:40] <rogpeppe1> bodie_: so what's it meant to mean as a reference?
[12:42] <bodie_> not sure I could tell you
[12:42] <bodie_> jdorn might know
[12:43] <bodie_> http://json-schema.org/latest/json-schema-core.html
[12:43] <bodie_> er
[12:43] <bodie_> http://json-schema.org/latest/json-schema-core.html#anchor30
[12:43] <bodie_> I suppose it's a reference to the root document
[12:43] <bodie_> so, you're probably right that it's where the recursion is coming in
[12:45] <rogpeppe1> bodie_: yeah, i'd just come to that conclusion
[12:45] <bodie_> https://github.com/binary132/gojsonschema/blob/master/schemaDocument.go#L639
[12:45] <rogpeppe1> bodie_: it's actually a desired recursive definition
[12:46] <bodie_> I guess because it's its own schema
[12:51] <bodie_> which means, a recursive self-reference like that shouldn't break it :/
[12:51] <rogpeppe1> bodie_: i don't think that's the reason
[12:52] <rogpeppe1> bodie_: i think it's because it's actually defining a self-referential type
[12:52] <rogpeppe1> bodie_: similarly to how you'd define a linked list using struct types
[12:52] <rogpeppe1> bodie_: or a grammar
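rogpeppe1's linked-list analogy in Go: the type definition refers back to itself, which is fine for types and for lazily-resolved references, but a resolver that eagerly expands every `{"$ref": "#"}` will recurse forever.

```go
package main

import "fmt"

// node refers back to its own type, like a schema whose items are
// {"$ref": "#"} -- a self-referential definition, not an error in itself.
type node struct {
	value int
	next  *node
}

// length walks the list iteratively; nothing recurses unboundedly here
// because the *data* is finite even though the *type* is self-referential.
func length(n *node) int {
	count := 0
	for ; n != nil; n = n.next {
		count++
	}
	return count
}

func main() {
	list := &node{1, &node{2, &node{3, nil}}}
	fmt.Println(length(list))
}
```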
[12:53] <bodie_> I wonder if the recursion is happening here: "$schema": "http://json-schema.org/draft-04/schema#"
[12:54] <bodie_> normally, that would be a reference to the same URL as the document itself
[12:54] <bodie_> but in our case, it's a reference to a remote document
[12:55] <bodie_> rogpeppe1, I see what you're saying, I think that's right
[12:55] <bodie_> in other words, schemaArray.items are themselves conformant to json-schema as a whole, i.e. are json-schema documents
[12:55] <rogpeppe1> bodie_: the $schema thing isn't the problem
[12:55] <rogpeppe1> bodie_: it still does the same thing without that
[12:55] <rogpeppe1> bodie_: yup
[12:56] <bodie_> I just don't see why it would work fine as a file reference or URL reference, but not when loaded as a string
[12:56] <bodie_> maybe encoding/json is altering something
[12:56] <rogpeppe1> bodie_: i doubt it
[12:56] <bodie_> I don't think that makes sense, but it's the only thing I can think of
[12:57] <bodie_> or there's a different logical chain that gets followed if it's a url vs a map
[12:57] <rogpeppe1> bodie_: it's something to do with that, i'm pretty sure
[12:57] <bodie_> https://github.com/binary132/gojsonschema/blob/master/schemaDocument.go#L38
[12:57] <rogpeppe1> bodie_: there's something called a "pool" which gets looked up in jsonreference
[12:58] <rogpeppe1> bodie_: and i suspect that's not filled correctly
[12:58] <bodie_> case string = URL, case map = the one that has a problem, that we're using
[12:58] <rogpeppe1> bodie_: i bet the pool caches the result of its Get
[13:03] <bodie_> well
[13:04] <bodie_> rogpeppe1, I'm seeing that the getFileJson / getHttpJson methods use interface{} for unmarshaling rather than map[string]interface{}
[13:05] <bodie_> it seems kind of silly, but it's possible there's a wonky type switch happening somewhere
[13:06] <bodie_> but, I guess that JSON would unmarshal as a map[string]interface{} even if it was getting unmarshaled into an interface{}
[13:06] <bodie_> so that seems like a dead end
[13:07] <bodie_> I need to get a shower and eat before my meeting with mark... :/
[13:07] <bodie_> really stressful to feel bogged down in this point of the project
[13:07] <bodie_> but, it is what it is
[13:08] <bodie_> you're right that json-schema caused much more complication than we expected
[13:08] <rogpeppe1> bodie_: i've reduced the test case to this: http://paste.ubuntu.com/7628481/
[13:08] <rogpeppe1> bodie_: it's probably not yet minimal, but it's much smaller
[13:09] <rogpeppe1> bodie_: here's some code i've been using to repro the issue: http://paste.ubuntu.com/7628485/
[13:09] <bodie_> rogpeppe1, do you have that on a branch in your github?  I'd already thrown out my branch and local copy
[13:09] <bodie_> ah, I see
[13:10] <rogpeppe1> bodie_: you can always get your branch back via the reflog
[13:10] <bodie_> interesting, i've never used reflog
[13:11] <rogpeppe1> bodie_: i also put a log print somewhere in the recursion so that i can do go run tst.go 2>&1 | head -10000 | wc
[13:11] <rogpeppe1> bodie_: to avoid the runaway process chewing up all my resources
[13:11] <rogpeppe1> bodie_: reflog is indispensable sometimes
[13:11] <rogpeppe1> bodie_: branches only get GC'd after 30 days
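rogpeppe1's reflog tip, as a runnable sketch (throwaway repo; branch and commit names are made up): commits on a deleted branch stay reachable through the reflog for roughly 30+ days by default, so the branch can be resurrected.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "initial"
git checkout -q -b doomed
git commit -q --allow-empty -m "work on doomed"
git checkout -q -                 # back to the original branch
git branch -q -D doomed           # oops: branch deleted, commit unreferenced

# The lost commit is still in the reflog; find it and point a new branch at it:
sha=$(git reflog | grep "commit: work on doomed" | head -1 | awk '{print $1}')
git branch doomed "$sha"
git log -1 --format=%s doomed     # the "lost" commit is back
```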
[13:12] <bodie_> awesome
[13:16] <bodie_> well, that's a relief, it's not something to do with the additions I made to gojsonreference
[13:16] <bodie_> or rather, reductions / surgical alterations :P
[13:16] <mattyw> dimitern, ping?
[13:21] <natefinch> bodie_, rogpeppe1: it seems like we could save a ton of time by just defining our own super simplistic schema.  Name, Description, Type(Bool, Int, Float, Date, String).  Done.  Could be coded up in like 20 minutes, does everything we need it to do.
[13:22] <rogpeppe1> natefinch: we already had that
[13:22] <rogpeppe1> natefinch: but we need something that can be deeper
[13:22] <rogpeppe1> natefinch: i suggested something like: T: int | float | string | bool | struct (field: T, ...) | map (T) | array (T)
[13:22] <rick_h_> natefinch: right, but for actions we need things like urls, file paths, and such
[13:23] <natefinch> rick_h_: I hear you saying strings, strings, and such
[13:23] <rogpeppe1> rick_h_: but do those things need to be verifiable as such in the schema?
[13:23] <rick_h_> natefinch: and soon extensions like resource locations and the like.
[13:23] <natefinch> and more strings
[13:23] <rogpeppe1> i tend to agree with nate here
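rogpeppe1's suggested grammar (T: int | float | string | bool | struct(field: T, ...) | map(T) | array(T)) could be sketched in Go roughly as below. All names are invented for illustration; numbers are treated as float64 because that is how encoding/json decodes them, so this sketch does not distinguish int from float.

```go
package main

import "fmt"

type Kind int

const (
	Int Kind = iota
	Float
	String
	Bool
	Struct
	Map
	Array
)

// Schema is the recursive type T from the proposal: a struct has named
// field schemas, maps and arrays have an element schema.
type Schema struct {
	Kind   Kind
	Fields map[string]*Schema // for Struct
	Elem   *Schema            // for Map and Array
}

// Check reports whether a decoded-JSON value matches the schema.
func Check(s *Schema, v interface{}) bool {
	switch s.Kind {
	case Int, Float:
		_, ok := v.(float64) // encoding/json decodes all numbers as float64
		return ok
	case String:
		_, ok := v.(string)
		return ok
	case Bool:
		_, ok := v.(bool)
		return ok
	case Struct:
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		for name, fs := range s.Fields {
			if !Check(fs, m[name]) {
				return false
			}
		}
		return true
	case Map:
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		for _, ev := range m {
			if !Check(s.Elem, ev) {
				return false
			}
		}
		return true
	case Array:
		a, ok := v.([]interface{})
		if !ok {
			return false
		}
		for _, ev := range a {
			if !Check(s.Elem, ev) {
				return false
			}
		}
		return true
	}
	return false
}

func main() {
	// struct{name: string, ports: array(int)}
	s := &Schema{Kind: Struct, Fields: map[string]*Schema{
		"name":  {Kind: String},
		"ports": {Kind: Array, Elem: &Schema{Kind: Int}},
	}}
	doc := map[string]interface{}{"name": "wordpress", "ports": []interface{}{float64(80)}}
	fmt.Println(Check(s, doc))
}
```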
[13:23] <bodie_> so, we're having a face-to-face with mark in about half an hour here, I'm uncomfortable coming to him and presenting "we're arguing about whether we need json-schema on the backend"
[13:23] <natefinch> bodie_: make rogpeppe1 do it ;)
[13:24] <bodie_> sigh, he's got a very valid point that if we're permitting users to define charms that stack explosions, that's bad
[13:24] <bodie_> *cause*
[13:25] <rogpeppe1> bodie_: it's a bunch of code that we didn't vet, and we don't understand
[13:25] <bodie_> and then we start getting into constraining json-schema, rooting around for certain keys and certain values on those keys, we don't necessarily have a perfect picture of what's broken, anyway
[13:25] <rogpeppe1> bodie_: and we don't even understand the json-schema standard ourselves
[13:25] <bodie_> right
[13:26] <natefinch> I think it's a really bad idea to do a standard we don't understand with code we don't understand and didn't write and don't trust.  Especially when we already have something workable that we do understand and we do trust.
[13:27] <natefinch> Just be totally honest with Mark about the reservations people have with jsonschema and the code and the questionable benefits we get from using a standard that likely no one else uses either.
[13:27] <rick_h_> It feels a bit like "Why use SQL when we've got data access we know and trust because we don't know SQL and the library that you use to interact with it"
[13:27] <bodie_> um, because SQL is bad.  everyone knows that
[13:27] <bodie_> ;)
[13:27]  * rick_h_ goes back to his sql-loving corner
[13:27] <natefinch> rick_h_: there's a huge difference between SQL and jsonschema
[13:28] <rogpeppe1> rick_h_: it's a question of value added vs cost incurred
[13:28] <natefinch> that too
[13:28] <bodie_> I think the major difference here is that one has a good implementation in Go and the other doesn't
[13:28] <bodie_> :/
[13:28] <natefinch> bodie_: tons of people use SQL.... who uses jsonschema?
[13:28] <rick_h_> and that feels like a poor reason to ditch something when we're talking about writing our own code anyway
[13:28] <bodie_> don't know, I'm not a frontend programmer
[13:29] <rick_h_> https://github.com/jdorn/json-editor/pulse/monthly
[13:29] <rick_h_> just since it's asked.
[13:30] <bodie_> I think it's probably salvageable with a little elbow grease, but it's not a guaranteed thing that this library will be in suitable shape without an indeterminate amount of monkeying about with it
[13:30] <bodie_> this coming from someone who is far too heavy handed with timeline estimates
[13:32] <rogpeppe1> bodie_: i suspect it would be less time to implement it from scratch.
[13:32] <bodie_> and then it could also be done right
[13:33] <bodie_> but, then we're diverting that energy away from <whatever the person is working on>
[13:33] <rogpeppe1> bodie_: especially since we can then prioritise the pieces we need most
[13:33]  * rogpeppe1 can barely resist going to do it himself.
[13:34] <bodie_> I'll be glad to play along if I suddenly find myself with a lot more free time on my hands =P
[13:34] <rogpeppe1> :-)
[13:35] <bodie_> but, really hoping it doesn't come to that.  I'm really enjoying working with you guys, and I'd love to stay involved :) but I think I need to get some more product rolled out as proof that it would be a rational direction to go -- and this stuff has been a major eddy current
[13:38] <bodie_> well, que sera, sera
[13:42] <voidspace> yay, internet back
[13:42] <voidspace> for now anyway
[13:42] <natefinch> voidspace: yay!
[13:42] <voidspace> natefinch: did you get my emails?
[13:42] <voidspace> natefinch: I finally just sent the [full] one about HA and backup
[13:42] <natefinch> voidspace: yes, sorry, busy morning or I would have responded.  Had a response half typed out :/
[13:43] <voidspace> np
[13:44] <natefinch> so, that's a good point that we can't be assured of getting the same server, especially on Azure
[13:44] <voidspace> yep
[13:45] <voidspace> (my connection is 2mbit downstream and 154kbps upstream by the way - so I'm not sure if that's good enough for standup / hangout. I'll try though)
[13:45] <wwitzel3> natefinch: standup is an hour later today right?
[13:45] <ericsnow> voidspace: avoiding me, huh? <wink>
[13:45] <voidspace> wwitzel3: morning
[13:45] <natefinch> wwitzel3: coreycb
[13:45] <voidspace> ericsnow: hey, hi
[13:45] <natefinch> wwitzel3: correct
[13:46] <voidspace> ericsnow: sorry, I didn't see your messages until I had to leave
[13:46]  * natefinch tries to auto-complete words that aren't usernames and falls on his face
[13:46] <voidspace> ericsnow: I thought I'd been very successfully avoiding you though :-)
[13:46] <ericsnow> voidspace: no worries :)
[13:46] <voidspace> ericsnow: how is day 3 treating you?
[13:46] <ericsnow> voidspace: good.  just starting though
[13:46] <voidspace> ericsnow: welcome to the mad house
[13:46] <wwitzel3> voidspace: morning :) I've been lurking addressing issues that fwereade pointed out on my pull request
[13:46] <voidspace> ericsnow: is this your normal start time?
[13:47] <voidspace> wwitzel3: cool
[13:47] <voidspace> my internet is ropier-than-usual today
[13:47] <ericsnow> voidspace: we'll see (trying to start no later than 7 so I can sync better with the squad)
[13:47] <voidspace> first time it's gone down (during the day) for a while
[13:47] <voidspace> ericsnow: cool
[13:49] <voidspace> natefinch: what's the use case for storing backups on the state server - is it just so the juju command can complete immediately?
[13:49] <voidspace> natefinch: because that doesn't seem like a *real* use case...
[13:50] <ericsnow> BTW, my new laptop is mostly sorted out, trying to remember all the customizations/apps I had the last time I set up Ubuntu on my desktop
[13:51] <ericsnow> so what's the story with UDS?
[13:51] <dimitern> mattyw, hey, i'm back
[13:54] <ericsnow> natefinch: I got a PR up before EOD for the CONTRIBUTING refresh  and a bit of good feedback from davecheney
[13:55] <ericsnow> natefinch: (I also filed a bug on the tracker for the task)
[13:56] <mattyw> dimitern, hey there, your review - n as well as --dry-run. Is there any precedent for that?
[13:56] <mattyw> dimitern, only I've never seen -n used in that way before
[13:56] <ericsnow> natefinch: how do I update the pull request with changes in response to the feedback?
[13:57] <natefinch> voidspace: I don't know that there's a real need to store them on the server.  Possibly so you can easily just fire off a backup from anywhere without needing to worry about where it'll get stored (like from the GUI).
[13:57] <voidspace> natefinch: the GUI can download it too, just like the CLI
[13:58] <voidspace> natefinch: is storing them on the server (an asynchronous api) a requirement or our own idea?
[13:58] <voidspace> natefinch: if it's a requirement we'll have to go with the more complex implementation
[13:58] <natefinch> ericsnow: I think just committing to the same branch will update the PR automatically
[13:59] <voidspace> natefinch: whenever a state server is asked to list backups it will have to ask the other state servers what backups they have
[13:59] <natefinch> voidspace: it was an idea that was stated at the sprint.  The "requirements" are: do backup the right way
[13:59] <perrito666> how about we discuss this in the meeting we should all be at :p
[13:59] <dimitern> mattyw, no precedent
[13:59] <voidspace> natefinch: I think "the right way" is to use cloud storage
[13:59] <natefinch> perrito666: it's in an hour, I moved the wednesday one
[13:59] <perrito666> lol
[14:00] <voidspace> natefinch: and without that, immediate download (not storing on state server) is "good enough"
[14:00] <natefinch> perrito666: since I have a TOSCA meeting now
[14:00] <voidspace> my 2c worth
[14:00] <perrito666> that explains why voidspace and I are the only ones there
[14:00] <dimitern> mattyw, git uses --dry-run or -n, and as long as it's documented as equivalent it should be fine
[14:00] <voidspace> hah
[14:00] <natefinch> voidspace: seems good to me. Let's do it
[14:00] <rogpeppe1> fwereade: it seems a bit wrong to me that the --version flag is defined inside SuperCommand. do you concur?
[14:00] <perrito666> voidspace: your 2 instances of you just froze
[14:00] <voidspace> natefinch: that simplifies the implementation a great deal
[14:00] <perrito666> I dropped from the call until an h from now
[14:00] <voidspace> perrito666: haha
[14:01] <voidspace> ok
[14:01] <voidspace> perrito666: see you in an hour
[14:01] <voidspace> natefinch: I think it means your api is unneeded
[14:01]  * perrito666 wishes his calendar would move meetings instead of just adding more
[14:01] <voidspace> natefinch: we have a single endpoint, handled in state/apiserver/apiserver.go that the client does a POST to
[14:01]  * rogpeppe1 gets lunch
[14:01] <ericsnow> natefinch: so how do I avoid polluting history with multiple commits when I ultimately just want one?
[14:02] <voidspace> ericsnow: rebase I guess
[14:02] <ericsnow> voidspace: a git thing, right?
[14:02]  * ericsnow pines for hg
[14:02] <voidspace> ericsnow: yeah - rebase is the evil history rewriting that git supports
[14:03] <voidspace> ericsnow: http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html
[14:03] <voidspace> ericsnow: the rest of us are pining for bzr which is even better than Hg
[14:03] <voidspace> ericsnow: with Hg you get multiple commits on mainline too
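The squash-before-PR workflow being discussed, as a runnable sketch (throwaway repo; branch and file names are placeholders). It uses a soft reset to the merge base, which has the same effect as an interactive `git rebase -i` squash, with wwitzel3's caveat below: only rewrite history on a branch that has never been merged upstream.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "base"
base=$(git symbolic-ref --short HEAD)   # remember the mainline branch name
git checkout -q -b feature
echo one > f
git add f
git commit -q -m "wip 1"
echo two > f
git commit -q -am "wip 2"

# Collapse everything since branching into a single commit. History is
# rewritten, so only do this before the branch has been merged anywhere.
git reset -q --soft "$(git merge-base "$base" HEAD)"
git commit -q -m "one tidy commit"
git log --oneline
```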
[14:03] <perrito666> natefinch: just ftr, you did not move the calendar appt
[14:03] <ericsnow> voidspace: I guess that's a consequence of the pull request approach
[14:04] <voidspace> ericsnow: right
[14:04] <natefinch> perrito666: hrmph... I did something...
[14:04] <natefinch> sorry guys
[14:04] <wwitzel3> voidspace: well, if you are only rebasing the upstream in to your feature branch AND your feature branch has never been merged to the upstream stream .. it really is just a short cut of stashing your changes, creating a new branch, and then reapplying them.
[14:05] <voidspace> wwitzel3: right
[14:06] <wwitzel3> but that said, if you don't do that, it can really mess things up
[14:14]  * perrito666 ponders a quick implementation of gunzip for a test
[14:14] <natefinch> perrito666: http://golang.org/pkg/compress/gzip/
[14:15] <ericsnow> perrito666: system()?
[14:15] <perrito666> natefinch: yup, that is what I meant, I am creating the backup inside a tgz but I need to gunzip it to make sure it has the proper contents
[14:16] <perrito666> ericsnow: I am so not doing that in a test :p
[14:16] <ericsnow> perrito666: oh, like a real test ;)
[14:16] <perrito666> ericsnow: yup, this code was faster to write than to test :p
[14:17] <natefinch> perrito666: seems like no reason you can't open up a zipped file to check it's contents.
[14:18] <natefinch> s/it's/its/
[14:18] <perrito666> natefinch: me no understands
[14:18] <perrito666> too many negations in that sentence
[14:19] <natefinch> perrito666: sorry.... mostly "sure, of course you can write that test"
[14:19] <natefinch> perrito666: f, err := os.Open(filename);  gz := gzip.NewReader(f);  // read from gz
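natefinch's snippet, fleshed out into a self-contained round-trip (a sketch for the test perrito666 describes; the temp-file name is a throwaway): write a payload through gzip.Writer, then open the file with os.Open + gzip.NewReader and read the contents back to verify them.

```go
package main

import (
	"compress/gzip"
	"fmt"
	"io/ioutil"
	"os"
)

// gzipRoundtrip writes payload to a gzipped temp file, then reads it back
// the way natefinch suggests: os.Open, gzip.NewReader, read from gz.
func gzipRoundtrip(payload []byte) ([]byte, error) {
	f, err := ioutil.TempFile("", "backup")
	if err != nil {
		return nil, err
	}
	defer os.Remove(f.Name())

	zw := gzip.NewWriter(f)
	if _, err := zw.Write(payload); err != nil {
		return nil, err
	}
	zw.Close() // flush the gzip stream before reading it back
	f.Close()

	rf, err := os.Open(f.Name())
	if err != nil {
		return nil, err
	}
	defer rf.Close()
	gz, err := gzip.NewReader(rf)
	if err != nil {
		return nil, err
	}
	defer gz.Close()
	return ioutil.ReadAll(gz)
}

func main() {
	out, err := gzipRoundtrip([]byte("backup payload"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```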
[14:22] <mattyw> dimitern, I worry about having -n to mean dry run, as -n is used elsewhere to mean number of
[14:23] <natefinch> mattyw, dimitern: totally agree ^^
[14:26] <dimitern> mattyw, it is, but not in the same command - how about "-z" then?
[14:28] <mattyw> dimitern, I don't have any complaints about that, I'm just searching for what other commands use to see if there is some common usage
[14:29] <natefinch> dimitern: what's wrong with just --dry-run?  Doesn't seem like it's something that needs a short flag
[14:29] <mattyw> natefinch, dimitern apt-get uses -s (to mean simulate)
[14:29] <mattyw> (and also has --dry-run for the same thing)
[14:29] <dimitern> natefinch, i'm totally fine with --dry-run, but users might complain having to type the whole thing - hence the shortcut
[14:31] <ericsnow> dimitern: isn't the problem that short options are a limited commodity (so should be assigned rather conservatively)?
[14:35] <dimitern> ericsnow, that's a fair point yes
[14:37] <mattyw> dimitern, if you're ok I'll change it to --dry-run but otherwise leave it, if anyone complains we can add a short option
[14:41] <dimitern> mattyw, sgtm
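The shape mattyw settled on (a long --dry-run flag, no short alias) looks roughly like this with the stdlib flag package; juju's own commands use a gnuflag-based Command interface, so this is only an illustration, and the command behaviour here is invented.

```go
package main

import (
	"flag"
	"fmt"
)

// A single long boolean flag; stdlib flag accepts both -dry-run and
// --dry-run, so no short alias is needed.
var dryRun = flag.Bool("dry-run", false, "report what would be done without doing it")

// run is a stand-in for the real command body.
func run(args []string) string {
	if *dryRun {
		return fmt.Sprintf("would process %d args", len(args))
	}
	return fmt.Sprintf("processed %d args", len(args))
}

func main() {
	flag.Parse()
	fmt.Println(run(flag.Args()))
}
```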
[14:48] <natefinch> perrito666, wwitzel3, ericsnow, voidspace:  my wife wants me to bring the kids to a school function... like now.  So I'm going to miss the meeting.  I'll be back in an hour and a half -ish.  Sorry for the late notice
[14:48] <wwitzel3> natefinch: np
[14:48] <perrito666> natefinch: np, have fun on the school thinguie
[14:53] <mattyw> dimitern, that change has been pushed
[14:57] <dimitern> mattyw, ta
[14:59] <alexisb> natefinch, cmars: do either of you have someone on your team that would be available to help with a field bug today?
[15:01] <wwitzel3> voidspace, ericsnow: standup
[15:02] <ericsnow> wwitzel3: coming
[15:03] <alexisb> wwitzel3, can you ask about my ping above
[15:04] <wwitzel3> alexisb: yep, soon as he is back
[15:04] <alexisb> thanks
[15:04] <wwitzel3> alexisb: is the bug in lp?
[15:05] <alexisb> yes, see #juju @ canonical
[15:05] <alexisb>  bug https://bugs.launchpad.net/juju-core/+bug/1089291
[15:05] <_mup_> Bug #1089291: destroy-machine --force <canonical-webops> <destroy-machine> <iso-testing> <theme-oil> <juju-core:Fix Released by fwereade> <juju-core 1.16:Fix Released by fwereade> <juju-core (Ubuntu):Fix Released> <juju-core (Ubuntu Saucy):New> <https://launchpad.net/bugs/1089291>
[15:05] <alexisb> potentially related to this bug ^^
[15:07] <wwitzel3> rgr, ok
[15:14] <perrito666> voidspace: are you around?
[15:15] <sinzui> Hi devs, We cannot get any version of juju to work with HP Cloud. This may relate to the region changes. https://bugs.launchpad.net/juju-core/+bug/1328905
[15:15] <_mup_> Bug #1328905: hpcloud: index file has no data for cloud <ci> <openstack-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1328905>
[15:17] <fwereade> rogpeppe1, yes, --version should be somewhere else
[15:17] <dimitern> fwereade, jam1, natefinch, others? a trivial PR introducing pending networks in state https://github.com/juju/juju/pull/78
[15:18] <mattyw> alexisb, did you get any response about your field issue?
[15:19] <alexisb> not yet
[15:19] <rogpeppe1> fwereade: ok, thanks. i was wondering though about adding a GetVersion func() string field to SuperCommandParams, so it was easy to plug in
[15:20] <rogpeppe1> although perhaps a new version of SuperCommand that embeds the other one but adds version info might be a better approach
[15:23] <ericsnow> the juju core roadmap presentation at UDS starts in about 35 minutes, no?
[15:23] <rogpeppe1> anyone know what the rationale is for having plugins/local separate from plugins/local/juju-local ?
[15:24] <ericsnow> wwitzel3: is there a bug open for that failed-tests-leave-mongod-running thing?
[15:26] <wwitzel3> ericsnow: no, it only happens if the failure is the result of a panic
[15:26] <wwitzel3> and there is really no way to cleanup anything in that case
[15:26] <voidspace> perrito666: yes
[15:26] <ericsnow> wwitzel3: got it
[15:26] <voidspace> wwitzel3: ah, I thought we were waiting for natefinch, sorry
[15:26] <mattyw> alexisb, I was about to start working on another feature but if I can be any help let me know
[15:26] <voidspace> wasn't watching irc
[15:27] <wwitzel3> voidspace: we just miss you is all
[15:27] <ericsnow> voidspace, wwitzel3: :)
[15:27] <voidspace> heh
[15:27] <voidspace> are you there now?
[15:27] <voidspace> I just joined the room and it was empty
[15:28] <rogpeppe1> ha, the --version flag looks like it's totally undocumented
[15:28] <wwitzel3> rogpeppe1: self documenting?
[15:28] <wwitzel3> ;)
[15:28] <rogpeppe1> wwitzel3: only if you know it's there...
[15:32] <sinzui> natefinch, fwereade, jam, wallyworld, thumper, alexisb: bug 1328905 is super critical. Users are reporting that they cannot use any juju with HP
[15:32] <_mup_> Bug #1328905: hpcloud: index file has no data for cloud <ci> <openstack-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1328905>
[15:32] <alexisb> yeah more critical bugs!
[15:32] <sinzui> ^ This may relate to region changes and we need to know how to specify the right regions (and AZs?) or HP has an issue that they aren't reporting
[15:35] <fwereade> whoo, critical bugs, indeed :/
[15:48] <alexisb> so jam, fwereade, natefinch, cmars, wallyworld, thumper, we will need to get some focus on critical bugs that have come up the last couple of days, I will send mail with details I am aware of but I leave it to you all to assign/delegate as appropriate
[15:49] <fwereade> alexisb, cheers, I will be popping on late tonight to chat to thumper anyway
[15:54] <jcastro> alexisb, I am firing up the hangout now
[15:55] <alexisb> jcastro, ack be there in just a moment
[15:55] <jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYdbxLFl9wnFqKOiHH4O9UeqMAFRS_cJdePa1FtT5DQH4hv1ag?authuser=0&hl=en
[15:55] <jcastro> rick_h_, ^^^
[15:56] <perrito666> if anyone has a moment https://github.com/juju/juju/pull/79 review will be appreciated :)
[15:57] <rick_h_> jcastro: thought it was 2pm?
[16:00] <mfoord> grrrr
[16:00] <mfoord> rubbish internet
[16:07] <mfoord> natefinch: here's a prototype of the backup download handler
[16:07] <mfoord> natefinch: https://github.com/voidspace/juju/compare/download-backup
[16:08] <rogpeppe1> can anyone tell me what these lines are supposed to be doing? cmd/supercommand.go:264,268
[16:08] <rogpeppe1> (starting "if c.subcmd.IsSuperCommand")
[16:09] <rogpeppe1> it looks to me as if the first body of the if is a complete no-op
[16:14]  * rogpeppe1 starts to see, and wishes that he hadn't
[16:15] <mfoord> perrito666: I don't know if you saw my last message, my internet is a bit up and down
[16:16] <mfoord> perrito666: where does your backup stuff live - I can't find it in trunk but I may just be being dumb
[16:19]  * rogpeppe1 feels embarrassed that he signed it off in code review
[16:34] <jam> sinzui: are there other critical bugs, or just that one?
[16:34] <sinzui> jam, just that one.
[16:35] <sinzui> jam, I fear that while HP said they were removing old regions on June 1, they were really removed on June 11
[16:36] <sinzui> jam, and devs, This is what I am reading for evidence that everyone's config is wrong: https://docs.hpcloud.com/api/v13/compute/
[16:37] <sinzui> I don't know what value really belongs here
[16:37] <sinzui> region: az-3.region-a.geo-1
[16:37] <jam> sinzui: so that is what used to work, we strip off the az-3 when we need to, but we need it for some things
[16:38] <sinzui> jam, I get further with just region-a.geo-1, and I see in horizon that az3 was given (at random?), but bootstrap fails...maybe juju couldn't match images to the region?
[16:41] <jam> sinzui: so, IIRC, it used to be that either swift or compute required the extra field, and we would strip it off when looking it up in the other one
[16:42] <jam> mgz: are you still around ?
[16:42] <mgz> jam: yup
[16:42] <jam> mgz: can you investigate the HP cloud stuff?
[16:42] <jam> it sounds like keystone might be returning different data now
[16:43] <mgz> I also saw that the other day, but didn't dig
[16:43] <mgz> the switch to 13.5 has happened now though
[16:43] <mgz> so, we need to switch anything still on the old config
[16:47] <perrito666> voidspace: sorry I was just having lunch, I just pull requested the backup stuff
[16:50] <mfoord> perrito666: cool, what's the link?
[16:51] <perrito666> mfoord:  https://github.com/juju/juju/pull/79
[17:07] <voidspace> natefinch: prototype backup download endpoint - with skeleton tests
[17:07] <voidspace> natefinch: https://github.com/voidspace/juju/compare/download-backup
[17:08] <voidspace> my internet is horrible and I've reached a good place to stop
[17:08] <voidspace> so I'm calling it a day
[17:08] <voidspace> EOD folks
[17:08] <voidspace> g'night
[17:08] <natefinch> voidspace: cool, just got back
[17:08] <natefinch> voidspace: g'night
[17:08] <ericsnow> voidspace: night
[17:08] <voidspace> natefinch: I would appreciate it if you took a look at the download prototype before I hook it into perrito666's backup work
[17:09] <voidspace> natefinch: I don't want to go too far down this road if you think there are horrible issues with the basic approach
[17:09] <voidspace> natefinch: thankfully it's very simple
[17:09] <voidspace> ericsnow: o/ hopefully chat more tomorrow
[17:09] <ericsnow> voidspace: :)
[17:09] <wwitzel3> see ya voidspace
[17:10] <voidspace> wwitzel3: g'night
[17:14] <perrito666> natefinch: we need to figure out if we are going with your approach or michael's one so we can integrate backup into either of those :p
[17:16] <natefinch> perrito666: michael's approach seems a lot simpler and avoids a lot of the problems of connecting to the wrong HA server
[17:17] <perrito666> natefinch: so, we drop much of your work? if not all?
[17:18] <natefinch> perrito666: whole damn thing.   It turns into a single synchronous call to the HTTP API and we get back the data from the backup immediately.  Maybe fwereade has an opinion on that
[17:22] <perrito666> how much is 2 pts in our kanban?
[17:22] <perrito666> I really need to make a post it for that
[17:24] <natefinch> perrito666: a point is half a day
[17:24] <natefinch> half an ideal day, but we only assume like 60-70% efficiency per calendar day
[17:43] <perrito666> fwereade: if you have a sec, ptal to the note I added to https://github.com/juju/juju/pull/30 I am not sure I properly expressed what you wanted to say there.
[17:43] <perrito666> bbl Ill go biking while there is sun outside
[17:43] <wwitzel3> perrito666: ha, read that as bikining
[18:05] <jam> wwitzel3: well, it is sunny, right?
[18:06] <wwitzel3> :)
[18:26] <natefinch> ericsnow: how goes?
[18:26] <ericsnow> good
[18:27] <ericsnow> natefinch: I have something broken in my bash config that is resulting in test failures that I'm trying to track down
[18:27] <ericsnow> natefinch: I also have an update for yesterday's patch that I need to push up
[18:28] <natefinch> ericsnow: interesting..... I think we've seen similar problems from people trying to run tests in non-bash shells.... which is not really a good reflection on our tests
[18:28] <ericsnow> natefinch: yeah, I'll let you know what I find--maybe there is something we could change in the tests to help
[18:29] <natefinch> ericsnow: you have your laptop up and running?
[18:29] <ericsnow> natefinch: pretty much
[18:30] <natefinch> cool
[18:31] <ericsnow> natefinch: installed 14.04 without ever booting into Windows and everything has just worked :)
[18:31] <natefinch> ericsnow: nice
[18:32] <alexisb> ericsnow, what type of laptop did you end up getting?
[18:32] <ericsnow> alexisb: dell XPS 15 (haswell)
[18:35] <alexisb> ericsnow, nice
[18:40] <wwitzel3> natefinch: in HA, if a state machine goes down, it will eventually get replaced without user intervention, right?
[18:41] <natefinch> wwitzel3: nope, you have to manually run ensure-availability
[18:41] <wwitzel3> well, that was unproductive waiting
[18:41] <natefinch> sorry
[18:41] <wwitzel3> yes, I blame you!
[18:44] <natefinch> heh
[19:17] <ericsnow> natefinch: I've pushed the updated patch and see that the pull request automatically updated...
[19:17] <ericsnow> natefinch: but I'm pretty sure I don't want those merge commits showing up there
[19:18] <ericsnow> natefinch: how do I keep that from happening?
[19:31] <natefinch> ericsnow: you can rebase, but it can end up removing comments on old revisions
[19:34] <natefinch> ericsnow: I don't think the old revisions are a big deal.  We had a big discussion about this and I believe we decided to leave all revisions after the initial pull request.
[19:46] <natefinch> ericsnow: I sorta wonder how much "intro to Github" we need in our contributing doc
[19:47] <ericsnow> natefinch: I'd say just a link to some external tutorial
[19:47] <fwereade> perrito666, natefinch: hmm, in-env storage from wallyworld should be HA soon... is it maybe reasonable to save to disk and distribute to other state servers? feel like there might be too much to go wrong
[19:47] <natefinch> ericsnow: I agree
[19:48] <ericsnow> natefinch: I'll add something in
[19:48] <natefinch> fwereade: too much to go wrong with our synchronous approach, or the distributing approach? :)
[19:48] <fwereade> natefinch, the distributing one :-/
[19:48] <natefinch> heh
[19:48] <fwereade> natefinch, which is a shame because I'm not 100% sold on the sync one either
[19:48] <natefinch> fwereade: me either
[19:49] <bodie_> woot, got a properly breaking JSON-Schema validation
[19:49] <natefinch> fwereade: depends too much on your network not failing in the middle... or impatient users hitting control-C
[19:49]  * fwereade cheers at bodie_
[19:49] <natefinch> bodie_: was it improperly breaking before?
[19:49] <fwereade> natefinch, yeah
[19:50] <natefinch> ahh, so validation failures were broken? :)
[19:51] <fwereade> natefinch, I haven't properly read back so point and laugh if this is stupid -- can we do both? ie a sync request kicks off a backup, which gets teed both to disk and to env storage, and recorded as an actual backup in state? so the interface is essentially the sync one but we get half a chance of recording it properly as well?
[19:51] <natefinch> fwereade: we hadn't planned on actually putting it into mongo, if that's what you're talking about
[19:52] <natefinch> fwereade: but yes, both is possible.... the problem was just getting *to* the ones stored on the state machine.
[19:52] <natefinch> (when in HA)
[19:53] <fwereade> natefinch, yeah, I was thinking that once we had env storage in gridfs it would be silly not to use it
[19:53] <bodie_> natefinch, yeah, I was having a pretty hard time getting Validate to INvalidate anything :P
[19:53] <fwereade> natefinch, but it would also be silly to depend on it
[19:53] <natefinch> heh
[19:53] <fwereade> natefinch, so it's more a distribution mechanism than a storage if iyswim
[19:53] <bodie_> natefinch, actually hadn't implemented Validate yet, since we were thinking it would be deeper into the State machinery
[19:53] <bodie_> anyway, it seems to be working now
[19:54] <bodie_> and, I'm discovering some interesting stuff about JSON-Schema
[19:55] <bodie_> for one thing..... encoding/json.Unmarshal doesn't explode if you try to unmarshal "something"
[19:56] <bodie_> it doesn't like `{"something"}`
[19:56] <bodie_> but `"something"` seems to be fine
[19:56] <fwereade> bodie_, that seems reasonable
[19:56] <fwereade> bodie_, {"something"} is a meaningless perversion of an object
[19:56] <bodie_> I didn't realize bare literals like that were JSON values
[19:56] <fwereade> bodie_, "something" is just a serialized string
[19:57] <bodie_> so, fwereade, I think we CAN define "simplistic" schemas
[19:58] <fwereade> bodie_, fantastic
[19:58] <fwereade> bodie_, I thought so too but was starting to get nervous :)
[19:59] <bodie_> fwereade, so... I think if our params looks like -- the example I gave
[19:59] <bodie_> https://github.com/juju/docs/pull/117#commitcomment-6601692
[20:00] <bodie_> well, we'd want it to be a little different
[20:00] <bodie_> let me put something interesting together
[20:00] <bodie_> anyway, then the action could be called something more like --
[20:00] <bodie_> snapshot "outfilename.bz2" --compression 5
[20:01] <bodie_> that would probably be dumb, and maybe you'd only be able to define a single parameter like that
[20:01] <bodie_> but it might be possible
[20:01] <fwereade> bodie_, yeah, I'm a little bit -1 on single-parameter schemas
[20:02] <jcw4> fwereade: just a little bit?
[20:02] <fwereade> bodie_, your point about params being N keyed schemas seemed like a solid one
[20:03] <fwereade> jcw4, yeah, more than a little bit
[20:04] <fwereade> jcw4, I wish this did not apply to me quite so much, but it does: http://www.thepoke.co.uk/2011/05/17/anglo-eu-translation-guide/
[20:04] <bodie_> heheh, this is gonna be good
[20:04] <jcw4> yep... I grew up in Zimbabwe (aka Rhodesia)
[20:05] <jcw4> we consider ourselves more british than the British
[20:05] <bodie_> "quite good"
[20:05] <bodie_> "incidentally"
[20:07] <jcw4> I knew I interpreted your code review comments correctly...
[20:07] <jcw4> fwereade: I only have a few minor comments on your PR
[20:07] <jcw4> supposed to be in quotes ^^
[20:07]  * fwereade looks decidedly shamefaced
[20:08] <fwereade> jcw4, to be fair, even when I am asking someone to completely change things, I have in mind the worst possible case to compare it to
[20:08] <bodie_> jcw4, lol
[20:08] <fwereade> jcw4, so I can always say it's minor ;)
[20:08] <jcw4> :)
[20:08] <jcw4> fwereade: I just think it's such a civilized way of communicating
[20:09] <jcw4> it's a pity non Anglos misinterpret it so much
[20:09] <jcw4> :)
[20:09] <fwereade> jcw4, yeah, it seems entirely natural to me :)
[20:39] <ericsnow> natefinch: okay, it's starting to make sense now
[20:40] <natefinch> ericsnow: good
[20:52]  * fwereade disappears again for a bit, probably back soon, slightly less guaranteed than before but still likely
[20:57] <thumper> fwereade: around?
[20:58] <jcw4> thumper: 5 minutes ago he said he would probably be back soon
[20:59] <thumper> jcw4: ok, ta
[21:16] <waigani> morning all
[21:23]  * thumper is super happy his branch landed last night
[21:26] <menn0> waigani, thumper: (belated) hi!
[21:26] <thumper> o/
[21:26]  * menn0 is a little zombie-ish this morning. 
[21:26] <menn0> Youngest up all night and I'm a bit sick
[21:27] <thumper> menn0: oh no...
[21:27] <menn0> more coffee and I should be ok... just clearing out the inbox
[21:28] <wwitzel3> ugh that reminds me I've neglected email all day
[21:28]  * menn0 is so glad he discovered the Google Labs Gmail Auto Advance feature.
[21:28] <thumper> menn0: what does that do?
[21:29] <menn0> It means you can hit # for Delete or y for Archive and it takes you to the next or prev conversation (configurable) instead of back to the inbox.
[21:29] <menn0> I can churn through emails so much faster now. Almost as efficiently as Mutt.
[21:30] <wwitzel3> menn0: nice, I will have to enable that
[21:30] <menn0> It's so good. I'm actually considering going back to Gmail for my personal mail.
[21:32] <bodie_> menn0, you a zero inboxer? ;)
[21:32] <ericsnow> what's the rationale on unsetting $HOME during tests?
[21:32] <thumper> ericsnow: for isolation
[21:32] <menn0> bodie_: I try to be and actually get pretty close most days with my Canonical inbox.
[21:32] <thumper> ericsnow: IIRC, it is set to a test directory
[21:32] <thumper> which is deleted at the end of the test
[21:32]  * thumper is waiting around for fwereade
[21:32]  * thumper needs to walk the dog...
[21:33]  * fwereade is here for thumper
[21:33] <wwitzel3> yeah same here, I try to zero inbox before I EOD
[21:33] <wwitzel3> lol
[21:33] <thumper> fwereade: heh
[21:33] <thumper> fwereade: was just writing that I'll be back in 30min
[21:33] <ericsnow> thumper: I'm getting a bunch of failures because some of my bash startup scripts make use of $HOME
[21:33] <thumper> fwereade: but if you are here now, lets chat
[21:33] <menn0> fwereade, thumper: that's really sweet :)
[21:33] <thumper> menn0: we try...
[21:34] <fwereade> thumper, cool, would you start a hangout while I fix another drink please?
[21:34] <jcw4> awwww
[21:34] <thumper> fwereade: ack
[21:34] <thumper> jcw4, menn0: you guys are just jealous
[21:34] <jcw4> wha?
[21:34] <jcw4> me?
[21:34] <thumper> fwereade: https://plus.google.com/hangouts/_/gte3jtum3i2m4lrjob2lnm72yaa?hl=en
[21:37] <menn0> I shouldn't have said nice things about Gmail. It just went pop - 500 errors :)
[21:37] <menn0> (as in HTTP status 500)
[21:37] <jcw4> whew... I was thinking...
[21:38] <menn0> :)  ... it was only dead for a minute
[21:41] <bodie_> there was a fun gmail outage last year
[21:41] <bodie_> I always thought they were an invulnerable billion-ton giant until that
[21:42] <bodie_> now I just think they're a mostly untouchable billion-ton giant
[21:42] <jcw4> :)
[22:07]  * jcw4 is off for a few hours...
[22:21]  * thumper takes the dog for a quick walk
[22:21] <thumper> thinking time
[22:25]  * ericsnow calls it a day
[22:59] <sinzui> wallyworld, Do you have any insights into this bug? How can I bootstrap with private IPs https://bugs.launchpad.net/juju-core/+bug/1328905
[22:59] <_mup_> Bug #1328905: hpcloud: index file has no data for cloud <ci> <hp-cloud> <openstack-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1328905>
[23:00] <wallyworld> sinzui: one sec otp
[23:01] <thumper> menn0: coming?
[23:01] <waigani> menn0: you alive?
[23:26] <wallyworld> sinzui: off the phone now, did you want a hangout about that bug?
[23:27] <sinzui> wallyworld, I think I know what has happened
[23:28] <sinzui> wallyworld, The new version requires manual allocation of floating-ips whereas the previous version did it automatically http://h30499.www3.hp.com/t5/Grounded-in-the-Cloud/Managing-your-Floating-IPs-in-HP-Cloud-13-5/ba-p/6401527#.U5jkER-cYbw
[23:28] <wallyworld> oh joy
[23:28] <sinzui> wallyworld, I suppose the juju config was ignored.
[23:28] <thumper> hmm...
[23:28] <thumper> who wrote that code... was it me?
[23:29] <wallyworld> sinzui: so it failed with use-floating-ip=true?
[23:29] <thumper> how would I know?
[23:29] <sinzui> wallyworld, I am not sure that is really true because we only have 5 public addresses, and our project/tenant has had more bootstrapped envs than 5
[23:29] <thumper> yes
[23:29] <thumper> yes it was me
[23:29] <thumper> hmm...
[23:30] <sinzui> wallyworld, no it works with that...but I take that to mean the 50 of us sharing juju-scale-test have 5 IPs for either 5 envs or 5 instances total
[23:31] <sinzui> I vaguely recall we always got ip addresses instead of dns for HP envs
[23:31] <thumper> hmm... it seems that I have already implemented what I think should be there...
[23:31] <wallyworld> sinzui: so maybe i am being dumb - if we need to manually allocate a floating ip, how do we do that outside the hp console?
[23:32] <sinzui> wallyworld, there is a hp cli for that
[23:32] <perrito666> rogpeppe1: I know you most likely are not here, but should I consider your review done?
[23:33] <sinzui> wallyworld, I need to read up about something called salt that might allow ssh to a private ip on hp
[23:33] <wallyworld> sinzui: hopefully there's api specs and the transport is http or something
[23:33] <wallyworld> so we can invoke if needed
[23:33] <wallyworld> sinzui: so, do we have a feel for how many people are affected? how many juju users are on 13.x?
[23:34] <wallyworld> is this a blocker for 1.20?
[23:35] <sinzui> wallyworld, in the last 18 hours, both canonistack regions failed, azure's delete problem escalated and we ran out of resources on azure, then we finally switched over to HP's new regions losing sane IP addresses and AZs. I need to sleep
[23:35] <sinzui> wallyworld, every juju user is affected by this
[23:35] <sinzui> who uses hp
[23:35] <wallyworld> ok, let's talk tomorrow
[23:35] <wallyworld> well that sucks
[23:36] <wallyworld> a vendor change breaking juju like that
[23:37] <sinzui> wallyworld, at the start of the year, it was just the new people who were setup with horizon. A lot of people in Ubuntu engineering couldn't get juju to work because the docs advise AZ prefixes to the region and our boilerplate has use-floating-ip: false
[23:38] <sinzui> wallyworld, I think the underlying issue is HP ran out of IPs and then failed to solve the network issue with IPv6 or DNS
[23:38] <wallyworld> sure. doesn't help us though :-(
[23:39] <wallyworld> juju's ip6 support is still wip anyway i think
[23:41] <wallyworld> sinzui: so if we change the boilerplate as per your last bug comment, do you think that will unblock 1.20? pending a better longer term solution
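The boilerplate change under discussion amounts to a stanza along these lines. This is a hedged sketch of an `environments.yaml` entry for HP Cloud's post-13.5 regions; the region, auth-url, and other values are illustrative placeholders, not a verified working config.

```yaml
# environments.yaml fragment (illustrative values only)
hpcloud:
    type: openstack
    region: region-a.geo-1        # new-style region name, no az- prefix
    use-floating-ip: true         # 13.5 no longer auto-allocates public IPs
    auth-url: https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/
```

The key line is `use-floating-ip: true`: with HP's 13.5 behaviour, instances otherwise come up with only private addresses and bootstrap cannot reach them.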
[23:41] <sinzui> wallyworld, It gets a little uglier https://bugs.launchpad.net/juju-core/+bug/1247500
[23:41] <_mup_> Bug #1247500: Floating IPs are not recycled in OpenStack Havana <addressability> <ubuntu-openstack> <juju-core:Triaged> <https://launchpad.net/bugs/1247500>
[23:41] <sinzui> ^ last comment
[23:42] <wallyworld> :-(
[23:42] <perrito666> thumper: howbazaar, really?
[23:43] <wallyworld> sinzui: so short term - use floating ip to true and delete any when env is destroyed will unblock 1.20?
[23:44] <sinzui> wallyworld, I think so...I am still coming to understand this issue after a whole day of testing
[23:44] <sinzui> wallyworld, I am about to destroy an env and see what happens to the IP
[23:44] <wallyworld> sinzui: thanks for doing so much work to explain the problem. really appreciated
[23:44] <sinzui> well ci just did a destroy...at least CI is a little happier
[23:46] <sinzui> wallyworld, test 1 did return the IP
[23:46]  * sinzui does test two
[23:46]  * wallyworld is hopeful
[23:48] <sinzui> wallyworld, I am marking that bug incomplete. my tests don't repeat it
[23:48] <wallyworld> sinzui: ok. so for now, just the boiler plate change for 1.20
[23:48] <sinzui> yep
[23:48] <wallyworld> great :-)
[23:49] <sinzui> I would like to know how to get the AZ working again
[23:50] <wallyworld> as in specifying region with az2.region-a.blah
[23:50] <wallyworld> we are getting stuff in trunk which will handle az spread automatically
[23:54] <sinzui> yay
[23:55] <wallyworld> sinzui: should be landed this week - currently in review
[23:55] <wallyworld> so explicit region less important i guess
[23:59] <wwitzel3> menn0: thanks for that tip about the gmail labs .. makes email way easier
[23:59] <davecheney> thumper: https://github.com/juju/names/pull/3#discussion-diff-13674308R164
[23:59] <menn0> wwitzel3: np. It's a small thing but it makes a big difference.
[23:59] <davecheney> err, whatever