#juju-dev 2012-09-17
<davecheney> rog: can you paste the link to your doc in here ?
<rog> davecheney: https://docs.google.com/a/canonical.com/document/d/1KujyWU5BtEefqlT9vQ-mN9y-dQEMyZ-iCyByZcaAeAM/edit
<davecheney> hey- this cheeky bugger has taken out twitter handle https://twitter.com/jujudev
<Aram> rog: what's the URL of your proposal?
<rog> Aram:  https://codereview.appspot.com/6493125
<davecheney> is there a reason why charm.Meta has no Format(ter) field ?
<Aram> davecheney: http://paste.ubuntu.com/1210897/
<davecheney> Aram: thank you
<Aram> http://api.mongodb.org/wiki/current/Home.html
<Aram> http://api.mongodb.org/wiki/current/Updating.html#Updating-ModifierOperations
<rog> https://codereview.appspot.com/6489133
<davecheney> fwereade: https://codereview.appspot.com/6490121
<niemeyer> fwereade: Comments sent on https://codereview.appspot.com/6506124/
<fwereade> niemeyer, cheers
<fwereade> niemeyer, yeah, all looks sane, ty :)
<niemeyer> fwereade: Awesometastic :)
<niemeyer> davecheney: Sent some comments on FileVar
<niemeyer> davecheney: Nice idea, btw
<fwereade> http://www.smbc-comics.com/index.php?db=comics&id=2719#comic
<davecheney> watcher failure, http://paste.ubuntu.com/1211395/
<davecheney> Aram: rog https://codereview.appspot.com/6497146
<davecheney> moar failures: http://paste.ubuntu.com/1211458/
<Aram> davecheney: http://paste.ubuntu.com/1211460/
<niemeyer> davecheney: This should do it: https://codereview.appspot.com/6499126
<davecheney> niemeyer: http://paste.ubuntu.com/1211460/
#juju-dev 2012-09-18
<davecheney> lucky(~/go/src) % go version
<davecheney> go version go1.0.3
<davecheney> ^ trying out go-1.0.3
<davecheney> today
<davecheney> as this will be the golang-go deb when we ship
<Aram> http://paste.ubuntu.com/1212512/
<fwereade> Aram, for now, I'm moving RelationUnit.Watch into my queue -- I think it depends fairly heavily on the preliminary stuff I'm doing right now
<fwereade> Aram, complain if you run out of other things to do ;)
<Aram> ok
<Aram> perfect
<davecheney> # launchpad.net/juju-core/charm
<davecheney> charm/url.go:191: undefined: bson.SetZero
<davecheney> ^ why is this bitching ?
<davecheney> Aram: ?
<Aram> davecheney: bzr pull mgo
<davecheney> Aram: thank you
<davecheney> ls
<davecheney> doh
<davecheney> https://codereview.appspot.com/6532043 << niemeyer, fwereade -- small branch that adds juju set --config
<davecheney> actually parsing the options is next
<davecheney> fwereade: https://codereview.appspot.com/6532045 << as discussed
<niemeyer> Easy one: https://codereview.appspot.com/6536043
<fwereade> niemeyer, LGTM
<niemeyer> fwereade: Cheers
<davecheney> niemeyer: http://codereview.appspot.com/6501101/
<niemeyer> Easiest win, EVAR! https://codereview.appspot.com/6530045
<davecheney> LGTM
<davechen1y> fwereade: error: Failed to create merge proposal: lookup api.launchpad.net: no such host
<davechen1y> urr
<davechen1y> fwereade: https://codereview.appspot.com/6527045
<Aram> niemeyer: http://paste.ubuntu.com/1213140/
<niemeyer> Aram: Cheers
<rog> niemeyer: https://codereview.appspot.com/6527047/
<Aram> cobzr: can't remove current branch "77-mstate-watchers-machine-units-nwo2"
<Aram> I almost deleted my work
<Aram> this check is great
#juju-dev 2012-09-19
<niemeyer> http://code.google.com/p/android/issues/detail?id=33590
<davecheney> niemeyer: Aram https://bugs.launchpad.net/juju-core/+bug/1052818
<davecheney> ^ failure in the overnight stress test
<davecheney> fwereade: niemeyer I have reproposed https://codereview.appspot.com/6532043
<davecheney> as a checkpoint before moving on to option parsing
<davecheney> test failure in state, https://bugs.launchpad.net/juju-core/+bug/1052830
<davecheney> probably not that important
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1052836
<davecheney> test failure in jujuc
<fwereade> niemeyer, proposed RU settings: https://codereview.appspot.com/6524048
<davecheney> rog: http://paste.ubuntu.com/1214541/
<davecheney> any ideas ?
<davecheney> i've put the short attempts into ec2
<rog> davecheney: oh, that was my thought
<davecheney> rog: https://bugs.launchpad.net/juju-core/+bug/1052387
<davecheney> ^ le details
<davecheney> ... value *errors.errorString = &errors.errorString{s:"cannot find image satisfying constraints: error getting instance types: 404 Not Found"} ("cannot find image satisfying constraints: error getting instance types: 404 Not Found")
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1052853
<davecheney> ^ fails constantly with GOMAXPROCS > 1
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1052859
<davecheney> niemeyer: OT: i think gocheck might be eating an error doing go build
<niemeyer> davecheney: That'd be surprising
<davecheney> specifically if the test build fails (due to out of disk space)
<davecheney> it's a LOW priority item
<davecheney> niemeyer: # launchpad.net/juju-core/juju_test
<davecheney> can't create $WORK/launchpad.net/juju-core/juju/_test/_go_.6: No such file or directory
<davecheney> FAIL    launchpad.net/juju-core/juju [build failed]
<niemeyer> 2012/09/19 09:31:15 error: cannot upload tools: build failed: exit status 2; # launchpad.net/juju-core/cmd
<niemeyer> can't create $WORK/launchpad.net/juju-core/cmd/_obj/_go_.6: No such file or directory
<niemeyer> davecheney: ^
<davecheney> rerunning with -x now
<davecheney> niemeyer: http://paste.ubuntu.com/1214626/
<davecheney> niemeyer: full output of above, http://paste.ubuntu.com/1214627/
 * hazmat waits for the sprinters to return from lunch
<niemeyer> davecheney: vvv
<niemeyer>         if b.mkdirCache[dir] {
<niemeyer>                 if _, err := os.Stat(dir); os.IsNotExist(err) {
<niemeyer>                         panic("cache is wrong")
<niemeyer>                 }
<niemeyer>         }
<niemeyer> davecheney: Can you see if your stress test explodes that?
<davecheney> niemeyer: #!/bin/bash
<davecheney> set -e
<davecheney> while true ; do  bzr pull export GOMAXPROCS=$[ 1 + $[ RANDOM % 128 ]] go test $@ 2>&1
<davecheney> done
<davecheney> urgh, paste
<davecheney> http://paste.ubuntu.com/1214766/
<niemeyer> davecheney: Cheers
<davecheney> niemeyer: then  bash stress.bash -x ./mstate ./container ./cloudinit ./juju ./log ./charm/... ./juju ./container ./worker/... ./store
<niemeyer> ;set./s
<davecheney> niemeyer: http://paste.ubuntu.com/1214772/
<niemeyer> panic: cache is wrong
<davecheney> niemeyer: https://code.google.com/p/go/issues/detail?id=4104
<davecheney> niemeyer: also add
<davecheney>         export GOMAXPROCS=$[ 1 + $[ RANDOM % 128 ]]
<niemeyer> davecheney: Done
<rog> niemeyer: https://codereview.appspot.com/6540046
<fwereade> niemeyer, https://codereview.appspot.com/6533052
<rog> niemeyer: https://codereview.appspot.com/6538047
<davecheney> niemeyer: http://code.google.com/p/go/issues/detail?id=4104&sort=-id&colspec=ID%20Status%20Stars%20Priority%20Owner%20Reporter%20Summary
<davecheney> rog: http://code.google.com/p/go/issues/detail?id=4104&sort=-id&colspec=ID%20Status%20Stars%20Priority%20Owner%20Reporter%20Summary
<davecheney> # launchpad.net/juju-core/state
<davecheney> state/relation_test.go:7:2: import "launchpad.net/juju-core/mstate": cannot find package
<davecheney> anyone working on this ?
<davecheney> rog: anyone working on this ^
<hazmat> niemeyer, any objections to me adding clint as an admin to ~juju
<rog> niemeyer: https://codereview.appspot.com/6538048
<niemeyer> hazmat: Nope, no objections
<niemeyer> https://code.google.com/p/go/issues/detail?id=4105
<rog> niemeyer: https://codereview.appspot.com/6527051
<davecheney> juju set always expresses options as key=value ?
<davecheney> i cannot find any case where this is not true, but I wanted to check
<niemeyer> davecheney: +1
<davecheney> ty
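The key=value convention being confirmed above could be parsed with a helper along these lines (a hypothetical sketch, not the juju-core implementation; the function name `parseOptions` is invented):

```go
package main

import (
	"fmt"
	"strings"
)

// parseOptions turns a list of "key=value" arguments into a map,
// rejecting anything that doesn't fit that shape.
// Hypothetical helper, not taken from juju-core.
func parseOptions(args []string) (map[string]string, error) {
	opts := make(map[string]string)
	for _, arg := range args {
		k, v, ok := strings.Cut(arg, "=") // requires Go 1.18+
		if !ok || k == "" {
			return nil, fmt.Errorf("invalid option %q: expected key=value", arg)
		}
		opts[k] = v
	}
	return opts, nil
}

func main() {
	opts, err := parseOptions([]string{"admin-secret=xyz", "num-units=3"})
	if err != nil {
		panic(err)
	}
	fmt.Println(opts["admin-secret"], opts["num-units"])
}
```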
<niemeyer> http://juju-dist.s3.amazonaws.com/tools/mongo-2.2.0-precise-amd64.tgz
<niemeyer> davecheney: It lives!
<davecheney> whoop whoop
#juju-dev 2012-09-20
<mramm> Hey frank+everybody!
<TheMue> Good morning, team. ;)
<davecheney> TheMue: hello!
<TheMue> davecheney: heya
<Aram> morning TheMue.
<Aram> TheMue: we renamed mstate to state yesterday and we broke the build. now you fix it.
<TheMue> Aram: hi, mr mstate
<TheMue> Aram: mark just told me, great news (the renaming)
<mramm> Aram: are we letting frank fix everything now?    I guess he is rested up and ready to go!
<mramm> ;)
<rog> TheMue: morning!
<TheMue> rog: hello
<rog> TheMue: we've been missing you!
<TheMue> rog: i would like to be in lisbon too, but now i'll see how i can support you from here.
<niemeyer> davecheney: Beautiful juju status: https://codereview.appspot.com/6551043
<davecheney> niemeyer: nice!
<davecheney> all: should we do a google hangout/standup so we can all see frank
<davecheney> as we have the nice big projector
<davecheney> https://codereview.appspot.com/6547044 << go vet go
<mramm> davecheney: Let's do that at 10:30
<fwereade> niemeyer, https://codereview.appspot.com/6551044 should be *almost* trivial, but I suddenly realised that we apparently had *no* name-validity checking in state, and that seemed like a Bad Thing
<mramm> TheMue: Perhaps the best thing you can do right now is to catch up on the latest trunk, and look through the state changes for the last week or two
<niemeyer> ls
<TheMue> mramm: yep, just pulled it
<mramm> it will be hard to merge you into the moment by moment state updates that are going on here
<mramm> and there is a lot to catch up on
<TheMue> mramm: i go through the mails and compare it to the current code to refresh my understanding :)
<mramm> TheMue: sounds good
<niemeyer> davecheney: Reproposed, please see if I got the cases you brought up or missed one
<davecheney> niemeyer: kk
<niemeyer> fwereade: Hmm.. in which sense.. please speak live when you have a moment :)
<davecheney> niemeyer: one last one, ""
<niemeyer> davecheney: Done
<fwereade> niemeyer, and https://codereview.appspot.com/6552043 sits on top of the foregoing -- turns out I had missed a settings test from state :/
<niemeyer> fwereade: Reviewed
<niemeyer> fwereade: The bottom one
<niemeyer> ls
<davecheney> # launchpad.net/juju-core/environs/dummy
<davecheney> environs/dummy/environs.go:400: undefined: state.Initialize
<davecheney> ^ is anyone working on this ?
<fwereade> niemeyer, reproposed https://codereview.appspot.com/6551044
<niemeyer> fwereade: done
<niemeyer> ls
<niemeyer> davecheney: https://codereview.appspot.com/6547044/ done
<davecheney> niemeyer: ty
<TheMue> lunchtime
<TheMue> enjoy the good seafood over there, i've got to wait until a lunch with my parents-in-law on sunday ;)
<niemeyer> fg
<davecheney> rm
<niemeyer> fwereade: https://codereview.appspot.com/6533052/ done
<SpamapS> are you guys in Lisbon this week?
<rog> davecheney: try again
<davecheney> rog: lucky(~) % GOROOT=~/go.103 ~/go.103/bin/go tool vet -h
<davecheney> Usage of vet:
<davecheney>   -printfuncs="": comma-separated list of print function names to check
<davecheney>   -v=false: verbose
<davecheney> go tool vet: exit status 2
<davecheney> that's all ya get in 1.0.3
<davecheney> # launchpad.net/juju-core/charm
<davecheney> ../charm/url.go:191: undefined: bson.SetZero
<davecheney> FAIL    launchpad.net/juju-core/state [build failed]
<davecheney> ??
<davecheney> # launchpad.net/juju-core/state_test
<davecheney> ./conn_test.go:64: undefined: D
<niemeyer> SpamapS: Not sure if someone answered, but yes, we're in Lisbon
<SpamapS> niemeyer: explains why you're all quiet in here for once. :)
<niemeyer> SpamapS: :-)
<davecheney> state_test.go:585: c.Assert(err, IsNil)
<davecheney> ... value schema.error_ = schema.error_{want:"string", got:interface {}(nil), path:[]string{".", "type"}} ("type: expected string, got nothing")
<niemeyer> davecheney: ?
<fwereade> niemeyer, rog: https://bugs.launchpad.net/juju-core/+bug/1053419
<niemeyer> ls
<davecheney> i think there might be a bug with environs/config.Configs
<davecheney> that are backed by {m}state
<davecheney> schema.error_ = schema.error_{want:"string", got:interface {}(nil), path:[]string{".", "type"}} ("type: expected string, got nothing")
<davecheney> Aram: schema.error_ = schema.error_{want:"string", got:interface {}(nil), path:[]string{".", "type"}} ("type: expected string, got nothing")
<niemeyer> Hey guys, can I get a review on this: https://codereview.appspot.com/6555043
<niemeyer> This is builddb
<niemeyer> It'll certainly not work on trunk right now, but I'd like to get it in so we can keep it somewhat up-to-date eventually
 * davecheney looks
<niemeyer> I was also stuck on the old state to get it to build MongoDB.. I want to move on now so I can continue contributing to (m)state
<davecheney> niemeyer: thank you, reproposed
<davecheney> Aram: https://codereview.appspot.com/6541045
<niemeyer> davecheney: Looking
<niemeyer> Trivial: https://codereview.appspot.com/6549044
<davecheney> niemeyer: +1, although while you have won the battle, you are probably losing the war
<niemeyer> davecheney: Yeah, unfortunately :-)
<niemeyer> Or fortunately, I guess.. moar tests
<davecheney> # launchpad.net/juju-core/environs/dummy
<davecheney> environs/dummy/environs.go:400: cannot use cfg.AllAttrs() (type map[string]interface {}) as type *config.Config in function argument
<davecheney> ^ will fix this next
<davecheney> niemeyer: Aram https://codereview.appspot.com/6541047
<davecheney> ^ trivial
<niemeyer> davecheney: Sweet, LGTM
<davecheney> # launchpad.net/juju-core/worker
<davecheney> worker/environ.go:16: undefined: state.EnvironConfigWatcher
<davecheney> worker/environ.go:21: select case must be receive, send or assign recv
<davecheney> interesting
<niemeyer> All: lp:~niemeyer/juju-core/old-state ... for eternity
 * davecheney plays taps
<rog> Aram, niemeyer: https://codereview.appspot.com/6549045
<rog> niemeyer: next in line (almost exactly as it was in old state): https://codereview.appspot.com/6550047
<niemeyer> :w
#juju-dev 2012-09-21
<davecheney> niemeyer: http://code.google.com/p/go/source/browse/ssh/tcpip_func_test.go?repo=crypto
<davecheney> ^ http with custom Dial()
<davecheney> possibly something similar could be done with mgo
<niemeyer> davecheney: Sweet, thanks
<niemeyer> :w
<TheMue> Aram: your current proposal has changes to the confignode that william's last one also has
<Aram> TheMue: william merged my changes before I merged them into trunk because he needed them.
<Aram> but they are the same changes
<TheMue> Aram: ok, so it's coordinated
<TheMue> Aram: you are also very good identifiable on the team picture :D
<Aram> heh
<TheMue> the man with the hat
<TheMue> lunchtime
<davecheney> TheMue: well spotted
<niemeyer> TheMue: Heya
<niemeyer> TheMue: Welcome back
<TheMue> niemeyer: hello
<TheMue> niemeyer: the go fmt in the review looks off, because imho the "true" is one position too far left
<TheMue> niemeyer: maybe it's a display failure in the web
<niemeyer> TheMue: It looks in the right place to me
<TheMue> niemeyer: then just forget it, it seems i need glasses ;)
<Aram> TheMue: lbox now refuses to work if you didn't go fmt beforehand
<Aram> lunchy
<TheMue> Aram: yes, i've seen it. i already wondered
<TheMue> ooops, too late :D
<niemeyer> TheMue: Btw, please update it :)
<niemeyer> (lbox, that is)
 * niemeyer > lunch too
<TheMue> niemeyer: that was my first job after i read your mail while i was on the island :P
<TheMue> enjoy your meal
<wrtp> davecheney: https://codereview.appspot.com/6553049
 * davecheney looks
<TheMue> davecheney: it's a larger one
<TheMue> wrtp: you've got a first review
<wrtp> TheMue: thanks
<TheMue> wrtp: i like the naming as 'state server' instead of the used product
<wrtp> TheMue: thanks
<canesin> Hi all, I would like to know if there is any advance in making juju work with spot instances
<wrtp> davecheney: PTAL
<davecheney> kk
<wrtp> davecheney: PTAL
<niemeyer> juju-db start/running, process 6389
<niemeyer> jujud-machine start/running, process 6397
<niemeyer> jujud-provisioning start/running, process 6409
<niemeyer> $ ps auxw | grep mongo
<niemeyer> root      6389  0.6  1.8 127396 30916 ?        Ssl  14:12   0:00 /opt/mongo/bin/mongod --port 37017 --bind_ip 127.0.0.1 --dbpath=/var/lib/juju/db
<niemeyer> Just sayin'!
<davecheney> wd
<davecheney> wrtp: LGTM
<niemeyer> environs/cloudinit fixes: https://codereview.appspot.com/6543050
<niemeyer> ls
<niemeyer> See.. :(
<niemeyer> Focus is totally broken
<wrtp> niemeyer, davecheney: fairly trivial: https://codereview.appspot.com/6549049
<davecheney> wrtp: LGTM
<davecheney> ubuntu@server-16690:~/src/launchpad.net/juju-core$ go build ./...
<davecheney> # launchpad.net/juju-core/worker/firewaller
<davecheney> worker/firewaller/firewaller.go:330: undefined: state.PortsWatcher
<davecheney> worker/firewaller/firewaller.go:388: undefined: state.FlagWatcher
<davecheney> # launchpad.net/juju-core/worker/uniter/relation
<davecheney> worker/uniter/relation/hookqueue.go:24: undefined: state.RelationUnitsChange
<davecheney> worker/uniter/relation/hookqueue.go:166: undefined: state.RelationUnitsChange
<davecheney> that is all that is left
<niemeyer> davecheney: I'm firewaller, and Aram is working on adding stubs to the relation watchers while William doesn't have them
<davecheney> kk
<niemeyer> https://code.launchpad.net/juju-core/+activereviews
<davecheney> i am fixing provisioner_test
<davecheney> sorry, i should have spoken up a few minutes ago
<fwereade> niemeyer, https://codereview.appspot.com/6553051 has EnsureDying/EnsureDead; going back to RelationUnitsWatcher now (at last :()
<davecheney> http://paste.ubuntu.com/1218959/
<davecheney> environconfigwatcher always blows up if the underlying config was never set
<fwereade> niemeyer, https://codereview.appspot.com/6553051/diff/1/state/service_test.go#newcode249
<davecheney> Aram: func (w *EnvironConfigWatcher) loop() (err error) {
<davecheney>     settingsWatcher := w.st.watchConfig("e")
<davecheney>     defer settingsWatcher.Stop()
<davecheney>     changes := settingsWatcher.Changes()
<davecheney>     configNode := <-changes
<davecheney> configNode := <-changes
<Aram> davecheney: https://codereview.appspot.com/6546055
<davecheney> Aram: i guess that works, I don't think I am qualified to review
<fwereade> niemeyer, reproposed https://codereview.appspot.com/6553051
<niemeyer> firewaller is up for review! https://codereview.appspot.com/6548051/
<davecheney> Aram: lp:~niemeyer/juju-core/new-state-firewaller
<davecheney> do that same hack to invalidateEnvironment
<wrtp> davecheney, niemeyer: https://codereview.appspot.com/6554061
<niemeyer> ls
<wrtp> charm	   container   juju    state	trivial  version
<wrtp> cloudinit  downloader  log     store	upstart  worker
<wrtp> cmd	   environs    schema  testing	util
<wrtp> %
<davecheney> wow - i didn't know I could do that
<davecheney> lucky(~/src/launchpad.net/juju-core/worker/provisioner) % bzr switch lp:~niemeyer/juju-core/new-state-firewaller
<davecheney> Updated to revision 555.
<davecheney> Switched to branch: bzr+ssh://bazaar.launchpad.net/~niemeyer/juju-core/new-state-firewaller/
<davecheney> ^ ok, don't do this
<davecheney> it'll screw up your cobzr
<davecheney> lucky(~/src/launchpad.net/juju-core) % bzr switch 115-testing-testing
<davecheney> cobzr: .bzr/branch/location points to bzr+ssh://bazaar.launchpad.net/~niemeyer/juju-core/new-state-firewaller
<davecheney> Aram: lp:~dave-cheney/juju-core/001-provisioning-worker/+merge/125794
<davecheney> ^ this branch compiles, but panics
<wrtp> Aram: http://paste.ubuntu.com/1219229/
<wrtp> Aram: https://codereview.appspot.com/6547058/
#juju-dev 2012-09-22
<davecheney> wrtp: did you see my mail about the machineswatcher ?
<davecheney> sorry, firstly, how was your flight ?
<wrtp> davecheney: no i didn't. i've just arrived back. will have a look.
<davecheney> i've managed to blow up my cobzr repo
<davecheney> so i've lost all my old branches
<davecheney> but i'll get them back and port the machines watcher tests
<wrtp> davecheney: fine. a bit crowded at lisbon. got to the front of an hour-long queue and they said "you could have used the priority queue because you checked in online". grrr.
<wrtp> davecheney: oops. how did you manage that?
<wrtp> davecheney: are you still in lisbon BTW?
<davecheney> yeah, don't leave for hours
<davecheney> eating stale chips and a crap club sandwich
<davecheney> wrtp: bzr branch url:///
<davecheney> not lp:...
<davecheney> will do a remote branch
<davecheney> and it screws up cobzr
<davecheney> i did it yesterday, have a look in the channel log for the gory details
<wrtp> davecheney: oops
<wrtp> davecheney: did you post to juju-dev ?
<davecheney> y
<davecheney> lucky(~/devel/juju-core) % bzr switch 111-util-quango
<davecheney> bzr: ERROR: Not a branch: "/home/dfc/devel/juju-core/.bzr/cobzr/000-foo/
<davecheney> /".
<davecheney> ^ i'm trying to fix it, but the .bzr/branch/location file cannot have a \n at the end of the line ...
<wrtp> davecheney: ah, found it
<wrtp> davecheney: yeah, definitely needs more tests!
<davecheney> i'm concerned the machines watcher doesn't work properly
<davecheney> which is sorta important for the PA :)
<wrtp> davecheney: mere technicalities!
<davecheney> i know
<davecheney> such trivialities
<davecheney> oh brilliant, there were no tests for the old machines watcher ...
 * davecheney le sigh
<wrtp> davecheney: have you looked in watcher_test.go
<wrtp> ?
<davecheney> wrtp: maybe i missed it, but there are no tests that call WatchMachines
<wrtp> davecheney: state_test.go:/WatchMachines
<davecheney> in the old codebase or the new ?
<wrtp> davecheney: old codebase
<wrtp> davecheney: it's true they're fairly poxy tests
<davecheney> ahh yes
<davecheney> i did miss it
<davecheney> well, i'll start with that
<wrtp> davecheney: i'm not sure the table-driven test adds much there tbh
<davecheney> and indeed that is present in the new codebase
<davecheney> I wonder if it's actually testing anything
<wrtp> davecheney: doesn't look implausible, but it could do with doing many more machines.
<wrtp> davecheney: it occurs to me that a StopSync might be useful
<wrtp> davecheney: then we can easily simulate multiple changes arriving at once.
<davecheney> what does Start/Stop sync do ?
<wrtp> davecheney: StartSync triggers a db poll
<davecheney> does it disconnect the watcher from the underlying event stream ?
<davecheney> so stopSync, change some stuff, start sync == big change
<wrtp> davecheney: yup
<davecheney> ok, i'll keep looking
<wrtp> davecheney: we *can* do that now, but it's non-deterministic because the sync might happen while we're making the changes
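The StartSync/StopSync idea above can be sketched as change coalescing (hypothetical types, not the juju-core watcher API): changes made while syncing is stopped accumulate, and a sync delivers them as one combined event.

```go
package main

import "fmt"

// change mirrors the Added/Removed shape discussed in the chat;
// the types and method names here are invented for illustration.
type change struct{ Added, Removed []string }

type watcher struct {
	pending change
	out     chan change
}

func newWatcher() *watcher { return &watcher{out: make(chan change, 1)} }

// record buffers a change while "sync" is stopped, instead of
// emitting an event per change.
func (w *watcher) record(added, removed []string) {
	w.pending.Added = append(w.pending.Added, added...)
	w.pending.Removed = append(w.pending.Removed, removed...)
}

// sync flushes everything buffered so far as a single event,
// mimicking a StartSync-triggered db poll.
func (w *watcher) sync() {
	w.out <- w.pending
	w.pending = change{}
}

func main() {
	w := newWatcher()
	w.record([]string{"machine-1"}, nil)
	w.record([]string{"machine-2"}, nil)
	w.sync()
	fmt.Println(<-w.out) // both additions arrive as one change
}
```

This is what makes the "stop sync, change some stuff, start sync == big change" test pattern deterministic: no event can slip out mid-edit.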
<davecheney> the PA is just acting like it never sees any machines changes at all
<wrtp> davecheney: have you put a log statement in?
<wrtp> davecheney: just to make sure
<davecheney> they were there from the old days
<davecheney> but it's been a while
<davecheney> i'll keep digging
<davecheney> hmm, no logging in the normal path
<davecheney> only the error path
<davecheney> that would be a good place for me to start
<davecheney> wrtp: how was customs ?
<davecheney> should I get a move on
<wrtp> davecheney: fine, but quite slow again
<wrtp> davecheney: depends very much on how many people are going through at that time, of course
<davecheney> i should probably go then
<davecheney> wrtp: interesting
<davecheney> [LOG] 97.37511 JUJU:DEBUG watcher: got request: watcher.reqWatch{key:watcher.watchKey{c:"machines", id:interface {}(nil)}, info:watcher.watchInfo{ch:(chan<- watcher.Change)(0xf840151eb0), revno:0}}
<davecheney> [LOG] 97.37554 JUJU rcvd machines change: &{[] []}
<wrtp> davecheney: just thought - does it call AllMachines?
<wrtp> davecheney: 'cos AllMachines is horribly broken
<davecheney> wrtp: yes, but only after actioning machine additions
<wrtp> davecheney: AllMachines only fills in the machine id, nothing else.
<davecheney> wrtp: ok, i'll be careful of this
<davecheney> but this is happening before then
<davecheney> in worker/provisioner:loop()
<davecheney> we wait on the watcher
<davecheney> and this is the change that is received
<wrtp> oops, just almost ran out of battery
<wrtp> davecheney: the WatchMachines test in the new state seems much better than the old one
<wrtp> davecheney: though it doesn't check what happens when there's an initial machine.
<davecheney> wrtp: yeah, there are a few bugs like that
<davecheney> expecting addmachine to give you machine 1, not machine 0
<davecheney> i think we talked about this yesterday
<davecheney> because of the bootstrap requirements
<wrtp> davecheney: it looks like it works though
<wrtp> davecheney: yeah we did
<wrtp> davecheney: but after chatting about it with gustavo, we decided to keep things the way they were
<wrtp> davecheney: i.e. the dummy provider bootstrap does not allocate a machine and instance
<davecheney> wrtp: sounds sane
<davecheney> and predictable
<wrtp> davecheney: i dunno. given that every real provider will have (at least) machine 0 allocated at bootstrap time, i sway towards the thought that dummy should too
<wrtp> davecheney: but it does require quite a few tests to be fixed
<davecheney> wrtp: i fear that is mocking too much
<wrtp> davecheney: i definitely see that POV too
<davecheney> the magic that happens behind the scenes in jujud bootstrap is unfortunate
<davecheney> because we either have to ignore it in the tests
<davecheney> or mock it
<davecheney> which is brittle
<wrtp> davecheney: gotta go now. hope you survive your flights ok.
<wrtp> davecheney: see you whenever you manage to surface!
<davecheney> wrtp: cool
<davecheney> i'll keep working on this
<davecheney> btw, should MachinesWatcher ever return a change with empty Added and Removed ?
#juju-dev 2013-09-16
<axw> thumper: re "This kind of goes against a lot of our coding standards."  -- it wasn't intentional :)
<axw> I have fixed it.
<thumper> cool
<thumper> I sort of noticed that in the next file when I saw william's comment
<thumper> and realised what you had done
<thumper> wallyworld_: cc'ed you on the email with robie about kvm
<thumper> as there is some simplestream stuff there
<wallyworld_> ok
<thumper> for how the infrastructure syncs the images for kvm
<thumper> ok, I'm off for the monday afternoon ice-skating / shopping time
<thumper> back later to chat to william
<axw> wallyworld_, jam: https://codereview.appspot.com/13532047
<axw> can't lbox propose at the moment without this
<jam> axw: doesn't that mean there is a Logf function that is being used incorrectly?
<axw> jam: no, there's two types of Logf
<axw> the vet catches one, and errors on the other which is being used correctly
<axw> loggo.Logf takes a first parameter of the log level
<axw> then the log format
<axw> gocheck's C.Logf takes the log format  as the first arg
<axw> AFAICT there's no way of telling go vet how to distinguish between the two Logfs
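The clash axw describes can be illustrated with two stand-in functions (paraphrased shapes only, not the real loggo or gocheck signatures): both are named Logf, but the format string sits in different argument positions, which is what defeats go vet's printf checking.

```go
package main

import "fmt"

// loggoLogf mimics the loggo shape described above: the log level
// comes first, then the format string. Invented stand-in, not the
// real loggo API.
func loggoLogf(level int, format string, args ...interface{}) string {
	return fmt.Sprintf("[%d] ", level) + fmt.Sprintf(format, args...)
}

// gocheckLogf mimics gocheck's C.Logf: the format string is the
// first argument. Also a stand-in.
func gocheckLogf(format string, args ...interface{}) string {
	return fmt.Sprintf(format, args...)
}

func main() {
	// vet's printf check assumes one position for the format string
	// per function name, so it cannot be right about both of these.
	fmt.Println(loggoLogf(1, "started %s", "worker"))
	fmt.Println(gocheckLogf("started %s", "worker"))
}
```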
<jam> axw: how did the original get proposed at all then?
<jam> I'm fine with the change, but I'm curious what happened
<axw> jam: yeah I'm not too sure. I will investigate; only thing I can think is that it was proposed, then .lbox.check modified, and a bzr push done
<axw> or there was a merge, but I don't think so
 * axw gets tea and investigates
<axw> jam: it was the latter; there were two independent MPs that got merged
<wallyworld_> axw: sorry, was doing school pickup, do you still need me to look?
<axw> wallyworld_: s'ok, jam's looking
<wallyworld_> ok
<jam> axw: LGTM
<axw> thanks jam
<jam> axw: poke
<jam> axw: I'm looking at your localstorage and ssh storage stuff, and really *I* think that the temp directory should be in or at least next-to the storage location. Because there is 0 guarantee that /tmp is going to be on the same filesystem. (often /tmp is a tmpfs, etc)
<jam> So I'm happy with "ssh storage uses it until the local provider takes it over", but I think the tmp dir should be under storage
<jam> I'm fine with 'tmp' vs '.tmp' I don't think it really matters.
<rogpeppe> mornin' all
<jam> good morning rogpeppe
<rogpeppe> jam: hiya
<TheMue> heyho all
<axw> jam: sorry, was taking a break before. reading now
<axw> jam: it will be in /tmp only if you don't tell it where to go
<axw> and for the only usage, I will be telling it where to go
<jam> axw: but why make that the default rather than next to the storage, since then you are much more sure it will be on the same fs
<axw> jam: for sshstorage, the default is next to it. I could make it the same for filestorage too
<axw> I didn't do that because it'll pollute the local filesystem for sync/generate-tools metadata
<axw> probably not a big deal
<axw> for sshstorage, the default is the remote storagedir with ".tmp" appended
<jam> axw: I think it makes sense to have them match, and I think putting it inside is nice if it is something we will be managing.
<axw> jam: I'll take a look at how feasible it'll be. environs/localstorage may need to change to do something similar. I'm hesitant to change that, because that'll require upgrade checks
<jam> axw: why?
<jam> if it is the temporary dir, we don't see content in it until it is finalized and then moved into place
<jam> right?
<axw> right
<axw> so the way I did it before was
<axw> sshstorage wrote to ${storage}/content, with a staging in ${storage}/tmp
<axw> localstorage expects everything to be directly under ${storage}
<axw> so I changed sshstorage back to doing that
<axw> with a tmp dir as a peer of ${storage}
<axw> if I were to not do that, I'd either need to change localstorage to have content/tmp subdirs, or check the feasibility of pointing localstorage at ${storage}/content
<jam> axw: so from what you've said, localstorage needs to take over what sshstorage uploaded, right?
<axw> jam: correct
<jam> and even if you pointed it at .../content it wouldn't use .../tmp for its temp dir
<axw> yes
<axw> localstorage doesn't do any of this tempdir stuff at the moment
<axw> it probably should, if we're being paranoid
<axw> sorry jam, maybe I confused things. there are three storage implementations to consider
<axw> sshstorage, filestorage, localstorage
<axw> filestorage was only updated on william's request, and doesn't interact with sshstorage at all
<axw> localstorage is the one that takes over. localstorage does not currently have the write-to-tmp-then-mv behaviour
<axw> and since localstorage  is existing, and used by the local provider, I was hesitant to change its layout
<jam> axw: so right now localstorage just writes directly to the final location, and interrupted transfers will see partial data?
<axw> jam: yup
<jam> axw: I think we want to fix that regardless the rest.
<jam> even if it is $TMP or whatever
<axw> fair enough
<axw> jam: I was thinking of modifying localstorage to become something more generic, exposing another storage via HTTP. I might as well bite the bullet and do that now.
<thumper> fwereade: ping
<fwereade> thumper, pong
<thumper> hangout?
<fwereade> thumper, sure, a quick one, just a mo
 * thumper waits
 * fwereade has found his headphones and is starting one
<fwereade> thumper, https://plus.google.com/hangouts/_/b68108f32c53bcd9634bd39560203820f2029809?hl=en
<jam> dimitern, fwereade: I will not make it to the standup today, I have to go sign my son up for after school activities
<jam> wallyworld_: http://bazaar.launchpad.net/~go-bot/juju-core/trunk/view/head:/environs/storage.go#L62 (line 62)
<jam> shouldn't that be "if err == nil" then set dataURL = fullURL ?
<jam> (I think we're missing some tests about the return value of Fetch())
<wallyworld_> seems so :-(
<jam> wallyworld_: well, you have "datasourceSuite.Fetch()" which is asserting it gets the relative path back
<jam> but I think that assertion is actually incorrect.
<wallyworld_> could be, i'll look into it
<jam> wallyworld_: k, I was trying to do some assertions that we are properly connecting to an HTTPS location, etc.
<jam> but ran into that.
<wallyworld_> sorry :-( will fix
<jam> np
<dimitern> wallyworld_, rogpeppe , mgz , fwereade, natefinch : standup
<mgz> ta
 * TheMue => lunch
<jam> mgz: poke
<yolanda> hi, i'm having a problem with juju config vars. Doing a config-get for one of the vars throws: subprocess.CalledProcessError: Command '['config-get', 'admin_pubkey', '--format=json']' returned non-zero exit status 2
<yolanda> i tested with juju get nameofservice, and this var has a value
<yolanda> any idea on what can be failing?
<mgz> yolanda: did you look at what the stderr output was?
<yolanda> mgz, i only saw that exit status 2
<mgz> right, but you're the one deploying the charm
<mgz> so, make the charm give you useful output instead of just the exit code
<rogpeppe> axw: trivial change to environs/sshstorage: https://codereview.appspot.com/13321046
<natefinch> is there a way to get gocheck to be less spammy?  It prints out like 100 lines per failure, when really all I want is the line of the test that failed
<mgz> sed
<mgz> only slightly joking...
<natefinch> mgz: that was going to be my next move :/
<rogpeppe> natefinch: gocheck can't know how much to print, because it doesn't know about tests within tests
<rogpeppe> natefinch: i agree it's a problem
<natefinch> rogpeppe: it appears to be printing logging, though... is that our fault for how we're using it?
<rogpeppe> natefinch: we deliberately print logging, because otherwise there's no easy way to see what's happening underneath when a test fails
<rogpeppe> natefinch: it's easy enough to get rid of the log lines though
<natefinch> rogpeppe: most of the time I know why something is failing, or can tell when I get there. I'd much rather have logging be an optional flag than on all the time... it obscures the actual failures
<rogpeppe> natefinch: it's an interesting possibility - you could add a flag that LoggingSuite inspects to determine whether to print log messages, perhaps
<rogpeppe> natefinch: i'm not sure whether the default should be on or off though - when i get a random unexpected (and perhaps only occasionally reproducible) error in the tests, i actually want to see as much info as possible
<rogpeppe> natefinch: for example when something fails in the bot
<rogpeppe> natefinch: actually, on balance, i think an environment variable would probably work better
<natefinch> rogpeppe: the bot could always run with --verbose since in theory it shouldn't see failures
<natefinch> rogpeppe: as long as I have a toggle, I don't care where it is :)
<rogpeppe> natefinch: because then it would be just ignored for any package that doesn't happen to use LoggingSuite
<natefinch> rogpeppe: fair enough
<rogpeppe> natefinch: quick review of the above CL? https://codereview.appspot.com/13321046
<rogpeppe> natefinch: it just fixes the build for me
<natefinch> reviewed
<natefinch> rogpeppe: where is the logging currently turned on? I can at least go poke at it locally to make my life easier right now
<rogpeppe> natefinch: see testing.LoggingSuite
<natefinch> rogpeppe: thanks.. that is so much better.  What do you think about me submitting a CL for an environment variable that turns off logging? (default would be same behavior as now)
<rogpeppe> natefinch: seems like a good idea
<rogpeppe> natefinch: perhaps a way to set the logging level actually
<rogpeppe> natefinch: in fact, doesn't loggo accept some kind of config syntax
<rogpeppe> ?
<natefinch> rogpeppe: yeah, there's a SetLogLevel... we could just have that settable via the env variable. Right now it defaults to DEBUG.
<rogpeppe> natefinch: i was actually thinking of ConfigureLoggers
<natefinch> rogpeppe: ahh yeah... setting that seems like a lot of work.  I've used similar per-package log levels in other projects... and never have I ever done anything but set the global log level :)
<rogpeppe> natefinch: well, yeah
<rogpeppe> natefinch: it will potentially be useful in a real juju environment - to enable low level logging in some problematic packages only
<rogpeppe> natefinch: because some low level logging can be extremely verbose
<rogpeppe> natefinch: so a single global log level is too big a hammer sometimes
<rogpeppe> natefinch: but for tests, i think i agree
<natefinch> We can always add another env variable later if people need a smaller hammer for their tests
<axw> rogpeppe: thanks for fixing that
<rogpeppe> axw: i guess not many people don't use bash :-)
<axw> what'cha running? rc? :)
<rogpeppe> axw: yeah
<rogpeppe> axw: it's the shell i've used since... jeeze, maybe 1991?
<axw> can't say I've ever used anything apart from bash, really
<axw> zsh briefly
<rogpeppe> axw: rc has some definite advantages for scripting in particular
<axw> csh when I had to at my old job
<axw> *shudder*
<rogpeppe> axw: the semantics are much much cleaner
 * axw has a look at some docs
<rogpeppe> axw: good intro: http://plan9.bell-labs.com/sys/doc/rc.html
<axw> ta
<rogpeppe> axw: in particular see section 28 for a good overview of why it works the way it does
<rogpeppe> i'd appreciate a review of this if possible: https://codereview.appspot.com/13535045
<axw> rogpeppe: what will ConfigStorage be used by?
<rogpeppe> axw: the command line commands
<rogpeppe> axw: the environments.yaml stuff is due to change to allow extra information to be associated with an environment
<rogpeppe> axw: created when an environment is bootstrapped
<rogpeppe> axw: and also to cache current endpoint info so that we don't need to talk to the provider to try to find out the API address for an environment
<rogpeppe> axw: here's my idea for the eventual form of the ConfigStorage interface: http://paste.ubuntu.com/6115093/
<rogpeppe> axw: it'll probably change as it encounters reality
<axw> thanks rogpeppe, reading now
<rogpeppe> axw: thanks
<yolanda> mgz, i was able to debug error message, it shows: panic: version: cannot read forced version: open FORCE-VERSION: permission denied
<yolanda> maybe i'm conflicting with some other var? i'm using admin_fullname, maybe is that reserved?
<yolanda> full log here: http://paste.ubuntu.com/6115131/
<mgz> yolanda: thanks, that's useful
<yolanda> mgz, what can be causing that? seems a bit unrelated with a simple config-get
<yolanda> other vars are working ok, so i'm thinking that the name may be conflicting, but not sure
<mgz> indeed. seems to be an args parsing thing somehow.
<mgz> the logic in version/version.go is:
<mgz> get the dir of args[0] as the location of tools, open the FORCE-VERSION file there
<mgz> but if other config-get invocations work something funky is going on
<mgz> but I don't much like that logic regardless
<yolanda> mgz, should i file a bug?
<mgz> yolanda: yes, and it probably needs more context on what exactly that charm hook is doing
<yolanda> just trying to change the name of config vars and redeploy now
<yolanda> changing admin_xxx for another prefix
<mgz> it seems more likely that the hook is accidentally changing pwd or something to break stuff
<yolanda> mgz, the hook just happens after a db joined, and tries to render some content with these vars, it's first config-get called in the hook
<rogpeppe> lunch
<yolanda> mgz, indeed, i changed the var for something different like "defaultname" but still the same problem
<yolanda> i can't imagine what's causing config-get to fail there
<mgz> yolanda: not sure what to suggest as next debugging step. have you filed that bug yet?
<yolanda> mgz, no, because i don't know what's the cause
<mgz> some bugs are just symptoms :)
<yolanda> mgz, trying to filter a bit, now i'm deploying without external config file
<yolanda> just using defaults
<yolanda> same problem, i'll file the bug
<yolanda> mgz, https://bugs.launchpad.net/juju-core/+bug/1226088
<_mup_> Bug #1226088: config-get fails with "open FORCE-VERSION: permission denied" <juju-core:New> <https://launchpad.net/bugs/1226088>
<mgz> yolanda: thanks. what's the charm? the hook code seems more relevant than the config.
<yolanda> mgz, it's gerrit charm, it's not still public
<yolanda> can i send it to somewhere private?
<mgz> is it actually private, or just not ready? can either push to ~yolanda.robla namespace or I guess just one of the canonical datacenter machines if it's really not to be revealed
<yolanda> mgz, we are currently pushing to canonical-ci private team
<yolanda> we are keeping that private at the moment
<arosales> rogpeppe, fwereade outlooks on cutting a 1.14 release today?
<arosales> I see rogpeppe has  a branch out for  https://bugs.launchpad.net/juju-core/+bug/1220027
<_mup_> Bug #1220027: worker/provisioner: cannot restart cleanly due to hard dependency on api server <papercut> <juju-core:In Progress by rogpeppe> <juju-core 1.14:In Progress by rogpeppe> <juju-core trunk:In Progress by rogpeppe> <https://launchpad.net/bugs/1220027>
<arosales> which is the last bug for the milestone  https://launchpad.net/juju-core/+milestone/1.14.0
<rogpeppe> arosales: i think that's all merged
<arosales> rogpeppe, bug is still in progress, could you update the bug status?
<yolanda> mgz, checking with the team about that
<rogpeppe> arosales: and *should* be fixed, though i wasn't able to reproduce the original issue, so i'm not absolutely sure
<arosales> fwereade, robbiew well at this point we need to push 1.14 out
<arosales> if we need to do a 1.14.1 we can address that there if any bits are missing
<robbiew> huh...oh... rogpeppe^
<robbiew> ;)
<arosales> robbiew, sorry
<arosales> auto complete
<robbiew> np
<rogpeppe> arosales: i didn't mark the bug as fixed, because although i sincerely believe that it is, i need someone who can verify that
<mgz> yolanda: see /query
<arosales> hmm davecheney reported the bug initially
<arosales> rogpeppe, did you say the branch is in 1.14?
<rogpeppe> arosales: yes
<arosales> rogpeppe, so if we cut a 1.14 release today it would have 1220027 fix in it, just untested
<rogpeppe> arosales: yes.
<rogpeppe> arosales: so... worth marking as "fix released", i guess?
<arosales> rogpeppe, I think so if the branch is in 1.14, trunk, and 1.15
<rogpeppe> arosales: ok,marked as "fix committed"
<arosales> rogpeppe, thanks. if we need to return to it we can issue a 1.14.1 or address it in subsequent 1.15 releases
<natefinch> rogpeppe: btw, it looks like some of the tests rely on logging being enabled during testing
<rogpeppe> natefinch: ah, damn, yes, they do
<natefinch> rogpeppe: that seems like a weakness in the tests, honestly.
<rogpeppe> natefinch: they should probably explicitly enable it, or use a different log channel or something
<natefinch> rogpeppe: yep
<rogpeppe> anyone up for a review? nothing earth shaking, just part of on-going config cleanup: https://codereview.appspot.com/13421050
<mgz> looking
<arosales> sinzui, it looks like the bugs needed for the 1.14 are in fix committed https://launchpad.net/juju-core/+milestone/1.14.0
<arosales> sinzui, thus I would like to start working with you on the 1.14 release and the associated QA.
<arosales> sinzui, we can sync with davecheney when he comes on-line later in the day.
<arosales> sinzui, but wanted to give you a heads up that we'll need to cut a release today.
<mgz> rogpeppe: all makes sense to me
<rogpeppe> mgz: thanks
<sinzui> thank you arosales
 * rogpeppe reaches eod
<rogpeppe> g'night all
<natefinch> rogpeppe: g'night
<sinzui> I cannot build juju core 1.14. I can build and run trunk. Make build for 1.14 exits with an error
<sinzui> # launchpad.net/juju-core/provider/azure
<sinzui> provider/azure/storage.go:96: multiple-value context.GetAnonymousFileURL() in single-value context
<natefinch> sinzui: looking....
<natefinch> sinzui: I think it's a difference in the supported version of gwacl
<sinzui> ah
<natefinch> sinzui: yeah, just update gwacl and that should fix it.  Sorta surprised it built in one place and not the other
<sinzui> natefinch, This is a case of me downgrading. I just pulled the 1.14 branch and switched to it.
<sinzui> I don't think the build-release-tarball.bash script will be happy with non-trunk and it always pulls gwacl tip
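The compile error sinzui hit is the generic Go failure mode when a dependency's function grows a second return value. A minimal illustration (the function here is a stand-in, not the real gwacl API):

```go
package main

import "fmt"

// getAnonymousFileURL stands in for a dependency call whose signature
// changed: it now returns (string, error) where it once returned
// only a string.
func getAnonymousFileURL(name string) (string, error) {
	return "https://example.invalid/" + name, nil
}

func main() {
	// An old single-value call site no longer compiles once the second
	// return value is added:
	//
	//     url := getAnonymousFileURL("x")
	//     // error: multiple-value getAnonymousFileURL() in single-value context
	//
	// The fix is to accept both values:
	url, err := getAnonymousFileURL("x")
	if err != nil {
		panic(err)
	}
	fmt.Println(url)
}
```

This is why pinning dependency revisions (sinzui's dependencies.tsv point) matters: building an old branch against dependency tip breaks exactly this way.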
<thumper> mramm: ping
<natefinch> sinzui: what's weird is that we cut 1.14 after the change to gwacl
<sinzui> oh?
<natefinch> sinzui: at least, according to the timestamps on each.  the gwacl change was on 9/11 and 1.14 was cut on 9/13
<sinzui> natefinch, trunk clearly has a change to make it compatible: http://pastebin.ubuntu.com/6116619/
<natefinch> sinzui: actually... nevermind, that's the latest change on 1.14, not when it was cut
<sinzui> It is getting selected merged
<sinzui> merged
<sinzui> merges
<natefinch> sinzui: yeah, that definitely needs to be on 1.14.  This is actually sort of a problem with Go in general...
<natefinch> sinzui: I'll ping the juju list and it'll get fixed ASAP... it's EOD for me, or I'd look more into it
<sinzui> I think the immediate solution is to update the release script to acknowledge and use dependencies.tsv. Pulling tip for each is zany
<natefinch> sinzui: yeah, that sounds like the right thing to do
<thumper> fwereade_: ping
 * thumper recalls fwereade_'s move and packing
<thumper> ah crap
 * thumper makes a mental note to check the logrotate code
<mramm> thumper: pong
<thumper> mramm: hey, see email about timing clashes :)
<mramm> ok
<mramm> I am also free later tonight, and might have to go out and help my friend get migraine medication
<mramm> so perhaps it will all work out for the best ;)
<thumper> mramm: so lets just try to touch base later?
<mramm> sure
<thumper> ok
 * thumper taking last daughter to the doctor
<hazmat> jam, fwiw filed bug 1226307  re tool upload
<_mup_> Bug #1226307: juju-core lazily get tools if from public bucket <juju-core:New> <https://launchpad.net/bugs/1226307>
<thumper> mramm: I'm around now for a bit if you are free
<mramm> I am
<thumper> otherwise it'll be after lunch
<thumper> got time now?
<thumper> mramm: I'm in the one on one hangout
<davecheney> bigjools: hey
<davecheney> can you fix permissions in gwacl
<bigjools> davecheney: helleau
<davecheney> https://launchpad.net/~gwacl-hackers
<davecheney> ^ can you add juju-hackers to that
<davecheney> or transfer ownership
<davecheney> please
<bigjools> who wants to own it?
<davecheney> juju-hackers
<davecheney> make the owner mark ramm
<davecheney> he loves code
<bigjools> I generally dislike teams being members of other teams
<bigjools> because it makes email management harder
<davecheney> bigjools: please, this is a matter of urgency
<bigjools> I'll add you and make you admin
<davecheney> ta
<bigjools> done
<bigjools> but consider the team mess - it's the reason I quit juju hackers
<bigjools> email from other teams that it's also part of.... PITA
#juju-dev 2013-09-17
<thumper> axw or wallyworld: Rietveld: https://codereview.appspot.com/13717044
<thumper> wallyworld: how's baldrick?
<axw> thumper: otp
<wallyworld> ok. stupid tail
<thumper> axw: ack
<thumper> stupid tail or stupid dog?
<wallyworld> both
<wallyworld> thumper: did you talk to william?
<thumper> wallyworld: yeah, but he was packing and moving yesterday
<thumper> wallyworld: landed the first part where it doesn't write to the agent conf
<thumper> and I don't need to any more
<wallyworld> ok
<thumper> so now we have just one place where the logging conf is saved
<thumper> and that is in state
<wallyworld> great
<thumper> so we start the machine and unit agents with --debug
<thumper> always
<thumper> and the logger worker updates as necessary
<thumper> as soon as it starts
<wallyworld> sounds good
<wallyworld> thumper: i'm not 100% across it - will unilaterally removing the file writer stuff up new environments? don't we want to only remove it for old envs?
<thumper> wallyworld: the new environments won't ever set it
<wallyworld> ok
<thumper> wallyworld: and unfortunately we have no way to know which version it is from
<thumper> no point asking if it is there
<wallyworld> ok
<thumper> the method returns an error if it isn't there
<thumper> but we just ignore it
<axw> sorry thumper was having my one-on-one with robbie ... reading back now
<thumper> np, wallyworld handled it
<thumper> was just a review
<axw> ah cool
<thumper> I have another coming
<thumper> wallyworld: re: an integration style test, no we don't have one, and I think the effort in all the wiring up is out of proportion to result
<thumper> given that all the parts are tested
<wallyworld> famous last words :-P
<wallyworld> ok though
<thumper> perhaps it is something that we want to add to the more "live tests" that we do daily with jenkins
<wallyworld> yeah
<thumper> wallyworld: well, we have tests for all the parts
<thumper> so I'm pretty confident there
<wallyworld> ok
<thumper> 24 files changed, 31 insertions(+), 158 deletions(-)
<thumper> that is a good metric I reckon
 * thumper waits for lbox to calculate the diff that LP has already done
<thumper> wallyworld or axw: https://code.launchpad.net/~thumper/juju-core/add-cleanup/+merge/185964
<thumper> lbox finally caught up
<thumper> the description has the link now
<wallyworld> just fixing some stuff, will look in a sec
<thumper> kk
<axw> thumper: just one question on rietveld
<thumper> shoot
<axw> no I mean, there's a question waiting for you on your MP
<thumper> oh
<thumper> ok
<thumper> replied
<axw> thumper: thanks, LGTM
<thumper> ta
<wallyworld> axw: whenever you have a moment, i've updated https://codereview.appspot.com/13725043/ to move the httpstorage as requested
<axw> thanks wallyworld, I'll take a look
<wallyworld> ta, no real urgency
<axw> wallyworld: I thought it was ok being in environs, since it's used by things other than the provider. doesn't really matter I suppose.
<wallyworld> axw: in environs, i'd prefer to keep stuff there generic. if it's ec2 specific, it belongs under the ec2 package imo
<wallyworld> i'd also prefer that stuff not use it outside the provider i guess
<wallyworld> since it is ec2 specific
<wallyworld> and general code should not depend on ec2 isms
<wallyworld> there's been way too much of that which has caused grief dealing with other providers
<axw> doesn't tools lookup still use this to fall back on?
<wallyworld> tools uses a http data source
<wallyworld> which is not ec2 specific
<wallyworld> i really wish the ec2 specific one would go away
<axw> ok. for some reason I thought FindTools still depended on it
<axw> wallyworld: ah, right. it's the sync code that I'm thinking of.
<wallyworld> yeah, that will be migrated to use http://juju.canonical.com as soon as that is ready
<wallyworld> so it's short term
<wallyworld> axw: thanks for the review, i'll address the issues
<wallyworld> i really wish Go had better debugging facilities
<wallyworld> feels like a bad 1970's timewarp sometimes. everything is sooooo primitive
<axw> wallyworld: heh :)   there is gdb support...
<wallyworld> but not good IDE support
<axw> I believe there's even some IDE integration with gdb/go
<axw> liteide?
<wallyworld> i use intellij
<wallyworld> liteide may work, i will need to take another look i guess
<wallyworld> wasn't much last time i looked from memory
<axw> wallyworld: it certainly isn't up to par with Java, but I'll take it over C++ any day for debugging
<wallyworld> really?
<axw> then again, I'm not much of an IDE person :)
<wallyworld> apart from java, which is first class, python is also great
<axw> yeah
<wallyworld> so how do you refactor etc with good ide support?
<axw> to  be fair, they're much older
<wallyworld> or navigate code, or find semantic usages etc
<wallyworld> well go is 4 or 5 years old now?
<axw> yeah, 4
<axw> erm, the hard way :)
<wallyworld> 4 years is plenty long enough for good tools to have emerged
<axw> not sure if you follow the mailing lists, but there was an announcement last week of a new tool called "The Go Oracle"
<axw> this will drive IDE developments
<axw> for doing callgraph analyses and whatnot
<wallyworld> yay, about time :-P
<axw> wallyworld: also, there's basic renaming/refactoring capabilities in gofmt
<axw> godoc gofmt
<axw> if you're not aware of it..
<wallyworld> i was not :-)
<wallyworld> although in 2013 "basic" isn't enough anymore :-)
<axw> yeah, it's certainly not all there
<davecheney> why the fuck is the juju-core repo 22mb ?!?!
<davecheney> who has been storing guttenberg in there
<davecheney> help
<davecheney> % bzr push  bzr+ssh://bazaar.launchpad.net/~go-bot/juju-core/1.14
<davecheney> bzr: ERROR: Cannot lock LockDir(chroot-80212880:///~go-bot/juju-core/1.14/.bzr/branch/lock): Transport operation not possible: readonly transport
<davecheney> cannot tag 1.14
<davecheney> cannot tag 1.14.0
<davechen1y> axw_: is there an issue for juju debug-log screwing up ?
<axw_> davechen1y: not that I'm aware of?
<davechen1y> ok
<davechen1y> i guess that one went through to the keeper
<davechen1y> but it's fixed ? right
<davechen1y> on 1.14.0 ?
<axw_> oh sorry
<axw_> yes there was one
<axw_> and yes it was fixed a while ago
<axw_> I thought you meant one outstanding
<axw_> I'll have a look for it
<axw_> davechen1y:  here's one https://bugs.launchpad.net/juju-core/+bug/1211147
<_mup_> Bug #1211147: Deploying service to bootstrap node causes debug-log to spew messages <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1211147>
<axw_> I can't remember what the other issue was...
<axw_> davechen1y: https://bugs.launchpad.net/juju-core/+bug/1212148
<_mup_> Bug #1212148: rsyslog accumulator strips newlines <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1212148>
<axw_> both merged in 1.13.2
<rogpeppe> mornin' all
<rogpeppe> i'll be out this morning for a doctor's appointment for a couple of hours (hopefully less), leaving in an hour.
<axw_> hey rogpeppe. okey dokey.
<rogpeppe> axw_: hiya
<rogpeppe> dimitern, jam: i've got a doctor's appointment now. back before the standup i hope.
<wallyworld___> jam: if you are interested - https://codereview.appspot.com/13632056 is part 1, part 2 to follow. part 2 will adjust tools fetching to use retry logic when necessary. the current work never uses retry logic for tools fetching. bootstrap is fast again :-)
<wallyworld___> well, upload tools is still slow
<wallyworld___> but tools seeking is fast
<rogpeppe> back
<mgz> jam: mumble?
<arosales> jam, ping
<davecheney> jam: help
<davecheney> i can't tag the 1.14 branch
<mgz> davecheney: what's up specifically?
<mgz> you should just be able to set your user to the bot and run tag against the remote branch as usual
<davecheney> mgz: hmm, maybe that was it
<davecheney> my history from the last tag is missing
<jam> davecheney, arosales: ?
<davecheney> mgz: bzr push  bzr+ssh://bazaar.launchpad.net/~go-bot/juju-core/1.14
<davecheney> what am I missing ?
<arosales> jam, davecheney needs some assistance with the tagging of 1.4
<mgz> ah, that's not creating the tag, you want to make the branch?
<arosales> 1.14, blocking the release needed today
<jam> davecheney: "bzr tag -d bzr+ssh://go-bot@bazaar.launchpad.net/~go-bot/juju-core/1.14"
<mgz> you need ://go-bot@
<jam> well, you need to specify the tag as well :0
<mgz> otherwise you're probably running it as dave
<jam> bzr tag -d bzr+ssh://go-bot@bazaar.launchpad.net/~go-bot/juju-core/1.14 -r -1 juju-core-1.14
<jam> natefinch: https://wiki.canonical.com/CDO/UbuntuServer/IOM-Lab is the "garage maas" that we have set up (I believe)
 * TheMue => lunch
<davecheney> jam: mgz thank you
<natefinch> jam: thanks
<davecheney> jam: mgz https://code.launchpad.net/~go-bot/gwacl/test
<davecheney> ^ is this lp:gwacl ?
<mgz> davecheney: nope, though it's probably a branch of it
<mgz> lp:gwacl is ~gwacl-hackers/gwacl/trunk
<davecheney> mgz: i'm confused the bot doesn't own the branch
<davecheney> the bot owns the test branch
<mgz> davecheney: the bot only owns branches that it manages
<davecheney> meh
<davecheney> no time for scenery today
<davecheney> need to get mr page a release
<arosales> mgz, jam, davecheney, once we have the source tar ball any way the ubuntu package can be built without directly (without going through saucy first)?
<arosales> sorry, let me restate
<arosales> mgz, jam, davecheney, once we have the source tar ball any way the ubuntu package can be built  directly (without going through saucy first)?
<davecheney> arosales: no, but I can build from the source of the tarball to test
<mgz> ...I'm still not sure I understand
<davecheney> might need some jiggery pokery to get the tools to work
<mgz> you can pull the tarball into the packaging branch and build that with the normal debian tools
<davecheney> mgz: good idea
<davecheney> that will work
<davecheney> but probably not necessary
<davecheney> i think I can fake the tools
<arosales> mgz, I think davecheney dputs the built packages once they are available in saucy
<davecheney> arosales: the build-release-tarball also does a smoke test compile of the source after building it anyway
<arosales> I was asking if we could just build directly, and avoid the dput step
<davecheney> arosales: no, i _want_ to do that, but we don't currently
<arosales> davecheney, ah so you currently don't wait for it to be available in saucy.
<davecheney> arosales: we need to wait for saucy _AND_ the ppa backports before we can announce the release
<davecheney> but we do not need to wait for either of those to test it
<davecheney> there will be ways of working around that
<arosales> technically I don't see why we need to wait for saucy. That was my questions about building directly
<davecheney> arosales: my mistake
<davecheney> i was not clear in understanding your goal
 * arosales isn't wording it well; mgz didn't understand either.
<davecheney> arosales: what do you want, not what do you want us to do :)
<arosales> once you have the source tar ball building the packages should be identical to what gets built in saucy.
<davecheney> arosales: correct
<davecheney> not should
<davecheney> _is_
<davecheney> the tarball contains all the source code
<davecheney> there are no external build dependencies
<arosales> Thus to help get the release in the ppa today would it be possible to build juju 1.14 outside of the saucy archive so it is available in juju/stable sooner?
<davecheney> arosales: no, that is not possible
<davecheney> we do not have a build recipe for that
<davecheney> i would *like* the juju source to contain the debian/ build recipe
<davecheney> but at the moment that is not done
<davecheney> ok 1/2 branches waiting on the bot to merge
<davecheney> the other can be submitted after the release
<arosales> davecheney, ok . . .
<davecheney> arosales: i have not been clear
<davecheney> the build release tarball contains the upstream source
<davecheney> the magical ubuntu debian parts are missing
<davecheney> so while you can build juju from the source
<davecheney> you cannot replicate the packaging
<davecheney> that magic is not something i have access to
<davecheney> however, I do not think this is a major hurdle for smoke testing
<arosales> smoke no, maybe for getting something out the door today.
<arosales> afaik anyone can build a ppa if they have the upstream source and the debian source package
<davecheney> arosales: yes, it is possible
<davecheney> no, we do not have the magic to do so
<davecheney> arosales: i am working as fast as possible
<arosales> davecheney, I know and its appreciated. I know your up late too. I was just inquiring about the getting the release into juju/stable step.
<davecheney> arosales: in theory it is possible to skip that step
<davecheney> at the 11th hour, without ever having tried it
<davecheney> i don't know how to do it
<davecheney> nor recommend it
<mgz> arosales: so, I've not found the ppa useful or a good recommendation
<davecheney> arosales: what do you need, and what time do you need it by ?
<davecheney> when is the announcement ?
<arosales> mgz, juju/stable?
<davecheney> will a binary .zip be sufficient ?
<mgz> but, it would be pretty trivial to change it to do basically what the release into saucy does, by just making it use a packaging branch straight up and merging the tarball into that
<davecheney> jamespage: hello
<arosales> davecheney, we'll need a ubuntu package.
<arosales> davecheney, time wise it was never set to an actual hour. If it is not available in the next 4 hours then we'll probably miss today as the UK folks go offline and they are the web team that will do the announce.
<davecheney> tarball is ~30 mins away
<davecheney> less
<dimitern> rogpeppe, ping
<rogpeppe> dimitern: pong
<dimitern> rogpeppe, I have a weird panic in a test when I run the full suite, but it's ok if I run each test on its own
<rogpeppe> dimitern: which suite?
<hazmat> so.. right now the most stable version of juju till the next tarball is ppa/devel
<natefinch> jam: FYI logging in to garage maas is not working.  Seems likely I don't have permission.
<rogpeppe> dimitern: and what's the panic?
<davecheney> arg, fucking build failed
<dimitern> rogpeppe, http://paste.ubuntu.com/6119111/
<davecheney> trying again
<dimitern> rogpeppe, provisioner - this is the code http://paste.ubuntu.com/6119112/
<hazmat> arosales, davecheney re new stable version.. afaics we should have 1.13.3 in stable ppa
<davecheney> hazmat: NO
<dimitern> rogpeppe, it doesn't happen with the deployer, which also uses PasswordChanger the same way
<davecheney> no, no no, a thousand nos
<hazmat> davecheney,  why not.. forget the  undocumented even/odd stuff..
<davecheney> hazmat: even if I could
<davecheney> the tools wouldn't work
<davecheney> really
<davecheney> no
<hazmat> hmm
<davecheney> this is not a good idea to be trying
<hazmat> trunk has a bunch of tools issues
<davecheney> hazmat: sure, we're not releasing trunk
<hazmat> davecheney, didn't you tell me to use 1.13.3 earlier today for a customer setup?
<davecheney> we are releasing 1.14.0 which is 1.13.3 + 4 small patches
<hazmat> ah.. ic
<rogpeppe> dimitern: does it happen consistently?
<davecheney> so, the best version of juju we have
<dimitern> rogpeppe, so far on each run
<davecheney> plus some azure fixes from brisbane
<davecheney> so it's even betterer
<dimitern> rogpeppe, tried updating mgo to r240, as the dependencies.tsv says - still the same error
<hazmat> davecheney, so what's the transition process from dev to stable?  last dev gets incremental fixes.. till its marked stable?
<dimitern> rogpeppe, would've tried with the latest mgo tip, but that just fails on its own tests
<davecheney> hazmat: as it stands, we develop on trunk under an unstable dev version
<davecheney> then a decision is made that a particular unstable tag looks good
<davecheney> so we branch from there
<davecheney> then start a new unstable series
<davecheney> so
<davecheney> 1.13.3 -> 1.14.0
<rogpeppe> dimitern: could you put a log print just before line 114?
<davecheney>        -> 1.15.0 -> 1.15.1
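The even/odd scheme hazmat alludes to earlier (odd minor versions such as 1.13.x and 1.15.x are development series, even ones such as 1.14.x are stable) can be sketched as a trivial check. The type and method names here are illustrative, not juju-core's actual version package:

```go
package main

import "fmt"

// number is a simplified semantic version triple.
type number struct {
	Major, Minor, Patch int
}

// isDev reports whether this version belongs to a development series,
// following the odd-minor convention mentioned in the chat.
func (n number) isDev() bool {
	return n.Minor%2 == 1
}

func main() {
	for _, n := range []number{{1, 13, 3}, {1, 14, 0}, {1, 15, 0}} {
		fmt.Printf("%d.%d.%d dev=%v\n", n.Major, n.Minor, n.Patch, n.isDev())
	}
}
```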
<rogpeppe> dimitern: the weird thing is line 35
<davecheney> arosales: 15 minute delay, the bot failed to run the tests correctly
<davecheney> trying again
<hazmat> davecheney, so basically we tag a version of trunk or dev tag as release series, and create a release series branch against that and then do bug fixes against till its stable?
<davecheney> if it fails, we'll go around the bot
<dimitern> rogpeppe, the log doesn't show at all
<dimitern> rogpeppe, what's weird about line 35?
<davecheney> hazmat: no
<davecheney> hazmat: docs/juju-release-process.txt
<rogpeppe> dimitern: provisioner.SetPasswords should make an API call, right?
<davecheney> hazmat: also https://docs.google.com/a/canonical.com/document/d/1bxekJclEzXT3b8fEmP2wHfZ2hiiYI-ojzBnr_2mVfpo/edit
<dimitern> rogpeppe, no
<davecheney> and about a million other google docs
<dimitern> rogpeppe, that's the server-side test
<hazmat> davecheney, if only they could be searched ;-) thanks
<davecheney> hazmat: i believe you can ask your govt. for assistance there
<rogpeppe> dimitern: oh, of course, sorry.
<hazmat> davecheney, nice. citizens only get guns.. info requires using it
<dimitern> rogpeppe, why is the session getting closed anyway? and why only on the last test
<rogpeppe> dimitern: but you're not seeing a log message that's printed just before line 114?
<arosales> davecheney, thanks for the continued work on it.
<dimitern> rogpeppe, ah, sorry I see it yes
<dimitern> rogpeppe, just after that the log continues with reset dummy environment
<rogpeppe> dimitern: so you see the reset *after* the log message?
<dimitern> rogpeppe, yes
<davecheney> arosales: no problem, it's what i;m here for
<davecheney> arosales: we're really close
<davecheney> i'd like to not cowboy the packaging
<davecheney> it is one of the few things that works well
<rogpeppe> dimitern: could you print out the callers of dummy.Reset?
<davecheney> and we'll only cock it up if we rush
<rogpeppe> dimitern: (you can use "code.google.com/p/rog-go/exp/runtime/debug".Callers(0, 20))
<rogpeppe> dimitern: e.g. log.Infof("dummy resetting; callers: %s", debug.Callers(0, 20))
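rogpeppe's suggestion uses his rog-go debug package; roughly the same caller dump can be produced from the standard library alone. A sketch (not the rog-go API, just an equivalent built on runtime.Callers):

```go
package main

import (
	"fmt"
	"runtime"
)

// callers returns file:line descriptions of the calling stack, similar in
// spirit to the rog-go debug.Callers helper, using only the stdlib.
func callers(skip, max int) []string {
	pc := make([]uintptr, max)
	// +2 skips runtime.Callers itself and this function.
	n := runtime.Callers(skip+2, pc)
	if n == 0 {
		return nil
	}
	frames := runtime.CallersFrames(pc[:n])
	var out []string
	for {
		fr, more := frames.Next()
		out = append(out, fmt.Sprintf("%s:%d %s", fr.File, fr.Line, fr.Function))
		if !more {
			break
		}
	}
	return out
}

func reset() {
	// Logging the callers here shows who triggered the reset.
	for _, c := range callers(0, 20) {
		fmt.Println(c)
	}
}

func main() {
	reset()
}
```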
<arosales> davecheney, ack we should follow the process
<dimitern> rogpeppe, just a sec
<davecheney> arosales: merged, tag updated, building tarball
<arosales> getting closer :-)
<hazmat> davecheney, that doc doesn't seem quite right.. there is only one dev branch and tags.. so my previous description seems accurate. but the doc states.. "For stable releases they are branched from a known working devel branch"
<davecheney> hazmat: for the 2 stable releases ive done
<davecheney> one of which I am doing right now
<davecheney> I branched from a devel tag
<davecheney> that is all I can tell you
<hazmat> davecheney, k
<davecheney> hazmat: the pirate code is more, um, a guideline
<natefinch> davecheney: nice
<dimitern> rogpeppe, http://paste.ubuntu.com/6119176/
<davecheney> arrr
<natefinch> arosales, davecheney:  I'm building the 1.14 juju client windows installer.  I have a windows build of the juju client that I made off of the 1.14 branch.   I believe that's all that is required.  is there anything I'm missing?   The windows installer code didn't actually make it into 1.14, but it's completely separate from the client code, so it shouldn't matter.
<davecheney> natefinch: just make sure you get the right revisions of GWACL et al
<dimitern> rogpeppe, it seems somehow TearDownTest gets called
<davecheney> the revisions are in the build-release-tools script
<davecheney> dependencies.tsv in the root of the 1.14 branch is incorrect
<davecheney> in fact
<davecheney> i'm going to delete it
<davecheney> arosales: release tarball: /home/dfc/devel/juju-core/1.14/scripts/build-release-tarball/juju-core_1.14.0.tar.gz
<davecheney> DONE!
<rogpeppe> dimitern: could you paste the log from running with -gocheck.vv
<arosales> natefinch, I defer to davecheney
<davecheney> jamespage: release incoming in ~ 5 mins
<dimitern> rogpeppe, http://paste.ubuntu.com/6119185/
<arosales> jamespage, do you have a rough estimate on when saucy could have 1.14 available ?
<jamespage> arosales, I'll pick it up tomorrow AM
<jamespage> need to checkin with the release team as well
<rogpeppe> dimitern: that is weird
<jamespage> but as it's 'stable 1.13' that should not be too much of a problem
<dimitern> rogpeppe, it seems it calls both SetUpTest and TearDownTest before trying the actual test
<rogpeppe> dimitern: could you push the branch - there are a couple of things i want to check
<jamespage> davecheney, arosales, mramm: speaking of which I really need to get what we want to ship in Saucy nailed down here - https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1219879
<_mup_> Bug #1219879: [FFe] juju-core to 1.14.0 stable release for Ubuntu 13.10 <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1219879>
<davecheney> jamespage: arosales i would like to assign that bug to mark ramm
<davecheney> does he have the bandwidth to deal with it ATM ?
<dimitern> rogpeppe, https://codereview.appspot.com/13720045
<arosales> jamespage, ack.  I will follow up on that bug. I think thumper-afk and jam had some ideas on bits needed for Saucy.
<davecheney> arosales: will you please hold the bug for the moment
<davecheney> jamespage: arosales https://launchpad.net/juju-core/1.14/1.14.0
<davecheney> sinzui: https://launchpad.net/juju-core/1.14/1.14.0
<davecheney> ^ sorry mate, stole your thunder
<arosales> davecheney, will do. I was mostly going to sync up with thumper-afk and jam on saucy and update bug comments.
<davecheney> jamespage: is it possible to build this release directly into the juju:ppa/stable PPA ?
<davecheney> for P, Q, R and S ?
<rogpeppe> dimitern: ok, i can reproduce the problem at least
<jamespage> davecheney, I'm just prepping for saucy now then I can backport (have a script)
<sinzui> davecheney, no
<dimitern> rogpeppe, good start :)
<arosales> davecheney, thanks for getting it built
<sinzui> davecheney, jam controls gobot? did you ask jam to add the tags to goose and 1.14 branch?
<davecheney> jamespage: BOJAH
<davecheney> sinzui: looks like he did gwacl
<davecheney> see https://docs.google.com/a/canonical.com/document/d/1DuQfYq3SB1MJigrnpcuKBkfJfMJaTstQc4bRzN5Q0Uo/edit#
<davecheney> i did the rest
<sinzui> I did gwacl because I had permission
<davecheney> ahhh
<davecheney> cool
 * davecheney table flips over our stupid permission model
<rogpeppe> dimitern: this is very odd - somehow the call to mgo.Query.One is triggering the TearDownTest call
<dimitern> rogpeppe, hmmm
<davecheney> https://code.launchpad.net/~dave-cheney/juju-core/9001-update-release-tarball-for-1.14.0/+merge/186024
<natefinch> arosales, davecheney: <unintelligible swearing>  my windows fixes didn't make it into 1.14... which means when you run juju on windows, what you get is "error: cannot determine juju home, neither $JUJU_HOME nor $HOME are set"
<jam> natefinch: why didn't those get landed?
<natefinch> they were landed the day after 1.14 was cut
<natefinch> ... after 3 landing tries failed
<natefinch> (from conflicts and then some bot problems IIRC)
<davecheney> natefinch: are you sure
<davecheney> oh
<davecheney> so they only landed on trunk
<davecheney> and they were not backported to 1.14 series
<davecheney> well, fuck
<natefinch> correct
<natefinch> indeed
<davecheney> arosales: what do you want to do ?
<davecheney> i recommend not cutting a new release
<mgz> 1.14.1 with some more things from trunk is fine
<davecheney> but comitting nates fix to 1.14
<davecheney> and he can do a dodgy .zip release
<mgz> right.
<davecheney> it'll be fine
<davecheney> natefinch: can you please propose the merge anyhoo
<natefinch> davecheney: sure
<davecheney> we'll figure out the policy regardless
<dimitern> rogpeppe, any more progress?
<davecheney> can I get a F'yeah for https://code.launchpad.net/~dave-cheney/juju-core/9001-update-release-tarball-for-1.14.0/+merge/186024
<rogpeppe> dimitern: just more weirdness
<mgz> davecheney: I don't like it much, because just fixing the release script is easy... but it seems okay for now
<rogpeppe> dimitern: i'm starting to suspect a compiler bug - i'll try against go tip
<jam> davecheney: I find it very inverted to have juju-1.14 as a label on gwacl/goose, but I can understand it from a pragmatic perspective.
<arosales> davecheney, yes we'll need to get nate fixes in. Sounds like a 1.14.1 is needed.
<dimitern> rogpeppe, wow.. these always come to me
<rogpeppe> dimitern: yeah
<rogpeppe> dimitern: i think somehow a stack frame may be being mashed
<davecheney> jam: it was something that happened without much thought
<natefinch> jam, mgz: how do I cherry pick revisions to merge into 1.14?  I have 1.14 checked out... but I don't want to waste people's time figuring out the bzr syntax
<arosales> davecheney, so is it correct to state that a 1.14.1 is needed?
<davecheney> a while back wallyworld___ and I started tagging goose with juju-x.y.z tags
<davecheney> and it snowballed from there
<mgz> natefinch: similar to your revert the other day
<davecheney> arosales: i'd like to avoid it
<jam> natefinch: cd $1.14 branch
<rogpeppe> dimitern: because i can put a defer func(){sleep(some time); log.Printf("returning")}() into mgo.Query.One and it sleeps and i see the print
<jam> natefinch: bzr merge -c REVINTRUNKITLANDED lp:juju-core
<davecheney> and cut a special .zip file for windows from the tip of 1.14 series
<rogpeppe> dimitern: but a log statement immediately after the return i never see
<dimitern> rogpeppe, right
<dimitern> rogpeppe, well, if it works with tip, we have something at least
<arosales> davecheney, will there be a tools mismatch if natefinch pulls from 1.14 tip?
<davecheney> arosales: no, they won't notice
<dimitern> rogpeppe, any ideas how to work around it?
<rogpeppe> dimitern: i need to characterise it first
<arosales> they = juju, correct?
<davecheney> because nate's build will still think it is 1.14.0
<arosales> davecheney, ack
<davecheney> to be sure, this is a BAD idea
<davecheney> but something we can live with
<davecheney> to avoid resetting the clock with the work jamespage is doing
<arosales> davecheney, ok
<jam> natefinch: with the caveat that you have to then backout your logrotaet changes again
<jamespage> davecheney, don't like the sound of that
<jamespage> I just uploaded to saucy
<jam> natefinch:  "bzr merge -c 1788 lp:juju-core" should be about right
<natefinch> jam: yep, had already taken that into account
<davecheney> jamespage: thank you
<davecheney> arosales: this is more important than a small bodge
<rogpeppe> dimitern: thank goodness the bug doesn't seem to be too sensitive to code changes
<davecheney> jamespage: i can cut 1.14.1 as soon as natefinch 's change is ready
<davecheney> can you upload that to sauce and backport today ?
<natefinch> jam: should I be merging directly into the 1.14 branch or a branch off of that?
<dimitern> rogpeppe, good
<davecheney> natefinch: checkout 1.14 branch
<davecheney> merge the change there
<natefinch> davecheney: cool
<davecheney> bzr push lp:~natefinch/juju-core/1000-fix-windows-build
<rogpeppe> dimitern: just building go tip
<jamespage> davecheney, later tonight yes - you caught me on a baby break :-)
<davecheney> then go to lp
<davecheney> jamespage: ok, we'll do it properly
<davecheney> sinzui: do you want to drive 1.14.1 ?
<dimitern> rogpeppe, did you manage to isolate it to a minimal code snippet?
<sinzui> davecheney, I suppose so.
<davecheney> sinzui: "no" is a perfectly acceptable answer
<rogpeppe> dimitern: ha ha
<dimitern> rogpeppe, what?
<davecheney> natefinch: is there a bug for the JUJU_HOME thing ?
<davecheney> i need to assign it to a 1.14.1 milestone
<natefinch> davecheney: I think so, lemme look
<rogpeppe> dimitern: no chance!
<sinzui> davecheney, Is dependencies.tsv really wrong in the 1.14 branch?
<rogpeppe> dimitern: (not yet anyway)
<natefinch> davecheney: at least indirectly
<davecheney> sinzui: yes
<davecheney> and we don't use it
 * rogpeppe now *actually* builds go tip
<davecheney> and we can always add back a correct one in the future
<davecheney> but an incorrect file is a hand grenade with the pin loose
<davecheney> sinzui: in the future the goal is for the release-tarball script (and everything else) to consume dependencies.tsv
<davecheney> but it doesn't today
<davecheney> so on this stable branch
<davecheney> i have voted to remove it
<davecheney> as a 2nd pair of underpants
<davecheney> natefinch: https://launchpad.net/juju-core/+milestone/1.14.1
<davecheney> please adjust the JUJU_HOME bug to include this series
<davecheney> or give me the original bug number and I'll do it
<jamespage> davecheney, backports for p q and r uploaded to stable ppa as well
<davecheney> jamespage: wonderful
<davecheney> sorry, you're going to have to do it all again in 40 minutes
<rogpeppe> dimitern: ok, the issue's still there in go tip
<sinzui> davecheney, understood. I see you removed dependencies.tsv from your branch to avoid confusion
<natefinch> davecheney: should I not be using lbox propose? It keeps trying to propose against trunk
<davecheney> natefinch: nah, that'll fuck up
<davecheney> just propose the branch by hand
<davecheney> sinzui: correct, this is just to avoid confusion
<davecheney> we can always add a correct one tomorrow
<dimitern> rogpeppe, bugger..
<davecheney> ... for small values of tomorrow
<natefinch> davecheney: https://code.launchpad.net/~natefinch/juju-core/014-fix-windows-build/+merge/186028
<rogpeppe> dimitern: now trying to isolate
<sinzui> davecheney, I only hesitate because I have just gotten up and finishing my first cup of coffee. I try not to commit when I don't have my wits about me
<dimitern> rogpeppe, great
<davecheney> sinzui: https://code.launchpad.net/~dave-cheney/juju-core/9002-set-stable-release-version-to-1.14.1/+merge/186029
<davecheney> sets the version to 1.14.1
<davecheney> sinzui: like i said, "no" is a perfectly acceptable answer
<davecheney> there will be many more releases
<davecheney> some less shotgun
<davecheney> natefinch: bloody hell, this isn't a small change
<natefinch> davecheney: 99% is mechanical, changing $HOME to a small function call
<davecheney> ok, jam rogpeppe dimitern etc, https://code.launchpad.net/~natefinch/juju-core/014-fix-windows-build/+merge/186028
<natefinch> davecheney: the comment is unfortunate, since it was written when the change was just a single line :/
<davecheney> can we get a +1
<davecheney> natefinch: found that bug ?
<davecheney> sinzui: i'm going to reuse the 1.14.1 release notes
<sinzui> understood
<sinzui> davecheney, LGTM
<davecheney> tradition dictates that i mention that 1.14.0 was replaced with 1.14.1
<davecheney> nobody seems to care
<rogpeppe> natefinch: please can you use lbox propose (you can use lbox propose -for lp:juju-core/1.14) so the changes can be seen in context
<davecheney> sinzui:
<davecheney> 1.11.4 is a small bugfix release for 1.11.3, please refer to the
<davecheney> release notes for 1.11.3 for full details.
<natefinch> rogpeppe: I tried that and it still tried to do it into trunk
<davecheney> ^ for example, this was one previous release note
<rogpeppe> natefinch: it worked ok for me recently
<natefinch> davecheney: I think I was hallucinating... no bug for that, but we do have: https://bugs.launchpad.net/juju-core/+bug/1195757
<_mup_> Bug #1195757: Package Juju for Windows <juju-core:Triaged> <https://launchpad.net/bugs/1195757>
<davecheney> natefinch: that's a paddling
<rogpeppe> natefinch: it might work now if you just do lbox propose
<davecheney> I have to AFK for 10 mins to meet my partner at the train station (it is late, and she's coming home on her own)
<davecheney> i will be back v shortly
<natefinch> rogpeppe: trying
<natefinch> rogpeppe: no luck
<natefinch> rogpeppe: it completes, but nothing is in rietveld
<rogpeppe> natefinch: did it print a codereview link?
<natefinch> rogpeppe:  no :/
<rogpeppe> natefinch: try "lbox propose -cr -for lp:juju-core/1.14"
<natefinch> rogpeppe: right, oops
 * natefinch forgot the -cr
<rogpeppe> natefinch: the -cr *should* be automatic, but...
<natefinch> rogpeppe:  still nothing printed out
<natefinch> rogpeppe:  and nothing in rietveld....
<rogpeppe> natefinch: odd. what does it print if you use the -verbose (or -debug) flag?
<natefinch> rogpeppe:  it's really spammy, but looks like it's bailing because it was already proposed.... http://pastebin.ubuntu.com/6119401/
<natefinch> rogpeppe: I gotta run for a few to help my wife with the kids, sorry
<natefinch> rogpeppe: already put her off for a half hour
<rogpeppe> natefinch: ok, np - i'll pull it locally and have a look in a bit
<natefinch> rogpeppe:  thanks
<rogpeppe> natefinch: currently trying to track down dimiter's bug
<davecheney> bak
<rogpeppe> dimitern: there's definitely *something* wrong going on. i've got it down to this so far: http://paste.ubuntu.com/6119414/
<davecheney> rogpeppe: natefinch i think it would be better not to overcomplicate things and just propose this via lp
<rogpeppe> dimitern: if i change line 42 to use s.machine not s.machines[0], the test passes
<davecheney> natefinch: rogpeppe please be mindful of jamespage 's time
<davecheney> the more screwing around we do
<davecheney> the more time he has to wait
<rogpeppe> davecheney: ok, i'm on it
<davecheney> rogpeppe: thanks
<sinzui> for fsck's sake...my computer just rebooted and mir blocked me from logging back in
<davecheney> you can take raring from my cold dead hands
<davecheney> i'm not falling for the unstable upgrade trap again
<jamespage> davecheney, I'm out for the next ~8 hours or so
<dimitern> rogpeppe, interesting
<jamespage> if you need more time I can do first thing UK time tomorrow
<dimitern> rogpeppe, so it might be a stack overflow kind of thing perhaps? with the slice present
<davecheney> jamespage: understood
<rogpeppe> natefinch: in config.expandTilde, shouldn't path.Join in expandTilde be filepath.Join ?
<rogpeppe> dimitern: no
<sinzui> saucy has been quite a naughty series. My keyboard is often unmapped and the audio drivers work on a roll of the dice
<rogpeppe> dimitern: it really seems like a compiler bug
<rogpeppe> dimitern: ha ha!
<rogpeppe> dimitern: i thought you meant something suitable for a stackoverflow.com question!
<rogpeppe> dimitern: i don't *think* it looks like stack overflow
<rogpeppe> dimitern: after i've done this review, i'll have a look at the assembly
<arosales> davecheney, so jamespage  uploaded 1.14.0 to the ppa but the client still needed a 1.14.1 release, which is in progress, correct?
<davecheney> arosales: yes
 * arosales thought natefinch was just going to use 1.14 tip . .  .
<davecheney> arosales: we changed our minds about 2 pages ago
<arosales> ok
<davecheney> jamespage prefered that we did it correctly and said there was no blocker in getting 1.14.1 done 1st thing UK time
<arosales> aside from two companies holding their PR announcements
<davecheney> arosales: i need you to give me clear direction
<arosales> I guess I let folks know we'll shoot for tomorrow
<dimitern> rogpeppe, ok, thanks for the help
<davecheney> i can stop the 1.14.1 process and as soon as nate's windows fix lands we can build a dodgy .zip version from tip
<arosales> davecheney, jamespage: do you think we'll have a 1.14.1 ready UK morning?
<arosales> davecheney, nah is already in progress
<davecheney> arosales: ok, understood, the plan is to continue to release a 1.14.1 as soon as possible
<arosales> davecheney, to confirm does 1.14.1 have natefinch fixes? Or is that still under review by rogpeppe ?
<davecheney> still in review
<davecheney> when it lands 1.14.1 will contain nates fixes
<sinzui> davecheney, who is adding the tags to goose and 1.14.x branch? You?
<rogpeppe> arosales: still in review, sorry - it's quite a substantial change
<davecheney> bot is batting 2 for 0
<davecheney> sinzui: I will do it
 * arosales not sure how davecheney is cutting a 1.14.1 release with code still under review
 * arosales goes to break the news that we are not releasing 1.14 today . .  .
<davecheney> arosales: stop
<davecheney> we _CAN_ release a 1.14.x today
<davecheney> we can bodge up a windows client release without going through the release process
<davecheney> arosales: i can cut the release as soon as the branch lands
<davecheney> the commands are queued up in my buffer
<davecheney> arosales: i am waiting on your word to try for a proper 1.14.1 or keep 1.14.0 that we have now and bodge in the windows zip fix
<davecheney> if we go for the latter option
<davecheney> we can fix it in short order with a proper 1.14.1 release on the next day
<arosales> it would be nice to have something that works today
<arosales> and then we can fix the bits behind the scenes by tomorrow
<arosales> davecheney, what if we go with the former option
<davecheney> arosales: both options need natefinch rogpeppe  to land that branch
<davecheney> going with a proper 1.14.1 probably adds another 8 hours delay
<davecheney> at minimum
<rogpeppe> davecheney: i've just finished the review
<davecheney> rogpeppe: sweet
<rogpeppe> davecheney: it LGTM with one minor issue
<davecheney> mark that sukka as approved
<davecheney> oh
<rogpeppe> davecheney: one place is using path.Join instead of filepath.Join
<rogpeppe> davecheney: as nate's not around, i think i might just fix it, and approve it myself
<rogpeppe> davecheney: could you confirm my comment on https://code.launchpad.net/~natefinch/juju-core/014-fix-windows-build/+merge/186028 please?
<natefinch> rogpeppe:  sorry... wrangling two kids right now... yes, you're probably right
<rogpeppe> natefinch: are you able to fix it and approve soon, or i could do it if you can't
<natefinch> rogpeppe: probably better for you
<natefinch> rogpeppe: sorry and thanks
<arosales> davecheney, lets proceed with the following:
<arosales> for today we use the 1.14.0 with a hacked 1.14.0 msft client installer
<arosales> tomorrow we update the links to use the 1.14.1 msft installer
<arosales> davecheney, ack?
<davecheney> arosales: ack
<davecheney> arosales: hold
<davecheney> i am locating the 1.14.0 debs to produce tools
<arosales> natefinch, I'll need a 1.14.0 tools that uses 1.14.0 tip download link
<davecheney> arosales: um, can you say again please
<rogpeppe> davecheney: approved
<davecheney> because we haven't bumped the version string in 1.14 trunk
<davecheney> any release produced from there thinks it is 1.14.0
<davecheney> rogpeppe: thanks
<arosales> davecheney, I was saying for today we can use the 1.14.0 release jamespage already uploaded with natefinch msft client that pulls from tip.
<arosales> after natefinch has the 1.14.0 build from tip we can proceed with a 1.14.1 proper release and I can adjust download links tomorrow.
<davecheney> arosales: yes, that is is correct
<davecheney> arosales: yes, that is also correct
<arosales> ok so now I just need natefinch to kick a 1.14.0 msft client build that pulls from tip.
<davecheney> ok, please hold
<davecheney> https://code.launchpad.net/~dave-cheney/juju-core/9002-set-stable-release-version-to-1.14.1/+merge/186029
<davecheney> waiting on this merge to fail
<davecheney> if it _succeeds_ i'll have to back it out
<sinzui> arosales, by adjust the download links, I think you mean remove the 1.14.0 tarballs from LP after the 1.14.1 files are uploaded. LP will show the new links because they are the latest uploads, and since they are the only uploads on the 1.14 series, those are the only links available for that series
<arosales> sinzui, no I mean downloads links of the msft installer from juju.u.c/install
<davecheney> arosales: sinzui which we haven't yet produced
<davecheney> so we can bodge that
<davecheney> sinzui: arosales ignore
<davecheney> what i just said was wrong
<davecheney> ignore it
<davecheney> i will _not_ be changing the tarball on the milestone
<davecheney> as jamespage has already submitted it
<sinzui> ah!
<davecheney> nate, do this to make yourself a new tarball
<davecheney> cd scripts/build-release-tarball/
<davecheney> change line 41, which currently reads
<davecheney> bzr-checkout lp:juju-core/1.14 $TAG launchpad.net/juju-core
<davecheney> to
<davecheney> bzr-checkout lp:juju-core/1.14 -1 launchpad.net/juju-core
<davecheney> then run
<davecheney> bash build-release-tarball.bash juju-1.14.0
<davecheney> this will produce a tarball that is identical to what we uploaded to the milestone, but follows 1.14 trunk
<davecheney> then you should be able to feed that to whatever produces your windows .zip build
<arosales> natefinch, note I'll need a msft download link to give the web team asap.
<davecheney> natefinch: ping me if my instructions are unclear
<rogpeppe> dimitern: i still don't know the underlying cause, but i've discovered a quick workaround for your problem (which also fixes a bug in your code)
<rogpeppe> dimitern: set s.machines to nil at the start of provisionerSuite.SetUpTest
<dimitern> rogpeppe, ah, ok will do, tyvm
<dimitern> rogpeppe, if you manage to isolate and file a golang bug report, please give me a link to put it in a comment there
<rogpeppe> dimitern: that's something you want to do anyway - otherwise you add 3 new machines in each test, ending up with 9 in total by the start of the last test
<natefinch> arosales, rogpeppe, davecheney: back, sorry for the delay
<rogpeppe> dimitern: tbh, you could probably just rename to SetUpSuite
<dimitern> rogpeppe, no, I need a SetUpTest with the machines created before each case
<rogpeppe> dimitern: ah, ok
<rogpeppe> dimitern: in which case you want to reset the slice
<rogpeppe> dimitern: aaaargh!
<rogpeppe> dimitern: i see the problem!
<rogpeppe> dimitern: it's not a compiler bug at all
<rogpeppe> dimitern: the above issue *is* your problem
<rogpeppe> dimitern: and that's it
<arosales> natefinch, no worries I understand daddy duty
<rogpeppe> dimitern: you're using Machines with stale state
<arosales> natefinch, I was just needing a msft client installer that uses 1.14.0 tip
<rogpeppe> dimitern: hmm, though...
<natefinch> arosales: no problem, working on it
<rogpeppe> dimitern: i still don't see why TearDownTest is being called
<arosales> natefinch, thanks, ping me when you have it.
<rogpeppe> dimitern: ah, yes, i understand that too now
<rogpeppe> dimitern: it all becomes crystal clear, phew
<davecheney> arosales: i have located the ppa's for 1.14.0
<davecheney> proceeding to make tools
<davecheney> lets hope this release bloody works
<arosales> davecheney, lol
<natefinch> arosales: http://ubuntuone.com/7aKY92pbRTXGe6IiSVzcMl
 * rogpeppe crosses his fingers for the release
<dimitern> robbiew, oh really?
<robbiew> rogpeppe: ^
<robbiew> lol
<rogpeppe> lol
<dimitern> oops :) sorry
<arosales> I did that yesterday
<rogpeppe> dimitern: yeah
<mgz> it's funny every time it happens... :)
<arosales> dang auto complete
<dimitern> quassel had better tab completion than xchat it seems
<dimitern> until it stopped working at all
<rogpeppe> dimitern: it would have helped the diagnosis quite a bit if mongo didn't panic in Session.cluster, but returned an error instead
<dimitern> rogpeppe, so you replaced the panic and caught it?
<rogpeppe> dimitern: no
<rogpeppe> dimitern: the panic was just a symptom of the bug in your code
<dimitern> rogpeppe, what's the bug?
<rogpeppe> dimitern: the bug is the one i pointed out above - you don't reset the slice in SetUpTest
<rogpeppe> dimitern: which means that the first machines that you call Refresh on are from a previous test, and so have a stale mongo connection
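The stale-slice pattern rogpeppe describes is easy to reproduce without gocheck. A minimal sketch, assuming a gocheck-style suite whose SetUpTest runs before every test; the types here are invented stand-ins, not juju-core's state package.

```go
package main

import "fmt"

// machine stands in for a state.Machine tied to a per-test connection.
type machine struct{ conn int }

type suite struct {
	conn     int
	machines []machine
}

// SetUpTest mirrors the buggy pattern: each test gets a fresh connection,
// but the machines slice is appended to rather than reset, so earlier
// entries keep stale connections from previous tests.
func (s *suite) SetUpTest(resetSlice bool) {
	s.conn++ // simulate a new mongo connection per test
	if resetSlice {
		s.machines = nil // the one-line fix: reset before appending
	}
	for i := 0; i < 3; i++ {
		s.machines = append(s.machines, machine{conn: s.conn})
	}
}

// stale reports whether s.machines[0] refers to an old connection.
func (s *suite) stale() bool { return s.machines[0].conn != s.conn }

func main() {
	buggy := &suite{}
	buggy.SetUpTest(false)
	buggy.SetUpTest(false) // second test: machines[0] is from test 1
	fmt.Println("buggy stale:", buggy.stale())

	fixed := &suite{}
	fixed.SetUpTest(true)
	fixed.SetUpTest(true)
	fmt.Println("fixed stale:", fixed.stale())
}
```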
<dimitern> rogpeppe, oh god, yes!
<rogpeppe> dimitern: innit
<dimitern> rogpeppe, so it's no compiler bug then
<rogpeppe> dimitern: no indeed.
<dimitern> robbiew, whew :)
<dimitern> robbiew, sorry
<rogpeppe> dimitern: and my bad
<dimitern> rogpeppe, I really ought to start typing 2 chars at least before pressing tab
<dimitern> 3 actually
<rogpeppe> dimitern: and my length of time to find the problem was exacerbated by the fact that i was seeing Query.One return - which it was, but only because a panic was happening
<rogpeppe> dimitern: your IRC client should be better behaved
<rogpeppe> dimitern: and choose the last-addressed username by default
<dimitern> rogpeppe, absolutely - quassel does that
<rogpeppe> dimitern: (mine seems to do that)
<rogpeppe> dimitern: Konversation works pretty well for me
<dimitern> rogpeppe, ah, there it is - just an option I need to tweak, and now it completes last spoken to nick
<natefinch> davecheney: is there a script or something that sets the release number in the code?  I need to get the windows installer script hooked into something like that, so we don't have to hand-update the version in the script
<davecheney> natefinch: there is a little logic in the package recipe that extracts the detail from version/version.go
 * davecheney finds link
<sinzui> okay, davecheney , natefinch I don't understand how we are avoiding a tools mismatch...oh, ^ are we tinkering with the version?
<davecheney> natefinch: http://bazaar.launchpad.net/~dave-cheney/juju-core/package/view/head:/debian/rules#L10
<davecheney> sinzui: no, no tinkering
<davecheney> the version/version.go file continues to report 1.14.0
<natefinch> sinzui: tinkering would be required for it to know it's a different version
<davecheney> it sounds like nate's build script doesn't extract the version automagically like the packaging does
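A sketch of the kind of extraction the packaging recipe performs against version/version.go; the variable name matched here is an assumption for illustration, not necessarily what the file declares.

```go
package main

import (
	"fmt"
	"regexp"
)

// versionRE matches an x.y.z version assigned to a variable named
// "version" in Go source text, roughly what the debian/rules grep does.
var versionRE = regexp.MustCompile(`version\s*=\s*"(\d+\.\d+\.\d+)"`)

// extractVersion returns the first version string found in src.
func extractVersion(src string) (string, bool) {
	m := versionRE.FindStringSubmatch(src)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	src := `package version

const version = "1.14.0"
`
	v, ok := extractVersion(src)
	fmt.Println(v, ok)
}
```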
<natefinch> davecheney: right now my build script is cd cmd/juju && go-windows-386 build
<davecheney> natefinch: that'll work fine
<davecheney> seeya mup
<sinzui> Are we incrementing to 1.14.1 *after* natefinch cuts the windows installer? Thus the windows release is different from the *nix release?
<natefinch> davecheney: good :)  ...and then open up a windows VM and build the installer from the stuff under scripts/win-installer    ...my intent is to try to get it done via wine and an actual script at some point
<davecheney> sgtm, let's boil one ocean at a time
<natefinch> davecheney: agreed
<davecheney> sinzui: http://paste.ubuntu.com/6119698/
<davecheney> ^ how to bodge the tools uploader for saucy universe packages
<sinzui> ! thank you davecheney
<davecheney> im sorry for the hack
<davecheney> following the various naming gyrations is complicated
<davecheney> arosales: et al, tools coming online very shortly
<davecheney> saucy is there
<davecheney> raring is coming
<davecheney> if anyone is running those series
<davecheney> and wants to test
<davecheney> now is the time
<davecheney> sudo apt-add-repository -r  ppa:juju/devel
<davecheney> sudo apt-add-repository ppa:juju/stable
<davecheney> sudo apt-get update && sudo apt-get install juju-core
<davecheney> make sure $(which juju) points to /usr/bin/juju
<arosales> davecheney, for aws only correct?
<davecheney> arosales: at this stage yes
<arosales> davecheney, ok
<davecheney> we can sync-tools in a sec
<davecheney> but lets make sure nothing is cocked up
<arosales> davecheney, ack
<davecheney> jam, mgz: are you able to sync-tools to hp cloud and canonistack ?
<mgz> davecheney: I should have the instructions and creds in my inbox somewhere
<davecheney> mgz: ta
<davecheney> mgz hold for a few mins
<davecheney> still getting Q and P tools
<mgz> sure.
<davecheney> natefinch: i'm getting the wind up here
<davecheney> how is your confidence on making the 1.14.0 windows release ?
<natefinch> davecheney: 100%.  I already gave the installer to arosales (at least, I gave him a link)
<davecheney> mgz: sync tools
<davecheney> arosales: 1.13.3 to 1.14.0 upgrade worked fine
<davecheney> natefinch: cool
<davecheney> i need to sign off very shortly
<davecheney> otherwise there won't be much of me left to sign on tomorrow
<natefinch> davecheney: thanks for all the hard work so late at night
<davecheney> natefinch: not a problem, it is what i am here for
<davecheney> mgz will confirm this is nothing like the hilarity of the 1.10.0 release
<mgz> that was indeed fun.
<davecheney> http://paste.ubuntu.com/6119756/
<natefinch> in case I get hit by a bus, here's the build process for the windows installer: https://docs.google.com/document/d/1WMm6lcUTDA4wZnmA8YzfSXyLnQRz6Fqrr5g2lDtZZes
<davecheney> OH SHIT
<davecheney> this is not good
<davecheney> well
<davecheney> it's not terrible
<davecheney> but fucking auto sync tools is extending the bootstrap time
<davecheney> mgz: is this because the 'waves hands' magic data in simple streams needs to be updated ?
<davecheney> hmm, only appears to affect ap-southeast-1
<davecheney> too bad
<davecheney> we'll live with it
<davecheney> bootstrap works fine on ap-southeast-2
<mgz> davecheney: maybe, I'm not up to date on what error messages are what with simplestreams
<davecheney> yup, just a problem with ap-southeast-1
<davecheney> we'll survive
<davecheney> ap-southeast-2 and us-west-2 work fine
<davecheney> this is another hilarious eventually consistent problem on aws
<sinzui> :(
<davecheney> arosales: i think, to the extent that I can test at 1 am
<davecheney> we're good
<arosales> davecheney, thank you sir for the late night hard push
<arosales> mgz, let me know when aws, hp cloud tools are sync'ed
<davecheney> if anyone wants to test in hp cloud
<arosales> I can work on Azure
<davecheney> it will work right now without tools synced
<davecheney> it will just take longer
<davecheney> to bootstrap
<davecheney> -v, always -v
<mgz> arosales: I'm assuming aws is done, because I'm using that as a source
<davecheney> natefinch: don't forget to port your patches for the windows fixes, path => filepath back to trunk
<natefinch> davecheney: thanks
<mgz> hm, sync-tools needs a -v
<mgz> having it just sitting there quietly makes me nervous
<mgz> okay, canonistack done
<davecheney> arosales: http://ec2-54-202-26-90.us-west-2.compute.amazonaws.com/?p=4
<davecheney> done
<mgz> and cute, 1.14 includes the armhf binaries, if canonistack grows an arm region :)
<davecheney> mgz: but only for saucy
<davecheney> atm
<davecheney> all: i have to sign off
<davecheney> be online in (insert short amount of time)
<sinzui> thank you davecheney
<davecheney> we'll do a proper 1.14.1 tomorrow
<mgz> hp... I need to check creds with arosales I think
<arosales> davecheney, woot
<mgz> davecheney: thanks, have a good night
<davecheney> night y'all
<arosales> davecheney, have a good night
<arosales> thanks for your work
<arosales> i'll sync with sinzui on 1.14.1
<arosales> mgz, do you need a password reset for hp?
<mgz> arosales: I'm going to check now
<mgz> can't remember where we left it last time
<arosales> ok, let me know.
<arosales> I think you said you were able to login, but we had to update your info.
<mgz> arosales: what user name do you have for me on that account?
<sinzui> arosales, Lp will not promote the 1.14x releases because they are on an older series: https://launchpad.net/juju-core lists the latest release from trunk
<mgz> the issue is my own account uses my canonical email address, so not sure how I log into the tools one instead
<sinzui> https://launchpad.net/juju-core/+download lists the release after it lists all the releases from trunk
<dimitern> ok, wrong channel
<rogpeppe> dimitern: lovely, looking
<dimitern> rogpeppe, https://codereview.appspot.com/13720045/ I have a review for you :)
<dimitern> rogpeppe, thanks
<gary_poster> dimitern, we like having you over there though :-)
<dimitern> gary_poster, I'm there all the time :)
<gary_poster> cool :-)
<rogpeppe> dimitern: ping
<dimitern> rogpeppe, here
<rogpeppe> dimitern: just wondering why the provisioner auth function allows access to any machine
<dimitern> rogpeppe, how else would you do it?
<rogpeppe> dimitern: doesn't that depend on whether the machine is an environmanager or not?
<dimitern> rogpeppe, hmm it should yeah
<rogpeppe> dimitern: as it is, any machine can remove any other machine...
<dimitern> rogpeppe, how about the lxc provisioner?
<dimitern> rogpeppe, it has to be able to change any machine, which has parentId == currently auth'ed machine's id
<rogpeppe> dimitern: that sounds reasonable
<dimitern> rogpeppe, ok, so I'll make the getAuthFunc smarter
<rogpeppe> dimitern: sgtm
<dimitern> rogpeppe, it has to be either an environ manager, hence - access to all machines, or a machine agent, and access only that machine's children
<rogpeppe> dimitern: seems reasonable. although...
<rogpeppe> dimitern: should the environ manager be able to remove machines several levels deep?
<rogpeppe> dimitern: i guess it probably should, as i suppose we should be able to force-remove a machine
<dimitern> rogpeppe, yeah, leave a fallback option
<rogpeppe> dimitern: i wonder if that should be the case for machine agents too
<dimitern> rogpeppe, what?
<rogpeppe> dimitern: so you're allowed access to all machines below the current machine
<dimitern> rogpeppe, I don't think this is easier
<rogpeppe> dimitern: i wasn't suggesting it because it was easier
<dimitern> rogpeppe, it's easy enough to check the parent of a machine, but all parents up?
<rogpeppe> dimitern: but because i think it might be more correct
<dimitern> rogpeppe, don't think so
<rogpeppe> dimitern: why should the global provisioner be able to force-remove a machine, but a machine-local provisioner not be able to force-remove a container?
<dimitern> rogpeppe, we shouldn't jump levels of agents like that
<dimitern> rogpeppe, it will be, but not more than 1 level deep
<dimitern> rogpeppe, same goes for the environ provisioner
<rogpeppe> dimitern: that means if you've got a totally broken instance/container, you won't ever be able to remove it, no?
<dimitern> rogpeppe, can you do that now?
<rogpeppe> dimitern: i don't know, but it's a definite issue
<dimitern> rogpeppe, if it is, it's not an agent api problem
<rogpeppe> dimitern: things do break, and we need to be able to clean up when they do
<dimitern> rogpeppe, it's more like a management cli/api thing
<rogpeppe> dimitern: hmm, you may well be right
<dimitern> rogpeppe, yeah, let's not overcomplicate things just now
<rogpeppe> dimitern: so the client does the removal
<rogpeppe> dimitern: sgtm
<dimitern> rogpeppe, well, the cli does that now - and it's the only way
<rogpeppe> dimitern: so i guess the environ manager *doesn't* get access to all machines
<dimitern> rogpeppe, only the ones without a parent?
<rogpeppe> dimitern: yeah
<dimitern> rogpeppe, ok, looks better to me this way, nicely symmetrical
<rogpeppe> dimitern: yeah
 * dimitern has to step out for 10m
 * dimitern is back
<dimitern> rogpeppe, so, about that review?
<rogpeppe> dimitern: still on it
<dimitern> rogpeppe, ah, ok, sorry
<rogpeppe> dimitern: reviewed
<dimitern> rogpeppe, cheers
<arosales> natefinch, juju-tools uploaded to Azure, when you have  a spare moment could you test msft client 1.14.0 with 1.14.0-juju-core in Azure?
 * rogpeppe finishes for the day
<arosales> mgz, did you get the hp tools uploaded?
<rogpeppe> g'night all
<natefinch> arosales: will do
<arosales> rogpeppe, night
<arosales> natefinch, thanks.
<natefinch> arosales: I presume normally, we'd do that *before* the announcement, right? :)
<arosales> yes, normally it would be part of the release process
<arosales> mgz, for aws I see all the tools at /tools
<arosales> http://juju-dist.s3.amazonaws.com/
<arosales> mgz, however for hpcloud I see the 1.14.0 juju-tools @ tools/releases
<arosales> for azure I followed the aws convention
<natefinch> arosales: bootstrapped, waiting for machine 0 to get out of pending state
<TheMue> *phew* all tests work
<TheMue> so, time to leave. good night all.
<natefinch> arosales: hmm didn't notice this before, but juju ssh doesn't work, because it expects to run the ssh command (which doesn't exist on windows)  I had always assumed we just used an ssh implementation in Go, but evidently not.
<arosales> natefinch, hmm
<arosales> natefinch, btw did your bootstrap come back with the default juju-tools bucket?
<arosales> on the precise release stream?
<arosales> natefinch, not sure how to handle ssh on windows but be good to log a bug for tracking/triage purposes
<natefinch> arosales: how can I tell what bucket it uses?
<natefinch> arosales: juju status gives me
<natefinch> environment: azure
<natefinch> machines:
<natefinch>   "0":
<natefinch>     agent-state: pending
<natefinch>     dns-name: juju-azure-x4upm0herf.cloudapp.net
<natefinch>     instance-id: juju-azure-x4upm0herf
<natefinch>     instance-state: Created
<natefinch>     series: precise
<natefinch> services: {}
<arosales> hmm, not sure post bootstrap
<natefinch> arosales: it hasn't ever gotten off pending
<arosales> during bootstrap you can see which tools are chosen during a --debug run
 * arosales will fire up an instance from ubuntu
<natefinch> ahh ok. I'll rebootstrap after I try deploying some stuff
<arosales> natefinch, without ssh on windows your nice page, https://juju.ubuntu.com/docs/getting-started-keygen-win.html, is invalid, correct?
<arosales> natefinch, well not invalid, just doesn't function atm
<natefinch> arosales: no no. That's still valid and needed to connect to the clouds... it's just that trying to do "juju ssh 0" won't work
<sinzui> natefinch, I bootstrap with the -v which tells you what was used.
 * sinzui looks for hidden log
<natefinch> yeah, seems like everyone uses -debug or -v ... seems like we should have some better default than no output
<sinzui> natefinch, As a developer working with unstable tools, it becomes common. Canonistack has good months and bad weeks. I like to know when it is a bad week
<arosales> natefinch, ah you are saying for msft clients connecting to clouds other than Azure
<natefinch> arosales: yep, exactly
<arosales> gotcha, just 'juju ssh 0' won't work
<natefinch> arosales: I think there's a major bug with the windows client
<natefinch> arosales: major is probably an understatement.  Critical? Catastrophic?  Will not function.
<arosales> :-(
<arosales> I don't think the juju tools got up there correctly
<arosales> I am re-uploading
<arosales> not sure if that is what you are hitting.
<natefinch> arosales: unlikely.  This looks like the client is telling the server to create directories with windows-style backslash path separators
<arosales> natefinch, ok, I'll let you debug then
<natefinch> arosales: I'll switch to EC2 to double check that it's not azure
<arosales> natefinch, ack
<arosales> well at least you hit that issue too
 * arosales types into wrong window
<hazmat> natefinch, you should be able to ssh into and have a look at the cloud init log
<hazmat> natefinch, you can't use juju ssh .. but you should be able to ssh into the dns name with an ssh client.. juju ssh uses the bootstrap node
<natefinch> hazmat: yeah, ssh is working, and ec2 shows the same symptoms when bootstrapped from the windows client
<natefinch> sigh
<hazmat> sinzui,  natefinch, incidentally  on the topic of using -v --debug all the time.. bug 1226786
<_mup_> Bug #1226786: mechanism to set default --params to commands <juju-core:New> <https://launchpad.net/bugs/1226786>
<hazmat> natefinch, can you pastebin /var/log/cloud-init-output.log
<natefinch> haz
<natefinch> hazmat: yep
<natefinch> hazmat: https://pastebin.canonical.com/97613/
<natefinch> hazmat: an example of the problem is   mkdir -p varlibjujudb/journal
<hazmat> jam, incidentally all the places you're doing for ssl cert support are all the same places we'll need to hit up again for proxy afaics
<hazmat> natefinch,  that does look suspicious, client path separator perhaps.. is juju running on the machine?.. the end state there looks sane (mongo running, juju state initialized)
<natefinch> hazmat: yeah, the path separator was my first thought.  I also see  \varlogjujumachine-0.log and  \var\lib\juju\server.pem as files in the root directory  (note, those are the filenames... not a path)
<natefinch> hazmat: mongo is running, but jujud is not
<hazmat> natefinch, we've got tons of filepath.Join which is going to be problematic
<natefinch> hazmat: yeah.... ironically, by using the "right" function... we're shooting ourselves in the foot
<hazmat> it would be nice to see the raw cloudinit, i'm curious how yaml is treating the \ separation
<hazmat> indeed
<natefinch> hazmat: I'm kinda surprised that the client is the one that defines the paths that get set up on the server
<hazmat> natefinch, its only for bootstrap's cloudinit
<hazmat> natefinch, although possible we'll see other issues with charm deploys and bundle assembly
<natefinch> hazmat: I guess that makes sense
<hazmat> since those are also client assembled
<hazmat> natefinch, so the quick fix for this release, is to make a utils.PathJoin which does the unix thing, and replace extant uses of filepath.Join
<hazmat> although we really need to check for other uses of filepath i guess since there are others uses of os.PathSep
<natefinch> hazmat: yeah
<hazmat> natefinch, and then revisit as we get further into windows support
<natefinch> hazmat: k. I'll start working on that
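hazmat's proposed quick fix could be as small as this sketch (the name `utils.PathJoin` comes from the chat above; the body here is an assumption, not necessarily the code that eventually landed):

```go
package main

import (
	"fmt"
	"path" // slash-separated joining, regardless of the client OS
)

// PathJoin builds a path destined for the remote (Linux) machine. Unlike
// filepath.Join, it always joins with '/', even when the juju client
// itself is running on Windows -- which is the bug being discussed.
func PathJoin(elem ...string) string {
	return path.Join(elem...)
}

func main() {
	fmt.Println(PathJoin("/var/lib/juju", "db", "journal")) // /var/lib/juju/db/journal
}
```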
<natefinch> hazmat: if it were me, I'd pull the current installer off the website... it'll just cause heartache and pain
<hazmat> natefinch, given we have a press release out..
<natefinch> hazmat: yes, but the big announcement was azure, not windows client.  I was looking for the windows client announcement, and almost didn't see it
<hazmat> i'd rather yank it to be honest, and replace it with a note for windows than have people dl something totally broken
<hazmat> arosales, ^
<hazmat> yeah.. its making its way through the news wires/pundit sphere now..
<hazmat> natefinch, i don't actually see the download though
<natefinch> hazmat: https://juju.ubuntu.com/install/
<hazmat> oh. there it is
<hazmat> jcastro, do you know how to update that till we get this fixed
<hazmat> the windows download link.. its off of juju.ubuntu.com which we could fix there.. but really we also want to yank the binary from http://assets.ubuntu.com/sites/ubuntu/latest/u/files/juju/juju-setup-1.14.0.exe
 * hazmat escalates via email
<hazmat> mramm, ^
<natefinch> hazmat: thanks, I was going to do the same
<natefinch> hazmat: also: Searching 784 files for "filepath.Join"  307 matches across 92 files
<mramm> arosales: ^^^^
 * arosales reading backscroll
<natefinch> arosales: short version: windows client is unusable
<hazmat> arosales, posted to canonical-juju
<arosales> hazmat, mramm, jcastro and I no longer have access to juju.u.c
<arosales> I can file an RT and hopefully the IS team have access, but other than that we'll need to await the web team.
 * arosales sure the IS does.
<mramm> I would file an RT, IS does have access!
<mramm> ;)
<arosales> s/IS/IS team/
<arosales> natefinch, is this a 1.14 specific issue you didn't see in your earlier testing?
<hazmat> mramm, WTF to the rescue ;-)
<natefinch> arosales: windows-specific ...  the version doesn't really matter.  Yes, I didn't see it in earlier testing. My testing was a lot lighter than it should have been. I thought it had already been tested, so I just tested the fixes I made.
<hazmat> natefinch, really we only need to the filepath.Join fix on cloudinit parts
<hazmat> natefinch, at least thats worth testing to see if that works by itself  for bootstrap and deploy
<arosales> RT filed
<hazmat> arosales, cool
<natefinch> hazmat:  yep, definitely the place to start.  *at least* those parts need to be fixed
<arosales> technically mark baker said in his blog availability tomorrow so natefinch you may have some time to cut a new release if you find the bug.
 * hazmat returns to prepping training
<natefinch> arosales: the problem is, it could be 100 spots in the code.
<arosales> natefinch, understood, just letting you know
<andreas__> natefinch: could line termination be an issue too? Just wondering, since it's another difference between windows and *nix
<natefinch> andreas__: could be.   The immediate problems I saw were definitely path separators, though.
<andreas__> yeah, sure, that was very visible
<andreas__> in a screenshot someone sent to the list
<andreas__> just wondering if the content is ok
<natefinch> me :)
<andreas__> ah :)
<andreas__> do you still have that instance up? Might want to run cat -vet on one of those files
<andreas__> should end in $, not ^M$
<andreas__> iirc
<natefinch> it's gone now... but in general, Go is actually really good about always just using \n and not \r\n, even on windows
<natefinch> andreas__: fixed a couple spots in cloud init where we definitely want unix line endings (or rather, line endings that match the server OS, not the client OS).... seeing what else breaks now
<natefinch> thumper: good morning
<thumper> hi natefinch
 * thumper trawls through the emails
<sinzui> natefinch, arosales do we want to file a bug about the win client path nonsense and target it to the 1.14.1 milestone?
<arosales> sinzui, yes as we'll need that but for the milestone and release
<arosales> sinzui, I don't have the error but I can file a bug and assign to natefinch
<arosales> 1.14 on Azure seems to be working on a sniff test
<arosales> http://juju-azure-8jdith6sql.cloudapp.net/
<arosales> sinzui, were you able to do any preliminary testing on HP?
<thumper> natefinch: how goes the windows path fun?
<sinzui> arosales, natefinch I think the bug just needs to state that the windows client needs to make unix paths when working with the bootstrap node.
<sinzui> arosales, I have not
<natefinch> thumper: I found at least one spot that needed to be fixed.. but still seem to be having problems
<thumper> natefinch: need any help or someone to talk to?
<arosales> bug 1226840 opened
<_mup_> Bug #1226840: Windows client needs to make unix paths when working with the bootstrap node <juju-core:New> <https://launchpad.net/bugs/1226840>
<natefinch> thumper: yeah... I don't know the cloud init stuff well.  Not to mention, my in-laws came over for the evening and it's about dinner time
<thumper> natefinch, arosales: does the bug have all the details?
<arosales> natefinch, mind putting your findings in the bug 1226840 and handing off?
<_mup_> Bug #1226840: Windows client needs to make unix paths when working with the bootstrap node <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1226840>
<natefinch> yep
<thumper> heh, the dreaded slash vs. backslash issue
<thumper> using filepath instead of path
<natefinch> thumper: yep
<thumper> I think path.Join always uses / yes?
<natefinch> thumper: yep
<thumper> but filepath.Join will use local
<thumper> natefinch: so I may need to go grab a windows laptop?
<thumper> at least I have one in the lounge
<thumper> natefinch: how hard is the windows setup?
<natefinch> thumper: you'd want to build on linux... I don't even know if it'll build on windows
<natefinch> thumper: using Dave's cross compile scripts, at least, that's what I did
<thumper> oh...
<arosales> thumper, if you want to see the error, the installer natefinch made is nice for windows
<thumper> natefinch: do you have a branch in progress?
<thumper> I've poked the cloudinit stuff a lot before
<natefinch> thumper: I have a couple changes, but they're really small
<thumper> natefinch: can you commit and push?
<natefinch> yep
<thumper> and give me the branch
<thumper> I could use it as a starting point
<thumper> and I'll work with Ian and Andrew and Dave to move forwards
<thumper> davecheney: you around?
<arosales> thumper, current installer is at http://ubuntuone.com/7aKY92pbRTXGe6IiSVzcMl  per natefinch
<thumper> arosales: no point me using that if it is broken
<natefinch> thumper: branch is here: bzr+ssh://bazaar.launchpad.net/~natefinch/juju-core/014-fix-windows-build/
<arosales> thumper, ack. Wasn't sure if you wanted to see the current error
<thumper> natefinch: I wonder how go get works on windows with git & hg branches
<thumper> arosales: can see it in the bug
<natefinch> thumper: you have to install git and hg
<natefinch> thumper: and bzr, obviously
<thumper> and also, should be easy enough to find the parts in cloud init
<thumper> if you know what you are looking for
<thumper> it is the old problem
<thumper> $1 to hit the machine with the hammer
<thumper> $9999 to know where to hit it
<arosales> thumper ok thanks. Any help on moving the bug forward would be a plus so we can move https://juju.ubuntu.com/install/ back to a download button.
<natefinch> thumper: really appreciate the help.   Is there anything I need to do to give you permission to work on my branch?
<thumper> natefinch: all I do is branch yours and commit locally
<thumper> can then push to my area
<thumper> and it has your changes
<thumper> DVCS FTW
<natefinch> thumper: btw, I made a "utils.JoinServerPath" so that we could 1.) find all the spots where we're intentionally not using filepath.Join, and 2.) use it later in case we ever have the client on linux and server on windows (not implemented, but available)
<natefinch> thumper: ahh, right. awesome
<thumper> natefinch: ah interesting point
<thumper> although
<natefinch> thumper: right now it just passes through to path.Join
<natefinch> thumper: and the couple of spots in cloud init that use it seem to fix the problems I saw with files in root with backslashes...  but it seems to never finish bootstrap
<natefinch> (even with the new fix that is)
<natefinch> that's where I stopped
<thumper> otp
<thumper> ok
<thumper> natefinch: so you are bootstrapping ec2 from a windows box?
<natefinch> thumper: correct
 * thumper nods
<natefinch> thumper: also tried azure, same deal
<thumper> ok, I'll take a look and talk with davecheney about his cross compile stuff
<thumper> and grab one of the kids laptops to test on
<arosales> thumper, note I didn't see this problem on 1.14 with ubuntu on Azure, and davecheney tried aws
<arosales> I'll bootstrap hpcloud real quick too
<thumper> arosales: you *don't* see the problem?
<natefinch> thumper: cross compile is pretty easy.  basically just need  to be running Go from source, run Dave's script to generate the different builders, and then you can run "go-windows-386  build"   (or generically go-OS-arch  in place of regular go tool to run that tool)
<thumper> natefinch: is your branch based on 1.14 or trunk?
<arosales> thumper, correct, I do _not_ see the problem on ubuntu clients
<natefinch> thumper: the one I linked is based on 1.14
<thumper> so who does see a problem?
<natefinch> thumper: he's not running the client from windows
<thumper> ah
<thumper> in which case arosales, you don't count
<thumper> natefinch: hmm, I'm not running go from source
<thumper> I may ask davecheney to do the actual compiles for me :-)
<thumper> hmm...
 * thumper starts
<thumper> natefinch: we aren't likely to use cloudinit on windows servers
<thumper> natefinch: so using a server path there not such a big deal
<thumper> natefinch: but useful to have the method for all the agents
<arosales> thumper, just passing along a data point.  I'll let you get to it.
<thumper> :)
<thumper> ack
<natefinch> thumper: thanks again for the help.  I'm going AFK for dinner, but will check in after. You can also throw builds at me to run until Dave gets online.  he may be in late, he was up until 1am.
 * thumper nods
<thumper> found and fixed one
<natefinch-afk> thumper: nice
<thumper> in a way, this is kind of funny
<thumper> natefinch-afk: lp:~thumper/juju-core/014-fix-windows-build, I have just used path.Join in cloudinit and upstart, as those are linux specific as far as I know
<thumper> and I think we need more thought about supporting multiple server operating systems
<thumper> I also added a trace to write out the cloud init that is generated
<thumper> so the commandline can see the bootstrapped cloudinit
<thumper> davecheney: can you ping when you arrive?
<thumper> davecheney: although if you are starting late after your late night, you might hit me at the gym, so I'll reply when back
<thumper> natefinch-afk: at least writing out the cloudinit will let us eyeball the setup so we can see if there are other places where we are missing slash/backslash tweaks
<thumper> simplest way to see it is to do this:
<thumper> juju bootstrap --log-config=juju.environs=TRACE --show-log
<natefinch-afk> thumper: back
<natefinch-afk> thumper: good thinking about printing out cloudinit ... probably handy to have anyway
<thumper> gah, I hit the 1.14 / trunk azure thing trying locally
<thumper> natefinch-afk: yeah, I think so
<thumper> and since it is at TRACE level
<thumper> you have to explicitly ask for it
<thumper> --debug won't show it
<thumper> natefinch-afk: there was another place in cloudinit you had missed, but also the upstart service scripts
<thumper> it seems that go tries to be magic
<thumper> if you call filepath.Join('/var/lib/juju', 'blah') on windows it gives you '\var\lib\juju\blah'
<thumper> at least that is what it looks like
<thumper> not tried it
<natefinch-afk> thumper: ug... that seems like too much magic.
<thumper> I think that is why we see 'varlibjujudb'
<thumper> as the \v -> v
<thumper> weird that sometimes the slashes get switched completely
<thumper> and sometimes, just bits of it
<natefinch-afk> it's interesting that in certain spots we were getting no separator and certain spots we got backslashes
<natefinch-afk> yeah
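The 'varlibjujudb' mangling described above is consistent with the shell consuming unquoted backslashes as escape characters when the cloudinit script runs (a minimal demonstration, assuming bash-style quoting):

```shell
# Unquoted backslashes are consumed by the shell (\v -> v, \l -> l, ...),
# so a Windows-joined path collapses into one run of characters --
# hence 'varlibjujudb' on the instance:
printf '%s\n' \var\lib\juju\db      # prints: varlibjujudb

# Double-quoted, the backslashes survive, which matches the literal
# '\var\lib\juju\...' file names that showed up in the root directory:
printf '%s\n' "\var\lib\juju\db"    # prints: \var\lib\juju\db
```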
<thumper> if you are able to compile and run that branch, with the logging and pastebin the cloudinit, we can eyeball it while it tries to start
<thumper> natefinch: in two weeks, I'll be an hour closer to you
<thumper> as nz hits UTC+13
<natefinch> heh
<thumper> seems that there is a month of that, then we hit another hour closer as you fall back an hour
<thumper> hi davecheney
<thumper> and your alter ego davechen1y
<natefinch> thumper: error: flag provided but not defined: --show-log
<thumper> ah... yeah
<thumper> use --debug instead
<thumper> but the log-config should still work
<thumper> --show-log is obviously just on trunk
<natefinch> thumper: <unintelligible swearing>   why the hell does the command line's scrollback default to like 200 lines :/
<thumper> haha
<thumper> can you set it to bigger?
<natefinch> thumper: yeah
<natefinch> default is 300.... wtf is this, 1990? :/
<natefinch> thumper: http://pastebin.ubuntu.com/6121592/
<natefinch> thumper: apologies ahead of time for line ending munging
<natefinch> thumper: actually sort of nice for the certs, though :)
 * thumper reads
<thumper> natefinch: ok, that looks like it should work
<thumper> lets wait and see
<natefinch> thumper: looking good.... machine 0 is up
<thumper> \o/
<thumper> now the rest of the cloudinit bits are created by machine 0
<thumper> so there shouldn't be any more issues
<thumper> lets get this merged into 1.14
<thumper> and then into trunk
<thumper> natefinch: I can push and propose mine, and you can review if you like
<natefinch> sounds good
<thumper> oh ffs
 * thumper just uses LP
<natefinch> haha
<natefinch> thumper: just in case, here's the contents of cloud-init-output.log: http://pastebin.ubuntu.com/6121614/
<thumper> natefinch: https://code.launchpad.net/~thumper/juju-core/014-fix-windows-build/+merge/186179
<thumper> natefinch: so you were able to get a successful 'juju status' from windows?
<natefinch> yep
<thumper> awesome
<natefinch> thumper: http://ec2-23-20-77-242.compute-1.amazonaws.com/wp-admin/install.php
<natefinch> thumper: end to end.  makes me happy.
<thumper> \o/
<thumper> natefinch: fyi, you don't need to claim the review before you do it, if you are a member of the team that has a pending review, and you provide a review status with the comment, you claim it automagically
<natefinch> cool
<arosales> wallyworld___, hello do you have an RT open for hosting juju-tools simple streams?
<wallyworld___> arosales: yes - you were cc'ed on it :-)
<wallyworld___> i can look up the number
<wallyworld___> 63925
<arosales> wallyworld___, thanks for reminding me. I'll post to that RT
<wallyworld___> ok. hope they act on it soon
<natefinch> thumper: reviewed, approved.  I gotta close it up for the night
<wallyworld___> thumper: looks like i missed some fun with windows. i skimmed some of the backscroll
<natefinch> wallyworld___: fun.... yeah.
<natefinch> I'm at EOD. Thanks for the help thumper.
<davecheney> sinzui: ping, we should tool up for a 1.14.1 release later this week
<davecheney> just as a tester
<thumper> wallyworld___: yeah
<thumper> wallyworld___: there is a clear difference between something running on windows and not crashing, and having the tool actually work
<thumper> wallyworld___: magic path issues
<wallyworld___> sigh
<wallyworld___> unix was there first
<thumper> expect more pain like this when we want to have the server be windows
<wallyworld___> why didn't windows stick to convention
 * thumper shrugs
<thumper> I expect that msdos just followed the earlier dos
<thumper> and it flowed from there
<wallyworld___> cpm
<wallyworld___> thumper: i have a thing at the school today at the same time as the code review meeting
<thumper> ok
<wallyworld___> so we could reschedule till tomorrow
<thumper> did you want to do it earlier?
<thumper> I'm also happy to reschedule
<wallyworld___> i'd prefer to do some coding :-)
<thumper> haha
<wallyworld___> i want to get some bootstrap things done
<thumper> ack
<wallyworld___> should be finished today though, hence clear my plate wrt that issue
<wallyworld___> i just wish my branch had been looked at by the ocr last night :-(
<wallyworld___> sigh
<thumper> wallyworld___: which branch, I could look
<wallyworld___> thumper: that would be neat, but be careful what you wish for :-) https://codereview.appspot.com/13632056/
<wallyworld___> i had a high level pre imp with roger, but the exact implementation was not discussed, just the high level removing of retries in s3
<thumper> wow
<wallyworld___> it's a lot of files touched, but only a few lines in each
<thumper> I'll look when back from the gym
 * thumper nods
<wallyworld___> ok, thanks muchly
<wallyworld___> bootstrap is fast again now :-)
<wallyworld___> and live tests on ec2 pass
<wallyworld___> but they sure do take ages to run
<thumper> \o/
<thumper> I know what you mean about wanting to code again
<wallyworld___> yeah
<thumper> I've now cleared my plate from all previous pending branches
<thumper> and was going to start with the kvm stuff this morning
<thumper> but saw the windows bootstrap issues
<wallyworld___> cool
<thumper> so helped out there
<thumper> will look at your review this arvo, and start poking kvm
<wallyworld___> ok, ta
#juju-dev 2013-09-18
<davecheney> natefinch, thumper: https://bugs.launchpad.net/juju-core/+bug/1226840
<_mup_> Bug #1226840: Windows client needs to make unix paths when working with the bootstrap node <juju-core:Fix Committed by natefinch> <https://launchpad.net/bugs/1226840>
<davecheney> does this need to be backported to 1.15.0 (trunk series ?)
<axw> davecheney: disregard my email. I just checked the PPA myself
<axw> I would like to know where go gets picked up from though
<davecheney> axw: ok
<davecheney> axw: are you saying, when we build into juju:ppa/stable, which version of Go does it use ?
<davecheney> /s/saying/asking
<axw> aye
<davecheney> ok, that is level 11 magic
<davecheney> me finds link
<axw> :) thanks
<davecheney> https://launchpad.net/~juju/+archive/stable/+edit-dependencies
<davecheney> this stable ppa has a dependency on juju-golang
<davecheney> in effect, juju-golang is inserted into the apt sources
<davecheney> before apt-get install golang-go
<davecheney> so that is how we build with a go that isn't in any of the series'
<thumper> davecheney: yes
<thumper> davecheney: I'll do it
<thumper> davecheney: I tried proposing just that change to lp:juju-core, but it had heaps of conflicts, so I left it until after the gym
<thumper> back now
<davecheney> thumper: cool, thanks for keeping on top of it
<davecheney> i love the smell of backports in the morning
<axw> davecheney: thanks
<davecheney> axw: so that is how it works, i'm not sure if that answers all the questions
<axw> davecheney: yes, I was just curious how it works
<axw> davecheney: I didn't realise the PPAs built with CGO_ENABLED=0
<davecheney> axw: if they are, it's a massive bug
<axw> davecheney: well the dev PPA binaries are statically linked
<davecheney> axw: say again
<davecheney> that should not be the case
<davecheney> the whole reason for this dependent ppa stuff was to accommodate the (now deprecated) gwacl cgo dependency
<axw> davecheney: I just downloaded the raring 1.13.3 dev build
<axw> and "juju" is statically linked
<axw> davecheney: I only care because net.LookupIP breaks if you feed it an IP in non-cgo
<axw> htus I
<axw> erk
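A common guard for the LookupIP issue axw mentions is to short-circuit literal IP addresses before hitting the resolver at all (a hypothetical workaround sketch, not necessarily what juju-core did):

```go
package main

import (
	"fmt"
	"net"
)

// lookupIPWorkaround resolves a host name to IPs, but short-circuits when
// the input is already a literal IP address, so resolvers that mishandle
// literal IPs (e.g. the pure-Go path in old non-cgo builds) never see it.
func lookupIPWorkaround(host string) ([]net.IP, error) {
	if ip := net.ParseIP(host); ip != nil {
		return []net.IP{ip}, nil
	}
	return net.LookupIP(host)
}

func main() {
	// 192.0.2.1 is a TEST-NET address: parsed locally, no DNS query made.
	ips, err := lookupIPWorkaround("192.0.2.1")
	fmt.Println(ips, err) // [192.0.2.1] <nil>
}
```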
<davecheney> axw: sudo apt-add-repository ppa:juju/juju-golang
<davecheney> sudo apt-get install golang-go
<davecheney> please check
<davecheney> and raise a bug
<axw> ok
<davecheney> there is so much shit breaking i cannot keep on top of it
<axw> davecheney: no worries :)
<davecheney> lucky(~/go/src) % file /usr/lib/juju-1.14.0/bin/juju
<davecheney> /usr/lib/juju-1.14.0/bin/juju: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), not stripped
<davecheney> does not appear to be the case
<davecheney> i am confused that you are seeing different results
<axw> davecheney: https://launchpad.net/~juju/+archive/devel/+files/juju-core_1.13.3-1~1737~ubuntu13.04.1_amd64.deb
<axw> get that
<davecheney> quantal ?
<axw> raring
<davecheney> sorry
<davecheney> axw: could you please raise an issue
<davecheney> otherwise i'm likely to get ... SQUIRREL!
<axw> davecheney: yep I will
<axw> :)
<axw> ooh look at the dog with the fluffy tail
<davecheney> PONY!
<axw> davecheney: https://bugs.launchpad.net/juju-core/+bug/1226902
<_mup_> Bug #1226902: ppa builds are built without cgo <juju-core:New> <https://launchpad.net/bugs/1226902>
<davecheney> ta
<thumper> davecheney:  https://codereview.appspot.com/13241053
<davecheney> thumper: LGTM, thanks
<axw> davecheney: golang-go has CGO_ENABLED=1, so nfi
<davecheney> that is batshit crazy
<davecheney> is it blocking you today ?
<axw> davecheney: I can work around it for now
<davecheney> ok
<davecheney> thumper: when you close this one https://bugs.launchpad.net/juju-core/+bug/1226840
<_mup_> Bug #1226840: Windows client needs to make unix paths when working with the bootstrap node <juju-core:In Progress by thumper> <https://launchpad.net/bugs/1226840>
<davecheney> please mark it fixed against 1.14.1 and 1.15.0
<davecheney> or something
<thumper> davecheney: cleaned up the bug showing 1.14 series target, with 1.14.1 milestone
<thumper> taking main task as trunk
<davecheney> thumper: thank you, you are a very impressive man
<davecheney> how do you do that
<davecheney> i usually say 'target to series, then choose trunk and 1.14'
<davecheney> how do you do it ?
<davecheney> axw: could I draw your attention to https://canonical.leankit.com/Boards/View/103148069/106844170
<thumper> I just said "target to series" and selected 1.14
<davecheney> and ask that if it won't happen, you retarget the issue
<davecheney> and not trunk
<davecheney> maybe that is where I was going wrong
<thumper> then set the milestone on that for 1.14.1
<thumper> no, didn't select trunk
<thumper> that way the main bug task stays there and active
<thumper> and set the milestone for the main task to be 1.15
<axw> davecheney: if I fix this issue in time, I'll get onto that
<axw> good chance it won't happen, in which case I'll retarget
<thumper> davecheney: probably either is fine
<davecheney> axw: i don't think this is critical for 1.15.0
 * davecheney makes a 1.15.1 milestone
<davecheney> thumper: wallyworld___ could you please put your heads together and reply to https://lists.ubuntu.com/archives/juju-dev/2013-September/001591.html
<wallyworld___> ok
<wallyworld___> if i can get my next 2 branches landed, we'll be good from my perspective
<wallyworld___> i'll reply in a bit
<davecheney> cool
<thumper> davecheney: ack
<thumper> davecheney: all my pending stuff is landed now
<thumper> so a +1 from me,
<thumper> just reviewing wallyworld___'s branch now
<thumper> I think wallyworld___ is trying to go for a record of trailing underscores
<wallyworld___> i've had longer
<wallyworld___> that's what she said
<davecheney> wallyworld___: the door is over >>>>> there
<thumper> heh
<wallyworld___> i'm here till Wednesday
<davecheney> wallyworld___: https://twitter.com/RikerTips/status/372018041787138049
<wallyworld___> lol
<wallyworld___> even though i *hate* twitter
<wallyworld___> i hope the IPO fails
<wallyworld___> i also hate facebook and anything else vaguely related
<bigjools> shall we get off your lawn?
<wallyworld___> yes!
<wallyworld___> or i will hose you
 * bigjools tingles with excitement
 * wallyworld___ does too
<thumper> wallyworld___ is going to eschew all technology made after 1986
<wallyworld___> noooo, just social media, which is pointless crap
<davecheney> thumper: have you seen my 8 track player ?
<wallyworld___> who cares if so and so took a huge dump this morning and then brushed his teeth. seriously
<wallyworld___> mindless drivel
<davecheney> https://launchpad.net/juju-core/trunk
<davecheney> ^ is it possible to hide the old milestones
<davecheney> this list is longer than wallyworld___
<thumper> davecheney: not sure, I'd ask SteveK or wgrant
<thumper> hmm...
 * thumper wonders what wallyworld is going to say about this review
<wallyworld> oh dear
<thumper> wallyworld: done
<wallyworld> thanks i think :-)
<wallyworld> thumper: i *really* wanted to have a storage subpackage under environs, then we would say "storage.Get" etc
<wallyworld> but it was not possible
<thumper> because?
<wallyworld> because someone thought it a great idea to dump all the interfaces in environs
<wallyworld> -> import loops
<wallyworld> i tried to explain that in the covering letter
<thumper> ok, as I see it you have two choices
<wallyworld> i don't know if it's a Go thing or not
<thumper> keep in environs but with more descriptive names
<wallyworld> but it seems like people don't like packages
<thumper> or make a package with the interface defined in storage
<thumper> I talked with fwereade about this
<wallyworld> i'll see what can be done
<thumper> and we decided it would be better to move all the interfaces into a different package
<thumper> to make it so we don't have loops
<wallyworld> yes!
<thumper> the alternative is to have potentially diverging interface copies
<wallyworld> i reckon if some people had their way, we would just have a single juju package
<thumper> haha
<wallyworld> it's not funny
<wallyworld> but sad
<thumper> yeah it is
<wallyworld> sad, sad, sad
<thumper> school run
<wallyworld> thanks for reviewing
<thumper> np
<axw> thumper, wallyworld: are we on for a package review in half an hour?
<wallyworld> axw: thumper was supposed to reschedule - i have a school concert thing at the same time today
<axw> ok
<davecheney> https://codereview.appspot.com/13577044
<arosales_> Anyone able/up for building  the juju msft client per nate's instructions @ https://docs.google.com/document/d/1WMm6lcUTDA4wZnmA8YzfSXyLnQRz6Fqrr5g2lDtZZes/edit?pli=1#heading=h.haqccf144cr7
<arosales_> or perhaps we should just wait for natefinch in the morning us time as he has the env set up.
<thumper> axw: there are a few ways to link your branch https://code.launchpad.net/~axwalk/juju-core/lp1225825-netlookupip-ip/+merge/186229 to bug 1225825
<_mup_> Bug #1225825: add-machine fails when using IP in ssh: target <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1225825>
<axw> thumper: yeah sorry, just forgot
<thumper> axw: when you are making a commit, you can go "--fixes lp:1225825"
<thumper> and then the branch scanner will pick that up
<thumper> or you can do it from the bug or branch page
<axw> ah cool
<axw> I usually -bug in lbox
 * thumper hopes to stab lbox in the heart
<axw> :)
<thumper> axw: if you do it from the branch page, and you have something that looks like a number in the branch name
<thumper> lp takes a guess and suggests that for the bug number
<axw> yup, quite handy
<axw> linked now
<axw> thanks
<thumper> ta
<thumper> axw: what do you most urgently need reviewed?
<thumper> I'm on a roll
<axw> thumper: that one would be good to get into 1.15.0
<thumper> ok
 * thumper looks now
<axw> thumper: after that, everything is to do with null provider/manual bootstrap
<thumper> axw: what about manual bootstrap ?
<axw> probably best if all the storage stuff gets sorted first
<axw> thumper: I mean, all my remaining MPs
<axw> are to do with that
<thumper> axw: give me a queue to work through
<axw> sure, just a moment
<axw> https://code.launchpad.net/~axwalk/juju-core/localstorage-to-httpstorage/+merge/185958
<axw> https://code.launchpad.net/~axwalk/juju-core/filestorage-write-tmpdir/+merge/185715
<axw> https://code.launchpad.net/~axwalk/juju-core/sshstorage-tmpdir/+merge/185717
<axw> https://code.launchpad.net/~axwalk/juju-core/sanity-check-constraints/+merge/185015
<axw> thumper ^^
<thumper> kk
<axw> thank you
<thumper> axw: wasn't there a branch that moved local/storage out to another location?
<thumper> just double checking that this queue is the order they'd be landed
<axw> thumper: https://code.launchpad.net/~axwalk/juju-core/manual-bootstrap/+merge/184714
<axw> that's LGTM'd, just waiting on the sshstorage changes
<axw> it's not quite in order
<axw> I'll have to do some merging
<thumper> axw: for the local -> http storage branch
<thumper> the first file imports provider/local/storage as httpstorage
<thumper> but the very next file uses environs/httpstorage
<thumper> why the difference?
<axw> hang on, gotta refresh my memory
<thumper> ah..
<thumper> local/storage is the worker
<thumper> ffs, why did I do that
<axw> thumper: yeah that one is changing in the manual-bootstrap MP
 * thumper confused himself
<axw> it will become worker/localstorage
<axw> wait...
<axw> *looking*
<axw> wtf
<axw> that is an error :)
<axw> I probably did an overzealous sed
<axw> thumper: cmd/jujud/machine.go should not have changed in this MP
<thumper> axw: how about you do a quick sanity check on that diff, and push any changes and repropose
<thumper> I'll look at the next one
<axw> yep will do
<axw> sorry about that
<axw> thumper: updated
 * thumper looks
<thumper> axw: re fileStorageReader, my first cut at it used filepath.Walk :-)
<thumper> was told to use Glob
 * thumper prefers walk
<thumper> as reviewer, I approve
<axw> thumper: Glob doesn't actually work properly there, that's the main reason for the change
<axw> Glob only works for one level of the hierarchy
<thumper> where does it fail?
<thumper> I didn't see a test added that demonstrates that
<thumper> or is it later in the branch?
<axw> /tmp/* matches /tmp/blah, but not /tmp/blah/blah
 * axw looks
<axw> I think it's probably in httpstorage
 * thumper wonders
<thumper> should /tmp/* match /tmp/blah/blah
<axw> not in shell globbing
<axw> or does it...
<thumper> should it with our one?
 * axw doesn't understand the question
<axw> thumper: according to the docs for filepath: '*'  matches any sequence of non-Separator characters
<axw> filepath.Match even
<axw> which is what Glob's syntax conforms to
<thumper> I'm wondering what the expectation is with our interface
<thumper> is the expectation that we recurse or not?
<thumper> if not, then we don't really need to walk the entire structure
<axw> thumper: oh yeah, List is meant to return everything
<axw> it considers the namespace flat
<axw> thumper: there's no test for this directly in filestorage, only in httpstorage (which uses filestorage in the test). I can add one.
<thumper> that would be appreciated
<thumper> don't repropose until I've finished the review though
<thumper> nearly there
<axw> okey dokey
<thumper> review done
 * thumper moves to the third
<axw> thumper: good call on the test, I found a bug in the existing code
<axw> for instance.HostAddresses; an IPv4 net.IP may be represented as 16 bytes
<axw> there's no check for that
<thumper> :)
<thumper> axw: https://code.launchpad.net/~axwalk/juju-core/sanity-check-constraints/+merge/185015 has conflicts
<axw> ok
<axw> I'll add the filestorage test and then get onto that
<thumper> axw: I'm heading off shortly
<thumper> axw: got anything you want a re-review on before I go?
<wallyworld> thumper: i'll push some changes to my branch in a bit if you happen to be on later
<thumper> I'm being taken out for a birthday dinner
<thumper> so not really likely to be on later
<thumper> but can do it tomorrow morning
<wallyworld> ok. whose birthday?
<thumper> probably around for another 20 min or so
<axw> thumper: I'm about to repropose the local-to-httpstorage
<thumper> wallyworld: mine
<thumper> wallyworld: and Rachel's
<wallyworld> thumper: oh? happy birthday old man :-)
<wallyworld> and to rachel too
<thumper> wallyworld: last week dude
<thumper> geez
 * wallyworld didn't know
<wallyworld> you never mentioned it :-)
<thumper> wallyworld: it is why I was off for a long weekend with no kids
<wallyworld> i thought that was wedding anniversary
<thumper> it was
<thumper> wedding anniversary 9th, my birthday 10th, Rachel's 11th
<thumper> all together for ease of remembering
<wallyworld> you sure do know how to organise things
 * thumper bows
<wallyworld> thumper: i'm moving StorageReader and friends interfaces to environs.storage. a lot of places to change
 * thumper nods
<thumper> ok
<thumper> I still think you should have "Get" and "GetWithRetry"
<thumper> or "GetWithStrategy"
<thumper> instead of "DefaultGet" and "Get"
<wallyworld> sure. separate issue which i'll also look at :-)
 * thumper nods
<wallyworld> thumper: there will be controvery i'm sure with the interface move
<wallyworld> but i just can't agree it is right as is
<wallyworld> and this is a good excuse
<thumper> how are you doing the interface?
<wallyworld> same interface names - StorageReader, StorageWriter etc, but moved to environs.storage
<thumper> seems fine to me
<thumper> we may have an interface package later to remove the dependency loops
<thumper> but that can happen later
<wallyworld> me too. but i'm sure i recall people wanting *all* the interfaces lumped together
<thumper> and I may suggest we get technical architect approval and review
<axw> thumper: I've reproposed, but haven't done the  test helper changes yet. will do that now.
 * thumper nods
<thumper> axw: I see the comment but not a diff change
<thumper> did the propose work?
<axw> thumper: sorry, I think lbox was still working
<axw> should be there now
<thumper> kk
<thumper> axw: that one is good to go (preferably after the test helper is added)
 * thumper has to go clean for a bit
<thumper> ciao
<axw> thanks thumper
<jam> wallyworld: poke about DataSource stuff
<wallyworld> yeees?
<jam> I'm trying to figure out whether tools DataSources have '/tools' in them or not.
<jam> I might just have an out-of-date  goose, though.
<jam> but the old keystone entry was just putting "" in the entry
<jam> wallyworld: still true in goose trunk if I'm reading it correctly.
<wallyworld> jam: the keystone entry has been updated to add the /tools
<jam> "Service{"juju", "juju-tools", ... PublicURL: url") where that URL is the root of the Swift bucket
<jam> wallyworld: not in testservices/openstack/New() if I'm reading it correctly.
<wallyworld> i can't quite recall if goose needed updating
<wallyworld> ah, you mean the test double
<jam> wallyworld: I'm testing that I can actually *fetch* something from each DataSource and some of them are failing
<jam> and I'm trying to figure out whether it is a mistaken understanding of where the stuff needs to get uploaded to
<wallyworld> hmmm. i know the real canonistack puts out a url with /tools at the end. goose may not. but i don't think it matters. but i could be wrong
<wallyworld> i think it matters for the legacy fallback to work
<wallyworld> so, if simplestreams is set up right, you don't need any /tools at the end
<wallyworld> it's just a url
<wallyworld> but, if you want the legacy fallback stuff to work, you need to stick stuff in /tools
<wallyworld> let me take a quick look at the code
<jam> wallyworld: GetToolsSources puts environs.BaseToolsPath into the NewStorageSimpleStreamsDataSource
<jam> but uses 'juju-tools' output directly
<jam> I think it uses "ToolsURL" from config directly
<jam> and DefaultBaseURL directly
<wallyworld> the base url can be anything
<wallyworld> the default url is for getting stuff from the official repository
<jam> wallyworld: I was a bit confused because of different places where you end up mangling the search path for tools
<jam> but you don't touch all the possible paths
<wallyworld> there's only one place - in tools/urls.go
<jam> so I thought "tools" got tacked on after the fact
<wallyworld> but each provider gets to add in some search places if it wants
<jam> wallyworld: I'm quite aware of that, which are the bits I'm working on for Openstack. I'm trying to make sure that if you have data in all places "correctly" that you can fetch them.
<jam> and I'm still just trying to sort out what "correctly" is.
<jam> A couple of times now my expectations of consistency across them didn't quite match.
<wallyworld> so - you need to have a base url, under which is 1) a releases dir with the tarballs, and 2) a streams/v1 dir with the metadata
<wallyworld> no "tools" anywhere
<wallyworld> but, it's convention/tradition that we tend to make the base url end in "tools"
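A sketch of the layout wallyworld describes (the directory names come from the discussion; the annotations are my reading of it, not a spec):

```
<base-url>/          # by convention the url tends to end in ".../tools"
    releases/        # the tool tarballs
    streams/v1/      # the simplestreams metadata
```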
<jam> wallyworld: and clearly you haven't fetched from the "juju-tools" keystone entry, because you can't write data to the '/' container
<jam> AFAICT
<wallyworld> i have fetched them from there
<wallyworld> i can write to juju-tools url/tools
<wallyworld> the keystone entry points to blahblah/tools
<jam> wallyworld: not in the goose double
<jam> which is my point
<jam> you can't have a test case that is actually doing this
<jam> you might have tested on Canonistack
<wallyworld> but it doesn't need to end in tools
<wallyworld> i don't think
<jam> wallyworld: I think you're missing *my* point.
<wallyworld> sorry, this is hard over irc
<jam> wallyworld: the URL probably can be whatever it wants to be
<jam> wallyworld: but you can't have tested that the keystone entry works with the double, because you can't write data to the location that goose's keystone entry points to
<jam> thus you probably can't have a test case that the keystone entry works
<jam> if you have one, point me to it, and I'll follow how it sets up the data
<wallyworld> where does the goose test double keystone entry point to?
<jam> wallyworld: you *do* have a test case that we see the base URL
<jam> but no test that Fetch on that URL works
<wallyworld> there's test cases that the urls are as expected
<jam> wallyworld: it points directly to the Swift base URL
<wallyworld> and can't we set up one of those http proxies to return data relative to that url?
<jam> wallyworld: you're pointing at where Swift returns its data, and you can't upload data to swift without a container to put it in
<jam> so I could create the container named "releases" and one named "streams" and put stuff in the subdirectories (probably)
<wallyworld> so, there are tests to ensure the urls that are used to construct the data sources are correct. but there's no need to put data in each as that's done in different tests
<jam> wallyworld: I'm testing that both URL() and Fetch() works for each of the datasources we see
<jam> I currently can't set up Fetch in a sane way to ensure that it gets data back properly
<wallyworld> well, you can using tools-url
<jam> I can hack around it, but it feels like we missed some test coverage
<jam> wallyworld: that is a *different* data source
<wallyworld> sure, but it's just a http data source
<jam> I'm testing tools-url, private storage, and the keystone entry
<wallyworld> and we test http data sources work
<wallyworld> and we test that the correct http data sources are returned
<wallyworld> i guess we can add extra tests but it seems overkill given we test all the moving parts
<jam> wallyworld: so in what I'm working on each one needs to be properly configured for ssl/no-ssl, so they aren't "just simple URLs"
<wallyworld> so get the data sources back and poke their config?
<wallyworld> so instead of testing MxN, test M+N
<jam> wallyworld: well I'm just testing that I can fetch any data
<jam> so I don't have to test that simplestream layout works there.
<wallyworld> you can test separately if you can fetch data using a data source set up as no ssl
<wallyworld> and separately test that the data sources that come back from simplestreams all have no-ssl activated
<jam> wallyworld: except the one that points to the canonical location, etc. It isn't very hard to just do a "can i actually fetch data from this source".
<jam> wallyworld: especially given the privacy layering means you can't "just check the bit is set"
<wallyworld> sure, but you don't need to
<wallyworld> you just need check that the url is correct and the ssl thing is correct
<wallyworld> and there are separate tests for if url based data sources work
<wallyworld> there's only 2 data sources - a url based one and a environs storage one
<wallyworld> they can be tested separately with the ssl thing
<wallyworld> so to me it's still a M+N vs MxN strategy
<jam> wallyworld: it is the same number of asserts to test the SSL bits as to do an actual Fetch, it is slightly more setup to actually put a file in the location to test that you can fetch it back.
<wallyworld> so if you want to do that then i guess the url returned by the goose test double needs to have a container at the end
<wallyworld> that you can put stuff in
<jam> wallyworld: well, I have to cheat elsewhere because we put 'tools' into the URL, so I can just try to fetch "acontainer/foo" from that URL
<wallyworld> we put tools in there for legacy things - the tools file name is "releases/tools/juju-" for simplestreams
<wallyworld> using the existing tools StorageName method
<wallyworld> many tests set up the new simplestreams prefix and reset it afterwards
<wallyworld> and soon the legacy prefix will go away
<dimitern_> hey guys, william texted me he's expecting his internet connection at the new place to be fixed some time this morning
<wallyworld_> jam: i'm going to miss the stand up tonight. i have 2 branches in review. tim is doing the first. the second builds on that to add sensible retries to tools fetching
<hazmat> umm.. just a found  a serious security issue in 1.14 for openstack
<hazmat> although it might extend longer
<jam> hazmat: what's up?
<hazmat> switch to canonical #juju pls
<dimitern_> rogpeppe, morning
<rogpeppe> dimitern: hiysa
<rogpeppe> dimitern_: hiya, even
<dimitern_> rogpeppe, updated https://codereview.appspot.com/13720045/ :)
<rogpeppe> dimitern_: you didn't like my "lxc:34/kvm/5" idea?
<rogpeppe> dimitern_: aw, i thought that was quite nice
<dimitern_> rogpeppe, that's non-standard
<dimitern_> rogpeppe, it's only used in add-machine and deploy
<dimitern_> rogpeppe, in all other cases it's just a type, no prefix
<rogpeppe> dimitern_: i have a bit of a problem with the ContainerTypes argument to WatchContainers
<dimitern_> rogpeppe, yes?
<rogpeppe> dimitern_: the problem is that the ContainerType type doesn't really represent a container type - it's really a kind of pattern for matching containers of a particular type within a given machine
<rogpeppe> dimitern_: a container *type* is really just a string
<dimitern_> rogpeppe, the structure is right for args, but I'm open to suggestions about a better name
<rogpeppe> dimitern_: yeah, it's just the name i'm not keen on
<dimitern_> rogpeppe, how about MachineContainerPairs ?
<rogpeppe> dimitern_: one possibility might be to make the container type a separate (non bulk) arg
<rogpeppe> dimitern_: so that a given WatchContainers call could watch many machines, but only one type of container
<dimitern_> rogpeppe, that's not good - we might need to create several different containers in a single bulk call
<rogpeppe> dimitern_: s/create/watch/ ?
<dimitern_> rogpeppe, yep
<rogpeppe> dimitern_: well, you never *need* to issue a single bulk call for something
<dimitern_> rogpeppe, so how about MachineContainerPairs{Pairs: []MachineContainerPair{Tag, ContainerType}} ?
<rogpeppe> dimitern_: it's really Machine-ContainerType-Pair but that doesn't make for a great name
<dimitern_> rogpeppe, why doesn't it?
<rogpeppe> dimitern_: because MachineContainerTypePair could read as "Machine-container type pair"
<dimitern_> rogpeppe, c'mon
<dimitern_> rogpeppe, that's what doc comments are for
<rogpeppe> dimitern_: i think these names are actually quite important, and not easy to get right
<dimitern_> rogpeppe, I really want to start landing stuff so I can continue
<dimitern_> rogpeppe, we can always go with WatchContainersParams{Params: WatchContainersParam{MachineTag, ContainerType}}}
<rogpeppe> dimitern_: i'm wondering about params.WatchContainers
<rogpeppe> dimitern_: yeah
<dimitern_> rogpeppe, ok, will change it and repropose
<rogpeppe> dimitern_: i think that perhaps makes more sense than trying to make it into some kind of generally applicable type
<rogpeppe> dimitern_: params.WatchContainers and params.WatchContainer, i think
<rogpeppe> dimitern_: thanks for bearing with me
<dimitern_> rogpeppe, ok, np
<dimitern_> rogpeppe, updated
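A rough sketch of the bulk-call shape they settle on, `params.WatchContainers` with one entry per machine/container-type pair. The type and field names follow the discussion; the exact fields and tag format are assumptions, not the landed API:

```go
// Hypothetical sketch of the params types discussed above.
package main

import "fmt"

// WatchContainer pairs a machine with the container type to watch on it.
type WatchContainer struct {
	MachineTag    string
	ContainerType string
}

// WatchContainers is the bulk-call argument: one entry per pair.
type WatchContainers struct {
	Params []WatchContainer
}

func main() {
	args := WatchContainers{Params: []WatchContainer{
		{MachineTag: "machine-34", ContainerType: "lxc"},
		{MachineTag: "machine-35", ContainerType: "kvm"},
	}}
	fmt.Println(len(args.Params)) // 2
}
```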
<rogpeppe> dimitern_: does a provisioner really need to be able to access its own machine?
<rogpeppe> dimitern_: (other than for Watch, of course)
<dimitern_> rogpeppe,  for WatchContainers yes
<rogpeppe> dimitern_: but not for all the other methods, right?
<dimitern_> rogpeppe, Life() as well
<dimitern_> rogpeppe, that's how we do st.Machine(tag)
<rogpeppe> dimitern_: so perhaps those two methods should use a slightly different auth function?
<dimitern_> rogpeppe, actually all of the others need it as well
<rogpeppe> dimitern_: really?
<dimitern_> rogpeppe, we need InstanceId(), EnsureDead()
<dimitern_> rogpeppe, Remove()
<dimitern_> rogpeppe, definitely SetStatus
<dimitern_> rogpeppe, and SetProvisioned
<rogpeppe> dimitern_: um, aren't most of those operations done on the child machines?
<rogpeppe> dimitern_: not the provisioner's machine itself?
<dimitern_> rogpeppe, well, I don't think this is a big issue anyway, because the machine agent, which runs the machiner and the provisioner have access to the same stuff on the same account
<rogpeppe> dimitern_: i was thinking of it more as a way of catching errors in the provisioner code as anything else
<rogpeppe> dimitern_: (and also possibly as a way of making the code more obvious)
<dimitern_> rogpeppe, and more complicated :)
<dimitern_> rogpeppe, but it makes sense yeah
<rogpeppe> dimitern_: leave it for now; as you say it doesn't really matter
<dimitern_> rogpeppe, ok then
<rogpeppe> dimitern_: reviewed
<dimitern_> rogpeppe, thanks
<dimitern_> rogpeppe, responded
<rogpeppe> dimitern_: i'm not sure why you need to make sure that the machine exists in the auth method
<dimitern_> rogpeppe, to return ErrPerm if it doesn't
<rogpeppe> dimitern_: istm that a lexical test should be just fine
<rogpeppe> dimitern_: i really don't think it matters actually
<rogpeppe> dimitern_: as it is, we're going to fetch the machine *three times* on every API call
<dimitern_> rogpeppe, it matters so long as we're keeping consistency with other api facades
<dimitern_> rogpeppe, not 3 times - just 2 times
<rogpeppe> dimitern_: three times, i think
<dimitern_> rogpeppe, once in lifegetter, once in auth func, right?
<dimitern_> rogpeppe, the same applies to the other mixins
<rogpeppe> dimitern_: ah, perhaps - i was thinking we did it when creating the API facade itself, but that will have been done at login time
<dimitern_> rogpeppe, and we're running on the same node as the state server, so shouldn't slow us down
<rogpeppe> dimitern_: BTW lots of existing API calls might not return ErrPerm if the entity has been removed
<dimitern_> rogpeppe, only if you're authorized to see it
<rogpeppe> dimitern_: sure - but we can test that lexically, right?
<dimitern_> rogpeppe, i.e. you're trying to access your own machine, which was removed
<rogpeppe> dimitern_: e.g. if you're trying to access a child machine, which was removed
<rogpeppe> dimitern_: i think it's reasonable that the environment-global provisioner can ask whether a top level machine exists or not
<dimitern_> rogpeppe, how do you know it was removed, without fetching it?
<rogpeppe> dimitern_: it doesn't matter - you can get a NotFound error without any security problems ensuing
<dimitern_> rogpeppe, if it comes to that, I'll change the logic, but so far it seems it'll work just fine
<rogpeppe> dimitern_: i think that it's fine that a provisioner is able to access all machines in its domain, whether they exist or not
<rogpeppe> dimitern_: the ErrPerm thing is more about accessing one's *own* machine that's been removed, I think (and even then we don't check in most cases)
<rogpeppe> dimitern_: are there any other examples where the auth func does a db lookup?
<rogpeppe> dimitern_: (i'm also wary of the fact that we're potentially dropping mongo errors on the ground here)
<dimitern_> rogpeppe, deployer does it
<rogpeppe> dimitern_: not afaics
<rogpeppe> dimitern_: getAuthFunc does, but not the auth func itself
<dimitern_> rogpeppe, well, take a look at getAllUnits
<dimitern_> rogpeppe, ah, yes
<rogpeppe> dimitern_: and note the fact that getAuthFunc returns an error, so we don't drop it on the floor there
<dimitern_> rogpeppe, well, we don't have the tag inside getAuthFunc yet
<rogpeppe> dimitern_: that's true. honestly though - why is a lexical test bad here? if the name we're trying to access is logically ok for us to access, why not allow it, regardless of the status of the entity in question?
<rogpeppe> dimitern_: is there a security issue here that i'm not seeing?
<dimitern_> rogpeppe, let me check something
<dimitern_> rogpeppe, ok, changed the getAuthFunc to use state.ParentId() instead
<rogpeppe> dimitern_: cool, thanks
<dimitern_> rogpeppe, and implemented the suggestion about simplifying it slightly differently
<dimitern_> rogpeppe, reproposing
<dimitern_> rogpeppe, LGTM?
<rogpeppe> dimitern_: just having a last look
<rogpeppe> dimitern_: reviewed
<dimitern_> rogpeppe, thanks
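The "lexical test" rogpeppe argues for, deciding access purely from the shape of the machine id (via something like `state.ParentId`) rather than fetching the entity, can be sketched like this. The helper names and id format (`"34/lxc/5"`) are illustrative, not the actual juju-core code:

```go
// Hypothetical sketch: a lexical auth check that asks whether the target
// machine id is the authenticated machine itself or one of its direct
// child containers, with no database lookup.
package main

import (
	"fmt"
	"strings"
)

// parentId returns the id of the parent machine of a container id like
// "34/lxc/5", or "" for a top-level machine.
func parentId(machineId string) string {
	idParts := strings.Split(machineId, "/")
	if len(idParts) < 3 {
		return ""
	}
	return strings.Join(idParts[:len(idParts)-2], "/")
}

func canAccess(authMachineId, targetId string) bool {
	return targetId == authMachineId || parentId(targetId) == authMachineId
}

func main() {
	fmt.Println(canAccess("34", "34/lxc/5")) // true: direct child container
	fmt.Println(canAccess("34", "35/lxc/0")) // false: someone else's container
}
```

Whether the target actually exists is then a separate question, which is rogpeppe's point: a NotFound error leaks nothing if the id was lexically permitted anyway.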
<jam> dimitern, mgz, rogpeppe, TheMue: standup ? https://plus.google.com/hangouts/_/7bee5998ed9eebebcff8169fb6394538499bdf74?authuser=2
<jam> well, leave off that last bit: https://plus.google.com/hangouts/_/7bee5998ed9eebebcff8169fb6394538499bdf74
<mgz> gslowload
<mgz> jam, hm, you're not in the one from c.. right
<jam> we lost you rogpeppe
<rogpeppe> jam: yeah, just reconnecting
<dimitern_> bug 1227074
<_mup_> Bug #1227074: runtime panic when running any juju command in a deleted directory <juju-core:Confirmed> <https://launchpad.net/bugs/1227074>
<jamespage> hazmat, did 'peers before herd' get into juju-core?
<jamespage> (where peer relations are formed prior to any other remote relations)
<rogpeppe> review anyone? https://codereview.appspot.com/13251052
<rogpeppe> dimitern_, jam, TheMue, mgz, axw: ^
<mgz> looking
<rogpeppe> mgz: thanks
<dimitern_> I have one as well https://codereview.appspot.com/13552046
<mgz> okay, so the var stuff is the idiom for bits that need overriding in tests, right?
<mgz> xcept the logger, which is just a logger
<dimitern_> rogpeppe, will you take a look?
<rogpeppe> mgz: yeah
<dimitern_> mgz, I haven't heard of such practice - if it's for the tests there's usually a comment
<rogpeppe> mgz: maybe that's a reason to keep them separate i suppose
<jam> mgz: the export_test stuff isn't because it is being overridden
<jam> it is because it is being exposed
<rogpeppe> jam: i think he was referring to the var block at the start of https://codereview.appspot.com/13251052/diff/1015/juju/api.go
<mgz> jam: the main code has a few aliases at the top for some functions
<mgz> right
<TheMue> dimitern: LGTM
<dimitern_> TheMue, thanks
<rogpeppe> just rebooting to try to speed up this sluggish machine
<mgz> okay, this basically makes sense to me rog, but is pretty complex
<jam> rogpeppe: I do have to wonder why doing the "in parallel connect to the provider" is better (especially for maintenance purposes) than try to connect with a short timeout, and then fall back to something else
<jam> mgz, rogpeppe: I guess the idea is that we have a 10min wait because we might be waiting for a node to start? I don't think we need to do that in the case that we have seen an endpoint
<fwereade> evening all
<jam> hey fwereade, good to see you online :)
<mgz> hey fwereade
<dimitern_> fwereade, hey! you're online at last
<fwereade> dimitern_, yeah, I strolled off and had a peaceful lunch and now it's actually here at last
<fwereade> how have the last couple of days been?
<dimitern_> provisioner is half way done :)
<fwereade> dimitern_,<3
<dimitern_> well, i'm finishing the server-side at first
<fwereade> any reviews I should be looking at in particular, anyone?
<rogpeppe> fwereade: i'd appreciate a look at https://codereview.appspot.com/13251052
<fwereade> rogpeppe, ack
<mgz> yeah, that's a good idea
<TheMue> fwereade: heya, I would like you to take a look at https://codereview.appspot.com/13430044/ to see if the direction is right
<fwereade> rogpeppe, looking at the format I'm very much unkeen on the nested fields in what we read in/write out
<rogpeppe> fwereade: so you don't like the endpoint/creds separation?
<fwereade> rogpeppe, was there some compelling consideration leading away from flatness?
<fwereade> rogpeppe, in my mind, if we end up with enough top-level fields in there for the separation to help, the purpose of the file will have changed beyond belief
<fwereade> TheMue, ack
<rogpeppe> fwereade: the reason was mainly because i've come around to the idea that location and credentials are two separate concerns, and should be treated separately.
<fwereade> rogpeppe, I am +100 on doing that internally
<rogpeppe> fwereade: i don't really see a downside from nesting them
<fwereade> rogpeppe, I don't think it's a benefit in the context of content you might paste into an email to someone :)
<rogpeppe> fwereade: it means that the configstore code can be blissfully ignorant of the contents of APICredentials and APIEndpoint
<rogpeppe> fwereade: i think it reads ok actually
<rogpeppe> fwereade: insofar as yaml ever reads ok :-\
<fwereade> rogpeppe, isn't the purpose of the configstore code to be able to produce a creds and an endpoint on demand?
<fwereade> rogpeppe, based on what's on disk?
<fwereade> rogpeppe, it's not some super-generic data store
<fwereade> rogpeppe, it is concerned precisely with their format on disk and in memory and it's its job to adapt between them ;)
<rogpeppe> fwereade: it will also store other info
<rogpeppe> fwereade: e.g. PushedSecrets
<rogpeppe> fwereade: extra config attributes
<fwereade> rogpeppe, extra config wants to be nested, sure
<fwereade> rogpeppe, PushedSecrets has to be LacksSecrets, doesn't it?
<rogpeppe> fwereade: so we can flatten creds and endpoint if you really feel it makes a difference, but we've got nesting going on anyway. this isn't a hard format for people to parse.
<rogpeppe> fwereade: why so?
<rogpeppe> fwereade: ha, yes, i think you're right
<fwereade> rogpeppe, because otherwise it'll sit there clogging up every file forever, won't it? otherwise we'll read the missing field and go "oh, false"
<rogpeppe> fwereade: yeah
<rogpeppe> fwereade: good point
<rogpeppe> fwereade: maybe NeedsSecrets
<fwereade> rogpeppe, as you wish :)
<fwereade> rogpeppe, but I haven't really heard anything that makes me think it's better to nest the format than not to nest, which is how it was originally specified
<fwereade> rogpeppe, and I'm wondering what the deal is with the string for cacert?
<rogpeppe> fwereade: it means it can just be serialised with no extra hassle
<rogpeppe> fwereade: there's no point in double-base64 encoding
<fwereade> rogpeppe, didn't we say base64-encoded []byte?
<fwereade> rogpeppe, don't we always use CACerts as []byte?
<rogpeppe> fwereade: well it can't be []byte if it's to be serialised as a yaml string
<rogpeppe> fwereade: yeah, and i think that was a mistake on balance
<fwereade> rogpeppe, do those []byte~s actually only hold bytes from the base64 set? )
<rogpeppe> fwereade: yes, they're all ASCII
<rogpeppe> fwereade: perhaps not from base64 per se, but all ASCII anyway, which is fine for encoding as a yaml string
<rogpeppe> fwereade: the only reason we use []byte is because the crypto package functions use []byte
<rogpeppe> fwereade: but it's all PEM encoded
<rogpeppe> fwereade: (and documented as such)
<fwereade> rogpeppe, urrgh
<rogpeppe> fwereade: it also means that the certificate is actually obviously a certificate when it's in the config file.
<rogpeppe> fwereade: BTW the environ config code already stores and uses the certificates as string
<fwereade> rogpeppe, ok, I'm convinced there, thanks :)
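rogpeppe's argument is that a PEM certificate is already ASCII armor, so a Go `string` field serialises straight into YAML with no extra base64 layer. A minimal sketch of that point (the `caCert` content and `certType` helper are invented for illustration):

```go
package main

import (
	"encoding/pem"
	"fmt"
)

// A PEM certificate is already ASCII-armored text, so it can live in a
// string field and round-trip through YAML without double-encoding.
const caCert = `-----BEGIN CERTIFICATE-----
AAAA
-----END CERTIFICATE-----
`

// certType returns the PEM block type, or "" if the input isn't PEM.
func certType(pemStr string) string {
	block, _ := pem.Decode([]byte(pemStr))
	if block == nil {
		return ""
	}
	return block.Type
}

func main() {
	fmt.Println(certType(caCert)) // CERTIFICATE
}
```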
<fwereade> rogpeppe, but I'd really appreciate it if we could remove the nesting
<rogpeppe> fwereade: ok, will do
<rogpeppe> fwereade: BTW i did start off trying to implement a "type YAMLBase64Bytes []byte" with a custom YAML encoding method to encode and decode as base64
<rogpeppe> fwereade: ... or maybe as string, anyway
<fwereade> rogpeppe, cheers
<fwereade> rogpeppe, ah, nice
<rogpeppe> fwereade: there are goyaml bugs that meant it couldn't work, sadly
<fwereade> rogpeppe, aww
<fwereade> rogpeppe, well, moot now I guess :)
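The abandoned "YAMLBase64Bytes" idea can still be sketched: a `[]byte` type that would marshal to YAML as a base64 string. The method shapes below follow the Getter/Setter interfaces goyaml exposed at the time (`GetYAML`/`SetYAML`); per rogpeppe, goyaml bugs made the real thing unworkable, so treat this purely as an illustration of the intent:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// YAMLBase64Bytes is a []byte that would serialise to YAML as base64.
type YAMLBase64Bytes []byte

// GetYAML would hand goyaml a base64 string instead of raw bytes.
func (b YAMLBase64Bytes) GetYAML() (tag string, value interface{}) {
	return "", base64.StdEncoding.EncodeToString(b)
}

// SetYAML would decode the base64 string back into bytes on unmarshal.
func (b *YAMLBase64Bytes) SetYAML(tag string, value interface{}) bool {
	s, ok := value.(string)
	if !ok {
		return false
	}
	raw, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return false
	}
	*b = raw
	return true
}

func main() {
	in := YAMLBase64Bytes("hello")
	_, encoded := in.GetYAML()
	fmt.Println(encoded) // aGVsbG8=

	var out YAMLBase64Bytes
	out.SetYAML("", encoded)
	fmt.Println(string(out)) // hello
}
```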
<fwereade> TheMue, wondering what the motivation is for the []StatusValue conversion
<fwereade> TheMue, StatusData seems nice and clear
<fwereade> TheMue, but I don't see what problem the change to array solves
<fwereade> brb
<rogpeppe> fwereade: re-proposed with those on-disk changes as discussed
<TheMue> fwereade: It's helping to keep the usage of SetStatus() the same (the values are optional) instead of always pass a third argument nil if not needed
<TheMue> fwereade: but can change it if preferred
<TheMue> fwereade: would only touch many places in code ;)
<fwereade> TheMue, I do see the nastiness
<TheMue> fwereade: just convenience
<jcastro> mgz: have you seen this yet? https://bugs.launchpad.net/juju-core/+bug/1227145
<_mup_> Bug #1227145: Juju isn't cleaning up destroyed LXC containers <juju-core:Confirmed> <https://launchpad.net/bugs/1227145>
<fwereade> TheMue, I think I'd prefer to use the nils explicitly on balance
<fwereade> TheMue, I don't think it'll clutter the diff much, and it'll make SetStatus calls similar across the board
<TheMue> fwereade: that's the best argument for, that usage is not typical
<TheMue> fwereade: I'll change it
<mgz> jcastro: not yet, looking
<fwereade> TheMue, tyvm
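The trade-off TheMue and fwereade settle here can be sketched with two hypothetical signatures (names and return values invented; juju's real `SetStatus` differs): the variadic form keeps existing two-argument calls compiling, while the fixed form makes every call site pass `nil` explicitly and so look the same everywhere.

```go
package main

import "fmt"

// StatusData stands in for the optional extra status values.
type StatusData map[string]interface{}

// Variadic form: data is optional, so old two-argument calls still compile.
func setStatusVariadic(status, info string, data ...StatusData) string {
	if len(data) > 0 && data[0] != nil {
		return fmt.Sprintf("%s: %s %v", status, info, data[0])
	}
	return fmt.Sprintf("%s: %s", status, info)
}

// Fixed form: callers pass nil explicitly when there is no extra data,
// which keeps every call site uniform (fwereade's preference).
func setStatusFixed(status, info string, data StatusData) string {
	if data != nil {
		return fmt.Sprintf("%s: %s %v", status, info, data)
	}
	return fmt.Sprintf("%s: %s", status, info)
}

func main() {
	fmt.Println(setStatusVariadic("error", "hook failed"))
	fmt.Println(setStatusFixed("error", "hook failed", nil))
}
```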
<TheMue> fwereade: what I also need is a bridge to the AllWatcher. the megawatcher is getting the change, but I currently cannot see how this plays together with the AllWatcher
<fwereade> TheMue, find what's already being done with status
<fwereade> TheMue, it's all in the same document, right?
<TheMue> fwereade: pardon?
<TheMue> fwereade: ah, you mean document == statusDoc
<TheMue> fwereade: wondered about a document
<fwereade> TheMue, the allwatcher already reports status changes per-entity anyway AIUI (rogpeppe, confirm?)
<rogpeppe> fwereade: yeah
<rogpeppe> TheMue: what's the issue exactly?
<fwereade> TheMue, so wherever it is it's picking up those changes, you can bung an extra field in where appropriate and you're good, I think
<TheMue> fwereade: I added it to UnitInfo and MachineInfo already, which are delivered EntityInfos. so it should be done
<fwereade> TheMue, sorry, meeting
<TheMue> rogpeppe: just getting a better understanding how the megawatcher and AllWatcher work together
<TheMue> rogpeppe: had been unsure
<rogpeppe> TheMue: they're the same thing
<rogpeppe> TheMue: or do you mean the multiwatcher?
<TheMue> rogpeppe: simplifies it for me :)
<TheMue> rogpeppe: no, the megawatcher
<rogpeppe> TheMue: there's no code that mentions the term "megawatcher"
<rogpeppe> TheMue: it's just the name of the file that the AllWatcher implementation sits in
<rogpeppe> TheMue: which is really just left there as an in-joke tbh
<TheMue> rogpeppe: yes, and I referred to that file not knowing that it is the AllWatcher impl. I found it checking where UnitInfo and MachineInfo changes are reported
<TheMue> rogpeppe: that is very helpful information for me, thx
<rogpeppe> TheMue: your best bet is to read the multiwatcher docs thoroughly
<rogpeppe> TheMue: all the changes you need to make will be in the allWatcherStateBacking methods, which implement multiwatcher.Backing
<TheMue> rogpeppe: I had only to change updated() in backingStatus
<rogpeppe> TheMue: that sounds about right
<TheMue> rogpeppe: here only one more piece of information has to be copied for units and machines
<TheMue> rogpeppe: but as I said, I had been sure I had to change it there, but I didn't know the direct connection to AllWatcher *sigh* some kind of blindness
<TheMue> rogpeppe: solving by accident
<rogpeppe> TheMue: the
<rogpeppe> % grep -i allwatcher megawatcher.go | wc
<rogpeppe>     17      91     992
<rogpeppe> :-)
<rogpeppe> TheMue: bit of a clue there :-)
<TheMue> rogpeppe: I'm used to using an old-fashioned environment of the 80s: smalltalk, with absolutely simple code navigation by clicking ;)
<rogpeppe> TheMue: your editor doesn't do that?
<TheMue> rogpeppe: not the command you've done above, but references and implementors
<TheMue> rogpeppe: but still not as convenient and visual as smalltalk has done it for more than 30 years
<TheMue> rogpeppe: today i started pharo again, because i once developed a product descriptor similar to our reviewed version stuff
<rogpeppe> TheMue: pharo?
<TheMue> rogpeppe: has been fascinating how fast you can navigate in the code
<TheMue> rogpeppe: pharo is an oss smalltalk
<rogpeppe> TheMue: it's not hard to make it easy to navigate around go programs. there are plugins for vi and emacs that do that.
<TheMue> rogpeppe: and for sublime too :)
<TheMue> rogpeppe: but it is still not the same
<rogpeppe> TheMue: the go oracle will raise that to whole other level too
<TheMue> rogpeppe: go oracle?
<rogpeppe> TheMue: does sublime do the "go to definition of identifier" thing?
<TheMue> rogpeppe: yep
<rogpeppe> TheMue: interesting. so you can click on the "X" in an expression like y.foo().bar.X and it'll work reliably?
<TheMue> rogpeppe: also nice is "go to any symbol in your project"
<rogpeppe> TheMue: (to take you to the definition of the symbol X)
<rogpeppe> TheMue: that's much easier
<rogpeppe> TheMue: because it doesn't need to do the type inference
<rogpeppe> TheMue: https://docs.google.com/document/d/1WmMHBUjQiuy15JfEnT8YBROQmEv-7K6bV-Y_K53oi5Y/edit#
<TheMue> rogpeppe: oh, that doc looks interesting, will read it later
<TheMue> rogpeppe: have to check your question before
<TheMue> rogpeppe: so you mean a y of type Y, having a method foo(), returning something with a field bar containing a field X?
<TheMue> rogpeppe: never tried, will check
<rogpeppe> TheMue: yes (including the fact that X might be embedded several levels deep in anonymous structs)
<rogpeppe> TheMue: as may foo and bar
<TheMue> rogpeppe: no, I don't think that this is possible with GoSublime (the name of the plugin)
<rogpeppe> TheMue: and y itself might be a variable in a range expression over the result of a function return, etc etc
<rogpeppe> TheMue: ah, so that's what i meant - that capability is incredibly useful
<rogpeppe> TheMue: it means that any time i see a symbol, i can find out all about it
<rogpeppe> TheMue: in 0.01s approx
<TheMue> rogpeppe: that really would be great
<rogpeppe> TheMue: you could maybe do a plugin for sublime to support that (assuming you can make arbitrary plugins in sublime)
<TheMue> rogpeppe: what I can do is if I find something like x := xdef.X{} I can simply jump directly to the definition of X in xdef
<rogpeppe> TheMue: you just need to find out the address in the current file, invoke an external program, and parse the resulting file address
<rogpeppe> TheMue: only if xdef is a package identifier, right?
<TheMue> rogpeppe: yes, will check if it discovers an alias too
<rogpeppe> TheMue: it should do - that's really easy to do
<TheMue> rogpeppe: it does
<rogpeppe>  fwereade: i'm still hoping for a review of https://codereview.appspot.com/13251052, if poss
<fwereade> rogpeppe, sorry, just finishing meeting
<rogpeppe> fwereade: ack, np
<dimitern_> fwereade, https://codereview.appspot.com/13501051/
<dimitern_> fwereade, last bit of provisioner server-side api
<TheMue> simple part done, eliminated status value. but now the consequences ...
<yolanda> hi, i'm receiving this error after a juju bootstrap, when trying to deploy a service, or even with juju status: error: cannot log in to admin database: auth fails
<fwereade> yolanda, can you ssh to the machine and see if there's anything in var/log/cloud-init-output.log (I think that's the one)
<fwereade> yolanda, if it didn't manage to set up admin creds it will probably say why in there
<fwereade> yolanda, sorry, what I mean is, please pastebin me the contents of that file and I'll take a look
<fwereade> yolanda, is there any possibility that you've got mismatched tools?
<yolanda> http://paste.ubuntu.com/6124396/
<yolanda> fwereade, i executed a juju sync-tools before bootstrap
<fwereade> yolanda, that is an interesting thing to see though
<fwereade> yolanda, is there any more context? and cloud-init.log might also be handy
<yolanda> fwereade, and it happens repeatedly since this afternoon, i tried at least 3 times
<fwereade> yolanda, sorry, I have been moving house, but... rogpeppe, did we release anything this afternoon..?
<yolanda> let me paste a bit from cloud-init
<rogpeppe> fwereade: not as far as i've seen
<yolanda> http://paste.ubuntu.com/6124407/
<yolanda> fwereade, is that useful?
<fwereade> yolanda, it may be, it's consistently surprising :)
<fwereade> yolanda, is there anything in /var/log/juju at all?
<yolanda> fwereade, only all-machines.log, and it's empty
<fwereade> rogpeppe, is there anywhere else you can think of that might have some context there?
<rogpeppe> fwereade: reading back
<rogpeppe> yolanda: could you paste the contents of cloud-init-output.log ?
<rogpeppe> yolanda: (there's also cloud-init.log, i think, but that's has different stuff in)
<rogpeppe> s/that's/that/
<yolanda> rogpeppe, full log?
<yolanda> a bit is on http://paste.ubuntu.com/6124396/
<rogpeppe> yolanda: yes please
<yolanda> http://paste.ubuntu.com/6124464/
<rogpeppe> yolanda: thanks!
<rogpeppe> yolanda: can you ssh to the machine that happened on?
<yolanda> yes
<rogpeppe> yolanda: what happens if you try to run /var/lib/juju/tools/1.14.0-precise-amd64/jujud on that machine?
<yolanda> rogpeppe, it runs ok, shows a help message
<rogpeppe> yolanda: hmm
<rogpeppe> yolanda: running as root? or ubuntu?
<yolanda> ubuntu
<rogpeppe> yolanda: what environment is this in?
<rogpeppe> yolanda: it looks like a mmap syscall has failed when initialising the go runtime
<yolanda> rogpeppe, i'm on canonistack
<rogpeppe> yolanda: just to check: what does uname -a print on that machine?
<yolanda> Linux juju-openstack-machine-0 3.2.0-53-virtual #81-Ubuntu SMP Thu Aug 22 21:21:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
<rogpeppe> yolanda: hrmph :)
<rogpeppe> mgz, yolanda: do you know what kind of virtualisation canonistack uses?
<rogpeppe> (if any)
<mgz> kvm
<rogpeppe> mgz: any idea why a mmap call may be failing sometimes?
<mgz> flakey disk?
<rogpeppe> mgz: there's nothing disk-related about this mmap call
<mgz> I've not had issues with the instance disk on canonistack though, only attached volumes
<rogpeppe> mgz: (the fd argument to mmap is -1 in this case)
<mgz> er... bad overcommit on memory then perhaps?
<mgz> or a juju bug that just looks like the syscall is failing :)
<rogpeppe> mgz: i can't think how that could happen - the go runtime is printing out the messages i'd expect to see in that case
<rogpeppe> mgz: and the juju binary works ok when called later, so it doesn't appear to be binary corruption
<rogpeppe> s/in that case/if the syscall fails/
<yolanda> rogpeppe, mgz, problem is happening in every bootstrap, so it's not a random problem
<rogpeppe> yolanda: yes, this is really weird! (but actually good that you can reproduce the problem)
<yolanda> but bad because i cannot deploy my services :(
<mgz> yolanda: have you tried both regions?
<yolanda> mgz, no, only zone 1
<yolanda> it was working fine until this afternoon
<mgz> maybe try lcy02 if that's not too much hassle
<yolanda> ok, let me try
<mgz> the other thing to eliminate would be by using 1.13 tools rather than the new 1.14 if you're not certain you had a working 1.14 run
<yolanda> trying zone 2
<rogpeppe> yolanda: did you use --upload-tools? (i.e. did you build the binaries yourself, and if so, what version of Go are you running?)
<yolanda> juju sync-tools
<rogpeppe> yolanda: thanks
<rogpeppe> mgz: do you know what Go version we're using to build the tools in the public bucket?
<mgz> rogpeppe: whatever dave used, I'd expect what we have in saucy
<rogpeppe> mgz: hmm, any way i can easily find out? i'm just putting together an email to the golang list
<rogpeppe> mgz: in case someone there has a better clue
<mgz> sec
<mgz> oo, I'm not sure for 1.14, wasn't built out of his recipe
<mgz> rogpeppe: pretty sure it's the saucy package, 1.1.2
<rogpeppe> mgz: ok, thanks
<mgz> arosales_: any corrections?
<TheMue> fwereade: have to step out, but updated proposal is in. any feedback is appreciated so that i can continue tomorrow morning
<rogpeppe> mgz: what's the likelihood that yolanda is getting the same physical instance each time?
<yolanda> rogpeppe, i'm getting same instance every time
<mgz> the same underlying machine? high-ish within the same region, we have pretty limited hardware
<rogpeppe> mgz, yolanda: i wonder if it might be something odd about that instance
<arosales_> mgz, sorry reading backscroll now ..  .
<mgz> arosales_: short recap, what version of golang did the 1.14 release get built with?
<yolanda> mgz, bad news, same happens with zone 2
<arosales_> mgz, not sure on the golang version
<mgz> yolanda: good news probably, means it unlikely to be canonistack hardware related
<mgz> so, much more tractable problem :)
<yolanda> mgz, but i was hoping that zone 2 worked and i could deploy everything :)
<arosales_> fwereade, do you know what version of golang releases are currently being built with?
<mgz> I think the short version is no one has any clue how we do the release... I understood Dave's old procedure, but he did something different this time around
<arosales_> mgz, https://code.launchpad.net/~dave-cheney/+recipe/juju-core is the build recipe that was used.
<mgz> arosales_: the buildlog for that ppa disagrees
<arosales_> mgz, ah but I see what you are saying
<arosales_> mgz, so davecheney built the tar ball and delivered that to jamespage
<arosales_> jamespage then updated the saucy packaging and uploaded to the saucy archive
<arosales_> which was built by the lp builders, I assume saucy based
<arosales_> davecheney, then dput the packages from saucy into the ppa
<yolanda> is there anything i can do to help to catch the bug?
<mgz> arosales: I'm wondering if we didn't use the saucy binaries direct for the past ubuntu versions, but instead rebuilt with their golang copies... but I'd expect things to break much more dramatically if that happened
<arosales> mgz, so we have done initial testing on aws, hp, and azure with 1.14
<arosales> we were able to bootstrap there
<arosales> 1.14.0 that is
<mgz> and using precise images I presume
<arosales> mgz, correct
<arosales> precise on the server
<arosales> mgz, saucy failing here?
<mgz> nope, precise
<arosales> ok
<mgz> yolanda: give me your exact steps, in as minimal form as possible, and I'll try to reproduce
 * arosales not sure how simplestreams work in canonistack
<yolanda> mgz, let me pass you my environments file
<yolanda> without keys, of course
<arosales> There were some cloud-init issues but we thought there were specific to Azure
<arosales> yolanda, may want to have smoser take a look at the cloud-init log
<yolanda> arosales, smoser, link to logfile is pasted in this conversation
<yolanda> mgz, environments file is like that: http://paste.ubuntu.com/6124684/
<yolanda> then i just do a juju sync-tools, juju bootstrap
<mgz> yolanda: ta
<mgz> hm, there's some pyjuju junk in there but I assume that should just get ignored
<arosales> smoser, I think yolanda  is referencing http://paste.ubuntu.com/6124464/
<mgz> yolanda: what's your local juju binary version?
<yolanda> 1.12
<smoser> k.
<smoser> reading.
<yolanda> i can try updating juju
<mgz> let my try here first
<mgz> okay, so I can reproduce that
<mgz> manifests as a hang in status, which is joyous
<mgz> rogpeppe, yolanda: also fails with the old 1.12 binaries, so not 1.14 related
<rogpeppe> mgz: v glad you can reproduce
<yolanda> mgz, and do you have any clue on what happens?
<rogpeppe> mgz: i wonder if you can reproduce it by getting the cloudinit to run some other non-juju executable
<rogpeppe> mgz: (go executable)
<rogpeppe> mgz: the weird thing is that the same binary works later.
<mgz> yup, manually running the command that fails (and fixing up the --constraints arg), gets the machine in a usable state, and status works
<mgz> so, something at that point during boot is borked
<mgz> this is definitely a poke-IS moment
<mgz> yolanda: can you file a bug or rt?
<yolanda> mgz, sure, maybe an RT can be faster?
<rogpeppe> fwereade, mgz: PTAL
<rogpeppe> fwereade: i've addressed your points, i think, except that i couldn't work out what the Default remark was about
<mgz> okay, that acronym has me stumped :)
<rogpeppe> mgz: sorry, "please take another look"
<rogpeppe> mgz: (common in golang core dev, not here, i guess)
<rogpeppe> mgz: i added a test for the "abort stops without closing" logic, but it proved really hard to make it fail
<rogpeppe> mgz: so i left it untested (it's just an optimisation after all)
<rogpeppe> right, that's me for the day
<rogpeppe> g'night all
<mgz> changes lgtm
<rogpeppe> mgz: thanks
<yolanda> mgz, is that an issue of juju-core, cloud init?
<mgz> really hard to tell, but I suspect not juju-core, as it only started today, and the 1.12 binaries from a while back are also failing
<natefinch> jam, fwereade, mgz, rvba: any of you guys know how to connect to Garage Maas?  These instructions no longer appear to work: https://wiki.canonical.com/CDO/UbuntuServer/IOM-Lab
<jam> rvba: allenap, bigjools ^^
<yolanda> i just file an RT with the situation and logs then
<mgz> yolanda: thanks!
<sinzui> natefinch, are you satisfied with with windows client? Do you want to block the releases of 1.14.1?
<natefinch> sinzui: the windows client is fine, though currently it is marked as 1.14.0.  If we want one for 1.14.1 I'll need to rebuild the installer.
<yolanda> mgz https://eu1.salesforce.com/500D000000Uksl4
<sinzui> natefinch, understood. I think we want to start the release of 1.14.1, though the building of it would happen tomorrow by jamespage. I think you build the client at the time we extract the tools?
<natefinch> sinzui: I can build the client any time we decide that what is on the branch is what we want to release.... it's 100% orthogonal to the rest of the release process
<sinzui> natefinch, fab. The lp:juju-core/1.14 branch is NOT ready, but I will start on it.
<marcoceppi> wtf, why is juju putting .empty files in directories?
<natefinch> marcoceppi: do you have more detail?
<smoser> shoot. /me remembers
<smoser> am i still needed here?
<smoser> mgz, i suspect juju core.
<smoser> cloud-init ran juju's stuff happily.
<smoser> i'm not really sure why juju decided to print 7000 numbers.
<smoser> presumably each character in cacert as ascii
<natefinch> smoser: yeah, I've seen it do that
<marcoceppi> natefinch: install hook creates a "volumes" directory in the charm, next hook fails because it checks that directories for files, and in it is a .empty file
<marcoceppi> Wasn't created by the charm, shows up in all directories my charms create that are empty
<natefinch> marcoceppi: the code says that has something to do with using git with the charms
<marcoceppi> natefinch: ugh, this sucks and breaks a few charms in the store
<natefinch> marcoceppi: I don't know that area of the code at all... but it looks like that's not new code... though it's certainly possible it's being used in a new way
<marcoceppi> there are other ways to register empty directories in git other than putting a dang file in there
<natefinch> marcoceppi: yeah, not really the best way to do it.  should be easy enough to fix so that we don't do that.
 * marcoceppi files a bug
<ahasenack> guys, I just bootstrapped with juju trunk from earlier today, and I'm getting authorization errors in juju status
<ahasenack> anything known about that?
<ahasenack> lemme paste
<natefinch> marcoceppi: actually, it looks like you can't add empty directories in git... though honestly, it doesn't seem very useful anyway
<marcoceppi> natefinch: you can
<ahasenack> http://pastebin.ubuntu.com/6125398/
<natefinch> marcoceppi: ok. I was just going off a quick google search.  But if there's a way, then certainly it must be better than empty ugly files everywhere
<marcoceppi> natefinch: but either way, theres no need to track an empty directory
<marcoceppi> if it's empty, it's empty
<ahasenack> r1834
<natefinch> marcoceppi: yeah, it seems like you could just ignore it
<marcoceppi> natefinch: well, there was a way
<natefinch> ahasenack: let me try it here
<ahasenack> natefinch: I bootstrapped with --upload-tools
<ahasenack> if that matters
<natefinch> shouldn't... but you never know
<ahasenack> natefinch: I see a traceback in cloud-init.log, digging
<ahasenack> natefinch:
<ahasenack> runtime: panic before malloc heap initialized
<ahasenack> fatal error: runtime: cannot allocate heap metadata
<natefinch> well that sounds bad
<ahasenack> so the bootstrap node doesn't run on an instance with 512Mb of RAM anymore
<natefinch> that is pretty tight, but I don't know... I wouldn't have expected that to change
<ahasenack> can I set default constraints in environments.yaml?
<ahasenack> canonistack's default is 512Mb
<natefinch> hmm... I'm actually getting similar behavior on EC2
<natefinch> lemme check the cloud init log there
<natefinch> definitely wouldn't be a ram issue if it happens on EC2
<ahasenack> correct
<ahasenack> unless you are using a tiny instance, then maybe
<natefinch> it defaults to a small
<natefinch> which is what I used
<natefinch> I'm getting a different error, but still an error from cloud init
<ahasenack> cloud init backtraced, I had to check cloud-init-output.log
<natefinch> Yeah, I looked at both... possibly the memory issue you hit was during error handling, I don't know
<natefinch> I'll write up a bug. I don't know this part of the code, so there's not much I can do, but others who do know will be on soon
<ahasenack> so bootstrap failed for you in the end?
<natefinch> ahasenack: yeah
<ahasenack> that sucks
<natefinch> ahasenack: well, bootstrap appeared to complete, but then juju status got into the same loop yours got in
<natefinch> 2013-09-18 20:09:33 INFO juju.state open.go:106 connection established
<natefinch> 2013-09-18 20:09:33 INFO juju.state open.go:68 opening state; mongo addresses: ["ec2-54-221-155-244.compute-1.amazonaws.com:37017"]; entity ""
<natefinch> over and over
<ahasenack> no authorization errors?
<natefinch> yeah, one at the beginning, like in your paste
<ahasenack> natefinch: mine worked now
<ahasenack> used a bigger instance
<natefinch> hmm weird
<natefinch> so why is mine failing? :/
<ahasenack> natefinch: did you wait long enough? Mine failed for a while with auth errors, but then worked, I'm guessing stuff was still happening over at the bootstrap node
<natefinch> all this time and it's still not working. Weird. I'll kill it and retry
<ahasenack> k
<natefinch> ahasenack: same problem
<ahasenack> natefinch: what's the error in both cloud-init log files?
<natefinch> 2013-09-18 20:27:43 ERROR juju supercommand.go:235 command failed: state info or API info not found in configuration
<ahasenack> natefinch: only that?
<ahasenack> natefinch: what about mongo messages in /var/log/syslog?
<natefinch> ahasenack: well, that's the meat of the error, the rest is the python traceback
<natefinch> ahasenack:  yeah, a bunch of these:
<natefinch> Sep 18 20:28:20 ip-10-139-4-163 mongod.37017[6957]: Wed Sep 18 20:28:20 [initandlisten] connection accepted from 71.174.89.21:55993 #1 (1 connection now open)
<natefinch> Sep 18 20:28:23 ip-10-139-4-163 mongod.37017[6957]: Wed Sep 18 20:28:23 [conn1]  authenticate db: admin { authenticate: 1, nonce: "ec93b95260b929d0", user: "a
<natefinch> dmin", key: "b386a7e0cf28af1a46eca891235e2cc2" }
<natefinch> Sep 18 20:28:23 ip-10-139-4-163 mongod.37017[6957]: Wed Sep 18 20:28:23 [conn1] auth: couldn't find user admin, admin.system.users
<ahasenack> that sounds like an interrupted cloud-init script
<ahasenack> you can check in /var/lib/cloud/tabtab somewhere about the cloud-init files that juju gave this instance
<natefinch> so, I see this in my cloud-init.log, which sounds suspicious  - Sep 18 20:27:43 ip-10-139-4-163 [CLOUDINIT] cc_scripts_user.py[WARNING]: failed to run-parts in /var/lib/cloud/instance/scripts
<thumper> morning
<natefinch> afternoon
<natefinch> thumper:  windows installer is deployed to the website (again). This time with code that actually works.  So, high five for that.
 * thumper high fives natefinch
<natefinch> thumper: unrelated... I can't seem to bootstrap an ec2 instance using trunk.  was just going to write an email about it
<thumper> what error are you seeing?
<natefinch> thumper: error during cloud init - 2013-09-18 20:27:43 ERROR juju supercommand.go:235 command failed: state info or API info not found in configuration
<thumper> hmm...
<natefinch> and some errors from mongo in syslog- Sep 18 20:30:30 ip-10-139-4-163 mongod.37017[6957]: Wed Sep 18 20:30:30 [conn350] auth: couldn't find user admin, admin.system.users
<thumper> interesting
<thumper> not sure what happened there
<thumper> I disconnected myself
<natefinch> heh
<natefinch> thumper: you missed this - and some errors from mongo in syslog- Sep 18 20:30:30 ip-10-139-4-163 mongod.37017[6957]: Wed Sep 18 20:30:30 [conn350] auth: couldn't find user admin, admin.system.users
<thumper> huh
 * natefinch shrugs
<thumper> yes, send the email
<ahasenack> natefinch: the run-parts bit, what are the scripts in /var/lib/cloud/instance/scripts ? I think it means one script in there failed
<ahasenack> it's the bootstrap-state command that failed
<thumper> natefinch: did you upload tools?
<natefinch> thumper: yep
 * thumper goes back to thinking
<natefinch> thumper: http://pastebin.ubuntu.com/6125667/
<natefinch> output from bootstrap
<thumper> natefinch: yeah, reading through the email
<natefinch> thumper: it says it found existing jujud... sorta surprised it found jujud, actually
<thumper> natefinch: ah, could be in issue with the jujud you have built locally
<natefinch> thumper: yeah I was just thinking that
<natefinch> thumper: if it's an old version
<thumper> natefinch: can you do an install
<natefinch> thumper: yep
<natefinch> thumper: retrying, but I need to get going... stuff to do before dinner. I'll see if this works better.
<thumper> ok
<thumper> np
<natefinch> thumper: worked fine with the correct jujud.... false alarm :)
<thumper> ok, cool
<natefinch> g'night
<Beret> t
<thumper> morning wallyworld
 * thumper has been busy reviewing
<bigjools> hello
<thumper> o/
<thumper> gym time
#juju-dev 2013-09-19
<axw> thumper-gym: re Preflight, do you have a preference for splitting it into two methods, one for creating an instance, one for a container on an instance? Then the instance.Instance parameter is never optional.
<thumper> axw: sounds very reasonable
<wallyworld> thumper: so i missed you before, i was stuffing my face. i got that big branch landed :-)
<thumper> axw: for example, the null provider can't create machines, but could perhaps create containers
<thumper> axw: what did you think of my method name suggestions?
<thumper> wallyworld: good to get it landed
<axw> thumper: I quite like Vet, and Precheck isn't bad
<axw> not so keen on Probe
<axw> or Review
<thumper> PrecheckCreateMachine, PrecheckCreateContainer
<thumper> ?
<axw> yeah I think that sounds decent
<axw> thumper: do you think different interfaces are necessary, or is that overkill?
<axw> MachinePrechecker, ContainerPrechecker... *shrug*
<thumper> I don't mind either way
<thumper> chances are
<thumper> that every provider will end up having it anyway
<thumper> for some precheck stuff
<axw> yeah true
<axw> I'll just keep it in one then
<thumper> ok
<axw> cool
<axw> thanks!
<wallyworld> thumper: if you are tired of coding, i have a small piece of work which uses the new storage stuff to set tools retries when needed. but it can wait if you have other things to do https://codereview.appspot.com/13577045/
<thumper> tired? I haven't even started
<wallyworld> oops, sorry
 * thumper looks
<wallyworld> you don't have to
<thumper> did
<wallyworld> thumper: with bool params, i hear you but they are used everywhere. and there are others in the tools area and felt consistency was better here
<thumper> I know... but I still feel the need to push back
<thumper> what about type AllowRetry bool
<wallyworld> alright
<thumper> and then two const values
<thumper> it doesn't have to be exactly that
<thumper> but play and see what feels good
<wallyworld> yeah, i know the pattern to use. just felt wrong only changing one instance of it
<wallyworld> i'll finish my current branch and go back to it soon
<hazmat> anyone know where we are filing doc bugs?
<davecheney> hazmat: juju-core docs series isn't a bad start
<davecheney> they all kind of go to the same place/people anyway
<hazmat> yeah..
<hazmat> davecheney, thanks
<davecheney> % juju scp gccgo/0:/home/ubuntu/gccgo.tar.bz2 .
<davecheney> ^ :heart juju
<hazmat> fwereade, curious what you think of this https://bugs.launchpad.net/juju-core/+bug/1227450
<_mup_> Bug #1227450: juju does not retry provisioning against transient provider errors <juju-core:New> <https://launchpad.net/bugs/1227450>
<hazmat> fwereade, it seems with 1.14 things are a bit better for recovering without waiting (ie i can kill machines and services/units without the unit agent coming online)... but ideally the transient would just be auto'd
<thumper> wallyworld: coming?
<wallyworld> not 1:30 yet
<wallyworld> still 10s to go
<axw> thumper: wallyworld can you hear/see me in the hangout?
<thumper> no
<thumper> try this one axw wallyworld https://plus.google.com/hangouts/_/0d2e7f7c6d9229ef8ed15d6c8f7ff08b0de146cc?hl=en
<bradm> davecheney: any luck on those charm reviews? :)
<davecheney> bradm: i am really sorry
<davecheney> i know they are reviewed but have not made it into the charm store
<davecheney> i will try to find out (again) what is going on
<bradm> davecheney: no worries, I'll just prod you every now and then about it :)
<davecheney> bradm: i will follow up with marcoceppi
 * thumper twiddles thumbs waiting for lbox before going to make coffee
 * wallyworld looks at thumper's review
 * thumper noticed something wrong
<thumper> poo
 * thumper leaves it for now as an exercise for wallyworld to find
 * thumper -> coffee
 * thumper needs to help with early dinner
<thumper> back for meetings tonight
<rogpeppe> mornin' all
<davecheney> o/
<rogpeppe> davecheney: hiya
<hazmat> just ran into my first real world user who has a default vpc
<davecheney> zomg
<davecheney> how did it go
<davecheney> ?
<hazmat> davecheney, it didn't work with juju .. throws an error
<hazmat> thumper-afk, how do you remove a container on a machine
<davecheney> hazmat: poop
<bradm> is there some best practice about how to ship updated code to juju instances via a charm?  right now I've got to update the stuff from python-moinmoin to do openid properly, and I'm shipping files, which just feels like a terrible unmaintainable hack.
<bradm> I could do patches, but thats only slightly less of an unmaintainable hack
<bradm> I could try to get the code upstream, but that won't always happen
<davecheney> bradm: if you're talking about including code inside the charm
<davecheney> that is one way
<davecheney> otherwise you could have the charm bzr branch something
<davecheney> but putting the code inside the charm will at least ensure that all units of that service get the same copy
<bradm> davecheney: yeah, basically I need to patch the python code that the python-moinmoin deb puts on disk for openid
<davecheney> sounds nasty any way you cut it
<bradm> lp:~brad-marshall/charms/precise/python-moinmoin/python-rewrite
<davecheney> quite a few charms have a configuration setting to control adding a ppa
<davecheney> so if you have that config set
<bradm> thats what I've gotten so far, it does seem to work ok, I've got an instance up on canonistack
<davecheney> the install hook will use the ppa version
<bradm> ooh, that could work
<bradm> it feels like a less terrible hack
<davecheney> bradm: the wordpress charm is a good place to look
<bradm> davecheney: ta, I'll have a look at that, it feels like at least then I'll know it'll all work together
<bradm> it does mean I'd have to keep on top of new versions of moin that come out, and have a way of getting people to update
<davecheney> forks suck
<bradm> ah, a downside of that is you have to assume egress access too, which wouldn't work in our environment
<bradm> maybe I should just try and get the code upstream, thats the least sucky option
<rogpeppe> anyone for a review? https://codereview.appspot.com/13778043
<yolanda> hi, any update on the juju failure from yesterday?
<rogpeppe> yolanda: hi
<yolanda> hi rogpeppe
<rogpeppe> yolanda: i got a question that you may be able to answer
<yolanda> tell me
<rogpeppe> yolanda: do you still have access to an instance that this happened on?
<yolanda> rogpeppe, i destroyed it
<yolanda> but i can create a new one with the same problem
<rogpeppe> yolanda: that would be great. i'd like to find out the contents of /proc/sys/vm/overcommit_memory
<yolanda> ok, just a moment
<rogpeppe> yolanda: thanks
<yolanda> i need to have a working juju for the work i'm doing now so i'm glad to help
<rogpeppe> yolanda: for the record, here's the question i asked, and its reply from one of the Go core team: http://paste.ubuntu.com/6127199/
<yolanda> shall i try with a bigger instance?
<rogpeppe> yolanda: if you can, that would be an excellent thing to try, yes
<rogpeppe> yolanda: but i'd like to see the overcommit_memory thing on an instance where the problem has happened
<rogpeppe> yolanda: so that we can try to pin down the issue
<yolanda> ok, still bootstrapping, it takes time to sync tools
<yolanda> rogpeppe, /proc/sys/vm/overcommit_memory = 0
<rogpeppe> yolanda: ah ok, i think that probably settles it
<rogpeppe> yolanda: thanks
<rogpeppe> yolanda: i guess that too much is going on at bootstrap time
<rogpeppe> yolanda: perhaps we should run juju as root
<yolanda> rogpeppe, shall i try with a bigger instance to rule out memory?
<rogpeppe> yolanda: yes please - i predict it'll work fine on a bigger instance
<yolanda> let me try
<yolanda> by default it uses 512mb right?
<rogpeppe> yolanda: i don't think there's any default - it probably just chooses the smallest available
<yolanda> trying with 1024
<rogpeppe> mgz: ping
<yolanda> rogpeppe, with 1gb of memory works
<yolanda> so yes, it's a ENOMEM issue
<rogpeppe> yolanda: ok, thanks, at least we know where we are now
<yolanda> but this was working until yesterday afternoon, could it be related to some update that was done?
<rogpeppe> yolanda: perhaps something different is being done at bootstrap
<rogpeppe> yolanda: which uses more memory
<yolanda> i'll deploy a service with 1gb of memory now to test
<rogpeppe> yolanda: i'm afraid i don't know much about the details of how the openstack instances are set up
<yolanda> at least the defaults now should be set to 1GB to work
<rogpeppe> yolanda: yes. or we could set overcommit_memory to 1, i guess
<rogpeppe> yolanda: or, even better, work out what's taking all that memory at bootstrap time :-)
<rogpeppe> yolanda: go does grab a lot of VM at init time, although it doesn't touch it until it actually needs it
<yolanda> rogpeppe, but that's not directly related to juju, but to cloud-init, right?
<rogpeppe> yolanda: well, juju is written in Go, so the VM issue is partially Go-related
<rogpeppe> yolanda: but what's running at bootstrap time is indeed cloud-init and image-related
<yolanda> rogpeppe, i was asking that to try to file a bug, but i'm not sure on how to place it
<rogpeppe> yolanda: file it against juju, because there are several possible ways to solve the problem, not all of them in cloud-init.
<yolanda> rogpeppe, ok
<jam> yolanda, rogpeppe: If you are not overcommitting memory, and go is asking for a bunch that it will use later (but doesn't touch yet), doesn't that cause this issue?
<rogpeppe> jam: indeed it does
<jam> or is the go request "give me address space" different from "allocate some memory to me"
<jam> rogpeppe: I know internally we have a default "minimum memory" of 1GB for VMs.
<jam> though I think you can pass your own constraints to override it.
<rogpeppe> jam: i'm surprised that when yolanda used a 1GB constraint, that it caused it to work
<jam> rogpeppe: it might depend on which provider
<yolanda> works like a charm now
<rogpeppe> jam: if we do have that minimum memory default constraint
<rogpeppe> jam: yeah. are we getting this stuff from simplestreams now? i wonder if something changed there.
<rogpeppe> yolanda: which charm would that be? :-)
<jam> rogpeppe: environs/instances/instancetype.go: "minMemoryHeuristic = 1024"
<jam> which I'm pretty sure is in MB
<rogpeppe> jam: if the instance types were somehow reporting the wrong value, then it might be choosing the wrong instances, i suppose
<rogpeppe> jam: hmm, but then the change in constraints value wouldn't affect it.
<jam> rogpeppe: theoretically. It is possible that we are getting our units wrong. Do you know what provider this is?
<rogpeppe> jam: this is on canonistack
<rogpeppe> jam: and the issue only just started happening
<rogpeppe> jam: so *something* has changed recently
<jam> rogpeppe: do you know if there was any change like lcy01 => lcy02 ?
<jam> 01 runs Openstack F while 02 runs Grizzly, IIRC
<rogpeppe> jam: the same issue occurs in both, afair
<yolanda> rogpeppe, https://bugs.launchpad.net/juju-core/+bug/1227533
<_mup_> Bug #1227533: Juju fails to bootstrap if memory is lower than 1GB <juju-core:New> <https://launchpad.net/bugs/1227533>
<rogpeppe> yolanda: thanks
<jam> yolanda: are you passing a constraint? or just no constraints?
<yolanda> jam, now i pass a constraint for 1024mb, without constraints it fails
<yolanda> i tried with canonistack only, but i think it was failing under other environments
<jam> yolanda: strange, because I bootstrapped to canonistack 2 days ago, and things are working. I'll see if I can reproduce
<yolanda> jam, yes, it started to fail yesterday afternoon
<yolanda> i've been playing with juju and canonistack for weeks
<jam> yolanda: I wonder if a new instance type showed up in the catalogue or something.
<yolanda> it's grabbing from simple-streams, so maybe, yes
<jam> yolanda: so instance types are in the openstack Flavor catalogue. Vs *image* which is from simplestreams
<jam> Image == Ubuntu 12.04
<jam> Instance == m1.small
<jam> yolanda: it is possible that we got a new image, too, which might have changed overcommit
<yolanda> jam, do you have some booted machine previous to the failure? then you can check for the overcommit
<jam> yolanda: I do, give me a sec
<jam> my instance has overcommit_memory = 0 as well
<jam> free says I have 512MB, though
<yolanda> same as mine that were failing
<jam> top agrees
<jam> so it is always possible that canonistack itself is under mem pressure because of a lot of people starting instances, etc. But if it is reliable and not intermittent then I doubt that is the issue
<yolanda> jam, it's not intermittent, happens all the time since yesterday, in the 2 zones, at different hours
<jam> yolanda: so I'll try quickly with 1.14 that is in the stable ppa, I don't know if I can get 1.12
<yolanda> jam, i tested with 1.14 today
<yolanda> same problem, now i'm using that
<jam> rogpeppe: so at the least, we have a bug that we think we use a min-memory heuristic of 1GB but don't
<yolanda> yesterday i tried with 1.12
<rogpeppe> jam: well, i think that's true - but it would be good to check the actual memory of the provided instance
<jam> rogpeppe: I did, the instance I have running for a couple of days is 512MB
<rogpeppe> jam: cool
<jam> rogpeppe: And I just bootstrapped now and got the same 512MB
<rogpeppe> jam: i've got a proposal for review, if you fancy taking a look: https://codereview.appspot.com/13778043
<fwereade> rogpeppe, are you expecting https://codereview.appspot.com/13493044/ to go anywhere, or are you dropping it? I've kinda forgotten context from before this week
<rogpeppe> fwereade: ^
<fwereade> rogpeppe, I'm just going through from the top today, trying to approach the bottom
<jam> rogpeppe: I do see the failure like yolanda mentioned, Cloud init reports that something failed, and I have "fatal error: runtime: ..."
<fwereade> rogpeppe, but it's also my meeting day, so...
<rogpeppe> jam: ah, so juju failed to start
<jam> rogpeppe: right
<fwereade> rogpeppe, also https://codereview.appspot.com/13512051/
<axw> fwereade: thansk for https://codereview.appspot.com/13635044/ I'm just waiting for sshstorage to land first.
<rogpeppe> fwereade: ah, i'll submit the latter; the former can wait, i think
<axw> thanks*
<thumper> rogpeppe: hey
<rogpeppe> thumper: yo
<jam> rogpeppe: https://codereview.appspot.com/13512051/ I think Tim has a branch that changes some names, so it will conflict (eventually), otherwise seems ok
<jam> fwereade: https://bugs.launchpad.net/juju-core/+bug/1227450 Are we intending that if the first request to Provision fails, we will ever try again?
<_mup_> Bug #1227450: juju does not retry provisioning against transient provider errors <juju-core:New> <https://launchpad.net/bugs/1227450>
<jam> I thought we intentionally weren't restarting things people manually stopped
<jam> which sort of falls into a similar category
<fwereade> jam, the original intent was that we hook up juju resolved
<jam> fwereade: so you could run "juju resolved" and it would try to provision again?
<fwereade> jam, yeah, that's the idea
<dimitern__> fwereade, hey, I see you didn't have time for this - can you take a look now please? https://codereview.appspot.com/13501051/
<fwereade> dimitern__, I'm ...2 away from it in the list
<dimitern__> fwereade, ok
<fwereade> dimitern__, thanks for your patience
<dimitern__> fwereade, no rush, just reminding :)
<thumper> fwereade: coming?
<hazmat> is there a tag for 1.14?
<jam> mgz: can you link me the document?
<jam> thumper, fwereade: 1 min to meeting
<jam> mgz: fwereade, rogpeppe: https://codereview.appspot.com/13562045/ is the critical fix for Openstack security groups
<mgz> jam: sorry, was paged up
<rogpeppe> jam: i don't think it would be too hard to add a live test that checked exposing
<rogpeppe> jam: but it would be slow to run
<rogpeppe> jam: unless we jammed it in the kitchen sink of BootstrapAndDeploy
<jam> rogpeppe: we don't have any tests today that run a custom program on the remote machine, so I didn't have much to go off of. It would be possible, but I'd rather land the fix and have done manual testing.
<rogpeppe> jam: agreed, good to land the fix, but please file a bug
<jam> rogpeppe: https://bugs.launchpad.net/juju-core/+bug/1227586
<_mup_> Bug #1227586: cross-provider test that we don't expose non-juju service ports <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1227586>
<rogpeppe> jam: thanks
<jam> rogpeppe: note also that we have bug #1217595 which means that "juju upgrade" won't fix the security groups that exist, and "juju destroy-environment" doesn't delete security groups so "juju destroy-environment && juju bootstrap" will leave you vulnerable, but I'm thinking to fix #1217595
<_mup_> Bug #1217595: security groups reused without ensuring validity <canonical-webops> <juju-core:Triaged> <https://launchpad.net/bugs/1217595>
<jam> bug #1227588
<_mup_> Bug #1227588: destroy-enviroment does not delete security groups <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1227588>
<rogpeppe> jam: i don't think that the old security groups pose a security hazard, on ec2 anyway
<rogpeppe> jam: just a resource problem
<rogpeppe> jam: because the code ensures that if old security groups exist, they have the correct permissions
<rogpeppe> jam: ah, i hadn't seen 1217595
<rogpeppe> jam: do you know what provider that's using?
<jam> rogpeppe: openstack at least
<rogpeppe> jam: i remember writing logic in the ec2 provider that specifically tried to cope with that case
<jam> rogpeppe: note that I think we copied that behavior from ec2
<jam> it detects a duplicate, though I'm trying to see if it updates it
<jam> "if err == nil { g = resp.SecurityGroup"
<rogpeppe> jam: see ec2.environ.ensureGroup
<jam> I do see it doing a set stuff
<jam> to find what to revoke
<rogpeppe> jam: it looks like the openstack provider doesn't do the right thing
 * rogpeppe gets some lunch
<mgz> rogpeppe: that was changed in codereview.appspot.com/11655043
<mgz> the alternative is using the same fiddly stuff that ec2 does with security groups
<mgz> really the code should be unified regardless
<jam> mgz: I think we should move the permSet into a shared module and both use it
<jam> mgz: right, so I'm fine with doing a Get before we do Create to avoid the duplicate-with-quantum bug
<jam> but we can still take the thing we have and do a set diff to figure out what to put on it
 * TheMue => AFK
 * thumper -> bed
<dimitern__> fwereade, review poke :)
<rogpeppe> mgz: it doesn't look like the openstack provider ever revoked security group rules, even before that change
<rogpeppe> dimitern__: i'll do yours if you do mine: https://codereview.appspot.com/13778043
<dimitern__> rogpeppe, looking
<dimitern__> rogpeppe, reviewed
<rogpeppe> dimitern__: ta!
<dimitern__> rogpeppe, fwereade: I have 2 CLs that need reviews please https://codereview.appspot.com/13501051/ and https://codereview.appspot.com/13627051
<rogpeppe> dimitern__: i'm already on the first one
<dimitern__> rogpeppe, great
<jam> dimitern__ how many underscores do you need ? :) Is it possible to share the Machine object somewhere, it sure feels repeated.
<dimitern__> jam, not sure what you mean?
<mgz> about the underscores or the Machine? :)
<dimitern__> it's how we agreed to do the api
<jam> for underscores, there appear to be 3 of you
<dimitern__> my irc client surely acts funny today
<mgz> ah, so that's his trick
<jam> for Machine, it looks like a tiny object with 2 bits of state and a callable that has a few apis on it. If it is actually different than the other ones thats fine
<dimitern__> jam, they're all different
<dimitern__> jam, and expose only a subset of the state.Machine methods, as needed by each worker
<yolanda> hi, using the config_changed hook, what is the best way to tell if a config var has changed since the previous invocation? we receive the hook on each config change, but i'd like to take some action only if a given var has a different value. Maybe store the old value some place?
<dimitern__> yolanda, you could save the old value at each config_changed call, and then use it in the next
<dimitern__> yolanda, that will of course trigger a config_changed on the remote unit as well
<jam> dimitern__: Id() Life() and Tag() sure seem common, but we can deal with that some other time, I guess
<dimitern__> jam, we can yes
<yolanda> dimitern__, you mean, saving on a file? not sure if i follow you
<dimitern__> yolanda, I mean doing relation-set oldValue=X, but you can use a file as well
<dimitern__> yolanda, that way the remote config_changed hook won't be triggered
<yolanda> dimitern__, maybe a file is simpler, then i avoid double relation hook call
<dimitern__> yolanda, yes
<yolanda> just i was wondering if juju methods had something to retrieve previous val, may be useful
<rogpeppe> yolanda: i sometimes wonder if we should provide a standard easy way to store persistent state across hook invocations
<rogpeppe> yolanda: because everyone reinvents their own wheel there
<dimitern__> yolanda, juju doesn't provide that at the moment
<yolanda> rogpeppe, dimitern__, sounds like a useful feature
<dimitern__> rogpeppe, something like relation-save key=value
<rogpeppe> dimitern__: there's not really anything relation-oriented about it
<dimitern__> rogpeppe, config settings are stored per relation
<dimitern__> rogpeppe, so it makes sense to have a way to store them locally with a hook command, but the name could be anything, yeah
<rogpeppe> dimitern__: i'm not necessarily talking about config settings - just persistent state that one hook invocation can set to let another one see
<rogpeppe> dimitern__: tbh it's probably just a matter for better standard bash tooling - not something that juju-core needs to be involved in
<dimitern__> rogpeppe, should these things be stored in state as well?
<rogpeppe> dimitern__: definitely not
<dimitern__> rogpeppe, i agree
<rogpeppe> dimitern__: https://codereview.appspot.com/13501051/ reviewed
<dimitern__> rogpeppe, thanks!
<jam> mgz:  did you get a chance to look at: https://codereview.appspot.com/13562045/ ?
<jam> or rogpeppe. You made some comments on IRC, but didn't comment on the CL from what I can see
<rogpeppe> jam: oh, sorry, i got distracted.
<jam> rogpeppe: I'm looking at dimitern's second CL
<rogpeppe> jam: cool, thanks - i was some way through it, but happy for you to do it
<jam> rogpeppe: well certainly submit what you've gotten through
<rogpeppe> jam: i had no comments yet
<mgz> jam: looked at, not gone through all the test stuff yet
<mgz> will do that now
<rogpeppe> jam: reviewed
<jam> dimitern: https://codereview.appspot.com/13627051/ reviewed
<jam> natefinch: you should be doing some sort of "ssh shared@maas.mallards"
<jam> eg Username == shared
<natefinch> jam: oh.. duh
<natefinch> jam: I guess I expected the config "user" setting to do that for me, but then failed to realize it obviously wasn't
<natefinch> jam: using shared@  works perfectly
<jam> natefinch: it should
<jam> were you accidentally doing nate@ ?
<natefinch> jam: nope
<natefinch> jam: maybe  it just wasn't picking up the config
<jam> natefinch: did you put it into config.personal?
<jam> I don't *think* that file is read by default (it has to be imported somehow)
<natefinch> jam: yep
<natefinch> I reran sshebang afterward...
<jam> natefinch: I think it is ok to use garage maas without calendar as long as you aren't trying to allocate all 16 machines, and cleanup after yourself
<natefinch> actually I see User nate being specified for *.mallards in config.personal above there.... depending on which one wins
<jam> natefinch: your test cases are just provisioning 1-2 machines so it should be reasonably well behaved.
<jam> natefinch: I *think* first-entry wins
<natefinch> that would do it.  easy enough to try
<natefinch> jam:  yep, first one in wins
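For reference, ssh_config uses the first obtained value for each option, so block order decides which `User` wins; a hypothetical fragment matching the setup described above (hostnames illustrative):

```
# ~/.ssh/config: for each option the FIRST match wins, so the more
# specific Host block must come before the wildcard.
Host maas.mallards
    User shared

Host *.mallards
    User nate
```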
<natefinch> writing wiki docs makes me happy
<dimitern__> rogpeppe, fwereade, updated https://codereview.appspot.com/13501051/
<fwereade> TheMue, I can't find where your auto-sync-tools code is, did something happen to it?
<TheMue> fwereade: have to look myself
<TheMue> fwereade: should be this one https://code.launchpad.net/~themue/juju-core/035-bootstrap-autosync merged on Aug, 2nd
<TheMue> fwereade: and found it in trunk
<fwereade> TheMue, yep, just did likewise, thanks
<TheMue> fwereade: yw
<fwereade> TheMue, was looking right past it, sorry
<TheMue> fwereade: np, as long as my answer is positive (and i haven't missed to merge it *phew*)
<rick_h_> niemeyer: ping, any luck with more details on the issue?
<niemeyer> rick_h_: Hey
<niemeyer> rick_h_: Not really, unfortunately
<niemeyer> rick_h_: Now that I'm consciously trying to replicate it I can't either
<gary_poster> thanks rick_h_ .  yeah niemeyer, we want to stomp this but can't dupe :-(
<hatch> yes very frustrating
<rick_h_> niemeyer: ok, we've had 4 people try to replicate without success so far. If we can find something let me know and we'll jump right on it. Where were you linked from when it died?
<niemeyer> rick_h_: I've accessed it directly
<niemeyer> rick_h_: It actually stopped right on entrance the first time
<rick_h_> niemeyer: right, your second email said you were linked there?
<niemeyer> rick_h_: I was trying to guess the URL of a charm
<niemeyer> rick_h_: So both times were hand-written
<niemeyer> rick_h_: Both times CTRL-R solved the issue
<rick_h_> niemeyer: ok, hmm. So maybe some race in a deeper url. Ok, well that gives more info to go on trying to replicate
<hatch> niemeyer: on entrance - do you mean, 'loading juju-gui' ?
<niemeyer> hatch: No, just typing jujucharms.com
<gary_poster> :-/
<niemeyer> rick_h_: I wouldn't be surprised if it's something timing out
<hatch> niemeyer: sorry - I mean, there are two loading messages, one is for the assets and one is for the actual juju connection
<niemeyer> hatch: Hmm, ok?
<hatch> sounds like you're getting hung up on the connection
<niemeyer> hatch: I'm not using any juju environments
<niemeyer> hatch: Well, that I know of
<hatch> yeah it still connects on sandbox
<gary_poster> (to an in-browser-memory fake juju)
<niemeyer> I see, ok
<hatch> yeah that :D
<niemeyer> So, a timeout might explain why it failed on the first try, when all the caches on my path were cold
<hatch> yeah oddly enough that should never happen
<hatch> I have some ideas to track this down though
<gary_poster> right, we need console messages
<niemeyer> Okay, I don't have them now, but if it happens again I'll try them
<gary_poster> thank you very much niemeyer.  ok hatch, cool, glad you have some ideas.  I was going to suggest that we reply to Gustavo's message with a request for anyone who encounters this issue to please get in touch with us *before* reloading.
<gary_poster> I'll send that out quickly
<niemeyer> gary_poster: Sorry about that.. it was kind of stupid.. I should have known to observe the console for any hints
<gary_poster> np thanks for raising it niemeyer.  at least we know there's a likely issue somewhere
<hatch> damn schrodinbugs
<rogpeppe> fwereade: hmm, i just noticed that checkers.Set was renamed to testing.PatchValue. i'm not that keen, as the reason for it being in jc was because it has minimal dependencies so that it can be used from internal tests without fear of import loops.
<rogpeppe> fwereade: i thought it's worth discussing before i propose a move back though.
<mramm> is the lean kit board reasonably up to date for me to look through it for things that have already landed but are not yet in a release?
<mramm> fwereade: rogpeppe: dimitern__: TheMue: ^^^^
<rogpeppe> mramm: i did put a Doing card on earlier, but tbh I haven't looked at it much since we stopped doing daily kanban runthroughs
<mramm> so how are we managing work now?
<rogpeppe> mramm: that's bad, i know, sorry
<mramm> tickets, lkk, whatever -- something needs to be there
<rogpeppe> mramm: yeah. i thought that kanban board discussion gave a good focus actually.
<mramm> well, that is for you guys to sort out
<mramm> for now I guess I'll try to just look through the commits that have been merged to trunk and go from there...
<dimitern__> mramm, mine is always up-to-date
 * rogpeppe polishes dimitern__'s halo :-)
<dimitern__> :P
<rogpeppe> dimitern__, natefinch, TheMue: do you have any opinions about jc.Set vs testing.PatchValue ?
<rogpeppe> dimitern__, natefinch, TheMue: (the latter being a new name for the former)
<fwereade> rogpeppe, I confess a mild preference for testing.PatchValue
<rogpeppe> fwereade: i'd like it not to be in testing
<rogpeppe> fwereade: because of the dependency issue
<rogpeppe> fwereade: and how about just Patch ?
<fwereade> rogpeppe, jc.Patch definitely > jc.Set
<fwereade> rogpeppe, I'd +1 that
<rogpeppe> fwereade: cool
<natefinch> yeah, I like patch
<natefinch> Patch implies temporary, set implies permanent
 * TheMue votes for Patch too
<dimitern__> +1 for Patch
<marcoceppi> How is this possible? http://i.imgur.com/7x5eYib.png
<mgz> does TestManageStateServesAPI failing on the bot mean anything to anyone?
<marcoceppi> Ran debug-hooks during an install error hook, ran resolved --retry, trapped both hooks?
<marcoceppi> 1.14.0, should this be a bug or is this expected, or what?
<mgz> hm, bug 1219661
<_mup_> Bug #1219661: TestManageStateServesAPI is flakey <test-failure> <juju-core:Fix Committed> <https://launchpad.net/bugs/1219661>
<mgz> landed on trunk only
<bac> hi, go question:  for my juju-core branch, bzr shows i'm out of date by many revisions.  but 'go get -v launchpad.net/juju-core/...' does nothing.  what gives?
<rogpeppe> fwereade: one thought on <environmentname>.yaml vs bare <environmentname>, how about a ".juju" extension? that way the files are readily identifiable when sent around the network, and could even be potentially double-clicked to open
<bac> gah, nm s/v/u/
<mgz> rogpeppe: can I have a rubber stamp on cl 13722051 please?
<rogpeppe> mgz: stamped
<mgz> ta
<rogpeppe> fwereade: ping
<fwereade> rogpeppe, pong
<fwereade> rogpeppe, ah, sorry, missed the above
<rogpeppe> fwereade: just wondering how you feel about the above possibility
<fwereade> rogpeppe, it's somewhat interesting
<rogpeppe> fwereade: using an extension means that we can be sure that temp files don't clash too
<rogpeppe> fwereade: (we don't currently have any restriction on environment names)
<fwereade> rogpeppe, yeah, that's not a bad idea... but, hmm, .juju doesn't feel quite right
<fwereade> rogpeppe, if anything, .env or something
<rogpeppe> fwereade: .jujuenv
<fwereade> rogpeppe, not totally in love
<rogpeppe> fwereade: neither me
<fwereade> rogpeppe, anyway, sorry, I have a call starting
<rogpeppe> fwereade: can i leave .yaml for the time being, and we can bikeshed it when we actually hook up the Write code?
<fwereade> rogpeppe, ok, sgtm
<mgz> arosales, sinzui: landed the required fix on 1.14 branch
<sinzui> mgz thank you
<arosales> mgz, thanks. Do you have authority to get sinzui access to the go-bot creds?
<arosales> needed to make a release
<sinzui> ^ you are a faster typer than I am
<mgz> I can't add his key to the launchpad account, I can stick a tag on for him today though
<sinzui> mgz, that helps.
<sinzui> mgz, we also need a 1.14.1 tag on goose at the same rev as 1.14.0
<TheMue> fwereade: ping
<sinzui> mgz, I want to try using dependencies.tsv for the 1.15+ releases. No more tags
<fwereade> TheMue, listening, but in a meeting
<mgz> that would be a good thing
<TheMue> fwereade: you had a good nose
<TheMue> fwereade: see http://paste.ubuntu.com/6128903/
<mgz> sinzui: changing the tarball script is easy, the harder half is making the bot do the same, so things actually break if people forget to bump a dep
<fwereade> TheMue, nice
<TheMue> fwereade: have to see now where exactly it crashes, but it dislikes those statements in the data
<TheMue> fwereade: i think it's the $where
<sinzui> mgz. yeah, I thought as much. I intended to try a ci approach where the tarball is used as a source for the tests. Release candidates fail if the tarball doesn't build
<mgz> ugh, 1.14 is impressively diverged from trunk
<mgz> sinzui: I poked tarmac a little to try that approach, but it doesn't fit in well with the current use of the bzr plugin
<mgz> just need some time to re-look at the bot setup really
<mgz> and understand how tarmac wants to work a little
<sinzui> mgz. Understood. We are building a plugin/subordinate charm for the jenkins charms that acts as a tarmac gatekeeper and test runner.
<sinzui> We have nothing against tarmac, but it was just one piece that was failing in our setup of charmworld testing
<TheMue> fwereade: not half as wild as feared, it was only the . in the map key. mongo dislikes dots there. eval or db.eval as values are not executed
<rogpeppe> dimitern__, fwereade, mgz, TheMue: a small change:  https://codereview.appspot.com/13786043
<dimitern__> rogpeppe, looking
<rogpeppe> dimitern__: thanks
<TheMue> rogpeppe: reviewed
<rogpeppe> TheMue: ta!
<dimitern__> rogpeppe, me too
<rogpeppe> dimitern__: ta to you too!
<mgz> rogpeppe, fwereade: stamp on cherrypick sg fix to trunk cl  13787043 please!
<rogpeppe> mgz: done
<rogpeppe> natefinch, dimitern__, fwereade, mgz, TheMue: another small one: https://codereview.appspot.com/13341051
<mgz> rogpeppe: I'll run the live tests on that branch
<rogpeppe> mgz: thanks
<rogpeppe> mgz: i don't think the live tests verify the expose semantics currently, so probably worth doing a sanity check there too, if poss
 * lamont has what may be a stupid question...
<lamont> To connect to ... insecurely, use `--no-check-certificate'.
<lamont> where do I say '--no-check-certificate' for juju (-core) bootstrap to see and use it?
<mgz> lamont: juju-core doesn't use ssh to connect, you may want bug 1202163
<_mup_> Bug #1202163: openstack provider should have config option to ignore invalid certs <cts> <papercut> <Go OpenStack Exchange:Confirmed> <juju-core:Triaged by jameinel> <https://launchpad.net/bugs/1202163>
<TheMue> rogpeppe: reviewed, like it
<rogpeppe> TheMue: thanks
<natefinch> rogpeppe: is there a NewApiClient that is not from a name? Wondering why you need to say "fromName" when it clearly takes the name as an argument?
<TheMue> so, will step out now. see you on monday (tomorrow holiday)
<rogpeppe> natefinch: i didn't invent the name and i try to avoid repainting sheds
<rogpeppe> natefinch: i agree in principle though
<rogpeppe> TheMue: see ya
<natefinch> natefinch: oh, sorry, I should have scrolled down, missed that it was there before
<TheMue> have a good night
<lamont> mgz: it's trying to talk to swift, using https because I don't like cleartext tokens and all that
<lamont> but the CA cert for the signing cert is not in the ca-certs package
<lamont> and I'm too cheap to go buy a cert for this particular venture
<lamont> + tar xz -C /var/lib/juju/tools/1.12.0-precise-amd64
<lamont> + wget --no-verbose -O - https://swift....
<lamont> so what I actually need to know is how to tell whatever is calling wget to say wget --no-check-certificate ...
<lamont> which may just mean hacking over the tarball
<lamont> except it's fetching the tarball that is failing.
<rogpeppe> g'night all
<natefinch> g'night
<natefinch> mgz: is there a way to do a build such that I can copy the juju client and the tools to another computer for testing purposes?
<natefinch> mgz: I finally have a maas setup, but I can't really get juju to connect to it from my local machine
<natefinch> maybe that's fixable with some ssh magic, I don't know
<jpds_> lamont: That related to https://bugs.launchpad.net/juju-core/+bug/1202163 ?
<_mup_> Bug #1202163: openstack provider should have config option to ignore invalid certs <cts> <papercut> <Go OpenStack Exchange:Confirmed> <juju-core:Triaged by jameinel> <https://launchpad.net/bugs/1202163>
<lamont> jpds_: probably
 * natefinch is not good with teh ssh
<mgz> natefinch: you can give sync-tools a directory with stuff in as a param
<lamont> and I think I got my mess figured out enough to have ways forward
<natefinch> natefinch: I recognize that sentence as valid english, but I don't know how to execute on it :)
<natefinch> mgz: err ^^
<rogpeppe> natefinch: fancy a review? https://codereview.appspot.com/13489044
<rogpeppe> fwereade: ^
<rogpeppe> or anyone else that happens to be around
<rogpeppe> natefinch: fairly simple stuff
<natefinch> rogpeppe: sure
<rogpeppe> natefinch: thanks
<natefinch> rogpeppe: sorry, got a phone call in the middle, and now don't have time to finish, but I can finish in the morning if it still needs it. Sorry :/
<rogpeppe> natefinch: ok, fair enough
<rogpeppe> natefinch: if you could publish any comments you already have, that would be great
<natefinch> rogpeppe: good point.. I don't have much, it was a long phone call
<natefinch> gotta run
<rogpeppe> natefinch: g'night
<natefinch> rogpeppe: actually, got a reprieve from dinner duties, so I can finish the review
<thumper> morning
<natefinch> thumper: morning.  How's your team in the Cup doing?  Scoring lots of goalies?
 * natefinch doesn't understand sailing.
<thumper> natefinch: oracle won the first race, and the second was called off due to wind limits hit
<thumper> next race is tomorrow morning (afternoon SFO time)
<thumper> nz needs one more to win
<thumper> oracle needs 7
<natefinch> thumper: wow
<natefinch> thumper: ok then
<thumper> oracle has fixed their boat now and has a slight speed advantage
<thumper> so it makes it interesting
<natefinch> hey, interesting is better than boring
<natefinch> friend of mine follows the Cup and was complaining for days about all the races getting cancelled
<rogpeppe> thumper: a review for you if you wanna: https://codereview.appspot.com/13489044
<natefinch> rogpeppe: I just finished that one up btw
<bigjools> howdoo
<thumper> o/ bigjools
<natefinch> whelp, I'm outta here
<bigjools> https://bugs.launchpad.net/juju-core/+bug/1227722
<natefinch> night all
<bigjools> fun
<_mup_> Bug #1227722: juju uses tools for the wrong architecture when bootstrapping a MAAS node <maas> <juju-core:New> <https://launchpad.net/bugs/1227722>
<thumper> bigjools: I blame wallyworld
<rogpeppe> thumper: thanks
<thumper> I don't know if it is his fault or not
<thumper> but
 * thumper shrugs
<thumper> rogpeppe: np
<bigjools> blame schmame.
<bigjools> it's far too early in the morning for blamestorming
<wallyworld> never too early
<bigjools> never too early to blame wallyworld? I could get behind that :)
<wallyworld> yep
<bigjools> wallyworld: plus I am happy because I am off on our road trip tomorrow
<wallyworld> oh yah
<wallyworld> will be great
<bigjools> brimmed the fuel tank - $200...eek
<wallyworld> i still have another week :-(
<wallyworld> petrol is expensive here
<bigjools> cheaper than Europe
<bigjools> should come down a bit, the A$ made ground against US$ lately
<wallyworld> yep :-)
<wallyworld> fwereade: you still around?
<fwereade> wallyworld, heyhey, yeah, I wanted to chat about your CL, because the last thing I want to do is block you
<wallyworld> fwereade: ok, quick hangout?
<fwereade> wallyworld, sure
<wallyworld> https://plus.google.com/hangouts/_/d3f48db1cccf0d24b0573a02f3a46f709af109a6
<fwereade> wallyworld, you sure that's the link?
<wallyworld> it worked for me
<wallyworld> i'll try another
<fwereade> wallyworld, https://plus.google.com/hangouts/_/67b33e32c2942787f0aa5076ba1f070cd1203c4c
<thumper> fwereade: any reason you aren't landing https://code.launchpad.net/~fwereade/juju-core/prepare-leave-scope/+merge/181065 ?
<thumper> ironically a test for synchronization fails
<thumper> hmm...
 * thumper is looking at an intermittent test failure
 * thumper tries something...
 * thumper proposes
<thumper> wallyworld: I'd like axw to review the branch I'm just proposing
<thumper> but you could take a look too if you like
<wallyworld> ok
<thumper> it fixes a race condition in a test
<thumper> and also writes the test in a more broken up way
<thumper> so three tests, not one test that tests three things
<wallyworld> \o/
<thumper> https://codereview.appspot.com/13799043 if you are interested
<wallyworld> looking
<wallyworld> thumper: this is for the 1.15 release if you have a moment https://codereview.appspot.com/13763044
<thumper> sure
<thumper> wallyworld: I have the gym shortly, but we should catch up after that to talk about logging
<wallyworld> sounds good
<wallyworld> i have one more branch to write for the release
<wallyworld> then testing
<wallyworld> then tools cleanup
<wallyworld> then release notes
<wallyworld> thumper: those revised tests look nice to me
<thumper> thanks
<wallyworld> smaller tests are good
<thumper> looking over yours now, but not sure I'll get it finished before gym time :)
<thumper> agreed
<wallyworld> np
<wallyworld> i'm cleaning up the mps william looked at, and landing those now
#juju-dev 2013-09-20
<thumper> hi axw
<axw> hey thumper
<thumper> axw: what does ReadFull do?
<axw> ReadFull reads exactly len(buf) bytes from r into buf. It returns the
<axw>     number of bytes copied and an error if fewer bytes were read.
<thumper> oh cool
<thumper> yeah, that's better than the loop
<thumper> axw: I only noticed it because I occasionally habitually run "make check" when I switch to an updated trunk
<thumper> and it errored the first time and not the second
<thumper> actually, it didn't fail again
<axw> that's race conditions for you :)
<axw> glad it didn't cause too much strife
<wallyworld> thumper: last one for the 1.15 release i hope https://codereview.appspot.com/13800043
<thumper> axw: no, not strife. I've spent lots of time in the past dealing with race conditions
 * thumper objects horribly
<axw> thumper: re jc.Set?
<thumper> yes
<thumper> now on juju-cev
<thumper> dev
<thumper> how about testing/basesuite
<thumper> or basesuties
<thumper> suites
<axw> testing/testbase?
<thumper> I'd be happy with that
<thumper> shall I do it now?
<axw> doesn't have to be about suites
<thumper> sure
<thumper> worth having any test helper methods and functions that don't drag in extra dependencies
<thumper> keep it loopless if possible
<thumper> should be possible
<axw> I say go for it
 * thumper finishes wallyworld's review then goes for it
<wallyworld> \o/
<wallyworld> s/review/reviews
<wallyworld> and we need to talk about logging
<thumper> ack
 * thumper dedicates 20 minutes to this task
<thumper> if it takes longer than that, interrupt me
<thumper> make that 30 minutes
<axw> thumper: it can wait till later, but am I likely to get a lgtm on sshstorage today?
<thumper> axw: yes, I'm at the disposal of you two today
<thumper> after this hacking
<axw> bueno
<axw> thanks
 * axw looks at authenticated httpstorage
<thumper> bugger
 * thumper forgot to add the directory before moving things to it
<thumper> bzr lost track
 * thumper found bzr mv --after
<mramm> hey thumper got a couple of min?
<mramm> I think robbie (kvm robbie not cdo robbie) is out all next week
<thumper> sure
<mramm> so if you want to round trip with him on kvm issues it is going to have to be today :/
<thumper> yeah...
<thumper> he scheduled a meeting 1am nz saturday
<mramm> ouch!!!!
<mramm> that is nutto
<thumper> yeah, at least I'm down as optional :)
<thumper> mramm: did you want to hangout?
<mramm> I just wanted to tell you that news
<mramm> and to let you know that Dan is *very* concerned about kvm stuff
<mramm> he says it is pretty well tested on his end, and wanted me to give him a detailed view of what problems you've had
<thumper> I'll forward you the email
<mramm> thanks
<davecheney> mramm: is there background on the kvm issue ?
<davecheney> i have heard 512mb + kernel overcommit == 0 is the issue ?
<davecheney> mramm: we believe this is fixed in golang trunk
<davecheney> fixed is an oversimplification of the problem
<davecheney> if this is the problem I think it is
<davecheney> dan will have the same issue running jvm applications in those small kvm containers
<davecheney> thumper: mramm ping
<thumper> hi davecheney
<davecheney> thumper: may i butt into this issue
<davecheney> i believe i have experience
<mramm> davecheney: the issue is with the kvm wrappers that they wrote
<mramm> not in any way with jvm
<mramm> and it's just about us supporting kvm containers *inside juju*
<davecheney> mramm: ok
<davecheney> thanks
<davecheney> mramm: what does kvm inside juju mean ?
<mramm> like lxc containers but kvm containers
<mramm> EG deploy ceph inside kvm on a machine as part of an openstack install
<mramm> ok, gotta run
<mramm> getting late here.
<axw> gnight
<davecheney> mramm: how does this problem relate to the 1gb issue ?
<davecheney> are they unrelated
<davecheney> i'm only interested in the latter
<mramm> which problem?
<mramm> kvm?
<davecheney> mramm: yes
<mramm> they are completely unrelated
<mramm> yes
<davecheney> ok
<davecheney> please view my comments above with respect only to the 1gb issue
<mramm> and the latter is that we can't bootstrap on a 512 node anymore
<davecheney> mramm: ahh, that is why i am confused
<davecheney> 512mb nodes are a kvm thing ?
<davecheney> correct ?
<davecheney> mramm: is there an issue for the 1gb / 512mb problem ?
<thumper> mramm: sorry those emails hadn't arrived, I thought I had sent them but the email client was stuck asking me to sign things
<mramm> dave I don't know where the issue is or even if there is a bug filed, sorry
<davecheney> mramm: i will follow up with rog to see if he raised an issue
<davecheney> mramm: i've seen the other side of the conversation on the golang-dev ML
<davecheney> the gui has just crapped out on me as well
<davecheney> looking at the unit
<davecheney> the gui process has died ...
<davecheney> oooooooooooooh, interesting
<davecheney> this is a chrome issue
<davecheney> one chrome tab was showing the SSL warning
<davecheney> which was blocking any other connection to that site ....
<thumper> nice huge mechanical branch coming
<thumper> wallyworld, axw: rack up the reviews you need in the order you need them
<thumper> I'll drill through them after I have kids from school
 * thumper goes to put on shoes
<axw> thumper: I see you've lgtm'd sshstorage (thanks); just this one now: https://code.launchpad.net/~axwalk/juju-core/sanity-check-constraints/+merge/185015
<wallyworld> https://codereview.appspot.com/13763044/
<wallyworld> https://codereview.appspot.com/13800043/
<thumper> axw: you can take a look at my testbase branch
<axw> thumper: yep, will do
<thumper> https://codereview.appspot.com/13694046
<thumper> all it does is move a few things
<thumper> axw: are you landing the sshstorage?
<thumper> if so, I'll wait for mine as they'll clash
<axw> thumper: I can wait for you to land that one if you like
<thumper> ok, I'll land it
<axw> thanks
<thumper> axw: I'd prefer to avoid importing checkers into testbase, but we could, as long as we make sure that it is just that
<axw> thumper: nps, just leave it till it's an issue
<thumper> approved it, so lets see if it lands :)
<axw> thumper: did you land the sshstorage test one?
<axw> yes
<thumper> yes
<axw> thumper: does mocking out ssh/sudo really make the test whitebox? the fact that it's in the same package is really just an unfortunate requirement
<thumper> hmm...
<thumper> perhaps not
<thumper> I'm flexible
<axw> I'll leave it. I think we may move the SSH stuff out to another package later anyway, for centralising common flags
<thumper> wallyworld: /environs/tools/tools_test.go is still using filepath, do you know where?
<wallyworld> in the mp?
<thumper> nm
<thumper> I'll look
<thumper> and it is ok
<wallyworld> sigh, another one https://codereview.appspot.com/13694047
<thumper> kid says "take me to hockey now, I have a game at 4"
 * thumper afk for a few minutes.
<thumper> axw, wallyworld: where are we up to on reviews needed?
<wallyworld> thumper: i've got 3
<axw> thumper: I'm all good now
<wallyworld> let me check them to see what's approved
<axw> thumper: I'm just going to make sure my null provider stuff still works with all the changes to sshstorage and co, then I'll land manual bootstrap and propose null provider
<wallyworld> thumper: default hp cloud set up is not needed anymore because there's nothing hp cloud specific anymore
<thumper> wallyworld: how do the users point openstack at hpcloud?
<wallyworld> what do you mean?
<thumper> axw: ok
<thumper> wallyworld: well, I want to go bootstrap
<wallyworld> they set their credentials same as now
<thumper> how does it know where to look?
<wallyworld> it uses the credentials
<thumper> you suggest os env vars
<thumper> ?
<wallyworld> no, they set in yaml or env vars
<wallyworld> their choice
<thumper> ok, fair enough
<thumper> do the users know that hpcloud is openstack?
<wallyworld> before, hp cloud was different because the public bucket url was needed
<wallyworld> they don't need to know that
<thumper> I'm just wondering if it is nice to keep hpcloud as an option since it is a certified cloud
<wallyworld> the only hp cloud difference was the public bucket url, else it is just standard openstack
<thumper> wallyworld: but you have removed hpcloud as an option
<wallyworld> no need
<thumper> in the config
<wallyworld> yes, because there is no hp cloud specific config
<thumper> I know there is no *need*, I just think it is useful
<wallyworld> it's done via credentials
<wallyworld> useful how?
 * thumper slaps wallyworld for not listening
 * wallyworld slaps thumper for the same reason
 * thumper slaps wallyworld for slapping him
<wallyworld> hangout perhaps?
<thumper> wallyworld: yes, please
<thumper> however
<thumper> I need to go get kid 3 in 2 minutes
<wallyworld> ok
<thumper> only takes a few minutes
<wallyworld> with the tests, i deliberately used text
<thumper> I'll create a hangout and ping you when I'm back
<wallyworld> not a const
<wallyworld> ok
<thumper> wallyworld: https://plus.google.com/hangouts/_/6f17da147ed6be734a284c49eb9c67086eef59b0?hl=en
<wallyworld> davecheney: hi still working on getting the code for the release ready. i'll update the release notes tonight. i assume that will be ok?
<davecheney> wallyworld: sure
<davecheney> wallyworld: it's more important to get it right, than to get it out there tonight
<davecheney> if you need more time, just say
<wallyworld> yep. pushing shit uphill, but getting there
<davecheney> wallyworld: the friday date is fairly arbitrary
<wallyworld> i've 3 branches to land then test
<wallyworld> davecheney: do you have credentials to write to the s3 public bucket?
<davecheney> wallyworld: will /msg
<wallyworld> ok
<thumper> ick
<thumper> axw: got a minute?
<axw> thumper: yes?
<thumper> axw: hangout?
<axw> sure
<thumper> axw: https://plus.google.com/hangouts/_/43172cdd0670d5faaae767a84e0c0fe72923a9ea?hl=en
<davecheney> AAAAAAAAAAAAARHRHG
<davecheney> lucky(~/charms/precise/galaxy/hooks) % juju add-machine -n10
<davecheney> error: flag provided but not defined: -n
<davecheney> WHY
<wallyworld> davecheney: are you thinking of add-unit?
<wallyworld> add-machine doesn't take a -n parameter
<davecheney> wallyworld: it would be good if it did
<wallyworld> yeah. no one has asked for it that i know of
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1214209
<_mup_> Bug #1214209: cmd/juju: add-machine should take a -n param <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1214209>
<wallyworld> fair enough
<axw> :(
<axw> thumper: you know how I changed sshstorage to create/remove the tempdir on Put?
<axw> it's not going to work
<thumper> why
<axw> storage goes to /var/lib/juju/storage
<axw> tempdir to /var/lib/juju/storage.tmp
<thumper> and...
<axw> /var/lib/juju/storage gets created under sudo, and chown'd to $SUDO_UID
<axw> then...
<thumper> hmm
<thumper> how about /var/lib/juju/storage/.tmp ?
<axw> another, non-sudo shell is executed to do the transfers
<axw> that just means we can't have .tmp in storage, which I suppose is reasonable
<rogpeppe> mornin' all
<axw> morning rogpeppe
<rogpeppe> axw: hiya
<davecheney> ave
<axw> thumper: by which I mean, the user can't Put anything starting with ".tmp"
<thumper> axw: yeah...
<thumper> axw: however your tmp dir is now removed after right?
<thumper> axw: doesn't the function to create a tmpdir take a prefix?
<thumper> and won't clash with names?
<axw> thumper: yes, but if you die before removing it
<axw> then you have something in storage
<thumper> yes, that is a risk
<thumper> but manageable compared to the other options I think
<rogpeppe> thumper: testbase looks good (not keen on "base" in names in general, but can't think of a better one here)
<thumper> rogpeppe: axw and I brainstormed some names
<thumper> and it was the best we could come up with
<thumper> I'm particularly fond of the test that makes sure it doesn't grow dependencies
<rogpeppe> thumper: one thing though: how about Patch rather than PatchValue? the lines that use it tend to be longish anyway, so the extra horizontal space is useful.
<thumper> Patch doesn't say what it is patching
<thumper> it isn't a descriptive name
<rogpeppe> thumper: it's patching its argument, which is (by definition) a value
<thumper> no, sorry I disagree
<rogpeppe> thumper: "Value" doesn't really add any, erm, value there
<rogpeppe> thumper: i think it works well for the basic patching mechanism
<rogpeppe> thumper: and it's a reasonable compromise between PatchValue and Set :-)
<thumper> my suggestion is this: if you still think strongly that we should change it by the time we all get together in SFO, then we can get a team consensus
<thumper> then we can run sed -i on the codebase
<thumper> but give it a little time to settle
<thumper> we should schedule a session for pure bikeshedding :)
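The helper under debate, whatever its final name, sets a variable for the duration of a test and returns a restore func. A sketch using a generic signature, which is an assumption for illustration (the period's Go had no generics, and juju's actual helper differs):

```go
package main

import "fmt"

// patchValue sets *dest to value and returns a func that restores the
// old value; test suites typically register the restore func as
// teardown. Hypothetical sketch of the PatchValue/Patch idea.
func patchValue[T any](dest *T, value T) (restore func()) {
	old := *dest
	*dest = value
	return func() { *dest = old }
}

var timeout = 30

func main() {
	restore := patchValue(&timeout, 1)
	fmt.Println(timeout) // 1
	restore()
	fmt.Println(timeout) // 30
}
```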
 * thumper is at EOW
 * rogpeppe sobs that it was his shed originally :-)
<rogpeppe> thumper: have fun
 * thumper throws pink paint at rogpeppe's shed and runs
<wallyworld> rogpeppe: are you an ec2 expert? we are getting access denied trying to access http://juju-dist.s3.amazonaws.com/ and i have no idea why
<wallyworld> hence juju is currently broken for ec2
<rogpeppe> wallyworld: oh shit
<rogpeppe> wallyworld: i am by no means an ec2 expert
<wallyworld> i tried sync tools and aborted it after it had uploaded a few tarballs. not sure if that caused it. i can't see how
<wallyworld> i also aborted uploads to hp cloud etc and no problem
<rogpeppe> that *shouldn't* have caused a problem
<rogpeppe> wallyworld: so did it stop working immediately after your abort?
<wallyworld> i think so
<wallyworld> maybe fwereade knows? or mgz?
<rogpeppe> wallyworld: i don't have any s3 access at all
<rogpeppe> wallyworld: i wonder if this might be an amazon issue
<wallyworld> i sure hope so
<rogpeppe> % s3 put /tmp/x s3://pefkesnjesesvds
<rogpeppe> ERROR: S3 error: 403 (InvalidAccessKeyId): The AWS Access Key Id you provided does not exist in our records.
<wallyworld> so i guess we wait a bit
<wallyworld> and see if they fix anything
<rogpeppe> wallyworld: hmm, status.aws.amazon.com doesn't indicate anything amiss
<rogpeppe> wallyworld: could you install s3cmd and try making a bucket, please?
<rogpeppe> wallyworld: just to see if you get the same issue
<wallyworld> ok
<wallyworld> i can create one
<davecheney> wallyworld: do i need to declare an emergency
<hazmat> wallyworld, i'm an expert..
<davecheney> it's 3:30pm in japan
<wallyworld> hazmat: any idea why we are getting access denied all of a sudden trying to read the juju-dist bucket?
<hazmat> davecheney, its sounds like its just an upload issue not a public issue
<hazmat> wallyworld, are you using the correct account..
<hazmat> wallyworld, are you sure you don't have some openstack/canonistack stuff in your env
<wallyworld> hazmat: i was testing sync-tools and aborted a tarball upload part way through with Ctrl^C. but surely that could not have caused it?
<rogpeppe> hazmat: it should be globally readable
<hazmat> ie. you're actually using an aws account
<wallyworld> hazmat: yes
<hazmat> rogpeppe, we're talking about upload
<hazmat> wallyworld, i've seen that error when i accidentally try to use an openstack account against aws
<hazmat> alternatively when an iam account i'm using is deleted
<rogpeppe> hazmat: i can't access that url, which i think i should be able to
<hazmat> rogpeppe, who is the owner of the account / bucket?
<wallyworld> hazmat: i'm almost 100% sure i used the correct account details
<davecheney> hazmat: gustavo
<rogpeppe> hazmat: niemeyer
<hazmat> hmm.. too early for him..
<hazmat> one moment.. checking the bucket with creds
<rogpeppe> wallyworld, hazmat: FWIW i see the same issue, though i can in fact upload to my own buckets (i'm not sure why s3cmd was failing to do that)
<hazmat> wallyworld, davecheney, rogpeppe ... okay verified the account works
<hazmat> i can upload files to the bucket, and remove them
<rogpeppe> hazmat: to juju-dist?
<hazmat> rogpeppe, yes
<hazmat> the problem is that the permissions on the account are restricted
<wallyworld> so why do we get access denied reading it?
<hazmat> i can't for example list all the buckets on the account
<hazmat> but i can list the contents of that bucket, and r/w to that bucket
<hazmat> read access to juju-dist would be a separate issue
<hazmat> er... public read access
<rogpeppe> oh, ffs, s3cmd doesn't look at the usual env vars
<wallyworld> hazmat: so you can fix the permissions?
<wallyworld> i wonder how they changed
<hazmat> wallyworld, well first i want to verify if its an actual issue
<wallyworld> juju doesn't work
<hazmat> wallyworld, it works for me..
<wallyworld> not for me or rogpeppe :-( or davecheney
<hazmat> i'm in the middle of a class, and we've been deploying units and services in different envs this whole time
<hazmat> those envs were already bootstrapped though.. but the tool logic is pretty constant on pinging the pub bucket
<wallyworld> once bootstrap happens i don't think it uses the pub bucket anymore
<hazmat> wallyworld, hmm.. ic what you mean
<hazmat> wallyworld, checking
<hazmat> wallyworld, you can still download tools from the bucket.. http://juju-dist.s3.amazonaws.com/tools/juju-1.13.2-saucy-i386.tgz
<davecheney> ahh, but we cannot list the bucket ...
<hazmat> for example
<wallyworld> hazmat: yes
<davecheney> i wonder if we've passed some limit
<hazmat> we should be using an explicit index anyways..
<davecheney> ie, the number of rows to return ?
<hazmat> ie. a json/yaml file we create..
<davecheney> maybe it isn't 200
<hazmat> relying on aws default output isn't sane
<rogpeppe> hazmat: you can't rely on list-bucket?
 * hazmat looks into the bucket perms
<davecheney> hazmat: i guessed at the limit
<hazmat> rogpeppe, juju does not do list-bucket
<hazmat> rogpeppe, we don't require every use to have an aws account
<hazmat> every user.
<wallyworld> juju does list bucket
<rogpeppe> hazmat: list-bucket is just a GET of the bucket, no?
<wallyworld> when it looks for the tools
<hazmat> not quite
<hazmat> rogpeppe, yes.. but its a bit more involved for pagination, etc
 * wallyworld has to run away for a bit. back a bit later
<rogpeppe> hazmat: agreed. but we do that.
<rogpeppe> hazmat: is there a problem with doing that?
<rogpeppe> hazmat: there are definite advantages to relying on ec2 itself to provide data about what files are uploaded
<hazmat> rogpeppe, wallyworld, davecheney fixed
<rogpeppe> hazmat: in particular it means that you can concurrently upload tools without worrying about maintaining a shared "directory" file (something i'm not sure is possible in s3)
<hazmat> http://juju-dist.s3.amazonaws.com
<rogpeppe> hazmat: yup, that's better
<rogpeppe> hazmat: what was the problem?
<hazmat> rogpeppe, we control the upload process, we should define our format
<hazmat> rogpeppe, the bucket perms had changed to not be public
<rogpeppe> hazmat: how tf did that happen?
<hazmat> rogpeppe, and updating an index file allows us the flexibility to host tools elsewhere
<rogpeppe> hazmat: we're moving towards using "simple" streams for all of this
<hazmat> relying on aws .. means they change something and we break.. and we have multiple impls if we have to go somewhere else
<hazmat> rogpeppe, good
<hazmat> rogpeppe, wallyworld, davecheney if you want the gory xml details.. here's previous and current of the bucket acl.. http://pastebin.ubuntu.com/6131371/
<hazmat> something reset the bucket perms..
<hazmat> it seems like
<hazmat> to only allow owner access.. anyways.. its fixed afaics, heading back to class
<rogpeppe> hazmat: yeah
<rogpeppe> hazmat: thanks
<hazmat> np
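For reference, the check hazmat's ACL paste enables: a bucket is publicly listable when a grant to the AllUsers group carries READ. A sketch against the S3 ACL XML schema (the sample document below is illustrative, not the pastebin contents):

```go
package main

import (
	"encoding/xml"
	"fmt"
	"strings"
)

// accessControlPolicy models just the grant list from an S3
// AccessControlPolicy document.
type accessControlPolicy struct {
	Grants []struct {
		Grantee struct {
			URI string `xml:"URI"`
		} `xml:"Grantee"`
		Permission string `xml:"Permission"`
	} `xml:"AccessControlList>Grant"`
}

// publiclyReadable reports whether the ACL grants READ to the
// AllUsers group, i.e. whether anonymous listing works.
func publiclyReadable(aclXML string) (bool, error) {
	var acl accessControlPolicy
	if err := xml.Unmarshal([]byte(aclXML), &acl); err != nil {
		return false, err
	}
	for _, g := range acl.Grants {
		if strings.HasSuffix(g.Grantee.URI, "/global/AllUsers") && g.Permission == "READ" {
			return true, nil
		}
	}
	return false, nil
}

const sample = `<AccessControlPolicy>
  <AccessControlList>
    <Grant>
      <Grantee><URI>http://acs.amazonaws.com/groups/global/AllUsers</URI></Grantee>
      <Permission>READ</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>`

func main() {
	ok, err := publiclyReadable(sample)
	fmt.Println(ok, err) // true <nil>
}
```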
<rogpeppe> fwereade, dimitern, mgz, axw: small review anyone? https://codereview.appspot.com/13806043/
<axw> rogpeppe: looking
<rogpeppe> axw: thanks!
<wallyworld> i have a branch for review for the 1.15 release. tim started it but got stuck when ec2 failed earlier. i took his work and added a test and here it is https://codereview.appspot.com/13642045
<mgz> wallyworld: looking
<wallyworld> thanks :-)
<natefinch> Is it just me, or do the menus on www.ubuntu.com look totally messed up?
<rogpeppe> natefinch: me too
<rogpeppe> another trivial review: https://codereview.appspot.com/13780044/
<natefinch> rogpeppe: I'll take a look
<rogpeppe> natefinch: thanks
<natefinch> rogpeppe: done
<rogpeppe> natefinch: ta!
<natefinch> rogpeppe: who should I poke about www.ubuntu.com?
<rogpeppe> natefinch: i have no idea, sorry
<natefinch> heh no problem
<dimitern> natefinch, fwereade, rogpeppe, wallyworld: standup?
<dimitern> rogpeppe, fwereade : a quick review after standup? https://codereview.appspot.com/13636044/
<fwereade> rogpeppe, wallyworld, mgz, dimitern, natefinch: sorry, I'm trying to get back in
<fwereade> gents, my audio is appallingly bad, I think we stick with "don't do that then" and hope that everybody can remember that until we have simplestreams online
 * fwereade pops out for coffee
 * rogpeppe just got bitten by this bug. it really confused me! https://bugs.launchpad.net/gocheck/+bug/1228105
<_mup_> Bug #1228105: panic in TearDownTest obscures original test panic <Gocheck:New> <https://launchpad.net/bugs/1228105>
<sinzui> natefinch-afk, does https://launchpad.net/juju-core/+milestone/1.14.1 need a bug fix to address the missing azure documentation?
<rogpeppe> trivial review anyone? https://codereview.appspot.com/13775046
<rogpeppe> took a while to diagnose that!
<rogpeppe> fwereade, dimitern, mgz: ^
<fwereade> rogpeppe, looking
<mgz> metoo
<rogpeppe> fwereade: oh it would be so good to sort out JujuConnSuite. i think we're heading in the right direction with the configstore interface actually
<mgz> ugh, this is all so fragile
<rogpeppe> mgz: yeah
<fwereade> rogpeppe, LGTM
<rogpeppe> mgz: most of it's actually unnecessary in fact
<rogpeppe> fwereade: thanks
<dimitern> still waiting for a review on https://codereview.appspot.com/13636044/ please
<dimitern> rogpeppe, fwereade ^^ ?
<rogpeppe> fwereade: i kind of agree about nestable fixtures, but i'm not really sure why this CL implies the need
<rogpeppe> dimitern: looking, sorry
<fwereade> rogpeppe, it looked like we were half-assing a fixture with sub-Test-method setup/teardown effectively
<rogpeppe> fwereade: that actually works ok in general - it's not really nesting as such, more splitting.
<rogpeppe> fwereade: i wish fixtures looked less like test suites though
<rogpeppe> dimitern: can't a machine tell its container type just by looking at its id?
<dimitern> rogpeppe, I prefer to have an explicit method
<rogpeppe> dimitern: why do we need another API round trip for this?
<dimitern> rogpeppe, it's still the same - no more roundtrips
<rogpeppe> dimitern: oh, i guess that's true. but still, i'm not entirely convinced it's worth denormalising here.
<dimitern> rogpeppe, denormalising what?
<rogpeppe> dimitern: storing the container type redundantly to the id which directly implies it
<dimitern> rogpeppe, then why do we have a ContainerType method on machine, or even a doc field?
<rogpeppe> dimitern: hmm, good point
<rogpeppe> dimitern: about the doc field anyway
<dimitern> rogpeppe, the fact the machine id implies the container type is a side-effect, I think, does not merit doing tricky magical stuff in the api, just because we know its format
<wallyworld> fwereade: i have a small mp to fix an issue with hp cloud that i found while smoke testing the 1.15 release https://codereview.appspot.com/13789045
<rogpeppe> dimitern: you may be right there. tbh, i think that it's pretty dodgy that the agent needs to know its container type - it looks pretty hacky to me
<rogpeppe> dimitern: it would be more regular if lxc machines had an lxc job, or something like that
<dimitern> rogpeppe, perhaps, but we're not there yet
<fwereade> wallyworld, ha, Isee, good catch
<rogpeppe> dimitern: fair enough
<fwereade> rogpeppe, agreed and understood hacky, but out of scope for now ;)
<rogpeppe> dimitern: reviewed
<dimitern> rogpeppe, ta!
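rogpeppe's observation, that a machine id like "1/lxc/0" already encodes the container type as its second-to-last segment, can be seen from the id format itself. The helper here is a hypothetical sketch; only the nested-id layout matches juju's:

```go
package main

import (
	"fmt"
	"strings"
)

// containerTypeFromId extracts the container type implied by a
// nested machine id, e.g. "1/lxc/0" -> "lxc". Top-level machine ids
// like "2" have no container, so it returns "".
func containerTypeFromId(id string) string {
	parts := strings.Split(id, "/")
	if len(parts) < 3 {
		return ""
	}
	return parts[len(parts)-2]
}

func main() {
	fmt.Println(containerTypeFromId("1/lxc/0")) // lxc
	fmt.Println(containerTypeFromId("2"))       // "" (empty)
}
```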
<fwereade> wallyworld, LGTM
<wallyworld> fwereade: great thanks, landing now
<fwereade> wallyworld, there's some code commented out you might want to remove first
<wallyworld> fwereade: yeah, already done :-)
<wallyworld> it's late, i'm tired, stupid typo
<fwereade> wallyworld, I know the feeling, get some rest :)
<wallyworld> soon, will just land the branch
<motter> out of curiosity, does all of juju's logging code write to syslog?
<motter> (i'm looking at https://bazaar.launchpad.net/~go-bot/juju-core/trunk/files/head:/log/)
<mgz> motter: we've been moving that way
<mgz> for the agents that run on remote machines at least
<motter> the loggo package does levels, is that configured to write to syslog too?
<mgz> yes, there should be a correspondence
<motter> mgz: thanks
<motter> we're also using syslog, but i'm curious as to what other projects are doing
<sinzui> I need help setting up a reliable test env/config. I always get failures running trunk and 1.14. I doubt the suite is really broken.
<rogpeppe> sinzui: what failures are you seeing?
<sinzui> rogpeppe, http://pastebin.ubuntu.com/6133051/
<rogpeppe> sinzui: what version of mongod have you got installed?
<sinzui> db version v2.4.6
<rogpeppe> sinzui: could you paste the output of mongod --help, please?
<sinzui> rogpeppe, http://pastebin.ubuntu.com/6133061/
<rogpeppe> sinzui: hmm, that looks ok. although i'm actually using 2.2, so it's possible there's a problem with later versions.
<rogpeppe> sinzui: i think it's *probably* a mongod problem
<sinzui> rogpeppe, I have mongod running on my system by default. I don't think I have messed with it (I build my apps in containers)
<rogpeppe> sinzui: what version of ubuntu are you running?
<sinzui> saucy
<rogpeppe> sinzui: hmm, i don't know what the status of juju on saucy is currently. mgz?
<rogpeppe> sinzui: the version of mongo i get with "sudo apt-get install mongodb-server" is v2.2.4, which is earlier than yours
<mgz> should be working...
<rogpeppe> mgz: do you know anything about what versions of mongo juju currently works with?
<mgz> you mean it's not all of them? :)
<rogpeppe> mgz: well, it looks like it fails with 2.4.6
<sinzui> I have a precise container. I can try running the suite from there
<rogpeppe> sinzui: i'm running raring and it works ok there
<rogpeppe> sinzui: you could try grabbing the mongod binary from juju-dist.s3.amazon.com
<rogpeppe> sinzui: are you running the standard mongod installed with saucy? (i'm presuming so)
<sinzui> I am
<sinzui> and I have deployed many times with it. I see no signs of breakage
<rogpeppe> anyone else here running saucy?
<rogpeppe> sinzui: i think i see a possible fix - looks like mongo have changed their error messages
<rogpeppe> sinzui: could you try editing testing/mgo.go for me please
<sinzui> one moment, I am running the tests in a precise container
<rogpeppe> sinzui: and change the isUnauthorized function to this: http://paste.ubuntu.com/6133129/
<sinzui> rogpeppe, I have the file open, I will make the change when the suite completes
<rogpeppe> sinzui: that might *possibly* fix your problem
<sinzui> The test run is taking forever in this container. but it is also passing where it failed for saucy
<sinzui> thank you rogpeppe, the patch is better, only one failure: http://pastebin.ubuntu.com/6133177/
<sinzui> The tests passed in my precise container. I am unblocked for now, and can test more patches if you like
<rogpeppe> sinzui: ah, that looks like exactly the same issue in a different guise
<rogpeppe> sinzui: could you try one more thing for me, please
<rogpeppe> ?
<rogpeppe> sinzui: if you could try another version of isUnauthorized and paste me what the panic prints, that would be very useful: http://paste.ubuntu.com/6133195/
<rogpeppe> sinzui: then i'll be able to submit an appropriate patch to trunk
 * sinzui runs tests
<sinzui> sorry rogpeppe , quite errors this time: http://paste.ubuntu.com/6133222/
<rogpeppe> sinzui: thanks; line 31 is all i need
 * rogpeppe wishes mongo could sort out its error codes
<mgz> filed bug 1228255 for my live test woes earlier
<_mup_> Bug #1228255: Live bootstrap tests fail on canonistack <juju-core:Confirmed> <https://launchpad.net/bugs/1228255>
<rogpeppe> sinzui: ping
<sinzui> hi rogpeppe
<rogpeppe> sinzui: could you pull this branch and try testing it, please. i hope it should fix your problem: lp~rogpeppe/juju-core/409-fix-for-mongo.2.4
 * sinzui does
<sinzui> rogpeppe, I have one failure. I don't think it is related to your work. I think I have the deps wrong when I switched back to trunk: http://paste.ubuntu.com/6133503/
<rogpeppe> sinzui: could you paste the whole of that failure log please?
<rogpeppe> sinzui: (no, i don't think it's related either)
<sinzui> goose and mgo had higher revnos. I just reverted to what the dependencies.tsv listed
<rogpeppe> sinzui: try this (in the juju-core root directory): go get launchpad.net/godeps; godeps -u *.tsv
<rogpeppe> sinzui: ah, or manually, sure
<sinzui> rogpeppe, I did. it warned that the branches were on the wrong version. I assumed it would correct that
<sinzui> this is the full log: http://paste.ubuntu.com/6133515/
 * sinzui runs tests again
<rogpeppe> sinzui: you can run that test alone to get a quicker turnaround time: go test launchpad.net/juju-core/cmd/juju -gocheck.f TestAutoUploadAfterFailedSync
<rogpeppe> sinzui: i'd like to know if it fails consistently in saucy
<sinzui> sorry rogpeppe. I don't know how to clean goose and v2/mgo. godeps says they are not clean when I run clean on them
<rogpeppe> sinzui: if in doubt, remove them and go get them again.
<rogpeppe> sinzui: e.g. rm -r $GOPATH/src/launchpad.net/goose
<rogpeppe> sinzui: rm -r $GOPATH/src/labix.org/v2/mgo
<rogpeppe> sinzui: go get launchpad.net/goose labix.org/v2/mgo
<rogpeppe> sinzui: if you want to know *why* they're considered unclean, go into the directory and run "bzr status"
<sinzui> goody. I will try that next time. I am getting the code again
<sinzui> rogpeppe, http://paste.ubuntu.com/6133582/ shows the failure again
<rogpeppe> sinzui: ok, that's useful thanks
<rogpeppe> mgz, fwereade, dimitern: saucy fix CL review? https://codereview.appspot.com/13808043
<mgz> looking
<rogpeppe> fwereade: ping
<mgz> rogpeppe: reviewed
<rogpeppe> mgz: we're using standard base64 (as witnessed by the StdEncoding below).
<mgz> so, that change shouldn't be needed then?
<rogpeppe> mgz: but i'm actually inclined to use '[^']*'
<rogpeppe> mgz: why not? standard encoding includes + and /, no?
<rogpeppe> mgz: (neither of which were in the original regexp)
<mgz> ah, yes, I misread
<mgz> the change is correct
<natefinch-afk> smoser: have a minute?
<smoser> here
<natefinch> smoser: Having difficulty ssh'ing into the maas virtual machines (it worked yesterday and then today somehow it's not... probably user error)
<natefinch> smoser: I recreated the machines with 1024M of RAM, since we'd seen some problems lately bootstrapping with 512
<smoser> k
<natefinch> smoser: maybe I messed something up... what's weird is that juju bootstrap seems to run ok (at least from what I see locally), but then I try to ssh into the machine, and it says it can't find a machine with that name
<smoser> natefinch, it's the hack that i told you about.
<smoser> you have to fix /etc/resolv.conf now. i'll do it.
<natefinch> smoser: I figured there was something I'd forgotten
<smoser> what happened that changed is that your dhclient lease came up
<smoser> and it rewrote /etc/network/interfaces
<smoser> i don't have a real solution for this at the moment.
<natefinch> ahh ok no problem
<natefinch> smoser: I'll leave myself a note so I don't forget next time
<smoser> natefinch, i think its fixed now
<smoser> basically just re-wrote /etc/resolv.conf but to have '127.0.0.1' as the nameserver
<smoser> now , 'host inst-001.master' works
<natefinch> smoser: yep, saw it.  That did it. Thanks.  Wrote myself a note to remind me to do that if this happens again.
<smoser> i also just added 'search master'
<natefinch> Cool
<smoser> i'd like to have a real fix for that, but ... time.
<natefinch> yuuup
<smoser> on canonistack instances the dhclient lease is 2 minutes.
<smoser> so you get to re-apply every 2 minutes :)
<natefinch> OMG
<natefinch> crazy
<rogpeppe> sinzui: i've managed to reproduce your test failure locally, and the fix is straightforward, i think. i reported the bug here: https://bugs.launchpad.net/juju-core/+bug/1228315
<_mup_> Bug #1228315: cmd/juju test failure under saucy <juju-core:New> <https://launchpad.net/bugs/1228315>
<sinzui> thank you very much rogpeppe
<rogpeppe> sinzui: the mongo fix is now in trunk, BTW
<sinzui> fab
<rogpeppe> natefinch, fwereade, dimitern, wallyworld: a simple review? after this, saucy tests should run ok, theoretically. https://codereview.appspot.com/13811043
<natefinch> rogpeppe: I can take it
<rogpeppe> natefinch: thanks
<rogpeppe> sinzui: perhaps you could try out lp:~rogpeppe/juju-core/410-only-one-version and verify that it fixes the problem?
 * sinzui does
<rogpeppe> sinzui: ta
<natefinch> rogpeppe: I like the change, but it doesn't look like it would change the behavior unless /etc/lsb-release changes while you're running.  What is it fixing, other than reparsing the same data over and over?
<sinzui> rogpeppe, saucy loves your branch.
<rogpeppe> sinzui: good to hear!
<rogpeppe> natefinch: it fixes tests that patch version.Current
<natefinch> rogpeppe: aahhhh yep
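The fix being discussed follows a common Go pattern: compute a value once into a package variable instead of re-reading it on every call, so tests can simply patch the variable. A hedged sketch (readSeriesFromDisk, currentSeries and Series are hypothetical names standing in for the version.Current machinery):

```go
package main

import "fmt"

// readSeriesFromDisk stands in for parsing /etc/lsb-release.
func readSeriesFromDisk() string { return "saucy" }

// currentSeries is computed once at startup. Tests overwrite the
// variable directly instead of fighting code that re-reads the file
// on every call -- the problem the CL fixed for tests that patch
// version.Current.
var currentSeries = readSeriesFromDisk()

// Series reports the cached value rather than reparsing the file.
func Series() string { return currentSeries }

func main() {
	fmt.Println(Series())
	// A test would patch the package variable and restore it after:
	old := currentSeries
	currentSeries = "precise"
	fmt.Println(Series())
	currentSeries = old
}
```

As natefinch notes, behaviour only differs when the underlying file changes mid-run or when a test substitutes the value, which is exactly the case the change targets.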
<hatch> when newbies follow the lxc guide and run `juju bootstrap -e local` then follow up with a `juju status` the error message is that it can't connect and to check your credentials
<hatch> this will be very confusing
<hatch> I am guessing there is no way to know that the bootstrap node is still starting up
<hatch> but maybe a 'your bootstrap node might still be starting' message could be added?
<natefinch> hatch:  a friendlier error message might be all we really need
<hatch> natefinch: that's what I was thinking - any 'real' solution is probably a little too fragile
<hatch> shall I file a bug ?
<natefinch> hatch: go for it
<jamespage> sinzui, 1.14.1 all done - should build in the stable PPA shortly
<sinzui> wow. Thank you for your dedication jamespage
<jamespage> sinzui, np
<hatch> awesome!
#juju-dev 2013-09-21
<rogpeppe1> if anyone's around, here's a moderately meaty but very limited-scope CL ready for review. I had some spare time last night. https://codereview.appspot.com/13249054/
#juju-dev 2013-09-22
<rogpeppe> thumper: hiya
<thumper> morning rogpeppe
<thumper> working sunday night?
<rogpeppe> carmen's away...
<rogpeppe> managed to do a bit of slightly off-piste hacking on saturday
<thumper> heh
<rogpeppe> result is this CL; review appreciated: https://codereview.appspot.com/13249054/
<rogpeppe> there are some cool things that it makes possible
<rogpeppe> for instance it's easy to use it to generate API client-side code automatically.
<rogpeppe> or to allow API clients to find out what calls are available
<rogpeppe> i also think it makes sense just from the p.o.v. of the structuring of the rpc package too, BTW - it becomes less monolithic, and that part (which is a significant part of the rpc package logic) can be more easily tested independently.
<rogpeppe> thumper: but YMMV of course
<thumper> I'll take a look, but this is somewhat outside my current experience messing with go/juju
<rogpeppe> thumper: that's ok, i actually just want you to marvel at this, which took only an hour to write this afternoon, after i thought of it: http://paste.ubuntu.com/6143236/ :-)
<rogpeppe> thumper: it generates API client code for our entire API.
<rogpeppe> thumper: output looks like this: http://paste.ubuntu.com/6143244/
<rogpeppe> thumper: it actually compiles, but i haven't tried using it
<thumper> heh
<thumper> interesting
<rogpeppe> thumper: the text/template package is actually surprisingly powerful
<thumper> I don't see why the whole docker thread is on three different mailing lists I'm on
<thumper> I get every email three times FFS
<rogpeppe> thumper: that's because they're so anxious that you read the mailing lists that they subscribed you three times :-)
<thumper> no, I'm on all three lists
<thumper> and the email is sent to three different lists
<thumper> rogpeppe: are you not getting them three times?
<thumper> juju-dev, cdo and cloud?
<rogpeppe> to:	 Canonical Clouds <cloud@lists.canonical.com>,
<rogpeppe>  cdo@lists.canonical.com,
<rogpeppe>  canonical-juju <canonical-juju@lists.canonical.com>
<rogpeppe> thumper: i'm not sure
<rogpeppe> thumper: my inbox is a mess currently
<rogpeppe> thumper: well, both my inboxes
<thumper> there are two cross list conversations atm
<rogpeppe> thumper: i hadn't actually read any of the thread. interesting.
<rogpeppe> davecheney: yo!
<davecheney> rogpeppe: hey
<thumper> wallyworld_: thanks for finishing off that logging branch
<wallyworld_> np at all
<wallyworld_> was good to get it landed
 * thumper got sidetracked starting kvm work fixing lxc tests
<wallyworld_> how's the kvm stuff going?
<thumper> wallyworld_:  https://codereview.appspot.com/13828043
<thumper> wallyworld_: just starting really
<wallyworld_> looking
<thumper> wallyworld_: I have been focusing primarily on helping you and axw get your things in
<thumper> and doing kvm on the side
<wallyworld_> thank you :-)
<thumper> as it isn't going to make 1.16
<thumper> hoping to have it available shortly after
<thumper> definitely prior to SFO
 * thumper crosses fingers
<wallyworld_> maas folks will be happy
<thumper> but I want to make sure that the simplestream tools things and null provider stuff gets in
<wallyworld_> thumper: why does the PatchEnvironment  call followup with a call to cleanup but not the SetPatch calls?
<thumper> wallyworld_: it is in setup suite
<thumper> the PatchEnvironment in the test is a test cleanup
<thumper> not a suite cleanup
<thumper> I'm yet to decide on wording for the PatchEnvironmentForSuite name
<thumper> or PatchValueForSuite
<thumper> perhaps those are good enough
<wallyworld_> "the PatchEnvironment in the test is a test cleanup"  --- isn't the patch env called from setup suite. i don't quite get what you are saying that it is a test clean up
<wallyworld_> with this call "s.PatchValue(&lxcObjectFactory, s.Factory)" in SetupTest, i can't see how the old value is ever restored??? since no cleanup is added
<thumper> ah, the PatchValue is a call on the test
<thumper> which automatically adds the cleanup
<thumper> see testbase.CleanupSuite
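The pattern thumper points to can be sketched briefly: PatchValue swaps a variable and records a function that restores it at teardown, which is why no explicit cleanup call appears at the test site. This is an illustrative reduction; juju's real testbase.CleanupSuite handles arbitrary value types via reflection, where this sketch handles strings only:

```go
package main

import "fmt"

// CleanupSuite sketches the idea behind testbase.CleanupSuite:
// PatchValue swaps a variable and registers a restore function that
// teardown runs automatically.
type CleanupSuite struct {
	cleanups []func()
}

func (s *CleanupSuite) PatchValue(dest *string, value string) {
	old := *dest
	*dest = value
	s.cleanups = append(s.cleanups, func() { *dest = old })
}

func (s *CleanupSuite) TearDown() {
	// Run cleanups in reverse order, like deferred calls.
	for i := len(s.cleanups) - 1; i >= 0; i-- {
		s.cleanups[i]()
	}
	s.cleanups = nil
}

// lxcObjectFactory stands in for the package variable patched in the
// test under discussion.
var lxcObjectFactory = "real factory"

func main() {
	var s CleanupSuite
	s.PatchValue(&lxcObjectFactory, "mock factory")
	fmt.Println(lxcObjectFactory) // mock factory
	s.TearDown()
	fmt.Println(lxcObjectFactory) // real factory
}
```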
 * thumper -> gym
#juju-dev 2014-09-15
<menn0> wallyworld_: but I think RB has versioned several times since then
<wallyworld_> menn0: yeah, it's not a valid command, yet the email says to use it, so i'll have to dig into the doc
<menn0> wallyworld_: which version of the RB tools do you have installed? maybe you ended up with an older version somehow?
<wallyworld_> 0.6.2
<menn0> that's the latest
<menn0> the docs here don't mention a pull command: https://www.reviewboard.org/docs/rbtools/dev/
<menn0> I wonder if Eric meant "post"?
<menn0> wallyworld_: that's the command I remember using the most. it's what creates and updates review requests.
<wallyworld_> could be, i was wondering the same
<menn0> wallyworld_: cheers for the review
<wallyworld_> np
<axw__> wallyworld_: I'm back at limited capacity - whole house has had the flu
<wallyworld_> oh no :-(
<wallyworld_> take it easy then
<wwitzel3> yuck, hope you feel better axw
<axw> wwitzel3: thanks, I'm mostly better, just a bit run down.
<davecheney> menn0: you're going to like my next PR
<davecheney> all the state package _test_'s have stopped importing the api
<menn0> davecheney: ##\o/##
<davecheney> menn0: basically inside most of the tests we do something like
<davecheney> m := state.NewMachine(...)
<davecheney> api.ChangeMachine(m.Id())
<davecheney> assert m.Thing == expected change
<davecheney> what is clear after my change is that these two methods are independent
<davecheney> ie, you ask the api to do something
<davecheney> but the effect on the state is independent
<davecheney> as identified by the fact that the data type of the state is not the same as the api
<menn0> sounds good. I look forward to seeing it.
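The decoupling davecheney describes can be illustrated with a toy version of the pattern: the test drives the API layer and then observes the effect through state's own types, with only a plain machine id shared between the two layers (all names here are hypothetical stand-ins, not juju's actual state/api types):

```go
package main

import "fmt"

// Machine and State are toy stand-ins for the state package's types.
type Machine struct{ id, life string }

type State struct{ machines map[string]*Machine }

func (s *State) AddMachine(id string) *Machine {
	m := &Machine{id: id, life: "alive"}
	s.machines[id] = m
	return m
}

// API is a separate layer with its own entry points; it shares only
// the machine id (a plain string) with state, so state's tests need
// not import the api package at all.
type API struct{ st *State }

func (a *API) EnsureDead(id string) { a.st.machines[id].life = "dead" }

func main() {
	st := &State{machines: make(map[string]*Machine)}
	m := st.AddMachine("0")

	api := &API{st: st}
	api.EnsureDead(m.id) // ask the API to do something...

	fmt.Println(m.life) // ...then assert the effect on state: dead
}
```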
 * menn0 is hating reviewboard right now
<menn0> wallyworld_: when you have a chance: http://reviews.vapour.ws/r/20/
<wallyworld_> menn0: sure, i just noticed it :-)
<wallyworld_> first go at using review board
<menn0> wallyworld_: I had a lot of trouble creating that request. it turns out the "master" branch of your personal GH repo (i.e. not your local master) has to be up to date or else you end up creating reviews for stuff that's already committed.
<menn0> wallyworld_: that's pretty inconvenient
<menn0> wallyworld_: I'll reply to Eric's email
<wallyworld_> menn0: that sucks actually
<wallyworld_> a lot
<menn0> wallyworld_: there must be a better way
<wallyworld_> yep, there had better be
<menn0> wallyworld_: I don't recall this being a problem when I've used RB before but that was with git-svn, not pure git.
<wallyworld_> i just wish we used launchpad still :-(
<wallyworld_> it handles this stuff well
 * menn0 nods
<davecheney> menn0: why are we using reviewboard again ?
<menn0> davecheney: AFAIK, because Github reviews are a bit sucky: e.g. as soon as you write a comment, that's it, the whole world sees it without a chance for you to delete/edit
<davecheney> menn0: is that the only reason ?
<menn0> davecheney, wallyworld_: hopefully this issue I've found today can be sorted out. it could just be a setup thing.
<wallyworld_> hope so too
<menn0> davecheney: I think there were others but I'm not sure what they are. I haven't been too involved with the decision.
<wallyworld_> davecheney: there's other reasons too
<wallyworld_> eg lack of decent review queue
<davecheney> so we can have dependent branches ?
<wallyworld_> that too
<davecheney> oh well, we can't know these things til we try them
<menn0> davecheney, wallyworld_: RB is pretty flexible about where it gets its diffs from so hopefully this issue I've run in to is fixable
<davecheney> menn0: it feels like the RB <> GH integration is wrong
<davecheney> the other systems i've used integrate into the GH review, not the other way around
<davecheney> i have no idea how they do this
<davecheney> they just do
<davecheney> cf. wercker and travis
<menn0> they use the GH API
<wallyworld_> travis was ruled out because it doesn't test properly
<menn0> we can improve the integration as Eric indicated in his email
<menn0> by taking advantage of GH's and RB's APIs
<menn0> I'm guessing we're going to end up with something like "lbox propose" that ties together our GH and RB usage
<davecheney> and the circle is complete
<davecheney> all roads lead to lbox
<axw> I was kinda hoping we'd just do a pull request, and that would cause a RB review to be created
<davecheney> is the reviewboard tool mandatory ?
<davecheney> % rbt pull
<davecheney> usage: rbt [--version] <command> [options] [<args>]
<davecheney> rbt: error: 'pull' is not a command
<davecheney> what did I do wrong
<davecheney> this is step 1
<wallyworld_> axw: yeah, me too
<wallyworld_> davecheney: there's a typo in the email - it should be post
<davecheney> % rbt post
<davecheney> ERROR: There don't seem to be any diffs!
<davecheney> is this because I need to keep pushing master ?
<wallyworld_> not sure, it just worked for me
<wallyworld_> you need a gh pr first
<davecheney> done that
<davecheney> https://github.com/juju/juju/pull/752
<davecheney> i tried to create it from the gh diff
<davecheney> https://github.com/juju/juju/pull/752.patch
<wallyworld_> menn0: first review done using rb, did you get an email?
<davecheney> but reviewboard doesn't like it
<wallyworld_> i've not tried using a patch
<wallyworld_> i just used the rbt tool and it worked
<menn0> wallyworld_: no email :(
<menn0> wallyworld_: I did configure an email address for my account earlier today
<wallyworld_> menn0: well, that sucks, i just finished, so maybe it will get there eventually
<davecheney> error: The following untracked working tree files would be overwritten by merge: .reviewboardrc
<wallyworld_> davecheney: that file was added to master i believe
<davecheney> so step 2 of the instructions is also wrong
<wallyworld_> so if you pull master, you'll get it
<wallyworld_> could be, i just created the rc file in my home directory
<davecheney> ahh, so that file needs to be in every branch
<davecheney> that makes sense
<wallyworld_> it seemed to find it in my home dir, but there's also a copy checked in which i don't quite understand
<davecheney> still can't make it work
<wallyworld_> i can't help much - i know very little about rb
<davecheney> http://reviews.vapour.ws/r/21/diff/#
<davecheney> oh that is brilliant
<davecheney> if you have a large diff, rb gives up
<davecheney> and only shows the diff if you ask explicitly
<axw> wallyworld_: https://code.launchpad.net/~axwalk/gomaasapi/maas-testserver-bootimages/+merge/234614
<wallyworld_> lookng
<wallyworld_> davecheney: send an email to eric and the list so it can get fixed
<davecheney> will do
<waigani> menn0: I think I've got services migrated. just doing a final make test ... so far so good!
<menn0> waigani: sweet. I'm keen to see the PR.
 * menn0 has to deal with his youngest for a bit
<waigani> menn0: just give the tests a day or two to run...
<wallyworld_> axw: code looks good, but we need a test or two
<axw> ok
<waigani> menn0: nop, rain check on that
<axw> tests for tests
<axw> forgot about that :)
<menn0> wallyworld_: I've replied to some of your comments here: http://reviews.vapour.ws/r/20/
<wallyworld_> ok
<wallyworld_> menn0: i saw one reply which i replied to - are there others?
<menn0> wallyworld_: it was just that one. I wasn't sure if you're getting emails from RB.
<wallyworld_> menn0: no, not yet. i just wanted to check that since "some" implies > 1 :-)
<menn0> wallyworld_: regarding that issue, I don't like --resolve-previous-error because it's not clear that the error is about a previous upgrade attempt.
<wallyworld_> maybe reset-previous-upgrade ?
<menn0> wallyworld_: yeah sorry I thought there was more. I have more planned in my head regarding the other thing you raised :) (I agree with everything)
<menn0> wallyworld_: bingo. reset-previous-upgrade is the best one yet.
<wallyworld_> \o/
<wallyworld_> naming things and cache invalidation :-)
<wallyworld_> 2 of the hardest problems
<wallyworld_> or cache consistency
<menn0> isn't it: there are 2 really hard problem in computer science:
<menn0> 1. naming things
<menn0> 2. cache consistency
<menn0> 3. off by one errors
<menn0> ?
<menn0> :)
<dimitern> morning all
<axw> wallyworld_: I'm going to change the way I'm doing the MAAS fix. I'll just query the available nodes, and get the set of architectures from them
<wallyworld_> axw: instead of querying each cluster controller and asking for available images?
<wallyworld_> axw: bbiab, gotta go do school run
<bigjools> axw: don't do that, it's not scalable
<axw> bigjools: is there a theoretical limit on the number of nodes in MAAS?
<bigjools> axw: no
<axw> mk
<bigjools> did you have a problem querying clusters' images?
<axw> bigjools: I'm having trouble with /api/1.0/nodegroups/<uuid>/boot-images/
<axw> I just get 404, despite feeding a UUID I got from nodegroups/?op=list
<axw> The requested URL /MAAS/api/1.0/nodegroups/356f23e1-5879-41fc-be81-e5f601428772/boot-images/ was not found on this server
<bigjools> umm
<bigjools> axw: do you have a new enough maas?
<bigjools> I can't remember exactly when this feature was added
<bigjools> but it's new-ish
<axw> bigjools: I have 1.4
<bigjools> ah you need 1.5
<bigjools> I think
<bigjools> it's in trusty
<axw> ah, doh
<axw> so then we need a fall back to listing nodes anyway. bleh
<axw> thanks bigjools
<bigjools> urgh
<dimitern> jam1, 1:1 ?
<jam1> dimitern: yeah, I'm just trying to workout my firewall stuff, brt
<dimitern> jam1, np
<dimitern> jam1, if you're having trouble, we can try mumble instead?
<axw> wallyworld_: was a little more complicated than I expected: https://github.com/juju/juju/pull/754
<wallyworld_> looking
<wallyworld_> axw: not sure if it's simpler to record boot images in a map and avoid the need for using the "seen" set. I guess a loop would then be needed at the end to extract the values so probably not a lot of difference
<axw> wallyworld_: yeah, much the same either way.
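The "seen" set approach they settle on amounts to deduplicating with a map while preserving first-seen order. A small sketch (bootImage and its fields are illustrative, not the real MAAS provider types):

```go
package main

import "fmt"

// bootImage is an illustrative stand-in for a MAAS boot image record.
type bootImage struct {
	arch, series string
}

// uniqueArches collects the distinct architectures from a list of boot
// images, using a map as a "seen" set so each architecture appears once,
// in first-seen order.
func uniqueArches(images []bootImage) []string {
	seen := make(map[string]bool)
	var arches []string
	for _, im := range images {
		if !seen[im.arch] {
			seen[im.arch] = true
			arches = append(arches, im.arch)
		}
	}
	return arches
}

func main() {
	images := []bootImage{
		{"amd64", "precise"}, {"amd64", "trusty"}, {"armhf", "trusty"},
	}
	fmt.Println(uniqueArches(images)) // [amd64 armhf]
}
```

Keying a map by architecture instead would need a final loop to extract the values, which is why, as axw says, it comes out much the same either way.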
<wallyworld_> axw: nice, thank you :-)
<axw> wallyworld_: cheers
<wallyworld_> i'll backport when it lands
<axw> thanks
<axw> I'm going to test my charm storage changes live, and make the cleanups you suggested
<axw> hopefully be able to land later on
<wallyworld_> ok, after you were rudely distracted onto the maas thing
<axw> heh, it's fine :)
<mattyw> morning folks
<jam1> TheMue: morning. I might be a little late to our 1:1 as I'll need to take my dog out, but I'll be close to on time.
<TheMue> jam1: just ping me then
<jam1> TheMue: ping pong, walla walla ding dong
<TheMue> jam1: ouch, that hurts in my ears :D
<TheMue> jam1: omw
<jam1> dimitern: TheMue: standup?
<TheMue> jam1: ouch, omw
<jam1> my browser just hung, so I might be a sec
<dimitern> jam1, sorry, omw
 * fwereade lunch
<mattyw> ericsnow, ping?
<ericsnow> mattyw: here
<mattyw> ericsnow, hey there - forgive my inexperience with review board - do I need to be running rbt setup-repo in each branch before I push?
<ericsnow> mattyw: only if the branch isn't based on master
<mattyw> ericsnow, ok - yeah this one isn't
<ericsnow> mattyw: I pushed a .reviewboardrc to our juju repo that should take care of it
<mattyw> ericsnow, I guess we should probably add .reviewboardrc to the .gitignore file as well?
<ericsnow> mattyw: I haven't done so for our other repos
<mattyw> ericsnow, I'm playing with a branch that has a prereq
<ericsnow> mattyw: not necessarily
<ericsnow> mattyw: but that should still be based on master (ultimately), right?
<mattyw> ericsnow, yes
<katco> ericsnow: hey i'm doing something wrong with oauth and rbt. username is "katco-@github"? and password is what? github pw?
<ericsnow> katco: try "katco-" for the username and "oauth:katco-@github" for the password
<katco> ericsnow: k sec
<katco> ericsnow: looks like that worked. thanks!
<ericsnow> katco: cool
<alexisb> natefinch, on and ready when you are
<natefinch> alexisb: coming
<thumper> mramm: morning
<thumper> mramm: I got my physio time wrong
<thumper> mramm: and I am around now
<mramm> thumper: ok
<mramm> sounds good
<mramm> are you free in 30?
<thumper> yep
<thumper> mramm: as in minutes not seconds?
<mramm> yea
 * thumper nods
<mramm> ok
 * thumper stares in horror at his email queue, it was only a few days
<waigani> but we missed you thumper
<thumper> so you thought you'd send me heaps of email?
<thumper> yay
<waigani> =D
<waigani> thumper: talk go well?
<thumper> seemed to
<waigani> nice
<waigani> i ran into kit, he said he went up
<perrito666> hey, I ask this again in case new people are here: does anyone know how the path /var/lib/juju/agents/unit-<charm name>-<unit no> is generated?
<perrito666> I have a unit name and I would like to do something nicer than just semi hard-code it
<mramm> thumper: meeting over, can we meet now?
<thumper> mramm: yup
<thumper> perrito666: how? yes it is based on the string value of the unit tag
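Thumper's answer, that the directory is named after the string form of the unit tag, can be sketched as follows. The scheme (replace "/" with "-" and prefix "unit-") matches the path in the question; the helper names are hypothetical, and the real conversion lives in juju's names package:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// unitTag converts a unit name like "mysql/0" into its tag form
// "unit-mysql-0", which is what the agent directory is named after.
func unitTag(unitName string) string {
	return "unit-" + strings.Replace(unitName, "/", "-", -1)
}

// agentDir builds the agent directory path for a unit under dataDir
// (typically /var/lib/juju).
func agentDir(dataDir, unitName string) string {
	return filepath.Join(dataDir, "agents", unitTag(unitName))
}

func main() {
	fmt.Println(agentDir("/var/lib/juju", "mysql/0"))
	// /var/lib/juju/agents/unit-mysql-0
}
```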
<waigani> TheMue: nice work on the API implementation guide. I wish that was about when I started!
<wallyworld> katco: meeting?
<katco> wallyworld: oh hey... sorry it wasn't on my calendar. one sec./
<katco> wallyworld: curious... i can't find it in the past either? can you toss me a link
<wallyworld> katco: https://plus.google.com/hangouts/_/canonical.com/ian-katherine
<waigani> thumper: Params passed to UserManagerAPI.AddUsers
<waigani> https://docs.google.com/a/canonical.com/spreadsheets/d/1cUmTCpsTdso5YJ7Dxvluz_E29Q2P4L__AeGZnP5Q9Ck
<thumper> ta
<waigani> thumper:
<waigani> Params passed to UserManagerAPI.RemoveUser
<waigani> https://docs.google.com/a/canonical.com/spreadsheets/d/1r-fKcMJw5U5dVqS2wpu5Z9xAB1yH7S2EQPQuqQsKCxw
<waigani> thumper: Params passed to UserManagerAPI.SetPassword
<waigani> https://docs.google.com/a/canonical.com/spreadsheets/d/1RS8bmU-g__8z7PnJtiekbl-ULG3arhi39KDPa9E-_lo
<waigani> Params passed to UserManagerAPI.UserInfo
<waigani> https://docs.google.com/a/canonical.com/spreadsheets/d/1caY03J8AalWXNXZ3Lh8DzE4LLYy99HYL1LaQ2UOSwh4
<davecheney> thumper: https://docs.google.com/a/canonical.com/document/d/1wcu6ufHX_x7CM602VJn-3pgHzFMfuEz_DvxfvXX4Q3s/edit#heading=h.f5zva1iwyydv
<davecheney> arm status
<waigani> thumper: WIP https://github.com/waigani/juju/compare/add-env-uuid-to-collections
<waigani> thumper: I'm not clear on the scope of serviceGlobalKey (and all GlobalKeys for that matter)
<waigani> thumper: I've appended the envuuid to serviceGlobalKey assuming global means global, not 'environmental'
<thumper> davecheney: ta
<thumper> waigani: thanks, I'll look shortly
 * thumper -> physio
<wallyworld> davecheney: hey, thanks for reviewing katherine's juju status branch but it was done on github not reviewboard so that makes it a little hard to collaborate now
<davecheney> wallyworld: sorry, just saw the email
<wallyworld> np :-)
#juju-dev 2014-09-16
<perrito666> thumper: how -> where
<perrito666> btw, nite/morning
<davecheney> why does reviewboard hide the diff by default
<davecheney> that seems to be at odds with its primary function in life
 * perrito666 fails at making dinner for reading this channel
<waigani> thumper: ping
 * thumper is back
<wallyworld> axw: morning, you feeling better today?
<axw> wallyworld: hey. much the same as yesterday actually
<wallyworld> :-(
<wallyworld> axw: i have a favour - i have 3 PRs for 1.20, https://github.com/juju/juju/pull/760/761/762 - can you take a look for me? this will then allow 1.20.8 to be finalised for release this week
<axw> wallyworld: certainly
<wallyworld> 2 are trivial
<wallyworld> ty
<wallyworld> axw: you make a good point about the gotool dependency, i'll remove it
<wallyworld> maybe i should retain bash script licence also; the bug implied we wanted consistency, so i went for that
<wallyworld> but i agree it doesn't make much sense
<axw> wallyworld: it isn't used in the same way as juju proper, so I don't think consistency makes sense
<axw> AGPL is about running things as a service, getting the same guarantees as GPL does for shipped code
<wallyworld> yep, agreed
<axw> wallyworld: are we going to 1.20.8 as an RC again? would be good to have the MAAS change tested more thoroughly
<wallyworld> axw: i will see if we can get it tested
<axw> wallyworld: all reviewed
<wallyworld> axw: tyvm
<rick_h_> playing with the gui on maas on the orange box and was bummed that there was no hardware characteristics. Does anyone know if that's a limitation of maas, or juju doesn't load/query it?
<bigjools> rick_h_: those are in maas so I suspect the provider code doesn't query it
<rick_h_> bigjools: gotcha, /me ponders a bug for that
<rick_h_> bigjools: ok, looks like it should be there: https://bugs.launchpad.net/juju-core/+bug/1193998
<mup> Bug #1193998: maas provider doesn't return hardware characteristics of started instances <maas-provider> <tech-debt> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1193998>
<rick_h_> oh man, the orange box is still on 1.18
<axw> rick_h_: it's literally just been backported to 1.20.8 today, about an hour ago ... doesn't help if you're stuck on 1.18 tho
<rick_h_> axw: gotcha, yea thanks.
<rick_h_> axw: just playing with screenshots and machine view work on here and wish it was a little bit prettier https://plus.google.com/116120911388966791792/posts/dR3zqM6E66B
<rick_h_> good to know it's coming though
<axw> ah, shame
<axw> wallyworld: did you ever get my email?
<wallyworld> axw: no :-(
<axw> weird
<wallyworld> resend?
<axw> just sent
<wallyworld> got it
<wallyworld> axw: did you want to have a hangout to talk about it?
<axw> wallyworld: in about 5-10 please
<wallyworld> sure, just ping whenever
<axw> wallyworld: just doing the survey, will let you know when I'm done... not sure how long it's going to take
<wallyworld> no hurry
<wallyworld> i need to do it too
<menn0> wallyworld: all issues raised for http://reviews.vapour.ws/r/20/ have been addressed. PTAL.
<wallyworld> menn0: will do, thanks
<axw> wallyworld: ready when you are
<wallyworld> axw: ok, will just finish a review and push up the licensing fix to trunk
<axw> sure
<axw> menn0: did you really mean to leave "foo" in as a way of saying "no"? :)
<menn0> axw: yes. basically anything that isn't "y" or "yes" is treated as no so I wanted to include a nonsense value.
<axw> fair enough
<menn0> axw: I followed the approach used by destroy-environment
<axw> oh that's a test
<menn0> except there a non-response is treated as the no case
<menn0> yep :)
<axw> my mistake
 * axw blames it on the flu
<wallyworld> menn0: +1 with suggestions
<menn0> wallyworld: cheers
<menn0> wallyworld: I like the 2nd and 3rd suggestion but I don't understand what you mean by the first one.
<menn0> axw: no problems
<wallyworld> menn0: just add the word "status" to the text, but ymmv (your mileage may vary)
<wallyworld> ie a suggestion
<menn0> wallyworld: got it
<menn0> wallyworld: I thought you wanted me to include "your mileage may vary" in the help text :)
<wallyworld> ah, lol
<menn0> wallyworld: now that I understand it, that sounds good
<wallyworld> ok :-)
<wallyworld> axw: standup hangout?
<axw> wallyworld: yup, brt
<thumper> wallyworld: anecdotal evidence is showing me more stability in our tests
<thumper> my ratio of good to bad test runs is better now than it has been
<wallyworld> thumper: indeed, but still a few failures to fix
<thumper> landed my branch first time
 * thumper nods
<thumper> hit one earlier
<wallyworld> but happy days
<wallyworld> me too
<thumper> but four times out of five today, it succeeded
<wallyworld> i hit the mongo mac error too :-(
<thumper> github.com/juju/juju/worker/uniter -   filter_test.go:467:   Error: unexpected config event
<thumper> and FAIL: replicaset_test.go:175: MongoSuite.TestAddRemoveSet in the same test run
<wallyworld> thumper: well, those tests are supposed to be getting removed
 * thumper nods
<wallyworld> i hope it happens soon
<davecheney> wallyworld: can I remove {state,params}JobManageStateDeprecated
<davecheney> axw: ^
<wallyworld> davecheney: i *think* so
<axw> davecheney: I think that a machine doc's Jobs may still contain it if upgraded all the way from 1.16, so we'd have this unknown job in parts
<axw> (if we removed it)
<axw> that's okay as long as we don't start barfing on unknown jobs
<axw> there's a TODO in cmd/jujud/machine.go to start barfing though
<axw> / TODO(dimitern): Once all workers moved over to using
<axw> / the API, report "unknown job type" here.
<davecheney> right, well i won't touch that
<davecheney> not with 1.18 being around for forever
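The tolerant behaviour axw describes (skip unknown jobs rather than "barfing") might look roughly like this sketch; `MachineJob` and the constants here are simplified stand-ins for the real juju/state types, not the actual code:

```go
package main

import (
	"fmt"
	"log"
)

// MachineJob is a hypothetical stand-in for state.MachineJob.
type MachineJob string

const (
	JobHostUnits             MachineJob = "JobHostUnits"
	JobManageEnviron         MachineJob = "JobManageEnviron"
	JobManageStateDeprecated MachineJob = "JobManageStateDeprecated" // may linger in docs upgraded from 1.16
)

// startWorkers logs and skips unknown job types instead of erroring,
// so a deprecated job left over in an upgraded machine doc is harmless.
func startWorkers(jobs []MachineJob) []MachineJob {
	var started []MachineJob
	for _, job := range jobs {
		switch job {
		case JobHostUnits, JobManageEnviron:
			started = append(started, job)
		default:
			log.Printf("ignoring unknown job type %q", job)
		}
	}
	return started
}

func main() {
	started := startWorkers([]MachineJob{JobHostUnits, JobManageStateDeprecated})
	fmt.Println(len(started)) // the deprecated job is skipped, not fatal
}
```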
<davecheney> yay
<davecheney> i found an unused function in environs/manual which relies on an unused function in state
<thumper> \o/
 * thumper runs out to collect cat from cattery now it is open
<thumper> bbs
<davecheney> thumper: menn0 https://github.com/juju/juju/pull/765
<davecheney> here is a quick one
<davecheney> % rbt post
<davecheney> ERROR: There don't seem to be any diffs!
<davecheney> Â¯\_(ã)_/Â¯
<menn0> davecheney: so you're in the branch you want to post?
<davecheney> yup
<menn0> try "rbt post --tracking-branch=upstream/master"
<menn0> davecheney: ^^^
<davecheney> thanks
<davecheney> that worked
<davecheney> but now it wants my credentials
<davecheney> which ones are those ?
<davecheney> github ?
<davecheney> canonical ?
<davecheney> gmail
<davecheney> ?
<menn0> that's in Eric's email... it's your GH username and "oauth:<gh-username>" for the password
<menn0> as per my email earlier today, we need to get the tracking branch config set up in the .reviewboardrc that's checked in
<menn0> davecheney: ^^^
<menn0> davecheney: remember to Publish your review request so that others can see it
<davecheney> nup
<davecheney> won't let me login
<davecheney> is it really oauth:<gh-username>
<davecheney> or is that a placeholder
<menn0> placeholder
<menn0> so for you it's "oauth:davecheney" I suspect
<davecheney> nup
<davecheney> no joy
<menn0> davecheney: what happens?
<davecheney> ==> HTTP Authentication Required
<davecheney> Enter authorization information for "Web API" at reviews.vapour.ws
<davecheney> Username: davecheney
<davecheney> Password:
<davecheney> ERROR: Error creating review request: The username or password was not correct (HTTP 401, API Error 104)
<menn0> davecheney: I think the account has to get created first... visit http://reviews.vapour.ws, go to the login page (top right) and click the oauth button
<davecheney> i have an account
<davecheney> i have logged in
<menn0> ok
<menn0> davecheney: and is your RB account "davecheney"?
<davecheney> can you just take a quick look at https://github.com/juju/juju/pull/765
<davecheney> yes
<davecheney> it was created from my gh oauth creds
<menn0> davecheney: then IHNFI :)
<menn0> davecheney: will review on GH
<davecheney> thanks
<menn0> davecheney: I don't like the duplicated function as I imagine you don't either
<menn0> davecheney: how temporary will it be?
<davecheney> menn0: i can remove it as soon as state stops depending on apiserver/params
<davecheney> given the shifting nature of our priorities it's hard to give an eta
<davecheney> but it is the goal of this piece of work
<davecheney> it is not tangential
<menn0> davecheney: but Real Soon Now? :)
<menn0> davecheney: understood
<davecheney> re of our priorities it's hard to give an eta
<davecheney> err
<davecheney> as soon as state stops depending on apiserver/params
<menn0> davecheney: you have my LGTM
<menn0> davecheney: but I'm a junior developer
<menn0> well junior reviewer
<davecheney> i'm sure thumper would have given it the thumbs up if he were here
<davecheney> but this is good
<davecheney> this leaves the last problem
<davecheney> the watchers depend on apiserver/params
 * davecheney inserts jarring chord
 * thumper is here
 * thumper looks
<davecheney> * thumper is here
 * davecheney inserts dramatic crescendo
<davecheney> WTF does the "review" button on a review do ?
<davecheney> it sure doesn't show you the code
<axw> wallyworld: the least horrible thing I can think of right now is to add a method, Environ.SetPrivateMetadataSource(DataSource)
<axw> the machine agent will call that with a data source backed by environment storage
<axw> when bootstrapping with --metadata-source, I'll create a local filesystem datasource and call it with that
 * thumper is down to about 150 unread emails
<thumper> got through over a thousand today
 * thumper is done for today
<thumper> night all
<axw> night
<thumper> menn0: I'll be out part of the morning tomorrow as I get an ultrasound and xray on my shoulder
<thumper> menn0: should be back before the standup
 * thumper crosses fingers hoping no surgery needed
<wallyworld> axw: i think that could work
<eagles0513875> rogpeppe1: hey was just replied to you on the mailing list :)
<rogpeppe1> eagles0513875: ta!
<eagles0513875> fwereade: POKE :P
<eagles0513875> rogpeppe1: im asking in the git channel but i have a hunch you do not need to rebase if you squash multiple commits together
<rogpeppe1> eagles0513875: i think rebase -i is the way to do that
<eagles0513875> let me do some further research though
<rogpeppe1> eagles0513875: at least, that's the way i've been doing it (it lets you choose which commit messages to keep, etc)
<eagles0513875> fwereade: im in here as well hehe :)
<eagles0513875> fwereade: out of curiosity here is an appropriate question to ask as i would love to start testing and helping out with juju on a non ubuntu specific distro
<eagles0513875> eager to learn golang for sure :)
<fwereade> eagles0513875, gsamfira has been leading the windows workloads, and I think that the extension of that to other OSs is on his roadmap?
<fwereade> eagles0513875, he'd be best able to say how to support him
<eagles0513875> gsamfira: ^ ping :D
<gsamfira> hey there :)
<eagles0513875> gsamfira: im very eager to help you with testing and development of juju for non specific linux distros
<gsamfira> eagles0513875: sure thing. What did you have in mind?
<eagles0513875> gsamfira: im using gentoo these days and I am more than willing to help you out with testing, debugging etc and coding
<gsamfira> eagles0513875 : I have personally never used gentoo, but I can write up a document of things that need attention when adding a new platform if you would be interested
<eagles0513875> if you dont mind that would be awesome
<frankban> gui devs, I need a review for https://github.com/juju/juju/pull/768 (megawatcher), anyone available? thanks
<frankban> oops, juju devs ^^^
<gsamfira> eagles0513875: I have to wrap something up, and it would be my pleasure.
<eagles0513875> also what about cloud providers as i would love to add the cloud provider i use
<gsamfira> eagles0513875: that is a topic for someone else I believe. But if you have a look inside github.com/juju/juju/providers, you'll find a few providers to use as examples :)
<gsamfira> eagles0513875: If you plan on starting development for Juju, best place to start is: https://github.com/juju/juju/tree/master/doc
<eagles0513875> :D ok
<eagles0513875> im starting to find the work flow a bit confusing
<eagles0513875> though from what im seeing via emails
<eagles0513875> fwereade: who would i need to talk to about adding another cloud providr
<eagles0513875> provider
<fwereade> eagles0513875, if you think you've got the time, come round to mine for a day with your provider credentials and I'll see if I can get you pointed in the right direction
<eagles0513875> i have them already
<eagles0513875> would you need access to their API?
<fwereade> eagles0513875, (the first step is to make sure you can get juju building and running the tests ofc)
<eagles0513875> agreed
<fwereade> eagles0513875, well, having a golang library for interacting with their API would be a great first step
<eagles0513875> ok would need to contact them about that
<fwereade> eagles0513875, I expect that we'd have to write that tbh
<eagles0513875> fwereade: ? what do you mean
<eagles0513875> a golang wrapper?
<fwereade> eagles0513875, yeah -- unless one already exists that you know of?
<eagles0513875> nope none that i know of :)
<eagles0513875> need to find out if golang is available on gentoo
<eagles0513875> hell would be great to get an ebuild setup for juju :)
<fwereade> eagles0513875, ah looks like there are a couple on github
<eagles0513875> couple of what?
<fwereade> eagles0513875, no guarantee they're any good
<fwereade> eagles0513875, linode api libs
<eagles0513875> i know they exist
<fwereade> eagles0513875, (that's what you're using iirc?)
<eagles0513875> yes
<eagles0513875> thing is
<eagles0513875> to use the api you have to have an account with them
<eagles0513875> especially if we want to target clients etc
<eagles0513875> to make it easier to deploy services etc
<eagles0513875> fwereade: ^ not sure if that would cause new problems
<fwereade> eagles0513875, sure, that applies pretty much across the board
<fwereade> eagles0513875, configure the provider with your API keys and go
<eagles0513875> exactly so we could ask the users to enter their api key etc
<fwereade> eagles0513875, yeah -- that'd just be defining a Config that accepted the api keys
<mattyw> dimitern, simple one for you http://reviews.vapour.ws/r/31/
<dimitern> mattyw, sure, will have a look in a moment
<mattyw> dimitern, thanks
<eagles0513875> fwereade: out of curiosity canonical wouldnt host a git repository for you guys and gerrit code review site either?
<dimitern> mattyw, oh, boy! my first review at RB :)
<eagles0513875> dimitern: from what ive been reading its making things a hell of a lot more complex
<mattyw> dimitern, I still find it a bit clunky
<mattyw> but so was rietveld when I first started using that
<dimitern> eagles0513875, the --amend option?
<eagles0513875> dimitern: RB
<fwereade> eagles0513875, we want to be on github because that's where the developers are -- and we have our own reviewboard up now
<dimitern> mattyw, it's a bit UX-unfriendly, but it takes some time to grow on you I guess
<eagles0513875> fwereade: gotcha i dunno as i dont develop for juju yet but i find its rather a complex setup
<eagles0513875> for instance fwereade with a gerrit git setup
<eagles0513875> when you push your changes you push to gerrit which adds them to the list of things to be reviewed
<eagles0513875> then you can lets say have x number of reviewers and last reviewer would push to the repo
<eagles0513875> for example libreoffice use it
<dimitern> eagles0513875, FWIW, if it sanely supports patch series (i.e. master -> branchA -> branchB -> branchC) it's totally worth it for better productivity
<eagles0513875> dimitern: gerrit does
<eagles0513875> for example libreoffice require 3 reviews for anything going into stable
<eagles0513875> and it keeps track of who reviewed what etc
<eagles0513875> stable branches need two reviews and master needs 1 or 2 reviews for them
<fwereade> eagles0513875, have you read the workflow for dependent branches in gerrit? it's not super-nice
<eagles0513875> fwereade: sadly havent looked into it in detail
<fwereade> eagles0513875, and... you seem to be under the impression that we haven't been doing code review?
<eagles0513875> but im sure though gerrit can be adapted for your needs
<frankban> fwereade, dimitern: time for a quick review? https://github.com/juju/juju/pull/768
<dimitern> mattyw, reviewed
<dimitern> frankban, looking
<eagles0513875> fwereade: i know you guys do code review, i never said you didn't :)
<frankban> dimitern: thanks!
<fwereade> eagles0513875, we looked into it and decided we preferred reviewboard :)
<eagles0513875> :) at least it was considered
<eagles0513875> anyway sorry for derailing the channel
<eagles0513875> fwereade: gsamfira hopefully in a few days time ill finish off this website and can start with helping you guys out
<mattyw> ericsnow, ping?
<frankban> dimitern: I used subordinate in unit info for symmetry, and we don't need the principal name
<dimitern> frankban, ah, sorry, I meant Principal bool, not Principal string
<frankban> dimitern: so Principal == !Subordinate?
<dimitern> frankban, yeah, it kinda feels more appropriate
<dimitern> frankban, more like UnitInfo.Principal: u.Principal != ""
<frankban> dimitern: on the other hand, from clients perspective, we have two opposite flags to parse for the same concept
<frankban> dimitern: isn't it UnitInfo.Principal: u.Principal == ""?
<dimitern> frankban, it's not the same concept actually
<dimitern> frankban, right, sorry, it is
<frankban> dimitern: so, sticking with Subordinate makes sense to me
<dimitern> frankban, being a subordinate comes from the unit's service's charm having subordinate: true
<frankban> dimitern: yes
<jam> fwereade: before I complain to the list, can I run something past you?
<dimitern> frankban, it's not a property of the unit itself, units only know if they have a principal or not, but strictly speaking I think you should check the ServiceInfo to determine whether the unit is subordinate
<frankban> dimitern: yes, it's a denormalization, so that we can immediately know if a unit belongs to a service deployed from a subordinate charm
<frankban> dimitern: it's just like UnitInfo.CharmURL: in theory the charm URL is an attribute of the service
<fwereade> jam, ofc
<dimitern> frankban, fair point
<dimitern> frankban, ok, objection withdrawn, go with UnitInfo.Subordinate then
<frankban> dimitern: cool
<frankban> dimitern: just another question
<frankban> dimitern: in what cases do we need to increase the API facade version? only backward incompatible changes?
<jam> fwereade: so, apiserver/params/params.go imports gopkg.in/juju/charms.v3
<jam> because stuff like ResolveCharm takes a "charm.Reference" object
<jam> which *actually* takes a 'string'
<jam> because it has a custom UnmarshalJSON type
<jam> but when trying to call it from python code, I was thoroughly confused
<jam> fwereade: so for a starting point, shouldn't all of apiserver/params be defined in basic types?
<jam> fwereade: the next thing was trying to Deploy
<fwereade> jam, right, that feels like the same problem as storing charm.Meta in the database
<jam> but you can't deploy "cs:ubuntu" or "cs:trusty/ubuntu"
<jam> you have to deploy "cs:trusty/ubuntu-#"
<dimitern> frankban, yes, but I need to know more details on the change you have in mind
<jam> but you have to go talk to the charm store to figure out what # to put in there.
<fwereade> jam, yes -- didn't that come up on the list semi-recently? it's *not* meant to be a charm.Reference at all, it's meant to be a charm.URL that gets deployed
<frankban> dimitern: I was thinking about the change you just reviewed: it's not backward incompatible, so I guess we are ok
<frankban> dimitern: just double checking
<jam> fwereade: sure, but it is *really* inconvenient from a "lets do this from Python" POV that to talk to our API you have to debug and figure out yet-another API
<eagles0513875> fwereade: can i make a suggestion for reviews
<jam> fwereade: and going to a page like: http://manage.jujucharms.com/charms/precise/ubuntu I can't actually tell what revision it is
<fwereade> jam, and -- yes, it should probably really just be a string over the API
<fwereade> jam, with the requirement that that string be a valid stringification of a charm url
<jam> fwereade: sure, it turns out that it is just a string, but it is decoded magically for the API rather than the API server taking a string and doing the decoding.
<fwereade> jam, I agree that that is a Bad Thing
<eagles0513875> fwereade: just would like to suggest: with RB, is it possible to have multiple people review something, let's say 3, before approving and pushing?
<fwereade> eagles0513875, it would be
<fwereade> eagles0513875, our experience is that that helps less than you might imagine
<dimitern> frankban, adding an optional field to a result should be ok, as long nobody depends on it
<frankban> dimitern: yeah, that's what I suspected. cool then, and thanks for the review!
<dimitern> frankban, backwards-incompatible change will be if you require Subordinate to be present and give errors if not
<dimitern> frankban, no worries
<jam> dimitern: I may be late for standup, taking dog out
<dimitern> jam, ok
<dimitern> jam, ok
<dimitern> ha.. so hitting enter on a blank line in xchat resends the last msg
<voidspace> dimitern: no it doesn't
<voidspace> dimitern: you must have hit the up arrow
<eagles0513875> fwereade: thats news to me. wouldnt that improve code quality and start discussion about fixes etc if its the right way etc
<dimitern> voidspace, I must have then :)
<voidspace> dimitern: I take it we're waiting for jam for the standup
<voidspace> dimitern: I have a question about dialling mongo in the meantime
<voidspace> dimitern: I don't think it's possible for us to establish connections to all mongos in a cluster
<dimitern> voidspace, it seems I'm the only one in the standup call
<voidspace> dimitern: from just a session
<voidspace> dimitern: I was there a minute ago
<jam> voidspace: dimiter and I are here, where are you and TheMue
<voidspace> ah, ok
 * fwereade lunch
<voidspace> jam: dimitern: I don't think you guys can hear
<voidspace> jam: dimitern: *hear me
<jam> voidspace: we cannot
<voidspace> jam: dimitern: leaving and returning
<dimitern> voidspace, we're hearing you typing
<jam> TheMue: poke?
<perrito666> morning
<mattyw> perrito666, morning
<perrito666> how do I get from the charm/number format to the tag name of a unit?
<perrito666> uh, I think I found it
<jam> perrito666: unit-charm-number
<perrito666> jam: I was looking for a method, apparently NewUnitTag would return that
<jam> perrito666: though be careful, as NewUnitTag (IIRC) will Panic if you don't have a valid content
<jam> and IIRC davecheney said "this is for tests only, you shouldn't be using this"
<perrito666> oh
<perrito666> ok
<jam> perrito666: so *maybe* you can use it, but you *must* validate the input first
<perrito666> jam: it is odd we don't have something like that that does not panic (I was wondering why it panics)
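A non-panicking version of the unit-name-to-tag conversion jam describes ("unit-charm-number") might look like this; the regexp is only a loose approximation of the validation in the real juju/names package:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// unitNameRE loosely matches "service/number" unit names, e.g. "mysql/0".
// The real juju/names package is stricter and panics in NewUnitTag on
// invalid input; this sketch returns an error instead.
var unitNameRE = regexp.MustCompile(`^[a-z][a-z0-9-]*/[0-9]+$`)

// unitTag returns the "unit-<service>-<number>" tag for a unit name,
// validating first rather than panicking.
func unitTag(unitName string) (string, error) {
	if !unitNameRE.MatchString(unitName) {
		return "", fmt.Errorf("invalid unit name %q", unitName)
	}
	return "unit-" + strings.Replace(unitName, "/", "-", -1), nil
}

func main() {
	tag, _ := unitTag("mysql/0")
	fmt.Println(tag) // unit-mysql-0
}
```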
 * perrito666 tries to recover his head's side movement by ibuprofen-ing it
<katco> perrito666: i am right there with you. apparently i slept funny.
<perrito666> katco: I went hiking on sunday and there was this strong cold wind (not feeling the face level) and I am still suffering random muscle failures
<katco> perrito666: well your reason sounds much more healthy.
<katco> perrito666: all i got is "i slept weird"
<perrito666> katco: sleeping is good
<perrito666> I am sure its much better than trying to be asymptotic to hypothermia
<katco> perrito666: at least with that you sleep well :)
<perrito666> yup, apparently hiking was good only in my memory, because last time I did it I was like 20kg lighter and 5 years younger
<jam> perrito666: I went jogging today after about a 6 month hiatus.... yeah, things aren't what they used to be
<perrito666> it gets better I decided to take some form of morning exercise
<perrito666> since I have lost my walk to the office in the morning exercise
<perrito666> well I havent but 5mts is not very demanding
<perrito666> I wish I could go jogging
<jam> perrito666: well, we got a dog 6mo ago, and I was spending my jogging time walking her, but today I was reminded that walking the dog != jogging
<perrito666> heh if your dog is athletic enough you can jog together, takes some training for the dog to be able to though
<jam> perrito666: yeah, I was trying to get there, but it is hard to control a dog and get them to jog for a while. My dog would jog for about 30s-1min. But I'm trying to jog more like 20+min.
<perrito666> jam: give the dog a reward when properly jogging and a yell when not, that worked with mine
<perrito666> yet my dog is a border collie and I cannot have her running at a time that suits me or she will faint bc of the temperature
<jam> perrito666: yeah, it has been difficult here as well
<jam> it has just started to cool down, which is why I went jogging today
<perrito666> here winter has become more of a technical term, we get like a month of cold a year
<perrito666> with luck
<perrito666> which reminds me
 * perrito666 shops for AC for his dig
<perrito666> dog*
<wallyworld> any ocr in da house? http://reviews.vapour.ws/r/18/
<wallyworld> my poor little pr has been feeling very unloved
<perrito666> wallyworld: o/
<wallyworld> hey :-)
 * perrito666 reviews
<perrito666> the other one is dimitern
<wallyworld> ty :-) it's mainly mechanical
<wallyworld> test data unchanged
<perrito666> wallyworld: have a bit of patience I am trying to figure how to add a MustBeValidatedByDave(LGTM) to reviewboard
<wallyworld> perrito666: no problem :-) thank you for looking
<wallyworld> maybe just add a comment and i can ping dave tomorrow
<perrito666> wallyworld: I believe I did it :p
<perrito666> I sent an email to dave just in case
<wallyworld> \o/ thank you :-)
<wallyworld> you rock
 * voidspace lunch
<perrito666> is there a straightforward way to know a machine series from a unit?
<dimitern> perrito666, fwereade, please take a look http://reviews.vapour.ws/r/33 - api changes for the firewaller wrt port ranges support
<perrito666>  dimitern This review request is private. You must be a requested reviewer, either directly or on a requested group, and have permission to access the repository in order to view this review request
<dimitern> perrito666, sorry, I should've used --publish - should be fine now, try again?
 * dimitern will be back in 1h
<ericsnow> mattyw: you still looking for me?
<mattyw> ericsnow, I am - have you seen my email?
<ericsnow> mattyw: yeah, I'll take a look
<mattyw> ericsnow, cool thanks
<perrito666> ericsnow: wwitzel3 shall we or should we suspend?
<ericsnow> perrito666, wwitzel3: I'd still like to meet
<ericsnow> perrito666, wwitzel3: or do you mean wait until Nate is back?
<perrito666> ericsnow: I dont know if Ill be online when nate is back
<wwitzel3> ericsnow, perrito666: we can do it now then and then another when Nate is back, that's fine by me.
<perrito666> wwitzel3: ericsnow I can hear you guys
<perrito666> apparently my upload is sucking?
<wwitzel3> hah and now i dropped
<wwitzel3> Looking for someone who can LGTM on https://github.com/juju/juju/pull/705
<wwitzel3> It's been reviewed and updated, just need a final look
<dimitern> perrito666, fwereade, poke re that review? :)
<perrito666> dimitern: let me get out of a call and Ill do it (although you will need a second review)
<dimitern> perrito666, sure, thanks
<mattyw> dimitern, can you take another look at http://reviews.vapour.ws/r/36/
<mattyw> dimitern, you've seen it before
<mattyw> dimitern, but I messed up the old diff
<dimitern> mattyw, sure, looking
<dimitern> mattyw, LGTM
<mattyw> dimitern, thanks again
<ericsnow> cmars: could you take another look at https://github.com/juju/juju/pull/736?
<ericsnow> cmars: your feedback was really helpful, BTW
<cmars> ericsnow, sure, in a bit
<ericsnow> cmars: no worries
<ericsnow> FYI (everyone), I have fixed notifications on reviewboard so now you should be getting emails for review requests there if you've set your email address in your account settings
<perrito666> ericsnow: what was the problem?
<ericsnow> perrito666: I guess I set an email address for the review group, it didn't work, and that prevented any notifications from going out
<ericsnow> perrito666: I removed it (it was unnecessary)
<wwitzel3> if I have a unit tag, how can I get the machine tag from that? can I?
 * wwitzel3 digs through juju/names
<perrito666> wwitzel3: depends where you are
<wwitzel3> perrito666: I am right here .. with a UnitTag
<wwitzel3> the world is my oyster
<perrito666> wwitzel3: client/server ?
<wwitzel3> perrito666: apiserver
<perrito666> wwitzel3: well if you can get a Unit out of that tag you can get AssignedMachineId
<perrito666> gtg bbl
<wwitzel3> perrito666: yeah, so state.Unit() and then unit.AssignedMachineId
<perrito666> awww really? the charm dir needs sudo so I cannot scp there, bummer
<mattyw> fwereade, ping?
<natefinch> trivia time..... yaml 1.1 defines what they call a "base 60 float" of the form 12:34:56.90, so it's basically hours, minutes, seconds, fractional seconds. But it means that if you happen to be using unquoted strings that contain only numbers and colons.... yaml parsers will blow up.
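The YAML 1.1 base-60 rule natefinch mentions can be sketched as follows (integers only here; the real rule also covers fractional seconds like 12:34:56.90). This is why an unquoted `12:34:56` silently resolves to a number instead of a string:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSexagesimal mimics the YAML 1.1 "base 60" integer resolution:
// "12:34:56" becomes 12*3600 + 34*60 + 56. A YAML 1.1 parser applies
// this to unquoted scalars made only of digits and colons.
func parseSexagesimal(s string) (int, error) {
	total := 0
	for _, part := range strings.Split(s, ":") {
		n, err := strconv.Atoi(part)
		if err != nil {
			return 0, err
		}
		total = total*60 + n
	}
	return total, nil
}

func main() {
	n, _ := parseSexagesimal("12:34:56")
	fmt.Println(n) // 45296: the "string" silently became a number
}
```

Quoting the scalar ("12:34:56") is the usual way to keep it a string.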
<mattyw> mgz, ping?
<mattyw> well that's enough bug writing for me today, night all
<jcw4> tx wallyworld :)
<wallyworld> jcw4: let me now if you have questions
<wallyworld> know
<wallyworld> you can cargo cult what's been done already
<jcw4> wallyworld: will do... I'll use the other facades you mentioned as a guide
<jcw4> yep
<jcw4> :)
<wallyworld> jcw4: makes it much easier to have a focused set of APIs, since the whole facade gets a new version when one of the APIs changes
<jcw4> wallyworld: that makes sense.  I'm learning that API versioning is very complicated
<wallyworld> there's some doco coming soon i think. you basically rev the whole facade from v1 to v2 to v3 etc. the complication is catering for older clients, but that's sort of taken care of by the infrastructure
<wallyworld> since the older facades are retained
<jcw4> looking forward to the docs. It's an interesting topic
<wallyworld> indeed
 * thumper made it back
<waigani> menn0: standup?
<menn0> waigani: be there in a sec. Chrome keeps aw snapping
<jcw4> thanks ericsnow ... I'm focused on apiserver today, but I expect to be looking at api/* tomorrow :)
<ericsnow> jcw4: cool :)
<ericsnow> jcw4: FYI, patching out the api facadecaller in tests is recommended for the sake of keeping tests isolated from the rest of the system and for lower test times
<ericsnow> jcw4: however, the mechanism for doing that patching could use some improvement (there are 2 ways implemented and neither is easy to grok)
<jcw4> ericsnow: I'll look at your PR's about that.  I've read the discussion on patching facadecaller with interest, but haven't looked closely at what that actually means
<jcw4> ericsnow: I've done a little of the patching technique a couple months ago, but I don't recall the specifics
<ericsnow> jcw4: it makes me think there is something broken with how we are wrapping facade callers with clients
<jcw4> ericsnow: hmm
<ericsnow> jcw4: it's nothing critical though
<jcw4> ericsnow: tx for the pointers
<ericsnow> jcw4: I'm glad we have something at all :)
<jcw4> +1
<davecheney> thumper: forgot to say
<davecheney> just saw some good news on the arm64 ticket
<davecheney> maybe, maybe we'll have access tonight over vpn
<thumper> yep, saw your email responses
<thumper> awesome
<davecheney> i'm on the box now
<davecheney> i'
<davecheney> i'll follow up with aram tonight when he comes online
<thumper> coolio
<menn0> thumper: 2 meta-reviews required please:
<menn0> http://reviews.vapour.ws/r/30/
<menn0> http://reviews.vapour.ws/r/28/
<thumper> gah, now I need to work out how to log into this thing
<menn0> http://reviews.vapour.ws/, "sign in", click the "github oauth" button. done
<wallyworld> davecheney: if you have a chance to give a 2nd +1 to http://reviews.vapour.ws/r/18/ that would be great :-)
<davecheney> perrito666: did you make any comments on that review ?
<davecheney> just a question, no subtext
<davecheney> done
<wallyworld> davecheney: thank you, will add the constants, good idea
#juju-dev 2014-09-17
<davecheney> or make StreamsVersion private
<davecheney> either way
<davecheney> once it's public people will starting using it for things
<davecheney> think of the version.Version cluster fuck
<perrito666> davecheney: I did
<perrito666> wait
<perrito666> davecheney: expand that
<davecheney> perrito666: i cannot see any comments on reviewboard
<davecheney> this may not be your fault
<davecheney> it could be RB
<perrito666> davecheney: which one, I sent you two emails
<davecheney> perrito666: did you place any comments on http://reviews.vapour.ws/r/18/
<perrito666> no, I just LGTMd, I see you added a comment, I am checking what I missed now
<perrito666> I did add several comments to 33 though
<davecheney> perrito666: cool
<davecheney> to restate, this is not a criticism
<davecheney> just checking that i'm not missing things recorded in reviewboard
<davecheney> i find its interface as comfortable as riding a tractor
<perrito666> davecheney: that is ok, I just went back to take a look just in case, I am a bit lost in RB still
<davecheney> perrito666: 33 is NOT LGTM
<perrito666> davecheney: last tractor I rode was waaay better than my car
<perrito666> davecheney: odd, dimitern said that this change was suggested by you
<davecheney> yup, he's fixed some todos
<davecheney> which is awesome
<davecheney> but changed the tags inside some structures which i'm super protective of
<perrito666> davecheney: <dimitern> re the tag changes - it was making writing tests unwieldy and dave already commented in a few places we need to fix this
<perrito666> he got carried away with changes
<davecheney> perrito666: the reason we have to use a type assertion is the interface those types conforms to requries the Tag() method return a names.Tag,
<davecheney> that can be fixed
<davecheney> but the type stored inside the structure must be the concrete tag type that matches the thing it's tagged
<davecheney> or in other words
<davecheney> type Machine struct { tag names.MachineTag }
<davecheney> ^ anything else is going backwards
<perrito666> makes sense
<perrito666> davecheney: thanks for the explanation though
<menn0> thumper: another one: http://reviews.vapour.ws/r/25/
<perrito666> davecheney: btw, do you see the comments I did there?
<davecheney> perrito666: nope
<davecheney> but i didn't read the whole thing
<davecheney> just rage quit when i saw the change to the tags
<perrito666> mm, I seem to need to do some changes so others can see those
<davecheney> gotta push the 'review' button
<davecheney> which actually means 'post review'
<davecheney> the
<davecheney> the 'review' button is actually labeled 'view diff'
<perrito666> I did, odd
<menn0> thumper: with RB, if you're happy with a change you need to click "Ship it" or tick the option (depending on which screen you're on)
<menn0> thumper: I've just done it for #30 since you were happy with that one.
 * perrito666 looks for a way to link to his comments and fails... and burns the food on the stove
<thumper> kk
<davecheney> thumper: i have to go out in 60 mins
<davecheney> any chance of a chat before then ?
<thumper> davecheney: sure, how about now?
<davecheney> thumper: i'll see you in the standup channel
<thumper> kk
<waigani> megawatcher tests are passing
<jcw4> wallyworld: is there any precedent for passing in partially filled results structs as filters/templates for an API List* or Get* method in Juju?
<wallyworld> jcw4: do you have a concrete example?
<jcw4> wallyworld: well for actions, I want to be able to List the actions queued up for a Service and/or a Unit, and possibly filter by Status (pending, running, finished, failed, etc.)
<jcw4> s%and/or%or%
<wallyworld> you'd define a Filter struct to hold the query parameters?
<jcw4> wallyworld: I was thinking of using the return type struct and partially filling out the fields...?
<wallyworld> yuk :-)
<jcw4> :)
<wallyworld> IMHO
<jcw4> (that's why I asked)
<wallyworld> :-)
<wallyworld> np
<jcw4> I know mongo does something like that
<jcw4> I'm pulled between keeping type explosion down vs. simple API
<wallyworld> it does? i didn't know that. I've just encountered Find(blah).One(&result) type stuff
<wallyworld> sometimes simple != intuitive
<jcw4> wallyworld: I interpreted the blah as a 'template'
<wallyworld> eg
<wallyworld> s.db.C("managedStoredResources").Find(bson.D{{"path", "environs/env" + path}}).One(&mrDoc)
<wallyworld> blah = a list of filter conditions
<jcw4> wallyworld: okay.. so instead of one Get method, maybe GetActionsOn(service names.Tag), GetFailedActionsOn(service names.Tag), etc...
<jcw4> or GetActionsByStatus(entity names.Tag, status string) etc.
<wallyworld> the latter would be preferable
<jcw4> and, modified to be all bulk calls of course
<wallyworld> i would imagine under the covers it would call GetActions(filter)
<jcw4> wallyworld: yes.
<jcw4> okay, so slightly more verbose API would be preferable
<wallyworld> that works for me personally. i like a wide API at the level of the caller, and a narrow api over the wire
<jcw4> wallyworld: hence the GetActions(filter) under the covers I suppose?
<wallyworld> yep
<wallyworld> a narrow api over the wire is more robust against change
<wallyworld> eg if a new status were added
<jcw4> makes sense
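The design jcw4 and wallyworld settle on, a wide caller-facing API delegating to a single narrow `GetActions(filter)` over the wire, can be sketched like this. All names (`ActionFilter`, `GetActionsOn`, the in-memory `queue`) are illustrative, not the real Actions API:

```go
package main

import "fmt"

// ActionFilter holds the query parameters for the narrow wire call.
// An empty field means "match anything".
type ActionFilter struct {
	Tag    string // entity the actions belong to
	Status string // pending, running, finished, failed, ...
}

type Action struct {
	Tag    string
	Status string
}

// queue stands in for the persisted action collection.
var queue = []Action{
	{"unit-mysql-0", "pending"},
	{"unit-mysql-0", "failed"},
	{"service-wordpress", "running"},
}

// GetActions is the single narrow call. Adding a new status later
// doesn't change its signature, which keeps the wire API robust.
func GetActions(f ActionFilter) []Action {
	var out []Action
	for _, a := range queue {
		if f.Tag != "" && a.Tag != f.Tag {
			continue
		}
		if f.Status != "" && a.Status != f.Status {
			continue
		}
		out = append(out, a)
	}
	return out
}

// Wide, caller-friendly wrappers delegate to the narrow call.
func GetActionsOn(tag string) []Action {
	return GetActions(ActionFilter{Tag: tag})
}

func GetFailedActionsOn(tag string) []Action {
	return GetActions(ActionFilter{Tag: tag, Status: "failed"})
}

func main() {
	fmt.Println(len(GetActionsOn("unit-mysql-0")))       // all actions for the unit
	fmt.Println(len(GetFailedActionsOn("unit-mysql-0"))) // failed only
}
```

This also avoids the "partially filled result struct as template" approach: the filter type makes the queryable fields explicit instead of overloading the result type.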
<wallyworld> axw: hi there, i booted an aws environment from trunk using upload-tools, and when I try and add-machine, I get
<wallyworld>   "1":
<wallyworld>     agent-state-info: no matching tools available
<axw> :/
<wallyworld> have you seen that?
<axw> nope
<wallyworld> ok, i'll poke around
<rick_h_> wallyworld: what's this placement/ensure-availability stuff?
<rick_h_> wallyworld: you guys going to give me a manual api back?
<wallyworld> rick_h_: the landscape guys want to use ensure-availability and be able to nominate what machines should become the new HA servers
<wallyworld> we already have this for bootstrap
<rick_h_> wallyworld: what about UI guys that want to have ensure-availability by default, but allow people to override if they want to?
<wallyworld> it's extending that to ensure-availability
<rick_h_> wallyworld: ah ok, so not what I'm hoping for
<wallyworld> what do you mean ensure-availability by default?
<rick_h_> so that is on by default on azure
<rick_h_> which is cool, I like safe defaults and all that
<rick_h_> but that means by default, no user can specify placement
<rick_h_> they have to rebootstrap without the ensure-availability and then don't get any help
<wallyworld> yes, that is an azure limitation :-(
<rick_h_> right
<wallyworld> this change is primarily for maas
<rick_h_> but my understanding is that it's only a limitation because we made it that way
<wallyworld> i don't think so, axw would know more
<axw> no, it was made that way to work around azure's restrictive nature
<rick_h_> there's no way to set up 'I don't care if these are HA' services manually placed, like a log aggregator or *cough* jenkins *cough* install
<axw> rick_h_: the way azure does HA is very different to other cloud providers, which means we can't support both at the same time
<rick_h_> axw: right, but I'm suggesting, or I guess asking, if there's room for some things to not be HA in an env
<axw> rick_h_: yes, with some work we could probably do it on a per-service basis
<axw> I don't think it's a small amount of work though
<rick_h_> axw: yea, in brussels I'll probably ask about it. Especially with the planned work for more auto placement, it kind of breaks the whole chunk of work we've done in machine view
<rick_h_> axw: on the one hand we're looking at trying to enable colocating and collapsing down, but if we start auto placing and turning off overrides, then we break that story for smaller/getting started users
<rick_h_> and even stuff like the orange box setup where almost every node there has multiple services on it
<wwitzel3> axw: no need to worry about the versioning stuff for juju run .. I spoke to william and he completely extended the scope of my work and added some rather large, non-trivial things to that PR ;)
<axw> wwitzel3: lol
<wwitzel3> axw: good news is, it will be versioned :)
<rick_h_> axw: wallyworld ok, saw a branch go by with an interesting little title and had to ask what was up.
<axw> rick_h_: I understand, we do need to do something about it
<wallyworld> rick_h_: np at all, ask anytime
<axw> it's a bit crappy having to choose
<axw> at such a coarse level anyway
<wwitzel3> axw: I'm just making a new, smaller, focused facade for the run part of the client, instead of leaving it all rolled up under client.
<axw> wwitzel3: cool, SGTM, just like what wallyworld did for ensure-availability
<rick_h_> axw: yea, my feeling on it. I think it's good stuff, just a bit :( since it's by default and the answer is "once you realize you DO want to colocate something, rebootstrap"
<wallyworld> and eric for Backup, and also previously for KeyManager, UserManager etc
<axw> wallyworld: I hate private image metadata
<axw> :(
<wallyworld> sadly we need it :0(
<wallyworld> :-(
<wwitzel3> axw: yep, and the usermanager facade
<axw> it's messing up my nice abstractions
<wallyworld> wwitzel3: and keymanager and backup
<wallyworld> axw: did what we discussed last night about adding it into the current CustomMetadata workflow work out?
<axw> wallyworld: I've got an API client-based one jammed in there. feels a bit wrong to be calling the API from there though...
<axw> also, going to need to do something else to read the image metadata at bootstrap time
<axw> previously we'd upload to cloud storage before bootstrapping, don't have that luxury now
<wallyworld> axw: we should use a bespoke interface and inject in an implementation
<wallyworld> for retrieving image metadata
<wallyworld> the interface lives in provider
<wallyworld> there's then an api client adaptor
<wallyworld> or something like that
<axw> wallyworld: it's too error prone... we'll end up missing a call to set the source
<wallyworld> that's what tests are for :-)
<axw> tests where? *everything* uses an Environ, and we add it all the time
<wallyworld> and we can have the thing which constructs the provider even read stuff from state, and pass in static data behind that interface
<axw> environs.New(*config.Config)
<wallyworld> hmmm
<wallyworld> maybe it needs to be done on the provider
<wallyworld> cause that's where startinstance is
<axw> no it's not, that's on Environ
<wallyworld> ah, yeah
<axw> EnvironProvider just opens/prepares
<axw> more or less
<wallyworld> it's at that point we could pass in the metadata
<wallyworld> this is all handwavy without looking back at the code
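Wallyworld's injection idea, a bespoke interface for retrieving image metadata with swappable implementations, could look roughly like this. It is deliberately handwavy to match the conversation: `MetadataSource`, `staticSource`, and `StartInstance` are illustrative names, not the real provider/Environ API.

```go
package main

import "fmt"

type ImageMetadata struct {
	ID     string
	Region string
}

// MetadataSource is the bespoke interface the provider would consume.
// An API-client adaptor and a static implementation (usable at
// bootstrap time, before any API connection exists) both satisfy it.
type MetadataSource interface {
	ImageMetadata(region string) ([]ImageMetadata, error)
}

// staticSource serves metadata from memory, e.g. at bootstrap.
type staticSource struct {
	images []ImageMetadata
}

func (s staticSource) ImageMetadata(region string) ([]ImageMetadata, error) {
	var out []ImageMetadata
	for _, im := range s.images {
		if im.Region == region {
			out = append(out, im)
		}
	}
	return out, nil
}

// StartInstance stands in for the Environ method that needs metadata;
// it only sees the interface, never where the data came from.
func StartInstance(src MetadataSource, region string) (string, error) {
	images, err := src.ImageMetadata(region)
	if err != nil {
		return "", err
	}
	if len(images) == 0 {
		return "", fmt.Errorf("no matching images in %q", region)
	}
	return images[0].ID, nil
}

func main() {
	src := staticSource{images: []ImageMetadata{{ID: "ami-123", Region: "us-east-1"}}}
	id, _ := StartInstance(src, "us-east-1")
	fmt.Println(id)
}
```

axw's objection (it's error prone if every construction site must remember to inject the source) is the real trade-off here; a registry keyed on environ, which he investigates below, centralises that wiring instead.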
<waigani> i'm getting the following error: /usr/lib/go/pkg/tool/linux_amd64/6l: conflicting definitions for @"github.com/juju/juju/state".State
<waigani> is it safe to delete /usr/lib/go/pkg/ ?
<waigani> I deleted ~/go/pkg but it didn't help
<waigani> davecheney: ^?
<axw> waigani: I don't think you'd want to remove /usr/lib/go/pkg, unless you installed a package there with sudo
<waigani> axw: no I didn't
<waigani> axw: I'm using the go that comes with 14.04
<axw> wallyworld: I'm going to investigate another approach of having a datasource registry in environs/imagemetadata
<wallyworld> ok
<axw> waigani: ok, dunno what that error is, but removing that dir sounds wrong
<waigani> yeah, felt wrong, that's why I asked. thanks
<waigani> actually it's referring to a bunch of stuff in tmp
<waigani> that I can delete
<waigani> e.g /tmp/go-build964096169/github.com/juju/juju/state/_test/github.com/juju/juju/state_test.a(_go_.6):
<axw> wallyworld: the registry will be keyed on environ. may make it possible to remove the SupportsCustomSources interfaces
<wallyworld> worth a spike
<waigani> except that it's already been deleted... hmm
<waigani> fixed it, that was a new one
<menn0> wallyworld: do you need a review for this? http://reviews.vapour.ws/r/38
<wallyworld> menn0: yes please
<menn0> wallyworld: doing it now
<wallyworld> ty
<wallyworld> menn0: i pushed a small fix to the github pull request, what's the command to update the reviewboard diff? just "rbt post" ?
<menn0> wallyworld: "rbt post -u" might do it... or else "rbt post -r <review number>"
<wallyworld> ta
<wallyworld> axw: found the tools problem. i had an older compiled jujud in my path (1.20.8) and it found and uploaded that one and then when that 1.20.8 jujud went to find tools, it looked for 1.20.8 and there were none as juju recorded tools for 1.21
<axw> wallyworld: ah, ok.
<axw> that's a problem with upload-tools, it just assumes whatever it finds is the same as the CLI version
<wallyworld> yeah, so i found out :-(
<wallyworld> i didn't even realise i had a binary, and i didn't notice the log message in amongst all the noise
<menn0> wallyworld: just finished reviewing http://reviews.vapour.ws/r/38/. thumper can you do a meta-review please?
<wallyworld> thanks menn0
 * thumper looks
<thumper> menn0: I can't see your comments on the main diff
<thumper> ahh...
<thumper> because the comment is on diff one
<menn0> thumper: ah... wallyworld uploaded a new diff while I was reviewing...
<menn0> thumper: if you're looking at my review comments, you can click the title (where the filename is) to jump to the code.
<wallyworld> menn0: thumper: i am going to back out that change i think - the issue i had came from using an old jujud. i thought i messed up the api versioning as i just guessed based on reading the code, but it turns out i was right after all
<wallyworld> but i will keep the extra test
<wallyworld> menn0: thumper: although, I do like the fact that the new implementation gets the version directly off the ha client. but it means possibly opening a ha client and closing it again if the version is wrong. what implementation do you guys prefer?
<wallyworld> the first one gets the facade version by asking for it by name
<wallyworld> the second one gets a client and asks the client
<thumper> menn0, wallyworld: why has noone updated the branch in the .reviewboardrc?
<wallyworld> nfi
<menn0> thumper: I think we needed to establish whether everyone has the "upstream" remote set up in the same way
<thumper> menn0: if they followed the instructions they should have
<thumper> menn0: if not, they need to fix it
<thumper> menn0: why make the rest of us suffer?
<menn0> thumper: yeah.. and no-one said anything when I mentioned it on the list
<menn0> thumper: let's fix it
<menn0> thumper: I've been running rbt post with --tracking-branch=upstream/master in the mean time
 * thumper nods
 * thumper submits his first review board review
<thumper> http://reviews.vapour.ws/r/39/diff/
<menn0> thumper: will take a look soon
<thumper> ta
<wallyworld> jam: hi, i'd like your input on determining API version. http://reviews.vapour.ws/r/38/ The first diff did this: root.BestFacadeVersion("HighAvailability"), the second diff does this after instantiating a client: client.BestAPIVersion(). Both work. The first way doesn't require creating a client only to potentially not use it if the API doesn't exist, but does require knowing the facade by name. What would you prefer?
<wallyworld> this is in cmd/juju/ensureavailability.go
<jam> wallyworld: so, I haven't looked at the whole diff, but is the idea that you're moving EA into another location?
<jam> so IMO, you would have an api/ha/client.go that would try to use the new location if it existed, else fall back to the old location
<jam> in which case, you'd still be wanting the client
<jam> it would just be using the right API call underneath
<wallyworld> jam: the facade name is different
<wallyworld> i'm moving methods off the original big client
<jam> wallyworld: that doesn't mean you need a different client side type
<wallyworld> and onto a whole new facade
<jam> wallyworld: sure, but the api/highavailability/client.go can still call APICall("Client", 0, "", "EnsureAvailability", ...)
<wallyworld> oh,i see
<wallyworld> that sounds nicer
<wallyworld> jam: i'll implement that and get you to +1 if that's ok
<jam> wallyworld: so it depends *what* we can get away with, but my idea is that we should push the compatibility layer as low as we can
<wallyworld> yep
<wallyworld> agreed
<wallyworld> i didn't know how to do that
<jam> wallyworld: so if we can hide compat underneath api/foo.go then do so
<wallyworld> +1
<jam> sometimes it has to be done in client code like cmd/juju level code
<jam> but for this, I think you can do it lower.
<wallyworld> that would be good
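Jam's suggestion of pushing the compatibility layer down into the api client can be sketched as one client type that prefers the new facade and falls back to the old monolithic "Client" facade. The `caller` type, its map of facades, and the string-returning `APICall` are simplified stand-ins for the real api infrastructure, used here only to make the dispatch visible:

```go
package main

import "fmt"

// caller stands in for the api connection; it knows which facade
// versions the server advertises.
type caller struct {
	facades map[string]int // facade name -> best available version
}

// BestFacadeVersion returns 0 when the facade is absent (a
// simplification; the real API distinguishes absent from v0).
func (c caller) BestFacadeVersion(name string) int {
	return c.facades[name]
}

// APICall just reports what would have been called over the wire.
func (c caller) APICall(facade string, version int, method string) string {
	return fmt.Sprintf("%s(v%d).%s", facade, version, method)
}

type HAClient struct{ st caller }

// EnsureAvailability hides the version choice from callers: new
// servers get the dedicated facade, old ones the big Client facade.
func (c HAClient) EnsureAvailability() string {
	if c.st.BestFacadeVersion("HighAvailability") >= 1 {
		return c.st.APICall("HighAvailability", 1, "EnsureAvailability")
	}
	return c.st.APICall("Client", 0, "EnsureAvailability")
}

func main() {
	oldServer := HAClient{st: caller{facades: map[string]int{"Client": 0}}}
	newServer := HAClient{st: caller{facades: map[string]int{"HighAvailability": 1}}}
	fmt.Println(oldServer.EnsureAvailability())
	fmt.Println(newServer.EnsureAvailability())
}
```

Code at the cmd/juju level then calls `EnsureAvailability()` without knowing which facade serves it, which is the "compat as low as we can" point jam makes above.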
<jam> wallyworld: fwiw,  I would rather it not be called apiserver/highavailability/client.go
<jam> "client" here seems confusing.
<jam> wallyworld: api/highavailability/highavailability.go seems slightly better (if a bit longer)
<wallyworld> fair point, i cargo culted that from somewhere
<wallyworld> ill fix
<jam> wallyworld: apiserver/client/client.go I'd imagine
<wallyworld> could be
<wallyworld> jam: i've pushed the changes to rb if you would be kind enough to look
<menn0> thumper: wow, such diffs, very long
<menn0> thumper: almost done
<thumper> menn0: ack, and ta
<menn0> thumper: done
<davechen1y> thumper: thanks for the chat earlier
<davechen1y> i've had a good mull over it
<thumper> davechen1y: np
<davechen1y> i think the multiwatcher should only accept types defined in its package
<davechen1y> i won't say it is the only sane way to do it
<davechen1y> but it feels more sane than what we have at the moment
<davechen1y> holy shit
<davechen1y> thumper:
<davechen1y> type EntityId struct { Kind string; Id interface{} }
<davechen1y> look, it's a tag!!
<thumper> davechen1y: yeah, I've been looking at that information now in the watcher...
<davechen1y> thumper: i keep getting confused between state/megawatcher, and state/multiwatcher
<thumper> me too
<davechen1y> thumper: i'll be spending 1/2 of tomorrow and all of friday on arm stuff
<thumper> ack
<davechen1y> now we have a machine I suspect aram's output will increase quite a lot
<urulama> wallyworld: morning ... is there anything that you'd like to discuss today on a regular catchup?
<wallyworld> urulama: hey, morning.
<davechen1y> thumper: i'm thinking that params.EntityId should be the state version of a tag
<davechen1y> william has said many times he doesn't want tags stored in the db
<davechen1y> but it feels like there is a case for a tag-like thing
<davechen1y> and entity id may be that thing
<davechen1y> it's an identifier and a type bound together
<thumper> hmm... interesting
<davechen1y> honestly entityid could be
<davechen1y> type MachineEntity interface{}
<davechen1y> well maybe, that might be a bad idea
<davechen1y> but that sort of thing
<davechen1y> it's a typed thing
<davechen1y> and that is _exactly_ what an interface is
 * thumper nods
<thumper> sounds reasonable...
<davechen1y> i haven't quite figured out what entity ids do yet
<davechen1y> they seem to be a way of informing watchers what _type_ of entity changed
<thumper> I was just looking at the entity stuff as a wip review of jesse's look at adding the env-uuid to the service collection
<thumper> I'm wanting the id to be environment specific
<thumper> so the actual document ID is an opaque transformation of the entity id
<davechen1y> oh yeah, they completely fail to capture that
<davechen1y> they are at best a tuple { collection, row }
 * thumper nods
<thumper> it could be looked at like that
<davechen1y> so, the question is, do we extend the tuple to be
<thumper> but I think it would be good to work out the real intent
<davechen1y>  { collection, env-uuid, row }
<thumper> to work out how best to model this
<thumper> I don't think that bit is necessary, as the state instance has an implicit env-uuid
<davechen1y> or stick with { collection, row } and accept that GetAll does not return the complete set of documents in the collection
<thumper> which is why I think the mega watcher needs to be environment specific
<thumper> the latter of what you said
<thumper> I think
<davechen1y> i need to figure out where that data is being used
<davechen1y> i'm guessing it's something like
<thumper> yeah, I agree
<davechen1y> switch collection {
<davechen1y> case "machines": state.GetMachine(id)
<davechen1y> case "unit": state.GetUnit(id)
<davechen1y> and so forth
<davechen1y> which is so like tags it's not funny
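Davechen1y's guessed dispatch above can be written out as a small sketch: an `EntityId` carrying `{collection, id}` is switched on to pick the right lookup, which is exactly the kind/id pairing a tag encodes. The `lookup` function and its string results are illustrative stand-ins for the state getters:

```go
package main

import "fmt"

// EntityId pairs a collection name with a row id, the tuple discussed
// above (no env-uuid; a state instance implies its environment).
type EntityId struct {
	Collection string
	Id         string
}

// lookup dispatches on the collection, standing in for calls like
// state.Machine(id) / state.Unit(id), or a single state.FindEntity(tag).
func lookup(e EntityId) (string, error) {
	switch e.Collection {
	case "machines":
		return "machine " + e.Id, nil
	case "units":
		return "unit " + e.Id, nil
	default:
		return "", fmt.Errorf("unknown collection %q", e.Collection)
	}
}

func main() {
	s, _ := lookup(EntityId{Collection: "machines", Id: "0"})
	fmt.Println(s)
}
```

Thumper's later remark that this "would almost be state.FindEntity(tag)" is the same observation: the switch collapses into a single tag-driven lookup.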
<davechen1y> thumper: why do all the backingNNN methods in megawatcher.go return an error
<davechen1y> yet they are all hard coded to return nil
<davechen1y> ?
<thumper> NFI
<davechen1y>         store.Update(info)
<davechen1y>         return nil
<davechen1y> }
 * davechen1y feels a branch coming on
<thumper> because someone thought it was a good idea?
<thumper> davechen1y: it would almost be "state.FindEntity(tag)" for the switch above
<davechen1y> i wonder if they have to fit some interface
<davechen1y> ./megawatcher.go:458: cannot use (*backingRelation)(nil) (type *backingRelation) as type backingEntityDoc in assignment:
<davechen1y> hmmm
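The compile error above answers davechen1y's question: the backing types do have to fit an interface (`backingEntityDoc`), so every method keeps `error` in its signature even when it can only ever return nil. A minimal sketch of that shape, with simplified stand-ins for the megawatcher types:

```go
package main

import "fmt"

// store stands in for the megawatcher's entity store.
type store struct{ entries []string }

func (s *store) Update(info string) { s.entries = append(s.entries, info) }

// backingEntityDoc forces the error return: drop it from an
// implementation and the assignment below fails to compile, just
// like the "cannot use (*backingRelation)(nil) ..." error above.
type backingEntityDoc interface {
	updated(st *store) error
}

type backingMachine struct{ id string }

func (m *backingMachine) updated(st *store) error {
	st.Update("machine-" + m.id)
	return nil // hard-coded nil, but required by the interface
}

// The compile-time check the real code effectively relies on.
var _ backingEntityDoc = (*backingMachine)(nil)

func main() {
	st := &store{}
	var doc backingEntityDoc = &backingMachine{id: "0"}
	_ = doc.updated(st)
	fmt.Println(st.entries[0])
}
```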
<thumper> menn0: so when I update this review, do I squish locally first?
<thumper> what is the normal practice?
<menn0> thumper: you can do whatever you like. review board will just upload the diff of your branch to upstream/master, regardless of how many commits it's in
<thumper> ok
<thumper> menn0: is there a keyboard shortcut to take me to the next review comment?
<thumper> that'd be handy
<menn0> I haven't looked but I'd like to know too if there is one
 * thumper looks for the rbt command to update
 * thumper is off now.
<menn0> wallyworld: thumper is EOD so could you do a meta-review for this please? http://reviews.vapour.ws/r/34/
<wallyworld> sure
<menn0> wallyworld: otherwise ericsnow will have to wait longer to get his review
<wallyworld> and we don't want that :-)
<wallyworld> in case we need to ask for a reviewboard fix :-)
<menn0> wallyworld: :)
<menn0> wallyworld: cheers
<wallyworld> menn0: 3/4 of the way through
<jam> wallyworld: so your patch doesn't actually get the backwards compatibility stuff quite right.
<wallyworld> ok
<jam> wallyworld: you mention "regression tested on AWS", does that mean you did "juju-1.20 bootstrap" and "juju-1.21 ensure-availability" ?
<wallyworld> jam: yes
<wallyworld> but if i got the backwards compat stuff wrong, i will retest
<jam> wallyworld: so you don't actually fall back to calling the Client API
<jam> AFAICT
<jam> HA didn't exist in 1.20, right?
<wallyworld> jam: i think you are right,  i tested before i did the last lot of changes after talking to you
<jam> wallyworld: AFAICT, your legacy test is only patching the client side, so if it makes the new API call, the server still would accept it.
<wallyworld> yeah, i'll have to rework it i think and retest
<jam> so you really want a server that doesn't think it has the new functions, so that you know the client isn't requesting them
<jam> TheMue should have some examples he's putting together
<wallyworld> or i could patch out the client and check the facade called
<jam> wallyworld: I feel like that still gets you into a place where you have to be careful to patch everything that might be touched
<jam> you could
<wallyworld> sure, yeah, i'll run up one end-end test
<wallyworld> i'll look for the doc
<wallyworld> thanks
<wallyworld> jam: for some reason, reviewboard doesn't show the client.go files as renamed; the github pr does
<dimitern> morning
<dimitern> jam, hey, would you like to have a look at my http://reviews.vapour.ws/r/33/ please?
<jam> dimitern: looking
<dimitern> jam, thanks
<dimitern> jam, in particular I'm having some trouble understanding why "s.firewaller.Machine(s.machines[0].Tag())" is worse than "s.firewaller.Machine(s.machines[0].Tag().(names.MachineTag))"
<jam> dimitern: I'm not personally very sure why MachineTag is better/worse than just Tag, for that you'd have to have a discussion with davecheney
<jam> (I certainly thought the point of Tag was that it was the abstract representation, so having type-specific versions of it seems odd)
<jam> dimitern: oh, and *IMO* ever doing a plain type assertion:  .(names.MachineTag) in production code is asking for a panic to bring your system down
<jam> panic() ==> the whole thing dies
<dimitern> jam, strictly speaking, using names.Tag internally is less safe than names.MachineTag, but the apiserver facade validates all tags it gets, so there's no way to pass a unit tag to a method allowing only machine tags and not get an error
<jam> error() => we return "sorry can't do that"
<dimitern> jam, ok so why state.Machine.Tag() returns names.Tag rather than names.MachineTag ?
<dimitern> jam, I think in the API we should follow the same logic
<jam> dimitern: *If* we are supposed to be using names.MachineTag, then it seems like it should
<jam> dimitern: as stated, *I* don't really understand why
<dimitern> jam, oh, ok - so we're in the same boat on this :)
<jam> dimitern: my point was just that a bare type assertion seems to mostly be the wrong thing to do
<dimitern> where are thumper and davecheney when you need them :)
<jam> dimitern: certainly IMO MachineTag should be an implementation detail of Tag
<jam> dimitern: also note that NewUnitTag was something that exists, and dave has stated "it is there for tests" but we don't have another way to go from "unit/0" to a UnitTag
<dimitern> jam, I agree unchecked type assertions are bad, but if state.Machine.Tag() returns anything else than a names.MachineTag wrapped into a names.Tag interface, I'd rather panic than let it slip :)
<jam> (it also used to panic if you gave it bad data, which meant that charms could crash the API server)
<jam> dimitern: panic if developers are writing bad assumptions, but don't panic if you were given bad data
<dimitern> jam, I think I understand their point - the internal tag filed of the Machine struct should be names.MachineTag, but firewaller.Machine() should take names.Tag and possibly ParseMachineTag and verify internally
<dimitern> s/filed/field/
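The two assertion styles jam and dimitern are weighing can be put side by side. A bare assertion `t.(names.MachineTag)` panics on bad data; the comma-ok form lets a server method return an error instead, matching jam's "panic for developer bugs, error for bad data" rule. `Tag`, `MachineTag`, and `UnitTag` here are simplified stand-ins for the names package:

```go
package main

import "fmt"

type Tag interface{ String() string }

type MachineTag struct{ id string }

func (t MachineTag) String() string { return "machine-" + t.id }

type UnitTag struct{ id string }

func (t UnitTag) String() string { return "unit-" + t.id }

// machineOnly uses the comma-ok assertion so a unit tag sent by a
// client produces an error response, not a panic that takes the
// whole API server down.
func machineOnly(t Tag) (MachineTag, error) {
	mt, ok := t.(MachineTag)
	if !ok {
		return MachineTag{}, fmt.Errorf("%q is not a machine tag", t.String())
	}
	return mt, nil
}

func main() {
	if _, err := machineOnly(UnitTag{id: "0"}); err != nil {
		fmt.Println("rejected:", err)
	}
	mt, _ := machineOnly(MachineTag{id: "1"})
	fmt.Println("accepted:", mt.String())
}
```

Dimitern's counter-argument still applies where the invariant is internal: if state code itself ever stores a non-machine tag in a machine's field, a panic is arguably the right way to surface the developer bug.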
<jam> dimitern: so NewFacadeCallerForVersion seems like a bit of a "we don't really want to do it this way", but I suppose the point is to allow the server to go to a new version before the client is ready to handle it? But we can just do that by not incrementing the best-known version in api/facadeversions.go
<jam> though I guess the test suite will tell you off for that...
<dimitern> jam, no, it's not about the client
<dimitern> jam, it's a way to implement a new server and client version of the api, test it and only switch over to it once its user (i.e. the worker) is ready
<dimitern> jam, I thought a lot about how to do it nicely, and I'm keen on your thoughts about my approach (both server-side and client)
<jam> dimitern: so this is actually because we can't abstract the API difference, and we need 2 different workers, thus we are forced to explicitly choose which version the worker is talking to
<jam> ?
<dimitern> jam, in this case, having old and new versions of the worker using v0 and v1 is the easiest way, until we can drop the old worker and rely on having apiserver supporting v1
<dimitern> jam, in reality, once we have apiv1 in place and the upgrade step done, I think it will probably be safe to drop the v0-only worker, isn't it?
<dimitern> jam, the only problematic case is when a MA running v1 firewaller worker is talking to a v0 apiserver, but since upgrades are done in lock-step, can we have this case at all?
<jam> dimitern: you mean, we are unlikely to be talking to a v0 API server if the Agent is V1 ?
<jam> I do believe that is true
<jam> dimitern: well, if they "finished" the upgrade path wrt HA upgrades
<jam> I'm not sure if menno has landed all of the pieces there
<dimitern> jam, yeah, so the old v0 worker will be kept only until upgrade is done
<mattyw> morning all
<dimitern> jam, if not (i.e. we happen to connect to a v0-only apiserver in HA configuration, which hasn't yet been upgraded), I think it will also be safe to terminate the worker and restart it, hoping it will connect to an upgraded api server
<dimitern> morning mattyw
<jam> morning mattyw
<dimitern> wallyworld, are you around?
<wallyworld> hey
<dimitern> wallyworld, hey, can you confirm the behavior in HA setup wrt server upgrades ^^
<wallyworld> maybe, i'll read
<dimitern> wallyworld, I mean, suppose we have a new version that has an upgrade step to migrate stuff in the db, and then a worker starts after the upgrade expecting to find apiserver supporting the new api v1 it needs
<dimitern> wallyworld, in HA-scenario there can be (I guess) some apiserver not yet upgraded - or we ensure all of them are and we then start any other workers?
<wallyworld> dimitern: i *think* they are all upgraded, but am not sure. in any case, the worker will exit and restart i would expect
<wallyworld> dimitern: i think there was some code landed recently about coordinating upgrades
<dimitern> wallyworld, yeah, and keep doing this until it connects to an api server supporting the facade version the worker needs
<wallyworld> yes, but if i am correct, upgrades should be coordinated as of recently
<wallyworld> so either way, i think we are ok
<jam> dimitern: so for agents we can go with a bit of "it doesn't have to work until the API server is new enough", I don't want us to get *too* used to that kind of programming.
<jam> dimitern: iow, I'd rather we just make it work with v0 and v1 as much as we can, but this is a case where the difference is large enough it probably isn't worth it
<wallyworld> jam: i would think that some sites may want to run with an older api server for a bit? upgrade in stages?
<wallyworld> not sure if we plan to support that
<dimitern> wallyworld, my original question was "given 2 versions of a worker - old one using apiv0 and new one using only apiv1; when is it safe to remove the old v0 worker code - is it once we have an upgrade step in place that does migration apiv1 expects?"
<wallyworld> hmmmm, good question
<jam> wallyworld: I can see from a "I'm managing my infrastructure" wanting to do rolling upgrades, and testing various pieces. It hasn't been part of how "juju upgrade-juju" works at all.
<wallyworld> jam: i think it will/could very well come up as a requirement, given people already want rolling charm upgrades perhaps
<jam> wallyworld: so my personal opinion is that as much as possible clients and servers should support the widest range of versions that we can cope with as programmers.
<wallyworld> yeah, i tend to agree
<jam> And we're being pickier about that for Client apis vs Agent apis
<wallyworld> we have little choice for clients
<jam> well, there are people that would have said that if we had gone with an API from the beginning, "juju upgrade-juju" wouldn't exist, and we wouldn't need simplestreams, because the Agents would just be an "apt-get install jujud"
<dimitern> for clients, absolutely, but we can be laxer with agents
<jam> and we'd have to cope with Precise being a different version than Trusty
<dimitern> jam, wallyworld, so I propose to trust our upgrade code does the right thing (and file bugs when not), and keep the old code only until the upgrade step is in place
<jam> dimitern: I'd sign off on that, you may want to check with fwereade at some point to help it get codified as appropriate practice.
<wallyworld> +1 to trusting the upgrade code
<voidspace> morning all
<wallyworld> o/
<dimitern> ok, I'll send a mail to juju-dev
<dimitern> morning voidspace
<wallyworld> william is moving today i think
<voidspace> dimitern: o/
<jam> morning voidspace
<voidspace> o/
<jam> TheMue: can you take a look at http://reviews.vapour.ws/r/33/diff/# Dimiter has an interesting approach to doing the versioned testing, which I don't think is quite what we had been discussing.
<jam> (a base test suite that has a lot of helper functions that take an appropriate interface in each method)
<jam> and then concrete tests that just thunk over to the helpers
<jam> dimitern: as far as that goes, I feel like we're still feeling out how we want to do compatibility testing, etc.
<dimitern> wallyworld, moving where?
<wallyworld> flight back from UK I think he said
<dimitern> ah, I see
<wallyworld> he was away for a bit last week
<jam> dimitern: I think William was advocating that we could do per API suites, and then aggregate them into versions of facades
<jam> so you would have a WatchSuite, and an InstanceId suite, etc.
<TheMue> morning
<jam> morning TheMue
<TheMue> jam: yep, will take a look
<dimitern> jam, that sounds like a horrible amount of boilerplate code - imagine doing this for the uniter api
<dimitern> morning TheMue
<jam> dimitern: well, you have a fair amount of boilerplate in your testing of Firewaller
<TheMue> jam, dimitern: seen my document where I tried to describe the testing approach?
<jam> TheMue: https://github.com/juju/juju/pull/758/files ?
<TheMue> jam: yes, and thankfully by clicking on "View" the markdown is rendered for better readability
<dimitern> jam, I can't decide yet whether having a test-suite-per-api-method is more or less boilerplate than having "kinda C++-template-like helpers" taking an interface with a single method that's being tested
<dimitern> TheMue, no, I'll have a look, but look at my proposal when you have some time (I'm quite proud of what I came up with :)
 * jam needs food, bbiab
<dimitern> TheMue, the approach taking a callback func (st *state.State, resources *Resources, authorizer Authorizer) (interface{}, error) is something I tried as well
<TheMue> dimitern: I'm just doing, but I would like you to read the doc (and the according code) too. it is a bit more coding, but not so very much. and the test suites per method can be wonderfully reused
<TheMue> dimitern: yeah, I needed it to inject the factory for the API version I want to test
<dimitern> TheMue, the problem is, you can't pass a constructor returning a concrete type (or pointer thereof)
<dimitern> TheMue, i.e. firewaller.NewFirewallerAPI(...) (*State, error)
<TheMue> dimitern: that's why I had to return the interface
<TheMue> dimitern: and I'm wrapping it in a little func with the signature above
<TheMue> dimitern: here the static typing hinders us to use the factory directly *sigh*
<TheMue> dimitern: never tried "type Factory func(...) (interface{}, error)" and then cast "firewaller.NewFirewallerAPI.(Factory)"
<dimitern> TheMue, won't work - see http://play.golang.org/p/9UQE8Fh2Co
<TheMue> dimitern: yeah, just tested too. would have been nice
<wallyworld> TheMue: i think you're working on this - I have an existing test suite where I want to change the result of RegisterStandardFacade(), i.e. change the version registered. But calling this method a second time gives an error. What's the way we're supposed to use to change the result of BestAPIVersion() for testing? Or even unregister a facade for testing?
<wallyworld> I need to do both
<TheMue> wallyworld: it's not yet proposed, I had it in a branch I pulled back but it will come with the next one: a function to de-register an API returning a restore function to call with defer
<wallyworld> ok, will that land soon?
<TheMue> wallyworld: it has to be called before the API is connected, so that the resolving of the available APIs doesn't return it
<TheMue> wallyworld: think so, yes, but I can already point you to it. it's a small one
<TheMue> wallyworld: one moment, will find the code
<wallyworld> ok
<wallyworld> although if it's in common infrastructure it's best to wait
<wallyworld> but i do need to be able to land my branch tomorrow hopefully
<TheMue> wallyworld: take a look here, it's replacing an existing func in common.FacadeRegistry: https://github.com/TheMue/juju/compare/capability-detection-for-networker#diff-dfdd9146133bcedaf779a17388c5a27fR234
<wallyworld> ok, thank you
<TheMue> wallyworld: and here I used it https://github.com/TheMue/juju/compare/capability-detection-for-networker#diff-13fd488899c8ceea9cc4183707b0b270R200
<dimitern> TheMue, you can do a wrapper func taking any callable and returning something with (interface{}, error) - see http://play.golang.org/p/ZOqBHnjfyi
<wallyworld> TheMue: thanks, looks like Discard is already in the code base
<wallyworld> i might try and use it
<TheMue> wallyworld: yes, it's only that the current Discard() cannot restore, that's all ;)
<wallyworld> ah
<TheMue> dimitern: yeah, adding this to our testing package simplifies the injection, nice
<dimitern> TheMue, well, this is just a quick sketch, but WrapNew should do more checks and take a gc.C as well, but the idea is nice
<dimitern> hey davecheney
<TheMue> dimitern: yes, to be part of a testing package it has to
 * TheMue just fetched a fresh coffee
<TheMue> still a bit tired and having aching legs after the visit of the Photokina fair yesterday. but has been fantastic, with many ideas where I'll spend my money for :D
<wallyworld> jam: i pushed some changes. i used common.Facades.Discard("HighAvailability", 1) to remove the facade registration and so force the client caller to go through the older client facade
<tasdomas> should I close the reviewboard review once it gets a "ship it!"?
<jam> wallyworld: I think you have 1 typo, but otherwise LGTM
<wallyworld> jam: ty
<voidspace> hmmm... so I have a function that after a replicaset reconfig connects to all members of the replicaset
<voidspace> and waits for *them all* to report a healthy status for all the other members
<voidspace> so we know they're all up and they all know about the other members
<voidspace> And the test *still* fails with this immediately after that check
<voidspace>  Message:"exception: need most members up to reconfigure, not ok : localhost:33975", Assertion:false}
<voidspace> that just seems insane
<voidspace> I'm going back to mongo support channels
<TheMue> dimitern: so, took a deeper look at the server-side versioning and the tests. wrote some notes in the review.
<dimitern> TheMue, cheers, will have a look
<TheMue> dimitern: I'm currently thinking about mixing our ideas
<dimitern> TheMue, I'm in process of changing my PR to use per-method suites to see how it will look like
<TheMue> dimitern: ah, we're getting closer
<aznashwan> hello, could someone please tell me what email/password the rbt tools expect of me?
<aznashwan> it says in the documentation that it's the ones for the rbt server, and I presumed it was the GitHub ones seeing as we use OAuth; but those don't seem to work D:
<davecheney> aznashwan: it is amazingly
<davecheney> $YOURUSERNAME
<davecheney> password: gh-oauth
<davecheney> aznashwan: sorry, i think the password is oauth:<$YOURGITHUBUSERNAME>
<jam> voidspace: standup?
<voidspace> jam: oops, omw
<aznashwan> davecheney: I've been spamming the server with requests for a while now, and still no luck D:
<aznashwan> any chance the upload diff file directly to the webpage will work?
<davecheney> aznashwan: you can do that
<davecheney> but it needs to be a "full diff"
<davecheney> the diff from github won't work
<davecheney> :(
<davecheney> i already tried that
<davecheney> aznashwan: don't tell anyone, but if you just do a pull request on github
<davecheney> then i'll review it and we don't need to worry about review board
<aznashwan> davecheney: thanks a lot, I've already got two up now, and they've been looked into a little already, minor fixes each, freshly rebased on master so it will just be fast-forwards if they get merged
<perrito666> morning
<davecheney> aznashwan: can you paste the urls
<davecheney> ?
<aznashwan> davecheney: sure, sorry: https://github.com/juju/juju/pull/748 https://github.com/juju/juju/pull/704
<aznashwan> davecheney: thanks for the reviews, will get to fixing the PR's immediately :D
<davecheney> aznashwan: it's way past end of day for me
<davecheney> i'll catch you on the flip slide
<davecheney> side
<aznashwan> davecheney: k, have a nice one :D
<perrito666> see you tonight davecheney
<perrito666> some times this irc channel feels like a relay race
<davecheney> perrito666: we develop juju 24/7
<aznashwan> perrito666: a rolling stone gathers no moss
<aznashwan> luckily, I ran out of things to bother you guys with, at least for the time being :v
<perrito666> aznashwan: I strongly disagree http://cdn2-b.examiner.com/sites/default/files/styles/image_content_width/hash/49/b3/49b30d7598f1527927df8a4cbd98e253.jpg?itok=AIdW4EIi
 * voidspace lunch
<mattyw> perrito666, nice
<perrito666> mattyw: I was hoping someone would get the joke
<perrito666> .p
<perrito666> :p
<mattyw> perrito666, I wouldn't go as far as to say joke ;)
<perrito666> o cmon, rolling stone, moss, picture, instant fun
<mattyw> that must be some strange usage of the word fun I wasn't previously aware of
<perrito666> mattyw: arent you brittish?
<mattyw> perrito666, I am, for now
<perrito666> :p then you are in no moral grounds to judge about fun then, we actually catalog brittish commedy films in a separate category :p
<perrito666> mattyw: moving out?
<mattyw> perrito666, until Scotland gets independance
<perrito666> mattyw: ahh moving out the entire country, very cool move :p
<mgz> 'd still be british surely...
<mgz> mattyw: what did you want me for yesterday evening btw?
<mattyw> mgz, I was getting crazy errors from the landing bot - seemed to suggested something was up with the server rather than the bot
<mattyw> mgz, but it's fine now
<mattyw> mgz, thanks anyway
<mgz> I'll have a look at the logs
<mattyw> mgz, here's one http://juju-ci.vapour.ws:8080/job/github-merge-juju/657/console
<mgz> ta
<perrito666> its interesting, school here actually teaches kids that england, scotland and ireland+northern ireland are 3 separate countries and makes no mention whatsoever of the united kingdom until high school (near the end of it)
<perrito666> what would happen if there was a command clashing with a plugin? for intance if I was to create a charm command?
<fabrice> perrito666:  in France england scotland wales are the same and we always use england for uk
<mattyw> folks - after we land a branch reviewed on review board do we just mark it as close -> submitted?
<mattyw> ^^ yes appears to be the answer
<axw> wallyworld: if/when my branch lands, and backups move off provider storage, I believe we can remove Storage from the Environ interface
<wallyworld> yup \o/
<axw> just did a ~100line change which looks like it'll work
<wallyworld> whoo hoo
<axw> mattyw: that's what I did
<axw> dunno if we're supposed to or not, but I guess so
<wallyworld> i did that too
<tasdomas> what package should I place a commonly used function in juju?
<tasdomas> thinking of moving NowToTheSecond out of state
<tasdomas> it's used both in tests and in production code, so I can't just put it in a testing package
<perrito666> axw: tell me more about backups
<TheMue> hmm, not yet friend with rbt. if it tells me "ERROR: There don't seem to be any diffs!" when calling rbt post, how can I still post my pull request?
<perrito666> TheMue: did you do the upstream configuration magic menn0 suggested?
<TheMue> perrito666: I'm just following the described workflow in the mail
<TheMue> perrito666: which "magic" (and why do we need magic when simple tools should do it)?
<perrito666> TheMue: I meant a configuration in your .Idontrememberwhatrc
<TheMue> perrito666: I have no .Idontrememberwhatrc *lol*
<TheMue> perrito666: hehe, no, will take a closer look
<perrito666> TheMue: apologies, I do not recall the whole thread
<TheMue> perrito666: but I found the missing line, and no need to appologize, it's my fault to not configure it correctly
<TheMue> perrito666: but now added the magic line and it ask me to authenticate. so getting closer, yeah.
<perrito666> TheMue: ah apparently you ned to user
<perrito666> agh
<perrito666> need to use username@github
<mattyw> axw, cool thanks
<mattyw> axw, I saw that's what you did so that's what I went for
<TheMue> perrito666: this tool hates me, cannot login
 * perrito666 tried to do a portmanteau between troll and tool but feared to accidentally post a profanity in English
<TheMue> ;)
<TheMue> I tried with TheMue as well as my E-Mail as username, but still cannot login
<TheMue> Changing the password just to make sure doesn't help too. *sniff*
<TheMue> What.does.this.tool.want.from.me?
<perrito666> TheMue: your soul
<perrito666> what are you using for a user?
<natefinch> you guys talking about rbt?
<perrito666> TheMue: the logs show ericsnow suggesting:
<perrito666> to katco  "oauth:katco-@github"
<natefinch> right, I was going to say that
<natefinch> for some reason eric set all our passwords to that ;)
<TheMue> ahh, wil try
<natefinch> (joking of course, you should use oauth:<github_username>@github as your password)
<TheMue> perrito666: thanks, now it works *phew* you saved my soul *grin*
<perrito666> natefinch: well makes sense, passwords are most likely hashAlgorithm:hash, so "oauth:" must trigger a Django auth plugin that, if I understand correctly, performs the authorization dance
<TheMue> ok, now it would be nice if markdown files could be directly rendered like in github
<wwitzel3> natefinch: when you have a chance, http://reviews.vapour.ws/r/42/
<natefinch> wwitzel3: cool, I'll take a look
<wwitzel3> natefinch: it is the changes from that ML thread on rsyslod
<aznashwan> could someone please tell me what the hell rbt post is expecting as a username/password?
<ericsnow> username: <github username> password: "oauth:<github username>@github"
<natefinch> ericsnow: can you just email that out to the list as a one line email?
<ericsnow> natefinch: good idea :)
<TheMue> ericsnow: when publishing changes after an "rbt post -u" I currently get a HTTP 500. known problem?
<ericsnow> TheMue: looks like the revision you are trying to post is already associated with a different review request
<TheMue> ericsnow: eh, I accidently posted it without -u but immediately discarded it
<ericsnow> TheMue: each review request has an associated repo revision (which has a UNIQUE constraint)
<TheMue> ericsnow: as I said, it's discarded. so the software should not care
<ericsnow> TheMue: unfortunately discarded requests still keep that revision locked up
<ericsnow> TheMue: it's dumb
<ericsnow> TheMue: I can "permanently delete" that accidental request if you tell me the number
<TheMue> ericsnow: so HTTP 500 tells me "Hey you dumb guy, make another change, push it, and post it again."? ;)
<ericsnow> TheMue: I expect that this is going to keep happening to people
<ericsnow> TheMue: exactly :)
<TheMue> ericsnow: hehe
<TheMue> ericsnow: but deletion is better, it's the 44
<perrito666> natefinch: wwitzel3 stdup?
<ericsnow> TheMue: I expect we need to either come up with a github subcommand that wraps rbt intelligently or get github-reviewboard integration rolled out
<natefinch> perrito666, wwitzel3, ericsnow: would you guys mind if we delayed until later? I have a sleeping child next to me, and I'd rather not wake her
<ericsnow> TheMue: I expect we'll do the latter at some point (hopefully in the near future) regardless
<perrito666> natefinch: np
<ericsnow> natefinch: let a sleeping child lie :)
<voidspace> coffee
<TheMue> ericsnow: a seamless integration would be nice, indeed
<ericsnow> TheMue: I've deleted 44 so you should be good to try "rbt post -u" again.
<TheMue> ericsnow: great, will do, thanks
<ericsnow> TheMue: np, sorry for the trouble
<dimitern> TheMue, perrito666, updated http://reviews.vapour.ws/r/33/ care to take a final look?
<perrito666> dimitern: did you see davecheneys comment?
<TheMue> dimitern: already started
<dimitern> perrito666, I did change back the tag fields as they were
<TheMue> dimitern: regarding the order inside the file, currently the declarations of base and V0 as well as methods of them are in a mixed order. I only meant to write all base definitions first and all V0 definitions then, simply to group them properly
<dimitern> TheMue, you mean have all func (f *FirewallerAPIBase) Methods() then the FirewallerAPIV0 ones?
<TheMue> dimitern: yep
<dimitern> TheMue, ok, I misunderstood then :) will do and post an updated diff
<TheMue> dimitern: thanks
<TheMue> dimitern: btw, in your mail, you meant "an" agent API, and not "the" agent API, don't you? not that I'm getting you wrong here.
<dimitern> TheMue, updated
<dimitern> TheMue, yes, "an agent api", the AgentAPI (perhaps poorly named) is just one facade
<TheMue> dimitern: fine
<TheMue> dimitern: for the testing approach I still would like to know how we'll test a func introduced in V1 in V2 then?
<TheMue> dimitern: but I've got an idea here
<TheMue> dimitern: currently your base is a type with methods
<wwitzel3> I see we have juju/utils/set  and it currently has implemented Strings
<dimitern> TheMue, le me think a bit
<TheMue> dimitern: how about changing it into a simple struct which is passed to simple test functions as a first argument
<dimitern> TheMue, so you have Foo() in V1, which is unchanged in V2?
<TheMue> dimitern: so for V0 I implement the V0 test functions and use them in the test suite
<wwitzel3> my question is, if I want to have a set of tags, should I implement that in utils/set and make it have a Tags implementation?
<TheMue> dimitern: in V1 I only write the new test functions and write my test suite V1 like you've done using these functions too
<dimitern> TheMue, the we can use testFoo(c, patchFunc(taking NewAPIV1 in v1_test and NewAPIV2 in v2_test)
<TheMue> dimitern: and so in with the next functions
<dimitern> TheMue, more or less yes
<dimitern> TheMue, I'm not saying "this is the way we should test API versions, period." :)
<wwitzel3> because right now, I'm converting from user input string, to tag, using the tag.String() method with set to create a unique set of tags, then converting them back to tags with ParseTag as I iterate over them
<TheMue> dimitern: hehe
<dimitern> TheMue, it's just we're looking for the best way, and I don't think we're quite there yet
<TheMue> dimitern: no, me neither, only thinking of a good solution too
<dimitern> TheMue, but for the time being, I'd rather move on with the ongoing tasks and revisit testing later
<wwitzel3> my thought is, since it lives under juju/utils already, our set implementation knowing about types from juju/names seems ok
<wwitzel3> thoughts?
<TheMue> dimitern: so your approach, but with no real base in the sense of inheritence, more as a growing set of versioned test functions used in the individual suites
<TheMue> dimitern: ok
<dimitern> TheMue, If we agree on an approach, I promise to take some time to refactor existing facade tests
<wwitzel3> TheMue: I will act as a witness along with the IRC logs of dimitern's promise ;)
<TheMue> dimitern, wwitzel3: *rofl*
<dimitern> wwitzel3, +1 :)
<dimitern> wwitzel3, that's a very good idea IMO
<dimitern> wwitzel3, having set.NewTags() etc. so we can use them with names.Tag and derived types, store them as tags, but have the same UX as with Strings
<TheMue> dimitern: I'll note a quick dummy containing your approach and my latest thoughts. think it's a good way, flexible and less boilerplate
<dimitern> wwitzel3, I'd rather have map[names.Tag]bool as internal implementation for set.Tags, but optionally, you can have NewTags(names.Tag...) and NewTagsFromStrings(string...) internally using ParseTag on each
<dimitern> TheMue, cheers :)
 * dimitern has reached eod
<natefinch> dimitern, TheMue, rogpeppe1, mgz:  do you guys know offhand if we try to store charm configuration keys in mongo as key names?
<rogpeppe1> natefinch: i think we *did* do that AFAIR
<natefinch> rogpeppe1: hmm interesting
<natefinch> rogpeppe1: there's an email on the juju ubuntu list entitled "Regression with dots in charm options"
<mgz> natefinch: yeah, saw that, bug 1308146 being the relevent thing
<mup> Bug #1308146: not okForStorage error when deploying local charm <charms> <deploy> <juju-core:Triaged> <https://launchpad.net/bugs/1308146>
<mgz> natefinch: what's not clear to me is when then actually started breaking
<mgz> I guess just in 1.16->1.18?
<natefinch> yeah, could be.   I care slightly less about when than about why
<dimitern> natefinch, we don't do that without escaping $ and .
<rogpeppe1> natefinch: looking at state/settings.go, it does at least try to replace the dot (with full-width unicode dot)
<natefinch> dimitern: I had hoped we sanitized it
<dimitern> natefinch, take a look at state/settings.go
<dimitern> natefinch, do you know what this job opening is for ? https://ch.tbe.taleo.net/CH03/ats/careers/requisition.jsp?org=CANONICAL&cws=1&rid=830 (i.e. juju as in core or as in ecosystem or something else?)
<dimitern> a good friend of mine (also good dev) called to ask if we're hiring
<voidspace> mgz: ping
<natefinch> dimitern: we have 1 engineer position open and that one.... that one you linked to is kind of more of a do a little bit of everything job to work with kapil and mark S and mark ramm etc for like product maangement type stuff
<natefinch> dimitern: so, that one is higher up in juju, but there is a juju core position open IIRC
<dimitern> natefinch, I see, well he's based in BG, so I guess no juju-core openings at the moment?
<dimitern> natefinch, I'd appreciate if you find it and send me the link by email, as browsing all openings I couldn't find juju-specific openings except this one
<mgz> voidspace: hey
<natefinch> dimitern: I think the location is just a suggestion.  Definitely ping Alexis about it, I know we have a dev position open, and I'm pretty sure it's in core.
<voidspace>  mgz: hey, hi
<voidspace> mgz: but never mind...
<mgz> ;_;
<voidspace> mgz: unping :-)
<dimitern> natefinch, cheers, will do
<wwitzel3> dimitern: thanks :)
<perrito666> dimitern: that is perhaps the most generic role description I have seen in my whole life :p
<dimitern> perrito666, lol, most of these on taleo we have are like this
<voidspace> ericsnow: ping
<ericsnow> voidspace: hey hey hey
<voidspace> ericsnow: I just failed to create a review board review request
<voidspace> ericsnow: or at least, I created one entirely unrelated to my branch
<voidspace> ericsnow: could you walk me through it
<voidspace> ericsnow: I have a github pull request for the branch
<ericsnow> voidspace: what is the review number?
<voidspace> ericsnow: https://github.com/juju/juju/pull/776
<voidspace> ericsnow: 45
<ericsnow> voidspace: and it is associated with revision 9cac03a, right?
<voidspace> ericsnow: right
<ericsnow> voidspace: did you set TRACKING_BRANCH = "upstream/master" in ~/.reviewboardrc?
<voidspace> ericsnow: no...
<voidspace> ericsnow: I'll do that
<voidspace> ericsnow: should that change be enough?
<ericsnow> voidspace: if origin is set to something else (which it should be) and master is not up to date with upstream, you'll get bad diffs
<ericsnow> voidspace: yeah
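Collecting the pieces from this thread, the working setup appears to be a couple of lines in ~/.reviewboardrc (values here are assembled from the conversation, not a verified config):

```python
# ~/.reviewboardrc -- sketch assembled from this thread
REVIEWBOARD_URL = "http://reviews.vapour.ws/"
TRACKING_BRANCH = "upstream/master"
```

rbt then prompts for credentials: the GitHub username, with "oauth:<github username>@github" as the password, per ericsnow's earlier message.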
<voidspace> ericsnow: I made sure my branch was up to date first
<voidspace> ericsnow: so how do I kill that request and re-create it?
<ericsnow> voidspace: well, give the tracking branch thing a try
<voidspace> I've done that, do I just post again?
<ericsnow> voidspace: reviewboard is kind of dumb in that regard
<ericsnow> voidspace: you have to do "rbt post -u" or "rbt post -r #"
<ericsnow> voidspace: since RB links review requests to revisions uniquely and even discarded requests count :(
<ericsnow> voidspace: just doing rbt post would give you an oh-so-helpful 500 error
<voidspace> ericsnow: so the diff is now correct - but title and summary are still the old ones
<voidspace> I did "rbt post -u"
<ericsnow> voidspace: cool
<ericsnow> voidspace: just edit them in the web interface
<voidspace> if I do "-r #" will it create a new one?
<voidspace> ok
<voidspace> ericsnow: thanks for your help
<voidspace> katco: I think you're an OCR today
<voidspace> reviews.vapour.ws/r/45/
<katco> voidspace: right you are. currently reviewing a largish change-set
<voidspace> katco: ok, cool
<voidspace> :-)
<ericsnow> voidspace: -r just means update that specific review request and -u mean match the revision to a request and update that one
<ericsnow> voidspace: glad to help!
<voidspace> ericsnow: ah right
<voidspace> ericsnow: much appreciated :-)
<voidspace> looking forward to seeing you all again soon
<ericsnow> voidspace: same here :)
<voidspace> ericsnow: so in summary, it should definitely be done - but I shouldn't do it...
<voidspace> ericsnow: and what do you mean "not sure if I intended this change", after you told me to do it...
<voidspace> ericsnow: :-p
<ericsnow> voidspace: :)  just not in that PR
<voidspace> ericsnow: heh, ok
<voidspace> ericsnow: one for you then
<voidspace> ericsnow: http://reviews.vapour.ws/r/46/
<ericsnow> voidspace: that might take a while to review :)
<voidspace> ericsnow: let me know when you can get to it
<voidspace> I've got all night...
<ericsnow> voidspace: done
<ericsnow> voidspace: I also added a little more explanation to the PR summary (which will become the merge commit message, right?)
<ericsnow> voidspace: if you don't like it feel free to edit or remove it :)
<voidspace> ericsnow: looks good to me
<voidspace> ericsnow: thanks
<ericsnow> voidspace: I was creating the same PR and that was my summary :)
<voidspace> Heh
<voidspace> ericsnow: I'm sorry...
<ericsnow> voidspace: don't be :)
<voidspace> ericsnow: I'll set the issue on review 45 to fixed as it is no longer a diff against upstream/master
<voidspace> or at least won't be shortly
<ericsnow> voidspace: totally
<voidspace> and that's me done for the day
<voidspace> see you all tomorrow
<ericsnow> voidspace: see ya
<katco> holy mother of god there's a second page to this diff.
<natefinch> lol
<katco> <-- first time being OCR. gets thrown into the deep end immediately.
<ericsnow> katco: looking at one of mine? <wink>
<katco> lol no, axw's.
<sinzui> natefinch, can you get someone to look into bug 1370635
<mup> Bug #1370635: Unable to connect to environment after local upgrade on precise <ci> <precise> <regression> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1370635>
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: 1370635
<natefinch> sinzui: yep.
<aisrael> Has relation-list been deprecated?
<aisrael> It's being referenced from a charm MP I'm reviewing, but I only see that command available in the juju-0.7 package
<katco> marcoceppi: ping
<sinzui> 1.21-alpha1 is released and it uses devel streams instead of the released streams
<perrito666> natefinch: why is there only one of those bugs you emailed on the topic?
<natefinch> perrito666: not sure what you're talking anout
<natefinch> I really have to run, sorry
<menn0> waigani: morning
<waigani> menn0: hey
<menn0> did you see the CI blocker: bug 1370635
<mup> Bug #1370635: Unable to connect to environment after local upgrade on precise <ci> <precise> <regression> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1370635>
<menn0> just looking at the details on the ticket it seems like it might be related to recent MESS related changes
<menn0> waigani: what do you think?
<waigani> menn0: yeah, what is this: - '[fe80::f816:3eff:fe1f:67e1]:17070'
<waigani> looks like a mac address?
<menn0> it's a IPv6 address
<menn0> get used to them :)
<waigani> oooh
<waigani> menn0: so you can't connect to the API if the user is in state but not added to the environment
<menn0> yep
<menn0> waigani: perhaps one of the migration steps isn't doing the right thing?
<waigani> that would be the first thing I'd check, but he's authing with admin and I'd be surprised if admin was not added as a user to the env
<menn0> waigani: thumper is out today isn't he?
<waigani> having said that, thumper has been working hard to 'demote' the status of the admin user - and we do not have an admin user by default in an env
<waigani> menn0: yep, thumper is away
<menn0> waigani: true. but I don't think those changes are in 1.21-alpha2.1 are they?
<waigani> yeah, no but didn't he land some prelim branches?
<menn0> waigani: I'm just in the middle of testing my last bit of upgrade sync changes (yay!) and then I can jump on this blocker
<menn0> waigani: it seems like it won't be too hard to figure out
<waigani> anyway, it's just a hunch but I'd check if the user is in the environment
<menn0> (famous last words)
<waigani> yeah, sigh
<perrito666> ok, lets try the new propose thing
<perrito666> TheMue: still around?
<menn0> davecheney, waigani: thumper is out today. email standup?
<davecheney> menn0: sgtm
<perrito666> hey, someone is pushing without previously checking their code
<perrito666> state/metrics.go:231: no formatting directive in Warningf call
<perrito666> worker/uniter/charm/bundles_test.go:120: arg apiCharm.ArchiveURL() for printf verb %q of wrong type: (*net/url.URL, github.com/juju/utils.SSLHostnameVerification, error)
<menn0> perrito666: I don't think everyone has the pre-commit hook in place
<menn0> perrito666: so they don't see the vet warning
<perrito666> menn0: well in this case the push hook did not fail, it just warned me
<menn0> perrito666: yep I get it too. I don't think the hook is set up to fail on vet errors, just warn as it does.
<perrito666> menn0: odd, I had the impression it failed
 * menn0 shrugs
<menn0> perrito666: I've only seen it warn so far (I've seen other vet warnings in the past)
<perrito666> Â¯\_(ã)_/Â¯
<perrito666> sweet not only github suggests me to PR my recently pushed branch, it also shows a warning with a link to contributing suggesting to review the contributor policy of this repo
<perrito666> sweeet http://reviews.vapour.ws/r/49/diff/#
<davecheney> perrito666: review now, lots of comments
<perrito666> oh cool, tx davecheney
<perrito666> davecheney: reviewboard does not seem to agree
<davecheney> published
<perrito666> aghh replying takes me out of the diff
<davecheney> it's a brilliant system
<davecheney> i love how thre are two different ways to publish your comments
<davecheney> one, with the "review" button
<davecheney> which allows you to add a final comment
<davecheney> and the other, the yelow hovering bar at the top of the screen which does not let you add comments
 * perrito666 goes read the annotatef godoc after an intriguing comment
<davecheney> pretty sure
<davecheney> errors.Annotatef(nil, "something") => nil
<perrito666> excelent, I just read
<davecheney> if it doesn't then ignore my comment
<perrito666> yup // If the error argument is nil, the result of Annotatef is nil.
<menn0> davecheney, perrito666: it does. errors.Trace too.
<perrito666> but isnt it a bit anti idiomatic if I return (Whatever, err) instead of an explicit nil?
<perrito666> and for the matter an explicit zeroed Whatever when err != nil?
<perrito666> davecheney: tx, Ill fix those while the food is in the oven
<perrito666> you people make excelent company while making dinner everyday :p
<menn0> perrito666: you're right. if you're returning an error it's better to return nil for the other return value if there is one.
<menn0> that Annotate/Trace feature is best used for funcs that just return an error
<menn0> perrito666: what's for dinner?
<menn0> waigani: the upgrade steps aren't even running
<waigani> O_o ?
<menn0> waigani: I think what's happening is a chicken and egg scenario
<waigani> because upgrades needs access to the API?
<menn0> waigani: the upgrade steps infrastructure wants an api.State connection
<menn0> waigani: to give to the upgrade steps
<menn0> waigani: but it can't open one, because the upgrade steps haven't run yet
<menn0> waigani: fucking awesome
<menn0> waigani: I'm not fully up to speed with why the API connection can't be opened though
<waigani> but if the server is 1.20, a 1.21 client should still be able to connect to start the upgrades?
<menn0> waigani: starting the upgrade worked fine
<menn0> jujud restarted into 1.21
<menn0> it gets to the point of running the upgrade steps
<menn0> and that's where things go wrong
<waigani> ah, the order of the upgrade steps?
<menn0> so the server is no longer 1.20
<menn0> waigani: nope
<waigani> do we need to migrate the users to the env as the first step?
<menn0> before any upgrade steps are run an API connection for machine-0 is set up
<perrito666> davecheney: btw, partial review means there are more underway?
<waigani> and machine-0 is still 1.20 at that point right?
<menn0> waigani: no
<waigani> ah
<menn0> waigani: machine-0 is running 1.21
<menn0> waigani: but the upgrade steps haven't run yet
<waigani> well that's just stupid
<menn0> it might be that we need to run DB migrations first, they need a state.State which will still work
<waigani> so we need to make an exception to the new auth rule to allow upgrade steps to complete
<menn0> but that's a fairly major change
<menn0> waigani: that's probably a wiser move
<menn0> waigani: I'm not completely up to speed with the recent user changes
<menn0> waigani: do you understand what "environment not found" means?
<waigani> menn0: so we may have to stage the migration to env users across two versions. first upgrade ensures all local users (i.e. admin) are added to environment. Second upgrade locks down API connection to allow connections only by users added to the environment.
<menn0> waigani: that sounds sensible
<menn0> waigani: or have some kind of workaround to just allow the migrations to work and then lock down the API
<waigani> menn0: yep, that is the basic idea - how we implement that is the question
<perrito666> menn0: omlette with onnions and what wallyworld calls capsicums and escalopes
 * wallyworld doesn't know what an escalopes is
<menn0> waigani: I've got to meet a friend for coffee
<perrito666> wallyworld: http://en.wikipedia.org/wiki/Schnitzel
<menn0> waigani: back in a bit
<wallyworld> perrito666: oh, i know what Schnitzel is
<waigani> menn0: one *horrible* workaround would be if user = "admin" all good
<perrito666> wallyworld: sorry translated spanish -> french -> english
<perrito666> :p
<waigani> but we'd still have to stage the migration with a workaround
<wallyworld> perrito666: sounds delicious, what time should i come over?
<waigani> as the workaround would have to be removed from the code once all users are in the env
<waigani> menn0: ^
<waigani> menn0: apiserver/admin.go:181 - this is where the error is coming from.
#juju-dev 2014-09-18
<waigani> we might be able to do a version check?
<waigani> nop, that will not work
<waigani> ah, we could only restrict API access once  UpgradedToVersion >= 1.21
<bradm> anyone able to help me with a juju bootstrap issue?  I've got a maas+juju environment setup where when I bootstrap it, all I see in the debug logs is a endless loop of trying to ssh into the bootstrap node, even though I can connect manually
<waigani> menn0: ping
<bradm> interesting, before destroying the instance, it complains about /var/lib/juju/nonce.txt does not exist
<bradm> this is looking like LP#1314682
<menn0> waigani: pong
<waigani> heyhe
<bradm> hah!  creating /var/lib/juju/nonce.txt and things start moving forward
<waigani> menn0:  so I'm thinking splitting the migration over two versions is probably best
<menn0> waigani: not restricting the API while upgrades are in progress is doable
<waigani> menn0: I looked into a few workarounds, but they add a whole bunch of kluge
<perrito666> ok ppl gotta go sleepto be up early in the AM, cheers
<menn0> l8r perrito666
<perrito666> cu all tomorrow AM
<waigani> menn0: yeah, so that is what i got halfway into coding but ...
<waigani> menn0: it adds boilerplate that we could avoid if we simply add restricted logins in the next version
<menn0> waigani: true. what is the downside of not restricting logins
<waigani> menn0: it means that MESS is not complete
<waigani> that is all
<waigani> so it would be one step at a time towards MESS
<menn0> maybe let's do that for now and check with thumper tomorrow
<waigani> menn0: but 1.22 could have restricted API and we are back on track
<menn0> it's the least invasive thing to do
<menn0> so removing that check fixes the issue?
<waigani> menn0: I'm just testing that now..
<bradm> what's supposed to create the /var/lib/juju/nonce.txt on a juju unit?
<menn0> waigani: sweet
<menn0> waigani: I've got to get some lunch ready for the fam. back in a bit.
<waigani> menn0: yep, np
<menn0> waigani: I assigned the bug to myself earlier on this morning. you should grab it.
<waigani> ok
<wallyworld> bradm: just saw your questions - the juju bootstrap process sets up a cloud init script which creates the nonce
<wallyworld> bootstrap process = start instance
<wallyworld> if you look in /var/log/cloud-init-output.log you should see the bash command to echo to the nonce.txt file
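The nonce wallyworld describes is written by the cloud-init user data that juju's start-instance step generates. A minimal, hypothetical cloud-config fragment doing the equivalent would be the following; the `user-admin:bootstrap` value is the conventional bootstrap machine nonce, but treat the whole fragment as illustrative rather than juju's actual user data:

```yaml
#cloud-config
runcmd:
  # Write the machine nonce that the juju agent checks on first contact.
  - mkdir -p /var/lib/juju
  - echo 'user-admin:bootstrap' > /var/lib/juju/nonce.txt
```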
<bradm> wallyworld: ok, let me rebootstrap it and have a look
<bradm> wallyworld: if I create the file by hand, things start moving again, but errors out when it tries to start juju-db
<wallyworld> bradm: that implies the cloud init script has not been run
<bradm> wallyworld: this is a maas setup, not sure if that means anything
<wallyworld> bradm: yeah, we've just seen this on maas
<bigjools> still got the nonce problem?
<wallyworld> cause the node is acquired and doesn't run cloud init
<wallyworld> bigjools: bradm does
<bigjools> old version?
<bradm> LP#1314682 sounds vaguely familiar with the symptoms
<bradm> bigjools: of maas?  its 1.5.2+bzr2282-0
<bigjools> no it's a fix in maas's clients
<bigjools> they supply the nonce to the API
<wallyworld> that bug concluded that the node which was acquired didn't run cloud init
<wallyworld> might be wrong?
<bradm> this is kinda important to fix, its for bootstack.
<bigjools> did you find any errors in the logs?
<bradm> ok, so my bootstrap node is back up, lets see what it has
<bradm> 2014-09-18 01:19:35,042 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [3]
<bradm> 2014-09-18 01:19:35,045 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
<bradm> 2014-09-18 01:19:35,045 - util.py[WARNING]: Running scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
<bradm> that doesn't look good
<bradm> thats some custom scripts we have to create the nova instance dirs if it has a /dev/sdb
<bradm> let me knobble that for now.
<bigjools> do failures there make the whole cloud init script bail out?
<bradm> looks like it must be
<bradm> I'll comment these out and rebootstrap
<bradm> it has some checks to see if the device is there or not, I can't see why its failing immediately though
<bradm> ok, the cloud init scripts seem to have finished properly, although I now can't ping the host
<bradm> but I can see it creating the nonce.txt, so there's something else going on here
<bradm> progress is good, thanks guys.
<bradm> must be something wrong in the preseeds
<menn0> waigani: where are things now? can I help with anything?
<waigani> menn0: I'm currently hunting down where the "environment not found" err is coming from
<bradm> ah, what, it does an ifdown eth0?  what the?
<bradm> and indeed, that's what breaks it, if I do an ifup eth0 things start working
<bradm> wallyworld, bigjools: any idea why cloud-init is doing an ifdown eth0?  and what I can do to stop it? :)
<bigjools> dunno, I thought it only did what the user data told it to do
<wallyworld> bradm: that was done for maas to set up networking, but it can be disabled
<wallyworld> let me check
<bradm> I logged into the host and did an ifup eth0, I now have a successfully bootstrapped juju env \o/
<bradm> so if I can stop it doing that, we're in much better shape.
<wallyworld> bradm: disable-network-management
<wallyworld> set that to true in your yaml
<bradm> LP#1341524
<wallyworld> but be aware that you may then need to do stuff by hand that juju otherwise would try to do
<wallyworld> bug 1341524
<mup> Bug #1341524: juju/MAAS deployed host with bonding configured via preseed missing eth0 from bond on first boot <maas-provider> <juju-core:Invalid by niedbalski> <https://launchpad.net/bugs/1341524>
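The setting wallyworld mentions goes in the environment's stanza of environments.yaml. A minimal sketch (the environment name and `type` here are made up to match bradm's maas setup):

```yaml
environments:
  maas-staging:
    type: maas
    # Stop juju from managing the node's network config,
    # which is what triggers the ifdown eth0 above.
    disable-network-management: true
```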
<wallyworld> bradm: what version of juju are you using?
<bradm> wallyworld: 1.20.7, apparently.
<wallyworld> cause the fix is only in 1.20.8
<wallyworld> which is not formally released yet
<bradm> aha!
<wallyworld> but will be available for testing tomorrow
<wallyworld> and released early next
<bradm> this is a staging environment for bootstack, we can use whatever version of juju that is appropriate
<wallyworld> bradm: you can try 1.21-alpha1
<wallyworld> that is released
<wallyworld> and has the fix also
<wallyworld> email was sent to canonical-juju
<wallyworld> with the details
<bradm> I suspect I'm not on that list.
<wallyworld> you'll need to use a custom tools-metadata-url
<wallyworld> ok, i can forward the email
<wallyworld> it was also sent to juju-dev i think
<bradm> aha, found a juju 1.21-alpha1 release email
<bradm> sent to juju@lists.u.c
<wallyworld> good :-)
<bradm> ooh, --keep-broken, that sounds awesome
<bigjools> ha
<wallyworld> bigjools: allows for debugging when maas breaks :-P
<bradm> wallyworld: so to confirm, that disable-network-management: true is only in the still to be released 1.20.8, or the currently released 1.21-alpha1?
<wallyworld> bradm: it's in both those releases
<bradm> wallyworld: excellent.
<wallyworld> 1.21 is here now to try
<bradm> wallyworld: ok, I can probably test out 1.21-alpha1 in staging, but we'll want to stick with stable in the long run
<bradm> right, I can try that out now
<wallyworld> bradm: agreed, this was just a way of letting you get further
<bradm> wallyworld: yup, perfect.
<bradm> wallyworld: and this staging env is for exactly this.
<wallyworld> \o/
<bradm> although I wonder how we got the other env up
<wallyworld> nfi :-)
<bradm> it is 1.18 I think
<wallyworld> ah
<wallyworld> explains it
<wallyworld> a lot of new network stuff has been done since then
<bradm> ev'll get upset if I break it ;)
<wallyworld> yep :-)
<wallyworld> menn0: waigani: where's Tum (sic) today?
<waigani> wallyworld: being an 'entrepreneurial innovator'
<wallyworld> oh, sounds serious
<waigani> wallyworld: some workshop thing that he took the day off for
<wallyworld> and didn't mark the calendar
<waigani> haha
<wallyworld> or reschedule his 1:1 with me
<waigani> oh, that sucks
<wallyworld> not really, i didn't want to talk to him anyway
<wallyworld> don't need him
<waigani> haha
<bradm> wallyworld: I can't imagine there'd be any bad side effects if I upgrade to 1.21, do a test deploy and then destroy and redeploy to 1.20.8 when its out, right?
<wallyworld> bradm: that should work
<bradm> wallyworld: sounds like I know what I'm doing after lunch then.
<bradm> wallyworld: thanks for the help
<wallyworld> np :-)
<davecheney> bradm: hang on, did you say downgrade?
<davecheney> i'm not sure what you mean when you say redeploy to $OLDER VERSION
<huwshimi> Hey, does anyone know if there is a max character length for machine names?
<bradm> davecheney: juju destroy-environment ; downgrade to 1.20.8 ; juju bootstrap
<davecheney> bradm: ok, that is fine
<bradm> davecheney: currently this is all about refining the process, and its staging, so its not the end of the world to redeploy - wouldn't want to do it all the time though.
<bradm> wallyworld: fwiw, the juju bootstrap just worked with 1.21-alpha1 and the tweaks to the environment.
<wallyworld> bradm: awesome, great
<wallyworld> huwshimi: juju doesn't impose one explicitly
<huwshimi> wallyworld: OK, thanks
<davecheney> wallyworld: juju lets you choose machine names ?
<wallyworld> no
<davecheney> wallyworld: then you didn't really answer huwshimi 's question :)
<wallyworld> i just meant that juju has no length limit anywhere
<wallyworld> what the providers do is up to them
<davecheney> wallyworld: the machine names will always be integers
<davecheney> yes ?
<davecheney> the instance id's are opaque blobs
<wallyworld> well, we print out the int id now, but don't have to
<wallyworld> the point is there's no char length  encoded anywhere
<davecheney> nope, that is true
<wallyworld> which was the point of the question, presumably from a ui perspective
<davecheney> ok
<menn0> an update on the CI blocker (bug 1370635)
<mup> Bug #1370635: Unable to connect to environment after local upgrade on precise <ci> <precise> <regression> <upgrade-juju> <juju-core:In Progress by waigani> <https://launchpad.net/bugs/1370635>
<menn0> I have the cause figured out but the solution isn't clear yet
<menn0> details on the bug
<axw> wallyworld_: can you please review https://github.com/juju/juju/pull/784 for me? I've refactored state.Storage, which should make the other branch a bit tidier
<wallyworld_> sure
<wallyworld_> axw: +1 with a suggestion, i gotta do scholl pickup, bbiab
<axw> wallyworld_: thanks, cya
<bradm> what the?  there's .epm files and .conf files in /var/log/juju with juju 1.21-alpha1 ?
<bradm> er, .pem even.
<dimitern> davecheney, hey, still around?
<axw> wallyworld_: can't update RB #51 for some reason, I get an internal server error when trying to publish
<axw> I've fixed the issues though
<bradm> filed LP#1370896
<wallyworld_> axw: except landing is blocked \o/
<axw> yeah :|
<wallyworld_> bradm: well that kinda sucks, that should be fixed. btw if you type bug 1370896 a magic fairy will print some extra info
<mup> Bug #1370896: juju 1.21-alpha1 has conf files in /var/log/juju on instances <juju-core:New> <https://launchpad.net/bugs/1370896>
<axw> bradm: those .pem files have been there for ages; logrotate.conf is new
<wallyworld_> but why in the log dir?
 * axw shrugs
<axw> the .pem file is for rsyslogd
<axw> for SSL
 * wallyworld_ thinks it needs fixing anyway
<bradm> axw: just because its been there for ages doesn't make it right.
<axw> bradm: right, sorry, I thought you thought it was new
<bradm> I just mentioned them both in the bug for completeness
<axw> okey dokey
<bradm> anyway, not a big deal, just throwing out bugs for issues as I find them.
<wallyworld_> axw: i may have old code, but i noticed there's a test called TestFindToolsInControlBucket, which is now obsolete. it uses a test helper UploadToStorage(c, s.env.Storage()...). these can go away
<axw> wallyworld_: sure, I'll take a look...
<wallyworld_> ta, or i can delete them as a drive by
<wallyworld_> in the tools-stream branch
<axw> wallyworld_: nah can't go yet, not till we have the upgrade-step
<axw> wallyworld_: I've yet to write an upgrade step that moves everything from provider to environment storage
<wallyworld_> maybe i delete the Find test
<wallyworld_> cause i don't want to spend time making it work with tools-stream
<axw> okay
<menn0> axw, wallyworld_: in case anyone asks, I'm stopping for a bit but will pick up with CI blocker later tonight
<menn0> I haven't done enough hours today due to Real Life stuff
<axw> menn0: no worries, thanks
<dimitern> axw, ericsnow, http://reviews.vapour.ws/r/33/ please take a look, I really need to land this already :)
<axw> dimitern: looking
<dimitern> axw, thanks
<mattyw> morning all
<dimitern> is launchpad down of anyone else?
<dimitern> for*
<voidspace> morning all
<voidspace> morning all
<jam> morning voidspace
<jam> dimitern: did you see https://bugs.launchpad.net/juju-core/+bug/1308146
<mup> Bug #1308146: not okForStorage error when deploying local charm <charms> <deploy> <juju-core:Triaged> <https://launchpad.net/bugs/1308146>
<jam> I'm wondering if this has to do with something we're writing to the DB to cache information about local changes
<jam> cmarms
<jam> charms
<jam> vs, what we do for cs: charms
<dimitern> morning voidspace
<dimitern> jam, not sure, I have to look
<jam> dimitern: you did the stuff for PutCharm to support local charms, and for the GUI to be able to get its stuff, right?
<dimitern> jam, that's right
<dimitern> but it's been a while
<jam> dimitern: sure. Just the thought comes to mind that it is probably the only thing that is directly different between local and cs charms
<jam> and involves possibly writing charm config keys to the db
<dimitern> jam, what I didn't get from the bug is whether the problematic charm was deployed from a local repo or from cs:
<jam> dimitern: from local:, from "cs:" doesn't have the problem
<jam> hence why I think it is the caching logic
<jam> "It's not exactly clear from the bug log when this regression occured,
<jam> and the bug suggests it may only affect local charms."
<dimitern> jam, right, I saw that later, sorry
<dimitern> jam, I'm looking through the code, it seems this happens in the last step - state.UpdateUploadedCharm()
<dimitern> jam, I see the problem
<dimitern> jam, we do replace . and $ in service settings when saving to state, but we don't do the same for the charms collection (and its config field)
<dimitern> jam, and it seems we never did
<dimitern> jam, yep, i'm afraid it looks like a "my bad" case :) I'll assign it to myself and propose a fix
<jam> dimitern: it sounds like it is just "need to escape", yeah
<jam> dimitern: great
<voidspace> axw: ping
<axw> voidspace: pong
<voidspace> axw: thanks for your review of my PR
<axw> you're welcome
<voidspace> axw: you commented that the "cmd" variable isn't needed any more in applyReplSetconfig
<voidspace> axw: it's used twice (for logging) - and is passed into applyReplSetConfig
<axw> ah, it's input
<axw> my bad
<axw> sorry, glanced over that
<voidspace> yeah, it looks kinda like that
<voidspace> resetting it in the loop was always pointless
<voidspace> that loop just didn't do what it looked like it was doing
<voidspace> which is why I fixed it
<voidspace> axw: anyway, cool - just checking you didn't have some other reason I missed
<axw> nope, thanks though
<voidspace> axw: so I'll add those doc comments and merge
<voidspace> thanks!
<axw> wallyworld_: I think we were meant to discuss charmstores and placement and things today - probably a bit late now
<jam> dimitern: escaping sounds like something we could backport to 1.20
<jam> and since local charms never worked there anyway, we don't have to worry about compat for upgrade
<TheMue> morning
<jam> morning TheMue
<TheMue> dimitern, jam: I think I got a good approach for testing now. take a look at http://paste.ubuntu.com/8370850/
<TheMue> the upper files simulate the different versions of an agent here
<TheMue> and then there are files for testing each version
<TheMue> instead of embedding a type I'm working with a simple struct for needed infos and versioned non-public test funcs containing the tests on anonymous facades
<TheMue> so if tests are added or changed in later versions the according test helper funcs can be added in those files too (and used until they are changed in future again)
<TheMue> the examples contain adding, changing and dropping a facade function
<dimitern> jam, sgtm
<dimitern> TheMue, will look shortly, thanks
<TheMue> dimitern: thanks, it's mostly based on your approach :D
<jam> TheMue: so that still makes it fairly easy to accidentally leave off TestSecondFuncOK (like you did in agent_v2_test.go)
<jam> now given your change to SecondFunc() it may be that you intended for it t odie
<jam> to die
<jam> using the "embed the earlier version but make it an invalid signature" rule
<TheMue> jam: this risk still exists, yes
<jam> Which I think fwereade and I thought was a bit too magical
<TheMue> jam: it is, because the public name still exists. did I miss a better idea for hiding it?
<TheMue> jam: surely we could do it in a similar way like I've done for the tests here: private versioned implementations which are only called in each new facade
<jam> TheMue: so in our API docs discussion, the idea was if you wanted something to disappear you have to factor out a common base and embed
<TheMue> jam: so a V2 dropping a function simply doesn't have it
<TheMue> jam: I've got my problem with the embedding, or, at least even here the methods have to be versioned.
<TheMue> jam: because we could have a doThisV0 valid until V3, but need a doThisV3 in V3 and higher
<TheMue> jam: implementing it as functions using one shared type would make it simpler (or at least more logical) to implement the v0 logic in the v0 file, reusing it until v2 and add a new implementation of v3 in the v3 file
<TheMue> jam: so the implementation is more where it is expected than in one large file containing the base with all versioned changes
<TheMue> dimitern: btw, your idea of using a facade with only the interesting function is cool
<dimitern> TheMue, yeah? It could be written a bit more concise, if we define the interface before the test case, like type lifer interface { Life() params.Life } .. testLife(c *gc.C, facade lifer)
<jam> wwitzel3: a simpler patch for you: http://reviews.vapour.ws/r/54/
<jam> I didn't see how to do the dependent-diff with Review Board and make my change depend on yours
<jam> wwitzel3: well, I guess I did find something, but reviewboard tells me: The file "utils/syslog/config.go" (revision e1eaf4a) was not found in the repository
<TheMue> dimitern: yes, the only problem then would be to be creative enough for the name *lol*
<TheMue> dimitern: but I now like your approach more than my first one. it's only important that the tests themself are external and versioned, so that they can be reused and changed over time
<dimitern> TheMue, yeah, that was my idea as well
<dimitern> jam, thanks for the review - one of the issues is already fixed earlier (the NewFacadeCaller calling NewFacadeCallerForVersion internally)
<jam> dimitern: yeah, I had written that up yesterday, but I keep hitting "Save" instead of "Publish" when doing reviews.
<dimitern> jam, axw, ericsnow, http://reviews.vapour.ws/r/55/ - fixes bug 1308146
<mup> Bug #1308146: not okForStorage error when deploying local charm <charms> <deploy> <juju-core:In Progress by dimitern> <juju-core 1.20:Triaged by dimitern> <https://launchpad.net/bugs/1308146>
<jam> just a reminder, whole team meeting in 8min
<jam> dimitern: I haven't looked yet, but does it seem feasible to backport to 1.20?
<dimitern> jam, yes, changes are minimal
<menn0> phew! I have a fix for the CI blocker
<jam> https://plus.google.com/hangouts/_/canonical.com/juju-core-team?authuser=1
<jam> nice menn0
<menn0> it might not be the final fix we end up going with but it solves the issue for now
<menn0> I will take it up with thumper tomorrow (it's his work that caused it)
<menn0> but I don't blame him
<menn0> it was head-hurtingly hard to figure this out
<menn0> PR soon
<jam> menn0: dimitern: TheMue: voidspace: team meeting
<menn0> jam: did you mean me?
<jam> menn0: well, it is the "entire juju team weekly meeting" , so yeah
<jam> menn0: you might not usually come to it on this week
<axw> dimitern: I was afk, will review after the meeting
<dimitern> axw, cheers
<voidspace> jam: omw
<voidspace> hmmm... currently failing to join
<perrito666> jam: that is unfair, I could do sleep even though my timezone says work
<axw> jam: did you set the parent branch in rbt after the fact?
<jam> perrito666: I'll let you pick whichever one you want
<axw> or can you only do it on first post
<voidspace> dimitern: are you there?
<jam> axw: yeah, I was trying to play around it
<jam> with it
<jam> rbt post -r --parent eventually worked
<voidspace> dimitern: we could do a quick standup now
<dimitern> voidspace, yep
<axw> jam: ah, thanks
<jam> once I rebased everything because git couldn't handle what I wanted
<dimitern> voidspace, ah, yes actually, brt
<axw> ah I was doing -u, that's why it wasn't working
<jam> axw: interesting, because it was trying to match up based on the summary?
<jam> but your summary is different because you're including different revs (presumably)
<jam> axw: I think my problem is that I was trying to merge master, but rbt post can't handle that
<axw> jam: actually no, I'm just a twit: I did -u <rev>
<jam> you have to have master in your parent branch
<jam> ah
<axw> :)
<menn0> axw or wallyworld_: http://reviews.vapour.ws/r/56/
<wallyworld_> looking
<mattyw> folks, how can I use mongo shell to connect to mongo on a live state server?
<wallyworld_> menn0: +1 with a suggestion, thanks for fixing
<menn0> mattyw: local provider?
<mattyw> ^^ specifically looks like I need to login somehow
<wallyworld_> menn0: you managed to test it?
<menn0> wallyworld_: yep
<wallyworld_> \o/
<mattyw> menn0, aws - but I suppose I could use local provider which might make things easier
<menn0> I created a script to repro
<menn0> and used git bisect to automatically find the problem reb
<wallyworld_> mattyw: there's a plug from kapil
<menn0> rev
<wallyworld_> juju-db
<menn0> mattyw: I just asked because the answer is slightly different depending on the provider
 * menn0 looks up the command line
<wallyworld_> mattyw: https://github.com/kapilt/juju-dbinspect
<wallyworld_> works on any provider afaik
<menn0> mattyw: mongo 127.0.0.1:37017/admin --ssl --username "admin" --password "`sudo grep oldpassword /var/lib/juju/agents/machine-*/agent.conf  | cut -d' ' -f2`"
<mattyw> wallyworld_, menn0 thanks very much - I'll try everything until I get what I want :)
<wallyworld_> mattyw: i've used the db-inspect tool - very niiiice
<mattyw> menn0, that's a command worth hanging on to
<wallyworld_> it essentially automates what menn0  typed
<wallyworld_> plus gives some standard commands
<menn0> :)
<menn0> mattyw: you'll need to sudo apt-get install mongodb-clients first
<menn0> mattyw: the path to the agent.conf is different for the local provider, hence the difference
<menn0> or use the plugin :)
<mattyw> menn0, thanks for your help - I've found what I was looking for
<mattyw> menn0, metrics have arrived from a deployed unit \o/
<menn0> \o/
<menn0> wallyworld_: is this ok?
<menn0> logger.Warningf("state servers info has no environment UUID so retrieving it from environment state")
<wallyworld_> menn0: +1 thank you
<voidspace> TheMue: sorry frank, I cut you off!
<voidspace> TheMue: I will enjoy it all I hope
<voidspace> TheMue: except maybe giving the talk - still practising
<menn0> right, the CI blocker fix is testing now but I really need to go to bed
<menn0> can someone please keep an eye on the merge and if it's successful mark bug 1370635 as "Fix Committed"?
<mup> Bug #1370635: Unable to connect to environment after local upgrade on precise <ci> <regression> <upgrade-juju> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1370635>
<TheMue> voidspace: my talks in India in 2008 have been my first public ones too. man, I've been nervous
<TheMue> voidspace: but now it's something I really enjoy
<menn0> jam, natefinch :?
<natefinch> menn0: yep, will do
<menn0> natefinch: thanks
<TheMue> voidspace: in India I liked all those impressions, the food, and the traffic with all those different vehicles on multiple lanes and honking all the time
<jam> menn0: can you link the PR
<jam> ?
<menn0> https://github.com/juju/juju/pull/787
<menn0> and the merge run is here: http://juju-ci.vapour.ws:8080/job/github-merge-juju/673/
<voidspace> TheMue: I have friends in Chennai, not far from Bangalore, so I'll be going there after the conference
<menn0> goodnight
<voidspace> TheMue: I've spoken at conferences many times, I still find it hard
<voidspace> menn0: goodnight - see you in Brussels!
<menn0> indeed! looking forward to it.
<perrito666> what is the process to post an update to rb?
<perrito666> ok I definitely dont understand rb mails
<jam> perrito666: "rbt push -r EXISTINGREV"
<jam> so to update "54" you do "rbt post -r 54"
<perrito666> jam: tx
<jam> sorry for the typo
<perrito666> jam: I saw push and post used mixed so many times in the emails that I already read post when you say push
<jam> yeah
<mattyw> folks, very trivial http://reviews.vapour.ws/r/57/
<dimitern> jam, trivial review of the 1.20 backport fix? https://github.com/juju/juju/pull/790
<jam> dimitern: LGTM
<wallyworld_> mgz: looks like CI has an issue? see http://juju-ci.vapour.ws:8080/job/github-merge-juju/675/console
<wallyworld_> /var/lib/jenkins/juju-release-tools/make-release-tarball.bash: line 115: ./check_dependencies.py: No such file or directory
<dimitern> jam, thanks
<perrito666> aghh something went wrong when updating the patch
<perrito666> strange
<perrito666> dimitern: around?
<perrito666> apparently when I re-merged with master I dragged some changes from dimitern
<perrito666> http://cdn.meme.li/instances/500x/54445816.jpg
<dimitern> perrito666, is that why it fails with check_dependencies.py: No such file or directory ?
<perrito666> dimitern: ?
<voidspace> is CI unblocked again?
<perrito666> dimitern: I just updated master from upstream and then mmm, I believe I'll need to drop the PR
<mgz> wallyworld_: that's me, I'll fix it
<wallyworld_> mgz: thanks, it's blocking a critical bug fix
<dimitern> perrito666, ah, ok :), thanks mgz
<wallyworld_> to unblock landings
<mgz> must remember not to land things just before lunch...
<mgz> okay, fix tried, I'll resend the landing
 * perrito666 turns on his git fu
<jam> wallyworld_: It looks like I did get the  bot to work: http://juju-ci.vapour.ws:8080/job/github-merge-juju/678/
<jam> ah, maybe not.
<jam> :(
<wallyworld_> :-(
<jam> I misread it.... so yeah, still broken mgz
<mgz> jam: I have hope for the current run
<mgz> just made a change to the bash
<jam> natefinch: can you follow up with https://github.com/juju/juju/pull/787 or possibly mgz
<jam> It is one of the critical blockers, and I'm not going to be around to shepherd it
<mgz> jam: I'll make sure that lands
<jam> thanks
<mgz> there was a test failure on the first run earlier...
<mgz> dimitern: good and bad news
<dimitern> mgz, hit me
<mgz> good news is my change did what it was meant to... bad part is trunk *is* actually failing the check
<mgz> so, I'll just pull out the code for now, resend yours and menno's fixes, then put it back in again and fix trunk
<dimitern> mgz, did I break it perhaps? or
<mgz> no, I think it was an earlier change, as it was on 1.20 as well (but got fixed there)
<mgz> so, resending yours now
<mgz> dimitern: yours is in
<dimitern> mgz, cheers!
<perrito666> wallyworld_: axw I know its quite late but if any of you could give a second pass over eric's http://reviews.vapour.ws/r/34/ It would be greatly appreciated, I need this up to land restore
<dimitern> jam, would you mind terribly if you review the 1.18 backport MP on LP, I'm having issues with lbox and rietveld: https://code.launchpad.net/~dimitern/juju-core/lp-1308146-1.18-backport/+merge/235116
<hazmat> wrt to openstack, do we know which custom extensions we require.. afaics primarily we require nova network security groups extension
<dimitern> mgz, perrito666, wallyworld_, or somebody else free to take a look? ^^
<hazmat> do we have an option to disable the firewaller so we don't need that extension?
<mgz> dimitern: looking
<dimitern> hazmat, not right now
<dimitern> hazmat, we need os-security-groups to exist
<hazmat> dimitern, any idea how much work it would be to disable that?
<mgz> dimitern: lgtm, have you also run the full test suite locally? we don't have gating on 1.18 any more I think
<dimitern> mgz, you mean go build ./... && go test ./... ?
<dimitern> hazmat, to disable the firewaller altogether ?
<mgz> dimitern: yup
<dimitern> mgz, running now
<hazmat> dimitern, yeah
<dimitern> mgz, and then, if you can be so kind to remind me, how to manually merge it?
<hazmat> dimitern, do we use os-networks extension as well?
<mgz> dimitern: just checking if the old bot is actually alive still
<mgz> dimitern: if not, merge to the 1.18 branch locally
<dimitern> hazmat, it's not much work to introduce "firewall-mode": "disabled" (in addition to "instance", "global") in the env config and then when "disabled" just not start the firewaller
<hazmat> dimitern, cool, thanks
<dimitern> mgz, ah, so bzr merge ~/src/juju-core/1.18 while on my branch in GOPATH ? and then push?
<dimitern> hazmat, np, if it's important, I'd file a bug and possibly ping alexis for scheduling
<hazmat> dimitern, k, thanks just feeling it out atm
<mgz> then push as-the-bot, `bzr push bzr+ssh://IFORGOTTHEBOTNAME@bazaar.launchpad.net/+branch/juju-core/1.18`
<mgz> dimitern: wait a sec on that though
<mgz> dimitern: the old bot seems to be handling it
<mgz> dimitern: I'll ping you if you need to do anything else
<dimitern> mgz, so 1. set commit message on the MP and then mark it as approved?
<dimitern> mgz, ok, cheers
<wallyworld_> perrito666: review done
<axw> me too, practically at the same time :p
<mgz> he's double-lucky
<perrito666> wallyworld_: tx man
 * perrito666 gets spam from one of the top competitors for presidency... quoting Dr. Seuss as his core platform driver
<hazmat> dimitern, mgz also looks like we may depend on os-availability-zones.. not clear if it's entirely optional
<mgz> hazmat: pretty sure that fails neatly when it's not available
<mgz> axw: ^
<hazmat> mgz, yeah.. there's two call paths to it afaics.. one with explicit zone placement specified (and explicit provisioning error), and one with the zone balance using collected zones of an instance. it looks like it should fail neatly
<mgz> dimitern: landed on 1.18
<dimitern> mgz, sweet! tyvm
<perrito666> ericsnow: ping
<mgz> ocr, can I get a stamp on https://github.com/juju/charm/pull/48 please?
<wwitzel3> natefinch, ericsnow, perrito666: I'll be a couple minutes late to standup (5 max), dealing with some house paperwork.
<perrito666> wwitzel3: np
<mgz> perrito666: ^review plz? super easy :)
<perrito666> mgz: remember that my review bears the weight of a feather
<mgz> the best kind
<perrito666> mgz: lgtm
<mgz> perrito666: ta ta
<dimitern> TheMue, I've looked at http://paste.ubuntu.com/8370850/ - LGTM
<TheMue> dimitern: thanks, currently changing my code and it feels fine there too
<hatch> If I have a multi core instance on ec2 and a few single core service units. Is there a way I can specify that each unit should get a single core?
<mgz> perrito666: next trivial bit <https://github.com/juju/juju/pull/791> :)
<mattyw> shouldn't landing be unblocked now that https://github.com/juju/juju/pull/787 has landed?
<mgz> mattyw: it is, sorry, should have announced in here
<mgz> everyone: landing unblocked, go wild
<perrito666> mgz: lgtm, I actually checked the hash just in case
<mattyw> mgz, I had something rejected 2 minutes ago
<mattyw> mgz, should I just try again?
<mgz> hm, let me look
<mgz> perrito666: thanks!
<mattyw> mgz, have you fixed world hunger yet?
<mgz> mattyw: 789?
<mgz> mattyw: on it :)
<jcw4> mattyw, mgz looks like bug 1370365 still blocks ci
<mup> Bug #1370365: After the last `apt-get update` the screen backlight is turned off. <amd64> <apport-bug> <bios-outdated-a09> <kernel-da-key> <performing-bisect> <regression-update> <screen> <trusty> <linux (Ubuntu):Incomplete> <https://launchpad.net/bugs/1370365>
<mattyw> mgz, nice one
<jcw4> sorry bug 1370635
<mup> Bug #1370635: Unable to connect to environment after local upgrade on precise <ci> <regression> <upgrade-juju> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1370635>
<mgz> mattyw: marked the bug, resubmitted yours
<mgz> bug 1370635
<mup> Bug #1370635: Unable to connect to environment after local upgrade on precise <ci> <regression> <upgrade-juju> <juju-core:Fix Committed by menno.smits> <https://launchpad.net/bugs/1370635>
<mattyw> mgz, thanks very much, let me know when you've put an end to war and I'll review it
<mgz> mattyw: I suspect there's some lag in picking up the launchpad bug status
<mgz> it's returning clean for me locally now
<mgz> greh, must be caching related
<mgz> locally: $ python check_blockers.py master 789
<mgz> No blocking bugs
<mgz> on the build slave: jenkins@juju-core-slave:~/juju-ci-tools$ python check_blockers.py master 789
<mgz> Does not match ['fixes-1370635']
<mgz> >_<
<mgz> will have to throw some cache busting in there, this is annoying
<TheMue> dimitern: changed it, currently testing it before reproposal, but it looks and feels really cool
<mgz> okay, this is annoying, cache-busting isn't helping
<mgz> who's holding on to old junk here... an aws proxy? launchpad?
<dimitern> TheMue, great!
<sinzui> natefinch, per the cross-team call, can you ask someone to look into bug 1370781 . I hope this is a trivial fix because the adding of the archive and the calling of upgrade are easy to reverse.
<mup> Bug #1370781: cloud-archive on precise not pinned when juju calls apt-get upgrade <cloud-installer> <landscape> <juju-core:Triaged> <juju-core 1.20:Triaged> <https://launchpad.net/bugs/1370781>
<mgz> mattyw: I've hacked in a fix, landing your branch now
<mgz> everyone: trunk not open for landing quite yet
<mgz> we want a clean CI run with menno's fix from this morning
<wwitzel3> ericsnow: https://github.com/juju/utils/pull/35 when you have a chance
<ericsnow> wwitzel3: k
<wwitzel3> ericsnow: http://reviews.vapour.ws/r/58/
<mattyw> mgz, looks like it's not been accepted by the bot
<mattyw> mgz, but no hurry
<mgz> mattyw: yeah, sinzui pointed out we should actually wait for a clean run of those upgrade jobs with menno's change, which is happening now
<mattyw> mgz, sounds sensible
<mgz> sinzui: can I ask you to flip the bug to fix released when that happens, and tell people in here?
<mgz> I'm migrating home from cow-orking now
<sinzui> mgz: I will update the bug when CI is satisfied
<sinzui> mgz, mattyw I see 1804 tested master and passed http://juju-ci.vapour.ws:8080/job/local-upgrade-precise-amd64/
<sinzui> I am marking the bug fix released
<mattyw> sinzui, cool thanks very much
<perrito666> hint: apparently we can access all of our repositories as if they were svn...
<perrito666> https://github.com/features
<mattyw> perrito666, but they still don't support prereq branches
<perrito666> mattyw: now, aren't you a party p****r? :p
<mattyw> perrito666, yes
<mattyw> :)
<perrito666> mattyw: I would also want all reviews in one mail
<mattyw> perrito666, I read that as "want all reviews in the mail"
<perrito666> heh I want to be able to do a full review and then commit and have that send ONE summary email
<ericsnow> wwitzel3: that should get you started
<ericsnow> wwitzel3: I *may* have gone a little overboard :)
<wwitzel3> ericsnow: I can't figure out in which use case values would be nil?
<ericsnow> wwitzel3: var mytags Tags
<ericsnow> wwitzel3: (before calling mytags.Add(value) for the first time)
<ericsnow> wwitzel3: maybe that's a use case we don't care about, but the future has a way of making us pay for those kinds of assumptions
<wwitzel3> ericsnow: I see now, I forgot to commit the uninitialized test that I added .. it is for exactly that scenario and it works fine
<ericsnow> wwitzel3: cool.  If that's covered then that takes care of most of my review comments :)
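The zero-value concern ericsnow raises can be sketched in a few lines. This is a simplified stand-in, not the actual juju/utils code: the real set under review holds names.Tag values, while plain strings are used here so the example is self-contained.

```go
package main

import "fmt"

// Tags is a simplified stand-in for the tag-set type under review
// (the real one stores names.Tag values, not strings).
type Tags map[string]bool

// Add uses a pointer receiver so it can initialise the map on first
// use: calling Add on a zero value ("var mytags Tags") would
// otherwise panic with "assignment to entry in nil map".
func (t *Tags) Add(tag string) {
	if *t == nil {
		*t = make(Tags)
	}
	(*t)[tag] = true
}

func main() {
	var mytags Tags // nil map: exactly the case discussed above
	mytags.Add("machine-0")
	fmt.Println(len(mytags)) // → 1
}
```

The pointer receiver is the key design choice: a value receiver would allocate a new map that the caller never sees, so the zero value would stay unusable.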
<perrito666> sorry sinzui hadn't you marked 1370635 as fixed?
<sinzui> perrito666, yes, I marked it fix released within the last 20 minutes I think
<perrito666> mm  merge bot does not seem to ack that
<perrito666> ok seems that it is finally accepting merge requests
<ericsnow> perrito666: that backups PR just merged :)
<perrito666> I know I am the one that dollar dollar-ed the pr last :p
<ericsnow> wwitzel3: thanks for putting up with my exuberance...hopefully it was helpful and didn't add much extra work
<ericsnow> perrito666: thanks
<mattyw> night all
<wwitzel3> ericsnow: appreciate the review :) .. I like all comments, they make me think about the code differently. Even if the result is, I'm not going to take action, it is still good to look at things with different perspectives.
<ericsnow> wwitzel3: okay, good.  The last thing I want is to waste your time on trivialities. :)
<wwitzel3> ericsnow: no worries, I will just ignore those :P
<ericsnow> wwitzel3:  :)
<voidspace> g'night all
 * perrito666 does diff -u of 2 diff files and his head explodes
<jcw4> perrito666: lol
<perrito666> times are changing, my wife complains that osx is less user friendly than ubuntu
<natefinch> nice
<perrito666> well, ubuntu seems to be more responsive on slower machines
<wallyworld_> sinzui: is bug 1370781 really critical, blocking 1.20.8?
<mup> Bug #1370781: cloud-archive on precise not pinned when juju calls apt-get upgrade <cloud-installer> <landscape> <juju-core:Triaged> <juju-core 1.20:Triaged> <https://launchpad.net/bugs/1370781>
<sinzui> wallyworld_, no. It is not a regression
<wallyworld_> happy to fix it, but i'm not sure if we should delay 1.20.8 because of it
<sinzui> wallyworld_, I asked earlier today if it was trivial to fix. I had a lot of prep work for 1.20.8 so I was happy to hope for a fix while I updated the copy rules for proposed
<wallyworld_> sinzui: ok, let me see what i can do this morning
<sinzui> wallyworld_, I will release tomorrow regardless of the state of that bug
<wallyworld_> sinzui: fwiw, i have scheduled about 6 apt related bugs to be looked at over the next 2 weeks
<sinzui> fab
<perrito666> nothing like having the second reviewer answering the questions from the first before you get the chance to
<perrito666> :p
<perrito666> ericsnow: tx, --pull is a much better word
<ericsnow> perrito666: oh good :)
<ericsnow> perrito666: at least there's one useful thing for you in that review then :)
<perrito666> ericsnow: also, it's charm-sync because otherwise we would be stepping on charmtools' toes
<perrito666> ericsnow: I have found many interesting things on your review
<perrito666> ericsnow: re s/unit/remote unit/ unit can only be remote
<ericsnow> perrito666: oh yeah lol
<perrito666> is there a way to set up rb to show a couple of context lines on the issues?
<perrito666> like, a comment on "return err", even with the line number, says absolutely nothing
<perrito666> and I cannot always look up in my local copy of the code bc the line might have moved a lot
<ericsnow> perrito666: it will show only the lines that were highlighted by the reviewer when they made the comment
<ericsnow> perrito666: you can click on the link right there to pull up that spot in your browser
<perrito666> ericsnow: I am trying to avoid jumping around
<ericsnow> perrito666: I don't think that is adjustable
<thumper> well the answer to how stuffed is my shoulder, is very.
<perrito666> thumper: I am pretty sure there is some meaning of stuffed I don't know, or you are a teddy bear
<thumper> stuffed as in buggered, screwed
<perrito666> thumper: ouch, I hope the other guy is worse
<ericsnow> wallyworld_: when can I expect to find Andrew?
<wallyworld_> ericsnow: about 2.5 hours
<thumper> o/ wallyworld_
<ericsnow> wallyworld_: k
<wallyworld_> hi thumper, where did you get to yesterday?
<thumper> wallyworld_: I had a day off
<thumper> wallyworld_: for a local seminar
<wallyworld_> we didn't miss you :-P
<thumper> on design thinking, and business stuff
<thumper> yeah, chatted to menn0
<thumper> pleased he fixed my bad shit
<ericsnow> FYI, we've updated .reviewboardrc to include the TRACKING_BRANCH setting
<thumper> o/
<bradm> so, when can I get my hands on juju 1.20.8? :)
<thumper> davecheney: around?
<thumper> davecheney: I have a language question
<davecheney> thumper: yup
<thumper> davecheney: take a look in agent/bootstrap.go
<thumper> davecheney: the InitializeState method
<thumper> function
<davecheney> mmm
<thumper> the named error return value is only used in a defer to close the connection if an error is returned
<thumper> does this actually work?
<davecheney> oh
<davecheney> fuck i always miss that
<thumper> no assignments are made to resultErr explicitly
#juju-dev 2014-09-19
<davecheney> ok, one way to think about it is the defer runs _outside_ the scope of the function
<davecheney> think of it like this
<thumper> so the values returned in return statements are assigned prior to the defer running?
<davecheney> func outer() error {
<davecheney> err := inner()
<davecheney>  // the defer is here
<davecheney> }
<davecheney> yes
<thumper> ok
<davecheney> it's fucking confusing
<thumper> that it is
<davecheney> basically that sets up a defer at that point in the function so that if anyone does return err after that
<davecheney> st will be closed
<davecheney> it could be done with an explicit, if err != nil { st.Close(); return err }
<davecheney> at those three points after the defer is added
<davecheney> thumper: what is even more confusing, you can do this
<davecheney> func f() (err error) {
<davecheney> defer func() {
<davecheney> if err != nil { ... }
<davecheney> }()
<davecheney> if err := something(); err != nil {
<davecheney>     return err
<davecheney> }
<davecheney> normally you'd think that the err declared inside the if block shadows the one inside the defer block
<davecheney> but in this specific case it does not
<thumper> because it is returned right?
<davecheney> because err inside the defer block captures the value of err on the way out, no matter how it is returned (either implicitly or explicitly)
 * thumper nods
<davecheney> under the hood the defer is capturing the address of the value returned on the stack
<davecheney> which makes it actually easier to understand how it works
<davecheney> err has an address on the stack
<davecheney> it is assigned somehow by err = XXX, or return err
<davecheney> then the defer can retrieve the value from the stack because it knows its name
 * thumper nods
<davecheney> i don't think this is a very friendly pattern
<davecheney> it's shorter, to write, but not to read
<thumper> I think it is useful so you can't forget, or more importantly, that someone else comes back later, changes something and doesn't do the expected action
<thumper> I don't like how it requires naming the return values
<thumper> would be nice if there was another way to address them
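The pattern under discussion can be condensed into a runnable sketch. `initialize` and `conn` are hypothetical stand-ins for InitializeState and the state connection, not the real agent/bootstrap code; the point is that no explicit assignment to `resultErr` ever appears, yet the deferred closure still observes the returned error.

```go
package main

import (
	"errors"
	"fmt"
)

// conn stands in for the state connection that must be closed
// if initialisation fails partway through.
type conn struct{ closed bool }

func (c *conn) Close() { c.closed = true }

// lastConn records the connection so the cleanup can be observed;
// it exists only for this demonstration.
var lastConn *conn

// initialize mimics the InitializeState pattern: each
// "return ..., err" statement assigns the named result resultErr
// *before* the deferred closure runs, so the closure (which
// captures the address of the named result) sees the error and
// closes the connection.
func initialize(fail bool) (_ *conn, resultErr error) {
	st := &conn{}
	lastConn = st
	defer func() {
		if resultErr != nil {
			st.Close()
		}
	}()
	if fail {
		return nil, errors.New("setup failed")
	}
	return st, nil
}

func main() {
	_, err := initialize(true)
	fmt.Println(err, lastConn.closed) // → setup failed true
}
```

This also illustrates thumper's complaint: because the closure captures the address of the named result on the stack, the pattern only works when the return value is named.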
<thumper> menn0, davecheney: http://reviews.vapour.ws/r/39/diff/#
<davecheney> This diff has been split across 2 pages: 1 2 >
<davecheney> ^ warning
<thumper> davecheney: geez that is annoying
<thumper> davecheney: is that a setting in review board itself?
<wallyworld_> bradm: maybe tomorrow hopefully
<davecheney> thumper: none that I have found
<thumper> davecheney: perhaps in the deployment of reviewboard, rather than a client setting... maybe
<waigani> anyone know how to find a mongo doc missing a particular field?
<waigani> I'm playing with bson and reading what docs I can find, no luck yet...
<thumper> wallyworld_: I think you can fine based on unset
<thumper> s/fine/find
<wallyworld_> wot
<thumper> waigani: sorry, that was for you
<thumper> wallyworld_: EWRONGNICK
<waigani> thumper: thanks
<waigani> bson.DocElem{"$unset", fieldName}
<axw> $unset is for removing a field
<axw> you want a combination of $not and $exists, I think.
<ericsnow> axw: that looks like Sunset, Snot, and Sexists
<axw> :p
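For the record, matching documents that lack a field needs only `$exists` with a false value; a `$not` wrapper isn't required. With mgo the query document would be a `bson.M`; a plain map type stands in here so the sketch is self-contained, and the field name is made up.

```go
package main

import "fmt"

// M mirrors mgo/bson's bson.M: an unordered query document.
type M map[string]interface{}

// missingFieldQuery builds a MongoDB query document matching
// documents where the named field is absent. With a real mgo
// session this would be passed to collection.Find(...).
func missingFieldQuery(field string) M {
	return M{field: M{"$exists": false}}
}

func main() {
	// "someField" is a hypothetical field name for illustration.
	fmt.Println(missingFieldQuery("someField"))
}
```

Note that `$exists: false` matches only documents where the field is truly absent; a field explicitly set to null still exists and would not match.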
<menn0> thumper, davecheney: the documentation isn't very clear but it seems like RB's pagination can be tweaked: https://www.reviewboard.org/docs/manual/1.7/admin/configuration/diffviewer-settings/
 * rick_h_ watches thumper's kiwi pycon talk bwuhahaha
<thumper> rick_h_: hey...
<thumper> rick_h_: go on, tell me what you really think
<rick_h_> 10min in so far
<rick_h_> 18 to go
<rick_h_> I'll write a critique later, especially with this django crap :P
<bradm> oh, haha
<bradm> that problem I had yesterday with 1.20.7, its the same with 1.18.1
<bradm> still does the ifdown eth0, which takes it off the network
<menn0> very easy review required: http://reviews.vapour.ws/r/61/diff/
<menn0> thumper, axw, davecheney, wallyworld_?: ^^^ (it's a one line change)
<thumper> menn0: done already
<thumper> geez
<menn0> thumper: I just refreshed before. must have just missed you. sorry.
<wallyworld_> done
<thumper> heh
 * thumper needs to go get sushi before the world implodes here
<menn0> thumper: why is the world imploding?
<thumper> menn0: because jessie isn't getting her sushi
<menn0> thumper: right
<menn0> thumper: I thought it was bad weather coming in or something :)
<menn0> davecheney and thumper: I think you might like this: http://reviews.vapour.ws/r/64/diff/
<thumper> is it deleting a lot of code?
<thumper> page doesn't exist...
<menn0> thumper: no but it barely adds any
<menn0> thumper: try again ... I hadn't hit publish
<menn0> thumper: this avoids what will no doubt be a flaky CI test
<menn0> thumper: I started down that road but couldn't see a sane way to test this otherwise
<thumper> ah, actually, I really do like that test :)
<menn0> sweet :)
<menn0> I was hoping davecheney might be able to look at the reflection bit. I'm wondering if there's a tidier way to get the method set for an interface.
 * thumper needs alcohol
<thumper> menn0: I'm trusting that it does what it says on the can
<thumper> menn0: I'll let davecheney critique the method of getting the method names :)
<thumper> I'm about to go make a pizza, open a bottle of wine, and do some evening javascript hacking
<thumper> night all
 * menn0 is off to pick up pizza himself
<davecheney> menn0: i have no idea what that is supposed to do ?
<TheMue> dimitern: ping
<dimitern> TheMue, hey
<TheMue> dimitern: as long as we use RB *smile* would you mind taking a look at http://reviews.vapour.ws/r/43/
<TheMue> dimitern: sadly the corresponding PR doesn't pick up the changes, due to the tools
<TheMue> interesting, didn't expect this
<dimitern> TheMue, that's probably because you didn't sync your parent master branch with upstream?
<dimitern> TheMue, looking
<TheMue> dimitern: that's what I would expect from a tool, not to do it manually (but I thought I had *hmmm*)
<TheMue> dimitern: regarding my change I would like to discuss something with you. it's about the usage of the baseSuite. I'm passing it as an argument, to make the decoupling more explicit, but surely all testAbcV0() to testXyzV999() could be written as methods of the baseSuite too.
<dimitern> TheMue, sorry, expand a bit more ?
<eagles0513875> hey all :)
<eagles0513875> dimitern: just replied to the RB thread btw
<eagles0513875> with a suggestion :)
<dimitern> eagles0513875, hey :) nice, thanks
<eagles0513875> no problem
<eagles0513875> i proposed custom coding something that will interface with githooks and be a web interface :)
<eagles0513875> and that if the team is interested is something I would like to work on
<eagles0513875> seeing as I am not versed in golang
<eagles0513875> dimitern: you still in malta?
<eagles0513875> morning fwereade :D
<fwereade> eagles0513875, o/
<eagles0513875> fwereade: i have a RB replacement suggestion not sure if you saw the reply i sent to the RB thread
<fwereade> eagles0513875, I don't think that's really a good use of anyone's time -- there's no shortage of review systems in existence
<fwereade> eagles0513875, http://sheep.art.pl/Unix%20Koan ;)
<eagles0513875> fwereade: agreed, but with what you reviewed before settling on RB
<eagles0513875> what were the others lacking
<eagles0513875> the way i see it, i would be willing to custom code for you guys something that will integrate with the github hooks
<eagles0513875> i already have some good ideas on what can be done fwereade just have to see how feasible it would be to do them with the api and hooks
<eagles0513875> such as: one files a pull request on github, then on said review platform they would submit the pull request; if approved, it would interface with github in a way you can push said reviewed PR through said review site
<eagles0513875> fwereade: one idea to start off with
<eagles0513875> either that or find a way where it would pick up on pending pull requests automatically
<dimitern> TheMue, reviewed
<dimitern> eagles0513875, no, I moved back home for a while
<eagles0513875> kool :) you arent missing much here in malta except its super hot still and humid and fwereade can attest to that
<dimitern> hot and sunny.. nice it almost drowned us here lately with the rain
<TheMue> dimitern: thanks
<TheMue> dimitern: I would like to change the funcs to methods too, but all V0 are in the V0 file while later all new V2+ tests are in those files. or would you always place the V0 to V999 test methods into one file? that way surely a testFooV7() could be placed after testFooV1() and the change would be more obvious.
<fwereade> eagles0513875, well, it's the chaining that seems to be the biggest issue across the board -- but regardless I worry you're seriously underestimating what it'd take to write a whole new review system
<dimitern> TheMue, let's not do that :) too many things in one file
<TheMue> dimitern: exactly
<eagles0513875> fwereade: out of many that are out there, how many interface with the github APIs
<TheMue> dimitern: so V0 in V0 and Vx in Vx
<TheMue> dimitern: but still as methods of the baseSuite
<TheMue> dimitern: +1
<dimitern> TheMue, IMO it's easier that way - you immediately see what is where
<TheMue> dimitern: yes
<eagles0513875> fwereade: i know it's going to be a monster of a task, but i think a project like that would benefit the github user community as a whole
<TheMue> dimitern: I think we've got a good approach then
<eagles0513875> make it easier to review pull requests
<eagles0513875> etc
<dimitern> eagles0513875, it doesn't have to be a whole system btw - if there's somehow a way to automate chaining dependent PRs with the github api
<eagles0513875> dimitern: noted. this was an idea that just came to me :)
<eagles0513875> fwereade: see above ^
<eagles0513875> dimitern: if you are going to create something for that, might as well create something which has a working gui and is simple to use
 * TheMue simply would like to stay with GH w/o any additional tool. less is more.
<eagles0513875> TheMue: i know it is but if you need a review system that can help review the code a bit quicker and it interfaces directly with github why not
<eagles0513875> that way all you would need to do is file a pull request on GH then if possible the review system would poll pending pull requests and email devs saying this PR is pending review
<eagles0513875> once reviewed they could easily push it to the appropriate branch
<TheMue> eagles0513875: an additional site, additional user management, additional usage, additional source of errors
<eagles0513875> agreed but what if userwise it does authentication against github accounts
<TheMue> eagles0513875: but that's only my opinion, as a fan of KISS
<eagles0513875> TheMue: i am a fan of KISS too
<eagles0513875> that is why im thinking this review system would be as automated as possible with email notifications and interface with the github api to make things a bit easier
<eagles0513875> fwereade: any further thoughts or dimitern :)
<fwereade> eagles0513875, I'm not really seeing anything other than a great big bunch of work that distracts from our core purpose
<eagles0513875> fwereade: not saying for you guys to do it
<fwereade> eagles0513875, integrating *any* extra system is a great big hassle, and writing it from scratch as well renders it pretty much unfeasible imo
<eagles0513875> im volunteering myself to do it
<fwereade> eagles0513875, well, sure, you can write anything you like, and I will wish you luck with it, but it doesn't have much bearing on our current situation
<eagles0513875> seeing as RB is rather unpopular for certain things, that is why i suggested something custom coded
<fwereade> eagles0513875, it is by definition vaporware and hence not something for us to be basing plans on
<eagles0513875> is RB something open sourced? maybe I can get the source and improve on it for you guys
<fwereade> eagles0513875, sure, but it's incredibly rare for the build/buy tradeoff to actually come down in favour of build *except* when it's about your core competencies
<eagles0513875> ya lost me
<fwereade> eagles0513875, ok, how long does it take you on average to write a nice dynamic web app that interfaces with another site, from scratch?
<eagles0513875> fwereade: would be my first time
<eagles0513875> have an ideal place already in mind which gives a course about working with the github api
<fwereade> eagles0513875, do you see why it would be somewhat foolhardy for us to depend on that?
<eagles0513875> agreed
<fwereade> eagles0513875, like I say, if you're convinced it has value, go ahead, and I wish you much luck with it
<eagles0513875> fwereade: ill be honest i feel that is the only way i can contribute to juju at this point in time
<fwereade> eagles0513875, I've been trying to steer you towards charm development for some time, AIUI that actually fits fairly well with your skillset
<eagles0513875> :)
<eagles0513875> question about that fwereade
<eagles0513875> where are bugs filed against charms? on LP? or do bugs not get filed against the charms
<fwereade> eagles0513875, https://bugs.launchpad.net/charms
<dimitern> mgz, ping
<dimitern> mgz, is there a particular reason the merge bot runs out of disk space occasionally while running tests?
<mgz> dimitern: I've not tracked it down yet
<mgz> can only assume the temp drive allocation at the start where we ask for 2 gigs isn't working properly sometimes
<mgz> but I don't see why it's now an issue where it didn't seem to be before
<dimitern> mgz, right
<TheMue> dimitern: quick talk?
<dimitern> TheMue, sure, omw
<perrito666> morning
<wallyworld_> hazmat: meeting?
<perrito666> Changed a lot of things on a couple of files, go fmt says it's all right </successkid>
<perrito666> so little is needed to make me happy...
<perrito666> is there a way to see the patch that is going to be uploaded to rb before doing so?
<wallyworld_> axw: katco: still in health/status meeting, running late
<axw> okey dokey
<axw> katco: wanna wait a bit?
<wallyworld_> axw: katco: feel free to have a chat and then finish without me
<katco> axw: sorry just got my wife/daughter out the door
<dimitern> fwereade, wallyworld_, perrito666, TheMue, others? please take a look at https://github.com/juju/juju/pull/799 - the firewaller worker changed to use the new api v1 supporting port ranges
<TheMue> dimitern: PR on GH, not on RB?
 * TheMue loves abbreviations :)
<dimitern> TheMue, yeah, forgot, but if anyone insists, will add one :)
<TheMue> dimitern: I don't need it, do it on GH
<fwereade> dimitern, at first glance, shouldn't the upgrade step come first of all?
<fwereade> dimitern, ah, right, maybe not
<fwereade> dimitern, forget I said anything :)
<natefinch> I think for truly trivial code reviews (under 10 lines or something), that GH is fine (like sinzui's version change ones).  Anything more complicated should be on Reviewboard
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: 1371605
<dimitern> fwereade, yep :)
<sinzui> natefinch, jam: can you get someone to look into bug 1371605? hp deploys are broken in master
<mup> Bug #1371605: HP Bootstrap fails: no endpoints known for service type: product-streams <bootstrap> <ci> <hp-cloud> <regression> <streams> <juju-core:Triaged> <https://launchpad.net/bugs/1371605>
<natefinch> sinzui: will do
<TheMue> fwereade: thx for doc review, just changing it, also due to the new approach of API testing
<wwitzel3> fwereade: http://reviews.vapour.ws/r/58/ already has a review from ericsnow, but needs a meta-review.
<wwitzel3> dimitern: http://reviews.vapour.ws/r/58/ set.Tags stuff we talked about if you also want to take a look :)
<dimitern> wwitzel3, great, will do, thanks!
<sinzui> natefinch, do you have a moment for https://github.com/juju/juju/pull/800
<natefinch> lol... just noticed .wtf is a valid TLD now
<fwereade> wwitzel3, ericsnow: review LGTM, I'm still interested to know the motivation
<natefinch> fwereade, wwitzel3, ericsnow:  I agree... do we really need a full-on set implementation for tags?
<fwereade> natefinch, well, if we do want sets of tags, then I'm not against carrying a Set implementation -- but yeah, I'm very interested to know the use case ;)
<wwitzel3> fwereade, natefinch: well mainly it was for dealing with user input. when a user provides many units and services as targets, you end up with duplicates. I had helpers that were filtering the list of names.Tag based on their strings, it ended up being 70% of a set implementation anyway
<wwitzel3> fwereade, natefinch: so I figured, instead of having these private helpers, just do a set implementation for names.Tag, I felt it better addressed the problem and resulted in a cleaner implementation in the runcmd API
<fwereade> wwitzel3, heh, interesting
<fwereade> wwitzel3, is that to happen server-side or client-side?
<fwereade> actually surely it's client-side
<fwereade> wwitzel3, in which case wouldn't set.Strings on the initial input work just as well?
<wwitzel3> fwereade: services expand to multiple units server side
<fwereade> wwitzel3, ahhhhh
<fwereade> wwitzel3, cool, thanks
<wwitzel3> fwereade: np
<wwitzel3> fwereade: it is really just a matter of where the work of "is this tag.String unique for this list" happens .. and I think using a set abstraction just made all the logic in the runcmd API very simple and easy to understand.
<fwereade> wwitzel3, yep, I'm satisfied :)
<mattyw> folks, I'm of, have a great weekend all
<perrito666> ericsnow: having a different client for the api for backups is not that happy :p
<ericsnow> perrito666: why not?
<perrito666> ericsnow: I now need functionality from the regular api client and I will have to port it :p
<ericsnow> perrito666: what functionality do you need?
<perrito666> for now apparently only client.PublicAddress
<perrito666> well Ill address that on monday
<ericsnow> is that only on api.Client?
<ericsnow> k
<perrito666> ericsnow: It makes some noise in my head to duplicate that, but as long as it's only that I'll be fine with it
<ericsnow> perrito666: yeah, it makes me think that functionality is in the wrong place
<perrito666> ericsnow: i'll have to think about it, we could make it better but it's not worth it as it would most likely need a new level of abstraction, anyway have to run, have a nice weekend
<ericsnow> perrito666: you too
#juju-dev 2014-09-20
<Ponyo> tvansteenburgh: you!
#juju-dev 2014-09-21
<hazmat> anyone around?
<hazmat> trying to answer some questions on ha.. there are some unclear parts in the implementation..
<waigani> _thumper_: https://github.com/juju/juju/pull/802
<thumper> waigani: I see it, with it shortly
<waigani> thanks
<thumper> waigani: are you going to put that up on review board too?
<waigani> thumper: okay
<thumper> waigani: thanks
<thumper> waigani, menn0: We'll have to have a lightning fast standup today, as I have to leave at about 11:05
<waigani> thumper, sorry for the delay. I had to install python-virtualenv, create a virtualenviron, download rbt tools, create an alias to rbt tools, follow the new list of pre-pull-request steps, put in username and oauth. ... oh and then I forgot to publish the pr.
<waigani> thumper, menn0 I can do now? Just let me get a coffee
<waigani> thumper, menn0, davecheney got my coffee. early standup or not?
<thumper> davecheney: can you do the standup a little early?
<menn0> thumper, waigani, davecheney: I'm happy to do it now
<davecheney> thumper: sure
<davecheney> i'll jump in there now
<waigani> I'm in
<waigani> new record -3min !
<menn0> waigani: yeah - pretty amazing
<menn0> davecheney: did you want to run through that PR from Friday together?
<menn0> wallyworld_: does the CI blocker ring any bells for you? it looks related to what axw has been working on to me. Bug 1371605.
<mup> Bug #1371605: HP Bootstrap fails: no endpoints known for service type: product-streams <bootstrap> <ci> <hp-cloud> <regression> <streams> <juju-core:Triaged> <https://launchpad.net/bugs/1371605>
<wallyworld_> um, not sure, still getting through email, so didn't notice there was a blocker yet. i'll have a read of the bug
<wallyworld_> menn0: that's a hp cloud issue
<wallyworld_> at first read, their keystone catalog is not set up correctly
<menn0> wallyworld_: ok so should it be blocking CI then?
<wallyworld_> i have no idea why it was marked as a blocker
<menn0> ok
<wallyworld_> let me take a little more of a closer look
 * thumper afk
<wallyworld_> menn0: yep, hp cloud 's configuration is at fault, i'll update the bug
<menn0> wallyworld_: sweet
<wallyworld_> menn0: bug marked as Invalid, so hopefully bot unblocked
<menn0> wallyworld_: thanks
<wallyworld_> np
<menn0> wallyworld_: in a HA env where a state server dies, is it expected that you need to run ensure-availability twice to remove the dead state server from state?
<menn0> I ran it once and it marked the dead one as "no-vote" and created a new state server to replace it
<wallyworld_> menn0: i'm not sure, but that doesn't sound right to me
<menn0> and ran it again to have it removed from the set of state servers
<wallyworld_> maybe it was done like that to allow the dead one to come back
<menn0> and only at that point could I remove-machine it
<menn0> wallyworld_: fair enough
<wallyworld_> we'd have to check with nate I think
<menn0> dealing with a dead state server is non-obvious. it took me a while to figure out how to do it.
<menn0> wallyworld_: I might submit a documentation update
<wallyworld_> sounds good
#juju-dev 2015-09-14
<davecheney> how are we doing on those blockers ?
<davecheney> mwhudson: did you fix http://pad.lv/1494441
<davecheney> >
<axw> thumper: re #1495320, why should we have any of those things? IMO we should have per-facade mock states, which is what some of the newer facades have been doing already
<mup> Bug #1495320: New base test suite for apiserver <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1495320>
<thumper> axw: fair point...
<axw> wallyworld: FYI, https://bugs.launchpad.net/juju-core/+bug/1495338
<mup> Bug #1495338: cmd/juju/storage: "volume list" YAML/JSON format is non-obvious <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1495338>
<wallyworld> np, ty
<anastasiamac> axw: this is a nice bug :D i wish they were all like that!
<mup> Bug #1495338 opened: cmd/juju/storage: "volume list" YAML/JSON format is non-obvious <juju-core:Triaged by axwalk> <juju-core 1.25:In Progress by axwalk> <https://launchpad.net/bugs/1495338>
<wallyworld> thumper: waigani: please don't use JujuConnSuite for testing new apiserver facades
<wallyworld> there's plenty of existing apiserver facades that have been written with better tests you can copy from
<waigani> wallyworld: okay, sorry
<wallyworld> waigani: best not to perpetuate our misery :-)
<wallyworld> especially for new work
<wallyworld> adding a new test i could live with
<wallyworld> but a whole new facade package, not really
<waigani> wallyworld: yep, understood - fixing it now
<wallyworld> waigani: there should be a happy path feature test though that (for now) does use jujuconnsuite to ensure it is glued together end-to-end
<wallyworld> in the featuretests package
<waigani> wallyworld: I was going to add a feature test once the api and worker have landed, to make sure they all work as intended.
<axw> wallyworld anastasiamac: how's this look as an alternative "volume list" YAML format? http://paste.ubuntu.com/12405626/
<anastasiamac> axw: "machines" as a collection? :D
<anastasiamac> looks nice :))
<axw> wallyworld anastasiamac: there will be a "units" field below "machines" when the data is available in the API
<anastasiamac> \o/
<anastasiamac> personally, i like :)
<wallyworld> axw: "provider-id" - could that be just "id", or "volume-id"
<axw> wallyworld: hum, I'm trying to distinguish between the internal and provider-supplied one. it's kinda ambiguous
<wallyworld> yeah
<anastasiamac> axw: is status per volume or machine/unit? coz what does "attached" mean when there r 1+ machines/units?
<axw> anastasiamac: currently only per volume, but later we'd have per-machine and per-unit also (and they'll go in the "machines" struct, along with "device" and "read-only")
<axw> anastasiamac: atm, "attached" means all attachments are created
<axw> otherwise it's "attaching"
<anastasiamac> axw: sounds great! let's remember to deal with/display multiple status when we have per-machine and per-unit specific ones :D
<axw> anastasiamac: I'll add a TODO in the relevant structs
<anastasiamac> axw: tyvm for making output user-helpful \o/
<axw> anastasiamac: np
<wallyworld> axw: the id i think should be whatever we expect the user to use when we add CLI that requires it to identify a volume
<axw> wallyworld: yeah, that's the top-level string "1/0"
<axw> wallyworld: same sort of format for machine IDs as in "juju status"
<wallyworld> ok, yeah, that makes sense
<axw> wallyworld: so, yea or nay?
<wallyworld> axw: oh sorry, +1 i think
<wallyworld> is the user generally interested in provider id?
<wallyworld> could it be left off?
<axw> wallyworld: same level of interest as people have in instance IDs
<axw> wallyworld: useful for matching up juju resources with things in your cloud provider
<wallyworld> so it adds value in showing it
<axw> wallyworld: I think so
<wallyworld> sgtm
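The paste under discussion isn't reproduced in the log, but the fields the chat mentions suggest roughly this shape. A hypothetical sketch only: the volume ID key, "provider-id", "status", and "machines" are named in the conversation; the example values and the "device" entry's value are invented.

```yaml
# Hypothetical sketch of the "volume list" YAML format discussed above.
"1/0":                  # volume ID, same format as machine IDs in "juju status"
  provider-id: vol-0example   # provider-supplied ID, for matching against your cloud
  status: attached            # "attaching" until all attachments are created
  machines:                   # per-machine attachment info
    "1":
      device: sdb             # per-machine "read-only" and a "units" field to follow
```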
<thumper> wallyworld: got 10 minutes for a quick chat about a critical bug?
<thumper> wallyworld: it has to do with the new uniter code
<wallyworld> sure
<thumper> https://bugs.launchpad.net/juju-core/+bug/1494356
<mup> Bug #1494356: OS-deployer job fails to complete <blocker> <ci> <regression> <juju-core:In Progress by thumper> <https://launchpad.net/bugs/1494356>
<thumper> 1:1 hangout
<mup> Bug #1495320 changed: New base test suite for apiserver <tech-debt> <juju-core:Invalid> <https://launchpad.net/bugs/1495320>
<thumper> ah fuck
<thumper> my passport hits the <6 months until expiry mid-sprint
<davecheney> you shall receive an immigration fail
<davecheney> 3 weeks, you should be able to get an expedited one by then
<thumper> 10 working days for NZ is normal
<thumper> File type: Jpg or Jpeg
<thumper> heh
<thumper> for photo
<davecheney> will they accept JPEG ?
<thumper> hmm... seems dubious
<wallyworld> thumper: i got my passport in 2 days after paying an exorbitant fee after i put it through the wash 10 days before the malta sprint
<thumper> heh
<thumper> I don't want to pay the 2x amount for 3 day shipping
<thumper> 10 should be fine
<mwhudson> davecheney: yes, but the fix will take a loong time to get anywhere i suspect
<davecheney> mwhudson: that's what I figured
<davecheney> it looks like the commit that used the encoding package got rolled back for unrelated reasons
<davecheney> can we mark that issue as not a blocker for now ?
<mwhudson> i don't actually understand all steps of the process by which the fix will get into trusty-updates, i guess it would be good to learn that
<mwhudson> but oops, i'm supposed to be on leave today :-)
<thumper> wallyworld: http://reviews.vapour.ws/r/2650/
<wallyworld> looking
<wallyworld> thumper: +1
<thumper> wallyworld: pretty dull fix I admit
<davecheney> ship it
<thumper> oh fuck
 * thumper found a bad bug
<thumper> testing bug only
<thumper> windows only
<thumper> but bad none the less
 * thumper thinks
 * thumper gives up
 * thumper head-desks
<thumper> MEH
<thumper> anyone got a windows 2012 image running?
<thumper> wallyworld: another quick sanity check chat?
<wallyworld> sure
<thumper> wallyworld: http://reviews.vapour.ws/r/2651/diff/#
<wallyworld> looking
<wallyworld> thumper: thanks for fixing that
<thumper> wallyworld: np
<thumper> davecheney: that last one was just a windows time bomb
<thumper> davecheney: it was bound to fail eventually
<thumper> :)
<waigani> thumper: http://pastebin.ubuntu.com/12406461/
<waigani> thumper: first three are using jujuconnsuite
<waigani> thumper: second three, after pop, are using a mock state
<waigani> wallyworld, thumper: updated PR: http://reviews.vapour.ws/r/2649/
<wallyworld> looking
<wallyworld> waigani: i don't like how getEnvironment is patched. hopefully my comments make sense
<wallyworld> it's unnecessary
<waigani> wallyworld: okay, thanks for the review
<waigani> wallyworld: btw rewriting from jujuconnsuite to mock state afforded me the opportunity to benchmark the two: http://pastebin.ubuntu.com/12406461/
<waigani> wallyworld: after the pop is with the mock
<wallyworld> waigani: yeah. although even with the mock the numbers look too high
<wallyworld> i would expect say 5 or 10ms
<wallyworld> but still a good reduction
<waigani> hmm, that could be my laptop..
<wallyworld> maybe there's something else there that needs fixing also
<wallyworld> waigani: did you remember to remove the ngotestsuite
<waigani> wallyworld: I think so... I'll check
<waigani> wallyworld: good catch, that was it.
<wallyworld> thought so :-)
<wallyworld> i didn't believe you when you said you'd removed it :-)
<wallyworld> axw: this fixes a ci blocker, removes a crappy initial port of the update status stuff that i did http://reviews.vapour.ws/r/2652/
<axw> wallyworld: okey dokey, looking
<axw> wallyworld: is operation.State.UpdateStatusTime used at all after these changes?
<wallyworld> axw: no, not supposed to be, did i forget to remove it
<axw> wallyworld: I think so
<wallyworld> ffs, so i did
<axw> wallyworld: can you please explain how it was spinning before?
<wallyworld> tl;dr: the decoupling of remote state from the update status hook action meant that the previous helper, which computed the wait duration from now and the last fire time, would always trigger a very short wait
<wallyworld> and that would cause the timer to fire again
<wallyworld> and again
<wallyworld> and again
<axw> wallyworld: ah, I see
<wallyworld> the old world and new didn't match
<axw> wallyworld: so now we always just wait for the full period, right?
<axw> wallyworld: and if something else happens within that period, we'll wait again for another full period
<wallyworld> yeah, but all we do is update the counter
<axw> yep
<wallyworld> each time the update status hook runs, it sets current value to current counter
<wallyworld> counter could be +10 or +20 over the last local recorded value, we don't care
<wallyworld> we just wait till idle and see if counters are different
<axw> wallyworld: yeah I was just thinking about the difference in when the remote state watcher is triggered. I think with this change we'll never set status until the unit has quiesced, and *then* the update-status idle period has elapsed
<axw> as opposed to period since the last update
<axw> wallyworld: (which seems fine to me)
<wallyworld> me too
<wallyworld> because when hooks are run, unit can set status
<wallyworld> so it could have done it during config changed or something
<axw> wallyworld: thanks. reviewed
<wallyworld> tyvm
<wallyworld> axw: i had a small unit test that showed the spinning, but it used wall clock so i removed it
<wallyworld> i'll test live also after school pickup
<axw> wallyworld: don't suppose you're still working are you?
<wallyworld> axw: hey
<voidspace> frobware: dimitern: TheMue: dooferlad: morning, I'm back in the UK
<dimitern> voidspace, morning :)
<frobware> voidspace, hiya
<dooferlad> voidspace: welcome back. Enjoy the rain!
<voidspace> o/
<voidspace> dooferlad: hehe, we had quite a lot of rain (warm rain though) in North Carolina. A couple of impressive thunderstorms.
<voidspace> dooferlad: a few days of good sun too though.
<voidspace> dimitern: frobware: so fwereade gave me a comprehensive review on my "unit public address branch"
<voidspace> dimitern: frobware: I've already started work on it - the biggest change is to set the "preferred addresses" (he doesn't like the Default name) when the addresses are set rather than when they're fetched
<voidspace> dimitern: frobware: plus a bunch of issues he identified in existing code that the PR touches
<voidspace> dimitern: frobware: so that's what I'm up to...
<voidspace> frobware: when is good for you to chat?
<frobware> voidspace, now is good.
<voidspace> frobware: cool
<voidspace> frobware: hmm.... the one-to-one isn't on my calendar for some reason
<voidspace> frobware: what hangout should we use?
<frobware> voidspace, I just sent you an invite (https://plus.google.com/hangouts/_/ejtwk7d5mqnwljcwmbxkvzi2uaa?hl=en-GB&authuser=0)
<voidspace> frobware: gah, I have to do the authentication dance - there in a minute
<voidspace> frobware: https://bugs.launchpad.net/juju-core/+bug/1435283
<mup> Bug #1435283: juju occasionally switches a units public-address if an additional interface is added post-deployment <addressability> <bug-squad> <network> <openstack-provider> <juju-core:Triaged by mfoord> <https://launchpad.net/bugs/1435283>
<voidspace> frobware:  http://reviews.vapour.ws/r/2593/
<dimitern> voidspace, I'm fine with calling it preferred instead of default
<voidspace> dimitern: yeah, me too
<rogpeppe> mwhudson, anyone else: do you know the status of this bug? https://bugs.launchpad.net/juju-core/+bug/1494441
<mup> Bug #1494441: ppc64el: cannot find package "encoding" <blocker> <ci> <ppc64el> <regression> <unit-tests> <juju-core:Triaged> <gccgo-go (Ubuntu):Invalid> <gccgo-go (Ubuntu Trusty):In Progress> <https://launchpad.net/bugs/1494441>
<perrito666> hey, if I wanted to stop/start juju mongo on local provider, where would I find the start/stop script?
<perrito666> meh systemv
<rogpeppe> jam1: any idea if this bug is likely to be fixed, by any chance? https://bugs.launchpad.net/juju-core/+bug/1494441
<mup> Bug #1494441: ppc64el: cannot find package "encoding" <blocker> <ci> <ppc64el> <regression> <unit-tests> <juju-core:Triaged> <gccgo-go (Ubuntu):Invalid> <gccgo-go (Ubuntu Trusty):In Progress> <https://launchpad.net/bugs/1494441>
<rogpeppe> jam1: it's blocking us currently
<jam> rogpeppe: this is the first that I've heard of the problem. It is marked as In Progress by mwhudson
<rogpeppe> jam: ok, thanks. i'll wait for him to come online
<jam> which might be the patch he proposed, I'm not sure of the timeline for getting that actually into the trusty gcc-go build
<jam> alexisb would be another person to raise awareness of the bug to
<mattyw> wallyworld, just saw the news, congratulations
<wallyworld> mattyw: 4th one in 5 years :-(
<wallyworld> as bad as italy
<mattyw> wallyworld, you going to put your name forward?
<wallyworld> thinking about it
<wallyworld> can't do any worse
<fwereade> ok, I have been pushing through a filthy headache for a while, and I'm not actually getting anything done
 * fwereade goes to lie down
<mup> Bug #1495542 opened: 1.20.x cannot upgrade to 1.26-alpha1 <blocker> <ci> <regression> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1495542>
<katco> ericsnow: natefinch: looks like there is a new blocker: 1495542
<katco> bug 1495542
<mup> Bug #1495542: 1.20.x cannot upgrade to 1.26-alpha1 <blocker> <ci> <regression> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1495542>
<ericsnow> katco: related to #1494070, perhaps?
<mup> Bug #1494070: unit agent upgrade steps not run <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1494070>
<katco> ericsnow: sounds plausibly correct. sinzui is that a dupe?
<katco> ericsnow: at any rate, pick that one up and maybe you'll find they're the same thing
<katco> ericsnow: pick the blocker up that is
<ericsnow> katco: k
<natefinch> godspeed, ericsnow
<mgz> katco: my guess is it's not related
<natefinch> I've always wanted to say that
<mgz> katco: as we have to un-run upgrade steps, one long standing, and casey hacked around the other one
<ericsnow> katco, natefinch: FYI, looks like the 1.25-beta1 list has several open that have been resolved
<katco> ericsnow: example?
<ericsnow> katco: #1465317
<mup> Bug #1465317: Wily osx win: panic: osVersion reported an error: Could not determine series <osx> <packaging> <wily> <windows> <juju-core:Triaged> <juju-release-tools:Fix Released by sinzui> <https://launchpad.net/bugs/1465317>
<sinzui> katco: ericsnow Well, if that bug is a dupe then it isn't 100% correct. http://reports.vapour.ws/releases/3049/job/aws-upgrade-20-trusty-amd64/attempt/8 shows 1.20 can upgrade to 1.25
<ericsnow> may or may not be forward-ported
<katco> ericsnow: what makes you say this is resolved?
<ericsnow> katco: fixed in an earlier version
<katco> ericsnow: how do you know that? and why isn't this bug targeted to that version?
 * ericsnow only skimmed through, so only a quick estimation
<ericsnow> katco: sorry, bad example
<katco> ericsnow: no worries, just trying to get up to speed
<ericsnow> katco: #1468815 is better
<mup> Bug #1468815: Upgrade fails moving syslog config files "invalid argument" <ci> <regression> <upgrade-juju> <juju-core:Triaged> <juju-core 1.24:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1468815>
<katco> ericsnow: k, same question... what makes you think it's resolved in 1.25-beta?
<ericsnow> katco: it *might* be resolved already
<katco> ericsnow: based on what?
<ericsnow> katco: fix-released on 1.24
<katco> ericsnow: so you're thinking 1.25-beta1 was cut after this fix was released?
<ericsnow> katco: rather that someone may have forward-ported and not updated the ticket
<ericsnow> katco: or that it just needs a forward-port
<katco> ericsnow: couldn't we say that about any bug?
<ericsnow> katco: sure, as long as it's been fixed in an earlier release
<ericsnow> katco: another example: #1449054
<mup> Bug #1449054: Intermittent panic: rescanned document <ci> <test-failure> <juju-core:Triaged> <juju-core 1.22:Fix Released by dimitern> <juju-core 1.23:Won't Fix by fwereade> <juju-core 1.24:Fix Released by fwereade> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1449054>
<katco> ericsnow: hm. just unsure why someone would go through the trouble of forward-porting without updating the bug.
<ericsnow> katco: sometimes folks simply merge an earlier branch forward, picking up multiple fixes
<katco> ericsnow: ah, this is the whole merge vs. cherrypick issue?
<ericsnow> katco: sometimes
<ericsnow> katco: anyway, at the least there are some bugs on that list that may just need forward-porting
<katco> ericsnow: yep
<ericsnow> katco: I guess that was my original point lol
<katco> ericsnow: thanks for walking me through your thought process
<ericsnow> katco: haha
<katco> ericsnow: no seriously :) just wasn't understanding how you arrived at that conclusion
<ericsnow> katco: mostly I skimmed through the list and several looked like ones we'd already fixed :)
<rogpeppe> hi peeps, this PR was backed out earlier because of issues not directly related to the PR itself. This is a re-proposal: https://github.com/juju/juju/pull/3275
<rogpeppe> could someone please rubberstamp it please?
<dooferlad> dimitern: ping!
<dooferlad> dimitern, voidspace: https://plus.google.com/hangouts/_/canonical.com/networking?authuser=0
<dimitern> dooferlad, omw
<cmars> rogpeppe, looking
<rogpeppe> cmars: thanks
<rogpeppe> cmars: though i already have two reviews, just waiting for the juju blockage clouds to lift momentarily so i can land it
<mup> Bug #1495591 opened: TestRunCommand fails on windows <blocker> <ci> <test-failure> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1495591>
<voidspace> dooferlad: omw
<voidspace> dooferlad: thanks
<voidspace> dooferlad: ah, I'm late
<mup> Bug #1463408 changed: TestRunCommand fails <ci> <intermittent-failure> <test-failure> <juju-core:Fix Released> <juju-core db-log:Fix Released> <https://launchpad.net/bugs/1463408>
<mup> Bug #1495591 opened: TestRunCommand fails on windows <blocker> <ci> <test-failure> <windows> <juju-core:Triaged by cmars> <https://launchpad.net/bugs/1495591>
<rogpeppe> here's a mostly-cosmetic change to the apiserver internals. review appreciated: http://reviews.vapour.ws/r/2656/
<jam> katco: ping
<katco> jam: pong
<jam> katco: hi. are you able to meet tomorrow sometime early-ish in your morning?
<katco> jam: sure
<katco> jam: calendar is up to date
<jam> 11 or 12 utc ?
<katco> jam: 12 would work better, but can make either time
<katco> jam: thx for taking the time to meet
<jam> katco: happy to
<natefinch> that feeling when you've changed a whole bunch of code and finally get it all compiling... and the tests all pass on the first try.
<natefinch> <_<
<natefinch> >_>
<natefinch> not sure if tests are passing because you did it right... or there weren't tests for this code.
<jam> rick_h_: your early meeting tomorrow needs to be postponed. I have to go to my son's school right at that time.
<jam> anastasiamac: rick_h_: ^ same time on Wed works for me. I can also easily go 1 hr later if that helps Rick
<katco> ericsnow: natefinch: can i run a question by one/both of you about constraints?
<ericsnow> katco: sure
<katco> ericsnow: moonstone
<rick_h_> jam: all good will work with anastasiamac and see what can be done
<mgz> ericsnow: do you need any help with bug 1495542?
<mup> Bug #1495542: 1.20.x cannot upgrade to 1.26-alpha1 <blocker> <ci> <regression> <upgrade-juju> <juju-core:In Progress by ericsnowcurrently> <https://launchpad.net/bugs/1495542>
<ericsnow> mgz: not sure yet
<ericsnow> mgz: still making sense of the logs
<ericsnow> mgz: machine-0.log shows a panic, but the cause isn't exactly clear
<ericsnow> yet
<mgz> ericsnow: hm, as does the previous run, suggests it's not random
<mgz> ericsnow: the end cause of the test being failed is that the `juju set` token didn't propagate through the relation between charms
<mup> Bug #1495681 opened: quickstart delployments broken in 1.24 <blocker> <ci> <quickstart> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1495681>
<perrito666> wow, juju is not especially smart to report when it has issues with the state server
<wallyworld> ericsnow: i'll take the session closed bug off your hands; looking at your analysis (thank you) it looks like something tanzanite landed is the cause
<ericsnow> wallyworld: \o/
<katco> wallyworld: hey have a sec before the release standup?
<wallyworld> katco: still in my jammies, just woke up, give me a few
<katco> wallyworld: np at all... w/e you have time (can use 1:1)
<wallyworld> ok
<anastasiamac> ah there is proof that wallyworld does indeed sleep sometimes :D
<wallyworld> haha
<wallyworld> like you can talk :-P
<anastasiamac> :P
<perrito666> you should be both sleeping
<anastasiamac> perrito666: no sleep for me - school dropoff :)
<perrito666> do you people not have school buses?
<perrito666> or school kangaroos?
<mup> Bug #1495681 changed: quickstart delployments broken in 1.24 <blocker> <ci> <quickstart> <regression> <juju-core:Invalid> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1495681>
<anastasiamac> perrito666: not from where we r and not for a 3yo anyway..
<anastasiamac> perrito666: school buses here otherwise r known as parents :D
<perrito666> anastasiamac: well that sounds terribly inefficient
<anastasiamac> perrito666: are u saying that parents r not efficient?
<anastasiamac> :D
<perrito666> well I live around the corner to a school
<perrito666> parents here seem to think that their children might break if they walk something more than 2m
<perrito666> which causes a heavy traffic jam 3 times a day
<xwwt2> alexisb: We are running late
<alexisb> xwwt2, ack
<xwwt2> alexisb: Our meeting is running long with good info. We should bag the release meeting.
<alexisb> xwwt2, we are meeting, that is fine
<alexisb> do what you need at the sprint
<mup> Bug #1449054 changed: Intermittent panic: rescanned document <ci> <test-failure> <juju-core:Fix Released> <juju-core 1.22:Fix Released by dimitern> <juju-core 1.23:Won't Fix by fwereade> <juju-core 1.24:Fix Released by fwereade> <juju-core 1.25:Fix Released> <https://launchpad.net/bugs/1449054>
<wallyworld> axw: anastasiamac: perrito666: give me a minute
<anastasiamac> wallyworld: k
<wallyworld> anastasiamac: we in standup
#juju-dev 2015-09-15
<davechen1y> http://edition.cnn.com/2015/09/14/travel/aso-volcano-erupts-japan-flights/index.html
<davechen1y> welp, i guess we're not going to be sprinting in japan
<perrito666> well prices will most likely plummet, we might
 * perrito666 works a bit more to procrastinate writing tests for his personal projects
<mwhudson> jam, rogpeppe: it's "in progress" in the distro sense of the word
<mwhudson> i'm not sure when the fix will hit trusty-updates
<mwhudson> treat this as an opportunity to learn and go ask a distro person :-)
<mwhudson> (i'll do the same tomorrow when i'm working again)
<mup> Bug # changed: 1493887, 1494356, 1494441, 1494864, 1494913
<mfoord> morning all
<anastasiamac> mfoord: or night :D
<mfoord> anastasiamac: o/
<mfoord> anastasiamac: it's always morning on the internet...
<anastasiamac> :)
 * dimitern rebooting
<mfoord> fwereade: if I create a new error, ErrNoAddress, does that belong better in network or in errors
<mfoord> fwereade: my guess is errors alongside all the others
<fwereade> mfoord, heh, I am desperately ambivalent there :(
<mfoord> fwereade: hehe, cool - thanks
<fwereade> mfoord, I do worry that the error types in errors tend to become functionally useless for detecting specific scenarios, because they often end up getting used in more and more contexts and having their meaning diluted
<mfoord> fwereade: interesting
<fwereade> mfoord, NotFound, for example, now means so many things that you'd be crazy to take any potentially-destructive action based on encountering one
<mfoord> right
<mfoord> NoAddress seems pretty specific, but then perhaps that's more of an argument for putting it in the network specific package
<fwereade> mfoord, indeed :)
<mfoord> returning an ErrNoAddress from machine.PublicAddress() now causes 20 unit test failures in the state package
<fwereade> mfoord, ouch
<mfoord> so I'm still not sure it's a good idea... looking at how easy to fix they are
<mfoord> most of those places were *probably* checking for an empty address anyway, so it shouldn't be too much work
<mfoord> they certainly *should* have been
<fwereade> mfoord, right, how many of my comments are actually addressable is very much a judgment call, and you're right there by the code
<mfoord> yup, thanks
<natefinch> dammit... upgraded to vivid and it broke my ethernet :/
<natefinch> god, I miss trusty
<mup> Bug #1495952 opened: Return specific error cause when trying to add existing services or relations <juju-core:New> <https://launchpad.net/bugs/1495952>
<frankban> perrito666: could you please take a look at http://reviews.vapour.ws/r/2633/ ? thanks!
<frobware> dooferlad, are you in the webex conference?
<dooferlad> frobware: I have the java thing loaded. Failed on the phone bit
<frobware> dooferlad, I dialled in too. Odd.
<dooferlad> frobware: got it :-|
<voidspace> dooferlad: do you have a hangout sharing your screen?
<voidspace> dooferlad: the java icedtea plugin is just hanging for me
<frobware> voidspace, it would be a silent movie
<voidspace> ah...
<dooferlad> voidspace: no, but you can dial in and look at the presentation. Slides are in the meeting invite
<voidspace> dooferlad: frobware: dialling in
<frobware> voidspace, dooferlad: the amount of (my) data they want to own is scary.
<dooferlad> frobware: yea, they just want to watch your every move including what you eat. Nice.
<voidspace> frobware: dooferlad: :-)
<frobware> dimitern, want to sync up w.r.t to subnet/zones?
<mup> Bug #1495978 opened: Juju does not deploy CentOS images in LXC <centos> <containers> <juju> <lxc> <system> <juju-core:New> <https://launchpad.net/bugs/1495978>
<dimitern> frobware, yeah - in ~7m (at the hour) ?
 * fwereade will be back for a bit sometime this evening, but has a doctor's appointment in an inconvenient place
<fwereade> ping me if you need me and I'll see them later
<perrito666> fwereade: good luck
<perrito666> btw, "doctor's appointment in an inconvenient place" is a beautiful euphemism for a proctologist
<natefinch> haha
<dimitern> frobware, let's use the standup HO?
<natefinch> perrito666: like the back of a volkswagon?
<frobware> dimitern, in there now...
<dimitern> frobware, coming
<rogpeppe> dimitern: fancy a review? http://reviews.vapour.ws/r/2656/
<rogpeppe> or anyone else please? It's been almost 24 hours and I'd really like one review from someone in juju-core. http://reviews.vapour.ws/r/2656/
<rogpeppe> perrito666: (you're OCR, right?)
<anastasiamac> rogpeppe: perrito666 was OCR last week instead of me (but it's Wed 1am my time now)
<rogpeppe> anastasiamac: ok, np
<anastasiamac> rogpeppe: sorry i did not get to ur PR - was dealing with a blocker ;(
<rogpeppe> anastasiamac: that's good - the blockers are a pain! :)
<anastasiamac> rogpeppe: if no one will have a look, i promise to look at it in about 9hrs :D
<mgz> that sounds like the correct order, as rog can't land his branch till master is unblocked anyway
<rogpeppe> anastasiamac: thanks for the offer :)
<rogpeppe> mgz: the problem is that i'm trying to coordinate upstream branches and if this gets rejected then it's a right merge pain
<natefinch> rogpeppe: ship it
<rogpeppe> natefinch: ta!
<natefinch> rogpeppe: welcome :)
<perrito666> rogpeppe: sorry I was at the console, I can review your patch anyway :D
<perrito666> or instead nate did it
<perrito666> :p
<rogpeppe> perrito666: "at the console" - wow, you're using a mainframe?
<perrito666> rogpeppe: oh ffs, one cannot translate directly :p
<perrito666> I was paying attention at my terminal emulator (which incidentally is konsole, almost the same)
<katco> ericsnow: hey, looks like we need to ping mgz re: bug 1495681 sooner rather than later. can you handle that?
<mup> Bug #1495681: quickstart delployments broken in 1.24 <blocker> <ci> <quickstart> <regression> <juju-core:Invalid> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1495681>
<ericsnow> katco: sure
<katco> ericsnow: ty
<alexisb> ericsnow, please work with mgz on any questions regarding test results
<ericsnow> alexisb: will do
<ericsnow> mgz: ping
<mgz> ericsnow: sorry, waving hands at people, I am around to help
<ericsnow> mgz: thanks
<mgz> ericsnow: so, we have three changes on 1.24 with the regression, none of which jump out at me as the cause
<ericsnow> mgz: from what I can see initially, the config-get hook command isn't working right (or something related to it)
<mup> Bug #1468815 changed: Upgrade fails moving syslog config files "invalid argument" <ci> <regression> <upgrade-juju> <juju-core:Fix Released> <juju-core 1.24:Fix Released by ericsnowcurrently> <juju-core 1.25:Fix Released> <https://launchpad.net/bugs/1468815>
<mup> Bug #1496016 opened: jujud uses too much memory <juju-core:New> <https://launchpad.net/bugs/1496016>
<ericsnow> mgz: bb38bf13c00283f572f05139428961b259275079 is pretty closely related
<ericsnow> mgz: I meant 629fc0507c4b61be93fd79453244cb8fc853f413
<ericsnow> natefinch: ^^^
<natefinch> hmm
<mgz> ericsnow: so, we have a couple of options then
<mgz> can build 1.24 locally with that change removed, run the test locally, see if it passes
<mgz> then just back out the change for releases
<katco> mgz: i don't think we can release 1.24.6 w/o that fix
<katco> mgz: it is for a major vendor
<mgz> the other option means getting the fix fixed :)
<katco> mgz: either way, sounds like first step is testing w/ that fix reverted? meanwhile ericsnow can continue examining the error?
<ericsnow> katco: sounds good
<ericsnow> mgz: you'll test with that change removed?
<natefinch> this seems quite likely to be something broken from my change... I think the problem is that the default values for settings aren't getting set
<mgz> ericsnow: okay, I'll test that locally
<ericsnow> natefinch: yeah, that's what I was just thinking too
<mgz> and we'll reserve CI for testing a 1.24 with a fixed fix
<natefinch> which is why the dict is missing the value, instead of having a default value
<ericsnow> natefinch: you read my comment on the bug, didn't you <wink>
<natefinch> ericsnow: nah, just read the stack trace in the log and then thought about what changes I'd made to the settings.
<ericsnow> natefinch: :)
<mup> Bug #1496032 opened: backups restore won't create bootstrap on GCE <juju-core:New> <https://launchpad.net/bugs/1496032>
<natefinch> saw a quote once from Rob Pike about working with Ken Thompson... that Ken could usually figure out a bug faster by just sitting and thinking about the code, than Rob could by running it through a debugger.
<ericsnow> natefinch: but debuggers in the 70's were so fast! :)
<mup> Bug #1496032 opened: backups restore won't create bootstrap on GCE <docteam> <juju-core:New> <https://launchpad.net/bugs/1496032>
<perrito666> k ppl bbl
<ericsnow> natefinch: looks like there's a semantic difference between setting the service settings directly and calling UpdateConfigSettings()...
<ericsnow> natefinch: ...that they treat "unset" setting differently, which has a direct bearing on how defaults are applied later for each unit
<ericsnow> s/treat//
<natefinch> ericsnow: ug....   I hate all this implicit code that is so side-effect driven... so easy to break things that don't work in obvious, straight forward ways
<ericsnow> natefinch: I'm double-checking that UpdateConfigSettings discards "unset" settings, which is what I expect it does
<natefinch> ericsnow: thanks, currently wrestling with my 8 month old (harder than it sounds), so not able to do much for another 45 mins or so until my wife gets back
<ericsnow> natefinch: np
<cmars> fix for LP:#1495591  http://reviews.vapour.ws/r/2668/
<mup> Bug #1495591: TestRunCommand fails on windows <blocker> <ci> <test-failure> <windows> <juju-core:In Progress by cmars> <https://launchpad.net/bugs/1495591>
<mgz> cmars: thanks
<bogdanteleaga> cmars: that bug looks like the listener is getting closed twice(?) is that the right place to fix it?
<mup> Bug #1449210 opened: cloudsigma index file has no data for cloud <bootstrap> <cloudsigma-provider> <tech-debt> <juju-core:Triaged by s-matyukevich> <https://launchpad.net/bugs/1449210>
<cmars> bogdanteleaga, where else is it getting closed?
<bogdanteleaga> cmars: that's the first thought I had when I saw element not found, no idea where it could get closed
<alexisb> dimitern, you still around?
<cmars> bogdanteleaga, i don't think it is getting closed twice.. if it were, i'd expect to see duplicate "juju-run listener stopped" messages
<mup> Bug # changed: 1493623, 1494121, 1494912, 1494917
<bogdanteleaga> cmars: https://github.com/juju/juju/commit/1fbd501e88ee29456b7c471ce617877f6f1393ad#diff-7842644c47e14851e8a4649bfe2ade13R125
<bogdanteleaga> cmars: looks like we've been ignoring them for quite a while now
<bogdanteleaga> still doesn't seem like the right approach is to continue ignoring it
<bogdanteleaga> cmars: I think it's worth asking natefinch about this since the error is most likely coming from inside npipe
<cmars> natefinch, regarding above ^^, can you review http://reviews.vapour.ws/r/2668/, or advise on a better solution? thanks!
<ericsnow> mgz: did that updated deployer test finish yet?
<natefinch> cmars: I can look, yeah.
<cmars> natefinch, much thanks!
<ericsnow> mgz, sinzui: where can I find the bundle that is used for the quickstart tests?
<rick_h_> ericsnow: it's a version of https://jujucharms.com/u/landscape/landscape-scalable/9 in their test repo.
<ericsnow> rick_h_: thanks
<mup> Bug #1496127 opened: [juju-log] help is poor and results are confusing <docteam> <juju-core:New> <https://launchpad.net/bugs/1496127>
<natefinch> rick_h_: quickstart seems to be hard coded to use /usr/bin/jujud  rather than just using the first one in the path?
<rick_h_> natefinch: hmm i know you can env var it but not on path list
<rick_h_> natefinch: checking
<thumper> rick_h_: hey
<rick_h_> thumper: party
<thumper> rick_h_: after saying thursdays are good (which they are generally), I can't do tomorrow
<thumper> kid school thing to be at
<rick_h_> thumper: ah ok
<natefinch> thumper: tomorrow is wednesday ;)
 * natefinch is hilarious
<thumper> I can move the normal meeting
<thumper> sorry
<thumper> ugh
<rick_h_> thumper: I've got a thing fri (my thurs) but wanted to chat soon to get things in order pre-seattle if we can
<thumper> brain futz
<thumper> rick_h_: I have 15 minutes now
<rick_h_> thumper: sure
<alexisb> wallyworld, I am going to be late to the sts call
<wwitzel3> when you add a new facade version is there anything else besides making sure you embed the previous, register the new version, and update facadeversions?
<wallyworld> alexisb: me too, i just got back from school drop off
<wallyworld> niedbalski: there now
<rick_h_> natefinch-afk: sorry for the delay, yes it doesn't look at the path list: http://bazaar.launchpad.net/~juju-gui/juju-quickstart/trunk/view/head:/quickstart/platform_support.py#L117
<rick_h_> natefinch-afk: if you'd prefer it to look there next please file a bug and can see what can be done, but that's the way it works currently.
<mup> Bug #1331151 opened: 'juju destroy-environment' sometimes errors <juju-core:New> <https://launchpad.net/bugs/1331151>
<mup> Bug #1496143 opened: juju 1.24.5 no-proxy does not work as expected. <juju-core:New> <https://launchpad.net/bugs/1496143>
<wallyworld> thumper: temp fix to unblock master http://reviews.vapour.ws/r/2669/
<katco> wallyworld: hm.. have to ask the q: is that in the spirit of blocking master?
<wallyworld> katco: ?
<wallyworld> it's a temp fix to unblock
<katco> wallyworld: unblocking by skipping tests seems like it's just side-stepping the problem
<wallyworld> katco: the functionality is disabled
<katco> wallyworld: when the purpose of blocking is so that we don't layer onto broken code
<wallyworld> so the tests need to be too
<wwitzel3> with current master I am just getting this error over and over : machine-0: 2015-09-15 21:24:21 ERROR juju.worker runner.go:223 exited "toolsversionchecker": cannot update tools information: cannot get latest version: cannot find available tools: no matching tools available
<bogdanteleaga> wouldn't it be better to have something like ignore-<bug-no> when pushing rather than doing this?
<katco> wallyworld: ah ok. makes sense
<wwitzel3> anyone seen that before?
<wallyworld> katco: the functionality will be fixed today - a closed mgo session - this just buys time
<bogdanteleaga> wwitzel3: I've been getting it a lot, but I've just attributed it to "no upgrades available", but a bad error message
<katco> wwitzel3: sorry, haven't seen it. master's in a bad way atm though
<thumper> wallyworld: shipit
<wallyworld> ty
<thumper> cmars: I'm going to have to skip today
<mgz> thumper: a selection of bugs, there are going to be some dupes in here <https://bugs.launchpad.net/juju-core/+bugs?field.tag=wily>
<thumper> mgz: ta
<mgz> most of the test panics are one new form of panic on vivid/wily
<mup> Bug #1496159 opened: test suite runs fail to clean up all artifacts <tech-debt> <juju-core:New> <https://launchpad.net/bugs/1496159>
<cmars> thumper, k
<sinzui> katco: thumper wallyworld alexisb: CI will pause for at most 1 hour when a voting test fails its retries.
<wallyworld> sinzui: 1 hour is a long time :-)
<sinzui> wallyworld: we do need time to look at os-deployer which will trigger 1h. If the failure is in the first 2 hours of testing, there will be no delays at all
<mgz> wallyworld: this is somewhat complicated, in practice most failures will not cause a delay to retesting
<mgz> because it's based on time after failure, and if there are other tests ongoing, that time is used anyway
<wallyworld> makes sense
<mup> Bug #1467712 changed: cmd/jujud/agent: data race in test <juju-core:Fix Released> <https://launchpad.net/bugs/1467712>
<mup> Bug #1467753 changed: cmd/jujud/agent: multiple data races detected <juju-core:Fix Released> <https://launchpad.net/bugs/1467753>
<mup> Bug #1496161 opened: cmd/jujud/agent: data race detected <juju-core:New> <https://launchpad.net/bugs/1496161>
<wallyworld> mgz: sinzui: one fix landed, one more to go, let's hope tmac bad rec stays away, fingers crossed
<mup> Bug #1496166 opened: juju ssh connects to incorrect units <juju-core:New> <https://launchpad.net/bugs/1496166>
<mup> Bug #1494947 changed: Panic keyManagerSuite.TestImportKeys on wily <ci> <intermittent-failure> <panic> <unit-tests> <wily> <juju-core:Triaged> <https://launchpad.net/bugs/1494947>
<mup> Bug #1494948 changed: Panic charmVersionSuite.TearDownTest on wily <ci> <intermittent-failure> <panic> <unit-tests> <wily> <juju-core:Triaged> <https://launchpad.net/bugs/1494948>
<mup> Bug #1494949 changed: Panic charmVersionSuite.TearDownTest on wily <ci> <intermittent-failure> <panic> <unit-tests> <wily> <juju-core:Triaged> <https://launchpad.net/bugs/1494949>
#juju-dev 2015-09-16
<davecheney> hello
<davecheney> both the blocking bugs have been marked fixed committed
<davecheney> how long until CI is unblocked
<davecheney> thanks
<mup> Bug #1496184 opened: juju bootstrap on armhf/keystone hangs   juju version 1.24.5 <juju-core:New> <https://launchpad.net/bugs/1496184>
<mup> Bug #1496188 opened: panic in juju worker <juju-core:New> <https://launchpad.net/bugs/1496188>
<rick_h_> anastasiamac_: around?
<anastasiamac_> rick_h_: Hi :D
<rick_h_> anastasiamac_: howdy, so the meeting time is a bit off for my tastes. So I wanted to check in and ask what I could provide to help in my absence?
<anastasiamac_> rick_h_: apologies for funny timing for the meeting - it's just a placeholder... plz let me know what works for u :)
<rick_h_> anastasiamac_: aside from the info on the bundle break down lib and such, is there anything else you needed on my end?
<anastasiamac_> rick_h_: ur presence? :P
<rick_h_> anastasiamac_: :)
<anastasiamac_> rick_h_: what time would suit u better?
<rick_h_> anastasiamac_: the other one worked better, that's just getting up early
<rick_h_> anastasiamac_: but if you know what you're looking for input-wise maybe I can do it async?
<rick_h_> I don't want to hold things up on me
<rick_h_> as I think we had most things in hand back in the spec aside from the new bundle stuff and I've given you all the details on that
<anastasiamac_> rick_h_: from my perspective, I think m k at the moment with info
<anastasiamac_> rick_h_: would it suit u better if I'll try to move the meeting by 5hrs?
<rick_h_> anastasiamac_: ok, nvm. I'll be there
<rick_h_> anastasiamac_: don't worry about it
<rick_h_> anastasiamac_: forget I was here... /me disappears into the night
<anastasiamac_> rick_h_: i would like u to sleep :D
<anastasiamac_> but I'll move the meeting to later as I think it may suit u and John better :D
<rick_h_> anastasiamac_: naw, I like to party :P
<rick_h_> anastasiamac_: ask my team, it's not that bad for me anyway and I can sleep in thus :)
<anastasiamac_> rick_h_: if u reconsider, m happy to move to more suitable time ;D
<rick_h_> anastasiamac_: all good, go back to happy work thoughts for your day
<anastasiamac_> rick_h_: thnx :D get rest!
<thumper> menn0: hey, got time to talk about the unit agent upgrades?
<thumper> davecheney: care to look at this bug? http://reports.vapour.ws/releases/3059/job/run-unit-tests-trusty-ppc64el/attempt/3804
<thumper> davecheney: weird that it happens on power but not amd64
<menn0> thumper: yep
<thumper> davecheney: although, that code looks so fucked up
 * thumper is asking menno too
<thumper> menn0: standup hangout?
<menn0> thumper: see you there
 * thumper wonders how this code works
<davechen1y> thumper: honestly no idea
<thumper> davechen1y: anyway, I've got this one
<davechen1y> we've had continual problems getting the right version of gccgo and libgo5 on ppc64 machines
<thumper> menn0: this is the uniter panic on power http://reviews.vapour.ws/r/2671/diff/#
<thumper> davecheney: I have a golang question...
<thumper> if I have a function, and that function defines a pointer variable
<thumper> and in that same function I create some other closures (functions) that refer to that variable
<thumper> are the closures defined to refer to that variable by reference?
<davecheney> thumper: yes
<thumper> good
<davecheney> thumper: uniter fix LGTM
<thumper> davecheney: ta
<mup> Bug #1496217 opened: panic in cmd/jujud on power <blocker> <ci> <regression> <juju-core:In Progress by thumper> <https://launchpad.net/bugs/1496217>
<mup> Bug #1496221 opened: Whitespace or new lines in "juju status --format=tabular" output <juju-core:New> <https://launchpad.net/bugs/1496221>
<davecheney> thumper: when do "fix committed" blockers roll off the dashboard ?
<thumper> davecheney: when they have passed CI
<thumper> wallyworld: http://reports.vapour.ws/releases/3059/job/aws-upgrade-20-trusty-amd64/attempt/29 fix didn't fix
<davecheney> thumper: do we have to wait til tomorrow for that ?
<thumper> davecheney: I just removed one by marking it fix released
<thumper> davecheney: we are down to two failures
<thumper> I have submitted a fix for one of them
<thumper> which in honesty I think will change a panic into a different failure
<thumper> the other was the upgrade one...
<davecheney> thumper: true, if that value was nil
<davecheney> something else failed to initialise
<mup> Bug #1496217 opened: panic in cmd/jujud on power <blocker> <ci> <regression> <juju-core:In Progress by thumper> <https://launchpad.net/bugs/1496217>
<mup> Bug #1496221 opened: Whitespace or new lines in "juju status --format=tabular" output <juju-core:New> <https://launchpad.net/bugs/1496221>
<thumper> holy shit mup is somewhat behind
<thumper> wallyworld: is the uniter failure due to the missing upgrade step?
<mup> Bug #1495591 changed: TestRunCommand fails on windows <blocker> <ci> <test-failure> <windows> <juju-core:Fix Released by cmars> <https://launchpad.net/bugs/1495591>
<anastasiamac_> thumper: which uniter failure r u referring to?
<thumper> anastasiamac_: the most recent failure from ci - http://reports.vapour.ws/releases/3059/job/aws-upgrade-20-trusty-amd64/attempt/29
<thumper> menn0: I have a hacky fix for the uniter upgrade step in a 1.25 branch, running tests now
<menn0> thumper: cool
<thumper> wallyworld: after chatting with anastasiamac_ the answer I think is "yes, the lack of uniter upgrade steps is causing the upgrade failure"
 * thumper has a hacky fix
 * thumper shakes his fist at the sky
 * thumper is grumpy
<thumper> in our current tests... even 1.25, I've come across three different intermittent failing tests
<davecheney> thumper: -race
<thumper> bah humbug
<thumper> no presents for anyone this year
<thumper> github.com/juju/juju/worker/peergrouper fails every time for me...
 * thumper sighs
<davecheney> yeah, that one has failed for months for me
<davecheney> especially on non intel platforms
<anastasiamac_> for bug 1495542, i think that the reported bug is no longer a blocker as the test exercising unused code which caused the trace has been skipped...
<anastasiamac_> This bug still needs to be addressed but it should not be blocking master.
<anastasiamac_> The failure observed now seems to be different - the trace looks different. What we currently observe MAY warrant another bug ;D
<mup> Bug #1495542: 1.20.x cannot upgrade to 1.26-alpha1 <blocker> <ci> <regression> <upgrade-juju> <juju-core:In Progress by anastasia-macmood> <https://launchpad.net/bugs/1495542>
<anastasiamac_> thumper gives presents? each year?
<thumper> anastasiamac_: not any more...
<anastasiamac_> thumper: no ned to stop on my account :)
<anastasiamac_> need*
<thumper> davecheney: I'm running go 1.5 now, wondering if that is related...
 * thumper hits the -race fails with the go package...
<thumper> no races for me
<davecheney> thumper: almost certainly
<davecheney> see long discussion about getting 1.5 rolled out everywhere before comitting to 1.6 for 16.04
 * thumper nods
<thumper> wallyworld: actually back to the upgrade failure, it seems that yes, while there is an error with a bad kind, things progress anyway
<wallyworld> thumper: "ModeAbide: cannot set invalid status "started"" - that looks like it's caused by new juju status enums not being processed correctly, but that code has been in place for months. not sure why that is showing up now.
<davecheney> thumper: AFAIK juju passes on intel with 1.5
<wallyworld> maybe it was always there and we didn't notice
<davecheney> it almost passes on non intel with go 1.5
<thumper> wallyworld: the fundamental error is that the source sets relation config, the sink never runs relation-changed hook
<davecheney> but slowness makes it hard to be absolute here
<davecheney> see previous plea for faster ppc64 hardware
<thumper> davecheney: amd64 and go 1.5 fails peergrouper almost every time for me
<wallyworld> thumper: ok, will start at that point
<wallyworld> thumper: peergrouper tests kinda suck (from memory). they need to be transformed into proper unit tests
<thumper> wallyworld: ack
<davecheney> thumper: try GOMAXPROCS=1 go test .../peergrouper
<davecheney> that will make the scheduler look more the like 1.2.1 scheduler
<davecheney> and may improve reliability
<thumper> menn0: is this what you were thinking for the uniter upgrades?  http://reviews.vapour.ws/r/2672/diff/#
 * menn0 looks
<davecheney> if so, then those tests need to be fixed to not expect a certain order of operations
 * thumper pokes the peergrouper
<sinzui> anastasiamac_: As long as master is not blessed, master is blocked. if you close bug 1495542, you need to replace it with a critical ci regression bug because stakeholders will not allow us to release if the test shows they cannot upgrade from 1.20 to 1.24.
<mup> Bug #1495542: 1.20.x cannot upgrade to 1.26-alpha1 <blocker> <ci> <regression> <upgrade-juju> <juju-core:In Progress by anastasia-macmood> <https://launchpad.net/bugs/1495542>
<sinzui> anastasiamac_: CI Cares about tests passing, fixing bugs without getting the tests passing does not mean we can release
<menn0> thumper: that's pretty much what I was thinking although in my mind those upgrade steps would be run from the unit agent Run()
<anastasiamac_> sinzui: ah... my bad - the bug is not about the trace but upgrade difficulties :D of course, it should stay :)
<menn0> thumper: calling them from Run is probably slightly preferable in case there are some others we need to add that aren't uniter related
<menn0> thumper: also, I thought you said there were 2 upgrade steps like this?
<thumper> menn0: the other one is introduced in master, this is a 1.25 branch
<thumper> where is the unit agent Run func?
<menn0> cmd/jujud/agent/unit.go
<menn0> thumper: ^
<thumper> ta
<menn0> FFS!
<menn0> I'm getting TLS handshake errors with a freshly bootstrapped env using master
<menn0> actually... ignore me
<thumper> davecheney: FWIW setting GOMAXPROCS=1 makes the peergrouper tests pass
<menn0> I think there are agents from a hosted env trying to contact the previous state server
<thumper> heh
<thumper> menn0: I'll move the run upgrades func to the unit agent code
<menn0> thumper: they're probably also fine where they are (in the uniter package)
<thumper> menn0: nah...
<thumper> moving already
<mup> Bug #1496237 opened: peergrouper tests very unstable with Go 1.5 <intermittent-failure> <tech-debt> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1496237>
<thumper> menn0: this better? http://reviews.vapour.ws/r/2672/diff/#
 * menn0 looks
<menn0> thumper: LGTM
<thumper> menn0: I'll await fwereade's comment
<menn0> cool
<thumper> ok... I'm pretty much done for today
<thumper> see all y'all tomorrow
<jam1> wallyworld: ping
<wallyworld> jam1: hi
<jam> wallyworld: so of Series in Charm Metadata and Resources, I think we'd rather focus on the former
<jam> I know you poked me about the latter, and I can try to bring it up in 30 min, though I have other higher priority right now.
<wallyworld> jam: yeah, that work is underway. i'd also like to get resources spec at least approved so we can start on it this cycle. eco really wants it. maybe you can do it next week, or just ask the spec to be reviewed over the next few days
<wallyworld> jam: series in metadata won't take 100% so there will be a little spare capacity
<axw> fwereade: I'm debugging #1495542, getting a lot of "2015-09-16 07:24:41 DEBUG juju.worker.dependency engine.go:438 "uniter" manifold worker stopped: getting resource leadership-tracker: dependency not available"
<mup> Bug #1495542: 1.20.x cannot upgrade to 1.26-alpha1 <blocker> <ci> <regression> <upgrade-juju> <juju-core:In Progress by anastasia-macmood> <https://launchpad.net/bugs/1495542>
<axw> fwereade: restarting the API server did not appear to help, but restarting the unit agent did. makes me suspect the dependency engine isn't delivering a new API connection to the leadership tracker... kind of clutching at straws though
<axw> fwereade: other part to the error: "leadership-tracker" manifold worker returned unexpected error: leadership failure: leadership manager stopped
<axw> fwereade: and after I restarted the API server, I was getting: engine.go:438 "leadership-tracker" manifold worker stopped: leadership failure: error making a leadership claim: connection is shut down
<axw> cmars: ^^ on upgrade from 1.20 to 1.26, "metricspooldir" doesn't exist. is there meant to be an upgrade step for this?
<dimitern> axw, o/
<axw> dimitern: heya
<dimitern> axw, just a heads up - we decided to do a simpler implementation (not as effective as I'd like) for provisioning instances into subnets of a space, only in EC2 and not integrating that with AZ distribution
<dimitern> axw, then, we should have a discussion how to properly do it with you and/or fwereade
<axw> dimitern: okey dokey, so does it disable distribution when a space is specified?
<dimitern> axw, it tries to accommodate both - i.e. starting from all AZs and restricting them to only those relevant to a given space
<dimitern> any reviewers? this is a forward port to master of the 1.25 fix for bug 1492066: http://reviews.vapour.ws/r/2673/ - please take a look
<mup> Bug #1492066: cloud-init fails when deploying CentOS with Juju. <centos> <cloud-init> <juju> <juju-core:In Progress by dimitern> <juju-core 1.25:Fix Committed by dimitern> <https://launchpad.net/bugs/1492066>
<axw> dimitern: hm ok, I don't really get how you implement DistributeInstances correctly then. I might need to see code to understand
<dimitern> axw, I'll send you a link to the PR by mail when I'm done
<axw> dimitern: thanks
<fwereade> axw, sorry, processing
<fwereade> axw, ok, I'm not really managing to form a coherent theory at the moment -- that latest log does rather look like the various workers are coming up? but we've got weird settings revision numbers?
<fwereade> axw, but then the leadership errors you posted above make it look like the problem is in
<axw> fwereade: it did in CI, but when I repro'd it just continuously restarted manifold workers
<fwereade> axw, fuck, I think I might know what's happening
<fwereade> axw, we've never had a mechanism for usefully bouncing the workers that live under state
<axw> fwereade: I saw the "HackLeadership" thing, which is called when the API server stops handling a client connection
<axw> fwereade: isn't that the only time we need to bounce it...?
<fwereade> axw, if it encounters some unknown error, it'll stop on its own
<fwereade> axw, but then I don't quite get what happens when the apiserver restarts
<axw> fwereade: I didn't see any "stopping leadership manager with error", but I'm about to repro again so I'll double check
<fwereade> axw, error logging around state/leadership/manager.go:38 would detect that, I think
<fwereade> axw, I don't think there's anything in state that'd otherwise notice the error until state itself is closed
<fwereade> axw, hopefully we'd log that one, but it could be a long time after it actually happened
<axw> fwereade: lol. third time's the charm, didn't repro this time :/
<fwereade> goddammit
<axw> fwereade: settings version is still screwed though
<fwereade> axw, huh
<fwereade> axw, so the settings version was right before the upgrade and something overwrote it?
<axw> fwereade: not sure yet. I upgraded, ran a "config-set" which triggers "relation-set", remote state sees it... but it's the same as what's in local state already
<fwereade> axw, is the relation-set to a unique value?
<fwereade> axw, we elide null changes
<axw> fwereade: yes
<fwereade> axw, just checking :(
<fwereade> axw, I guess check the rel-state/mgo-state versions before the upgrade and see which get touched by the upgrade?
<fwereade> axw, either something's breaking the versions
<axw> fwereade: yeah. gotta got for a bit, I'll try that after
<fwereade> axw, or the versions are already broken
<fwereade> axw, cheers
 * fwereade has to depend on wifi through walls for a few hours, please forgive any spottiness
<frankban> menn0 or katco: could you please take a look at http://reviews.vapour.ws/r/2633/ when you have time? proposed against a feature branch
<frobware> dimitern, voidspace, jam: standup time if you're coming...
<voidspace> frobware: I'm here
<jam> frobware: omw
<axw> fwereade: welp... before upgrade, api server sent a relation units change with version=3. after upgrade, the initial change event has version=2
<axw> fwereade: I think this might be because of the env-uuid migration... txn-revno would reset, because the doc is removed and re-added. but not sure why it hasn't been an issue before.
<fwereade> axw, holy hell, that is horrifying
<fwereade> axw, ok, if we're going to address this -- as we must -- let's do it right and maintain our own revno
<axw> fwereade: yep, SGTM
<axw> fwereade: any thoughts on why this wouldn't have been an issue until now?
<fwereade> axw, no idea at all
<fwereade> axw, ah-ha
<fwereade> axw, ...or not
<axw> heh :)
<fwereade> axw, hmm. did we change from remove/add in one txn to remove-a-bunch then add-a-bunch?
<fwereade> axw, I'm pretty sure that if we did a remove/add in one txn it'd preserve revno
<fwereade> axw, separate remove then add would be dangerous, I hope we don't do that
<axw> fwereade: one big txn, but they're different docs - they have different IDs
<fwereade> axw, but I suspect a separate add then remove would be safe *except* for changing the underlying revno
<axw> fwereade: https://github.com/juju/juju/blob/master/state/upgrades.go#L1133
<axw> fwereade: the doc inserted would have txn-revno in it I guess, but I think that'd be overwritten by the txn code?
<fwereade> axw, bugger, I would appear to be completely wrong
<fwereade> axw, I have to double-check mgo/txn
<axw> fwereade: and I'll do some experiments in that code
<fwereade> axw, ha
<frobware> dimitern, so I think I see why the number of sockets in close_wait dropped so dramatically: Out of memory: Kill process 4280 (jujud) score 699 or sacrifice child
<fwereade> axw, yes, I am completely wrong
<axw> fwereade: as in, revno is not preserved?
<fwereade> axw, yeah -- I now think all the new docs will be inserted with revno 2
<axw> fwereade: that matches what I saw
<fwereade> axw, and possibly we do a few subsequent txns on them to bump those up further?
<fwereade> axw, but it'll only work right if we bump them far enough?
<fwereade> axw, making it more likely to fail (subtly at first) in long-lived environments
<axw> fwereade: right... so I have nfi how nobody has found this before
<axw> fwereade: sorry gotta go, I'll bbl
<fwereade> axw, take care
<axw> fwereade: perhaps before we were using !=, now we're using >
 * axw actually goes
<fwereade> axw, I thought we always used > but could be wrong
<voidspace> frobware: dimitern: so the dimensions I have for the juju poster are wrong for the printer - I need to spend a bit of time getting the graphic the right size so I can get it printed for Friday :-/
<voidspace> frobware: dimitern: hopefully not too long, but I'm currently derailed onto that as it's a bit urgent (this is for PyCon UK and the Juju poster session I'm doing)
<voidspace> ericsnow: ping
<frobware> voidspace, ack
<voidspace> frobware: dimitern: graphic and printing sorted, collection Friday AM! *phew*
<axw> fwereade: confirmed that it is >2 before, and 2 after upgrade. I'll look into an upgrade step / watcher change.
<axw> fwereade: and we used to use !=
<axw> fwereade: https://github.com/juju/juju/blob/1.25/worker/uniter/relation/livesource.go#L223
<fwereade> axw, well, that would explain it
<fwereade> axw, and could indeed still show up as missed-hooks on upgrade
<fwereade> axw, but be much rarer
<fwereade> axw, but still needs to be fixed
<axw> fwereade: indeed
<fwereade> axw, re the other stuff
<fwereade> axw, er, I think I'm saying, it looks like there are more bugs
<fwereade> axw, but I'm not clear which they are and what's been addressed already in that thread
<axw> fwereade: ok. I definitely saw issues with manifolds restarting, and also one with the metrics spool dir not existing
<axw> fwereade: I had to manually create it to proceed
<axw> fwereade: but... that only happened to me the first couple of times I tested this
<fwereade> cmars, thoughts on ^^?
<dimitern> voidspace, frobware, I'm back now - reading scrollback..
<dimitern> voidspace, nice! I'm sure the poster session will be interesting :)
<dimitern> voidspace, incidentally, please have a look at this http://reviews.vapour.ws/r/2673/ ;)
<dimitern> dooferlad, frobware ^^
<voidspace> dimitern: looking
<frobware> dimitern, going to get some lunch whilst that process gobbles up sockets... want to look into this at the top of the hour (14 UTC+1)?
<voidspace> dimitern: that's a shed load of test changes!
<voidspace> dimitern: LGTM
<dimitern> frobware, sure, np
<dimitern> voidspace, thanks! yeah - but at least they're manageable now :)
<voidspace> dimitern: yep, good work
<frobware> dimitern, voidspace: I was confused by the change largely because I looked that test changes first. the "other" file makes it quite obvious.
<axw> fwereade: I need to go have dinner, but this is what I'm intending to do: https://github.com/juju/juju/compare/master...axw:state-settings-version
<fwereade> axw, thanks
<axw> fwereade: there's some fallout in state tests I need to look into
<voidspace> frobware: yeah, me too :-)
<fwereade> axw, that's glorious
<fwereade> axw, I have wanted time and/or an excuse to do that forever
<dimitern> voidspace, frobware, yeah, but to test the obvious changes was mostly impossible due to the way tests were written (they accumulated a lot of cruft over time)
 * dimitern is gradually building up frustration with helm in emacs - it's helpful for a few things, but it gets increasingly annoying in most common cases
<rogpeppe> simple addition to the errors package: https://github.com/juju/errors/pull/21
<rogpeppe> reviews appreciated, thanks
<axw> fwereade: cool :)  upgrades will be a little bit messy, but worth the pain
<rogpeppe> axw: fancy a quick trivial review? :) http://reviews.vapour.ws/r/2674/
<axw> rogpeppe: only for you
<axw> rogpeppe: shipit
<rogpeppe> axw: ta! :)
<rogpeppe> axw: you're a love
<bogdanteleaga> unit-sleep-0[1244]: 2015-09-16 12:55:45 DEBUG juju.worker.dependency engine.go:438 "metric-sender" manifold worker stopped: failed to open spool directory "C:/Juju/lib/juju/metricspool": GetFileAttributesEx C:/Juju/lib/juju/metricspool: The system cannot find the file specified.
<bogdanteleaga> shouldn't this at least try to create that directory if it's not there?
<rogpeppe> this is what I wanted it for: http://reviews.vapour.ws/r/2675/
<frobware> dimitern,  I enable helm for about a day, get frustrated with it, turn it off for a few months. Then I see the potential in some youtube video, turn it on... lather, rinse, repeat...
<rogpeppe> if someone could give me a review of this please, it would make me happy http://reviews.vapour.ws/r/2675/
<dimitern> frobware, :) yeah - it's hijacking some commands somewhat haphazardly (latest frustration is compilation-find-file)
<ericsnow> voidspace: pong
<bogdanteleaga> dimitern, I've been starting with it lately and it seems really nice so far, but I find my usecases are somewhat limited and usually integrate with projectile
<dimitern> bogdanteleaga, yeah, the worst of all, helm *really* tries to be "helpful" *everywhere*
<bogdanteleaga> dimitern, well, I didn't find it intrusive, at least not yet
<bogdanteleaga> dimitern, it's not like it starts showing up if you didn't call for it :P
<voidspace> ericsnow: unping...
<ericsnow> voidspace: :)
<voidspace> dimitern: in my code review fwereade spake thusly "I'm raising a bit of an eyebrow at the gobal-prefer-ipv6 thing... is this going to be a long term approach?"
<voidspace> dimitern: I assume I should reply "no"...
<rogpeppe> katco: hiya
<katco> rogpeppe: hey
<rogpeppe> katco: fancy a simple review? :) http://reviews.vapour.ws/r/2675/
<katco> rogpeppe: working my way through them now actually :)
<rogpeppe> katco: cool
<rogpeppe> katco: this one's really really ickle simple...
<katco> rogpeppe: you are 3rd in the queue my friend
<rogpeppe> katco: awesome!
<rogpeppe> katco: BTW it's targeting the chicago-cubs feature branch
<katco> rogpeppe: what is that branch anyway?
<rogpeppe> katco: adding macaroon support to juju-core
<katco> rogpeppe: why wasn't it just named that lol
<rogpeppe> katco: it's the branch we worked on in the recent chicago sprint
<rogpeppe> katco: blame mattyw :)
<katco> voidspace: re: http://reviews.vapour.ws/r/2634 is this a forward-port of http://reviews.vapour.ws/r/2593/ ?
<voidspace> katco: it is...
<voidspace> katco: I'm going to close the two forward port reviews and repropose when the original PR is actually done
<katco> voidspace: ah ok, wondered if that was what was going on
<voidspace> katco: yeah, it was done. And then fwereade looked at it ;-)
<katco> haha
<voidspace> :-)
<voidspace> all good stuff
<katco> he keeps us honest :)
<dimitern> voidspace, yes I think so
<katco> rogpeppe: shipit
<rogpeppe> katco: ta!
<voidspace> fwereade: ping
<fwereade> voidspace, pong
<voidspace> fwereade: question about apiserver/client/status
<voidspace> fwereade: note as a precursor that machine/unit PublicAddress/PrivateAddress have now changed and the only error they can return *does* indicate that an address has not yet been set
<voidspace> fwereade: we fetch PublicAddress for both UnitStatus and MachineStatus
<voidspace> fwereade: you were unhappy about not surfacing errors (the old code and my branch set an empty DNSName / PublicAddress)
<voidspace> fwereade: as fetching the address can no longer raise arbitrary errors (no secret db write) are you happier with returning an empty address
<voidspace> fwereade: if *not*, there's no obvious existing field (maybe InstanceState on MachineStatus) to surface the error information in
<voidspace> fwereade: http://reviews.vapour.ws/r/2593/#comment17052
<fwereade> voidspace, yeah, and status is bloated enough already
<voidspace> fwereade: ok
<voidspace> I thought that removing the possibility of arbitrary errors made it less of an issue, but wanted to check
<fwereade> voidspace, possibly dump them to stderr as warnings..?
<voidspace> as in the diff there will still be an "apparently ignored" error
<fwereade> voidspace, I think that once you have an err return you have to assume the possibility of unknown errors
<voidspace> heh, hmmm
<fwereade> voidspace, (especially when the network is involved, as it is here)
<voidspace> I've *documented* the error the method can return
<voidspace> and if you can't trust a doc string what *can* you trust...
<fwereade> voidspace, I'm laughing both merrily and bitterly over here
<voidspace> fwereade: so this is on the apiserver, what do you mean by "dump to stderr"?
<voidspace> fwereade: log?
<fwereade> voidspace, oh -- ha
<fwereade> voidspace, dammit, sorry
<fwereade> voidspace, forget stderr then, just log it at WARNING or something :/
<voidspace> fwereade: cool, thanks
<voidspace> appreciated, sorry to steal your cycles
<fwereade> voidspace, was thinking of an older version of status that did much more client-side
<fwereade> voidspace, no worries, always a pleasure
<wwitzel3> ericsnow: ping
<ericsnow> wwitzel3: hi!
<wwitzel3> fwereade: ping
<fwereade> wwitzel3, pong
<katco> is pinging random people a thing now? a meme maybe? ping wwitzel3
<wwitzel3> katco: ping
<wwitzel3> ;)
<katco> wwitzel3: ping, wwitzel3. ping. as your fore-fathers have pung.
<wwitzel3> haha
<mattyw> I remember when this was a sensible channel
<katco> mattyw: ok mr chicago-cubs branch ;)
<mattyw> katco, I feel bad - I've ranted a couple of times the last week about how much I hate pet names
<mattyw> katco, and there I was sinning the whole time I was ranting
<katco> haha
<mattyw> katco, also - hello there, not spoken in ages
<katco> mattyw: yeah, hiya
<katco> mattyw: haven't talked to several folks in a long time who are not in my tz =/
 * perrito666 learns that this was once a sensible channel
<natefinch> ericsnow: back now
<ericsnow> natefinch: k
<mup> Bug #1496472 opened: TestRun fails intermittently on ppc64 <ci> <intermittent-failure> <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1496472>
<alexisb> sinzui, mgz is this a critical blocker for 1.25 ^^^
<alexisb> I saw the cursed runs
<mgz> alexisb: nope, it's intermittent, and we've had a selection of different ppc64 test issues
<mgz> don't have enough info yet
<alexisb> mgz, ack, thanks
<mgz> alexisb: bug 1496217 is another failure from the previous run that likely has higher impact
<mup> Bug #1496217: panic in cmd/jujud on power <blocker> <ci> <regression> <juju-core:In Progress by cmars> <https://launchpad.net/bugs/1496217>
<alexisb> mgz, ack, cmars thanks for picking up that bug!
<alexisb> alrighty all I am stepping out for a bit, katco knows how to get a hold of me if anything urgent comes up
<katco> i am accepting requests for prank calls
<mgz> katco: you are in an amusing mood today :)
<katco> mgz: i aim to please :)
<mup> Bug #1496472 changed: TestRun fails intermittently on ppc64 <ci> <intermittent-failure> <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1496472>
<mup> Bug #1496472 opened: TestRun fails intermittently on ppc64 <ci> <intermittent-failure> <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1496472>
<rogpeppe> katco: here's a somewhat more substantial PR if you have a moment or two. it's just code cleanup, but will make moving forward easier. https://github.com/juju/juju/pull/3296
<katco> rogpeppe: will tal in a bit
<rogpeppe> katco: ta!
<rogpeppe> ericsnow, natefinch: you might wanna take a look too https://github.com/juju/juju/pull/3296
<ericsnow> rogpeppe: nice
<rogpeppe> ericsnow: thanks :)
<ericsnow> rogpeppe: I looked into doing a refactor like this last year and chickened out :)
<rogpeppe> ericsnow: i couldn't just make things worse...
<rogpeppe> ericsnow: the error return stuff is terrible and sadly cannot be fixed.
<ericsnow> rogpeppe: yeah, that was one of the major things that hung me up
<rogpeppe> ericsnow: i've no idea what whoever it was was thinking
<ericsnow> rogpeppe: hopefully I didn't make it worse with all the changes I made in that space for backups
<rogpeppe> ericsnow: i'm afraid backups is one of the problem places
<ericsnow> rogpeppe: :(
<rogpeppe> ericsnow: almost every API call returns an error in the form {Error: {error object}}
<rogpeppe> ericsnow: except backups returns the error as just {error object}
<ericsnow> rogpeppe: FWIW, that was some of the first code I wrote for Juju (and in Go)
<rogpeppe> ericsnow: that's ok - someone else should've been more on the ball
<ericsnow> rogpeppe: so no surprises :)
<perrito666> anyone is local provider savvy?
<ericsnow> katco, mgz, natefinch: I've found the problem (for backward-compatibility the API server converts empty strings into nil in config settings)
<ericsnow> katco, mgz, natefinch: this has an impact on the behavior of UpdateConfigSettings but not on directly setting the config settings
<ericsnow> katco, mgz, natefinch: I should have a patch up shortly
<natefinch> ericsnow: thanks!
<katco> ericsnow: wow, nice... i think fwereade categorized this as "spooky action at a distance"
<ericsnow> katco: :)
<mgz> ericsnow: woho!
<mgz> ericsnow: wave the branch at me when it's up
<ericsnow> mgz: k
<ericsnow> mgz: http://reviews.vapour.ws/r/2679/
<katco> ericsnow: tal
<ericsnow> katco: thanks
<mgz> ericsnow: thanks!
<ericsnow> mgz: sorry that took so long
<mgz> ericsnow: thank you, that branch makes sense to me
<mgz> and I can see why we struggled to track it down :)
<katco> ericsnow: shipit
<ericsnow> mgz: yeah, it was just non-obvious enough :/
<katco> ericsnow: and ty for your hard work
<ericsnow> katco: hey, this one was a team effort :)
<perrito666> bbl gym
<ericsnow> katco, mgz, natefinch: the fix has landed
<natefinch> ericsnow: thanks for figuring it out.  I'm catching up on the changes you made now
<katco> ericsnow: woohoo!
<mgz> ericsnow: you have the next run through CI, eta 40mins
<ericsnow> mgz: k
<natefinch> ericsnow: so your change fixes the problem because it drops all the map entries with a nil value?
<ericsnow> natefinch: yep
<ericsnow> natefinch: it's what it was doing before
<natefinch> ericsnow: yeah... I wish there was a more obvious way to do it... converting in and out of the settings object still qualifies as spooky action IMO.
<ericsnow> natefinch: I was aiming for a consistent execution path through the two approaches
<ericsnow> natefinch: I agree it isn't optimal
<natefinch> I'm still surprised that using the deployer to deploy the bundle with the config had different results than deploying from the command line.
<natefinch> I guess the client must strip out the empty config values where the deployer was relying on the server to do so
<ericsnow> natefinch: yep, I suspect some trickery from the CLI before it makes the API call
<natefinch> ericsnow: I wonder if it wouldn't be more appropriate to have createSettingsOp strip out keys with nil values
<ericsnow> natefinch: that would impact other callers of createSettingsOp that currently do not worry about it
<ericsnow> natefinch: not that it's necessarily the wrong thing to do
<ericsnow> natefinch: I was trying to minimize the potential impact for this patch
<natefinch> ericsnow: yeah... I'm just trying to make the code a little more obvious and explicit.
<ericsnow> natefinch: fair enough :)
<mgz> 1.24 building on CI now.
<natefinch> ericsnow: http://reviews.vapour.ws/r/2680/diff/#
<natefinch> gotta run to dinner... but I think that's a more obvious codepath (and keeps your new test, which still passes, as do all other tests)
<ericsnow> natefinch-afk: I'll take a look
<ericsnow> natefinch-afk: note that I already landed that other patch for 1.24
<ericsnow> katco: I've sent you my feedback on that email
<ericsnow> katco: enjoy! :)
<katco> ericsnow: sweet, ty
<mup> Bug #1496217 changed: panic in cmd/jujud on power <blocker> <ci> <regression> <juju-core:Fix Released by cmars> <https://launchpad.net/bugs/1496217>
<menn0> wallyworld: have you got time for a quick hangout?
<alexisb> \o/ 1.24 is blessed
<alexisb> ship the puppy!
<xwwt> team is on it
#juju-dev 2015-09-17
<xwwt> have just a couple of steps remaining
<rick_h_> alexisb: lol
<wallyworld> menn0: sorry, just got out of meetings, can talk now
<menn0> wallyworld: thanks. which hangout?
<wallyworld> https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand
<mwhudson> oh the cannot find encoding bug is fixed in trusty-proposed now
<mup> Bug #1496639 opened: juju get incorrectly reports boolean default values <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1496639>
 * thumper wanders off to make a coffee
<mup> Bug #1496652 opened: Relation settings watcher exposes txn-revno to uniter <juju-core:In Progress> <https://launchpad.net/bugs/1496652>
<mup> Bug #1496652 changed: Relation settings watcher exposes txn-revno to uniter <juju-core:In Progress> <https://launchpad.net/bugs/1496652>
<mup> Bug #1496652 opened: Relation settings watcher exposes txn-revno to uniter <juju-core:In Progress> <https://launchpad.net/bugs/1496652>
<thumper> menn0: here is the master branch for the uniter upgrade step work http://reviews.vapour.ws/r/2682/diff/#
<thumper> menn0: just a forward port of the 1.25 with the new step added
<axw> wallyworld: FYI I'm doing the tweak to unblock master, just being held up writing a test
<wallyworld> ty, saw the bug comment
<thumper> man I love my coffee machine
<menn0> thumper: looking
<thumper> davecheney: re bug 1465317, what's the current status of the fixes you have been doing?
<mup> Bug #1465317: Wily osx win: panic: osVersion reported an error: Could not determine series <osx> <packaging> <wily> <windows> <juju-core:Triaged> <juju-core 1.25:Triaged> <juju-release-tools:Fix Released by sinzui> <https://launchpad.net/bugs/1465317>
<thumper> davecheney: should we assign this bug to you?
<thumper> for the work you have been doing?
<menn0> thumper: ship it
<thumper> menn0: cheers
<axw> wallyworld: http://reviews.vapour.ws/r/2683/
<wallyworld> looking
<menn0> thumper, wallyworld: i'm beginning to think this idea of Juju using MAAS' cached images is a non-starter
<thumper> because?
<menn0> thumper, wallyworld: the images don't appear to be in the same format
<wallyworld> not surprised
<wallyworld> i was sort of afaid of that
<menn0> the MAAS images are a tarball of a root filesystem
<thumper> menn0: document your findings on the bug, and we'll look for another solution
<menn0> the images served by the public server are a tarball with the kernel separate and a .img file which contains the root filesystem
<menn0> thumper: ok
<menn0> thumper: should I remove it from the 1.25-beta2 milestone?
<wallyworld> axw: ship it, thanks for temp fix
<thumper> um... yeah
<axw> wallyworld: ta
<wallyworld> thumper: menn0: this maas image thing is a direct request from dan w so i suspect we'll be asked to make it work somehow
<wallyworld> by we, i mean us and maas
<thumper> wallyworld: perhaps maas needs to cache the initial format, then unpack for themselves
 * thumper hopes it is more maas
<wallyworld> yes
 * wallyworld does too
<axw> I think there was a mailing list thread about this stuff a while ago...
<wallyworld> rings a bell
<menn0> i'll have a look around
<menn0> and i'll still send an email to relevant people about this
<axw> menn0: not sure how relevant, "Juju + MAAS + Image Downloads" from may 2014
<axw> err sorry sept
<axw> can't read US dates :)
<menn0> axw: i've seen that one
<menn0> axw: that's the discussion around the ticket i'm looking at
<axw> ah ok
<menn0> axw: no one seems to have checked if the image formats are actually the same :)
<axw> heh
<menn0> maybe it's easy to convert between them
<menn0> i'm just pulling down the maas code now to check a few things
<menn0> then i'll send this email
<wallyworld> axw: going afk for an hour or so, bbiab
<axw> wallyworld: no worries, ttyl
<menn0> wallyworld, thumper: bradm mentioned this to me when I was chatting to him earlier: https://bugs.launchpad.net/juju-core/+bug/1496639
<mup> Bug #1496639: juju get incorrectly reports boolean default values <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1496639>
<menn0> does that ring any bells?
<thumper> menn0: nope
<thumper> but interesting
<menn0> thumper: attach it to 1.25-beta2?
<thumper> menn0: sure, why not
<thumper> hmm...
<thumper> wrote a simple charm to test a problem
<thumper> but the charm's default apt-get is failing
<thumper> E: There are problems and -y was used without --force-yes
<thumper> seems a bit weird
<thumper> was my squid-deb-proxy being out of date somehow
<thumper> miken: here I am then :)
<thumper> trying to reproduce the failure
<miken> Yep, re-running my mojo spec with the same code revision that caused the config-changed failure, to see if I still see the issue too.
<thumper> miken: this one https://bugs.launchpad.net/juju-core/+bug/1494542
<mup> Bug #1494542: config-changed error does not cause error state <juju-core:In Progress by thumper> <juju-core 1.24:In Progress by thumper> <juju-core 1.25:Triaged by thumper> <https://launchpad.net/bugs/1494542>
<thumper> miken: ah... are you able to run the environment with --debug?
<thumper> miken: then let me have the logs?
<thumper> I'm not able to reproduce with my trivial failing charm
<miken> possibly - I'll let the current run finish so I know whether I can reproduce it. If I can, is that an option I can add to the juju-deployer call? /me checks
<thumper> no idea
<thumper> miken: does the deployer bootstrap for you?
<thumper> it is juju I want in debug mode, not really the deployer
<thumper> yer
<rick_h_> thumper: which problem is this? deployer/config-changed sounds like the thing eric and natefinch-afk fixed today?:
<miken> Ah - just to bootstrap. Sure.
<thumper> miken: if you use --debug when bootstrapping, that flag is passed through to the remote server, and it logs everything at debug too
<thumper> you can set it afterwards
<thumper> but easiest if bootstrapping to just use --debug
<miken> Nice. I'll try setting it now too in case it does reproduce (takes 10-15mins to run deployment)
<miken> Hmm - how can I set it with the deploy already going?
<thumper> juju set-env logging-config=juju=DEBUG
<miken> Great
<miken> rick_h_: https://bugs.launchpad.net/juju-core/+bug/1494542
<mup> Bug #1494542: config-changed error does not cause error state <juju-core:In Progress by thumper> <juju-core 1.24:In Progress by thumper> <juju-core 1.25:Triaged by thumper> <https://launchpad.net/bugs/1494542>
<thumper> miken: has rick_h_ hit it too?
<rick_h_> miken: ah ok, that's different then
<rick_h_> thumper: no, the mention of a config-changed issue made me think of https://bugs.launchpad.net/juju-core/+bug/1495681 from today
<mup> Bug #1495681: quickstart delployments broken in 1.24 <blocker> <ci> <quickstart> <regression> <juju-core:Invalid> <juju-core 1.24:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1495681>
<rick_h_> thumper: so just checking, but looks like that's different
<thumper> ok
<thumper> what causes quickstart to not work?
 * rick_h_ goes back to hiding :)
<rick_h_> thumper: something to do with nil values and the API that got cleaned up
<thumper> rick_h_: WTH are you doing answering IRC now anyway?
<thumper> rick_h_: you are setting a bad precident
<rick_h_> thumper: meeting in an hour :)
<thumper> ew
<thumper> well that sucks
<rick_h_> thumper: fortunately not too many of these
<rick_h_> thumper: if only all of you would move to the US :P
<thumper> TBH I don't think the US would want us all
<thumper> stealing all those jobs from real americans
<rick_h_> thumper: heh we just say that a lot, but we don't really care.
<miken> thumper: ok, running with the same spec and code rev did reproduce it. I can give you logs, but I only switched on --debug half way through (but before the config-change error, afaict). Is it just the /var/log/juju/unit-sca-app-0.log that you're after?
<thumper> yes, and machine-0.log
<thumper> the unit one so we can see what it thinks
<thumper> and machine-0 to check any server side changes
<thumper> see where the problem is, client or server
<thumper> miken: can you attach the logs to the bug plz?
<miken> Ack
<thumper> awesome
<thumper> ta
<rick_h_> thumper: are all those safe to attach? e.g. cloud info? /me hasn't looked lately.
<thumper> hmm...
<thumper> I *think* so
<rick_h_> miken: might just give a quick grep for cloud creds or anything. I know there was work to not outut things but don't recall where it left off tbh
<miken> rick_h_: yeah, I was just checking through after noticing that they're root:root 600 in ~/.juju/local/logs
<miken> I can see lots of "Response": "'body redacted'" 's
<thumper> miken: yeah, need to set the logging level to TRACE to get the data in there
 * thumper is on kid duty, will check in later with the bug to look (maybe)
<thumper> I have a feeling that all work later will be emails and comments on documents
<miken> thumper: You can grab the logs from:
<miken> thumper: chinstrap.canonical.com:/home/michaeln/bug-1494542-logs.tgz If you can see there's nothing sensitive there, feel free to attach.
 * thumper has to go cook dinner before school production tonight
<miken> thumper: Nice - enjoy!
<thumper> miken: ok, ta
<thumper> bad status test... embedding actual version numbers in expected output
 * thumper settles down to write a few final emails with a glass of wine
<thumper> hopefully the wine will help
<axw> fwereade: I would appreciate a review from you on http://reviews.vapour.ws/r/2685/ when you can. I'd particularly like validation of my assertion in settingsDocNeedsMigration... but really the whole thing is a *little* bit hairy
<fwereade> axw, cheers
<mup> Bug #1496750 opened: Failed worker can result in large number of goroutines and open socket connections and eventually gets picked on by the OOM killer <juju-core:New> <https://launchpad.net/bugs/1496750>
<mup> Bug #1496750 changed: Failed worker can result in large number of goroutines and open socket connections and eventually gets picked on by the OOM killer <juju-core:New> <https://launchpad.net/bugs/1496750>
<voidspace> jam: fwereade: stdup?
<jam> voidspace: I believe it's strdup :)
<voidspace> jam: :-p
<mup> Bug #1496750 opened: Failed worker can result in large number of goroutines and open socket connections and eventually gets picked on by the OOM killer <juju-core:New> <https://launchpad.net/bugs/1496750>
<mup> Bug #1495542 changed: 1.20.x cannot upgrade to 1.26-alpha1 <blocker> <ci> <regression> <upgrade-juju> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1495542>
<bogdanteleaga> unit-err-0[1975]: 2015-09-17 09:39:11 DEBUG juju.worker.dependency engine.go:438 "metric-sender" manifold worker stopped: failed to open spool directory "/var/lib/juju/metricspool": stat /var/lib/juju/metricspool: no such file or directory
<bogdanteleaga> anybody getting this on master?
<ashipika> bogdanteleaga: windows?
<bogdanteleaga> ashipika: trusty
<bogdanteleaga> it appeared on windows as well fwiw, but it fails on trusty too
<ashipika> bogdanteleaga: that's a change that landed with the maltese falcon feature branch…
<ashipika> bogdanteleaga: which test did you run?
<bogdanteleaga> ashipika: I'm trying to deploy a charm
<ashipika> bogdanteleaga: will look into it! thanks for reporting it
<ashipika> bogdanteleaga: could you please file a bug on launchpad?
<bogdanteleaga> ashipika: I'll try later
<ashipika> bogdanteleaga: thank you..
<jam> fwereade: voidspace: I was looking at https://github.com/juju/juju/wiki/mgo-txn-example the one thing that sticks out as odd is the "e, err = e.st.Environment()" to refresh itself
<jam> it feels really strange to override the 'self' pointer, as it doesn't change the caller, just the local context, right?
<fwereade> jam, rather than e.Refresh()?
<jam> IMO it would be clearer if we use a real local variable there
<jam> fwereade: well either Refresh or local var
<fwereade> jam, i.e. `env := e` outside?
<jam> fwereade: right
<fwereade> jam, there's a convention in state that methods not change unexpected fields, which is why no Refresh()
<fwereade> jam, I have no objection to local vars though
<fwereade> jam, would probably be clearer
<natefinch> +1 for not overriding the receiver... that's certain to cause confusion.
<jam> fwereade: I'm fine with the no Refresh, cause I understand not wanting to have to introspect your object to see if there were any side effects.
<fwereade> jam, then +1 to fixing it to use local vars
<axw> anastasiamac: when you're working... please see https://github.com/axw/juju/commit/3168dd66426d018a00ff32c61b91c9141351e8b5
<voidspace> fwereade: jam: although if you *know* your data is out of date, deliberately *not* refreshing struck me as a little odd
<voidspace> fwereade: jam: you're choosing to deliberately leave stale data for other "observers"
<voidspace> but I haven't gone back and read the doc yet
<fwereade> voidspace, all data is always stale anyway
<voidspace> I guess the risk is that you leave other observers with inconsistent data, and stale data is better than inconsistent data
<voidspace> fwereade: possibly stale versus we actually know this is wrong
<fwereade> voidspace, it's not so much about other observers -- individual entity types are not generally goroutine-safe -- as it is about ensuring every individual client sees a consistent (if not correct) picture of the entity; part of that is putting the responsibility for Refresh in their hands, and not pre-empting it
<voidspace> fwereade: right
<voidspace> inconsistent data could be arbitrarily bad
<voidspace> fair enough, although then it would seem that Refresh is almost never safe
<fwereade> voidspace, *that said* the value of that approach is certainly diminished -- it came from a time when everything had a direct db connection, and there were many more long-lived *state.Things floating around in the system
<fwereade> voidspace, almost all our state types are now very short-lived -- one API call and that's it
<voidspace> fwereade: I've addressed all the issues on my PR except the txn assert and the tests for that
<fwereade> voidspace, cool, thanks
<voidspace> fwereade: will ping you when it's done as it's a different beast from when it was last reviewed
<fwereade> voidspace, (fwiw, it's a slightly subtle notion of inconsistency -- a refresh *will* give you new, consistent data, but it potentially invalidates assumptions you checked in the scope you're looking at it from. so we try to keep showing you the same picture; even if it *is* a lie, the cause of any failures will still be reported, and it's up to the client to Refresh and reconsider or just give up)
<mattyw> fwereade, ping?
<fwereade> mattyw, pong
<mattyw> bogdanteleaga, are you blocked on that metric spool error?
<voidspace> fwereade: sure, understood. A refresh at an arbitrary point on another goroutine could present a very odd view of the world to another observer.
<voidspace> fwereade: so a Refresh is *only* safe if you know there are no other observers. So Refresh is probably a bit of an anti-pattern.
<fwereade> voidspace, most of them aren't goroutine-safe, so the two observers are the internal and external views of the object -- but, yeah
<fwereade> voidspace, (State itself should be, but most of the stuff you get from a state will not be)
<voidspace> right
<bogdanteleaga> mattyw: nope, it just made logs harder to read
<bogdanteleaga> it does try a lot :)
<mattyw> bogdanteleaga, yeah - it's the first time I've read the logs from the point of view of someone who doesn't know about the dep engine - how understandable they are leaves a lot to be desired
<mattyw> bogdanteleaga, we'll fix it, thanks for bringing it up
<bogdanteleaga> mattyw: np, shouldn't it create the directory itself though? it does still error differently if I create it manually
<mattyw> bogdanteleaga, all of that stuff should get handled automatically when it needs it
<mattyw> bogdanteleaga, but from the logs that'a all left very unclear :/
<bogdanteleaga> mattyw: oh, I think I get what you mean
<anastasiamac> axw: saw :D
<rogpeppe> anyone got an opinion on what HTTP status code is appropriate for the error with code params.CodeOperationBlocked?
<rogpeppe> i'm pretty sure that ErrBadRequest is not correct
<rogpeppe> s/Err/Status/
<rogpeppe> perhaps StatusForbidden, I guess
<jam> 403 Forbidden seems close (I understood you, but you can't do that)
<jam> 409 Conflict is potentially valid (you made a request which conflicts with the  fact it is currently blocked)
<jam> rogpeppe: ^^
<rogpeppe> jam: i've gone with Forbidden
<rogpeppe> jam: (thanks)
<rogpeppe> [11:47:59] <voidspace> fwereade: sure, understood. A refresh at an arbitrary point on another goroutine could present a very odd view of the world to another observer.
<rogpeppe> voidspace: just picking up on that at random
<rogpeppe> voidspace: you *can* get a refresh at an arbitrary point in a goroutine
<rogpeppe> voidspace: if we had shorter refresh times, it would happen more often
<fwereade> rogpeppe, expand? we were thinking of Refresh calls on state types
<rogpeppe> fwereade: ah, totally out of context then, ignore me :)
<rogpeppe> fwereade: i always thought that it was a bit of an anti-pattern that state types had any internal state at all
<fwereade> rogpeppe, yeah, I think you're right there
<fwereade> rogpeppe, hey ho, we endure :)
<jam> cmars: still up for meeting in 10 or so?
<cmars> jam, yep, i'm in the hangout
<alexisb> dimitern, ping
<dimitern> alexisb, omw
<frankban> axw: hi, could you please take a look at http://reviews.vapour.ws/r/2633/ ? it's proposed against a feature branch
<perrito666> abentley: ping?
<abentley> perrito666: pong
<perrito666> abentley: by any chance would you know how I can get a hold of the packaging script of juju-mongodb?
<abentley> perrito666: Sorry, I don't know where that's kept.  On our team, sinzui is the main packager, but I don't think he did juju-mongodb.  I think that was juju-core.
<natefinch> katco: sorry for the late notice, I'm going to have to jet at 15 past the hour - our 2 year old threw up this morning and so I'm going to have to run an errand that my wife was going to do, while she's home with the kids.  I'll only be gone an hour, but it's time sensitive.
<katco> natefinch: no worries... it's just us for the standup, so shouldn't even take 15m
<natefinch> katco: oh yeah, forgot eric's gone now, too.
<perrito666> abentley: thank you :) Ill keep hunting
<abentley> Sorry I can't help more.
<abentley> perrito666: Try jamespage: https://dogfood.paddev.net/ubuntu/+source/juju-mongodb/2.4.10-0ubuntu1
<perrito666> abentley: you did :) one less place to look
<xwwt> abentley: Notes look ok.  We are just looking for a time for those non-voting to be enabled
<abentley> xwwt: And commitment from core on the ones we can't enable without their help.
<xwwt> abentley: yeah, commitment on timing.  we will continue to work through that on the release meeting too.
<jamespage> abentley, perrito666: hello
<perrito666> jamespage: hello :)
<rogpeppe> second part of my apiserver cleanup. reviews much appreciated. https://github.com/juju/juju/pull/3307
<perrito666> jamespage: do you happen to know how can I get a hold of the scripts required to create a custom package for juju-mongodb ?
<jamespage> perrito666, bzr branch ubuntu:juju-mongodb
<rogpeppe> katco: this carries on from yesterday's branch if you wanna take a look
<perrito666> jamespage: that was easy, tx :)
<katco> rogpeppe: probably won't have time today... meeting day
<rogpeppe> katco: np, thanks for yesterday's review anyway
<katco> rogpeppe: np
<rogpeppe> natefinch: wanna take a look? 30 lines deleted :)
<katco> rogpeppe: he had to run an errand... kiddos sick
<rogpeppe> katco: ok, thanks
<mgz> perrito666 darling, are you around?
<perrito666> mgz: I am sweetie (I feel a bomb coming my way)
<mgz> $ grep -F 1.24 cmd/juju/status_test.go
<mgz> - new available version: "1.24.7" "1.24.7            \n"+
<mgz> rev 3b8ee08c9dd6c27c1147f2aab157868987489d8c seems incomplete
<perrito666> mm, I am pretty sure I replaced that after
<perrito666> mgz: -n?
<perrito666> shi***
<perrito666> fixing
<mgz> perrito666: 3026, 3131
<mgz> you got *most* of them :P
<perrito666> mgz: I suck
<perrito666> mm, it is fixed everywhere except there
<perrito666> mgz: running tests now
<mgz> perrito666: we'll need to combine your branch with the version bump to get the landing allowed,
<mgz> perrito666: see https://github.com/juju/juju/pull/3299
<perrito666> mgz: you can pull -r https://github.com/perrito666/juju/tree/hotfix_1.24_hardcoded_version_in_tests into the bump patch
<mgz> perrito666: thanks, we'll do that
<perrito666> mgz: sorry again
<mgz> perrito666: worth it just to wind you up :)
<perrito666> mgz: pleased to please :p
<katco> alexisb: our meeting conflicts with the juju-core meeting. want to move to after our 9am?
<dimitern> frobware, hey, I'm back again - should we start the call?
<frobware> yep
<dimitern> frobware, standup HO?
<frobware> dimitern, yes
<voidspace> fwereade: ping
<voidspace> dimitern: ping
<fwereade> voidspace, pong
<voidspace> fwereade: I'm struggling to write the assert I need I'm afraid and would appreciate some help
<voidspace> fwereade: I need to assert that the new address (type state.address) is in either the addresses field or machineaddresses field of the current machine doc
<voidspace> fwereade: but the $in operator doesn't work like that, and as far as I can tell there are no good references on mgo asserts
<voidspace> nor any similar examples that *I* can find
<voidspace> fwereade: this is for state/machine.go
<voidspace> fwereade: current state of code at http://reviews.vapour.ws/r/2593/diff/#
<voidspace> fwereade: it would go inside the getPreferredAddressOps method
<fwereade> voidspace, something like {$or: {addresses: {$in: [ADDR]}, machineaddresses: {$in: [ADDR]}}} ?
<voidspace> just found the mongodb operator docs
<fwereade> ah cool :)
<voidspace> fwereade: will try that and see what happens, I had that as a straight $in and it said it needed an array
<voidspace> and that addr was not a valid value
<fwereade> voidspace, yeah, it's ugly but I think it works
<voidspace> in some of the various permutations
<voidspace> *I tried
<voidspace> I have a call, will try in a bit
<rogpeppe> does anyone know if the /environment/$uuid/charms GET endpoint is used at all by anything?
<rick_h_> rogpeppe: is that used by the gui to get local charm info? /me isn't sure what endpoint is called but it looks like something we'd want
<rogpeppe> rick_h_: i think that's an API call - i'm talking about the REST endpoint here
<rogpeppe> fwereade: ping
<fwereade> rogpeppe, pog
<rogpeppe> fwereade:  your name seems to be associated with some of this code, so perhaps you can tell me...
 * fwereade gets nervous
<rogpeppe> fwereade: i'm looking at apiserver.charmsHandler
<rogpeppe> fwereade: it seems that sometimes it sends errors as JSON errors and sometimes just as text (with http.Error)
 * fwereade *thinks* he wants to point at frankban?
<rogpeppe> fwereade: can you think of a reason why that might be a good thing?
<fwereade> rogpeppe, I certainly can't
<fwereade> rogpeppe, I do remember drivebying some fix there a while back
<fwereade> rogpeppe, but I don't think that was related?
<rogpeppe> fwereade: do you think it would considered API breakage to fix that?
<rogpeppe> fwereade: (FWIW i can't find any client code in juju-core that does a GET from that endpoint)
<fwereade> rogpeppe, probably :-/ but if you send the right thing *and* the wrong thing (with a comment explaining why) that might sound more plausible?
<rogpeppe> fwereade: (and none of those errors have any test coverage)
<fwereade> rogpeppe, ffs :(
<rogpeppe> fwereade: the two things are mutually exclusive
<rogpeppe> fwereade: you can't send a JSON-encoded error *and* the error as text
<fwereade> rogpeppe, oh balls ofc sorry
<fwereade> rogpeppe, end of day ;p
<rogpeppe> fwereade: np :)
<rogpeppe> fwereade: for my purposes, I'd very much like it all to standardise on one error format that can return an error code too
<fwereade> rogpeppe, I think it is api breakage then... can we have a /v2 handler, perhaps?
<rogpeppe> fwereade: my take on it is that there's currently no way to write a correct client
<rogpeppe> fwereade: so we're actually fixing things not breaking them
<fwereade> rogpeppe, juju error code, not an http error code?
<rogpeppe> fwereade: yes
<fwereade> rogpeppe, I would like that too, but... I don't see why it's impossible to write a client? bloody hindering awkward, yes
<rogpeppe> fwereade: BTW if you have any time (unlikely, I know) it would be great to have a review of my ongoing apiserver cleanup http://reviews.vapour.ws/r/2689/
<fwereade> rogpeppe, but I can't see how to change it without potentially breaking the clients that have gone to the effort
<rogpeppe> fwereade: so, the only decent way that I can see to write a non-breaking client is to check the content type and unmarshal as JSON if json and text otherwise
<rogpeppe> fwereade: so if we change the bad endpoints to return JSON, that client would still work
<rogpeppe> fwereade: also, all but one of the textual errors will be impossible to get unless there's juju internal breakage (corrupt charm cache files)
<rogpeppe> fwereade: the one error that may be possible to get would be "directory listing not allowed", but tbh the worst that can happen is that some client gets an error that doesn't look like the error they expect in that case.
<fwereade> rogpeppe, honestly it's too hairy for me, I have convinced myself that breakage is impossible, and been wrong, too many times :(
<fwereade> rogpeppe, would much rather see a new version
<rogpeppe> fwereade: the fact that this is only on a very unusual error path and that none of our actual juju code uses the endpoint makes me think that it should be ok
<rogpeppe> fwereade: do you know of any external programs that use this endpoint at all, let alone relying on juju internal service error formats?
<fwereade> rogpeppe, no, and that's the problem -- if I knew who used it I'd be able to find out how
<voidspace> fwereade: so this as a basic syntax (not yet with $or) at least is correct syntax
<voidspace> fwereade: bson.D{{"addresses", bson.D{{"$in", []address{addr}}}}}
<voidspace> fwereade: the value we're checking needs to be in an array, that's what was fooling me
<voidspace> however...
<voidspace> that fails with: "cannot set addresses of machine 0: state changing too quickly; try again soon"
<rogpeppe> fwereade: i think it's reasonable to change the internal errors as they are not possible to trigger, and thus impossible to rely on anyway.
<voidspace> (deterministically it seems)
<fwereade> voidspace, heh, ok, so that assert triggers? dammit
<rogpeppe> fwereade: i honestly think there's room for a small amount of pragmatism in the case of the "directory listing not allowed" error
<fwereade> voidspace, bugger, an address is a struct, isn't it, not sure how that'll play
<voidspace> fwereade: yup
<rogpeppe> fwereade: especially given the marginal cost of a version change
<fwereade> voidspace, does anything stick out about how an address serializes?
<voidspace> fwereade: it has a Value field which is really the  *only* important bit
<fwereade> rogpeppe, that doesn't feel like it should be too terribly costly?
<rogpeppe> fwereade: it clutters everything to leave it as is
<fwereade> voidspace, heh, then I'm afraid I must direct you back to the mongo operator reference, I'm sure there's some way to do that but I completely forget
<rogpeppe> fwereade: i think the API we really need to worry about is the API that juju itself uses.
<voidspace> fwereade: it's not documented on the $in page, but I will keep digging
<voidspace> fwereade: I know what I'm searching for now, which makes life easier
<fwereade> rogpeppe, I don't think that's the case at all, is it?
<rogpeppe> fwereade: that's the primary concern, yes
<fwereade> rogpeppe, doing horrible things to ourselves is much more justifiable than doing horrible things to our external clients
<fwereade> rogpeppe, at least we understand the scope of the pain when we do it to ourselves
<rogpeppe> fwereade: making an API endpoint consistent in the error format it hands out is doing a horrible thing to our external clients?
<fwereade> rogpeppe, *changing behaviour* is being horrible
<fwereade> rogpeppe, the aspects of X behaviour that people will find to depend upon are a continual source of amazement to me
<rogpeppe> fwereade: if we change this and there's a client that relies on this particular format of this particular error message, i will buy you beers for an entire sprint
<fwereade> rogpeppe, we both have a poor track record re predicting "safe" api changes
<fwereade> rogpeppe, please just make a new version of the endpoint that does the right thing?
<rogpeppe> fwereade: no, i guess i'll just keep the old behaviour
<fwereade> rogpeppe, what is problematic about versioning it?
<rogpeppe> fwereade: it means duplicating all the code
<rogpeppe> fwereade: it's a cure worse than the ill
<fwereade> rogpeppe, so we basically *can't* version http endpoints? that feels like an infinitely worse ill...
<rogpeppe> fwereade: i'm saying that providing a new version of an endpoint for one error message format is not worth it
<rogpeppe> fwereade: do you agree that changing the internal errors (that will never happen in practice) is OK ?
<fwereade> rogpeppe, (that seems mainly to point to a failure to separate the behaviour from the transport? one would think that a v2 that shared all the code except the outermost error translation layer would be trivial...)
<fwereade> rogpeppe, changing error *messages* does not bother me; changing error *codes* would bother me; changing error *formats* also bothers me
<rogpeppe> fwereade: if you don't mind changing error messages, this is essentially just changing the error message
<rogpeppe> fwereade: because there is no error code in this case
<fwereade> rogpeppe, I see it as the format?
<rogpeppe> fwereade: if a client is reading it as a textual error message, then JSON is just the text
<rogpeppe> fwereade: if the client knows enough to inspect the content type, then it won't run into the problem
<rogpeppe> fwereade: my aim is to remain backwardly compatible and not to require clients to fetch /environments/$uuid/charms-v2 or whatever because of a marginal-case error message.
<voidspace> fwereade: so this is a valid assert, but suffers from the same problem... (so not helpful, but the dot syntax for fields is interesting to note)
<voidspace> bson.D{{"$or", []bson.D{
<voidspace>     bson.D{{"addresses.value", bson.D{{"$in", []string{addr.Value}}}}},
<voidspace>     bson.D{{"machineaddresses.value", bson.D{{"$in", []string{addr.Value}}}}}}}},
<fwereade> rogpeppe, I maintain that this is a change in the *format* of the result, and in such cases we should always just add a new api version
<rogpeppe> fwereade: adding an api version is not free
<fwereade> voidspace, try a Find with your assert data and see what you get? it ought to match the doc you're looking for
<voidspace> kk
<fwereade> rogpeppe, externalising the costs by making secret api changes is not really ok, though
<rogpeppe> fwereade: i think there is room for pragmatism here
<fwereade> rogpeppe, and if there is a certainty out there, it is that you will need to change your apis
<fwereade> rogpeppe, my perception is that the last 3 times you've convinced me of that we've seen breakage ;p
<rogpeppe> fwereade: for an API that is just getting a file, I'm not sure.
<rogpeppe> fwereade: particularly when the API is inconsistent already, sometimes returning JSON, sometimes not, with no way for the client to know which might be returned.
<fwereade> rogpeppe, (so it's not even that I don't trust you -- I don't trust myself, or *anybody else*, to modify apis in place without causing breakage)
<fwereade> rogpeppe, eight or nine times bitten, $spanish-inquisition times shy
<rogpeppe> fwereade: oh, one other thing: you think that changing error codes is breakage. my code adds error code returns where previously there were none (Code field always empty). do you consider that breakage too?
<rogpeppe> fwereade: that is, now the error code will actually reflect the error
<fwereade> rogpeppe, yes, it's clearly a change in behaviour, new version please
<fwereade> rogpeppe, if you ask for v2 you can be sure you'll get an error code
<fwereade> rogpeppe, v1 -- maybe, maybe not. do you feel lucky? ;p
<rogpeppe> fwereade: ok, another question then: how should we determine API version?
<rogpeppe> fwereade: at the moment i think my tendency is towards declaring it a request header
<fwereade> rogpeppe, I have a mild preference for doing it in the url in general, but in all these cases we have to construct auth headers anyway, right?
<rogpeppe> fwereade: i'm not keen on the approach we've used elsewhere with the version as part of the URL path.
<rogpeppe> fwereade: yes
<fwereade> rogpeppe, stable urls and varying headers seem sane then
<rogpeppe> fwereade: do you think i should just copy the whole apiserver directory?
<fwereade> rogpeppe, ...no?
<rogpeppe> fwereade: so version-based if statements scattering the code?
<fwereade> rogpeppe, should it not just be a few maps of versions to handlers?
<fwereade> rogpeppe, first check the version, then dispatch to the appropriate handler
<rogpeppe> fwereade: if i'm copying the handler code, i might as well copy everything apart from the core
<rogpeppe> fwereade: honestly, i'm thinking it's not worth fixing.
<fwereade> rogpeppe, so the error format is so inextricably bound up with the rest of the code that the rest of the code can't be shared?
<rogpeppe> fwereade: at the moment, the error generation is not tied to the request
<voidspace> fwereade: so after the transaction, that same query finds the machine
<voidspace> fwereade: the assert does see the result of earlier txn.Op in the same transaction doesn't it...
<fwereade> voidspace, ah-ha! this is an address that wasn't there before?
<fwereade> voidspace, certainly not
<voidspace> fwereade: so extend the $or
<voidspace> fwereade: no, can't that's kind of the point
<voidspace> fwereade: needs to be done as a separate transaction then
<voidspace> set the new field, then set the addresses checking they still exist
<fwereade> voidspace, sorry, my brain was still tangled with the write-on-get code
<voidspace> np
<fwereade> voidspace, in this case I think you probably just want to check that addresses and machineaddresses are exactly what they were originally?
<fwereade> voidspace, so sorry I didn't make connection before
<voidspace> fwereade: ok
<voidspace> fwereade: it's better not to do it as a separate transaction as the first one could succeed then the second fail, and then what would we do
<voidspace> fwereade: and if they've changed, what do we do? we don't want to repeat the transaction
<fwereade> voidspace, yeah, exactly, avoid multiple txns like the plague :)
<voidspace> fwereade: we have an interesting race because of fallback scopes
<voidspace> fwereade: if SetProviderAddresses and SetMachineAddresses are called concurrently, which is highly likely
<voidspace> fwereade: the first time, neither PreferredAddress (Public nor Private) will be set
<voidspace> so the first one will set both (one correctly as it has the right scope and one as a fallback because the other isn't available yet)
<voidspace> and the second one will also attempt to set both
<voidspace> whereas we probably want ProviderAddresses to set one and MachineAddresses to set the other
<rogpeppe> fwereade: BTW are we allowed to change behaviour at all (e.g. add a field) without making a new version?
<voidspace> ok, so I need to rebuild the slice of addresses which I'm not currently doing in buildTxn
<voidspace> that will work
<voidspace> fwereade: sorry, using you as a rubber ducky
<voidspace> fwereade: I think I have it
<voidspace> fwereade: will need to craft a test to prove it...
<voidspace> but it will have to wait I'm afraid, EOD and PyCon UK tomorrow
<voidspace> I've told alexisb that this bug will be ready for beta 2
<fwereade> rogpeppe, AFAICS, you can only reasonably do that when you don't actually need to add the field at all -- otherwise you're screwing over your clients, who can't tell whether a given server will pay any attention to that field or not
<rogpeppe> fwereade: i'm talking about adding fields to a response
<fwereade> rogpeppe, it's less harmful, for sure, but isn't it better for the client that might use the extra fields to ask for a response it *knows* will have those fields?
<rogpeppe> fwereade: not necessarily
<rogpeppe> fwereade: if the field is present, it can act on it
<fwereade> rogpeppe, right, but what *actually happens* is that they use a new server to begin with, and come to depend on the field, and then their code breaks seemingly at random
<rogpeppe> fwereade: ha, one problem with doing the version-in-header approach is that a new client with an old server will still get the old response. but that's potentially an issue with doing it in the path too - a "not found" response might be a legitimate response to the new path, for example.
<fwereade> rogpeppe, ha, good point
<fwereade> rogpeppe, (see? I'm shit at predicting these subtle breaks)
<rogpeppe> fwereade: i'm thinking of adding an error code field to some responses (where there is none currently)
<fwereade> rogpeppe, yeah, that sounds like a great thing but a new api version to me
<fwereade> rogpeppe, and I kinda have to stop now :(
<rogpeppe> fwereade: one way to get around that issue is to include the API version with the response.
<wwitzel3> looking at juju/charms I see Metadata has a Hooks() method that returns a map of all the hooks a charm might want to implement, I don't see where we call that in juju core
<wwitzel3> so where in core do we determine what hooks should be run?
<wwitzel3> or could be run I should say
<natefinch> wwitzel3: not sure... haven't looked at that part of the code
<bogdanteleaga> wwitzel3: https://github.com/juju/juju/blob/master/worker/uniter/hook/hook.go#L24, it's taken from juju/charms
<bogdanteleaga> though you can kinda tell from validate
<mup> Bug #1496972 opened: juju bootstrap fails to successfully configure the bridge juju-br0 when deploying with wily 4.2 kernel <hs-arm64> <juju-core:Confirmed> <https://launchpad.net/bugs/1496972>
<wwitzel3> bogdanteleaga: ok, but where do we actually determine which hooks for a given charm should be run?
<wwitzel3> bogdanteleaga: I mean, where is the information that Validate is check, set
<wwitzel3> s/check/checking
<mup> Bug #1496972 changed: juju bootstrap fails to successfully configure the bridge juju-br0 when deploying with wily 4.2 kernel <hs-arm64> <juju-core:Confirmed> <https://launchpad.net/bugs/1496972>
<wwitzel3> bogdanteleaga: since a hook is really any name and interface and those could be free form, I'm trying to understand the process of how we know that the %s-relation-changed hook should be run for example
<mup> Bug #1496972 opened: juju bootstrap fails to successfully configure the bridge juju-br0 when deploying with wily 4.2 kernel <hs-arm64> <juju-core:Confirmed> <https://launchpad.net/bugs/1496972>
<bogdanteleaga> wwitzel3: you mean how we choose which one is next?
<bogdanteleaga> wwitzel3: https://github.com/juju/juju/blob/master/worker/uniter/resolver/loop.go#L70, here we choose the next operation that should run and then we run it until some error
<bogdanteleaga> (this just changed in master btw, so it's different in other versions)
<wwitzel3> bogdanteleaga: not quite, so if I add a new provides interface to my charm and upgrade, how does juju know now there is a new hook to be run?
<wwitzel3> bogdanteleaga: I'm looking for the code that actually generates the list of hooks
<mup> Bug #1496975 opened: upgrade-charm --switch does not warn about risks <juju-core:New> <https://launchpad.net/bugs/1496975>
<wwitzel3> katco: ping
<katco> wwitzel3: hey, in meeting
<wwitzel3> rgr
<wwitzel3> NewLiveHookSource
<bogdanteleaga> wwitzel3: I don't think there's a hook list and they're just taken from there, NextOp does the magic, it chooses a hook such that the localstate converges towards the remotestate
<bogdanteleaga> for example the install hook: https://github.com/juju/juju/blob/master/worker/uniter/resolver.go#L220
<bogdanteleaga> and config changed close below
<wwitzel3> right, but there is something that determines the remotestate which is essentially the list of hooks to be run
<bogdanteleaga> wwitzel3, found anything? I can't see anything obvious in the resolvers and those are supposed to set whatever is going to run
<mup> Bug #1496997 opened: TestErrorReadingEnvironmentsFile calls chmod on win <blocker> <ci> <regression> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1496997>
<sinzui> wallyworld: reports.vapour.ws doesn't know about this job yet. http://juju-ci.vapour.ws:8080/view/Juju%20Revisions/job/run-unit-tests-mongodb3/5/console
<perrito666> thumper: are you around?
<thumper> perrito666: yes, but on calls
<thumper> whazzup?
<perrito666> thumper: I need someone a bit more local provider savvy than I
<thumper> for what?
<wallyworld> sinzui: i guess it will soon?
<mup> Bug #1328129 changed: panic when apiaddresses not present <cts-cloud-review> <panic> <sts> <juju-core:Triaged> <https://launchpad.net/bugs/1328129>
<perrito666> thumper: I am trying to introduce some changes to mongo connection and I have the sensation that ensureMongoAdmin is never being called but I am not getting logs either so I thought of asking while trying a few things
<thumper> sure it isn't in jujud/ bootstrap?
<thumper> grep ?
<perrito666> thumper: It should be, I just have the symptoms as if it wasn't ;) then again the lack of logs makes my life a bit harder :(
<sinzui> wallyworld: thumper: katco: deej in the #juju channel has an upgraded env that lost the apiaddress maybe bug 1366887. Who can help find and fix the mongo document
<sinzui> waigani: menn0 : do either of you have experience fixing a mongodb that lost its apiaddress during an upgrade? deej in #juju needs help fixing a broken env where all the units lost and continue to lose the apiaddress
<menn0> sinzui: i'll take a look
<menn0> sinzui: give me 2 mins
<perrito666> wallyworld: your lateness stands?
<wallyworld> perrito666: got back in time
<perrito666> wallyworld: cool, call me and tell me the lottery numbers then marty
<perrito666> :p
<wallyworld> i'll ask Biff
<perrito666> mm, all internet seems to be in poor shape today, I wonder if the earthquake broke something
 * thumper heads to the gym to roll around
<perrito666> good boy
#juju-dev 2015-09-18
<mwhudson> davechen1y: could the shootout awfulness be caused by softfloat?
<mwhudson> i.e. minux's vfp detection failing for whatever reason
<davechen1y> i don't think so
<davechen1y> the same command takes ~ 1.47 when run directly
<davechen1y> but i will check
<davechen1y> that bloody check
<davechen1y> it was a stalemate for months
<davechen1y> if he's broken the vfp checking i'll be royally pissed
<mwhudson> davechen1y: fair enough
<davechen1y> minux always wants the most complete solution
<davechen1y> even if it is 10x more complicated
<davechen1y> hmm, nope, that's not it
<davechen1y> it's some subtle shit between running inside go tool dist test
<davechen1y> and just running the shell script directly
<miken> thumper: did those logs have the info you needed for bug 1494542?
<mup> Bug #1494542: config-changed error does not cause error state <juju-core:In Progress by thumper> <juju-core 1.24:In Progress by thumper> <juju-core 1.25:In Progress by thumper> <https://launchpad.net/bugs/1494542>
<mwhudson> davechen1y: that's pretty messed up
<thumper> miken: still evaluating, lots of calls first thing today
<thumper> miken: definitely useful though
<thumper> miken: I'm trying to work out now why it has logged what it logged
<thumper> miken: the uniter did enter error state, but it then immediately went to running
<thumper> trying to work out why now
<miken> Great (wasn't sure if I'd set the debug env option too late to grab the useful bits)
<thumper> wallyworld: hey, I have veebers over for a visit, catch up later this afternoon?
<wallyworld> thumper: sure, ok, just ping
<davechen1y> mwhudson: you were right
<davechen1y> it was the goarm=5
<davechen1y> but not in the way we expected
<mwhudson> davechen1y: haha wtf
<davechen1y> keep it simple, ffs
<mwhudson> davechen1y: talking of minux his comment on https://go-review.googlesource.com/#/c/14635/2 is fairly out of left field
<thumper> miken: found the problem
<davechen1y> mwhudson: honestly, it's easier just to avoid the conversation sometimes
<thumper> miken: juju run bumps the unit out of the error status
<mwhudson> davechen1y: yeah i only replied because i was replying to ian anyway
<davechen1y> arguing doesn't change his position
<miken> thumper: wow - always, or just in some special condition there?
<thumper> miken: unknown just yet...
<thumper> miken: it never used to record what it was doing
<thumper> miken: I'm guessing this changed when the agent-status stuff was added
<thumper> miken: appears to be always just now
 * thumper continues looking
<miken> ack, thanks thumper
<thumper> wallyworld: can we chat about this uniter issue, I think you know that bit of code more now
<wallyworld> thumper: sure, just otp right now
<wallyworld> finished soon
<thumper> kk
 * thumper goes to make a coffee
<wallyworld> thumper: finished meeting now
<thumper> ok
<wallyworld> what's the issue?
<wallyworld> the config error one?
<thumper> wallyworld: just need to deal with menn0 first :)
<wallyworld> sure, i might grab a quick bite to eat
<thumper> wallyworld: ping when you're free
<wallyworld> thumper: ok, meet you in 1:1 in a minute
<thumper> wallyworld: oh FFS, there is no way to get the current UnitStatus through the uniter API
<wallyworld> oh really
<thumper> sorry, AgentStatus
<wallyworld> let me check
<thumper> you can get UnitStatus but not AgentStatus
<wallyworld> thumper: damn, yeah, you can set agent status
<wallyworld> never was a need for it till now
<thumper> yeah
<thumper> bugger!
<menn0> axw: you got a moment?
<axw> menn0: what's up?
<menn0> axw: did you create configInternal.SetAPIHostPorts?
<menn0> in agent/agent.go
<axw> menn0: erm possibly, can't remember. there were a couple of people doing things related to that. why?
<menn0> i've been dealing with a broken IS env
<menn0> it wasn't the root cause but the way this method works made things worse
<menn0> Two issues
<menn0> 1. it modifies the list of API addresses just before updating the config but without telling anyone.
<menn0> So the logs emitted by apiaddressupdater are lies
<menn0> The API addresses it logs are being set aren't the actual ones that get written.
<menn0> this made the issue I was looking at somewhat harder to diagnose
<menn0> 2. It only writes out one address per server. Any reason it shouldn't write out every cloud local address of each server?
<menn0> In the case of the problem I was looking at, the LXC bridger address filtering wasn't working (I now know why, separate issue)
<menn0> so there was the LXC bridge address and the actual API server address
<menn0> and it was picking the LXC bridge address because it sorted first
<menn0> if it had written out both the env would have been ok
<axw> menn0: I don't recall why only one is set... where is it only writing one? I can't see that bit of code
<axw> menn0: oh I see
<axw> menn0: misread
<axw> menn0: I can't think of any reason why we shouldn't try them all
<menn0> servers has one element per server
<menn0> each element is a list of addresses for that server
<axw> menn0: IIRC we try them all in parallel anyway
<menn0> axw: ok cool. i'll write up ticket about that
<menn0> axw: we do try them in parallel
<menn0> axw: and what about the lack of visibility with the filtering? I was thinking the filtered list should be returned or the filtering should be moved to outside SetAPIHostPorts.
<axw> menn0: looking at that now, one minute
<menn0> axw: I looked at reading the config back out in apiaddressupdater - it's doable but more work than I realised
<menn0> axw: a number of interfaces need updating
<menn0> axw: it's probably cleaner to have SetAPIHostPorts return the filtered list
<axw> menn0: I think my thinking was that the filtering of internal addresses was particular to the consumer of the apiaddressupdater worker (i.e. the agent)
<axw> menn0: the other filtering in apiaddressupdater was added later. originally it was just a dumb proxy
<axw> menn0: having SetAPIHostPorts return the filtered list sounds fine to me
<menn0> axw: ok sounds good
<menn0> axw: thanks
<axw> menn0: cheers. sorry for the confusing code/logging
<menn0> axw: np. it's one of those things that you only realise is problematic once you're trying to debug an issue around it.
<miken> thumper: So we should be able to `juju run` commands while the unit is in an error state? I expected to be able to `juju ssh` (as it's not a hook context), but assumed juju run should fail if the unit is in an error state?
<miken> thumper: certainly in our case, we only want to do the juju run commands if the unit is not in an error state, but I can imagine there are scenarios where you want to juju run to fix an error state, or similar.
<mup> Bug #1497094 opened: SetAPIHostPorts shouldn't record just one address per API server <juju-core:Triaged by menno.smits> <juju-core 1.24:Triaged by menno.smits> <juju-core 1.25:Triaged by menno.smits> <https://launchpad.net/bugs/1497094>
<mup> Bug #1497098 opened: Addresses logged by apiaddressupdater aren't accurate <juju-core:New for menno.smits> <https://launchpad.net/bugs/1497098>
<thumper> miken: yes, we should be able to juju run things
<thumper> miken: the problem is it doesn't set the state back to what it was
<thumper> I'm working on it, but it isn't a quick fix unfortunately
<miken> thumper: I'm not blocked on it at all - so no pressure here. I was just surprised by the comment on the bug that the intent is in fact to execute the juju run when the unit is in an error state. Great.
<thumper> miken: it is entirely valid though in general to allow juju run and juju actions while in an error state
 * thumper has had a very interrupted afternoon
<thumper> will attempt some more work later
<thumper> have a good weekend folks
<mup> Bug #1496639 changed: juju get incorrectly reports boolean default values <canonical-bootstack> <juju-core:Invalid by menno.smits> <https://launchpad.net/bugs/1496639>
<frobware> axw, thanks for the review. what's the procedure from here to getting this merged? Do I need to wait for other reviews? +2?
<axw> frobware: no worries, thanks for catching it. nope, only one LGTM is required. there was talk of using +1/+2 recently, but unless you see +1 or +2 in a review, assume a ship it is +2
<axw> frobware: +1 meaning "looks ok, get another opinion", +2 meaning "ship it"
<axw> frobware: have you merged a PR into juju before? you just need to add $$merge$$ as a comment on github
<frobware> axw, nope, first one. whoop! :)
<axw> frobware: yeah, I ruined your other first one, sorry about that ;)
<frobware> axw, probably very wise...
<frobware> axw, so post $$merge$$ what happens? ci jobs run unit tests and ci tests before actually merging?
<axw> frobware: there's a bot watching the PRs. it'll see the $$merge$$, and check that master isn't blocked (critical regressions block master; when you fix them there's another special string to add to the comment). if it's not blocked, it'll start a job on jenkins (juju-ci.vapour.ws). the jenkins job will pull master, merge in your branch locally, run unit tests, and if all is well it'll merge into master
<frobware> axw, so the "other" ci tests are orthogonal to that, where other != unit tests?
<axw> frobware: there's also a periodic CI job in jenkins which runs various functional tests, e.g. upgrades
<frobware> axw, aha
<axw> frobware: that one doesn't gate landings BTW, that's asynchronous
<frobware> axw, ack
<axw> if CI picks up an error later, a critical regression will be logged and the branch will become blocked until it's fixed
<ashipika> frobware: one minor comment.. otherwise LGTM..
<frobware> dooferlad, fwereade: standup?
<dooferlad> frobware: thought I was in it
<frobware> dooferlad, is friday always a different HO link?
<frankban> dooferlad: hi, could you please take a look at http://reviews.vapour.ws/r/2710/ ? it's part of an ongoing work for supporting bundle deployment, and it's proposed against a feature branch
<dooferlad> frobware: looking
<dooferlad> frankban: looking
<frankban> dooferlad: thanks!
<dooferlad> (sorry frobware - tab completion without brain)
<mup> Bug #1497229 opened: apiserver: TestAgentConnectionsShutDownWhenStateDies is very slow <unit-tests> <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1497229>
<mup> Bug #1497241 opened: look who's back <juju-core:New> <https://launchpad.net/bugs/1497241>
<wallyworld> frobware: just fyi, i fixed the milestone on bug 1496750
<mup> Bug #1496750: Failed worker can result in large number of goroutines and open socket connections and eventually gets picked on by the OOM killer <juju-core:Fix Committed by frobware> <juju-core 1.25:Fix Committed by frobware> <https://launchpad.net/bugs/1496750>
<wallyworld> the milestone should be 1.25-beta1
<frobware> wallyworld, thx
<wallyworld> as the fix was committed to the 1.25 branch prior to beta1 shipping
<frobware> wallyworld, from the bug report how do you get to the PR that fixed the issue?
<frobware> wallyworld, I didn't see any obvious link
<wallyworld> frobware: sadly, we lost that ability once we were told to move to github
<wallyworld> so it has to be added manually
<wallyworld> paste the url into a bug comment
<frobware> yuck
<wallyworld> yeah
<wallyworld> lp is great
<wallyworld> having 2 systems means we get a bit of friction
<frobware> dooferlad, have some time to HO w.r.t.1416928
<dooferlad> frobware: sure, give me a moment or two to get a glass of water and task switch.
<frobware> dooferlad, or we just schedule in a bit, your call.
<frobware> dooferlad, don't want to context switch unnecessarily
<dooferlad> frobware: now is fine.
<frobware> dooferlad, great! https://plus.google.com/hangouts/_/canonical.com/juju-sapphire
<TheMue> dooferlad: thx for merging my PR. and greetings from my gardening ;)
 * TheMue 's current change is moving a lot of plants inside the garden
<katco> natefinch: hey just saw you declined this morning's stand-up? not going to be there?
<natefinch> katco: did I?
<natefinch> katco: not intentionally.  brt
<katco> natefinch: looks like
<mup> Bug #1497297 opened: TestFindToolsExactInStorage fails for some archs Again <blocker> <ci> <precise> <regression> <test-failure> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1497297>
<mup> Bug #1497301 opened: mongodb3  SASL authentication failure <ci> <mongodb> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1497301>
<katco> wwitzel3: at any rate, if you end up getting a patch, just go ahead and land it for 1.25-beta1. no reason not to
<mup> Bug #1497312 opened: make assignment of units to machines use a worker <juju-core:New> <https://launchpad.net/bugs/1497312>
<mup> Bug #1497316 opened: TestUniterSteadyStateUpgrade permission problem <ci> <intermittent-failure> <windows> <juju-core:Incomplete> <https://launchpad.net/bugs/1497316>
<natefinch> katco: it looks like somehow I got set to "no" on all the standups, but I can't figure out how to change that back
<cherylj> Can I get a review for the blocker?  http://reviews.vapour.ws/r/2714/
<natefinch> cherylj: ship it!
<cherylj> thanks natefinch
<mup> Bug #1497351 opened: Cloudsigma 403 destroy instance <cloudsigma-provider> <destroy-environment> <juju-core:Triaged> <https://launchpad.net/bugs/1497351>
<mup> Bug #1497355 opened: TestCollectWorkerStarts failed to instantiate metric recorder <ci> <intermittent-failure> <unit-tests> <wily> <juju-core:Triaged by cmars> <https://launchpad.net/bugs/1497355>
<katco> natefinch: need rubber stamp for forward-port: http://reviews.vapour.ws/r/2715/
<katco> natefinch: also for master: http://reviews.vapour.ws/r/2716/
<natefinch> katco: I think we had said that forward ports didn't need reviews unless there were non-trivial merges that needed to be made
<natefinch> katco: but rubber stamped anyway :)
<katco> natefinch: ah, missed that. ty
<katco> natefinch: will have other patches soon for easy reviews
<natefinch> katco: cool
<katco> natefinch: any idea what cpupower actually represents?
<katco> natefinch: nm, juju help constraints explains
<katco> natefinch: actually, looks like amazon stopped using ecus... do you think it's safe to omit that for the new type?
<katco> natefinch: jees, yet again nm. looks like they've stopped using it, but they still list it: http://aws.amazon.com/ec2/pricing/
<katco> natefinch: quick review: http://reviews.vapour.ws/r/2717/
<natefinch> katco: looking
<katco> natefinch: ty
<natefinch> katco: ship it
<katco> natefinch: ty again. btw not finding this mythical default variable, so do you have your fix handy?
<katco> natefinch: re: default ec2 type
<natefinch> katco: just change default CPUPower to 300
 * katco cringes
<natefinch> what, isn't that obvious? ;)
<katco> natefinch: i wonder if i could do something still small, but a little more intentful
<natefinch> actually 201-300 also works
<natefinch> pick 243 and see how long it takes someone to notice
<katco> natefinch: e.g. if no constraints are set, set a constraint of instance-type ?
<natefinch> katco: the problem is that if someone sets like... 1 cpu core
<natefinch> you still want it to pick m3.medium
<katco> natefinch: but then that's not the default, right? they've set constraints?
<natefinch> katco: but the way it works now, the CPUPower is the magic number that keeps you from accidentally getting a tiny instance.
<katco> natefinch: (sigh) k, i'll maybe address that in my 1.26 branch
<natefinch> katco: yes... I'm not saying it's good... just saying that tweaking the minimum default CPU power will be consistent with the way it works now.
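The selection behaviour katco and natefinch are tuning here can be sketched generically. Everything below is invented for illustration (type names, fields, and the cost figures are not juju's real ec2 provider code or real AWS prices); it just shows why a minimum cpu-power of 300 plus a minimum memory of 2048 steers the cheapest-match logic away from tiny and compute-skewed instance types:

```go
package main

import "fmt"

// instanceType is a hypothetical stand-in for an EC2 instance-type record.
type instanceType struct {
	Name     string
	Mem      uint64  // MiB
	CPUPower uint64  // ECU-style units
	Cost     float64 // USD/hour, illustrative figures only
}

// cheapestMatching picks the lowest-cost type meeting the minimum memory
// and cpu-power constraints -- the shape of logic the defaultCpuPower=300
// and mem=2048 discussion above is nudging.
func cheapestMatching(types []instanceType, minMem, minCPU uint64) (instanceType, bool) {
	var best instanceType
	found := false
	for _, t := range types {
		if t.Mem < minMem || t.CPUPower < minCPU {
			continue // fails a minimum constraint
		}
		if !found || t.Cost < best.Cost {
			best, found = t, true
		}
	}
	return best, found
}

func main() {
	types := []instanceType{
		{"m1.small", 1740, 100, 0.044},  // excluded: too little cpu-power and memory
		{"c1.medium", 1740, 500, 0.130}, // excluded: the mem=2048 floor rules it out
		{"m3.medium", 3840, 300, 0.067}, // cheapest survivor
	}
	best, ok := cheapestMatching(types, 2048, 300)
	fmt.Println(ok, best.Name) // true m3.medium
}
```

With only the cpu-power floor, c1.medium would qualify and cost comparison alone decides; the memory floor is what makes m3.medium the deterministic default, which is the two-knob interaction natefinch recalls.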
<cmars> could use a review of a simple timing fix if someone has a moment, http://reviews.vapour.ws/r/2719/
<katco> cmars: will look in a sec
<katco> natefinch: curious... it's picking a c1.medium with defaultCpuPower set to 500
<natefinch> katco: that's odd.  Is this live testing or running unit tests?
<katco> natefinch: live testing
<katco> natefinch: sorry, defaultCpuPower set to 300
<natefinch> katco: oh yeah... I remember that happening..
<natefinch> katco: I remember there were two things I had to do... I also had to set default mem to 2048
<natefinch> (to exclude the c1.medium)
<natefinch> katco: since it's cheaper than the m3.medium
<natefinch> katco: except no, it looks more expensive
<natefinch> katco: weird... it shouldn't pick the c1.medium since it's more expensive
<katco> natefinch: not so weird. the constraints code has all kinds of implicit obfuscation
<natefinch> katco: but I remember that happening when I was working on the code too.
<natefinch> yep
<natefinch> exactly what you were fixing
<katco> natefinch: bleh... the problem is we need a concept of "discouraged usage"
<natefinch> katco: yep, I was thinking the exact same thing when I was looking at it.
<katco> natefinch: alas, another thing for the 1.26 patch
<natefinch> katco: yeop
<katco> natefinch: lol if i specify instance-type=m3.medium in the live test: "no instance types in test matching constraints "instance-type=m3.medium""
<mgz> katco: in all regions?
<natefinch> :p
<katco> mgz: of course i have no idea because this test is sitting on a giant stack of layered testing framework
<mgz> :)
<natefinch> katco: yeah, the tests don't use the real list of instance types.... this is exactly the problem I was complaining about before.  We mock them out.... though I can't imagine why we're mocking out static data.
<katco> natefinch: time for actual bootstrapping
<natefinch> effectively:
<natefinch> var sky = "blue"
<natefinch> func TestSkyColor(t *testing.T) {
<natefinch> 	sky = "red"
<natefinch> 	...
<natefinch> }
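natefinch's sketch above can be expanded into a runnable illustration of the problem: a test that overwrites package-level "static" data means the test no longer exercises what production sees. All names here are invented for the sketch (this is not juju's actual test code):

```go
package main

import "fmt"

// instanceTypes stands in for data that ought to be static, like the real
// EC2 instance-type list the tests were mocking out.
var instanceTypes = []string{"m3.medium", "c1.medium"}

// defaultInstanceType is the production code path under test.
func defaultInstanceType() string { return instanceTypes[0] }

func main() {
	fmt.Println(defaultInstanceType()) // m3.medium

	// What the test suite effectively does: swap the "static" data out.
	// Any assertion made after this point validates the fake list, not
	// the real one -- which is why live bootstrapping behaved differently.
	instanceTypes = []string{"test.tiny"}
	fmt.Println(defaultInstanceType()) // test.tiny
}
```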
<katco> natefinch: last review: http://reviews.vapour.ws/r/2720/
<katco> natefinch: also, how's the fix coming?
<natefinch> katco: coming... gonna take some time over the weekend, but I kinda expected that with the amount I've been out this week.
<katco> natefinch: k
<katco> natefinch: time for a 25-line review?
<natefinch> katco: yeah.
<natefinch> katco: two things: one theoretical and one concrete
<natefinch> katco: the theoretical is.... in theory, the memory and cpupower from the default type could still have us choosing a different default type
<katco> natefinch: very true, but i thought that supported existing behavior: we don't specify a default type, just default constraints
<katco> natefinch: this happens to select the correct thing until i can get the more comprehensive patch landed which should include "discouraged" types
<natefinch> katco: ok... I'm just thinking that people might put too much faith in the fact that they're giving the memory and cpupower of a default type, but there's no real guarantee that the type you get out is the type you put in.
<natefinch> katco: anyway, that's theoretical and not really a big deal
<natefinch> katco: and actually the second one is probably also only theoretical, in practice (heh)
<natefinch> katco: the code sets the memory constraint even if the CPU power was set... which could cause a different instance type to get chosen after this change
<natefinch> katco: I kind of assume no one uses anything except maybe RAM as a constraint, and if they want to choose an instance type, they specify it with the instance type constraint.
<katco> natefinch: hmm
<natefinch> katco: I have a hard time believing anyone is out there going juju deploy mycharm --constraints=CPUPower=650 (or whatever)
<katco> natefinch: well... isn't this still doing the right thing though? aren't we saying "this is juju's default minimum memory constraint"?
<katco> natefinch: it's a new concept, i agree with that.
<natefinch> katco: I guess that's true.... I guess we're already making it choose something different by default explicitly... m3.medium instead of m1.small
<katco> natefinch: how's this: given this will change in 1.26, it's probably suitable for 1.25
<natefinch> katco: I think it'll be fine, yes.
<natefinch> katco: at worst, .01% of our users will have to very slightly modify their scripts, which were probably being oddly AWS specific, without actually being AWS specific.
<katco> natefinch: soo... shipit?
<natefinch> katco: couple very minor comments posted
<natefinch> katco: oh, also, I'd love love love a test that actually tests if you get m3.medium by default
<natefinch> katco: but I understand that there may not be time for that, and we're not worse off than we were before.
<katco> natefinch: so would i. i had one written up, but i'd have to delve into where those types are coming from
<katco> natefinch: i can confirm manual testing proves it out though
<natefinch> katco: yeah, that's how I was testing it, too.
<perrito666> I definitely hate  mongo 3
<natefinch> katco: we can just hang our head in shame about the whole mess and fix it right in 1.26
<natefinch> gotta run
<alexisb> perrito666, I take it your having fun w/ the mongo 3.0 upgrade spike
<perrito666> alexisb: what in my comment lead you to use the word fun in that sentence :p ?
<perrito666> they very nicely changed this feature http://docs.mongodb.org/manual/core/authentication/#localhost-exception in which our whole replicaset initiation is based :p
<mup> Bug #1497456 opened: TestResolveCharm regex mismatch <ci> <regression> <test-failure> <unit-tests> <juju-core:Triaged> <juju-core use-charm.v6-unstable:Triaged> <https://launchpad.net/bugs/1497456>
#juju-dev 2015-09-20
<mup> Bug #1497653 opened:  Service   stuck in "dyning" and machine in "down" state because VM terminated from AWS EC2 console <juju-core:New> <https://launchpad.net/bugs/1497653>
 * thumper ignores the review queue for an hour to work on this bug
<mup> Bug #1496639 opened: juju get incorrectly reports boolean default values <canonical-bootstack> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1496639>
<thumper> I guess the way to work out where this code is needed is to delete it and see what tests break, right?
<alexisb> thumper, ping
<thumper> alexisb: morning
<alexisb> morning
 * thumper files another tech-debt bug
<mup> Bug #1497788 opened: Clean up state Status methods <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1497788>
<thumper> argh
<thumper> go 1.5.1 changed an error message slightly which causes a test to fail...
 * thumper sighs
<thumper> and this one...
<thumper> Error: entity mismatch; got len 1; want 1
<thumper> although I think that is just extra data around the actual failure
<thumper> wallyworld: got a minute?
<menn0> thumper: I'm pretty sure the calendar says wallyworld is off today
<thumper> ugh
<thumper> he should not show as around then
<menn0> thumper: is there a place to record things to change/fix for Juju 2.0
 * thumper berates wallyworld
<thumper> menn0: not that I know of, but would be a good idea to create one
<thumper> I'm currently hitting a real PITA
<menn0> thumper: "/wii wallyworld" says he's been idle for over 46 hours :)
<thumper> like I check that
<thumper> that command didn't work for me
<menn0> thumper: for your client it might be: /quote WHOIS wallyworld wallyworld
<menn0> yes twice
<menn0> thumper: /wii is common alias IIRC
<thumper> oh, the first one did work
<thumper> it just output it somewhere weird
 * menn0 emails alexisb about where we should record Juju 2.0 items
<thumper> menn0: JFDI, start a doc
<menn0> thumper: well ideally I'd like a milestone in LP
<menn0> thumper: screw it... i'll just do a doc
 * thumper head-desks
<thumper> because it feels better than fixing this bug
<menn0> thumper: what's the bug
<thumper> menn0: https://bugs.launchpad.net/juju-core/+bug/1494542
<mup> Bug #1494542: config-changed error does not cause error state <juju-core:In Progress by thumper> <juju-core 1.24:In Progress by thumper> <juju-core 1.25:In Progress by thumper> <https://launchpad.net/bugs/1494542>
<menn0> thumper: ah that one
<thumper> menn0: I have a feeling that the work arounds are more work than just doing it right in the first place
 * thumper will think about it while exercising at lunch
 * thumper afk
#juju-dev 2016-09-19
<alexisb> do we have documentation somewhere that walks one through how to use clock to get rid of time. (in favor of clock)?
<alexisb> or are there good examples in the code I could follow?
<natefinch> alexisb: instead of using stuff from time.Time, use clock.Clock, which has an interface close enough to identical to what's offered in time.Time, and use that
<natefinch> er sorry, instead of using stuff from the time package
<alexisb> well that is convenient, thanks natefinch
<natefinch> alexisb: there's a clock.WallClock value that you use in production that wraps the time package into an interface-able value
<menn0> alexisb: a fairly simple example of some code which uses an injected Clock is the resumer (worker/resumer/)
<thumper> hmm...
<thumper> why does restoring a model give it a new uuid?
<thumper> perhaps controller model uuid
<natefinch> I presume because nothing ensures the old model is actually gone?
<thumper> except the agents can't connect to the new model
<thumper> because they all have the old model uuid
<natefinch> ahh
<natefinch> conundrum
<thumper> there seems to be a break in logic somewhere
<natefinch> indeed
<natefinch> I have only seen backup and restore work being done from afar; I know none of the details, unfortunately
<thumper> As part of restore, all known instances are
<thumper> configured to treat the new controller as their master.
<thumper> from the help
<thumper> let's hope that they are
<menn0> thumper: so I figured out the i/o timeout thing
<menn0> thumper: fixing it will be non-trivial
<thumper> oh?
<menn0> thumper: CodeRetry is only being handled if the API call itself returns an error
<menn0> thumper: but the error is usually turning up in an Error field
<thumper> ah
<thumper> ugh
<menn0> thumper: and the apiclient code doesn't check there
<thumper> yeah, that is horrible
<menn0> thumper: I'm considering some kind of reflection magic
<thumper> and you have the situations where something may have worked while another timed out
<menn0> thumper: which makes that pretty terrible
<thumper> it does
<menn0> thumper: you only want to redo the parts of the request which failed
<thumper> so not trivial
 * menn0 is annoyed this didn't occur to him before this work was done
<menn0> thumper: perhaps the right place to handle this is inside or just above State
<menn0> in our database layer
<menn0> and have that retry
<menn0> i'm going to have lunch and mull it over
<thumper> in juju/txn package?
<thumper> ok
<menn0> just doing it for txns isn't enough because queries can fail too
<menn0> thumper: it'll probably have to happen in both the txn layer and the collections layer (state/collection.go)
 * thumper nods
<menn0> we really need to get rid of mongo....
<menn0> lunch!
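The bulk-call shape menn0 is describing can be sketched generically. The types below are illustrative stand-ins, not juju's real params structs (the real names, fields, and error codes differ): the RPC itself returns no error, but individual results carry errors that retry handling must inspect per entry:

```go
package main

import "fmt"

// callError and callResult are hypothetical stand-ins for the per-result
// error shape of a bulk API response.
type callError struct {
	Message string
	Code    string
}

type callResult struct {
	Error *callError // nil means this entry succeeded
}

type callResults struct {
	Results []callResult
}

const codeRetry = "retry" // illustrative code, not juju's real constant

// retryIndexes returns which entries of a bulk response failed with a
// retryable code -- the per-result check the apiclient was missing when
// it only examined the top-level API error.
func retryIndexes(res callResults) []int {
	var idxs []int
	for i, r := range res.Results {
		if r.Error != nil && r.Error.Code == codeRetry {
			idxs = append(idxs, i)
		}
	}
	return idxs
}

func main() {
	res := callResults{Results: []callResult{
		{}, // succeeded: must not be redone
		{Error: &callError{Message: "i/o timeout", Code: codeRetry}},
	}}
	fmt.Println(retryIndexes(res)) // [1]
}
```

This is also why thumper's point makes blanket retries unsafe: entry 0 already succeeded, so only the failed indexes can be redone, which pushes the retry down into the database layer where each operation is individually retriable.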
 * perrito666 reads backup and restore and pops in
<perrito666> Thumper menn0 anything I can do?
<thumper> perrito666: you could start by telling me what it is supposed to do and not do
<thumper> perrito666: I tried to restore into a running controller, and it just dies
<thumper> controller gets stuck
<thumper> restore doesn't work
<thumper> apiserver and state are down
 * thumper goes to walk the dog and think
<menn0> thumper: ping?
<thumper> menn0: back
<menn0> thumper: 1:1? I'm stuck
<thumper> me too
<anastasiamac> natefinch: m guessing u r not really here, r u?
<natefinch> anastasiamac: night time is the right time
<anastasiamac> natefinch: :) re: bug 1552274, r u working on it already?
<mup> Bug #1552274: juju list-credentials inconsistencies between format output <2.0-count> <bitesize> <conjure> <juju-release-support> <rc1> <usability> <juju:In Progress by natefinch> <https://launchpad.net/bugs/1552274>
<natefinch> anastasiamac: wow uh, no, sorry
<anastasiamac> natefinch: awesome, i was going to pick it up but saw ur name on it.. so r u k for me to do it? :)
<natefinch> anastasiamac: totally
<anastasiamac> \o/
<natefinch> anastasiamac: that was during the london sprint. I probably started looking at it, then ran out of time, then was on vacation for 11 days, and forgot about it
<anastasiamac> nps, i'll count this as my win ;) might need to get u to review tho :-P
<natefinch> anastasiamac: sure thing
<natefinch> at one time I heard someone was going to fix needing to use --config=something.yaml during bootstrap to get reusable defaults on bootstrap.  Anyone know if that ever happened?
<natefinch> thumper, menn0 ^ ?
<menn0> natefinch: that's been done. define a custom cloud as per https://github.com/juju/juju/wiki/Faster-LXD#suggested-juju-config-for-lxd-deployments
<natefinch> menn0: oh, hmm.  I wondered why we could add clouds for things like AWS.
<anastasiamac> menn0: axw: as tech strategy gurus, what's ur take on defining json tags outside of cli package?
<anastasiamac> m seeing inconsistencies btw json/yaml output because yaml tags are defined but json aren;t and we are pulling juju/juju objects without transforming them at presentation layer
<axw> anastasiamac: better not to do that, because of unintended consequences
<axw> i.e. changing something outside CLI-related packages probably shouldn't affect presentation
<natefinch> anastasiamac: we're supposed to be transforming them at the presentation layer into display values
<axw> and it's surprising when it does happen
<anastasiamac> axw: awesome \o/ that was my take too :)
<axw> and leads to bugs.
<menn0> anastasiamac: what axw said :)
<anastasiamac> axw: menn0: \o/
<thumper> anastasiamac: want a trivial branch to do?
<thumper> anastasiamac: I'm in the middle of debugging restore
<anastasiamac> thumper: mayb :) what's needed?
<anastasiamac> thumper: m trying to figure out how to print value but not the label of the field in json :)
<thumper> cmd/juju/status/output_tabular.go:83 	const maxVersionWidth = 7
<thumper> that number is way too small
<thumper> e.g. "9.0.0~b3" is ellided to "9.0...."
<thumper> just make it something like 12
<thumper> or 15
<anastasiamac> thumper: sure :)
<anastasiamac> thumper: was going to say 15 :)
<anastasiamac> thumper: do u know how to NOT print field label but value in json? kind of like `yaml:","`
<thumper> json:"-"
<thumper> I think it is also  jaml""-"
<thumper> I think it is also jaml:"-"
<thumper> yaml:"-"
<thumper> perhaps "," works too, but I think the official supported way is the dash
<anastasiamac> thumper: that prints actual "-", when ommitted completely as in "", it prints captialised property name
<anastasiamac> anyway, i'll get back to it later, I'll increase version line chars first for :)
<anastasiamac> u*
<thumper> for yaml or json
<thumper> ?
<thumper> https://golang.org/pkg/encoding/json/#Marshal
<thumper> Each exported struct field becomes a member of the object unless
<thumper> - the field's tag is "-", or
<thumper> - the field is empty and its tag specifies the "omitempty" option.
<thumper> pretty sure yaml does the same
<anastasiamac> thumper: for json: if tag is "", then property name printed; if tag is "-", then "-" printed. i've played with it for hours now ;)
<thumper> umm... no
<thumper> check that you are getting a hyphen
<anastasiamac> for yaml if "-", filed name is not printed (as I want) but value is printed ;)
<thumper> and not an ndash
<anastasiamac> anyway.. m going to come back to it later
<thumper> we are using this in juju
<thumper> cmd/juju/cloud/show.go
<anastasiamac> thumper: oh.. what's the diff btw dash and hyphen? i thought we only had problems with underscores...
<thumper> hyphen, ndash and mdash are all different characters
<thumper> search the codebase for `json:"-"`
<thumper> many uses
<anastasiamac> thumper: awesome! tyvm!
<anastasiamac> thumper: https://github.com/juju/juju/pull/6273
<thumper> thanks
<thumper> hmm...
<thumper> it seems I can't use a specific identity file key for juju ssh
<thumper> menn0: ^ ideas?
<menn0> thumper: no I think you can. juju ssh/scp/debughooks now get the appropriate public key for the target via the API and write that to a temp file which is then passed to ssh/scp.
<veebers> thumper: The 'default' hosted model has (or will be?) removed right?
<menn0> s/can/can't/
<menn0> thumper: ^
<thumper> menn0: quick call?
<menn0> thumper: sure
<anastasiamac> thumper: certainly, "-" from cmd/juju/cloud/show gets shown as "-"
<anastasiamac> thumper: but thnx for the tip, i'll keep looking
<veebers> anastasiamac would you know? ^^ (re: default model)
<anastasiamac> veebers: i know that there is a bug for it but I do not know where it is at
<anastasiamac> veebers: let me find a reference for ur pleasure and tracking :)
<veebers> anastasiamac: awesome, thanks :-)
<anastasiamac> veebers: bug 1621295
<mup> Bug #1621295: Drop the 'default' model <juju:Triaged> <https://launchpad.net/bugs/1621295>
<thumper> anastasiamac: https://play.golang.org/p/6Bdpkkk7lw
<thumper> anastasiamac: yes it does
<anastasiamac> veebers: and since we r trading info, do u happen to know if retry-provisioning is tested vigorously/rigorously/at all?
<anastasiamac> thumper: it ignores the field entirely ;) i want to ignore field name but print its value :)
<thumper> what?
<anastasiamac> thumper: field/property name
<veebers> anastasiamac: no idea sorry, I can bring it up in tomorrows standup if you would like
<anastasiamac> veebers: it's k.. just curious.. it can simmer for now ;)
<veebers> anastasiamac: :-)
<perrito666> Thumper a bit late but, restoring into a running controller should work, what exactly are you trying to do? About to start my day and I can most likely take over
<babbageclunk> Whoa, early in the morning for you perrito666! Or are you travelling?
<perrito666> I have irc on the phone so I answer from bed before getting up
<perrito666> Currently is 6:30 anyway and am already up
<lazypower> whats my excuse at 4:30?
<lazypower> oh right :D wizarding hour
<perrito666> You have no life
<lazypower> and that
<dimitern> morning all
<babbageclunk> anyone: do I need to be on the VPN, given that I'm at Bluefin?
<dimitern> babbageclunk: unlikely, but shouldn't hurt if you're
<babbageclunk> dimitern: If I turn it on I can't get to irc.canonical.com
<dimitern> babbageclunk: btw I've only noticed last week during my maas testing the devices are cleaned up along with their containers - nice work! :)
<dimitern> babbageclunk: really? well, come to think of it I had issues while on the vpn and irc.c.c connection drops
<dimitern> babbageclunk: I usually connect to irc first and only then start openvpn
<babbageclunk> dimitern: :)
<dimitern> frobware: hey, I've just landed the gomaasapi change thumper reviewed, and since your QA was OK, I'm thinking of starting to land the chain of PRs I have up - objections?
<dimitern> frobware: alternatively, I can push the last commit bumping deps of gomaasapi and give the whole thing a QA run today first
<dimitern> *facepalm*
 * dimitern fixed thumper's suggestions, but should've pushed before merging :/
<frobware> dimitern: I would like to do a complete QA run today; want to sync?
<frobware> dimitern: my 1 broken case was fixed, but that's because I only had 1 broken setup to test. Let's create a spreadsheet with the combos we want to test and split the effort up.
<dimitern> frobware: let's sync
<frobware> dimitern: in the standup HO
<dimitern> frobware: omw - bluetooth issues again
<dimitern> ooh mgz :)
<dimitern> you're an +o now? sweet :D
<mgz> dimitern: yo
<mgz> just channel spammer prevention
<anastasiamac> natefinch: axw: list-credentials with json output - https://github.com/juju/juju/pull/6274
<dimitern> ;)
<frobware> dimitern: tar -cvf ~/juju-2-0-rc1-for-brad.tar --transform='s!^!juju-2.0-rc1-for-brad/!' bin/juju*
<natefinch> gah, we need a style guide for the CLI :/
<rick_h_> natefinch: heh getting there
<rick_h_> we have most of the rules in place, polish time
<natefinch> rick_h_:  I was working on the output for when there's no results for a list-* command and decided to check some of the other list-* commands...  I realized we don't have rules like "always/never capitalize the first word"
<natefinch> also there's an unfortunate inconsistency between our flag parser's error output and the rest of juju's error output.  The flag parser writes "error: <something>" and everything else writes ERROR <something> (with error in red)
<natefinch> rick_h_: should I file bugs?
<mgz> well, there's a difference between the logging output
<mgz> and the general command response
<mgz> that's just somewhat blurred in a few cases
<rick_h_> natefinch: there's a bug on the different table list- output already I think
<rick_h_> natefinch: but yea, if you see anything file bugs please
<rick_h_> frobware: dimitern morning how goes folks?
<dimitern> frobware: morning!
<dimitern> oops
<dimitern> rick_h_: morning :) - goes well - we're waiting for a pre-merge CI run for the VLAN bug
<rick_h_> dimitern: cool, how are we feeling about it?
<dimitern> all seems to work fine so far, after my last change to gomaasapi
<dimitern> rick_h_: I feel pretty good about it, no issues so far from several sources that tested it
<rick_h_> dimitern: k, can we see about getting a build out to folks in the bug for early testing perhaps?
<dimitern> (well, some issues do exist, but they seem to be related to broken MAAS setups, not the fix itself)
<dimitern> rick_h_: frobware sent a build to a couple of guys
<rick_h_> dimitern: k
<rick_h_> dimitern: I just want to make sure we're solid on this
<dimitern> rick_h_: sure, so far we are good AFAICS
<rick_h_> dimitern: k, so branches are all reviewed/going in for landing?
<dimitern> well now.. wouldn't it be nice if MAAS CLI had a tabular output mode, like Juju? http://paste.ubuntu.com/23202865/
<rick_h_> dimitern: heh, have to see what this UX stuff they're doing in 2.1 is
<dimitern> rick_h_: there are several branches up, almost all changes are reviewed/approved
<rick_h_> dimitern: k, we have to get things landed EOD so they can look at a build tomorrow
<rick_h_> frobware: ^
<dimitern> rick_h_: but frobware and me wanted to see how the CI run will go on that branch, which contains all the stacked changes
<rick_h_> dimitern: k
<frobware> dimitern: +1 on the maas output - nice1
<dimitern> rick_h_: do you recall when/why it was decided to get rid of the tabular sections - e.g. [TITLE] and just go with headers?
<rick_h_> dimitern: because the headings were split enough and folks aren't meant to be parsing the tabular output, which they tend to do with the [apps] and such
<rick_h_> so it was wasted space
<mgz> the `juju generate-config` command is totally dead, right?
<mgz> just updating some doc, and it still references that
<dimitern> it might be worth having sections for that maas-tabular experiment I'm trying out though..
<mgz> we just have the common public clouds usable without that now...
<mgz> evurk, this neeeds much more rewriting...
<rick_h_> dimitern: typical thing is to try to fit whatever you would title into the heading in a clean way
<rick_h_> dimitern: so you have the info you need, and none of the info you don't
<dimitern> rick_h_: good point, yeah
<rick_h_> dimitern: e.g. a heading "interfaces" is kind of redundant when you see eth0, eth1, etc.
<rick_h_> but for storage maybe a STORAGE NAME heading fits better
<dimitern> rick_h_: great suggestions, cheers!
<dimitern> btw the output is just a nice side-effect
<dimitern> the reason I'm doing this is to help automate maas setups via the API - easily change a node's networking for example
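For anyone following along, the heading style rick_h_ describes (uppercase column headers instead of [TITLE] sections) is easy to sketch with the standard library; the column names below are invented for illustration, and this is not juju's actual formatter:

```go
package main

import (
	"bytes"
	"fmt"
	"text/tabwriter"
)

// renderIfaces renders rows under uppercase column headings, so the
// headings themselves carry the information a [TITLE] banner would.
func renderIfaces(rows [][3]string) string {
	var buf bytes.Buffer
	w := tabwriter.NewWriter(&buf, 0, 1, 2, ' ', 0)
	fmt.Fprintln(w, "INTERFACE\tMAC\tSUBNET")
	for _, r := range rows {
		fmt.Fprintf(w, "%s\t%s\t%s\n", r[0], r[1], r[2])
	}
	w.Flush() // tabwriter aligns the columns on flush
	return buf.String()
}

func main() {
	fmt.Print(renderIfaces([][3]string{
		{"eth0", "52:54:00:aa:bb:cc", "10.0.0.0/24"},
		{"eth1", "52:54:00:dd:ee:ff", "192.168.1.0/24"},
	}))
}
```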
<rick_h_> dimitern: ping for standup
<dimitern> sorry omw
<rick_h_> babbageclunk: https://hangouts.google.com/hangouts/_/canonical.com/ui-daily?authuser=1
<frobware> dimitern: ping
<dimitern> r
<dimitern> frobware: pong
<frobware> dimitern: HO? I have a failing case (lxdbr0)
<mup> Bug #1625243 opened: Simplestreams for rackspace are now live! <cloud-images:Fix Released by josvaz> <juju-core:New> <https://launchpad.net/bugs/1625243>
<dimitern> frobware: sorry, got distracted.. joining standup HO?
<frobware> dimitern: great!
<dimitern> frobware: omw
<mup> Bug #1625243 changed: Simplestreams for rackspace are now live! <cloud-images:Fix Released by josvaz> <juju:In Progress by sinzui> <https://launchpad.net/bugs/1625243>
<mup> Bug #1625243 opened: Simplestreams for rackspace are now live! <cloud-images:Fix Released by josvaz> <juju:In Progress by sinzui> <https://launchpad.net/bugs/1625243>
<mup> Bug #1625243 changed: Simplestreams for rackspace are now live! <cloud-images:Fix Released by josvaz> <juju:In Progress by sinzui> <https://launchpad.net/bugs/1625243>
<frobware> dimitern: http://pastebin.ubuntu.com/23203649/
<rick_h_> frobware: where did the maas scripts end up?
<rick_h_> frobware: in github did we get it moved to /juju?
<frobware> rick_h_: nope
<rick_h_> frobware: k, where are they?
<frobware> rick_h_: github.com/frobware/kvm-maas
<rick_h_> frobware: k, if I create a fork there then and we can move that way please?
<frobware> rick_h_: if you create the fork you're saying continue development in the [juju] fork?
<rick_h_> frobware: yes I'm looking to email various teams that drive maas and try to point folks at a location for working together
<frobware> rick_h_: ack
<rick_h_> frobware: so will propose we start there and let the bikeshedding begin
<frobware> :)
<rick_h_> frobware: dimitern you guys chat? Where are we at?
<frobware> rick_h_: otp with dimitern
<rick_h_> frobware: k
<frobware> rick_h_: want to join? I think we should make a call RSN
<rick_h_> frobware: sure thing, linky?
<frobware> rick_h_: Standup: https://hangouts.google.com/hangouts/_/canonical.com/core?authuser=0
<frobware> rick_h_, mgz, sinzui: I just pushed another update for the vlan bug (lp-1566791-bridge-all). Do we want to wait for another CI run?
<rick_h_> frobware: asking QA to force a run in the other channel
<mgz> frobware: to address issues, or just with the new change rolled in as well?
<frobware> mgz: bumped gomaasapi deps; we found a panic in there which has since merged in gomaasapi
<mgz> frobware: gotcha
<perrito666> k all going out for a bit bbl
<natefinch> ew
<natefinch> $ juju list-spaces
<natefinch> cannot list spaces: spaces not supported (not supported)
<natefinch> ERROR cannot list spaces: spaces not supported (not supported)
<natefinch> I guess maybe spaces are not supported?
<redir> ruh?
<natefinch> I think it's because I'm on lxd, which would make sense... but the error message is somewhat kinda  redundant and repetitive
<babbageclunk> alexisb: free?
<alexisb> babbageclunk, for you always :)
<alexisb> babbageclunk, I will meet you in our standup HO
<alexisb> https://hangouts.google.com/hangouts/_/canonical.com/alexis-bruemmer?
<alexisb> https://hangouts.google.com/hangouts/_/canonical.com/alexis-bruemmer
<alexisb> thumper, perrito666 you guys available for a HO?
<alexisb> also, good morning thumper :)
<thumper> morning
<alexisb> hmm perrito666 may not be back yet
<anastasiamac> alexisb: rick_h_: have to reboot. b there in asec- sorry
<perrito666> Alexis still going home, I think another 30 min
<perrito666> Alexisb^
<alexisb> perrito666, np, just ping when you are available
 * perrito666 is delayed by his wife shopping list
<perrito666> If I don't get a second car soon I am going crazy
<alexisb> thumper, on the HO if you want to meet
<perrito666> alexisb: thumper back
<alexisb> perrito666, do you have skype?
<perrito666> alexisb: sure, let me snap install 1990
<alexisb> lol
<perrito666> alexisb: logging in
<alexisb> let us know when you are logged in perrito666
<perrito666> alexisb: just a few mins, had to re-download the client just finishing
<alexisb> perrito666, are you downloading from 1990??
<perrito666> apparently, the static got me a version that requires a set of i386 libs, lemme use my phone
<perrito666> I am in
<thumper> what is your skypename?
<perrito666> perrito666
<babbageclunk> menn0, thumper, perrito666: Can someone review this charm change? https://github.com/juju/charm/pull/221
<babbageclunk> I mean charm url change
<menn0> babbageclunk: sure, I'll take a look
<babbageclunk> menn0: Ta!
<alexisb> babbageclunk, if you are still on feel free to join the actual standup :)
<alexisb> thumper, ping
<thumper> babbageclunk: good to see you
<babbageclunk> thumper: :)
<perrito666> anastasiamac: https://bugs.launchpad.net/juju/+bug/1625390 <-- this is by design
<mup> Bug #1625390: add  user fail after remove some user <juju:New> <https://launchpad.net/bugs/1625390>
<perrito666> deletion of a user "marks them as deleted" for "auditability"; this was ian's idea iirc, perhaps redir can tell you more
<anastasiamac> perrito666: i think we'd need a message to say "user is inactive, re-activate user" or something like that
<perrito666> for which we need re-activate user command
<redir> When we implemented user removal we determined it was best to mark users deleted and leave them in the db: 1) if a user has been deleted and an org wants to see who made change x, we need to be able to resolve the user that made the change even if they are deleted. 2) If you created a new user with the same name, they would have surprise permissions and blame.
<redir> anastasiamac: inactive is different than deleted
<redir> deleted is gone permanently, no recovering.
<anastasiamac> redir: i understand. but since they r trying to add the same username, we can tell them that this user existed but is now inactive
<anastasiamac> this way, admin will decide whether to re-activate or to add a different username
<redir> anastasiamac: IIRC we return a unique error if the user is a deleted user. And that choice was made with the intention of handling it in the add command at a future date.
<perrito666> if you ask me it was a bad decision, one can want to delete and re-create users
<redir> anastasiamac: there's no reactivating a deleted user
<redir> perrito666: I think that casual users may want that, but "enterprise" users won't
<perrito666> redir: really? sounds like re-hire is a clear use case for this
<redir> perrito666: I've worked for a couple very large orgs more than once. And each time I get a new username.
<perrito666> redir: I am curious what kind of username that was
<redir> perrito666: and email name-1 name-2 name-1@example.com
<redir> etc..
<perrito666> lol
<redir> yes, but if you are a large enterprise that has a lot of regulatory compliance like, say a telecom, healthcare, gov't etc... that's what they do
<perrito666> thumper: please keep me posted on the restore issue
<thumper> perrito666: ack
<redir> perrito666: if they want to deactivate a user that is available too.
<redir> and if the actual id were not the username, you could also recreate users with the same username and still identify them uniquely, alas.
<redir> adding a flag or command to resurrect a user is also possible if desired.
<redir> whatevs
<anastasiamac> redir: ack. i'll comment on the bug. Having worked for large enterprises and govs, I agree for auditing purposes, users r never deleted and generally once de-activated are not re-activated :)
#juju-dev 2016-09-20
<babbageclunk> 'night everyone! o/
<menn0> babbageclunk: good night!
<veebers> Who is the best person to talk to about the jujugui charm store?
 * redir goes eod
<redir> bbl
<redir> veebers: I don't know. Maybe someone from https://github.com/juju/charmstore/graphs/contributors who's in a compatible timezone
<veebers> redir: worth a shot, thanks :-)
<alexisb> is there a way for me to easily force a unit into error state?
<perrito666> Alexis a custom broken charm
<perrito666> Just take any charm and blow the install hook with a raise
<alexisb> perrito666, ok
<menn0> axw: ship it for your first azure PR
<axw> menn0: thanks!
 * thumper runs restore again...
<anastasiamac> natefinch: hi :)
<natefinch> anastasiamac: howdy
<anastasiamac> natefinch: since I'm in list-credentials, could u please not do the fix related to https://bugs.launchpad.net/juju/+bug/1596687 for it?
<mup> Bug #1596687: command list output not consistent <2.0> <usability> <juju:In Progress by natefinch> <https://launchpad.net/bugs/1596687>
<anastasiamac> natefinch: i'll do it in my current pr
<anastasiamac> the "No foo.." fix ;)
<natefinch> anastasiamac: sure
<anastasiamac> natefinch: \o/
<natefinch> anastasiamac: all I did was add a check in formatCredentialsTabular, if len(credentials.Credentials) == 0 { fmt.Fprintln(writer, "No credentials to display."); return nil; }
<anastasiamac> natefinch: waht about other formats like yaml and json?
<anastasiamac> what* :-P
<natefinch> anastasiamac: I had assumed we only care about tabular, since that's the "human readable" output.  But maybe it's appropriate for yaml and json too, if we write the notification to stderr
<rick_h_> those are machine readable and an empty value seems less interesting.
<rick_h_> I guess standardizing on either an empty {} or empty string might be good to check
<rick_h_> but I'd suggest just sticking with the current readable work and landing that in case we can get it into rc1
<anastasiamac> natefinch: rick_h_: for tabular and yaml, I would have thought we'd want human-readable "no foo.." but for json leave it as empty? :D
<rick_h_> vs including it all in one card
<rick_h_> anastasiamac: no, not for yaml since it's parsable
<anastasiamac> rick_h_: ok. so only tabular for now, the rest leave a sis?
<anastasiamac> as is* bleh..
<rick_h_> umm ok :)
<anastasiamac> :)
<anastasiamac> got it \o/ will do for credentials :) tyvm natefinch, rick_h_
<natefinch> ok.. just a note, we could still write out to stderr, without messing up people parsing the stdout of yaml and json... to a terminal user, it would look the same, but a script wouldn't see it.
<rick_h_> natefinch: rgr, and worth a polish bug as follow up
<rick_h_> natefinch: as well as checking we output a standard object when there's no data in each format
<rick_h_> natefinch: but as follow up please
<natefinch> rick_h_: will do. For now I'll just twiddle the tabular output, which I think is a pretty good 90%+ solution
<rick_h_> ty
<natefinch> rick_h_: so... I've actually been doing this as stderr, just because that's how our CLI context object works... when you say "write out this informational message", it goes to stderr.  Is that ok, or should I switch it to stdout?
<rick_h_> natefinch: hmm, it seems odd since it's not an error, but I feel like this is something I'd tend to get wrong and rely on folks like jam to correct me on
<natefinch> rick_h_: AFAIK, the usual mantra is use stderr for logging/human readable info, and use stdout for data output.  But I'm happy to have someone else weigh in.  Juju standards may be different.
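The convention natefinch mentions (data on stdout, human-facing notices on stderr), combined with the empty-list message, can be sketched like this; the function name and message are illustrative, not juju's actual cmd package:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// renderCredentials returns what should go to stdout (parseable data)
// and what should go to stderr (a human-facing notice). Keeping them
// separate means a script parsing stdout never sees the notice, while
// a terminal user sees both interleaved as usual.
func renderCredentials(names []string) (data, notice string) {
	if len(names) == 0 {
		return "", "No credentials to display.\n"
	}
	return strings.Join(names, "\n") + "\n", ""
}

func main() {
	data, notice := renderCredentials(nil)
	fmt.Fprint(os.Stdout, data)
	fmt.Fprint(os.Stderr, notice)
}
```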
<alexisb> ok last time I checked this should work right??: charm pull-source cs:xenial/postgresql
<alexisb> or just charm pull-source cs:postgressql
<alexisb> or just charm pull-source cs:postgresql
<alexisb> hmm cassandra works
<thumper> fuck what?
<thumper> how can this happen?
 * thumper digs
<natefinch> thumper: hey, do you think this is something we could just change? https://bugs.launchpad.net/juju/+bug/1625194
<mup> Bug #1625194: flag parsing error doesn't match juju error styling <usability> <juju:Triaged> <https://launchpad.net/bugs/1625194>
<thumper> natefinch: sure... best way would be to just write it to the logger
<natefinch> thumper: not sure I follow.
<thumper> instead of writing "error: %v"
<thumper> use loggo logger
<thumper> it will do the coloring
<natefinch> ahh, yes
<thumper> veebers: still trying to work out why the machine agent isn't getting the api updates
<veebers> menn0: would you know how the version numbering works when 'juju upgrading'? I started with 2.0-rc1-xenial-amd64, expected 2.0-rc1.1 but got 2.0-rc1.2 (note the .2)
<veebers> thumper: ack
<veebers> menn0: hmm, I think it might be due to me thinking of the version as returned by the binaries '--version' as opposed to what was reported in the running status
<anastasiamac> axw: updated https://github.com/juju/juju/pull/6274 to have the comment and the new output for tabular output when no credentials
<anastasiamac> axw: i think it's good to go...
<thumper> veebers: how do you want these binaries
<thumper> ?
<veebers> thumper: are you able to scp them up to the machine from yesterday?
<thumper> probably
 * thumper thinks
<axw> anastasiamac: LGTM
<anastasiamac> axw: \o/
<axw> thumper menn0: will either of you have a chance to review my 3rd azure PR today? I'm going to need 2 reviews, so can't just be OCR. specifically just need https://github.com/juju/juju/pull/6272/commits/8e0933ad924cd72e1e2433e0c9a5fec78a236b73 reviewed
<menn0> axw: i'll take a look once i'm off this call with thumper
<axw> menn0: thanks
<menn0> thumper: I installed the minimal centos7 ISO in a VM while we were talking and it comes with /sbin/service too
<thumper> sweet
<thumper> does it do what we expect?
<menn0> thumper: looks like it
<thumper> cool
<menn0> thumper-cooking: yep it works as expected (redirecting commands to systemctl)
<menn0> axw: review done
<axw> menn0-afk: thanks muchly
<anastasiamac> axw: did the review too... funny how menn0-afk and i picked on the same things ;)
<axw> anastasiamac: TYVM
<axw> anastasiamac: I'm just dropping the TODO for now. implementing that means more changes to credentials in general
<axw> don't really have time for it right now
<anastasiamac> axw: \o/ thank you for doing it - very exciting to see interactive in azure
<anastasiamac> axw: I'd drop it too, there does not seem to be much need for it right not
<anastasiamac> now*
<axw> anastasiamac menn0-afk: thanks for the reviews, it's all landed in time for RC1. woot :)
<anastasiamac> axw: \o/ u r the champion
<babbageclunk> mgz: ping?
<thumper> ha ha
<thumper> got it
<thumper> well, at least some of it
<babbageclunk> mgz: ping?
<thumper> anyone feel like reviewing a restore branch? https://github.com/juju/juju/pull/6282
<thumper> this is really quite urgent for rc1
<thumper> anyway, it is almost 9pm
 * thumper is done
<anastasiamac> \o/
<perrito666> Thumper ship it with a small comment
<perrito666> Well github is very reviewable from my phone I love that
<mgz> babbageclunk: yo, worry
<mgz> -w+s
<babbageclunk> mgz: hey - if I've got a last minute change to charm URL formats that I'm rushing to finish, is it too late to get it into the RC1 build?
<mgz> it's fine to commit to master either way
<mgz> as far as I'm aware, nothing was kicked off in the way of release last night
<mgz> if the change lands now, can discuss when coming to it if that rev can go in the rc as well
<babbageclunk> Ok great - can you ping me if that's happening and I haven't landed my change? (I'll try to keep an eye out for it.)
<mgz> sure thing
<babbageclunk> mgz: Thanks!
<voidspace> I thought it was a quiet morning, I wasn't signed into IRC...
<voidspace> frobware: is the MAAS meeting happening today?
<voidspace> frobware: it's in my calendar
<frobware> voidspace: yep - as always. It's only me and mpontillo that turn up these days.
<voidspace> babbageclunk: if you're free at 2pm you *might* find it useful (if you have any questions or feedback about MAAS)
<voidspace> frobware: I can come today, I haven't done much on MAAS, but it's good to stay in touch with what's happening
<babbageclunk> voidspace: hopefully I'll have this stuff done, so I'll try to come along.
<voidspace> babbageclunk: cool
<macgreagoir> frobware: I was there last week. I must have been too quiet :-)
<frobware> macgreagoir: ignore my sweeping generalisations...
 * macgreagoir ignores frobware until further notice
<babbageclunk> frobware, voidspace, macgreagoir: Can I please get a review of https://github.com/juju/names/pull/74? It's pretty short!
<frobware> babbageclunk: on the Q
<babbageclunk> frobware: Is that like a train or...
<frobware> babbageclunk: wreck.. to complete your analogy! :-D
<babbageclunk> frobware: :)
<frobware> babbageclunk: reviewed
<babbageclunk> frobware: Thanks!
<macgreagoir> babbageclunk frobware: I added a review too, but I only see mine :-/
<macgreagoir> (PR 74)
<babbageclunk> macgreagoir: Thanks!
<babbageclunk> frobware, I can't see yours either - are you just pretending to have done reviews for the mad props?
<babbageclunk> frobware: Or maybe you just didn't hit the complete button or something? I haven't done a review since we switched to github reviews.
<frobware> weird
<frobware> babbageclunk: I see my comments in http://reviews.vapour.ws/r/5723/
<frobware> babbageclunk: ah... so I'm still using RB.
<macgreagoir> frobware: Just seen your review for me in RB too. I was looking in GH :-)
 * frobware still wonders why we aren't using Gerrit.
<babbageclunk> frobware: Oh yeah, that would do it! Cool, I'll read it there.
<frobware> babbageclunk, macgreagoir, jam, voidspace: anyone - http://reviews.vapour.ws/r/5720/ :)
<voidspace> frobware: head down right now, but I can get to it in a bit if someone else doesn't
<frobware> voidspace: ack.
<rick_h_> dimitern: want to sync or are you all busy with the rc1 fixes? we can punt to later on?
<dimitern> rick_h_: now is as good as any time :) let's sync
<rick_h_> dimitern: k
<rick_h_> frobware: any reason not to $$merge$$ the branch?
<frobware> rick_h_: not really. didn't want to be both submitter and merger without at least passing it by 1 other person.
<frobware> rick_h_: I can merge now
<rick_h_> frobware: k, I'll add it then :)
<user_____> hi! need your help
<user_____> juju add-machine takes forever
<user_____> cloud-init-output.log shows "Setting up snapd (2.14.2~16.04) …" (last line in file)
<user_____> how to fix this?
<dimitern> user_____: can you try bootstrapping again with --config enable-os-refresh-update=false --config enable-os-upgrade=false ?
<dimitern> user_____: I've seen this can help in those cases
<user_____> thank you! will try
<babbageclunk> macgreagoir: Embarrassing review? https://github.com/juju/names/pull/75
<macgreagoir> babbageclunk: LGTM :-)
<babbageclunk> macgreagoir: Thanks!
<babbageclunk> voidspace: Sorry, can't make the meeting - neck deep in test failures at the moment.
<marcoceppi> what hook is triggered when I run `juju attach` ?
<voidspace> babbageclunk: argh, I'm late anyway!
<rick_h_> marcoceppi: upgrade-charm I believe
<marcoceppi> rick_h_: believe, or confirmed?
<rick_h_> marcoceppi: believe, 90% sure but I've been wrong before
<rick_h_> marcoceppi: docs don't seem to state it, so have to check the code.
<rick_h_> katco: ^ can you confirm I'm not crazy?
<rick_h_> or natefinch ^
<rick_h_> marcoceppi: ok, confirmed: https://github.com/juju/juju/blob/c5326a97429362f1c13b593bc58bc56757f9b3c8/resource/context/cmd/get.go#L62
<natefinch> I missed the context I think
<natefinch> oh, maybe not
<natefinch> yes, upgrade-charm
<marcoceppi> mbruzek: ^^
<mbruzek> ack, marcoceppi I will test this out, thanks
<rick_h_> katco: natefinch frobware dimitern ping for standup
<frobware> rick_h_: hmm. thought I was in there already.
<dimitern> omw
<redir> reboot brb
<rick_h_> frobware: ping, sent a card your way that's important to at least diagnose before other stuff please.
<alexisb> perrito666, ping
<rick_h_> katco: can you please review/QA nate's PR? https://github.com/juju/juju/pull/6285
<katco> rick_h_: sure
<rick_h_> katco: ty
<katco> are we not using review board?
<rick_h_> katco: evidently not. There was talk of using the new GH review system, but I missed that folks were using it officially
<katco> rick_h_: mmm i'm going to review this in RB. i don't have a horse in that race, but i'd prefer we don't do this piecemeal
<rick_h_> katco: k
<katco> natefinch: needs QA steps: ^^^
<natefinch> katco: oops, ok, will add
<natefinch> rick_h_: where do we support spaces, do you know?
<rick_h_> natefinch: maas and aws
<natefinch> rick_h_: ok
<frobware> rick_h_: reluctant to go back to beta15. unless they say it's still broken in beta18.
<rick_h_> frobware: understand, just need to work with them and have it move forward please. I'm nervous this is actually going to be hostname related because it was rabbit, but the fact that it was nova/etc means it might have been the container issues that just landed.
<rick_h_> frobware: so ideally we'd get them to test the RC, but if there's an issue with OIL and OS deploys in 2.0 we need to be on that as it's a bit of bread and butter for us
 * rick_h_ goes for lunchables 
<frobware> rick_h_: so here's a thing - we need a tool that captures lots of stuff about the environment. In trying to repro this can I just use 3 machines on a flat network? Do I need multiple disks? Etc. The bundle is useful but it doesn't describe the rest of the environment. Or put another way, how do we stop raising bugs that initially require a lot of back-and-forth getting said info.
<natefinch> katco: one minute, trying to fix list-spaces... realized I had forgotten it since I hadn't been able to test it
<katco> natefinch: k
<katco> frobware: the idea of an "environment debugger" tool has come up frequently. a way to both snapshot critical info about the env, and to interact with it in other ways than through the controller
<katco> frobware: i think our on-site folks would like such a tool
<katco> frobware: and it would help with critsits
<frobware> katco: the lack of this gates the speed at which we can fix issues.
<katco> agreed
<natefinch> it is a little tricky because there's basically infinite information that *could* matter.
<perrito666> alexisb: pong
<alexisb> perrito666, nevermind
<alexisb> we are all good
<frobware> natefinch: some is better than none. Right now I need info. it's also close to my EOD. which means another day passes.
<frobware> natefinch: but in essence we have compute, storage and network.
<perrito666> alexisb: oh, so you only call me when things are bad, so that is how things are
<natefinch> frobware: true enough.  And a tool we can continue to extend as we think of things we need would still be helpful.  I wrote something like that for my last job  when people kept filing issues without even grabbing logs.
<frobware> natefinch: some description covering the basics would be a head start
<alexisb> perrito666, :) that is what happens when you are a loan teammember on
<alexisb> the manager finds you when things go bad
<perrito666> loan teammember? something tells me I should check my email :p
<alexisb> well reed is at the dentist and christian is close to eod
<alexisb> the rest are not online yet
<alexisb> so that makes you the go-to Ateam dude
<alexisb> you lucky dog
<perrito666> meh, shame on reed, his mouth is occupied not his hands, he could very well be working while his teeth get fixed
<alexisb> lol
<natefinch> wtf
<natefinch> why does list-spaces default to yaml?
<voidspace> natefinch: because we hate you
<natefinch> I knew it!
<voidspace> :-)
<frobware> natefinch: and juju spaces?
<natefinch> frobware: juju list-spaces is just an alias for juju spaces
<rick_h_> frobware: round one would be to template-ize the questions we tend to ask
<rick_h_> frobware: then look at automating those questions with tools
<natefinch> frobware: the default for commands should be tabular, with json and yaml as flag options
 * alexisb takes a break, back in an hour
<natefinch> rick_h_: uh, so, what should I do about the fact that spaces has no tabular output?  That seems like a separate bug.
<rick_h_> natefinch: +1, please file and target to 2.0 GA and we'll have to get it cleared up
<natefinch> cool
<rick_h_> natefinch: ty for the catch
<natefinch> rick_h_: it's amazing how many bugs get shaken out just by running a bunch of commands :)
<rick_h_> :)
<rick_h_> using the product ftw
<natefinch> indeed
<natefinch> rick_h_: https://bugs.launchpad.net/juju/+bug/1625737
<mup> Bug #1625737: list-spaces doesn't have tabular format <juju:New> <https://launchpad.net/bugs/1625737>
<natefinch> katco: replied to your comment btw
<babbageclunk> Anyone know if there's a cunning debugging trick for getting stack traces out of a running go program?
<redir> alexisb-afk: ping
<redir> holler when you're back
<frobware> babbageclunk: if you send SIGQUIT does that work (cannot remember)
<frobware> babbageclunk: "According to the docs of the runtime package, sending a SIGQUIT to a Go program will, by default, print a stack trace for every extant goroutine, eliding functions internal to the run-time system, and then exit with exit code 2."
<frobware> babbageclunk: read all the way to the end of the sentence. :)
<babbageclunk> I mean, I think that'd be ok - it'll get restarted, and I seem to already have bounced it once.
<babbageclunk> frobware: The process in question is basically hanging already.
<frobware> babbageclunk: get some coffee whilst the go routines are printed. Just general advice. :-D
<alexisb> redir, heya, I have a meeting at the top of the hour, but will ping when I am off
 * frobware EOD
<babbageclunk> Where's it going to print to? The jujud process is a daemon
<redir> alexisb: cool
<redir> babbageclunk: what are you trying to print?
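To answer babbageclunk's question more generally: besides SIGQUIT (which exits the process), a daemon can dump goroutine stacks wherever it likes without dying, via the standard runtime/pprof package. A sketch, with SIGUSR1 chosen arbitrarily:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"runtime/pprof"
	"strings"
	"syscall"
)

// goroutineDump returns the stacks of all live goroutines, much like
// what a SIGQUIT prints, but without killing the process.
func goroutineDump() string {
	var b strings.Builder
	pprof.Lookup("goroutine").WriteTo(&b, 1)
	return b.String()
}

func main() {
	// A daemon like jujud would write the dump to its log file
	// rather than stderr; SIGUSR1 here is an arbitrary choice.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGUSR1)
	go func() {
		for range sigs {
			fmt.Fprint(os.Stderr, goroutineDump())
		}
	}()
	// Trigger one dump directly so the example terminates.
	fmt.Print(goroutineDump())
}
```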
<redir> wb katco :)
<natefinch> Me reading a bug report:  Steps to reproduce:  1) Deploy Openstack private cloud.   **rapid banging on the back button**
<perrito666> lol
<natefinch> rick_h_: I don't understand exactly what is wrong in this bug: https://bugs.launchpad.net/juju/+bug/1616200
<mup> Bug #1616200: Error message uses 'local:' URL to refer to local charm. <deploy> <jujuqa> <juju:Triaged by rharding> <https://launchpad.net/bugs/1616200>
<rick_h_> natefinch: looking
<alexisb> redir, available when you are
<rick_h_> natefinch: so local: isn't useful, the invalid name is in the metadata.yaml right?
<redir> where alexisb ?
<rick_h_> natefinch: so the local: blows smoke
<alexisb> 1x1 HO
<katco> redir: sorry, o/
<katco> natefinch: k, tal
<redir> katco: no sweat
<katco> natefinch: were you able to get list-spaces working as well?
<natefinch> katco: no... there's no tabular format for list-spaces, and tabular is the only place we should have this output, so it's basically N/A for now.  I filed another bug about list-spaces missing the tabular output
<katco> natefinch: ah ok. that was the whole thing of list-spaces defaulting to yaml
<natefinch> katco: correct... I thought it was just a bad default, but when I went to look at it, tabular just didn't exist
<katco> oops
<marcoceppi> halp rick_h_ (and others) network spaces in beta18 work against maas and lxd, yeah?
<natefinch> rick_h_: FYI, that local: charm bug... it would be a pretty invasive thing to change.  I agree that the UX is kind of bad:
<natefinch> $ juju deploy ./star_say
<natefinch> ERROR bad charm URL in response: URL has invalid charm or bundle name: "local:win2012r2/star_say-0"
<natefinch> marcoceppi: AFAIK it's AWS and Maas only
<marcoceppi> natefinch: so, if I do a deploy to LXD on a MAAS node
<marcoceppi> no worky?
<natefinch> marcoceppi: oh, no, I meant the lxd provider.... not sure how spaces interacts with containers on maas nodes
<natefinch> marcoceppi: I know they've been working on that area, might be post-18, not sure
<katco> natefinch: point me at that bug? i might have some insight since i've been in that area lately
<katco> natefinch: the local deploy bug
<natefinch> katco: https://bugs.launchpad.net/juju/+bug/1616200
<mup> Bug #1616200: Error message uses 'local:' URL to refer to local charm. <deploy> <jujuqa> <juju:Triaged by rharding> <https://launchpad.net/bugs/1616200>
<marcoceppi> natefinch rick_h_ I'm about to walk into a place tomorrow  where we could /really/ use it
<marcoceppi> so I'd like to know now
<natefinch> katco: updated the bug with an easier repro that makes the problem more obvious
<katco> natefinch: ta
<natefinch> katco: the error is generated from gopkg.in/juju/charm.v6-unstable/url.go ~line 278
<katco> natefinch: can you put that in the bug as well?
<natefinch> katco: heh good idea
<natefinch> katco: we could change the error to only specify the name, but that might have unintended side effects when the rest of the url matters.... maybe not, but I'm not entirely sure, since it's in code that is probably used in a lot of places.
<katco> natefinch: so the bug is that the "local:" schema is used?
<katco> in the error message?
<katco> natefinch: also, it seems like juju did a lot of work before it validated the charm name... i'm just skimming, but did we do a lot of setup for nothing?
<natefinch> katco: seems like the charm is being validated on the server
<katco> natefinch: i wonder if we should do both. but seems like validation of the name should be one of the first things we try
<natefinch> katco: well, I think validation can be done all at once.  It's not expensive.  Maybe the reason we do it on the server is in case validation rules ever change?  The server is really the thing that has final say as to what's valid.
<katco> natefinch: yeah, if we soften/change validation, we wouldn't want the check in a newer client different than an older server
<katco> natefinch: well, for that matter i wonder if we can do it client side, because an older server may be *more* permissive than the client
<natefinch> yep
<rick_h_> marcoceppi: right, only MAAS. there's nothing to setup subnets/etc in lxd yet that we can use. lxd was working on some of that this cycle
<katco> natefinch: maybe we just need to move the validation of the charm on the server to before we do any real work
<rick_h_> marcoceppi: so it has to be raw machines on maas, though I guess there's some things that you can stick in a container and as long as the raw machine is in the space the container will be
<rick_h_> marcoceppi: openstack works in this way on maas and it's deployed into containers on maas
<marcoceppi> rick_h_: well, we're deploying into containers on maas for openstack
<rick_h_> marcoceppi: k, well spaces was written and tested with openstack on maas so should be good then
<marcoceppi> but for whatever reason, containers are being put on one nic and not the other
<rick_h_> marcoceppi: so there was a bunch of container on maas issues that fix landed today and is in the RC being built
<marcoceppi> rick_h_: since there are two nics (two networks) one public ext net and the other internal
<marcoceppi> rick_h_: good to know
<marcoceppi> rick_h_: I'll make them move to rc1 tomorrow
<rick_h_> marcoceppi: but it was typically that the container was only getting the br0 vs the whole list of interfaces
<rick_h_> natefinch: sorry, ran to get the boy from school. So what's up with the local:?
<katco> rick_h_: i think we're first wondering what the bug is. that it's using "local:"?
<thumper> morning
<rick_h_> katco: yes, that the user is told to fix the name of their charm and the local: is nothing to do with the name of their charm
<katco> thumper: heyo
<katco> rick_h_: ahh i see. so it should get rid of "local:trusty/" in that example
<rick_h_> katco: right
<perrito666> thumper: I bow to your sed-fu
<katco> rick_h_: ok, i think natefinch was saying that it would be difficult to fix bc it's done server-side
<thumper> perrito666: it was actually menn0
<rick_h_> katco: I see, and shared code vs something specific to this deploy command situation?
<perrito666> thumper: thanks for that fix also, I had hit a wall regarding that issue
<katco> rick_h_: that might be it. i wonder why we couldn't just modify that error message though
<thumper> perrito666: we were checking many things while trying to work out why the machine agent got the wrong ip address
<rick_h_> katco: yea, the revision is also in that error
<katco> rick_h_: maybe something to follow up on if he's already out for the day
<rick_h_> katco: so it seems we go "bad name, here's the ID to go fix"
<thumper> the race with the peergrouper only became apparent late yesterday
<natefinch> katco, rick_h_ : we certainly could, it's just that the code is very likely used by a lot of different consumers... I'd be afraid we'd break something
<thumper> now to look at the HA CI test
<katco> rick_h_: bc the flipside is also true: if it's shared code, we have the opportunity to fix it for all clients
<katco> natefinch: we'd just be changing an error's text right?
<rick_h_> natefinch: "very likely" doesn't scare me away :P need to see the list of what's affected and if the message makes sense in the other locations then
<katco> i would be grumpy if anyone is doing anything fancy with the text of an error message downstream =|
<natefinch> katco: I'd bet a reasonable sum of money that a ton of tests fail
<natefinch> katco: not that anything actually stops functioning per se
<katco> natefinch: so, i think i've found a sane way out of that mess
<rick_h_> well should only be a couple asserting the sanity check. Most tests will be working on the assumption the charm name is correct
<katco> natefinch: part of the reason our tests are so fragile is that we re-check the same thing all over the place
<rick_h_> I'm not sure how a ton of tests should fail on that corner case failure mode
<katco> natefinch: so if a bunch of tests fail, we should isolate that check into a test that checks that one thing, and delete the checks in all the other tests
<natefinch> rick_h_: have you run our tests? :)
<katco> i.e. this is where the principle of "tests should only test one thing" really pays off
<rick_h_> natefinch: a couple of times, but yes I'm attempting to apply logic
<rick_h_> natefinch: it seems easy enough to see how many will fail
<rick_h_> natefinch: the whole "it could be bad" just doesn't jibe. If it's bad, let's find out how bad. Guessing the outcome is :(
<natefinch> rick_h_, katco: well, I only see 4 places that would fail in core, and like 7 in charm.v6
<natefinch> tests that is
<natefinch> at least from a quick grep of the message
<rick_h_> natefinch: cool, doesn't seem horrible
<natefinch> rick_h_: not at all :)  But I also didn't search all of github/juju  ... there are a lot of repos that interact with charms... others might break.  But I guess it should be obvious enough when that happens for whoever to just fix it.
<rick_h_> natefinch: +1
<thumper> test fix for someone https://github.com/juju/juju/pull/6290
<thumper> now... how to create a new branch locally based on the revert of a revert...
<thumper> natefinch, katco: either of you know?
 * thumper goes to uncle google
 * katco thinks
<katco> thumper: well the revert is just a commit with changes reversed, right? your goal is what, to rebase your local branch off that commit?
<thumper> I landed a branch earlier
<thumper> that has just been reverted
<thumper> and I want to make a branch that reverts the revert and fix the issue
<thumper> I feel it should be simple
<thumper> and I think it is
<thumper> I just don't remember the incantation
<katco> thumper: you should just be able to revert the revert commit... what is it and i'll paste you a command?
<redir> anyone made any docs on snapping juju?
<thumper> does the revert command make a branch?
<alexisb> perrito666, https://bugs.launchpad.net/juju/+bug/1625657
<mup> Bug #1625657: add-model fails erroneously when a cloud is specified and a credential is specified and needs to be uploaded. <juju:Triaged by alexis-bruemmer> <https://launchpad.net/bugs/1625657>
<katco> thumper: git revert --no-commit master^2
<katco> thumper: can't revert the merge-commit, but can revert the revert if that doesn't not make sense :)
<thumper> ok
<thumper> yeah
<thumper> katco: do you remember how to tell go to compile as if another os?
<thumper> hmm
<thumper> is it just `GOOS=osx go test`?
<thumper> yep
<thumper> not sure it will actually run the tests though
<thumper> but it did find the compile error it was getting
<thumper> exec fork error :) - not entirely surprised there
<thumper> well... GOOS=darwin
<perrito666> alexisb: Cordoba
<perrito666> CÃ³rdoba
<perrito666> alexlist: is there a card for that bug?
<alexisb> axw, when you come online hatch will be chasing you down
<redir> where do I publish a juju snap? a namespace? or just make up a name like juju-with-feature-blah
<redir> ?
<thumper> perrito666: still around?
<thumper> perrito666: I'd like to talk restore and HA and juju's expectations
<perrito666> thumper: sure, gimme a moment to get my icq going on
<thumper> perrito666: we can use a hangout
<menn0> redir: the snappy namespace feature didn't really seem to do anything
<thumper> it is just alexisb that my HO hates
<menn0> redir: I think the idea is you just pick a name like "juju-redir-blahblah"
<menn0> I used "juju-menno"
<redir> menn0: tx, I can do that
 * perrito666 starts his acoustic coupling dial
<perrito666> thumper: standup HO is ok?
<alexisb> anastasiamac, I saw your side update on release notes in the standup, can you add it to the 2.0 doc so we can iterate
<thumper> perrito666: ack
<thumper> veebers: would be great if you could join perrito666 and me in the standup HO
<anastasiamac> alexisb: sure, it was cut-and-paste from release minutes but i'll add it to release notes too
<alexisb> anastasiamac, thanks
<veebers> thumper: still there? I can join now
<thumper> veebers: yep
<anastasiamac> alexisb: it's in as "Done and needs elaborating" under "what's new in rc1" :D
<axw> hatch: I'm available for being chased down now
<axw> alexisb: I thought you were going to drop the "d" off resolved?
<alexisb> axw, one sec
<redir> Task 8dd3657e-3470-4262-9253-0558b18d4aef is waiting for execution.
<redir> :/
<redir> and fails
<redir> does the snapcraft yaml included in the repo check out a fresh repo? Looks like it
<menn0> thumper: http://reviews.vapour.ws/r/5733/
<redir> to do a build with a feature, does one use a commit hash in the source-tag field?
<thumper> menn0: shipit
<alexisb> ok axw, sorry about that
<alexisb> so the issue with resolve is that the command doesn't actually resolve anything
<alexisb> it only marks errors 'resolved'
<axw> alexisb: ok. I guess I feel the same thing thumper mentioned yesterday, that it's much the same as "bzr resolve"
<axw> I guess it's fine
<axw> at least the not-scary thing is done by default now
<alexisb> and that was the point of the PR (change the default)
<alexisb> renaming the command in my mind means changing the behavior too
<thumper> alexisb: I was always told that the idea of the "resolved" command was to tell juju that you had resolved the issue yourself
<thumper> not that you expect juju to resolve it
<thumper> so clearing the flag is fine
<alexisb> menn0, ping
<mwhudson> hangouts doesn't want to talk to me
<rick_h_> axw: hmm, is that 40 right? in looking at the numbers I expected the t2.medium to be more cpu power especially as dual core?
<axw> rick_h_: baseline performance is 40%
<axw> rick_h_: when idle, t2 machines accrue credits to surge or whatever it's called
<rick_h_> axw: of the burstable
<rick_h_> axw: hmm, ok
<alexisb> axw, I need just a few minutes before we meet
<alexisb> is that ok?
<axw> alexisb: np, ping when you're ready
#juju-dev 2016-09-21
 * thumper goes to make a test faster
<thumper> well... when the test run with -race is done
<thumper> perhaps food now
<alexisb> ok axw, ready when you are
<axw> alexisb: brt
<redir> I think this testing with snap isn't going to work out so slick
<redir> it requires uploading the agent with the features baked into the backend and the snap can't do that
<redir> alexisb: ^
<redir> or maybe not. I might not understand how juju uses streams
<alexisb> redir, I am pretty sure both menno and wallyworld have done snaps with agents baked in
<alexisb> menn0, ^^^ ??
<redir> alexisb: deploying one now to see
<menn0> redir, alexisb: it can be done. https://github.com/mjs/juju-menno-snap
<menn0> redir, alexisb: here's how to build off an alternate repo and branch: https://github.com/mjs/juju-menno-snap/blob/MM-tabular-trial/snapcraft.yaml
<menn0> that also shows how to include other files from outside of the juju build
<redir> menn0: I just used . as the repo...
 * redir looks
<menn0> redir: if that works that's great too. using a specific branch is probably a bit more repeatable though.
<redir> menn0: so it turns out that it didn't like my named branch earlier because I had a typo in the name.
<redir> :(
<menn0> redir: ah right
<redir> I went from the snapcraft.yaml in the juju repo
<redir> I am guessing adding jujud to the list of snap binaries will give me a working jujud
<redir> but --upload-tools is gone, will a prebuilt jujud work with --build-agent?
<redir> me tries
<redir> unhandled snapcraft exceptions, yay:)
<natefinch> redir: if you have a jujud in the same directory as your juju client with the same version, it automatically uses upload tools
<natefinch> redir: as long as your version is greater than what's in streams (otherwise it'll use streams).
<natefinch> redir: if you have the code, you can use --build-agent to force a rebuild and upload of jujud (basically like the old upload tools)
<redir> natefinch: thanks. I think that will work with menn0's hint to include jujud in the list of binaries in parts: juju: snap:...
<redir> natefinch: it's a snap so there's no code with the binaries
<natefinch> redir: right.  The whole idea is to support snaps... it's just overly complicated because we *also* still support streams
<redir> we need streamcraft !
<natefinch> redir: personally, I really wish we just still had --upload-tools to force it to do what we want.  It removes a lot of the guessing about what juju bootstrap would do.
<redir> looks promising this build and push
 * redir goes to get some exercise and will be back later
<menn0> axw: ping?
<axw> menn0: pong!
<menn0> axw: I'm trying to determine whether will fixed this before he left: https://bugs.launchpad.net/juju/+bug/1608956
<mup> Bug #1608956: local charms can be deleted while still referenced <juju:Triaged> <juju-core:Won't Fix> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1608956>
<menn0> axw: b/c I *think* he has based on the last PRs he submitted and our discussions with him
<axw> menn0: pretty sure it's resolved for 2.0. I'll double check
<menn0> axw: any idea how to have the same local charm referenced by 2 applications?
<axw> menn0: hrm, not sure, I think we auto increment each time we upload don't we?
<axw> menn0: normally there would be refs from multiple units, not applications
<menn0> axw: exactly. i've been playing around with a model and a local charm for a while and I can't think how to get to the situation the ticket describes.
<axw> doesn't make sense for an app to be removed before units though...
<menn0> yeah... either will was confused or there's more to it
<axw> menn0: I suspect it's theoretical, not sure though. there definitely is code to do ref checking now though. Application.removeOps decrefs, and then schedules a cleanup that will fail if the charm is still in use
<menn0> axw: I saw that too, but thought I'd try it out. I guess it's theoretical like you said.
<menn0> axw: i'm going to add some references to PRs and close the ticket. sound good?
<axw> menn0: sounds good
<anastasiamac> menn0: closing theoretical tickets always sounds good \o/
<thumper-dogwalk> winning !!!
<thumper-dogwalk> oh
<menn0> anastasiamac: closing tickets for theoretical problems that were actually fixed despite being theoretical is even better :)
<thumper> got  StatusHistorySuite.TestPruneStatusHistoryBySize from 42s to 1.5s
<anastasiamac> thumper: \o/
<anastasiamac> menn0: closing any tickets is amazing!
<thumper> and under race, from 192s to 8s
<thumper> 8s still seems long
<thumper> but we are inserting 20000 documents
<anastasiamac> i'd take 8s over 192s and run with it ;D
<thumper> changing the batch size from 1000 to 10000 takes it from 1.5s to 1.3s
 * thumper waiting for race test to run
<menn0> thumper: nice!
<thumper> still 8s for -race
<thumper> so will stick with smaller batch size
 * thumper looks at next on hit list
<anastasiamac> thumper: out of curiosity, what was the fix? how did u reduce time so dramatically?
<thumper> stopped inserting one document at a time
<anastasiamac> ha :)
<thumper> with 20000 documents
<thumper> kinda dumb
<thumper> doing 1000 at a time
<anastasiamac> awesome \o/ i wonder if there is something we can do more globally - even if just detect and improve places where we have similarly large sequential operations...
<thumper> not really, it is very specific
<mup> Bug #1597601 opened: ERROR cannot deploy bundle: cannot deploy application: i/o timeout <2.0> <bundles> <deploy> <oil> <oil-2.0> <repeatability> <retry> <juju:Fix Committed by menno.smits> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1597601>
<Mian> Hi, does anyone here know something about  the Xenial version of mongodb charm?  it's not available in the charm store right now
<Mian> is there a schedule or calendar as to when we will release mongodb charm on Xenial
<thumper> menn0: https://github.com/juju/juju/pull/6296
<menn0> thumper: looking
<menn0> Mian: you're likely to have better luck on #juju
<Mian> menn0:  got it, thanks ...
<menn0> thumper: did you know there's already a state/clock.go
<menn0> ?
<menn0> thumper: it looks like you've done things in a better way
<menn0> thumper: but all the things that used state.GetClock should probably get updated to use the injected clock
<thumper> ah
<thumper> I thought there was a clock somewhere
<thumper> but I thought I had imagined it
<thumper> let me fix them up
<menn0> thumper: looks like there's only one place?
<thumper> well, half a dozen
<thumper> on it already
 * thumper runs the tests
<thumper> yay
<thumper> removed lots of TODO
<thumper> and bug references with this branch
<thumper> and *state.State now has a clock
<menn0> thumper: yeah, really good to have this done - thank you!
<thumper> all GetClock references removed, running tests now
<thumper> just to make sure
<thumper> hmm
<thumper> found a reference in worker/uniter
<thumper> has to be the suites with the longest test runs, doesn't it
<thumper> three tests failed in state
 * thumper enfixorates
<thumper> hmm... think I have broken the uniter tests...
<thumper> I think they are waiting to time out
<thumper> PASS: uniter_test.go:1173: UniterSuite.TestActionEvents	39.168s
<thumper> no
<thumper> just that one
<thumper> no, uniter tests all good
<thumper> phew
<thumper> menn0: review updated
<menn0> thumper: looking
<thumper> menn0: makes many things simpler
<thumper> I was caught out by the lease managers not working
<thumper> but that was because I need to restart the workers when you set the clock
<menn0> thumper: I thought of that too but when I checked, so had you :)
<menn0> thumper: ship it
<thumper> w00t
<menn0> thumper: that shaves a bit off landing times then
<menn0> thumper: as well as making state testing a whole lot better
<thumper> at least a minute
<thumper> no...
<thumper> only about 20s
<thumper> the race is where it really sucks
<redir> axw: yt? if you have a minute PTAL https://github.com/juju/juju/pull/6297
<thumper> given how long I worked yesterday, I'm going to call it now
<thumper> I'll check to make sure the branch lands
<thumper> but apart from that, dinner making time
<thumper> laters
 * menn0 is done for now too... long tech board meeting tonight + more calls
<axw> redir: looking
<redir> tx
<redir> having trouble getting snaps to upload a backend with support, but I guess that won't really help either now that I think about it
<redir> :|
<axw> redir: reviewed
<redir> I haven't yet gotten the snap to deploy with an appropriate jujud
<redir> giving up for the day.
 * redir goes eod
<redir> axw: started on changes per your review
<redir> axw: was going to use an environs.RegionSpec since that is what we use elsewhere to help generate the region key
<axw> redir: works for me
<redir> but then found that the param structs for [Unset|Set]ModelDefaults are different shapes
<redir> so I'll refactor UnsetModelDefaults to be shaped like SetModelDefaults and generate the regionspec there to pass to state.
<redir> unless that sets off some alarm bells for you
<redir> axw ^
<redir> nite
<axw> redir: don't really understand what you mean by them being shaped differently. they look the same to me.
<axw> good night
<redir> in case it comes up menn0-afk axw the cli snap for reviewing modeldefaults isn't working because I've been unable to make the snap use the right jujud -- so not going to make the tech board agenda
<redir-afk> axw one has region and cloud tag on the top of the params struct, the other has it with each item in the list
<axw> redir-afk: okey dokey. possibly because rc1 images have been released, and you're building code with version=rc1? may need to rebase. anyway, a job for tomorrow
<axw> redir-afk: neither SetModelDefaults nor UnsetModelDefaults have it at the top.
<redir-afk> axw: you're right it is at the top in the private setModelDefaults.
<redir-afk> sigh
<marcoceppi> frobware: ping, maas question
<voidspace> finally submitted some of my expenses for the juju core sprint
<voidspace> the one in june...
<marcoceppi> voidspace: hah, I still have expense reports from May outstanding >.>
<voidspace> marcoceppi: :-)
<perrito666> Really? I submit them on the go, way easier to track
<voidspace> perrito666: that's much more sensible...
<voidspace> frobware: ping
<voidspace> frobware: do you have opinions on bug 1624495?
<mup> Bug #1624495: operations fails on rackspace because of ipv6 address in dns-name <rackspace> <status> <juju:Triaged by rharding> <juju-ci-tools:Triaged> <https://launchpad.net/bugs/1624495>
<voidspace> frobware: should we filter ipv6 addresses out of dns-name?
<voidspace> frobware: that doesn't seem very forward compatible with work coming "soonish"
<marcoceppi> halp please
<marcoceppi> juju deploy bundle is hanging on this error
<marcoceppi> we can curl the request, we have DNS and network connectivity
<marcoceppi> http://paste.ubuntu.com/23210954/
<marcoceppi> any help is appreciated, onsite, etc
<frobware> voidspace, marcoceppi: pong - (sorry was at the opticians)
<marcoceppi> we can't deploy anything atm
<frobware> marcoceppi: juju version?
<marcoceppi> beta18
<marcoceppi> frobware: ^
<voidspace> frobware: hey, hi
<voidspace> frobware: did you see my question about bug 1624495
<mup> Bug #1624495: operations fails on rackspace because of ipv6 address in dns-name <rackspace> <status> <juju:Triaged by rharding> <juju-ci-tools:Triaged> <https://launchpad.net/bugs/1624495>
<frobware> voidspace: was partially looking through backlog
<voidspace> frobware: rather than filtering ipv6 out we could just prefer ipv4 (always give an ipv4 address if one is available)
<voidspace> frobware: but if only ipv6 addresses are available still return one for dns-name
<frobware> voidspace: in the PB I didn't see anything IPv6 related - is that captured elsewhere?
<voidspace> frobware: PB?
<frobware> voidspace: pastebin from marcoceppi
<frobware> voidspace: http://paste.ubuntu.com/23210954/
<voidspace> frobware: my question is unrelated
<frobware> voidspace: oh
<voidspace> frobware: I'm talking about bug 1624495 and ways to fix it
<mup> Bug #1624495: operations fails on rackspace because of ipv6 address in dns-name <rackspace> <status> <juju:Triaged by rharding> <juju-ci-tools:Triaged> <https://launchpad.net/bugs/1624495>
<marcoceppi> rc1 "fixed" it
<voidspace> frobware: you can address marcoceppi first - it sounds like a higher priority
<frobware> marcoceppi: huzzah
<frobware> voidspace, marcoceppi: I'm a bit confused. we're all talking about the same bug... or so I thought.
<voidspace> frobware: marcoceppi: I have no idea...
<marcoceppi> I just need help, in general, unrelated to any bugs
<marcoceppi> we're onsite in a high pressure situation and weird things are cropping up
<frobware> marcoceppi: ok you got bumped; can I help? what's broken? did rc1 fix things and you are no longer stuck?
<marcoceppi> rc1 got us deploying again
<marcoceppi> we're having issues now where machines aren't being requested by juju in the maas, but we're still looking
<frobware> marcoceppi: MAAS 2.0?
<marcoceppi> maas 2.0, juju 2.0rc1
<marcoceppi> I think it's related to requiring us to put IP addresses into noproxy
<marcoceppi> this environment has an http-proxy, but it means ALL traffic gets routed to that proxy, including traffic to the controller and maas
<marcoceppi> which is annoying
<rick_h_> morning
<marcoceppi> I'm getting a TON of activity in the log, but still no instances in maas booted
<rick_h_> marcoceppi: this is in the MAAS log?
<frobware> marcoceppi: care to share the log
<marcoceppi> juju machine- log
<rick_h_> marcoceppi: k, yea can you pastebin that log and peek at the maas log?
<marcoceppi> trying to pastebin
<marcoceppi> yeah, I can't really, juju scp doesn't run as root, I don't have access to run a pastebin from that server
<rick_h_> marcoceppi: k
<rick_h_> marcoceppi: email?
<marcoceppi> we're getting a few instances booted, but it's taking a very long time for requests to come through
<marcoceppi> http://paste.ubuntu.com/23211052/
<marcoceppi> enjoy 5mb of text
<rick_h_> marcoceppi: hmm, some sort of timeout maybe...what would slow down maas provisioning...
<rick_h_> marcoceppi: heh, yea browser is choking on it
<marcoceppi> we've gotten three of the 10 machines deployed
<marcoceppi> just seems like juju is taking its sweet time making these requests
<marcoceppi> rick_h_: http://paste.ubuntu.com/23211059/ that's the bundle
<rick_h_> marcoceppi: yea, just trying to think wtf. You're saying it's taking a long time for maas to show the machine is pulled by Juju, not that the charms are taking a long time to come up, or a long time for juju to upgrade the machines once up.
<rick_h_> marcoceppi: all the things I'd expect to be slow behind the proxy you aren't hitting, it's the stuff that should be pretty damn fast
<frobware> marcoceppi: can we isolate juju and/or maas. Can you deploy a machine - does that take similar time?
<marcoceppi> frobware: deploy a machine from maas?
<frobware> marcoceppi: deploy from MAAS, taking Juju out of the equation
<marcoceppi> or add-machine from juju
<marcoceppi> frobware: it's instantly allocated, then comes up in a few mins time
<frobware> :(
<frobware> marcoceppi: can you paste $(ip route) from the maas controller
<marcoceppi> we're 23 minutes in, and only three machines were requested from maas
<marcoceppi> jk, another machine was just allocated
<marcoceppi> frobware: http://paste.ubuntu.com/23211068/
<frobware> marcoceppi: so are the machines taking a long time to install packages? (just guessing now)
<marcoceppi> frobware: http://paste.ubuntu.com/23211072/
<marcoceppi> frobware: they're just not being requested from maas
<marcoceppi> frobware: http://paste.ubuntu.com/23211079/
<frobware> marcoceppi: and can you see the console messages when a machine is booting?
<marcoceppi> frobware: http://i.imgur.com/RgSyCAQ.png
 * frobware wonders at 72 cores
<marcoceppi> it's not the speed of the machine booting, in fact it's like 3 minutes from acquired -> ready in juju
<marcoceppi> it's that I've got a lot of machines ready, and I'm 30 mins in, and Juju has only made the request for a handful of the machines
<marcoceppi> we were doing this with beta18 yesterday, without this problem, we moved to rc1 because we lost the ability to deploy a bundle with beta18 this morning
<frobware> marcoceppi: can you login/ssh to a node once it is mostly up and look to see what CPU usage is being consumed
<marcoceppi> 0.00
<marcoceppi> there's a minimal load
<frobware> bleh
<marcoceppi> you want me to look ath the controller cpu usage?
<frobware> marcoceppi: ok, so, $(juju add-machine) without deploying is how long? Do you have a spare machine to do that operation?
<rick_h_> voidspace: ping, need a hand here. I'm trying to look at marcoceppi's logs and find the maas communication section where juju asks for the machine and juju gets it and starts responding: http://paste.ubuntu.com/23211052/
<marcoceppi> well, technically we have a bunch of machines not being used, but they're supposed to be allocated
<frobware> marcoceppi: also, what's the CPU load on the MAAS controller?
<rick_h_> voidspace: any clue as to what in the logs I'm looking for besides all the lines that have something like "machine-6" in them?
<marcoceppi> frobware: 0.23
<marcoceppi> I'll try to add-machine
<marcoceppi> frobware: equally slow. I did an add-machine and we're 2 mins in since I ran the command and it's not yet acquired
<marcoceppi> it's like the maas provider is serializing requests, only issuing the next one after the last machine agent fully boots
<frobware> marcoceppi: and could you send truncated logs around the time you did add-machine - what was the machine number?
<marcoceppi> machine # 10
<marcoceppi> let me try
<frobware> rick_h_: given log size, trying to narrow down to a known machine allocation ^
<rick_h_> frobware: yea, understand
<frobware> rick_h_: that other log made my browser behave like ye-olde-netscape
<rick_h_> need 32gb of ram :) took a sec but loaded here
<marcoceppi> frobware: the machine never came up, I had to remove-machine because we need it for the deploy
<frobware> marcoceppi: ok
<marcoceppi> frobware: http://paste.ubuntu.com/23211105/
<marcoceppi> that's 8000 lines, from just around the time I did a `juju add-machine nfv145.maas`
<rick_h_> frobware: voidspace any clue if this means anything? {"request-id":100,"response":"'body redacted'"} Provisioner[""].MachinesWithTransientErrors
<voidspace> rick_h_: not seen it before, I can grep the code though
<mup> Bug #1613992 opened: 1.25.6 "ERROR juju.worker.uniter.filter filter.go:137 tomb: dying" <canonical-is> <cdo-qa-blocker> <landscape> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1613992>
<voidspace> rick_h_: with an empty response I'd say that means no machines - it's an api facade method that calls into the provisioner_task
<voidspace> rick_h_: no machines with errors I mean
<frobware> rick_h_, voidspace: the other oddity and repeated is: "2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:115 <- [1EC]  {"request-id":1,"type":"Admin","version":3,"request":"Login","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver admin.go:201 hostPorts: [[192.168.0.251:17070 127.0.0.1:17070 [::1]:17070]]
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:140 -> [1EC]  23.963626ms {"request-id":1,"response":"'body redacted'"} Admin[""].Login
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:115 <- [1EC] user-admin@local {"request-id":2,"type":"Client","version":1,"request":"FullStatus","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver.client status.go:181 Applications: map[openstack-dashboard:openstack-dashboard cinder:cinder ntp:ntp ceph-radosgw:ceph-radosgw neutron-api:neutron-api cinder-ceph:cinder-ceph glance:glance mysql:mysql neutron-openvswitch:neutron-openvswitch nova-cloud-controller:nova-cloud-controller ceph-mon:ceph-mon ceph-osd:ceph-osd nova-compute:nova-compute rabbitmq-server:rabbitmq-server keystone:keystone neutron-gateway:neutron-gateway]
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver.client status.go:716 error fetching public address: no public address
<rick_h_> voidspace: the body is redacted so not sure if the body is empty
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver.client status.go:716 error fetching public address: no public address
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver.client status.go:716 error fetching public address: no public address
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver.client status.go:716 error fetching public address: no public address
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:140 -> [1EC] user-admin@local 75.777822ms {"request-id":2,"response":"'body redacted'"} Client[""].FullStatus
<frobware> 2016-09-21 11:42:19 INFO juju.apiserver request_notifier.go:80 [1EC] user-admin@local API connection terminated after 112.508087ms
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12490,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12491,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 865.024µs {"request-id":12490,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 815.317µs {"request-id":12491,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12492,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 702.154µs {"request-id":12492,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12493,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12494,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<voidspace> frobware: how many mb did you paste...
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 724.592µs {"request-id":12493,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 852.802µs {"request-id":12494,"response":"'body redacted'"} InstancePoller[""].InstanceId
<rick_h_> wheeee
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [D7] unit-keystone-0 {"request-id":481,"type":"LeadershipService","version":2,"request":"ClaimLeadership","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:20 DEBUG juju.worker.lease manager.go:217 waking to check leases at 2016-09-21 11:43:00.159215809 +0000 UTC
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [D7] unit-keystone-0 19.749863ms {"request-id":481,"response":"'body redacted'"} LeadershipService[""].ClaimLeadership
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12495,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12496,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<babbageclunk> Oh dear
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 568.786µs {"request-id":12496,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12497,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 698.863µs {"request-id":12497,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 11.057ms {"request-id":12495,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12498,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12499,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 689.344µs {"request-id":12498,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 546.075µs {"request-id":12499,"response":"'body redacted'"} InstancePoller[""].InstanceId
<marcoceppi> goodbye IRC bouncer
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12500,"type":"InstancePoller","version":3,"request":"InstanceStatus","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 1.395166ms {"request-id":12500,"response":"'body redacted'"} InstancePoller[""].InstanceStatus
<marcoceppi> it was nice knowing you
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12501,"type":"InstancePoller","version":3,"request":"ProviderAddresses","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 587.746µs {"request-id":12501,"response":"'body redacted'"} InstancePoller[""].ProviderAddresses
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12502,"type":"InstancePoller","version":3,"request":"Status","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 749.181µs {"request-id":12502,"response":"'body redacted'"} InstancePoller[""].Status
<frobware> 2016-09-21 11:42:21 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12503,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<marcoceppi> BIG WHEELS KEEP ON TURNNINGGGG
<frobware> 2016-09-21 11:42:21 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 1.050716ms {"request-id":12503,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:21 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12504,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:21 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12505,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:21 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 859.082µs {"request-id":12504,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:21 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 696.796µs {"request-id":12505,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:21 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12506,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:21 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12507,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:21 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 619.928µs {"request-id":12506,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:21 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 779.62µs {"request-id":12507,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:21 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12508,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:21 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 1.152377ms {"request-id":12508,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12509,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 837.126µs {"request-id":12509,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12510,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12511,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 775.213µs {"request-id":12510,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 709.194µs {"request-id":12511,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12512,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12513,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 648.18µs {"request-id":12512,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 684.41µs {"request-id":12513,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12514,"type":"Singular","version":1,"request":"Claim","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:22 DEBUG juju.worker.lease manager.go:217 waking to check leases at 2016-09-21 11:43:22.822052442 +0000 UTC
<frobware> 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 10.463619ms {"request-id":12514,"response":"'body redacted'"} Singular[""].Claim
<frobware> 2016-09-21 11:42:23 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12515,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:23 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 788.027µs {"request-id":12515,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:23 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12516,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:23 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12517,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:23 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 998.939µs {"request-id":12516,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:23 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 798.386µs {"request-id":12517,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:23 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12518,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:23 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12519,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:23 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 1.062551ms {"request-id":12518,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:23 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 748.926µs {"request-id":12519,"response":"'body redacted'"} InstancePoller[""].InstanceId
<frobware> 2016-09-21 11:42:24 INFO juju.apiserver request_notifier.go:70 [1ED] API connection from 192.168.0.1:32796
<frobware> 2016-09-21 11:42:24 DEBUG juju.apiserver utils.go:72 validate model uuid: 0af2d6d9-6c9c-4bc4-843c-00c6bccb3675
<frobware> 2016-09-21 11:42:24 DEBUG juju.apiserver request_notifier.go:115 <- [1ED]  {"request-id":1,"type":"Admin","version":3,"request":"Login","params":"'params redacted'"}
<frobware> 2016-09-21 11:42:24 DEBUG juju.apiserver admin.go:201 hostPorts: [[192.168.0.251:17070 127.0.0.1:17070 [::1]:17070]]
<marcoceppi> <3
<marcoceppi> I suppose I could have done that, looking at the ops
<voidspace> :-)
<mgz> surprised flood prevention didn't hit
<mgz> frobware: sorry about that
 * rick_h_ has to run the boy to school, voidspace frobware please see if there's anything we can figure out that would cause a delay in juju asking for a machine and it getting sent back. I see the machines get asked for/found around line 2001 of the log http://paste.ubuntu.com/23211052/ (and might be worth trying to curl the /plain url on that 
<frobware> mgz: well, me too. My machine is crawling atm
<voidspace> marcoceppi: we'd need trace logging enabled to see the body of those MachinesWithTransientErrors calls
<voidspace> marcoceppi: and tracing generates a shit-ton of logging
 * frobware will try and not paste that into IRC
<marcoceppi> voidspace: well, I already have a shit-ton, wouldn't mind making it a shit-tonne
<voidspace> frobware: :-)
<voidspace> marcoceppi: heh, it would at least tell us if that's the issue
<babbageclunk> You could use something like ngrep to watch the requests - they're not https.
<marcoceppi> voidspace: well, we just got the last machine requested
<marcoceppi> and we need to validate the deployment, but I imagine we'll be redeploying in about 20-30 mins
<marcoceppi> and I'll turn on trace at that point
<voidspace> marcoceppi: cool, thanks
<marcoceppi> voidspace: `<root>=TRACE;unit=DEBUG` seem about right?
<voidspace> marcoceppi: yep that should do it
<marcoceppi> okay, different problems
<marcoceppi> we're bringing up lxd machines in this maas, and they are not getting the cloud-init manifest/userdata and as a result, they are not getting agents or networking configured
<marcoceppi> voidspace frobware steel yourselves, trace logging is about to be enabled
 * frobware makes another pledge to not copy it to the channel
<babbageclunk> https://media.giphy.com/media/OCu7zWojqFA1W/giphy.gif
<frobware> babbageclunk: hey, text is cheap!
 * marcoceppi preemptively kicks frobware ;)
<voidspace> :-)
<marcoceppi> do you have to enable trace logging before or after deployment
<voidspace> marcoceppi: it takes effect from whenever you set it
<marcoceppi> I'm still only seeing debug output
<voidspace> marcoceppi: so whenever really...
<voidspace> hmmm
<voidspace> marcoceppi: even in the all-machines log?
<marcoceppi> voidspace: what all-machines log ;)
<voidspace> has that gone now?
<voidspace> logsink.log maybe
<voidspace> I'm bootstrapping locally to check
<marcoceppi> yeah, I'm looking at that one, still just debug
<marcoceppi> oh well, guess you'll get it next time
<voidspace> mgz: you know much about setting juju logging? we want trace logging from the request_notifier
<voidspace> mgz: does this sound sensible, it looks good to me: `<root>=TRACE;unit=DEBUG`
<voidspace> marcoceppi: if I set that logging-config (on the controller model) I see TRACE logs
<marcoceppi> voidspace: I set it on the current model
<marcoceppi> let me do that
<voidspace> probably both would be wise... current model *should* be fine I think (but obviously isn't if we're not getting TRACE)
<marcoceppi> helllooooooooooo data
<mgz> voidspace: seems reasonable
<marcoceppi> voidspace frobware enjoy: http://paste.ubuntu.com/23211230/
<marcoceppi> I can ship these logs to a server to deliver via plaintxt if that's easier
 * frobware temporarily closes IRC, and fetches the trace. "I'll be back!"
<voidspace> :-)
<voidspace> marcoceppi: doesn't look like MachinesWithTransientErrors is anything interesting
<voidspace> marcoceppi: is it just as slow this time round?
<aisrael> rick_h_, do bundles support deploying from a specific channel?
<rick_h_> aisrael: looking
<marcoceppi> voidspace: yes
<marcoceppi> voidspace: we're rolling back to beta18, but aisrael will be updating his juju to 2.0 rc1 to try to replicate
<rick_h_> aisrael: can you try juju deploy $bundle --channel=edge
<aisrael> rick_h_, I think that would work, but the case I'm looking at is wanting to deploy some components from stable, like mariadb, but other components from edge
<rick_h_> aisrael: there's nothing in the bundle definition right now.
<rick_h_> aisrael: the idea was that you have a working solution, you want to test "does the upcoming" one work. Adding channels to the bundles leads to a hodge podge of bundles that are in different 'states'
<aisrael> rick_h_, ack. I'll try pointing it at the charm revision in edge and see if it'll pull in the right bits
<rick_h_> aisrael: +1 revision always works
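The two workable options from the exchange above, sketched as commands (the bundle and charm names/revisions here are illustrative placeholders, not from the log):

```shell
# Option 1: deploy the whole bundle from a single channel -
# there is no per-application channel field in the bundle format:
juju deploy cs:bundle/openstack-base --channel=edge

# Option 2: pin an individual application to an exact revision,
# which resolves to the same bits regardless of channel:
juju deploy cs:~someuser/mariadb-42 mariadb
```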
<frobware> rick_h_: are we meeting today?
<rick_h_> frobware: oh sorry yea
<frobware> rick_h_: we can leave it in lieu of helping marcoceppi
<rick_h_> frobware: actually yes please. I just see I was invited to a maas cross team at this time I want to check out
<marcoceppi> frobware rick_h_ voidspace we've downgraded to beta18 to progress
<frobware> rick_h_: ack
<marcoceppi> too many sharp pointy edges and too little time to grind them down
<frobware> marcoceppi: ack - also trying your bundle to see if I can repro
<marcoceppi> frobware: gl
<frobware> marcoceppi: please let me know if beta18 is radically different in its behaviour
<marcoceppi> frobware:
<frobware> marcoceppi: do your MAAS nodes commission with trusty?
<marcoceppi> frobware: xenial
<marcoceppi> frobware voidspace not good news, we're seeing this with beta18 now
<marcoceppi> the only thing we changed from yesterday to today was the juju bundle
<marcoceppi> (and added more hardware to maas)
<abentley> sinzui: let's chat when you hit a lull in the release.
<frobware> marcoceppi: well I think that helps us stop chasing ghosts between 18 and rc1
<marcoceppi> frobware: we're waiting to see if the lxd issue persists, but it's taking forever, still, to get machines allocated in maas
<frobware> marcoceppi: the lxd issue being they end up using lxdbr0?  that's true in 18, but should now be fixed in rc1
<rick_h_> marcoceppi: so this same network setup ran just fine yesterday?
<marcoceppi> frobware: the lxd issue we had is that cloud-init didn't run, we didn't get agents for the machines and networking wasn't configured in rc1
<marcoceppi> we rolled back to address that, but we're still experiencing a long ass time getting juju to ask for machines
<marcoceppi> rick_h_: the deltas from yesterday were juju rc1, 5 more machines in maas, changes to the bundle
<frobware> marcoceppi: ok - any chance that issue is: https://bugs.launchpad.net/juju/+bug/1611981
<mup> Bug #1611981: LXD guests not configured due to the lack of DHCP on the interface selected as eth0 <network> <sts> <juju:In Progress by macgreagoir> <https://launchpad.net/bugs/1611981>
<marcoceppi> rick_h_: we eliminated the rc1, we're going to try the bundle in a minute as long as we can verify lxd machines are working
<marcoceppi> frobware: that looks like it
<marcoceppi> frobware: the interface was setup as eth2 in the lxd machine
<marcoceppi> frobware: after dhclient on eth2 in lxd machine, address was allocated
<frobware> marcoceppi: so the "fix" there (asssuming this is your issue) is to reoder the NICS in MAAS
<frobware> macgreagoir: ^^
<macgreagoir> frobware: Reading back...
<marcoceppi> frobware: reorder the nics for the bare metal?
<frobware> macgreagoir: ^^ yep?
<macgreagoir> marcoceppi: If you are able to rename the nics in maas so that the pxe iface sorts lower, you should work-around that. Is that a possibility?
<marcoceppi> macgreagoir frobware this is what we have now
<macgreagoir> marcoceppi: Aye, on the metal.
<marcoceppi> macgreagoir: http://i.imgur.com/seIUIiZ.png
<rick_h_> voidspace: chat or are you helping ^ ?
<macgreagoir> marcoceppi: Are you able to see what network-interfaces looks like in <container>/var/lib/cloud/seed/nocloud-net/user-data, please?
<marcoceppi> macgreagoir: we're waiting for another container to come online
<marcoceppi> macgreagoir: we're hitting two major issues with maas today, this and one where Juju takes 10 minutes to request 1 machine from maas
<macgreagoir> marcoceppi: Is eno3 the consistently used pxe iface?
<marcoceppi> macgreagoir: yes
<marcoceppi> on all the metal
<macgreagoir> If you can try renaming it to... dynamic0 (or something else lower than eno1) it would be a good test of the dhcp/eth0 bug, at least.
<frobware> macgreagoir: wouldn't user-data have the correct list of interfaces?
<macgreagoir> frobware: It should, yes, I'd like to get a picture of the full config.
<marcoceppi> macgreagoir: so rename eno1, which is not configured, to dynamic0
<frobware> macgreagoir, marcoceppi: I think to ensure that we are seeing this bug we should try and repro: juju add-machine lxd:<X>; then let's enter the container ($lxc exec juju-<container-name> bash) and poke around
<marcoceppi> frobware macgreagoir we are not seeing the DHCP/misconfiguration on beta18
<marcoceppi> we are getting containers with IP addresses and agents
<marcoceppi> frobware macgreagoir we're under the gun to get this out, and fighting MAAS's delayed instance allocation is killing our iterations
<macgreagoir> marcoceppi: What is the 10.95.172.x subnet?
<marcoceppi> frobware macgreagoir if we get the green light to continue for the demo with Mark next week, we'll try to repro tomorrow
<marcoceppi> macgreagoir: that's an external network not managed by MAAS
<frobware> marcoceppi: are you able to HO and screenshare?
<marcoceppi> frobware: we can hangout and screen share, yes, might be easier to explain
<frobware> marcoceppi: let's do that; we're going too slowly - sending link...
<frobware> macgreagoir: https://hangouts.google.com/hangouts/_/canonical.com/maasissues?authuser=0
<mup> Bug #1626097 opened: juju deploy lxd provider inconsistent ipv4 or ipv6 names <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1626097>
<voidspace> rick_h_: sorry, missed your msg
<voidspace> rick_h_: was taking a break
<rick_h_> voidspace: k
<voidspace> rick_h_: oh bugger, forgot we were supposed to chat today!
<voidspace> rick_h_: you free now?
<rick_h_> voidspace: in the room now
<rick_h_> voidspace: though we've got standup in 8
<rick_h_> macgreagoir: standup if you're free
<katco> \o/
<macgreagoir> rick_h_: On HO with marcoceppi, sorry
<rick_h_> macgreagoir: all good, that's the best priority
<babbageclunk> Anyone know how to talk to the introspection worker?
<babbageclunk> It looks like it exposes a web UI with profile info like stack traces, but it's listening on an abstract domain socket and I can't work out how to get a web client to talk to it.
<alexisb> babbageclunk, heh sorry
<alexisb> I dropped to fast
<alexisb> is there somethng else?
<babbageclunk> alexisb: No worries!
<babbageclunk> alexisb: Was just going to say, I'm chatting with thumper tonight so I'll pick his brains about the introspection worker
<babbageclunk> (If I haven't already sussed it out by then.)
<alexisb> cool
<mup> Bug #1626097 changed: juju deploy lxd provider inconsistent ipv4 or ipv6 names <cpe-sa> <usability> <juju:Triaged> <https://launchpad.net/bugs/1626097>
<babbageclunk> Worked it out - you can't do it with ncat but you can with socat.
<babbageclunk> yay, now I have 18k lines of stack traces.
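A sketch of the socat trick babbageclunk describes: ncat cannot dial Linux abstract-namespace unix sockets, but socat can via its ABSTRACT-CONNECT address type. The socket name and URL path below are assumptions for a controller machine agent, not confirmed in the log:

```shell
# Relay a local TCP port to the agent's abstract domain socket
# (socket name assumed; adjust to the machine agent in question):
socat TCP-LISTEN:8080,reuseaddr,fork ABSTRACT-CONNECT:jujud-machine-0 &

# Then any ordinary HTTP client works, e.g. to fetch goroutine stacks:
curl http://localhost:8080/debug/pprof/goroutine?debug=1
```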
<frobware> rick_h_: did you want to sync?
<rick_h_> frobware: on a manager call atm, easy to email so I don't hold up your EOD?
<CorvetteZR1> hello.  i'm trying to do juju bootstrap --upload-tools using xenial image but get connection refused on port 22
<CorvetteZR1> google turned up some old posts with people having similar issues, but i haven't found any solution
<CorvetteZR1>  i can see the container is running, but bootstrap can't auth using ssh key-auth.  any suggestions?
<natefinch> CorvetteZR1: what provider?  AWS, Google, Openstack, Local?
<rick_h_> CorvetteZR1: and what version of juju?
<CorvetteZR1> local.  got Maas going and want to play with openstack
<CorvetteZR1> version 2.0-beta15 i think
<CorvetteZR1> whatever is latest in 16.04...
<CorvetteZR1> as far as maas goes, it's up and running and i got a few servers deployed
<CorvetteZR1> with juju i'm kind of lost.  should i be bootstrapping it on the maas server or on one of the deployed nodes?  i get the same error on both...both are same version of ubuntu and juju
<CorvetteZR1> natefinch, local
<CorvetteZR1> rick_h_, 2.0-beta15
<rick_h_> CorvetteZR1: can you get the RC1 from the PPA please? There was some networking issues with containers binding to the correct interface/address pre-RC1
<CorvetteZR1> ok
<CorvetteZR1> ok, looks like it logged in now on 2.0-rc1
<CorvetteZR1> it's doing apt inside the container
<CorvetteZR1> thanks rick_h_ !
<natefinch> yay!
<rick_h_> CorvetteZR1: <3 ty
<redir-afk> \o/
<redir-afk> what I am not afk
<natefinch> lol I can't tell you how often I do that :)
<hml> has anyone bootstrapped a remote openstack cloud with juju 2.0 recently?  I'm having some challenges - it appears that the charm deployed didn't get assigned a floating ip, causing (I'm assuming) the install of the charm instance to get stuck
<redir> natefinch: right. I usually just don't
<redir> if I don't answer I'm afk or focused
<natefinch> *nod*
 * rick_h_ goes to grab lunchables
<redir> anyone else use chromium and having issues with it displaying jpegs?
<CorvetteZR1> stuck on fetching juju gui 2.1.10
<CorvetteZR1> debug doesn't have anything interesting
<CorvetteZR1> although, can it have something to do with eno1 and eno2 has no address?
<natefinch> not sure... maybe it's a networking issue?  Does your environment have internet access?
<natefinch> CorvetteZR1: you can also bootstrap with --no-gui to skip that step if you're not going to use it
<CorvetteZR1> it does have internet access.  it's possibly a network issue.  i rebooted the maas node, now it just fails saying it can't find a node in the zone
<CorvetteZR1> meh...i'll poke around :)
<CorvetteZR1> k, different issue now.  cannot acquire a node in the zone
<CorvetteZR1> i have 2 deployed nodes in my zone...do they need to be ready or allocated instead of deployed?
<natefinch> definitely not deployed. Deployed is effectively "in use" so juju won't mess with them.  I believe ready is what they should be in.
<CorvetteZR1> ah...i think it's doing something now
<CorvetteZR1> i released it
<natefinch> cool
<CorvetteZR1> now juju is powering it up
<thumper> so close to a bless
<perrito666> sounds like a cheap pop song
<rick_h_> heh, almost
<natefinch> rick_h_: can I get your input on an error message?  This is what we show if the user has no current controller specified:
<natefinch> $ juju models
<natefinch> ERROR No current controller.
<natefinch> Please use "juju controllers" to view all controllers available to you. You can
<natefinch> set the current controller by running "juju switch" or "juju login".
<rick_h_> natefinch: I feel like that's in the wrong order and a lot of info
<natefinch> this is what it used to say:
<natefinch> `not logged in
<natefinch> Please use "juju controllers" to view all controllers available to you.
<natefinch> You can login into an existing controller using "juju login -c <controller>".
<rick_h_> natefinch: if you don't have a controller specified the first thing is switch, then if you don't know what controller to go to you'd use juju controllers
<natefinch> what it used to say is actually not a good error message because it has nothing to do with being logged in or not.  Although I think I should check that, too.  Hmm.
<natefinch> rick_h_: the big error messages were from a mark bug: https://bugs.launchpad.net/juju/+bug/1589061
<mup> Bug #1589061: Juju status with no controllers offers up juju switch <juju:Fix Released by anastasia-macmood> <https://launchpad.net/bugs/1589061>
<redir> anyone name a charm with optional-storage?
<rick_h_> natefinch: hmm, ok
<natefinch> rick_h_: ¯\_(ツ)_/¯
<rick_h_> natefinch: yea, sorry, multi-tasking
<natefinch> rick_h_: oh, no big deal.  Just... error messages are hard]
<rick_h_> natefinch: yea, agree
<natefinch> "I don't know how you got into this situation, but here's a few ways to get out that may or may not be what you really should do"
<natefinch> dinner time, back later
<rick_h_> natefinch-afk: https://pastebin.canonical.com/166172/
<perrito666> mm, I was expecting add-cloud to be interactive
<perrito666> anybody know the magic incantation to add-cloud for a maas cloud?
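A sketch of the non-interactive incantation for juju 2.0 (the cloud name and MAAS endpoint address here are placeholders): describe the cloud in a YAML file, then register it and its credential.

```shell
# Hypothetical MAAS endpoint; substitute your own region controller URL.
cat > maas-cloud.yaml <<'EOF'
clouds:
  my-maas:
    type: maas
    auth-types: [oauth1]
    endpoint: http://192.168.0.3/MAAS
EOF

juju add-cloud my-maas maas-cloud.yaml

# Prompts for the MAAS API key (from the MAAS web UI account page):
juju add-credential my-maas
```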
<alexisb> thumper, have you opened your wine?
<thumper> no
<alexisb> I love that our first bless begins with a rev ID of "bad..."
<thumper> :)
<alexisb> axw, great minds ;) ^^^
<axw> alexisb: :)
<mup> Bug #1626304 opened: Unit disk fills with inprogress-n files <juju-core:New> <https://launchpad.net/bugs/1626304>
<axw> alexisb: just checking, you said we're not worrying about "graduated reviewer" business now that we have the checklist?
<alexisb> axw, no
<alexisb> axw, but we need to update the process given we dont have assigned mentors (as we did before)
<axw> alexisb: no you didn't say that, or no we're not worrying about it?
<alexisb> no I didnt mean to say that
<axw> okey dokey
<axw> anastasiamac: do you have a moment to stamp https://github.com/juju/juju/pull/6294 ?
<anastasiamac> axw: sure thing
<axw> anastasiamac: thanks
<menn0> redir: thanks for the review. after proposing I decided to do more - hence the lack of QA steps.
<redir> menn0: np
<redir> I figured that or changing to GH reviews altered workflow
<alexisb> axw, is this still a bug: https://bugs.launchpad.net/juju/+bug/1623761
<mup> Bug #1623761: drop userpass auth-type from azure <azure-provider> <juju:Triaged> <https://launchpad.net/bugs/1623761>
<menn0> redir: nah, I shouldn't have created the PR yet
<axw> alexisb: yes, still need to drop it
<redir> in that case Not LGTM:)
<axw> alexisb: before 2.0
<thumper> hmm...
<thumper> ugh
<thumper> can't hear anyone properly
<thumper> all robot
<mwhudson> whee funtimes mongodb 3.2.10-rc1 fails to build on arm64 & s390x
<perrito666> can anyone with maas experience check http://paste.ubuntu.com/23213786/ and give me an opinion?
<perrito666> mwhudson: oh? mmap? or the new stuff?
<mwhudson> perrito666: no, nothing so deep i think
<mwhudson> perrito666: https://launchpadlibrarian.net/285886652/buildlog_ubuntu-yakkety-arm64.juju-mongodb3.2_3.2.10~rc1-0ubuntu1~ppa1_BUILDING.txt.gz https://launchpadlibrarian.net/285880249/buildlog_ubuntu-yakkety-s390x.juju-mongodb3.2_3.2.10~rc1-0ubuntu1~ppa1_BUILDING.txt.gz
<perrito666> mwhudson: AttributeError: Values instance has no attribute 'use-s390x-crc32': >
<perrito666> ?
<mwhudson> perrito666: yeah, that's going to be something in my s390x patches i guess
<perrito666> this one is a bit surprising __wt_checksum_init
<mwhudson> yeah
<mwhudson> going to see if that happens with the upstream source and report a bug if it does
<perrito666> I would not expect wt to try to link to nonexistent things :(
<mwhudson> bet it's some per-arch thing
<mwhudson> oh hey someone's done it already i think
#juju-dev 2016-09-22
<perrito666> redir: love your abbreviations
<redir> tyvm, perrito666
<redir> sorry i am a lazy typist
<perrito666> lol
<perrito666> does MAAS know how to upgrade itself?
<mup> Bug # changed: 1312290, 1321212, 1339866, 1353242, 1357760, 1381340
<redir> axw: updated pr from yesterday. It is a bit different so ptal https://github.com/juju/juju/pull/6297
<axw> redir: ok, looking
<axw> redir: reviewed
<redir> axw: tx
<thumper> ah ha
<thumper> think I have found why cert updater tests fail intermittently
<thumper> well, getting to the bottom of it anyway
<thumper> ugh
<thumper> menn0: got a minute?
<menn0> thumper: now I do
<thumper> quick HO?
<menn0> yep
<menn0> axw: would you mind taking another look at this one please: https://github.com/juju/juju/pull/6302
<menn0> axw: i've added an upgrade step now
<axw> menn0: sure, looking
<axw> menn0: LGTM with QA, which I see you're doing now
<menn0> axw: cheers
<menn0> pop quiz... what's wrong with this in the bootstrap output?
<menn0> Launching controller instance(s) on dev...
<menn0>  - juju-0facb9-0                                                                    00%
<thumper> menn0: https://github.com/juju/juju/pull/6303
<menn0> looking
<thumper> menn0: are you?
<menn0> thumper: sorry, got distracted
<menn0> thumper: done
<thumper> menn0: quick hangout to discuss something?
<menn0> thumper: ok
<mup> Bug #1626304 changed: Unit disk fills with inprogress-n files <juju-core:Invalid> <https://launchpad.net/bugs/1626304>
<mup> Bug #1626304 opened: Unit disk fills with inprogress-n files <juju-core:Invalid> <https://launchpad.net/bugs/1626304>
<mup> Bug #1626304 changed: Unit disk fills with inprogress-n files <juju-core:Invalid> <https://launchpad.net/bugs/1626304>
<gennadiy> hi, we have private openstack cloud and we want to use different subnetwork for deployed software. is it possible to do it with juju 2.0 ?
<menn0> thumper: here's the formatting fix: https://github.com/juju/juju/pull/6304
 * thumper looks
<thumper> lgtm or at least good enough
<thumper> gennadiy: not yet I don't believe
<thumper> gennadiy: we will have spaces and subnets for openstack soonish, but I'm not sure on the timeline
<thumper> night all
<Mmike> Hi, lads. I need help in creating a password hash for the unit/machine agents that is stored in the mongodb database. On a fairly complex customer environment I lost /var/lib/juju directory. I was able to recreate everything, and jujud-s start without an issue, I just had to use passwords/hashes from some other unit - but now I ended up with same passwords for two units.
<Mmike> Is there a simple way to generate both the password and the hash?
<Mmike> This is juju 1 (1.20, we are prepping the customer for upgrade to 1.25)
<Mmike> Erm, hi to ladies too!
<frobware> voidspace: ping - can we sync regarding the MAAS bug we saw yesterday
<voidspace> frobware: we can
<frobware> voidspace: standup HO
<voidspace> frobware: the ipv6 bug I'm on has turned critical as well :-/
<frobware> voidspace: in my experience that category is now quite low. :)
<voidspace> frobware: I'm in the HO
<voidspace> frobware: category?
<anastasiamac> voidspace: critical because the bug that u said is the same was critical ;)
<voidspace> mgz: ping
<frobware> voidspace: the logs do have TRACE info
<voidspace> frobware: ah yes, odd
<voidspace> frobware: maybe they hadn't loaded that far when I searched!
<voidspace> frobware: I see them now, sorry
<rogpeppe1> anyone fancy a review of a little addition to juju/testing? (factored out from somewhere else we've been using it for while): https://github.com/juju/testing/pull/111
<rogpeppe> axw: ^
<axw> rogpeppe: I was a bit slow, but LGTM anyway
<rogpeppe> axw: tyvm - i always like to have someone from core sign off on juju/* PRs
<babbageclunk> Am I in the wrong place for the core team meeting?
<babbageclunk> juju core team meeting? Is it not on today?
<babbageclunk> frobware, voidspace ^
<voidspace> babbageclunk: we're in a hangout on a critical issue - so we can't make it
<voidspace> babbageclunk: if there's no-one else there, then it probably aint happening :-(
<babbageclunk> Oh yeah, I thought you guys might still be working on that. Weird that no-one is there.
<frobware> voidspace: can you join the HO again?
<frobware> voidspace: scrap that - heading out for a bit
<voidspace> kk
<voidspace> frobware: so it's the provisionerTask that should be creating the machines and that's using a straightforward machines watcher
<voidspace> frobware: interestingly the "process machines with transient errors" call is from the provisioner task loop
<voidspace> frobware: and only happens when there is something on the retryChanges notify channel
<voidspace> frobware: I can't see in the logs a *failed* attempt that would cause that though
<voidspace> frobware: I'm still digging through the code and the logs
<voidspace> taking a break
<jamespage> urgh
<rick_h_> jamespage: that good eh?
<jamespage> rick_h_, soooo
 * rick_h_ runs away
<jamespage> relations at the bottom of status
<jamespage> mmm
<rick_h_> jamespage: yes, what's up?
<jamespage> the side effect of that change is that if I type juju status, I just see the list of relations
<mgz> jamespage, did we break you somehow?
<jamespage> for an openstack env with lots of them
<jamespage> mgz, just feels odd
<rick_h_> jamespage: yea, the thing was that folks doing status tend to either watch (so most important/changing stuff up top) or less.
<rick_h_> jamespage: and having the things that have status/notes/etc together seemed appropriate
<jamespage> http://paste.ubuntu.com/23215829/
<jamespage> maybe dropping them from the tabular view altogether might be better
<rick_h_> jamespage: so that was a thought, but because it's directly part of the "topology" held that back
<rick_h_> e.g. two statuses are very different with the relations set for something vs not set
<jamespage> side effect is if not using watch, then a juju status cli type results in most of the useful information about 3" above the top of my monitor
<rick_h_> so status seems "all good" as far as machines deployed/applications running but still not be useful
<rick_h_> jamespage: right, it's a trade off one way vs the other way
<rick_h_> We emailed the juju list, asked for feedback, and wanted to try this out.
<jamespage> rick_h_, yeah - sorry I did read that, but had my head in other stuff at the time
<rick_h_> jamespage: understand, this is fine. Sometimes you don't know until you try it
<jamespage> rick_h_, btw that pastebin has the version stuff we just added in
<rick_h_> jamespage: <3 ty
<jamespage> rick_h_, one comment is that it's a bit lossy
<jamespage> rick_h_, as it's possible to get different versions on different units during transitions
<jamespage> and if someone does something wonky by hand
<rick_h_> jamespage: yea, that was something we debated about that feature
<rick_h_> jamespage: we wanted to treat it more like resources where you get what it "should" be (by what most folks think it is) and a way to ask the units
<rick_h_> jamespage: but that got shot down
<natefinch> maybe a flag for --relations or --no-releations depending on the default
<natefinch> maybe same for machines... machine info is fairly boring most of the time
<natefinch> also, we obviously need to do something about ports
<natefinch> wow, the relation data is just static? There's no state there at all?  I agree, that shouldn't be in status at all.
<babbageclunk> You really want an ascii-art diagram of the relations.
<babbageclunk> (only half joking)
<rick_h_> natefinch: katco macgreagoir voidspace ping for standup
<rick_h_> mgz: ^
<mgz> ta
<voidspace> omw
<alexisb_> babbageclunk, you around still?
<mup> Bug #1544796 opened: Backup restore fails: upgrade in progress <2.0-count> <backup-restore> <blocker> <ci> <regression> <juju:Fix Released by cherylj> <juju-core:Triaged> <juju-core 1.25:New> <https://launchpad.net/bugs/1544796>
<mup> Bug #1626576 opened: credential v. credentials is confusing <juju-core:New> <https://launchpad.net/bugs/1626576>
<perrito666> babbageclunk: could you tell me what do you mean by provider/maas/maas2instance.go L68 ?
<babbageclunk> perrito666: Sure - looking at it now
<perrito666> tx
<mup> Bug #1544796 changed: Backup restore fails: upgrade in progress <2.0-count> <backup-restore> <blocker> <ci> <regression> <juju:Fix Released by cherylj> <juju-core:Triaged> <juju-core 1.25:New> <https://launchpad.net/bugs/1544796>
<babbageclunk> perrito666: I don't remember writing that at all, but from comparing with the v1 api version it looks like it's right - that code should rerequest the status from the controller.
<perrito666> babbageclunk: heh, ok
<babbageclunk> perrito666: :( hope that's not the source of the problems voidspace and frobware have been chasing.
<perrito666> babbageclunk: the problem being?
<babbageclunk> perrito666: marcoceppi's problem where the controller was taking a long time to deploy machines
<mup> Bug #1544796 opened: Backup restore fails: upgrade in progress <2.0-count> <backup-restore> <blocker> <ci> <regression> <juju:Fix Released by cherylj> <juju-core:Triaged> <juju-core 1.25:New> <https://launchpad.net/bugs/1544796>
<perrito666> now mup, I find that hard to believe
<perrito666> babbageclunk: mm, could be, I have a problem like that when the deployment fails in maas
<mup> Bug #1544796 changed: Backup restore fails: upgrade in progress <2.0-count> <backup-restore> <blocker> <ci> <regression> <juju:Fix Released by cherylj> <juju-core:Invalid> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1544796>
<natefinch> easy review anyone? https://github.com/juju/juju/pull/6306
<perrito666> you are asking if the review is easy?
<natefinch> no
<natefinch> I'm asking you to go look at the code and review it :)
<katco> natefinch: i'm tal
<perrito666> I am not tal, it's a hideous templating language but I'll take a look anyway
<katco> been wanting to try a GH review :)
<katco> is there still no way to select multiple lines for a comment?
<rick_h_> katco: no, don't think so
<mup> Bug #1544796 opened: Backup restore fails: upgrade in progress <2.0-count> <backup-restore> <blocker> <ci> <regression> <juju:Fix Released by cherylj> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1544796>
<katco> natefinch: review up
<katco> not being able to highlight lines for context is unfortunate. one of my comments appears to be about 4 lines of code when i intended it to be about the entire function
<katco> is there some sort of best practice around that?
<katco> wow that bothers me way more than i thought. it kind of leaves the author guessing which lines are being discussed
<rogpeppe> natefinch: juju is using Go 1.6 now, right?
<rick_h_> rogpeppe: yes
<rogpeppe> rick_h_: thanks
<rogpeppe> rick_h_: just wanted to make double sure
<katco> how do you differentiate between a suggestion and a request in a github review?
 * katco thinks she better go read their tutorial
<rick_h_> katco: if it says please?
<katco> lol
<rogpeppe> trivial (but important) review, please: https://github.com/juju/utils/pull/239
<rogpeppe> katco: have they fixed it so you can comment on lines that aren't changed now?
<katco> rogpeppe: you are asking the wrong person
<katco> rogpeppe: i have no idea
<rogpeppe> katco: :)
<rogpeppe> katco: i'll have to try it
<katco> rogpeppe: looks like no?
<rogpeppe> katco: it always annoys me when i want to say "that code you added over there should be here", but "here" wasn't somewhere that was changed
<katco> yeah... wow that is not ideal either
<natefinch> oh weird, yeah, it's not whether or not it's been changed, just whether or not it's in the window that shows what's changed
<katco> natefinch: but if i expand the file that's changed i can't even comment outside the diff i don't think
<natefinch> right, yes
<natefinch> sorry, there's the initial preview with a little extra context, anything in that preview window you can comment on, changed or not.  Anything outside that, even when expanded, you can't.
<natefinch> katco: I made the suggestion in the email thread about marking issues that need to be addressed with :x: which makes a nice big visible red X in the text.  It's not perfect, but it seems sufficient.
<katco> natefinch: i agree that would be an easy convention. i am always super hesitant to add conventions to teams bc they introduce a cognitive load, and only so much of that can be maintained.
<natefinch> katco: I know... I wish that feature existed in the review system, but I don't think it's a huge cognitive load, especially since we should all be doing reviews all the time, and should become pretty automatic.  Not really much more difficult than remembering to check "comment represents an issue" or whatever it's called.
<mup> Bug #1626626 opened: juju status outputs ERROR "" is not a valid tag <juju-core:New> <https://launchpad.net/bugs/1626626>
<natefinch> katco: trying to figure out how to properly replace the existing notfound error with my own notfound error... it's unfortunate we wrote our errors package in the way we did, just doing type assertions
<natefinch> katco: maybe Wrap is what I'm looking for
<natefinch> katco: yeah, wrap seems to have the correct behavior
<katco> natefinch: i know i always confuse them.
<natefinch> katco: I had to actually write a little program to make sure that wrap didn't munge the old error's text in with the new one.
<natefinch> oops... I'm going to have to get used to not doing commit --amend during reviews
<rick_h_> natefinch: do you still have your PR in review?
<natefinch> rick_h_: yep
<rick_h_> natefinch: can you please create a second PR for that change but from your branch to the juju/develop branch ?
<katco> natefinch: huh yeah, it's not really easy to tell what has changed in response to a review
<natefinch> rick_h_: for the error message change when there's no current controller?
<rick_h_> natefinch: yes please
<natefinch> rick_h_: sure thing
<rick_h_> natefinch: ty
<natefinch> katco: it might be easier if I didn't force push an amended commit.  That's something we'll have to look at
<natefinch> brb
 * rick_h_ goes for lunchables
<bdx> how's it going all?
<bdx> is there a doc that details what ports and protocols need be allowed for client <-> controller, and agent <-> controller communication?
<bdx> I'm having to write up an application network and security spec for a client
<bdx> what I'm seeing is client <- tcp 22/17070 -> controller, and agent <- tcp 22/17070 -> controller
<bdx> so 22 and 17070 are all that need be allowed between both sets?
<rick_h_> bdx: sounds right
<mup> Bug #1626626 changed: juju status outputs ERROR "" is not a valid tag <juju-core:Invalid> <https://launchpad.net/bugs/1626626>
<bdx> rick_h_: thanks
<bdx> I know this is a long shot, and I think I know the answer, but may as well ask ... does juju have any way to deploy time based instances?
<bdx> on AWS
<rick_h_> bdx: no, you have to script that
<bdx> rick_h_: would it be a better practice to use the manual provider, and add machines that have been manually provisioned as time based?
<rick_h_> bdx: maybe we're not thinking the same thing. What do you mean by time based?
<bdx> rick_h_: something we currently use to save on our AWS bill - time based instances can be scheduled to turn off during non use hours
<rick_h_> bdx: oic, so it's an instance type? /me goes to look at aws docs
<rick_h_> oh, not really an instance type, but a scaling type config on an instance layer
<bdx> rick_h_: yea -> https://postimg.org/image/uc03ismnf/
<rick_h_> bdx: interesting, a manual provider type situation may work?
<rick_h_> the fun part will be how juju handles units showing up/going away like that
<rick_h_> bdx: especially in relations, where juju expects there to be these units talking to those units and they disappear
<bdx> the boxes I will be manually provisioning will be lxd staging silos
<rick_h_> bdx: since nothing there will be updating the units in the relation lists in juju
<bdx> so they wont even have any charm deployed to them
<rick_h_> oic
<bdx> I'm thinking I will just use juju to grant devs access to their respective instances that will each have the lxd non prod envs on them
<bdx> seems to be the best option for my use case of containerizing our non-prod envs
<bdx> still seems slightly shiesty
<bdx> ha
<bdx> I think it will work though
<alexisb> anastasiamac, rick, tim and I will be on another call and will not be at the bug scrub
<anastasiamac> alexisb: \o/ enjoy
<rick_h_> oh wheeee
<mup> Bug #1587644 opened: jujud and mongo cpu/ram usage spike <canonical-bootstack> <canonical-is> <eda> <performance> <juju:Fix Released> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1587644>
<anastasiamac> and bug scrub time? :D
<mup> Bug #1587644 changed: jujud and mongo cpu/ram usage spike <canonical-bootstack> <canonical-is> <eda> <performance> <juju:Fix Released> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1587644>
<mup> Bug #1587644 opened: jujud and mongo cpu/ram usage spike <canonical-bootstack> <canonical-is> <eda> <performance> <juju:Fix Released> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1587644>
<babbageclunk> menn0: hi! can I pick your brains some more please?
<menn0> babbageclunk: sure
<menn0> babbageclunk: hangout or IRC?
<babbageclunk> menn0: hangout might be quicker
<babbageclunk> https://hangouts.google.com/hangouts/_/canonical.com/mongo-stuff?hl=en&authuser=1
<babbageclunk> menn0: ^
<veebers> thumper: would you know why when attempting a juju upgrade, this message states "juju.environs.sync sync.go:333 using agent binary 2.0-rc2-xenial-amd64 aliased to 2.0-rc2.2-xenial-amd64" why is it using patch version .2, I expected it to use .1. Any insight?
<thumper> nope, sorry
<thumper> well
<thumper> the existing one probably uses .1 right?
<veebers> thumper: the existing was rc1 binary, model version stated rc1.1, but this is using rc2 binary (and using lxd so I was expecting rc2.1)
<thumper> hmm
<thumper> not sure sorry
<veebers> thumper: any idea who might be able to shed some light?
<thumper> heh, wallyworld, but he isn't around
<thumper> I'm not sure who reviewed those changes, perhaps axw?
<veebers> thumper: cool, I spoke with him a little yesterday. I'll pester him again today :-)
<axw> veebers: if the client is .1, when you go to upgrade it'll auto-increment to .2
<axw> veebers: (because .1 is not in streams)
<veebers> axw: so it'll go from rc1.1 -> rc2.2 ?
<axw> veebers: sorry, didn't notice the change in RC. that doesn't seem right.
<veebers> axw: oh odd, this run through (each time a fresh bootstrap etc.) it's aliased it to .3
<axw> veebers: that makes no sense. the client is just plain old "rc2", right?
<veebers> axw: oh, I plugged '--version 2.0-rc2.2' onto upgrade-juju (previous run was --version 2.0-rc2.1)
<veebers> axw: bootstrapped with rc1 binary, using rc2 binary to execute the juju-upgrade
<axw> veebers: ok, that sounds a bit funky. if you specify a version we shouldn't be auto-incrementing anything
<alexisb> axw, ping
<axw> alexisb: coming
<alexisb> thanks axw
<axw> np
<axw> alexisb: apparently it was wallyworld who added the default security group thing :/
<axw> i'll ping martin anyway
<alexisb> that must have been post the provider work
<alexisb> did his PR state why it was added?
<alexisb> I wonder if we can track it to a bug
<alexisb> it probably was a user request
<veebers> axw: I'm going to file a bug re: the juju-upgrade --version thing, I was able to manually recreate it too
<axw> veebers: thanks
<anastasiamac> alexisb: sorry.. installing bluejeans plugin
<veebers> axw: fyi: https://bugs.launchpad.net/juju/+bug/1626784
<mup> Bug #1626784: upgrade-juju --version increments supplied patch version <juju:New> <https://launchpad.net/bugs/1626784>
<axw> mwhudson: apparently there's a meeting option: http://bluejeans.force.com/KnowledgeSearch/articles/Knowledge_Base/Mute-Participants-on-Entry-to-your-meeting/p
<axw> alexisb: can you set that please? ^^
<mwhudson> axw: ah awesome
<mwhudson> axw: it was also the usual thing of "i know, i'll use the internal mic!" :(
<mup> Bug #1626304 opened: Charm GC in 1.25.6 leaves breakage behind <juju:In Progress by menno.smits> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1626304>
#juju-dev 2016-09-23
<menn0> axw: I said in the standup that the blobstore changes to allow it support io.{ReaderAt,Seeker} turned out to be bigger than I thought they would be
<menn0> axw: I just pushed through with it and it's not too bad
<menn0> axw: although the requirements for the ReaderAt interface are a little tricky to get right, especially efficiently
<menn0> axw: I've gone for correctness over performance
<menn0> axw: parking for now though, will hopefully pick it up later today
<veebers> thumper: dumb question, if I create a model, deploy a charm and add a unit, if I just destroy the model it'll cleanup that unit right? Or do I need to specifically remove that unit myself?
 * thumper afk for lunch
<thumper> yes
<thumper> it will clean up that unit
<veebers> awesome, thanks th
<veebers> thumper*
<redir> axw: https://github.com/juju/juju/pull/6308 when you have some time
<redir> anastasiamac: I also applied your recommendation wrt --reset as []string in https://github.com/juju/juju/pull/6308
<redir> anastasiamac: apologies though, as I squashed it in with the other changes.
<anastasiamac> redir: \o/ u r wonderful! tyvm :D
<axw> menn0: ok, thanks. no drama if you don't get to it until later - it's just a nice to have
<axw> redir: looking
<redir> np , I'm on to make dinner, so won't be back until this eve.
<menn0> axw: it definitely would be a lot better that way, and I am going to get to it
<redir> Hopefully, this is really close with trivials or less:)
<anastasiamac> thumper: axw: menn0: if u have any ideas/suggestions for "server not reachable" and/or "replicaset" intermittent failures that we are seeing, it'd be great to share them now... i'd love to improve this area as part of the sprint :D
<lazyPower> interesting, is there a known bug currently where you have to model-config enable-os-update && enable-os-refresh-update to false?   I'm seeing this on both gce and aws, investigating further as it pops up while testing.
<natefinch> lazyPower: yes
<natefinch> lazyPower: some lxd update is failing
<lazyPower> natefinch: ta, i was looking for it and didn't find a bug. i was about to reproduce and file one
<natefinch> lazyPower: it's like the apt upgrade of lxd is failing or something, I forget the details, but it was not our fault, per se.
<lazyPower> right, bit by an SRU or something
<natefinch> right
<lazyPower> not even mad just trying to be helpful :)
<natefinch> :)
<natefinch> anastasiamac: you around?
<anastasiamac> natefinch: it depends on how difficult ur question is :D
<natefinch> heh
<natefinch> https://bugs.launchpad.net/juju/+bug/1625243
<natefinch> mup: https://bugs.launchpad.net/juju/+bug/1625243 ?
<mup> natefinch: I apologize, but I'm pretty strict about only responding to known commands.
<natefinch> mup help
<anastasiamac> natefinch: yep, i'llb mup and got it :D
<natefinch> anastasiamac: anyway.... do you know what they're talking about? :)
<anastasiamac> natefinch: mostly about format of simplestreams files
<natefinch> juju should fix its code to know that https://identity.api.rackspacecloud.com/v2.0 Is a global URL expected to be accessed first before going to any regionalized service.
<anastasiamac> natefinch: tech board had fun discussing format this week too :D
<natefinch> what does "accessed first" mean?
<anastasiamac> natefinch: if u look at endpoint url, the one we understand have v2
<anastasiamac> without region in it
<anastasiamac> the data that was supplied had region
<anastasiamac> comment 10 :)
<natefinch> yes
<natefinch> the comment that says "we could fix this, but nah"
<anastasiamac> yep :(
<natefinch> so, what I don't understand is their suggested fix
<anastasiamac> so rackspace provider needs to be modified to handle region in endpoint url ..
<natefinch> I guess I don't understand. It's a URL.  Are we parsing it?  What does it matter if the URL has a region in it?
<anastasiamac> my biggest disappointment with this: if they'd fixed the data, all our current implementations (including 1.25) would work
<anastasiamac> since we are fixing the provider, only the release with the fix will work :(
<natefinch> right
<anastasiamac> dunno more than that.. u'll need to try it out with both our current (residing in CI) data without region and supplied data with regions.. not fun but this what I would have been doing
<anastasiamac> to see what's wrong and why it matters...
<natefinch> the thing I don't understand is that the URLs they're listing in the bug don't have any data in them
<natefinch> for example https://iad.images.api.rackspacecloud.com/v2
<natefinch> and the url that curtis says is correct (https://identity.api.rackspacecloud.com/v2.0) just returns some xml that does not look pertinent to juju at all
<natefinch> I also don't see a difference between the rackspace data and the rest of the clouds' data here: https://cloud-images.ubuntu.com/releases/streams/v1/index.json
<natefinch> they *all* have region in the endpoint url
<natefinch>      "region": "eu-central-1",
<natefinch>      "endpoint": "https://ec2.eu-central-1.amazonaws.com"
<natefinch> If I shrug any harder I'm gonna pull a muscle
<anastasiamac> natefinch: this looks like ec2 endpoint not rackspace.. curtis is the best person to talk about this bug :)
<anastasiamac> oh no - don't pull muscles \o/
<anastasiamac> talk to curtis :D
<natefinch> no no, I know... I was showing an endpoint from another cloud that also had region in the endpoint, which they were saying was bad somehow
<natefinch> I will talk to curtis when he gets back on.
<anastasiamac> \o/
<anastasiamac> m sooo happy that u r being thrown into simplestreams too :)
<natefinch> rofl
<natefinch> misery loves company
<anastasiamac> :D
<axw> thanks thumper
<thumper> np
<axw> thumper: I replied to your q, but was perhaps a bit terse. we don't care about the lifecycle state of the units, just that none have been added or removed
<thumper> ok
<thumper> fair enough
<thumper> I was just wondering
<thumper> you are good to land
<axw> ta
<menn0> axw: https://github.com/juju/juju/pull/6309 pls
<menn0> axw: reviewing your other PR now
<veebers> menn0, thumper: list-disabled-commands now returns "''" instead of "[]" if there are no disabled commands, I imagine this is a bug and not expected?
<thumper> no
<thumper> I think I changed that on purpose
<veebers> thumper: sorry, this is in format yaml/json too
<veebers> s/too//
<thumper> not sure now, sorry
<natefinch> that's a bug.  It should only be for tabular
<veebers> natefinch: cool, filing now. Thanks :-)
<natefinch> veebers: did you get that bug filed, 'cause I have the fix ready ;)  thumper?  +8 -2 https://github.com/juju/juju/pull/6310
<axw> menn0: looking
<thumper> natefinch: lgtm
<natefinch> thumper: thanks
<anastasiamac> natefinch: bug 1626824
<mup> Bug #1626824: list-disabled-commands  returns "''" instead of "[]" for json/yaml format <ci> <regression> <ui> <juju:Triaged> <https://launchpad.net/bugs/1626824>
<axw> menn0: LGTM
<menn0> axw: thanks. I know it's a backport but it was different enough that I thought it needed a review
<veebers> natefinch: yep, https://bugs.launchpad.net/juju/+bug/1626824
<mup> Bug #1626824: list-disabled-commands  returns "''" instead of "[]" for json/yaml format <ci> <regression> <ui> <juju:Triaged> <https://launchpad.net/bugs/1626824>
<veebers> anastasiamac: hah, I'm too slow
<anastasiamac> veebers: :) nps
<menn0> axw: just a few minor things for yours. ship it!
<axw> menn0: thanks
<axw> menn0: you really want me to add docs to each method on Backend and friends? can I just say "go look at the state package"?
<menn0> axw: just describe the interfaces briefly (referring to state if that makes sense)
<axw> menn0: ok
<menn0> axw: and maybe "<blah> implements <blerg>" for the method implementation (although I'm fairly +0 on that)
<menn0> implementations
<axw> menn0: okey dokey
<natefinch> veebers: fix landed, will be in next release
<thumper> I get this periodically
<thumper> ERROR cannot resolve charm URL "cs:xenial/ubuntu": cannot get "/xenial/ubuntu/meta/any?include=id&include=supported-series&include=published": Get https://api.jujucharms.com/charmstore/v5/xenial/ubuntu/meta/any?include=id&include=supported-series&include=published: dial tcp: lookup api.jujucharms.com on 127.0.1.1:53: server misbehaving
<thumper> I think we should have a retry around the charmstore access
<thumper> why does the ubuntu charm install g++ ?
<thumper> it is getting build-essential
<thumper> and python3.5-dev
<thumper> seems weird
 * thumper waits for them all to finish
<thumper> well that didn't work
<thumper> ah fuck...
 * thumper calls it a day
<thumper> later folks
<axw> anastasiamac: do you have time for a review?
<anastasiamac> axw: i wish but not really ;( m only marginally here - ghosting..
<axw> anastasiamac: np
<axw> menn0: are you EOD yet?
<menn0> axw: not quite
<axw> menn0: if you have time, would appreciate a review on https://github.com/juju/juju/pull/6311.
<axw> doh, doc string updates snuck in
<menn0> axw: looking
<menn0> axw: done
<menn0> axw: all good
<axw> menn0: thanks
<menn0> axw: Clean ups of uniter temporary download files: https://github.com/juju/juju/pull/6314
<axw> menn0: cool, looking
<menn0> axw: I just added some comments to the change about possible improvements. pls review anyway though.
<mup> Bug #1626878 opened: ERROR juju.worker.dependency engine.go <juju-core:New> <https://launchpad.net/bugs/1626878>
<axw> menn0: reviewed
<mup> Bug #1626878 changed: ERROR juju.worker.dependency engine.go <juju-core:New> <https://launchpad.net/bugs/1626878>
<mup> Bug #1626878 opened: ERROR juju.worker.dependency engine.go <juju-core:New> <https://launchpad.net/bugs/1626878>
<babbageclunk> hey menn0, I think I found it!
<babbageclunk> https://github.com/juju/juju/blob/master/state/cleanup.go#L155
<babbageclunk> menn0: ^
 * menn0 is about to watch something :)
<menn0> babbageclunk: that's awesome though!
<menn0> I don't see the problem though :-/
<babbageclunk> menn0: It deletes settings non-transactionally. If you're unlucky, you get this sequence: cleanup queued, undertaker runs, makes remove settings txn, cleanup runs, txn gets stuck.
<babbageclunk> menn0: I think
<babbageclunk> menn0: Making a binary now to get Jason to try it.
<babbageclunk> menn0: Isn't it a bit late to start watching something!? Kids don't sleep later on weekends.
<babbageclunk> :)
<babbageclunk> I guess he went to bed
<voidspace> frobware: ping
<voidspace> perrito666: ping
<perrito666> voidspace: pong
<perrito666> voidspace: be kind I am barely conscious
<voidspace> perrito666: babbageclunk says you had an issue with maas2Instance.Status being effectively cached and not updated per call
<voidspace> perrito666: heh
<voidspace> perrito666: are you fixing that or working around it?
<perrito666> voidspace: i thought it was that, but it might be something else, I fixed the call to be not cached anyway
<voidspace> perrito666: I wonder if it is behind the problem at Telefonica that frobware, macgreagoir and I have been looking at
<frobware> voidspace: pong
<voidspace> perrito666: is it landed?
<voidspace> frobware: I'm wondering whether an issue that perrito666 found is behind the telefonica issue
<voidspace> frobware: it doesn't *entirely* fit with the logs, so I'm still trying to dig into it
<perrito666> voidspace: it is not landed, can you describe the problem?
<voidspace> perrito666: basically a maas2Instance.Status call always returns the same value (the status when the maas2Instance was created)
<voidspace> frobware: basically a maas2Instance.Status call always returns the same value (the status when the maas2Instance was created)
<voidspace> perrito666: oops, that was meant to go to frobware
<voidspace> perrito666: we're seeing delayed provisioning on maas
<voidspace> perrito666: making a large deploy very slow
<voidspace> perrito666: frobware: so if somewhere in the code is polling Status waiting for it to change, then it won't
<perrito666> well I have both problems, not enough polling and the status being cached anyway
<perrito666> so I started by fixing the not enough polling
<frobware> voidspace: heh, interesting. in trying to repro locally I think my MAAS setup is truly borked atm
<voidspace> perrito666: frobware: I can't find any polling in the code yet, but the code paths are quite convoluted
<voidspace> frobware: :-(
<voidspace> perrito666: when do you think your fix will land?
<frobware> perrito666: happy to be a guinea pig before then
<voidspace> perrito666: I wonder if we can get binaries to them to check if this fixes the issue
<voidspace> frobware: it's provider/maas/maas2Instance.go:#L68
<perrito666> voidspace: let me push up my changes
<voidspace> frobware: perrito666: it looks like this is called by the provisioner_task in classifyMachine
<voidspace> frobware: hard to see if it's polling though
<voidspace> frobware: the Status is checked if InstanceId errors, which it does if there is no real maas machine yet (so no instance data)
<voidspace> frobware: and we do have "found machine pending provisioning" log lines from that code path
<perrito666> voidspace: its something cherilj added some time ago
<voidspace> perrito666: what, polling?
<perrito666> polling of the underlying status for useful return iirc
<voidspace> right
<voidspace> so it *could* be the problem
<perrito666> voidspace: that is half of my problem solved https://github.com/perrito666/juju/tree/fix_1604965
<perrito666> I am now trying to figure out where exactly we are refreshing, since my problem is that failed machines never get marked as such
<voidspace> right
<voidspace> perrito666: you probably want to delete the TODO as well.
<voidspace> frobware: worth trying that branch to see if it fixes the issue
<frobware> voidspace: yep, need to get a working MAAS2 atm
<voidspace> cool
<perrito666> voidspace: true, sorry was not thinking on uploading that :p
<voidspace> heh
<babbageclunk> frobware: You've done this lots - is sending someone a binary to run just zipping up juju and jujud? Anything else to send?
<frobware> babbageclunk: I do
<frobware> babbageclunk: cd ~/go
<frobware> babbageclunk: tar -cvf ~/juju-2-0-rc1-for-brad.tar --transform='s!^!juju-2.0-rc1-for-brad/!' bin/juju*
<frobware> babbageclunk: at least then it has a path in it when you untar
<babbageclunk> frobware: Awesome, thanks.
<frobware> babbageclunk: it's also handy to s/bard/for-git-commit-sha1/
<frobware> babbageclunk: and to complete the metacircular evaluator, s/bard/brad/
<babbageclunk> frobware: lol. In words, you mean, it's handy to include the git hash so you know what code they're running?
<frobware> babbageclunk: yup.
<babbageclunk> frobware: Will do
<frobware> babbageclunk: we should change our build to include this
<babbageclunk> frobware: Do I need to tell the recipient to use -build-agent? Will that work without the source?
<frobware> babbageclunk: I haven't mentioned it for binaries I've given away
<babbageclunk> frobware: So hopefully it just does the right thing? I'm a bit nervous because I'm building off the beta18 tag and in my testing it kept trying to use the stream version unless I passed --build-version
<frobware> babbageclunk: if you bootstrap with debug does it not show that it's using the juju from your $PATH?
<babbageclunk> No, it says http://paste.ubuntu.com/23219686/
<babbageclunk> frobware: ^. I might rebase onto tip just to be sure.
<frobware> babbageclunk: doesn't it say that anyway? I think if you do: $(which jujud) -- and grep for that in the debug output
<babbageclunk> frobware: No, that definitely doesn't appear.
<frobware> babbageclunk: hmm. so, with -build-agent?
<babbageclunk> frobware: The sha1sums don't match either (although I guess they might not with --build-agent either).
<babbageclunk> frobware: Huh, with build-agent they do.
<frobware> babbageclunk: TIL
<babbageclunk> frobware: That could just be that go didn't touch the binary because everything was up to date.
<babbageclunk> frobware: Ok, a quick scan of the code suggests that it'll use jujud from beside the running juju binary as long as it matches the version. So I think --build-agent is the right thing.
<mup> Bug #1627015 opened: no way to specify additional storage for controller(s) <juju-core:New> <https://launchpad.net/bugs/1627015>
<mup> Bug #1627015 changed: no way to specify additional storage for controller(s) <juju:Triaged> <https://launchpad.net/bugs/1627015>
<rick_h_> frobware: voidspace natefinch ping for standup
<voidspace> omw
<voidspace> rick_h_: gah, browser hang
<mgz> hangout just dropped me...
<voidspace> rick_h_: browser hang again - I can hear everyone but currently can't unmute!
<redir> morning
<redir> had one of those mornings where system wouldn't display fonts after resume.
<natefinch> lol
<redir> then wouldn't resolve dns after reboot
<redir> it was a two reboot morning
<natefinch> dang
<natefinch> speaking of which, my "hangs for 7 minutes after reboot" problem mysteriously went away after 2 months of me never rebooting. :)
<alexisb> redir, you system is trying to tell you it is friday
<redir> right. I better update and reboot everything before I leave tomorrow
<redir> I feel a little lenient toward this machine not always working right. It has been getting every upgrade since 2013, including betas
<redir> speaking of, is it time to upgrade to yakkety yet?
<redir> also have this ongoing issue where chromium doesn't display jpegs, on multiple different systems. Works on arch though.
 * redir quits whinging and gets to being productive
<voidspace> frobware: ping
<frobware> voidspace: pong - otp
<voidspace> frobware: ok, answer when you get a chance (you may not know anyway)
<voidspace> frobware: I'm trying to deploy a lxd (to the juju controller machine for convenience) and I get this error in the logs:
<voidspace> machine-0: 17:39:57 ERROR juju.worker.proxyupdater lxdbr0 has no ipv4 or ipv6 subnet enabled
<voidspace> It looks like your lxdbr0 has not yet been configured. Please configure it via:
<voidspace> 	sudo dpkg-reconfigure -p medium lxd
<voidspace> and then bootstrap again.
<voidspace> frobware: there is no lxdbr0 of course
<voidspace> frobware: and so the container is forever stuck in pending
<frobware> voidspace: i'm back
<frobware> voidspace: on MAAS?
<frobware> voidspace: are you using rc1/
<rick_h_> natefinch: ping, did you get your info and unblocked on things?
<natefinch> rick_h_: yeah, I got it.  The descriptions in the bug were evidently only one side of a conversation, more or less :)
<rick_h_> natefinch: ok cool, thanks
<mup> Bug #1627138 opened: unable to bootstrap on openstack provider <juju-core:New> <https://launchpad.net/bugs/1627138>
<mup> Bug #1627138 changed: unable to bootstrap on openstack provider <juju:New> <https://launchpad.net/bugs/1627138>
<natefinch> good lord bzr + launchpad is slow.
<natefinch> anyone online know how to set up credentials for rackspace?  It's asking for domain name, and that isn't described in our docs
<natefinch> gah, when you bootstrap rackspace, all the error messages talk about Openstack.
<natefinch> holy hell.  bootstrapCommand.Run is an almost 500 line function.
<redir> only 500?
<natefinch> :/
<redir> sorry natefinch I don't know about bootstrapping rackspace.
<natefinch> redir: it's ok.... either it doesn't matter or my bug is occurring before that field is checked :)
<natefinch> I am using delve for the first time ever, and it's amazing.  Running it in Visual Studio Code is almost as good as debugging C# with the real Visual Studio.... and that's saying a lot for me.
<redir> natefinch: that is saying something.
<redir> but I've not debugged C# or used visual studio
<rick_h_> natefinch: leave off the domain name? thought that was not needed
<rick_h_> natefinch: search our bugs for domain name, thought there was one about removing it
#juju-dev 2016-09-24
<redir> ls
<redir> whoops
<redir> reboot brb
#juju-dev 2016-09-25
<thumper> Unable to fetch Juju GUI info: error fetching simplestreams metadata: cannot read product data, invalid URL "https://streams.canonical.com/juju/gui/streams/v1/com.canonical.streams-released-gui.sjson" not found
<thumper> hmm...
<thumper> why do I still get this bootstrapping lxd
<thumper> menn0: do you get that ^^^ ?
<menn0> thumper: checking
<thumper> well... that was disappointing
<thumper> I had something that should have blocked 50% of the agents from stopping
<menn0> thumper: I don't get that GUI simplestreams error
<thumper> hmm...
<thumper> weird
<thumper> I always get it
<thumper> perhaps it is my proxy setup?
<thumper> menn0: got time for a quick call?
<thumper> I have a question or two
<menn0> thumper: sure
<thumper> hangout or bluejeans?
<menn0> thumper: I just jumped into our 1:1 HO
<mup> Bug #1627554 opened: juju binary broken on sierra <juju-core:New> <https://launchpad.net/bugs/1627554>
#juju-dev 2017-09-19
<axw> babbageclunk: can you please take a look at https://github.com/juju/1.25-upgrade/pull/29 when you have a moment?
<babbageclunk> axw: sure
<axw> babbageclunk: thanks
<babbageclunk> axw: sorry, meant to ping here but got distracted!
<wallyworld> axw: if you get a chance, here's a PR which adds to the processing of suspended relations. i'd also like to catch up with you before my EOD for a quick chat https://github.com/juju/juju/pull/7862
<axw> wallyworld: will take a look after lunch
<axw> wallyworld: and will be free to chat then too
<wallyworld> sgtm
<thumper> I think I'm through many of the bundle hurdles
<thumper> but this last one is kinda big
<thumper> I think I have come across a wonderful undocumented feature
<thumper> and I need to unpack what it means
<wallyworld> awesome
<axw> wallyworld: reviewed, and free to chat when you are
<wallyworld> axw: ok, give me 5
<wallyworld> axw: standup ho?
<axw> wallyworld: okey dokey
<axw> wallyworld: with the dyingOrDead thing, what I was getting at is: if the relation is Dying and also Suspended, won't it fail to be removed? doesn't the Suspended state mean the departed/broken hooks won't be run?
<wallyworld> axw: if it is already suspended, the hooks won't run as they already have been, but the uniter still removes the relation from its state
<axw> ah yes, already run, of course
 * wallyworld afk to buy more coffee, it's an emergency
<jam> axw: ping
<jam> I was trying to figure out why it takes so long to rebuild the tests for state, as we end up importing apiserver
<jam> it turns out that it is because we are using provider/dummy, but we seem to only be using it for storage
<jam> is there another way we could test those, or possibly move it to a different dir? then go test for state would have far less to rebuild
<jam> hm, maybe I'm wrong
<jam> looks like internalStateSuite also uses dummy
<hml> Dmitrii-Sh, jhobbs: i think i might have hit bug 1717590 the other day trying to run the amulet tests for the percona-cluster charm - i hit with juju based on develop, but not with 2.2.4  - i didn't look into what the cause was, focused on other things.  nothing else was running in the model.
<mup> Bug #1717590: percona-cluster unit constantly fires hooks, blocks hacluster unit from being setup <cdo-qa> <cdo-qa-blocker> <foundations-engine> <uosci> <OpenStack
<mup> hacluster charm:Incomplete> <OpenStack percona-cluster charm:Incomplete> <juju:New> <Telegraf Charm:Incomplete> <https://launchpad.net/bugs/1717590>
<Dmitrii-Sh> hml: hmm, good info. Might need to try to loop-test it
#juju-dev 2017-09-20
 * babbageclunk goes for a run
<thumper> otp
<axw> thumper: robot again
<thumper> it normally settles down
<thumper> not sure why it does this
<thumper> axw, jam: I'm going
<babbageclunk> axw: ping?
<axw> babbageclunk: pong
<babbageclunk> axw: hey - just tried an lxc-lxd upgrade with an IS chap - it returned an error about unknown config key lxd.hook.mount
<babbageclunk> axw: how do you think I should handle that?
<babbageclunk> I haven't been able to find anything on lxd mount hooks on the web.
<axw> babbageclunk: lxd.hook.mount? or lxc.hook.mount?
<babbageclunk> axw: oops, the latter
<axw> babbageclunk: can you check what the hook is? I haven't come across that one in my testing
<babbageclunk> From here https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html it's a script that runs after mounting is done but before pivot_root, whatever that is.
<axw> babbageclunk: yeah, this is on bootstack? I think I found a charm that's setting that in lxc config
<babbageclunk> axw: yup - I think they're using it to set static routes in the containers.
<axw> rogpeppe1: sorry, my PR is a bit messy. I would appreciate your thoughts on the general approach
<axw> it started out as a targeted PR, then morphed along the way
<rogpeppe1> axw: i just looked at it in passing tbh.
<rogpeppe1> axw: i don't really understand it, i'm afraid, or quite what the eventual motivation is
<axw> rogpeppe1: ok. motivations are in the description, which I take I didn't express well :)
<axw> rogpeppe1: gotta eat dinner and then go out, will try again tomorrow
<rogpeppe1> axw: ok, i think i understand the motivation a bit more now
<rogpeppe1> axw: but... if we moved to something like Raft, would we really keep all the Op crap?
<axw> rogpeppe1: not the underlying ones, no. but the high level operations, yes - they're the ones that would map to log entries
<axw> rogpeppe1: i.e. DestroyUnitOperation et al. would be made serializable, and would be a log entry
<axw> that's my vague idea atm anyway
<rogpeppe1> axw: ok, that seems reasonable
<axw> there'd need to be another sort of entry for ops in a transaction
<rogpeppe1> axw: but for mongo, i'm not entirely convinced that arbitrary operations can be composed
<axw> rogpeppe1: no, they can't, but we do it already... :)
<rogpeppe1> axw: this will only make it worse, right?
<rogpeppe1> axw: because people will *think* they can compose a NewService op with a DestroyUnit op, for example
<axw> rogpeppe1: hmm, I'm not sure. I guess it would encourage it for future operations. for existing ones, I don't think they're going to be any worse
<axw> I mean State.AddApplication adds both application and unit ops; if we compose them outside, you get the same result
<rogpeppe1> axw: anyway, i'd like to see a more focused version of the PR so that it's clear what changes are as a result of the introduction of ModelTransaction
<axw> rogpeppe1: ok, will try and do that tomorrow
<rogpeppe1> axw: yup, that's true for AddApplication specifically, and it's tested that way
<axw> fair point. bounded badness
<rogpeppe1> axw: but i'm concerned that if we provide arbitrary composition, that things will fail easily
<rogpeppe1> axw: also, it's very easy to blow the txn doc size limit
<rogpeppe1> axw: (we already do in some cases)
 * axw out
#juju-dev 2017-09-21
<axw> babbageclunk: https://github.com/juju/1.25-upgrade/issues/32 <- huh? what do you mean we don't use the local juju client? the plugin is called by it...
<axw> babbageclunk: or do you just mean, other than the calling into the plugin
<babbageclunk> axw: I mean, the plugin doesn't run juju - so it doesn't matter how they're installed.
<axw> babbageclunk: okey dokey
<babbageclunk> axw: sorry, those are mostly notes for myself for things that have occurred to me while testing with xavpaice
<axw> babbageclunk: np
<babbageclunk> axw: although, one I wanted to ask you about was the lxc-lxd conversion - it seemed like it didn't leave the containers running?
<axw> babbageclunk: yeah just saw that, that's odd - it's always worked for me...
<babbageclunk> axw: I haven't had a chance to dig in there - he's had to go do some real work
<axw> you can see in the code that we start the LXD containers and wait for them to be accessible
<axw> babbageclunk: in fact, the migrate-lxc command actually runs an SSH command to check that they're accessible
<babbageclunk> Hmm
<babbageclunk> He ran that without showing me output. Next time I do it I'll get him to pastebin it so we can see.
<axw> babbageclunk: okey dokey
<babbageclunk> axw: how was assembly? Heather and I had a fun chat-meeting.
<axw> babbageclunk: not bad, just went because Charlotte was getting a certificate. the class item was a bit meh :)
<babbageclunk> stink
<axw> babbageclunk: did I miss anything? how're things going?
<babbageclunk> Going through an upgrade with xav has been super helpful - no blockers yet, I've been adding issues for things that could be smoothed out as they come up.
<axw> veebers: what's up with the "Can one of the admins verify this patch?" message from the bot? what does that even mean?
<babbageclunk> The lxc migration and import's done, but upgrade-agents hasn't been finished yet - we were just manually restarting the lxds and about to retry when he got interrupted by other stuff.
<babbageclunk> I'm getting set up with canonistack to test the fixes I'll make to openstack tagging.
<veebers> axw: where are you seeing that?
<veebers> axw: I'm trying out some jenkins things, I had assumed I had correctly limited it to just my one test branch
<veebers> axw: to answer your question, the plugin I'm testing apparently has a whitelist of 'admin' that can actually trigger a build (and thus asks for one of them to comment on the PR)
<veebers> Oh, I see where that might have leaked though. Sorry, that should be plugged now
<axw> veebers: no worries, was just curious - thank you
<veebers> axw: I think I've removed all instances of the message. Sorry for the noise :-P
<axw> all good :)
<babbageclunk> axw: w00t - blahdeblah is happy with his upgraded canonistack model (modulo the security group problem)
 * blahdeblah does a little dance
<axw> babbageclunk: sweet :) nice work
<blahdeblah> \o\ /o/ \o\ /o/
<axw> blahdeblah: was there any lxc in your environment?
<babbageclunk> no
<axw> okey dokey
<blahdeblah> axw: zarroo lxcs present!
<babbageclunk> that's the thing i'm working on now with xavpaice
 * axw nods
<babbageclunk> axw: actually, any ideas about this? https://pastebin.canonical.com/198968/
<babbageclunk> specifically the ssh errors at the end
<axw> babbageclunk: the ssh_exchange_identification?
<babbageclunk> yup
<axw> babbageclunk: might be that the container's still initialising. try running migrate-lxc again, should resume from the right place
<axw> oh but... already migrated
<axw> so I guess that's what you/he just did
<babbageclunk> I can ask him to try it again again though
<axw> babbageclunk: doesn't hurt I suppose... but if that's what was already done, I wouldn't expect a change
<axw> babbageclunk: strange that it only happens on a subset of containers tho
<axw> maybe they are still coming up, and those ones are lagging behind the others
<axw> babbageclunk: the loop in waitLXDContainerReady was supposed to prevent this ... :/
<babbageclunk> axw: Looking at this, https://unix.stackexchange.com/questions/156908/intermittent-ssh-exchange-identification-connection-closed-by-remote-host-err
<babbageclunk> What if we're creating too many connections to the host and overloading it?
<axw> babbageclunk: you mean to the host of the lxc container? possible
<babbageclunk> yeah
<axw> babbageclunk: get him to try with --match=x/lxc/y
<axw> to match just one container that's failing
<babbageclunk> axw: he got this before (not with --match) https://pastebin.canonical.com/198977/
<axw> better
<babbageclunk> but still one failure even though it didn't need to migrate any
<axw> babbageclunk: yeah it still needs to stop the juju agents though
<babbageclunk> ok
<babbageclunk> Oh, for idempotence?
<axw> babbageclunk: yup
<babbageclunk> yeah, makes sense although wasn't obvious to me at first.
<babbageclunk> axw: If I want to throttle the max connections per host, what's a good way to structure that in Go?
<babbageclunk> axw: might be easier to just do a max number of concurrent exec tasks, rather than per-host.
<axw> babbageclunk: not sure there's a one size fits all, I think I'd just batch the errgroup.Go's and call Wait when the limit is hit
<babbageclunk> axw: ok, thanks
<babbageclunk> axw: I was looking at this: http://jmoiron.net/blog/limiting-concurrency-in-go/ but it seems a bit lower level - mixing it with errgroup seems clumsy.
<axw> babbageclunk: actually that's probably not a bad approach. fill up a channel to whatever capacity, take one on entry to Go, immediately after defer adding one back
<axw> babbageclunk: you could have a map of channels keyed on host addr too
<babbageclunk> yeah, makes sense - I might try that.
<babbageclunk> s/ might/'ll/
<babbageclunk> axw: ha ha, fast
<axw> :)
<axw> reviewing the other now
<babbageclunk> axw: Thanks
<babbageclunk> axw: mostly wondering about the panicking/timing out stuff. Actually I think I'll change the panics to errors.
<axw> babbageclunk: sounds fine, I don't mind either way - it's a case where panic is reasonable, but we don't really do panics
<babbageclunk> just feels a bit icky to have panics and lots of goroutines
<axw> babbageclunk: your call, code looks sound anyway
<babbageclunk> cool, thanks
<axw> babbageclunk: did you find out about your flight yet?
<babbageclunk> axw: no, not yet
<axw> babbageclunk: hope you're not too tired for the tour. I guess we'll all be jet lagged anyway though.
<babbageclunk> I think the worst-case scenario is that thumper and I have to get the flight that gets into NYC at about 5 - in that case I'll probably bail on the tour.
<axw> ouch, yeah
<babbageclunk> the advisory at the moment lists our flight NZ8 as one of the ones not needing a refuelling stop - but it doesn't say whether that's for all days or just for now
<balloons> thumper, what happened to github.com/juju/loggo/loggocolor
<thumper> um... what do you mean?
<balloons> github.com/juju/loggo/loggocolor is a 404 now
<thumper> it is there for me
<balloons> thumper, can you check the repo settings perhaps? unless I'm crazy
<thumper> https://github.com/juju/loggo has it
<thumper> where are you seeing it missing?
<balloons> I can't import it
<thumper> last commit into loggo was 5 june
<balloons> so my build fails
<balloons> I guess it's me.. has been all along
<balloons> thumper, specifically cmd/juju/commands/debuglog.go import fails, which actually is the only place it's imported
<thumper> which branch?
<thumper> and where?
<balloons> thumper, this is the archive package. I'm going to assume it's just more annoyingness
<thumper> balloons: check the dependencies are updated properly
<balloons> thumper, ack, I don't doubt it because we're building debs, snaps, and I can build manually myself.
<balloons> but no dice for archive builders
<thumper> does the commit hash of loggo match the dependencies.tsv
<balloons> thumper, I'm willing to bet it's the debian rules messing with things
<balloons> thumper, anyways, I'll head back to it
 * thumper nods
<thumper> ok
<balloons> and yes, it matches
<thumper> wat?...
<thumper> FFS
<balloons> ohh yep, it's debian/rules
 * thumper feels his head is close to exploding
<babbageclunk> Oh dear
<babbageclunk> I mean, I feel like that too, but just because I'm getting so smart these days!
<thumper> babbageclunk: care to be a teddybear?
<babbageclunk> thumper: of course!
<thumper> https://hangouts.google.com/hangouts/_/canonical.com/stuff
<thumper> babbageclunk: ^^
<babbageclunk> did you just ragequit
<babbageclunk> thumper: ?
<thumper> back
<thumper> no it went to sleep (or thought it did)
<thumper> I was thinking and talking, and didn't realise you weren't there for a while
<thumper> babbageclunk: ^^
<babbageclunk> thumper: I dropped anyway - team meeting soon
<thumper> yeah
<thumper> ok
<thumper> babbageclunk: meeting?
#juju-dev 2017-09-22
<babbageclunk> hml: sorry, just realised I totally forgot to review your PR! :(
<hml> babbageclunk: it all worked out - andrew found it and reviewed. :-)
<babbageclunk> oh yay
<thumper> babbageclunk: are you up for some more pair programming?
<babbageclunk> sure
<thumper> babbageclunk: did you want to break for lunch first?
<thumper> babbageclunk: my daughter wants cream from the supermarket
<babbageclunk> thumper: I'll have lunch a bit later - my mum's popping round
<thumper> babbageclunk: ok, let's jump on the hangout again
<thumper> and do some now
<thumper> can break when we need to
<babbageclunk> cool
<MalcomX379> [abusive spam flood removed]
<blahdeblah> thumper: ^ time for a kick
<thumper> blahdeblah: done
<babbageclunk> what a dick
<blahdeblah> ta
<veebers> thanks thumper
