[00:28] <davecheney> can anyone help me make the juju-core build recipe dependent on a ppa
[00:28] <davecheney> is that possible ?
[00:40] <davecheney> arosales: did jamespage say we could use his PPA via a LP build recipe ?
[00:40] <davecheney> first signs are that this is not possible
[01:29] <m_3> davecheney: looking
[01:29] <davecheney> m_3: s'ok i figured it out
[01:29] <m_3> I think it's a separate
[01:29] <m_3> ah, cool
[01:30] <davecheney> m_3: arosales bigjools is there a way to get this bumped up ?
[01:30] <davecheney> https://code.launchpad.net/~dave-cheney/+recipe/juju-core
[01:30] <davecheney> 6-7 hour build delay before I find out if it worked
[01:30] <bigjools> davecheney: I no longer have da powahs, go to the internal ops channel and ask nicely
[01:31]  * davecheney practices best asking for things smile
[01:31] <bigjools> davecheney: also you get get a permanent boost on a nominated PPA by also asking nicely
[01:31] <m_3> davecheney: dunno
[01:31] <bigjools> s/get get/can get/
[01:32] <bigjools> I approve of your LP avatar
[01:32] <m_3> davecheney: might be worth the time setting up pbuilder-dist... you'll at least know in short order for archs and series that you target
[01:33] <bigjools> indeed, pbuilder should be used for build testing, NOT launchpad
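[The pbuilder-dist workflow m_3 and bigjools suggest looks roughly like the following. This is an illustrative sketch assuming the ubuntu-dev-tools package is installed; the `.dsc` filename is a placeholder.]

```shell
# One-time setup of a chroot per series/arch (pbuilder-dist is in ubuntu-dev-tools):
pbuilder-dist precise amd64 create

# Then test-build the source package locally instead of waiting on Launchpad:
pbuilder-dist precise amd64 build juju-core_*.dsc
```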
[01:33] <davecheney> bigjools: i need to move to having a proper source build, not lp build recipes
[01:33] <davecheney> they are a crutch
[01:33] <bigjools> it's a piece of piss to do it all from a recipe locally
[01:34] <bigjools> recipes make it easy
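[Running a recipe locally, as bigjools describes, can be sketched with bzr-builder; the recipe filename here is a placeholder for a recipe saved from the Launchpad page.]

```shell
# Build the recipe into a source package locally (bzr-builder provides dailydeb):
bzr dailydeb juju-core.recipe working-dir
# The resulting .dsc in working-dir can then be test-built with pbuilder.
```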
[01:34] <m_3> my fav is the 'merge-part'... ahem
[01:35] <m_3> sorry... s/merge-part/nest-part/
[01:35] <m_3> that's the icky one
[01:43]  * bigjools imagines davecheney's "best asking for things smile" to look a bit like Sheldon Cooper's
[01:44] <davecheney> bigjools: yes, but with more sincerity
[03:26] <bigjools> wallyworld: do you have affinity group stuff in openstack?
[03:27] <wallyworld> not sure tbh
[03:27] <wallyworld> never had cause to use it
[03:27] <bigjools> trying to work out whether the azure solution we're going with needs to worry about it
[03:27] <wallyworld> i'll ask martin tonight
[03:28] <bigjools> the way juju works seems to assume your env is one group
[03:28] <bigjools> if that's not the case then we have a problem
[03:28] <wallyworld> there's only one state server right now
[03:28] <bigjools> yeah
[03:28] <wallyworld> but that will change with HA
[03:29] <bigjools> when is that coming?
[03:29] <wallyworld> "soon"
[03:29] <wallyworld> i think it's next after API work
[03:29] <wallyworld> can't recall exactly
[03:29] <bigjools> ok ta
[03:29] <bigjools> should be ok... I think
[03:31] <wallyworld> actually, in sept
[03:31] <wallyworld> after manual provisioning
[04:04] <davecheney> faaaaaaaaaaaaaaaark
[04:04] <davecheney> 2013-07-10 04:00:46 ERROR juju supercommand.go:234 command failed: cannot start bootstrap instance: no "precise" images in ap-southeast-2 with arches [amd64 i386]
[04:04] <davecheney> error: cannot start bootstrap instance: no "precise" images in ap-southeast-2 with arches [amd64 i386]
[04:05] <davecheney> how the fuck did that happen
[04:07] <davecheney> precise	server	release	20130502	ebs	amd64	ap-southeast-2	ami-9d4bdba7	aki-31990e0b		paravirtual
[04:07] <davecheney> precise	server	release	20130502	ebs	i386	ap-southeast-2	ami-9f4bdba5	aki-33990e09		paravirtual
[04:07] <davecheney> precise	server	release	20130502	instance-store	amd64	ap-southeast-2	ami-9b4bdba1	aki-31990e0b		paravirtual
[04:07] <davecheney> precise	server	release	20130502	instance-store	i386	ap-southeast-2	ami-4f4bdb75	aki-33990e09		paravirtual
[04:07] <davecheney> they really are there, I promise
[04:30] <davecheney> wallyworld: is ec2/image.go used anymore
[04:30] <davecheney> or has it been replaced by the stuff in environs/instances ?
[04:57] <davecheney> can anyone bootstrap at the moment ?
[04:57] <davecheney> I keep getting 2013-07-10 04:56:52 ERROR juju supercommand.go:234 command failed: cannot start bootstrap instance: no "precise" images in us-west-1 with arches [amd64 i386]
[04:57] <davecheney> error: cannot start bootstrap instance: no "precise" images in us-west-1 with arches [amd64 i386]
[05:20] <davecheney> ffs, every time I go to use juju is broken
[05:20] <davecheney> about to call instances.FindInstanceSpec with images []
[05:20] <davecheney> FindInstanceSpec: matchingTypes [{ m1.small [amd64 i386] 1 1740 80 0xe3bc50 0xc210000510} { m1.medium [amd64 i386] 1 3840 160 0xe3bc50 0xc210000518} { c1.medium [amd64 i386] 2 1740 183 0xe3bc50 0xc210000560} { m1.large [amd64] 2 7680 320 0xe3bc50 0xc210000520} { m2.xlarge [amd64] 2 17408 495 0xe3bc50 0xc210000548} { m1.xlarge [amd64] 4 15360 640 0xe3bc50 0xc210000528} { m3.xlarge [amd64] 4 15360 700 0xe3bc50 0xc210000530} { c1.xlarge [amd6
[05:20] <davecheney> FindInstanceSpec: possibleImages []
[05:20] <davecheney> 0 images are passed as candidates...
[05:31] <wallyworld> davecheney: it's used, but only as glue logic between ec2 provider and simplestreams
[05:32] <wallyworld> davecheney: sounds like the metadata that has been set up for ec2 is out of date
[05:34] <davecheney> wallyworld: i think it is the cost data
[05:34] <davecheney> ec2/image.go ~ line 60
[05:34] <davecheney> nothing is making it through that filter
[05:34] <wallyworld> let me look
[05:36] <davecheney> wallyworld: http://paste.ubuntu.com/5860638/
[05:39] <wallyworld> davecheney: at first look, it seems very weird since there's no constraints to exclude instance types
[05:49] <wallyworld> davecheney: it would be good to see some more tracing, especially the value of matchingImages around line 33 of ec2/image.go
[05:50] <wallyworld> and also images from line 38
[05:51] <davecheney> dialing connection 20 cloud-images.ubuntu.com:80
[05:51] <davecheney> findInstanceSpec: matchingImages []
[05:51] <davecheney> none
[05:53] <wallyworld> hmmmm
[05:54] <davecheney> wallyworld: is it broken for you ?
[05:54] <davecheney> i tried other series' and regions
[05:54] <wallyworld> davecheney: well, i bootstrapped an ec2 env this morning from source
[05:54] <wallyworld> different region though i think, let me check
[05:55] <davecheney> ap-southeast-2 and us-west-1 are broken for me at least
[05:55] <wallyworld> i used the default, us-east-1
[05:56] <davecheney> trap for young players, us-east-1 always works
[05:56] <davecheney> nope, us-east-1 doesn't work for me
[05:56] <wallyworld> davecheney: the fact that matchingImages is [] means that there is a mismatch between the simplestreams metadata and the hard coded regions and endpoints in the aws lib
[05:57] <wallyworld> well, that's my theory
[05:57] <davecheney> goamz is up to date
[05:58] <wallyworld> since the metadata parsing process is to load up all simplestreams metadata and filter out the stuff that doesn't have the same region and endpoint as defined in goamz
[05:58] <wallyworld> i'll boot up an env again and see what happens
[06:01] <wallyworld> davecheney: fails for me now. worked this morning
[06:01] <davecheney> bloody hell
[06:02] <wallyworld> davecheney: and it appears simplestreams metadata was recently updated
[06:02] <wallyworld> davecheney: http://cloud-images.ubuntu.com/releases/streams/v1/
[06:02] <wallyworld> those timestamps would be UTC
[06:02] <wallyworld> so last update was 2 hours ago i think
[06:03] <wallyworld> davecheney: i'll dig into it and see what i find. i'm not blaming the metadata yet. need evidence first :-)
[06:03] <davecheney> shit, 14:02 was when I first noticed it
[06:04] <davecheney> it must have _just_ broken then
[06:04] <davecheney> actually about 15 minutes beforehand
[06:04] <davecheney> but I spent that time facepalming
[06:04] <wallyworld> hmmm. i'll see what i can find
[06:05] <davecheney> wallyworld: it *looks* like there is a ap-southeast-2 (and other) region defined in the streams data and index files
[06:06] <wallyworld> yes it dos
[06:06] <wallyworld> does
[06:06] <wallyworld> there's nothing obviously wrong that jumps out at me
[06:11] <davecheney> 2013-07-10 06:11:01 WARNING juju.environs.ec2 image.go:43 no matching image meta data for constraints: &instances.InstanceConstraint{Region:"us-east-1", Series:"precise", Arches:[]string{"amd64", "i386"}, Constraints:constraints.Value{Arch:(*string)(nil), Container:(*instance.ContainerType)(nil), CpuCores:(*uint64)(nil), CpuPower:(*uint64)(0xc2100dad98), Mem:(*uint64)(nil)}, Storage:(*string)(0xc210280fd0)}
[06:14] <davecheney> WHOA, loggo doesn't output warnings without -v
[06:15] <wallyworld> davecheney: some tracing shows that juju thinks the index file is missing some data
[06:16] <wallyworld> but to my eye, the data is there, so looking further
[06:16] <davecheney> ╯°□°）╯︵ ┻━┻
[06:16] <wallyworld> huh?
[06:16] <jam> davecheney: 'juju' never output anything without '-v', that behavior was preserved when we switched to loggo.
[06:16] <davecheney> jam: that sucks
[06:17] <jam> In a "lets change one thing at a time" sort of way
[06:17] <davecheney> that is a suckful experience
[06:17] <jam> davecheney: agreed.
[06:17] <jam> And it was intended to change
[06:17]  * davecheney writes a shitty gram to the list
[06:17] <jam> at least by a few of us
[06:18] <jam> wallyworld: any chance we have an archive of the contents of simplestreams, so we can compare to what it used to be, and maybe see an obvious way it was generated differently?
[06:18] <wallyworld> i wish :-)
[06:18] <wallyworld> that would be too easy
[06:18] <wallyworld> i don't think we have the previous copies
[06:18] <wallyworld> could be wrong
[06:18] <jam> wallyworld: it is worrying that there is no fallback/archiving done for important info like this.
[06:19] <wallyworld> jam: there might be, *I* just am not aware of it
[06:19] <jam> wallyworld: so given there is json sjson and json.gpg, is json.gpg just a --detach-sign, and sjson just an inline sign?
[06:19] <wallyworld> yes
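[The three signing variants jam asks about can be illustrated with gpg; these commands are a sketch of the relationship between the files, assuming a configured signing key, not the actual publication tooling.]

```shell
# Illustrative only: how the three variants of each metadata file relate.
gpg --clearsign   -o index.sjson    index.json   # sjson: inline-signed copy
gpg --detach-sign -o index.json.gpg index.json   # .gpg: detached signature

# Verification:
gpg --verify index.json.gpg index.json           # detached
gpg --verify index.sjson                         # inline
```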
[06:20] <jam> wallyworld: it is clear that the archives aren't being put in an "old/" or a "$DATE/" sort of directory
[06:20] <jam> which is probably what I would like to see.
[06:20] <wallyworld> agreed
[06:20] <jam> wallyworld: I think in theory the history is in the file itself, right? With the problem that it gets regenerated from scratch with whatever the current logic is.
[06:20] <wallyworld> yes
[06:23] <wallyworld> jam: davecheney: the index file is wrong
[06:23] <davecheney> wallyworld: \o/
[06:23] <wallyworld> it doesn't contain an "index" map key
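[For context, a well-formed simplestreams index file is keyed by a top-level "index" map, roughly like the abridged, illustrative sketch below; the product IDs and paths shown here are examples, not the exact published content.]

```json
{
  "index": {
    "com.ubuntu.cloud:released:aws": {
      "format": "products:1.0",
      "path": "streams/v1/com.ubuntu.cloud:released:aws.json",
      "products": ["com.ubuntu.cloud:server:12.04:amd64"]
    }
  },
  "updated": "Wed, 10 Jul 2013 04:00:00 +0000",
  "format": "index:1.0"
}
```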
[06:26] <wallyworld> i'll email the relevant people
[06:26] <wallyworld> clearly our processes need to be fixed
[06:27] <wallyworld> there should be a staging area where new metadata is vetted
[06:27] <wallyworld> using some acceptance tests
[06:53] <jamespage> davecheney, you figured out the dependencies between PPAs, right?
[06:54] <davecheney> jamespage: yes sir I did
[06:54] <davecheney> hopefully it'll bake correctly this afternoon
[06:54] <jamespage> also mgz copied those binaries to https://launchpad.net/~juju/+archive/golang
[06:54] <davecheney> ahhhhhhhhhh
[06:55]  * davecheney shakes fist at LP for having *three* names for things
[06:55] <davecheney> url, name and description
[06:55] <davecheney> ffs
[07:06] <jam> jamespage: from what I see in that page, it looks like 1.1.1 is officially in saucy's archive?
[07:27] <jamespage> jam: it is indeed
[07:27] <jamespage> (uploaded it yesterday)
[07:28] <jam> jamespage: great! Did you get the multi-arch stuff sorted out?
[07:28] <jam> (last I saw there were failures to build go on i386 or something)
[07:28] <jamespage> jam: no - that's still pending - I just pushed a new version based on the packaging currently in saucy
[07:29] <jamespage> jam: there are still some issues with CGO and the approach that the debian maintainer is taking re cross compiling
[07:32] <jam> jamespage: so it is asking the amd64 builder to cross compile an i386 binary?
[07:32] <jam> (or something to that effect)
[07:32] <jamespage> jam: actually it's the other way round - i386 builds the compilers and runtimes for all archs
[07:32] <jamespage> not sure I'm 100% comfortable with that
[07:33] <jam> jamespage: that sounds problematic for i386 to build an amd64 arch
[07:35] <fwereade> wallyworld, the diff for https://codereview.appspot.com/10777044/ is very dirty, would you take a look at it please?
[07:54] <jamespage> jam: maybe, but cross-compilation is a feature of golang
[07:54] <jamespage> jam: so long as the required cross build headers are present it should work
[08:01] <jam> good morning fwereade
[08:01] <fwereade> jam, heyhey
[08:34] <TheMue> hmm, the bootstrap_test in cmd/juju fails intermittently
[08:39] <fwereade> TheMue, bug it please (and please fix it if it's reasonably easy to do so)
[08:39] <fwereade> jam, just looking at SyncAPIServerState
[08:40] <fwereade> jam, is that really the only way to get at the *State backing the api server?
[08:41] <TheMue> fwereade: yep, will first propose my sync-tools and then investigate. seems to be only if my system is under load by parallel tasks.
[08:42] <fwereade> TheMue, cheers
[08:44] <fwereade> jam, I can believe it is, I was just not even aware that the api server was running in the dummy environment
[08:44] <fwereade> jam, and it kinda feels like there's something rather surprising there
[08:45] <fwereade> frankban, heyhey, reviewed, please let me know what you think
[08:46] <fwereade> jam, (there are also perspectives from which it makes sense ofc)
[08:46] <frankban> fwereade: cool, thanks
[08:56] <fwereade> jam, https://codereview.appspot.com/10890047/ reviewed
[09:14] <jtv2> fwereade: I put off my StartInstance/Bootstrap refactoring to give Tim time to finish his, but I have another good one for you.  Got a moment for a video chat?
[09:15] <fwereade> jtv, sure, would you start one please?
[09:15] <jtv> Coming right up.
[09:15] <jtv> fwereade: https://plus.google.com/hangouts/_/77dc830b0aa56248e9ccb3151e2e92e99cb4427c?hl=en-GB
[09:42] <jam> fwereade: sorry about the delay, I was out getting groceries. But yes the actual server machine you are running is started as part of dummyenviron.Bootstrap()
[09:43] <jtv> Does that even work on the dummy provider!?
[09:43] <jam> And that isn't returned anywhere that I could obviously see.
[09:43] <jam> jtv: It is running in-process
[09:43] <fwereade> jam, yeah, that makes some sort of sense... even though the fact that we're running hacked-up half bits and pieces of a bootstrap and a machine agent is really just crazy
[09:43] <jam> fwereade: certainly it isn't what *I* expected.
[09:44] <fwereade> jam, all I'd known about was the hacked-up bootstrap
[09:44] <jam> I would have thought JujuConnSuite would have been starting it locally and retaining a reference to it, rather than indirectly via Bootstrap
[09:45] <fwereade> jam, yeah, that would be the sane thing to do
[09:45] <jam> fwereade: so, I'm perfectly happy to change the code so that the back-door exposes the full state.State object
[09:46] <jam> and then we shove that on the test suite.
[09:46] <jam> I'm not 100% sure what to do about the State connection we have, which is an independent connection to the mongodb
[09:46] <jam> we could just replace it
[09:46] <jam> so the lines in JujuConnSuite for  "s.State = NewConn()" would be replaced with "s.State = ExtractStateConnFromEnvironment()"
[09:46] <jam> or whatever we call that thing.
[09:47] <jam> That means the tests for API Server also use the same State object
[09:48] <jtv> Anybody free to review a refactoring across the providers?  Cleans up the interaction between environments and their storage: https://codereview.appspot.com/11087043
[09:48] <jam> fwereade: for https://codereview.appspot.com/10890047/diff/5001/state/apiserver/upgrader/upgrader_test.go?column_width=80
[09:49] <jam> Does defer AssertStop() interact if you directly call AssertStop() at the end of the test?
[09:49] <jam> I like the "make sure it is stopped even if these assertions fail"
[09:49] <fwereade> jam, AssertStop should be safe repeatedly, because Stop should also be
[09:49] <jam> I'm worried about "AssertClosed" leaving us in a state where AssertStop fails accidentally.
[09:49] <fwereade> jam, won't help in cases where we expect an error out of stop
[09:49] <jam> I'll put it back, if it fails, I'll take it out. :)
[09:49] <fwereade> jam, ok :)
[09:50] <fwereade> jam, I think AssertClosed should be independent
[09:50] <fwereade> jam, I'm not sure about smooshing all possible state conns together though
[09:51] <fwereade> jam, we definitely need access to the API one but I'd prefer to extract that to a sensible place so we have a bit more clarity on what's in play before we make sweeping changes
[09:55] <dimitern> fwereade: hey, can you take a look at this please? https://codereview.appspot.com/10944044
[09:56] <fwereade> dimitern, ack
[09:58] <dimitern> jam: you too maybe? since it's watcher stuff ^^
[10:01] <fwereade> dimitern, LGTM with comment on the description
[10:01] <dimitern> fwereade: thanks, I'll try to accommodate this at client-side on the next CL
[10:01] <fwereade> dimitern, great
[10:02] <dimitern> fwereade: the picture is somewhat unclear yet though
[10:02] <fwereade> dimitern, the easy way to do it is to s/LifecycleWatcher/StringsWatcher/ on the client side and be done with it
[10:02] <dimitern> fwereade: that's where the StringsWatcher (in params) interface will come into play (client-side), right?
[10:02] <fwereade> dimitern, yeah, but it shouldn't be in params
[10:03] <dimitern> fwereade: ah, so you're saying to have only one concrete implementation at client-side, rather than an interface and multiple implementations?
[10:03] <fwereade> dimitern, and tbh... api/watcher.NotifyWatcher and api/watcher.StringsWatcher both seem to be like perfectly reasonable concrete types, no interface hackery required
[10:03] <fwereade> dimitern, they'll still implement those interfaces just fine, but we don't need to declare them
[10:04] <dimitern> fwereade: ok, then I can get rid of the NotifyWatcher interface in params and do the rename in this CL?
[10:04] <fwereade> dimitern, sure, that works for me
[10:04] <dimitern> fwereade: cool
[10:04]  * fwereade bbiab
[10:04] <jam> fwereade: so one concern is how to label the thing. "StateBeingUsedByAPIServer" is a bit long :)
[10:05] <fwereade> jam, heh, and APIState is taken
[10:05] <fwereade> jam, if api were called api maybe things would be saner, but that's a big change
[10:07] <fwereade> jam, BackingState? at least it's compact
[10:08] <jam> dimitern, fwereade:  so for StringsWatcher, I'd certainly want to see something that looks a bit more like NotifyWatcher, where you pass in the *WatchResult and get back a running StringsWatcher
[10:08] <jam> the existing code makes the Watch call for you
[10:08] <fwereade> jam, but doesn't collide, and with a comment it'd make some sense
[10:08] <jam> which means that the client side API is a bit tangled.
[10:08] <fwereade> jam, agreed, as soon as we need an implementation, which is imminent
[10:08] <jam> fwereade: StateInAPIServer ?, but BackingState is reasonable.
[10:09] <fwereade> jam, dimitern: for the purposes of this CL I think the implementation should be left alone though
[10:10] <fwereade> jam, your call
[10:11] <dimitern> jam: should there be worker/stringswatcher as well for it?
[10:12] <jam> fwereade: usually multiple threads in IRC can be resolved because you are talking to a different person. Hard when it is all the same. :)
[10:12] <jam> dimitern: If you think there is commonality worker/stringsworker could be really nice, as it then lets you decouple "I'm watching something" from "Here's what I do when it changes"
[10:14] <fwereade> dimitern, I'm not sure yet
[10:15] <fwereade> dimitern, would you mail thumper for an opinion on how well it'll fit with the provisioner? if that will really have to be tortured to make it fit, then no
[10:16] <fwereade> dimitern, because deployer will be the only implementation for the foreseeable... oh wait
[10:16] <fwereade> dimitern, minunits handling
[10:16] <dimitern> fwereade: yeah
[10:16] <fwereade> dimitern, ok, that's a yes, and if the provisioner can fit it too it'll be a nice bonus
[10:16] <fwereade> dimitern, thanks
[10:16] <jtv> Sorry to interrupt... anybody free to review a refactoring?  It cleans up the interaction between providers and Storage: https://codereview.appspot.com/11087043
[10:17] <fwereade> and fwiw it is very nice, and simple to review :)
[10:17]  * fwereade really off for a bit now
[10:20] <dimitern> jtv: looking
[10:21] <jam> dimitern: so we had Cleaner and Machiner and a few of them that used the Notify pattern. I *like* splitting things out to separate concerns. But as William said, if it can't actually be reused it isn't worth writing.
[10:21] <jtv> Thanks dimitern!
[10:21] <jam> I did develop NotifyWorker in tandem with changing Cleaner so I would know the design fit.
[10:22]  * jtv runs errand
[10:27] <jam> jtv: is https://codereview.appspot.com/11087043/patch/6001/7023 actually being used? It looked like everything already had an implementation.
[10:27] <dimitern> jtv: reviewed
[10:32] <dimitern> jam: btw the :12345 - my bad :) I was testing and there were some failures and I don't actually remember why I did it, but it shouldn't have landed
[10:32] <dimitern> jam: I think it was to be able to monitor the exact server that was launched
[10:32] <jam> dimitern: interestingly enough, the Machiner one was already gone by the time we noticed the failures in UpgradeWatcher. It took me a bit to figure out where it came from.
[10:33] <dimitern> jam: yep it came from there
[10:33] <jam> dimitern: yeah, except it doesn't actually work, because the server that is launched is running inside the Environ.
[10:33] <jam> which is what I'm working on exposing a bit now.
[10:42] <jam> TheMue: can you include a paste of the failing test https://bugs.launchpad.net/juju-core/+bug/1199698 ?
[10:42] <_mup_> Bug #1199698: intermittent test failure in BootstrapSuite <juju-core:New for themue> <https://launchpad.net/bugs/1199698>
[10:46] <dimitern> fwereade, jam: updated as discussed: https://codereview.appspot.com/10944044/
[10:52] <jam> dimitern: +1
[10:53] <dimitern> jam: good to land you think?
[10:53] <TheMue> jam: sure
[10:54] <dimitern> jam: moving that outside of params was fwereade's request and I like it as well
[10:57] <jtv> jam: I used them in the dummy provider, localstorage, or both — can't remember which.
[10:58] <jtv> Thanks for looking at my branch, people!
[10:59] <TheMue> jam: right now I have no more test run without a failure :(
[11:00] <jam> jtv: what I saw at least looked like you might have written that first and not realized we had implementations, but if it is used, it's fine to keep it.
[11:00] <TheMue> jam: the failing tests are the --upload-tools tests
[11:00] <jam> TheMue: something to keep in mind for future reports. It is hard to see context in case someone else tried to pick up the failures.
[11:00] <jam> TheMue: interesting. Might be a platform bug.
[11:00] <jam> I think "--upload-tools" forces precise always
[11:00] <jam> (recent change)
[11:01] <jam> so if you are running on precise, you only upload 1 tool
[11:01] <jam> if you are not
[11:01] <jam> then you upload at least 2
[11:01] <jam> "recent" as in a month ago or so.
[11:02] <jam> jtv: I need to teach you enough API so you can do some quid-pro-quo :)
[11:02] <jtv> jam: I'll check, because of course I can always have mis-remembered.  :)  IIRC the dummy and localstorage both lacked a deleteAll equivalent.
[11:02] <TheMue> jam: but I already had runs where it worked
[11:02] <jam> jtv: https://codereview.appspot.com/11036043/ is a pretty straightforward rename of a package if you want to give it a look.
[11:02] <TheMue> jam: and the test case which is failing (8, 9 or 10) is changing from run to run
[11:03] <jam> TheMue: that is very strange.
[11:03] <TheMue> jam: exactly
[11:03] <jtv> jam: coming...  But be warned: I'm a notorious nit-picker.  :_
[11:03] <jtv> :)
[11:04] <jtv> Oh, that's quite a lot — let me just polish off what I have in flight first.
[11:04] <jam> jtv: it is big, but all mechanical
[11:05] <jam> mv environs/agent => agent/
[11:05] <jtv> Good, I'll put any heavy lifting for my branch's follow-up into its own branch, and review yours first.
[11:05] <fwereade> dimitern, reviewed
[11:05] <jam> fwereade: I responded to https://codereview.appspot.com/10890047/
[11:06] <jam> It now opens up BackingState
[11:06] <dimitern> fwereade: cheers
[11:06] <jam> fwereade: https://codereview.appspot.com/10944044/ ...  I would *really* like to see things that are working on the API *not* import juju-core/state
[11:07] <jam> so while we could use the interface from there
[11:07] <jam> it means we also expose all the raw state stuff
[11:07] <jam> vs just knowing "we don't import state in this module"
[11:07] <jam> thoughts?
[11:07] <jam> (it is why I created an interface rather than re-using the existing one)
[11:07] <fwereade> jam, I'm only thinking of worker/cleaner really, that's still using state
[11:07] <jam> fwereade: well, let me finish Upgrader, and Cleaner is quite tempting :)
[11:08] <jam> but fair enough
[11:08] <fwereade> jam, uniter is strictly higher priority I think
[11:08] <fwereade> jam, I'm firmly +1 on keeping state imports out of packages that use the API though
[11:08] <fwereade> jam, sorry if that wasn't clear
[11:15] <jam> sure, but uniter is complex, Cleaner is like 1 API change :)
[11:16] <dimitern> rv-submit is awesome! it makes using rietveld+lp+tarmac the same as lbox submit does!
[11:16] <dimitern> I recommend you all try it out - no need to set the commit message in LP, set the MP to approved, etc. it even updates the [r=reviewers] line correctly
[11:16] <fwereade> dimitern, nice
[11:17] <jam> fwereade: yeah, version_test.go was the first time I've ever seen: import "." anywhere.
[11:17] <dimitern> install it just with: bzr branch lp:rvsubmit ~/.bazaar/plugins/rvsubmit
[11:17] <jam> I didn't know it worked
[11:17] <jam> but there was "version.*" in the test suite.
[11:18] <jam> dimitern: well, sort of correctly:  R=fwereade, fwereade, jameinel, jameinel
[11:18] <jam> If we used Launchpad's "Approve" tarmac would be picking the R= line correctly as well
[11:19] <dimitern> jam: that's because it counts all LGTMs
[11:19] <dimitern> jam: that could be improved - but it's better than having [r=dimitern] only ;)
[11:19] <jam> dimitern: right, it just needs to dedupe them. Though I'd rather put the time in fixing tarmac to understand LGTM as well.
[11:23] <dimitern> jam: any idea what this is: https://code.launchpad.net/~dimitern/juju-core/066-apiserver-stringswatcher/+merge/173896/comments/389246
[11:24] <jam> fwereade: 2 design things about upgrader-api-worker. (1) I thought of a way to do the delayed shutdown with a LifesupportRunner. Since the goal is that an outside signal is delayed, but an inside signal is immediately shutdown. I won't implement it yet, but I think it can fit nicely if we want it.
[11:24] <jam> fwereade: but (2) The Upgrader wants to wait on 2 channels, 1 for the new versions, and 1 for when the download completes.
[11:24] <jam> I could re-implement all of the NotifyWorker locally just to get another channel
[11:24] <jam> or I could expose a "Handlers may request 1 extra <-chan interface{} to wait on"
[11:25] <jam> then Handlers can multiplex any waiting they want to do onto that channel
[11:25] <jam> does that seem a reasonable design?
[11:25] <jam> Or should I just go with "ignore NotifyWatcher for now"
[11:25] <fwereade> jam, thinking
[11:26] <jam> dimitern: I think it is that rv-submit is setting Approved state without setting the Approved revision field.
[11:26] <dimitern> jam: hmm so I can see a patch to fix this
[11:26] <dimitern> and dedupe the reviewers list
[11:26] <jam> fwereade: with a WatchExtra() <-chan interface{} => HandleExtra(interface{})
[11:27] <jam> fwereade: then things based on NotifyWatcher can handle as many actual channels as they want, they just need to add context on in and out
[11:27] <jam> alternatively, I'm not sure Upgrader would suffer terribly if it just blocked until it finished downloading the current thing.
[11:28] <fwereade> jam, heh, that latter one feels like a nice gordian knot solution
[11:29] <fwereade> jam, but I'm having some difficulty properly picturing the WatchExtra/HandleExtra in practice so I may just be dumb today
[11:30] <jam> fwereade: right now, SetUp() returns a Watcher, we could have it also return a "<-chan interface{}"
[11:30] <jam> or we could have inside the main loop
[11:30] <jam> select { case <-watcher.Changes(): ... case <-handler.WatchExtra())
[11:32] <fwereade> jam, can you think of other use cases? feels a bit situational to me right now...
[11:33] <jam> fwereade: Provisioner also has one primary watcher (I think) but then a few other side things going on.
[11:33] <jtv> jam: branch reviewed.
[11:34] <fwereade> jam, yeah, but that's stringswatcher-based
[11:34] <jam> jtv: thx
[11:39] <pavel> Hello, just now I had an interesting case. A week ago my teammate used juju bootstrap in the same EC2 zone, but with different control-bucket and security credentials. Today I run juju bootstrap with my own control-bucket and security credentials. And when I run juju destroy-environment both bootstrap nodes were terminated.
[11:40] <pavel> We have separate credentials within one AWS account.
[11:40] <pavel> Is this correct behavior?
[11:43] <jam> pavel: I *think* if you name your environments differently they won't step on each other, but instances that get deployed are labeled by environment name, not by control bucket.
[11:44] <pavel> yeah... I thought that exactly this was the case
[11:45] <pavel> But it's frustrating I think
[11:48] <jam> fwereade: your stream is frozen.
[12:28] <jtv> jam: how do I run a live test?
[12:33] <jam> jtv: cd environs/ec2; source ~/.ec2creds; go test -amazon
[12:33] <jtv> Thanks.
[12:33] <jam> for whatever values of ec2 credentials you want to use
[12:33] <jam> or
[12:33] <jam> cd environs/openstack
[12:33] <jam> go test -live
[12:33] <jam> though you also need your Canonistack credentials, etc.
[12:33] <jtv> Is there no way to do this against the local or dummy provider?
[12:33] <jam> jtv: all the providers run them against the test doubles if you just do "go test ./..."
[12:33] <jam> except azure doesn't implement a double
[12:33] <jtv> Ah OK, so then "make check" should be enough.
[12:34] <jam> right
[12:49] <mgz> jam: prereq/bot question
[12:50] <mgz> I marked my branch that fixes tests on danilo's having his as a prereq
[12:50] <jam> mgz: to avoid landing snafus, approve and wait for prereq, then approve post
[12:50] <jam> if you can't do that
[12:50] <jam> you can mark prereq as merged
[12:50] <jam> and land the second
[12:51] <mgz> does this mean his needs to land first... in which case that was a mistake, because the bot will reject it as the tests don't pass without both...
[12:51] <mgz> ah, neat hack, will try that
[12:51] <jam> mgz: right, you can set the MP status to Merged, and then the bot will let you land the second one.
[13:09] <dimitern> abentley: hey, I found 2 issues while testing rvsubmit
[13:09] <abentley> Oh, what?
[13:10] <dimitern> abentley: it seems it's not setting ApprovedRevision and also should remove duplicates in approvers list (e.g. if X has 2 LGTMs it appears twice)
[13:10] <mgz> ah, right, I also ran into both of those :)
[13:10] <mgz> dimitern is a better user than I for reporting
[13:12] <abentley> dimitern: I'm surprised I have to specify approved revision-- why doesn't it default to the current revision like the UI does?
[13:13] <dimitern> abentley: it seems to be required by the go-bot (or tarmac)
[13:13] <dimitern> abentley: see here https://code.launchpad.net/~dimitern/juju-core/066-apiserver-stringswatcher/+merge/173896/comments/389246
[13:14] <abentley> dimitern: I'm not disagreeing, I'm just saying it's dumb that launchpad makes me do extra work to set the approved revision.
[13:14] <jam> abentley: I'm guessing the GUI grabs the current revision from some data on the page when it submits the request
[13:14] <jam> but it is explicitly sending it, rather than just setting approved.
[13:14] <jam> abentley: because if your page is out of date
[13:14] <jam> the approved revision is out of date
[13:15] <jam> abentley: I've done 'bzr push' and then marked it approved on the page, and the bot tells me the wrong revision is approved. (Because I didn't reload the page after Launchpad noticed the updated branch)
[13:15] <jam> ah, I scared him off :(
[13:15] <dimitern> :)
[13:17] <jam> abentley: https://launchpad.net/+apidoc/devel.html#branch_merge_proposal
[13:17] <jam> It looks like "revid" is an optional field
[13:17] <jam> to setStatus
[13:17] <jam> and the webpage is grabbing reviewed_revid and passing it into queued_revid
[13:18] <jam> I've certainly had cases where I had an old MP view, and didn't refresh after "bzr push" and then tarmac tells me "there are additional revisions that have not been reviewed"
[13:18] <abentley> jam: Do you have any thoughts about whether I should look at the local tip or just always use Launchpad's tip?
[13:19] <jam> abentley: you mean the one on the review page, the one on lp:~branch-i-pushed; vs WT.revision?
[13:19] <jam> ... page, or the one...
[13:19] <abentley> jam: I mean the branch on the user's computer vs the one on codehosting.
[13:19] <jam> abentley: I would say the one you intend to approve is probably your local revision, the one you are actually approving is the one on lp:~...
[13:20] <jam> if we are going to look up the one on lp:~... it is probably worth comparing that value to the local one.
[13:20] <jam> abentley: 'lbox submit' would do the push, though. Is that something we should put into rvsubmit ?
[13:20] <jam> so local rev should end up matching lp rev
[13:21] <jam> mgz, dimitern: I could use a second review of https://codereview.appspot.com/10890047/
[13:21] <jam> I'm off for now, though I'll probably be back later tonight.
[13:21] <dimitern> jam: looking
[13:21] <mgz> jam: looking
[13:22] <abentley> jam: Since the branch on Launchpad has been approved, it seems a bit weird to automatically push.
[13:22] <abentley> jam: Though of course, some approvals are really conditional.
[13:24] <jtv> jam: I added that test you asked for by the way: https://codereview.appspot.com/11108043
[13:26] <jam> abentley: sure, though certainly the bzr process was that you might ask for tweaks, and the person submitting would do a tweak and submit. Which is mostly how we're doing LGTM.
[13:27] <jam> jtv: reviewed
[13:27] <jtv> thanks
[13:27] <abentley> dimitern: Why are there multiple LGTMs for X?
[13:27] <jam> abentley: conditional LGTM from one, followed by larger refactorings, leading to a follow up push and re-review
[13:27] <dimitern> abentley: because i think the approvers list is sorted, but dupes are not removed
[13:28] <jam> dimitern: I think he is asking why someone would possibly LGTM 2 times
[13:28] <abentley> jam: right.
[13:28] <dimitern> abentley: ah, right - yes, the reason is as jam explained
[13:29] <dimitern> abentley: happens sometimes with longer-lived MPs that undergo a refactoring mid-way to landing, so need second approval
[13:30] <dimitern> jam: you have a review
[13:30] <mgz> is bug 1130209 still relevant? we take mongo from distro now, no?
[13:30] <_mup_> Bug #1130209: we need a mongodb compiled for i386 <mongo> <package> <juju-core:Triaged> <https://launchpad.net/bugs/1130209>
[13:32] <jtv> ...Precise?
[13:33] <dimitern> mgz: we have both amd64 and i386 builds in the ppa, but none of them seem to work
[14:10] <fwereade> dimitern, api-common-life is merged now
[14:10] <dimitern> fwereade: cool
[14:10] <dimitern> fwereade: i wanted to ask you do you think EnsureDead and Remove should be combined and should this be done throughout?
[14:11] <dimitern> fwereade: actually the machine uses only EnsureDead, the deployer needs both so far
[14:11] <fwereade> dimitern, I *think* that a Remove that also does an EnsureDead will also be useful for the provisioner
[14:12] <fwereade> dimitern, however an EnsureDead that does not remove will be required for machiner and uniter
[14:12] <dimitern> fwereade: i see
[14:14] <fwereade> dimitern, (deployer and provisioner both have responsibility for units/machines that may be dying and not yet deployed/provisioned; in both cases they want to fast-forward to destruction
[14:14] <fwereade> )
[14:14] <fwereade> they're really annoyingly similar at heart
[14:15] <dimitern> fwereade: so if we call it Remove wouldn't that be confusing? how about EnsureRemoved?
[14:15] <dimitern> (from state package's point of view and those familiar with it)
[14:16] <fwereade> dimitern, yes from state package POV, not so sure from api POV
[14:16] <fwereade> dimitern, the clients gosh-darn want that thing removed and don't really care otherwise
[14:16] <dimitern> fwereade: well, as long as there never is a Remove method that doesn't do both in the api, we'll be fine
[14:19] <fwereade> dimitern, fair point
[14:19]  * fwereade thinks
[14:20] <dimitern> fwereade: otherwise we'd better call it EnsureDeadAndGone or something :)
[14:22] <fwereade> dimitern, I kinda feel that the implementations of similar methods on different facades are not an issue
[14:22] <fwereade> dimitern, it's two facade-specific questions I guess
[14:23] <dimitern> fwereade: we'll see soon enough anyway
[14:24] <fwereade> dimitern, yeah, often it just takes implementing it and seeing how much the implementation bugs you :/
[14:24] <dimitern> fwereade: exactly :)
[14:39] <abentley> dimitern: I've updated the plugin.  Could you try it again?
[14:42] <dimitern> abentley: will do in my next MP, thanks!
[14:45] <abentley> dimitern: You're welcome.
[15:04] <fwereade> ha, found the upgrade bug
[15:04] <dimitern> fwereade: oh, what is it?
[15:05] <fwereade> it is the initial watcher event -- the selector wasn't picking up machines with an unset containertype, just an empty containertype
[15:05] <fwereade> as soon as trunk code has changed a machine it's fine
[15:06] <fwereade> before then it's not reported so we think the instance is not associated with anything
[15:06] <fwereade> verifying it actually works
[15:06] <dimitern> fwereade: hmm
[15:07]  * fwereade really hates waiting for machines to come up
[15:14] <hazmat> is juju working with ec2 atm?  it can't seem to find an image (trunk from 12hrs ago) http://pastebin.ubuntu.com/5861907/
[15:15] <hazmat> specifying default-image-id in env.yaml generates a warning
[15:15] <mgz> hazmat: nope, the cloud images data got changed which breaks things
[15:15] <mgz> ...we should really put that in the topic or summat
[15:21] <hazmat> mgz, any workarounds?  you mean the simple metadata stream data? the tab-separated file looks like it's still the same, i.e. https://cloud-images.ubuntu.com/query/precise/server/released.current.txt
[15:21] <fwereade> hazmat, it should be fixed
[15:22] <fwereade> hazmat, there was a problem with the simplestreams, index.sjson was pointing to unsigned product files and we didn't like that
[15:23] <fwereade> hazmat, 1.10 still uses that cloud-images path, trunk uses simplestreams
[15:25] <fwereade> dimitern, TheMue: review of https://codereview.appspot.com/11116043 ? very small change, disproportionately useful
[15:25] <dimitern> fwereade: looking
[15:26] <hazmat> fwereade, cool, looks like its working
[15:27] <dimitern> fwereade: reviewed - should this be linked to the upgrade bug? does it fix it?
[15:27] <fwereade> hazmat, thanks for confirming
[15:27] <fwereade> dimitern, yeah, good point, thank you
[15:29] <fwereade> jam, you still around?
[15:30] <fwereade> TheMue, no joy on the bootstrap tests?
[15:35] <TheMue> fwereade: no, absolutely no luck
[15:36] <fwereade> TheMue, would you link me the bug while you review https://codereview.appspot.com/11116043 please? I'll see if anything springs to mind
[15:37] <TheMue> fwereade: you've got a 2nd lgtm
[15:37] <TheMue> fwereade: the bug is the https://bugs.launchpad.net/juju-core/+bug/1199698
[15:37] <_mup_> Bug #1199698: intermittent test failure in BootstrapSuite <intermittent-failure> <juju-core:In Progress by themue> <https://launchpad.net/bugs/1199698>
[15:39] <TheMue> fwereade: I'm trying to follow the call flow, so far I don't see a race here.
[15:39] <TheMue> fwereade: ha, while typing this I have one idea, one moment.
[15:41] <TheMue> fwereade: bingo, all passed, will try with a little change
[15:42] <fwereade> TheMue, I know what it is
[15:42] <fwereade> TheMue, we're uploading more files at bootstrap than we were before
[15:42] <fwereade> TheMue, so once we've checked that the expected N were uploaded, we proceed with the test, even though bootstrap hasn't necessarily finished
[15:44] <TheMue> fwereade: hmm, dunno
[15:44] <TheMue> fwereade: but what changed the behavior here is the setting of GOMAXPROCS
[15:45]  * TheMue slaps himself
[15:49] <dimitern> fwereade: deployer facade v1 - not sure about the auth stuff yet, but take a look please: https://codereview.appspot.com/11117043
[15:49] <TheMue> fwereade: *crash*
[15:49] <fwereade> TheMue, hmm, I may be wrong, but I'm suspicious -- didn't someone just land something that wrote differently on bootstrap?
[15:49] <fwereade> TheMue, oo, you've broken something hard? nice
[15:50] <TheMue> fwereade: yes, with GOMAXPROCS=1 it works, but with 4 as I had before it fails
[15:55] <fwereade> dimitern, sorry, there's been a misunderstanding: reviewed
[15:55] <dimitern> fwereade: ah, you mean Remove is on units
[15:56] <dimitern> (facepalm)
[15:56] <dimitern> :) ofc
[15:56] <fwereade> dimitern, no worries, I know how it is :)
[15:56] <fwereade> dimitern, just remember to use up-to-date state, because I think the auth.entity is likely in general to be stale
[15:57] <fwereade> dimitern, and we need it to know about the right set of units
[15:57] <dimitern> fwereade: that's what I needed to ask
[15:57] <fwereade> dimitern, the *easy* way is to start a WatchUnits on that machine and see what comes out of the first event
[15:57] <dimitern> fwereade: where does the getcanchange comes in? i've yet to do the setpassword, so apart from there - should it be used elsewhere?
[15:58] <fwereade> dimitern, the logic behind getCanChange will be needed in Remove as well
[15:58] <fwereade> dimitern, I think (open to argument ofc) that the right thing to do is to figure out the set of entities we're allowed to operate on as soon as we're asked to operate on at least one
[15:59] <dimitern> fwereade: when that?
[15:59] <dimitern> fwereade: in WatchUnits? or in NewDeployerAPI?
[15:59] <fwereade> dimitern, both way too early
[15:59] <fwereade> dimitern, just after `if len(request.Entities) == 0 { return }`
[15:59] <dimitern> fwereade: so after getting the unit from state
[16:00] <fwereade> dimitern, I don't think we should be speculatively getting units, no
[16:00] <fwereade> dimitern, I think we should only pay the roundtrip for the entities we know we need
[16:00] <fwereade> dimitern, so we pay an upfront cost to discover what entities we're allowed to operate on
[16:00] <dimitern> fwereade: hmm.. so once we know we have entities to operate on (in Remove only) we create the auth func? put it where though
[16:01] <fwereade> dimitern, g+?
[16:01] <dimitern> fwereade: was about to suggest, yeah, will start one
[16:02] <dimitern> fwereade: https://plus.google.com/hangouts/_/34b4267f6a46cb41a76fd05f6dc398f7409f94c4?authuser=0&hl=en
[16:20] <jamespage> fwereade, question - should I be raising bugs for any CLI/API differences I find between juju and juju-core?
[16:21] <fwereade> jamespage, yes please; there may be a very small number we don't want to support but in general we want to be entirely compatible
[16:22] <fwereade> TheMue, ok, it is not exactly what I thought it was
[16:22] <fwereade> TheMue, but try waiting for uploads+1 OpPutFiles
[16:22] <fwereade> TheMue, I think that will actually make the test more reliable than ever before
[16:23] <jamespage> fwereade, sorry - another question
[16:23] <jamespage> is the cli in juju-core intentionally extremely quiet?
[16:24] <fwereade> jamespage, ...sort of
[16:24] <jamespage> it's quite disconcerting
[16:24] <fwereade> jamespage, bugs pointing out that this is actually crap will help us to get it done though
[16:24] <jamespage> fwereade, on it
[16:25] <fwereade> jamespage, because actually providing useful feedback is a little less trivial than one might hope
[16:25] <fwereade> jamespage, although just providing *some* by default would probably be a good idea regardless of ugliness like irrelevant logging badges
[16:27] <jamespage> fwereade, agreed - bugs raised
[16:29] <fwereade> jamespage, thanks
[16:31] <TheMue> fwereade: yep, will do
[17:00] <jamespage> mgz, are we going to be able to get a juju-core release uploaded to saucy this week?  AFAICT the 1.1.1 stuff is working OK from PPA's
[17:00] <jamespage> and we have 1.1.1 in saucy....
[17:01] <jamespage> mgz, just give me the source and I'm primed to go!
[17:01] <dimitern> jamespage: whoohoo! 1.1.1 in saucy really? cool!
[17:01] <jamespage> dimitern, yep
[17:05] <fwereade> jamespage, that's awesome; and so, hopefully, but not tomorrow, there are two notable problems I have just found :/
[17:06] <fwereade> (log is filled with api failure spam, and lxc-not-installed spam, after upgrading; bugs forthcoming, need to characterise them better)
[17:07] <mgz> jamespage: I'll send an email to the list now
[17:07] <mgz> *1.11 btw
[17:08] <mgz> fwereade: can you target those bugs against 1.11 on launchpad?
[17:08] <fwereade> mgz, will do
[17:12] <jam> dimitern, fwereade: I can't land https://codereview.appspot.com/10890047/ because moving api.NotifyWatcher means that state/api/upgrader/upgrader needs to import api (in order to return an api.NotifyWatcher). But api needs to import upgrader/upgrader
[17:13] <jam> because api.State.Upgrader() returns an Upgrader instance
[17:13] <jam> That is why I had put it in params before
[17:14] <mgz> jamespage: bug 1194022 is fixreleased?
[17:14] <_mup_> Bug #1194022: Please bump golang version to 1.1.1 <golang (Ubuntu):New> <https://launchpad.net/bugs/1194022>
[17:14] <jam> do we have a better solution?
[17:14] <fwereade> jam, any problem dropping the interface and returning a concrete api/watcher.NotifyWatcher type from api/watcher funcs?
[17:14] <dimitern> jam: why does api need to import upgrader?
[17:14] <jam> dimitern: because api.State has attributes that return the facades
[17:14] <dimitern> fwereade: yes, in fact there is a problem
[17:14] <jam> it does the same for api.State.Machiner() etc.
[17:14] <fwereade> dimitern, bother :)
[17:14] <fwereade> dimitern, what's that?
[17:14] <dimitern> fwereade: i tried that but it didn't work because of state interfaces
[17:14] <dimitern> fwereade: used in workers
[17:14] <fwereade> dimitern, sorry, expand please
[17:15] <jam> fwereade: state/api/state.go imports all the sub packages
[17:15] <jam> machiner
[17:15] <jam> machineagent
[17:15] <jam> upgrader
[17:15] <fwereade> dimitern, doesn't the concrete type satisfy that interface?
[17:15] <dimitern> fwereade: let me check the exact line
[17:15] <fwereade> dimitern, thanks
[17:16] <jam> fwereade: so at present, state/api/watcher/watcher.go imports state/api because it wants to return an api.NotifyWatcher
[17:16] <dimitern> fwereade: the problem was cleanup worker in SetUp needed to return a concrete type out of the state watcher
[17:17] <dimitern> jam: well, I don't see big problem with moving api/interfaces.go in api/params/interfaces.go
[17:17] <jam> we could turn state/api/watcher notifyWatcher public
[17:17] <fwereade> jam, yeah, that's what I'm suggesting
[17:17] <jam> dimitern: well, you then have params.NotifyWatcher, which I thought was what we were avoiding
[17:18] <fwereade> dimitern, I do, they have nothing to do with the params over the wire
[17:18] <dimitern> jam: well, fwereade was mostly against it
[17:18] <fwereade> dimitern, the cleanup worker doesn't use the api
[17:18] <dimitern> fwereade: but uses watchers, which now use interfaces
[17:19] <dimitern> fwereade: aah, now i got your remark that the workers should use state interfaces!
[17:19] <fwereade> dimitern, haha :)
[17:20] <dimitern> fwereade: yeah, i was thinking "aren't we moving towards using the api anyway - why depend on state if it's gonna go away" :)
[17:20] <fwereade> dimitern, jam, and if *that* still causes problems somehow
[17:21] <fwereade> dimitern, jam, then I think the answer is for notifyworker to use its own damn copy of the interface, because the problem is that it's being used for state things but pretending it's all about api things
[17:21] <dimitern> jam, fwereade: so basically get rid of the api/interfaces and inside workers use state interfaces for watchers
[17:21] <fwereade> dimitern, so it happily accepts both a state.NotifyWorker and a concrete *api/watcher.NotifyWatcher
[17:22] <jam> dimitern: except we don't want to end up there. fwereade and I both agree that we *really* want to switch to not import state in things using the API, and state/api/watchers *consume* the api.
[17:22] <fwereade> dimitern, I want to use api interfaces in code implemented against the api, and state interfaces in code implemented against state
[17:22] <jam> fwereade: initial results look like NotifyWorker is ok with Upgrader returning a state/api/watcher/NotifyWatcher
[17:22] <jam> well *watcher.NotifyWatcher
[17:22] <jam> pointer receiver and all that
[17:23] <fwereade> jam, ok, cool
[17:23] <dimitern> fwereade: my confusion was: won't the workers *eventually* use only the api as well?
[17:23] <fwereade> dimitern, eventually, yes, but that particular worker is as low on the priority list as it can be
[17:23] <dimitern> fwereade: ok, better one at a time, sure
[17:27] <jam> fwereade, dimitern: Do you want to eyeball that change? https://codereview.appspot.com/10890047/
[17:28] <dimitern> jam: looking
[17:28] <jam> https://codereview.appspot.com/10890047/diff2/11001:17001/state/api/watcher/watcher.go
[17:28] <jam> and https://codereview.appspot.com/10890047/diff2/11001:17001/state/api/upgrader/upgrader.go
[17:28] <jam> pretty much just exposing the type, and then changing what package I get it from.
[17:28] <fwereade> jam, LGTM
[17:29] <dimitern> jam: me too
[17:29] <jam> thx
[17:29] <dimitern> (as long as the tests pass ofc)
[17:29] <dimitern> ;)
[17:30] <jam> yeah, I ran the tests that were particularly relevant (all state/api and worker)
[17:30] <mgz> is bug 1199680 critical for release, and is it in fact fixed? (the linked branch is merged)
[17:30] <_mup_> Bug #1199680: upgrade-juju fails from 1.10 to trunk <juju-core:In Progress by fwereade> <https://launchpad.net/bugs/1199680>
[17:30] <dimitern> jam: please run them all, just in case
[17:30] <jam> dimitern: that's what go-bot is for :)
[17:30] <fwereade> mgz, it is and it is
[17:30] <mgz> ace.
[17:30] <jam> mgz: fwereade patch is up
[17:30] <dimitern> jam: ah, we're getting sloppy now then, ok ;)
[17:31] <jam> dimitern: I'm just not waiting 8 minutes to submit something and wait 15 minutes for it to merge.
[17:31] <jam> as long as I have high confidence that it will pass
[17:31] <jam> it is 9:30pm here and all :)
[17:31] <dimitern> jam: i'm still afraid of breaking the build, so I still run them pretty much 10-20 times a day (after each pull, after merge, etc.)
[17:31] <mgz> fwereade: have you filed the other issues you had as blockers yet?
[17:32] <jam> dimitern: I run subsets of them far more than that
[17:32] <dimitern> mgz: i think the biggest blocker still is the ppa mongo not working
[17:32] <mgz> that's not relevant for saucy, no?
[17:33] <dimitern> mgz: I don't think there's even a build for saucy?
[17:33] <jam> fwereade: also to confirm, with 1.10 upgrading to trunk, you didn't have any specific other upgrade issues?
[17:33] <jam> nice catch, btw
[17:33] <fwereade> jam, I didn't spot anything else
[17:33] <jam> I'm curious how you debugged it
[17:34] <fwereade> jam, sorry to say there's only one step... I said to myself "hmm, that sounds like the initial event might be empty"; so I logged the initial event and defanged the actual instance-stopping, tried it out and, yep, empty event
[17:35] <fwereade> jam, which led me directly to the members field of that lifecyclewatcher and *d'oh!*
[17:35] <fwereade> jam, so, er... female intuition?
[17:36] <jam> fwereade: guy who has his head deep in watcher and db interaction intuition is my guess
[17:36] <fwereade> details details :)
[17:38] <jam> fwereade: so for the Deployer tests, https://codereview.appspot.com/11117043/patch/1/1004 If I understand you correctly, we need to add a bunch of units to a machine, (and maybe some units on another machine)
[17:38] <jam> and then test that you can Watch the units on your machine, but not the other one
[17:39] <jam> so https://codereview.appspot.com/11117043/patch/1/1003 "Remove" shouldn't be thinking it has a list of machines to remove, but a list of Units ?
[17:40] <fwereade> jam, yep, exactly
[17:40] <fwereade> jam, also needs subordinates to be checked
[17:40] <fwereade> jam, but, can I pull you away from that to take a look at https://bugs.launchpad.net/juju-core/+bug/1199915 please?
[17:40] <_mup_> Bug #1199915: api cannot connect, fills deployed machine log with spam <juju-core:Triaged by jameinel> <https://launchpad.net/bugs/1199915>
[17:41] <jam> fwereade: it should? or should not? be able to do something with them?
[17:41] <fwereade> jam, units assigned to that machine, directly or indirectly, should pass permissions checks for the machine's deployer
[17:41] <jam> fwereade: not seen on just installing trunk directly?
[17:41] <fwereade> jam, I'm afraid I haven't done that today, it's all been upgradey
[17:42] <jam> fwereade: k, I wasn't sure if there were child deployers, but I think that was something where you pulled deployer out of uniter, right?
[17:42] <jam> fwereade: np
[17:42] <fwereade> jam, yeah
[17:43] <fwereade> jam, sorry I didn't get to characterise it better, but I'll need to be off shortly and I wanted to let you know about it before tomorrow
[17:43] <jam> fwereade: so our Authorizer (today) doesn't have fancy stuff like "is recursively the machine running the given unit". It also doesn't have AuthUnitAgent which we'll need for Upgrader.
[17:43] <jam> fwereade: I'll give it a look, thanks for pointing it out
[17:43] <fwereade> jam, and I am currently much exercised by fricking https://bugs.launchpad.net/juju-core/+bug/1190715
[17:43] <_mup_> Bug #1190715: unit destruction depends on unit agents <juju-core:In Progress by fwereade> <https://launchpad.net/bugs/1190715>
[17:44] <fwereade> jam, because rog went on holiday without adding the cleaner worker
[17:45] <jam> fwereade: that is, we have a cleaner worker implemented, but it isn't integrated?
[17:45] <fwereade> jam, yeah, my integrating it was that patch with the somewhat questionable mocking that I thought was better than nothing, but that collided with roger's connect-to-the-api patch
[17:47] <fwereade> jam, so I bowed out and left the integration in his hands, but I think it got pushed aside by... whatever that non-critical branch that's sitting in review is :/
[17:47] <jam> fwereade: so is it as approximately simple as adding a line 152 in cmd/jujud/machine.go inside StateWorker ?
[17:48] <fwereade> jam, yeah basically
[17:48] <fwereade> jam, but I feel obliged to actually test it
[17:48] <fwereade> jam, so Cleaner gets a (sigh) side effect test
[17:48] <fwereade> jam, and Resumer gets no test, but a comment indicating why
[17:49] <fwereade> jam, and maybe one day we'll factor that lot cleanly enough that we can test it without SAN damage
[17:53] <jam> fwereade: so for bug 1190715, this is on the root machine, or on the worker machines?
[17:53] <_mup_> Bug #1190715: unit destruction depends on unit agents <juju-core:In Progress by fwereade> <https://launchpad.net/bugs/1190715>
[17:53] <jam> sorry, wrong one
[17:53] <jam> bug 1199915
[17:53] <_mup_> Bug #1199915: api cannot connect, fills deployed machine log with spam <juju-core:Triaged by jameinel> <https://launchpad.net/bugs/1199915>
[17:53] <fwereade> jam, that was on the bootstrap node
[17:54] <fwereade> jam, as a possible shortcut, consider what actually gets written into the conf's API info in a 1.10 bootstrap
[17:54] <fwereade> jam, ha, I bet I know
[17:54] <fwereade> jam, 1.10 used the same password for both, but only rewrote the stateinfo one to match the new password it set for itself
[17:55] <jam> oldapipassword
[17:55] <fwereade> jam, (I might be wrong ofc, but I *think* all we do is write oldapipassword and use the fact that we used it as reason to change it)
[17:56] <fwereade> jam, ok that was very poorly expressed
[17:56] <fwereade> jam, I should read the code I suppose
[17:56] <jam> fwereade: well, you are allowed to hand things off to other people :)
[17:56] <jam> I think I at least have your train of thought
[17:58] <fwereade> jam, no I think that maybe we never even set oldapipassword
[17:58] <jam> fwereade: in a fresh but running install it is set to ""
[17:59] <jam> with trunk
[17:59] <fwereade> jam, ok, and that doesn't logspam?
[17:59] <jam> oldapipassword="" and I can see active Pinger requests on the RPC
[18:00] <jam> 2013-07-10 17:51:08 DEBUG juju codec.go:103 rpc/jsoncodec: <- {"RequestId":5,"Type":"Pinger","Request":"Ping","Params":
[18:01] <jam> so *that* is spammy because we ping every 30 seconds or so and every request generates 2 log entries
[18:01] <jam> but it isn't failing to connect.
[18:02] <jam> and it is DEBUG level
[18:03] <fwereade> jam, ok, well that is good news :)
[18:03] <fwereade> jam, it's probably just an upgrade issue then
[18:03] <jam> fwereade: except now I have to figure out how to get a copy of the 1.10 tools :)
[18:03] <jam> well, 1.10 juju binary
[18:04] <jam> precise never had it, and the PPA has already pushed out 1.11.2 which means it doesn't expose 1.10.0 anymore (I think)
[18:04] <fwereade> jam, heh, I hadn't actually updated it from the PPA
[18:04] <mgz> jam: should all still be in the ec2 bucket, no?
[18:05] <fwereade> mgz, not the client binary
[18:05] <jam> mgz: ec2 bucket holds jujud
[18:05] <jam> not juju
[18:05] <fwereade> jam, oh for frickin' repeatable builds
[18:05] <mgz> oh, joy
[18:06] <jam> http://archive.ubuntu.com/ubuntu/pool/universe/j/juju-core/
[18:07] <jam> mgz: fwereade ^^
[18:07] <dimitern> fwereade: bug 1199680 is just committed, not yet released, right?
[18:07] <_mup_> Bug #1199680: upgrade-juju fails from 1.10 to trunk <juju-core:Fix Released by fwereade> <https://launchpad.net/bugs/1199680>
[18:07] <jam> the archive is long lived
[18:07] <jam> even if ppa's are not
[18:07] <mgz> dimitern: technically, doesn't matter too much
[18:08] <jam> they wouldn't let us delete stuff from like Hardy
[18:08] <dimitern> mgz: it might matter for the release manager (ha!) :)
[18:08] <fwereade> dimitern, I consider it released, because it's a problem with trunk
[18:08] <dimitern> fwereade: well, just marked it as committed - should I revert it?
[18:09]  * fwereade shrugs and thinks it's pretty academic in this case
[18:10] <dimitern> fwereade: i'm ready with the remover
[18:10] <dimitern> fwereade: if you're still here 10 more minutes, can you take a look (just about to propose)
[18:11] <jam> mgz: "use of closed network connection" with the 1.10 binary...
[18:11] <mgz> guuu
[18:11] <fwereade> dimitern, go for it
[18:11] <jam> mgz: it worked on second attempt
[18:12] <jam> heisenbugs and all that
[18:13] <fwereade> dimitern, and you can take a look at https://codereview.appspot.com/10825044/ while I do, brb
[18:14] <dimitern> fwereade: sure - there it is: https://codereview.appspot.com/11125043 (disregard the typo in the title - will fix it to Remover before submitting)
[18:17] <dimitern> mgz, jam: I'd appreciate a look from you as well, if still around
[18:17] <dimitern> ^^
[18:19] <mgz> sure
[18:20] <jam> ugh, "juju --version" broke "juju upgrade-juju" because we now have "--version" defined in 2 places.
[18:20] <jam> Why didn't the test suite catch this?
[18:21] <mgz> >_<
[18:23] <jam> mgz: and of course --version is a global defined as a BoolVar, and "upgrade-juju --version" is a local defined as a StringVar...
[18:23] <jam> *but* it panics in gnuflag
[18:23] <jam> which is a terrible traceback.
[18:24] <mgz> the test suite doesn't run real binaries, which has been a shortcoming in several places...
[18:26] <jam> mgz: well, it would have caught this, if we had an entry in main suite that tried to install the 'upgrade-juju' subcommand
[18:26] <jam> the issue is that the args for subcommands aren't added to be processed until you request one of them.
[18:26] <jam> So "juju upgrade-juju -h" actually panics
[18:27] <jam> mgz: I wouldn't be terribly surprised to not see a test that "juju upgrade-juju" actually works, even without the "actually run the binary" testing.
[18:27] <fwereade> dimitern, LGTM, just trivials
[18:27] <fwereade> ,
[18:27] <fwereade> mgz, jam, dimitern: sorry, I have to be away now
[18:28] <dimitern> fwereade: thanks!
[18:28] <dimitern> fwereade: reviewed as well
[18:29] <fwereade> dimitern, cheers
[18:29] <jam> dimitern: why rename results => result?
[18:29] <jam> Given the return value is containing a plural slice
[18:29] <dimitern> jam: well, my original intent was to call all bulk result vars "result", since it's a single result struct
[18:30] <dimitern> jam: i mean you're returning not an array, but a struct which has the array inside
[18:30] <dimitern> jam: so result.Results or result.Errors seems better
[18:32] <jam> dimitern: at the top level, you're right there is only one object, but it still feels like a collection of results to me. Though I can't say I feel terribly strongly either way.
[18:33] <dimitern> jam: it made more sense to me at least in singular, but i can change it back if someone feels strongly against it (roger's not around ;)
[18:34] <jam> dimitern: as long as each person touching the code doesn't decide to change them all to their way, I don't actually care. It isn't exposed in the external, they are just local variables.
[18:36] <dimitern> jam: yeah, certainly
[18:42] <jam> dimitern, mgz: https://codereview.appspot.com/11124044/  Unfuck 'juju upgrade-juju'
[18:44] <dimitern> jam: looking
[18:44] <jam> it is just reverting what I just landed
[18:44] <jam> dimitern: thanks ^^
[18:45] <dimitern> jam: Remove is a write action - and the func is canRemove
[18:46] <jam> dimitern: well the func was "canRead" which is obviously wrong. What is the actual auth level control?
[18:46] <jam> can you remove something but not otherwise modify it?
[18:46] <jam> write it, etc.
[18:46] <jam> ?
[18:46] <dimitern> jam: i changed it canRemove and getCanRemove respectively
[18:46] <dimitern> jam: do you mind if it stays like that?
[18:48] <jam> dimitern: I'm slightly concerned about having 20 different "canAddOneLine" "canAppend" "canOverwrite" "canTruncate" that really all just are the difference between "can I view this thing" and "can I modify this thing"
[18:48] <jam> dimitern: canRemove is fine, as long as it actually has a different auth model than "canModify"
[18:49] <dimitern> jam: ok, canModify it is then
[18:55] <jam> dimitern: maybe you know. What causes the api to login with password vs oldpassword?
[18:55] <dimitern> jam: when it's about to change the password
[18:55] <dimitern> jam: at first connect
[18:57] <dimitern> jam: i *think*
[19:00] <jam> dimitern: I have to shut down for tonight, but I can confirm bug #1199915
[19:00] <_mup_> Bug #1199915: api cannot connect, fills deployed machine log with spam <juju-core:Triaged by jameinel> <https://launchpad.net/bugs/1199915>
[19:00] <jam> It spins trying to connect with oldpassword
[19:00] <jam> though on the successful case, It asks to set the password to something new, which doesn't seem to be stored in agent-0.conf
[19:01] <jam> so I don't know how it expects to login again.
[19:01] <dimitern> jam: the idea is to connect with the hash of the original agent conf password, then change it and set old to ""
[19:02] <dimitern> jam: but it's pretty convoluted, i'll take a look tomorrow
[19:03] <dimitern> i'm off as well
[19:03] <dimitern> see ya'all tomorrow ;)
[20:45] <wallyworld> fwereade: hiya, did you have a chance to look at those code reviews?
[23:07] <davecheney> hello - we finally have a go 1.11.2 build
[23:07] <davecheney> i'm just uploading the tools now and doing a test release
[23:07] <davecheney> then i'll bump the version and try to tag it
[23:14] <davecheney> wallyworld_: is it possible to bzr branch from a specific revno ?
[23:15] <davecheney> ie, I want to tag rev 1414 as release-1.11.2
[23:20] <wallyworld_> davecheney: i think so, i'll have to check the exact syntax
[23:21] <wallyworld_> i think it's -c
[23:22] <wallyworld_> nope
[23:24] <bigjools> -r
[23:25] <wallyworld_> i just tried -r and it still pulled in all 1414 revisions
[23:27] <wallyworld_> ah cobzr stuffs it up
[23:29] <davecheney> wallyworld_: that doesn't surprise me
[23:30] <davecheney> so, bzr branch -r 1414 lp:juju-core mybranch ?
[23:30] <wallyworld_> davecheney: yeah, i really need to go back to using straight bzr like i did for lp
[23:30] <wallyworld_> yes
[23:30] <davecheney> wallyworld_: thanks for fixing the simple streams snafu
[23:30] <davecheney> looks like we'll have a release today
[23:30] <wallyworld_> np
[23:30] <wallyworld_> i didn't really fix anything :-)
[23:31] <davecheney> wallyworld_: well, you know who to throw tables at to get it fixed
[23:31] <davecheney> ahasenack: is probably right, we need more logging to show where that happened
[23:31] <wallyworld_> guess so :-)
[23:31] <wallyworld_> yes
[23:31] <davecheney> i'm working on a branch now to show WARNINGs by default
[23:31] <davecheney> but that means we also show ERRORs by default
[23:31] <bigjools> bzr help revisionspec
[23:31] <davecheney> we used to quench everything
[23:31] <wallyworld_> davecheney: i've also sent emails to set up a meeting to put in place processes so this doesn't happen again
[23:31] <davecheney> wallyworld_: +1
[23:32] <davecheney> and that has upset the test cases, a lot
[23:32] <wallyworld_> yes, i am not surprised
[23:32] <wallyworld_> davecheney: i really think logging should be separate from our CLI