[00:01] <wallyworld_> AeroNotix: without looking closely at your code, it seems you've written an openstack client similar to goose
[00:01] <AeroNotix> wallyworld_: indeed, that's what I thought at a cursory look through goose
[00:01] <wallyworld_> you think you have features we don't support yet in goose?
[00:02] <wallyworld_> we wrote goose generically but with juju in mind
[00:02] <AeroNotix> I don't know, I can't make heads or tails of that launchpad site to tell the truth
[00:02] <wallyworld_> so it does what we need for juju but is usable as a separate lib
[00:02] <wallyworld_> i feel the same about github :-)
[00:02] <AeroNotix> :)
[00:02] <wallyworld_> how can i help with launchpad
[00:03] <AeroNotix> And I'm not certain what are "base" openstack modules
[00:03] <AeroNotix> I'm assuming CDN/Block/Compute are base
[00:03] <AeroNotix> object store
[00:04] <wallyworld_> at the moment, compute, object store and limited image (glance) support, as well as identity (userpass, key pair)
[00:04] <wallyworld_> if you want the code, easiest to use bzr
[00:04] <AeroNotix> yeah I will grab the code now
[00:04] <wallyworld_> bzr branch lp:goose
[00:05] <wallyworld_> if you need help etc, just ping me
[00:05] <AeroNotix> will do
[00:08] <AeroNotix> So Goose is built for your needs; are you open to it being extended beyond those and becoming a more comprehensive library?
[00:08] <thumper> wallyworld_: I'm surprised you didn't say "go get launchpad.net/goose" :-)
[00:08] <thumper> AeroNotix: I would assume yes, as long as it continues to meet our needs
[00:09] <wallyworld_> thumper: what can i say, i like bzr
[00:09] <AeroNotix> ok sounds good
[00:09] <thumper> wallyworld_: me too :)
[00:09] <wallyworld_> AeroNotix: go for it :-)
[00:09] <thumper> hmm lunchtime
[00:09] <wallyworld_> AeroNotix: pun intended :-)
[00:09] <AeroNotix> :)
[00:14] <AeroNotix> ok it's quite late here but I'll check in tomorrow :)
[00:14] <AeroNotix> night all
[00:23]  * thumper heads to lunch
[02:40] <Makyo> join #juju-gui
[02:41] <Makyo> Yikes, too many windows at once.
[06:14] <wallyworld_> jam: hi, i have to go to my son's school concert (oh joy) so will miss the weekly meeting but i should be back in time for the standup
[06:49] <davecheney> Quoting fail
[06:49] <davecheney> workitems_text: Invalid work item format: &quot;[TODO] Get Go 1.1 into Saucy.&quot;
[06:49] <davecheney> OK
[09:07] <rogpeppe> mramm, fwereade: keeps on chucking me out
[09:07] <rogpeppe> i'll try one more time
[09:56] <fwereade> blast, forgot appointment, bbiab
[10:21] <dimitern> fwereade: when you're here, i have a question
[10:31] <dimitern> mgz: ping
[10:32] <mgz> dimitern: hey
[10:32] <dimitern> mgz: hey, it seems your branch reverting the raring workaround hasn't landed yet
[10:32] <dimitern> mgz: does it mean that's still in the release?
[10:32] <mgz> yeah, was holding off on it till after release deliberately
[10:33] <dimitern> mgz: ah, ok
[10:33] <mgz> I didn't want to patch it back in late, felt more risky than leaving it till we do 1.10.1
[10:33] <mgz> and the backports thing meant what went in on day 0 was less crucial
[10:34] <dimitern> mgz: ok, so we do have a workaround for raring in 1.10.0 then
[10:34] <mgz> and now the release has happened, that's not going to be hit
[10:35] <mgz> (unless you use a pre-release raring image, rather than the actual release)
[10:35] <dimitern> mgz: what do you mean?
[10:35] <mgz> there's no upstart update pending when you start a 13.04 machine
[10:36] <dimitern> mgz: ah, they backed it out then
[10:36] <mgz> no, it's just that there's no *update*, it's in the image
[10:37] <dimitern> mgz: sorry, i still don't get it - the upstart issue we had, due to which the workaround was introduced - is it fixed or not?
[10:38] <mgz> not yet, but it's only triggered when apt-get upgrade will install a new upstart
[10:38] <mgz> this isn't the case for the release images, they have the latest upstart
[10:38] <dimitern> mgz: ah, i see, we're good then
[10:39] <mgz> and before a new upstart release is SRUed, this bug will get fixed
[11:32] <jam> mgz: standup?
[11:33] <mgz> jam: ta
[11:38] <dimitern> fwereade: ping
[11:44]  * dimitern bbi3m
[12:08] <fwereade> dimitern, pong
[12:29] <dimitern> fwereade: hey, can you take a look at this, to see if i'm heading in the right direction? http://paste.ubuntu.com/5625717/
[12:29] <fwereade> dimitern, sure
[12:31] <fwereade> dimitern, looks sane -- but you'll also, I think, want to be passing the charm in so you can extract the local endpoint of each relation and check that the new charm implements it
[12:31] <fwereade> dimitern, no extra asserts necessary there though, just an error return
[12:31] <fwereade> dimitern, from the ops POV it looks perfect as it is
[12:31] <dimitern> fwereade: well, that's a separate card, wasn't sure if i should mix them in the same branch?
[12:32] <dimitern> fwereade: but i guess the extra code for the endpoints checking is not much
[12:32] <fwereade> dimitern, I think they're the same task -- checking the relations are the same delivers no value without checking they're sane, while checking their sanity almost demands that we also assert sameness
[12:33] <dimitern> fwereade: ok then
[12:33] <fwereade> dimitern, but, hmm
[12:33] <dimitern> fwereade: should i exclude (new/all) peer relations from the generated ops?
[12:33] <fwereade> dimitern, exclude new ones -- just work against original ones
[12:34] <dimitern> fwereade: so as is it's ok - since the new ones won't be there until after setcharm succeeds
[12:34] <fwereade> dimitern, yeah, but there's something knocking at my brain
[12:35] <fwereade> dimitern, oh yeah! you want to check len(relations) against doc.RelationCount and return errRefresh or something (and handle that at the top level)
[12:35] <dimitern> fwereade: ah! yeah, good point
[12:35] <dimitern> fwereade: will do
[12:35] <fwereade> dimitern, so I'll leave it to your judgment re 2 CLs or 1
[12:36] <dimitern> fwereade: ok, thanks
[12:36] <fwereade> dimitern, you might still have trouble testing that bit in isolation though
[12:37] <dimitern> fwereade: i already have one failing test (only one) - the one testing new peer relations.. so i'm still figuring out how to test this
[12:38] <fwereade> dimitern, yeah, bears some thinking about
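The compatibility check being discussed (a charm upgrade must not break relations that are already in use) could look roughly like the sketch below. `Endpoint`, `CharmMeta`, and `checkRelationsCompatible` are simplified stand-ins invented for illustration, not juju's actual state/charm API; the failing scenario mirrors the "db" renamed to "db2" case mentioned later in the log.

```go
package main

import "fmt"

// Simplified stand-ins for juju's state and charm types.
type Endpoint struct {
	Name      string // relation name, e.g. "db"
	Interface string // e.g. "mysql"
}

type CharmMeta struct {
	// All relations (provides/requires/peers) declared by the
	// charm, keyed by relation name.
	Relations map[string]Endpoint
}

// checkRelationsCompatible returns an error unless every endpoint
// currently in use is still declared, unchanged, by the new charm.
func checkRelationsCompatible(inUse []Endpoint, newCharm CharmMeta) error {
	for _, ep := range inUse {
		decl, ok := newCharm.Relations[ep.Name]
		if !ok {
			return fmt.Errorf("would break relation %q: not declared by new charm", ep.Name)
		}
		if decl.Interface != ep.Interface {
			return fmt.Errorf("would break relation %q: interface changed from %q to %q",
				ep.Name, ep.Interface, decl.Interface)
		}
	}
	return nil
}

func main() {
	inUse := []Endpoint{{Name: "db", Interface: "mysql"}}
	// The new charm renamed "db" to "db2": upgrade must be refused.
	newCharm := CharmMeta{Relations: map[string]Endpoint{
		"db2": {Name: "db2", Interface: "mysql"},
	}}
	fmt.Println(checkRelationsCompatible(inUse, newCharm))
}
```

As fwereade notes, this only needs an error return, not an extra transaction assert, since it is computed before the ops are built.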
[12:39] <dimitern> fwereade: btw, can you print out the itinerary + sprint info in 2 copies for saturday?
[12:39] <fwereade> dimitern, sure, np
[12:41] <dimitern> fwereade: cheers
[12:59] <rogpeppe> fwereade: FWIW this was the document i originally wrote regarding upgrades. It has a footnote on major-version upgrades, but not much. http://paste.ubuntu.com/5625794/
[13:00] <rogpeppe> fwereade: i think we'd want some agent to be responsible for waiting for all machines to indicate they're ready, and perform the actual process
[13:00] <fwereade> rogpeppe, yeah, but we also need to make sure that no units or machines are created in the meantime
[13:01] <rogpeppe> fwereade: that's easy if we know that all agents have halted
[13:01] <rogpeppe> fwereade: which is what i meant by "all machines to indicate they're ready"
[13:01] <rogpeppe> fwereade: then we have a single lonely agent carrying out the appointed upgrade tasks
[13:02] <fwereade> rogpeppe, ok, I think it's actually a little harder than "easy" but I agree it's not the hardest thing we'll ever have to do
[13:02] <rogpeppe> fwereade: i meant that, given that step that we've already agreed, making sure that no units or machines are created in the meantime falls out naturally.
[13:03] <rogpeppe> fwereade: that does of course assume the clients communicate through the API
[13:03] <fwereade> rogpeppe, yeah, it's a bit hard to separate the two issues
[13:04] <rogpeppe> fwereade: BTW one thing i don't see on the blueprints is the ability to dynamically change the agents running on an instance
[13:05] <fwereade> rogpeppe, jobs on a machine?
[13:05] <rogpeppe> fwereade: yeah
[13:06] <fwereade> rogpeppe, that's something that feels like a bit of a can of worms that we don't strictly *need* at this stage, but I'll bear it in mind
[13:06] <rogpeppe> fwereade: yes, i'm not sure about it, but it's something worth considering.
[13:09] <rogpeppe> fwereade: the other significant thing with regard to major-version upgrades that i'm considering is how to do the actual mongo schema migration
[13:10] <rogpeppe> fwereade: i'm wondering about building special upgrade binaries that know how to transition from one major version schema to another
[13:10] <rogpeppe> fwereade: then the agent that's responsible for upgrading finds the appropriate binary for the upgrade and runs it
[13:10] <rogpeppe> fwereade: possibly running several in succession if upgrading across several major versions
[13:11] <fwereade> rogpeppe, not sure how that's any better than just making the first agent of the new version responsible for running whatever series of upgrade methods is appropriate and just available right there in state
[13:13] <rogpeppe> fwereade: that assumes the agent knows how to upgrade from every version. i *think* it might be nicer to isolate the compatibility code from the main code
[13:14] <fwereade> rogpeppe, sounds like an awful lot of binaries to download and run, especially since the state-upgrade code is kinda going to have to be in state anyway, isn't it?
[13:15] <rogpeppe> fwereade: i wasn't imagining it was
[13:15] <rogpeppe> fwereade: i'd thought we'd have some specialised code which knows about the schemas for both versions and can run some mongo bulk change stuff.
[13:17] <rogpeppe> fwereade: if the code is in state, then we're going to have loads of alternative versions of the same data structures in the state package indefinitely.
[13:18] <rogpeppe> fwereade: but i take your point about multiple binaries
[13:19] <rogpeppe> fwereade: perhaps put everything in state/upgrade
[13:20] <rogpeppe> fwereade: although it's possibly about more than just mongo schemas, though i can't think of any good counter examples currently
[13:26] <fwereade> rogpeppe, I think that API versioning may be waiting to confuse us here
[13:27] <fwereade> rogpeppe, we'll see
[13:28] <rogpeppe> fwereade: is there a particular problem scenario you have in mind there?
[13:34] <fwereade> rogpeppe, nothing specific -- just that managing two versions at the same time makes me a little confused
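fwereade's alternative to per-version upgrade binaries (the first agent of the new version runs "whatever series of upgrade methods is appropriate", several in succession when crossing multiple major versions) can be sketched as an ordered table of in-process upgrade steps. Everything here is a hypothetical illustration, not juju-core code; the step bodies stand in for real mongo bulk-change operations.

```go
package main

import "fmt"

// upgradeStep migrates state from one major schema version to the next.
type upgradeStep struct {
	from, to int
	run      func() error
}

// Steps are kept in ascending order; each body would do the actual
// schema migration (e.g. mongo bulk updates) in a real implementation.
var steps = []upgradeStep{
	{1, 2, func() error { fmt.Println("migrating schema 1 -> 2"); return nil }},
	{2, 3, func() error { fmt.Println("migrating schema 2 -> 3"); return nil }},
}

// upgradeSchema runs every step needed to move from current to target,
// in succession, stopping at the first failure.
func upgradeSchema(current, target int) error {
	for _, s := range steps {
		if s.from >= current && s.to <= target {
			if err := s.run(); err != nil {
				return fmt.Errorf("upgrade %d->%d failed: %v", s.from, s.to, err)
			}
		}
	}
	return nil
}

func main() {
	// An agent at schema 1 upgrading to schema 3 runs both steps.
	if err := upgradeSchema(1, 3); err != nil {
		fmt.Println(err)
	}
}
```

The trade-off debated above still applies: this keeps old schema knowledge in the main codebase (rogpeppe's objection), but avoids downloading and chaining separate upgrade binaries (fwereade's objection).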
[14:37] <rogpeppe> ha, new internet fix time delayed by another two days
[14:38] <Makyo> Rebuilt my dev environment this morning, but I'm still getting panics around mongo when testing in trunk: http://pastebin.ubuntu.com/5626071/
[14:42] <dimitern> rogpeppe: they're just messing with you now :)
[14:43] <fwereade> Makyo, this remains somewhat baffling -- you can start and use an environment with `default-series: raring`, right?
[14:44] <rogpeppe> Makyo: are you using the mongo from tarball or from the PPA?
[14:44] <Makyo> fwereade, will try that next.  rogpeppe, 2.2.4 from a PPA, looks like.
[14:45] <rogpeppe> Makyo: i recommend trying the tarball version and seeing if that makes a difference
[14:45] <Makyo> rogpeppe, alright, fetching that.
[14:51] <Makyo> fwereade, bootstrap succeeded, status shows agent-state: down  agent-state-info (started)  series: raring.  Will try again in a few.
[14:51] <Makyo> Oh, though I haven't installed from this new env.  Let me do that again.
[14:53] <fwereade> Makyo, yeah, the presence doesn't seem to be 100% reliable in the first few seconds, I think it settles down solidly after that
[14:53] <fwereade> Makyo, regardless, that's looking very much like a working mongo
[14:53] <fwereade> Makyo, can you check whether you have the same version at home?
[14:54] <dimitern> rogpeppe: reviewed both godeps CLs
[14:54] <rogpeppe> dimitern: thanks!
[14:58] <rogpeppe> dimitern: responded
[14:59] <dimitern> rogpeppe: my reasoning about the const usage is that you probably don't need the [1:] anyway - it'll print out a NL, which is not bad
[15:00] <Makyo> fwereade, 2.2.4 on both bootstrap node and home.  I can try the tarball though
[15:00] <rogpeppe> dimitern: i don't want an extra newline before the Usage line. call me anal if you like :-)
[15:00] <dimitern> rogpeppe: ok :) fair enough
[15:00] <rogpeppe> dimitern: and the difference between var and const is minimal here really
[15:01] <dimitern> rogpeppe: LGTM then
[15:01] <fwereade> Makyo, just for confirmation, can you try to run the tests on the machine you just started? I imagine it'll demonstrate the problem but it would be good to check
[15:01] <rogpeppe> dimitern: cool. i'll add some notes to the usage info about the output format
[15:02] <dimitern> rogpeppe: yeah, that'll be helpful, thanks
[15:02] <Makyo> fwereade, Okay, will report back.
[15:16] <fwereade> dimitern, hey, I think you fixed this? https://bugs.launchpad.net/juju-core/+bug/1122134
[15:16] <_mup_> Bug #1122134: status must report machine provisioning errors <juju-core:New> <https://launchpad.net/bugs/1122134>
[15:17] <dimitern> fwereade: yeah, I did, I'll mark it appropriately
[15:17] <fwereade> dimitern, cool, thanks
[15:26] <dimitern> fwereade: i had to remove a uniter test case, because it violated the compatibility checks
[15:26] <dimitern> fwereade: renaming a relation in wp charm from "db" to "db2" and trying to upgrade
[15:27] <fwereade> dimitern, hmm, are you sure it wasn't the only thing covering some other case as well?
[15:27] <dimitern> fwereade: well, i'll propose in a bit, so you can see
[15:27] <fwereade> dimitern, cool, thanks
[15:30] <dimitern> https://codereview.appspot.com/9084045
[15:30] <dimitern> fwereade, rogpeppe: ^^
[15:31]  * dimitern bbiab
[15:41] <Makyo> fwereade, Fewer failures, but still some in state.  http://pastebin.ubuntu.com/5626261/
[15:42] <fwereade> Makyo, huh, very strange
[15:44] <fwereade> dimitern, reviewed, should be quite a simple change but the tests are a little more involved
[15:45] <dimitern> fwereade: thanks!
[15:45] <fwereade> dimitern, it's just rel.Endpoint(serviceName) for each relation that needs to be checked
[15:46] <dimitern> fwereade: and serviceName is s.doc.Name?
[15:46] <fwereade> dimitern, yeah
[15:46] <dimitern> fwereade: ok, will change
[15:47] <dimitern> fwereade: how about the tests?
[15:48] <dimitern> fwereade: do they look good, except that endpoints change?
[15:48] <fwereade> dimitern, well, I'd love to see a mechanism whereby we could pause txn execution just before it hapens, and hook in to fuck up the state and see how it reacts
[15:48] <fwereade> dimitern, but that's a bit out of scope here
[15:49] <dimitern> fwereade: yeah, just "a bit" :)
[15:49] <fwereade> dimitern, ;p
[15:49]  * fwereade wants to dash off and work on another new thing now
[15:50] <dimitern> fwereade: i was thinking of adding more tests with changing relations, but as you said we cannot change state during a transaction like that
[15:52] <fwereade> dimitern, yeah, I don't think it's practical to test that behaviour via spray-and-pray
[15:52] <dimitern> fwereade: i spotted a follow-up though - adding more tests to upgrade-charm to handle incompatible upgrades
[15:53] <fwereade> dimitern, +100
[15:53] <dimitern> fwereade: will add a card then
[15:53] <fwereade> dimitern, cheers
[16:05] <dimitern> fwereade: how can I get a fresh s inside a service method without Refresh() ?
[16:05] <dimitern> fwereade: s = s.st.Service(s.doc.Name) ?
[16:06] <dimitern> fwereade: changing the method receiver like that seems wrong..
[16:08] <mgz> dimitern: fun branch for you to review if the upload ever finishes...
[16:08] <dimitern> mgz: sure
[16:08] <mgz> it's only like I'm touching every source file, what's the issue rietveld?
[16:12] <mgz> dimitern: up, 9104045
[16:12] <fwereade> dimitern, yeah, that'd be fine
[16:12] <mgz> fwereade: ^you may also want to eyeball
[16:13] <dimitern> mgz: will look shortly
[16:13] <fwereade> mgz, cheers
[16:13] <dimitern> fwereade: Assert: append(txn.DocExists, sameRelCount...), doesn't seem to work (txn.DocExists is an "ideal string" and sameRelCount := D{{"relationcount", s.doc.RelationCount}})
[16:15] <fwereade> dimitern, hmm, perhaps DocExists is implicit in any other field check?
[16:16] <dimitern> fwereade: so skip it?
[16:17] <dimitern> fwereade: yeah, it seems to work
[16:18] <fwereade> dimitern, cool
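The point fwereade suspects here (that `txn.DocExists` is implicit in any field assertion) holds because a field-value assert can only ever match a document that exists. The sketch below demonstrates that matching semantics with minimal stand-in types; `field`, `D`, and `matches` are invented for illustration and are not the real mgo/txn types, which live in the mgo package's txn subpackage.

```go
package main

import "fmt"

// Minimal stand-ins for the bson.D shape used in the assert.
type field struct {
	Name  string
	Value interface{}
}
type D []field

// matches reports whether assert holds for doc. A missing document
// (nil) can never satisfy a field assertion, which is why adding an
// explicit DocExists alongside a field check is redundant.
func matches(doc map[string]interface{}, assert D) bool {
	if doc == nil {
		return false
	}
	for _, f := range assert {
		if doc[f.Name] != f.Value {
			return false
		}
	}
	return true
}

func main() {
	sameRelCount := D{{"relationcount", 2}}
	fmt.Println(matches(nil, sameRelCount))                                        // missing doc: no match
	fmt.Println(matches(map[string]interface{}{"relationcount": 2}, sameRelCount)) // match
	fmt.Println(matches(map[string]interface{}{"relationcount": 3}, sameRelCount)) // stale count: no match
}
```

So asserting only `D{{"relationcount", s.doc.RelationCount}}` both confirms the document exists and guards against a concurrent relation-count change, matching what dimitern observed.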
[16:51] <rogpeppe> fwereade: here's a sketch of how we might do major version upgrades: http://paste.ubuntu.com/5626471/
[16:52] <rogpeppe> fwereade: there are actually some interesting interactions between upgrades and multitenancy that we need to discuss
[16:58] <dimitern> mgz: wow, that has to be the biggest diff ever! :)
[17:01] <dimitern> mgz: so how did we manage to go through packaging for the release without copyrights?
[17:09] <dimitern> fwereade: updated https://codereview.appspot.com/9084045
[17:09] <dimitern> rogpeppe: you too perhaps wanna look? ^^
[17:09] <rogpeppe> dimitern: looking
[17:13] <dimitern> mgz: interesting, how is this mojo bzr update-copyright acts so smart?
[17:13] <dimitern> s/acts/actually acting/
[17:17] <dimitern> a fine example of why rietveld sucks for not having a full diff preview :)
[17:17] <mgz> dimitern: it's pretty funny, isn't it :)
[17:19] <dimitern> mgz: LGTM
[17:19] <dimitern> mgz: when are you planning on landing this?
[17:20] <mgz> nowish, though might wait till next week just so people can bikeshed it a bit
[17:21] <rogpeppe> dimitern: replied
[17:21] <dimitern> rogpeppe: thanks!
[17:21] <dimitern> mgz: haven't heard of parkinson's law of triviality before :) lmao
[17:37] <dimitern> mgz: it would be a blast watching the yak shaving about it
[17:39] <dimitern> fwereade: ping
[18:08] <rogpeppe> dimitern, fwereade: i'd be interested in your reaction to this straw man sketch on multitenancy: http://paste.ubuntu.com/5626707/
[18:08] <rogpeppe> hazmat: ^
[18:09] <hazmat> rogpeppe, that's a strange definition of multi-tenancy
[18:10] <rogpeppe> hazmat: perhaps so
[18:10] <rogpeppe> hazmat: any definition i tried to make seemed to be pushing me in that direction though
[18:10] <rogpeppe> hazmat: this was the definition i started with:
[18:10] <rogpeppe> We want to allow several state instances to be served from the same machine or,
[18:10] <rogpeppe> in a high-availability context, to be able to have an arbitrary n-m mapping
[18:10] <rogpeppe> from server processes to machines that the server processes run on,
[18:10] <rogpeppe> with restrictions as deemed appropriate.
[18:12] <rogpeppe> hazmat: this is my rough draft text for the blueprint: http://paste.ubuntu.com/5626724/
[18:12] <hazmat> rogpeppe, the common definition is a single api endpoint that knows how to isolate resources for clients. a separate endpoint per client might be viable, but it scales poorly
[18:12] <hazmat> rogpeppe, the goal is to amortize the cost of a set of ha state servers over many tenants
[18:13] <rogpeppe> hazmat: there are two state servers we're talking about here
[18:13] <rogpeppe> hazmat: one is mongod
[18:13] <rogpeppe> hazmat: the other is the juju API state server
[18:13] <rogpeppe> hazmat: and then there's the matter of the environment management agents too
[18:14] <rogpeppe> hazmat: the solution i'm thinking about does amortise the cost of the mongod servers
[18:14] <hazmat> rogpeppe, yes.. and we want one set of ha servers for all.. not a process per tenant.
[18:14] <hazmat> imo
[18:14] <rogpeppe> hazmat: i don't know that that's too bad
[18:14] <rogpeppe> hazmat: it's much much better than one instance per client :-)
[18:15] <rogpeppe> hazmat: and means environments are naturally isolated
[18:15] <rogpeppe> hazmat: i'm sure we could easily have 500 or so API servers per instance
[18:15] <dimitern> rogpeppe: looks solid, if not a bit overcomplicated in places
[18:16] <hazmat> rogpeppe, so they each get a different port?
[18:17] <rogpeppe> hazmat: i guess so
[18:17] <hazmat> i'm skeptical, but it would be nicer to talk about in person.
[18:18] <hazmat> scaling O(n) tenants is going to be a problem for some use cases
[18:18] <hazmat> like cloud ;-)
[20:18] <ahasenack> hm, the way development goes in juju-core, bugs stay open even after the fix branches are merged
[20:19] <ahasenack> so I don't get a notification that a bug has been fixed already (the branch was merged),
[20:19] <ahasenack> because the bug status doesn't chnage
[20:19] <ahasenack> change
[20:19] <ahasenack> case in point: #1172895 which was blocking me
[20:19] <_mup_> Bug #1172895: relation-list incompatibility with pyjuju: -r <juju-core:Fix Committed by fwereade> <https://launchpad.net/bugs/1172895>
[20:19] <ahasenack> I marked it as "fix committed" just now, because I saw that the branch was merged
[20:19] <ahasenack> note that the review request is still up in LP, because you guys use something else
[20:20] <ahasenack> if you are using something else for reviews, why not bite the bullet and use that same something else for bugs? If you link lp bugs to it (if supported), then the bug status will be good
[21:14] <thumper> morning
[23:10] <bigjools> morning
[23:10] <thumper> hi bigjools
[23:10] <bigjools> wazzup big t
[23:16] <thumper> bigjools: not a lot
[23:16] <thumper> bigjools: doing some side hacking... in go
[23:16] <thumper> bigjools: a more useful logging package
[23:17] <bigjools> \o/
[23:17] <thumper> just writing some more tests, then I'll push to LP
[23:17] <thumper> calling it ...
[23:17] <bigjools> we spent some time looking at errors recently
[23:17] <thumper> wait for it
[23:17] <thumper> golog
[23:17] <bigjools> and how we could improve them
[23:17] <bigjools> your imagination is astounding :)
[23:17] <thumper> i know, right?
[23:18] <thumper> has the standard ideas
[23:18] <thumper> modules, variable levels, writers and formatters
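The four ideas thumper lists (modules, variable levels, writers and formatters) can be sketched in a few lines. This is an illustrative toy, not the actual API of the package being written; all names here are invented.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// Level is a log severity; each module can set its own minimum.
type Level int

const (
	DEBUG Level = iota
	INFO
	WARNING
	ERROR
)

func (l Level) String() string {
	return [...]string{"DEBUG", "INFO", "WARNING", "ERROR"}[l]
}

// Formatter turns a log record into a line; the Writer decides
// where that line goes.
type Formatter func(module string, level Level, msg string) string

type Logger struct {
	Module   string
	MinLevel Level // variable per-module level
	Out      io.Writer
	Format   Formatter
}

// Logf drops records below the module's level, otherwise formats
// and writes them.
func (lg Logger) Logf(level Level, format string, args ...interface{}) {
	if level < lg.MinLevel {
		return
	}
	msg := fmt.Sprintf(format, args...)
	fmt.Fprintln(lg.Out, lg.Format(lg.Module, level, msg))
}

func main() {
	lg := Logger{
		Module:   "juju.worker",
		MinLevel: INFO,
		Out:      os.Stdout,
		Format: func(m string, l Level, msg string) string {
			return fmt.Sprintf("%s %s: %s", l, m, msg)
		},
	}
	lg.Logf(DEBUG, "filtered out") // below MinLevel, dropped
	lg.Logf(INFO, "worker started on %s", "machine-0")
}
```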
[23:18] <bigjools> loggo would have been amusing
[23:18] <thumper> bigjools: there is still time...
[23:18] <thumper> I think I may well change it to that
[23:19] <thumper> I was going for consistency
[23:19] <thumper> but amusing is good
[23:19] <thumper> and it still fits
[23:19] <bigjools> :)
[23:23] <thumper> bigjools: renamed
[23:23] <bigjools> \o/
[23:23] <thumper> bigjools: to be honest, I think jam suggested the same thing last night :)
[23:23] <bigjools> haha
[23:24] <bigjools> it sounds antipodean