[00:05] wallyworld_: thanks [00:05] sure :-) [00:16] ericsnow: not to be a party pooper but login using github is no longer working [00:16] ericsnow: I get a 404 from github [00:16] perrito666: (╯°□°)╯︵ ┻━┻ [00:17] lol [00:17] perrito666: I'm so glad thumper showed me that URL :) [00:17] perrito666: you get a 404 in your browser or in rbt? [00:18] browser. I changed browser and when I try to re-login I go to oauth, log into github and then get redirected to a 404 page [00:19] perrito666: try it now [00:20] still, now the oauth button goes directly to the 404 since I am already logged in [00:21] perrito666: a github 404 [00:21] perrito666: ? [00:21] yup, that is what I meant by: ericsnow: I get a 404 from github [00:37] * fwereade has updated http://reviews.vapour.ws/r/288/ and is busy pushing right now [00:38] * fwereade would *really* like to be able to merge it tomorrow morning -- and, fwiw, jam has already taken a look at the logic, but it necessitated a depressing quantity of test changes that could use a look [00:38] * fwereade goes to bed [01:05] thumper: is one of the ideas behind multi-environment to support things like having most things in MAAS but being able to burst to EC2? [01:05] no [01:05] not multi-provider [01:05] still one provider [01:05] but separate environments [01:06] ah [01:17] axw: standup? [01:18] wallyworld_: sorry, brt [01:58] menn0: http://reviews.vapour.ws/r/295/ [01:58] menn0: I have to take Bella to ice skating, back online in 40min or so [01:58] menn0: see what you think of the solution for interface ids [03:40] menn0: https://github.com/juju/loggo/pull/5 [03:41] thumper: just finishing up another review. You're next :) [03:41] menn0: ack [04:02] thumper: I've had a look and have one question. [04:02] thumper: (see PR) [04:02] kk [04:13] thumper: done [04:52] wallyworld_, axw, thumper: I have a local environment that's failing to upgrade from 1.20 to current master due to: [04:52] upgrade step "migrate tools into environment storage" failed: failed to fetch 1.20.11.1-precise-amd64 tools: stat /home/menno/.juju/local/storage/tools/released/juju-1.20.11.1-precise-amd64.tgz: no such file or directory [04:52] hmmm [04:52] looking in my .juju it looks like the directory should be "releases" not "released" [04:53] does that ring any bells? [04:53] yeah, it needs to retry [04:53] i'll fix [04:53] the upgrade logic is retrying all the steps anyway [04:54] but it's not getting there yet [04:54] wallyworld_: is "released" correct? [04:55] menn0: it is for 1.21 [04:55] wallyworld_: right [04:55] i did test that upgrade a couple of days ago [04:55] something must have changed [04:55] sigh [04:56] so i need to fix [04:56] wallyworld_: I think I've seen this once before but then it stopped happening [04:56] could be, i'll take a look. thanks for letting me know [04:56] wallyworld_: np [04:58] wallyworld_: in case it's relevant, fwereade would like us to split out the upgrade steps that act on state.State directly from the ones that use api.State, running all the state.State steps first (on state servers only, obviously) [04:58] sounds reasonable [05:02] wallyworld_: it's actually more important than I first realised, especially now that we have upgrade steps defined for various alpha releases. You really need all the DB migrations for all intermediate releases to have run before other upgrade steps. 
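For readers following the "released" vs "releases" confusion above: the fix wallyworld_ lands later in the log (reviews.vapour.ws/r/296) makes the tools-migration step also look in the old directory layout. A minimal sketch of that fallback idea, assuming illustrative path layout and function names rather than juju's actual upgrade code:

```go
// Illustrative only: try the 1.21-era "released" directory first, then the
// pre-1.21 "releases" directory that 1.20 local environments used.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

var toolsDirs = []string{"released", "releases"}

// findToolsTarball returns the first matching tools tarball under any of
// the known storage layouts, so the migration step tolerates old layouts.
func findToolsTarball(storageRoot, vers string) (string, error) {
	name := fmt.Sprintf("juju-%s.tgz", vers)
	for _, dir := range toolsDirs {
		path := filepath.Join(storageRoot, "tools", dir, name)
		if _, err := os.Stat(path); err == nil {
			return path, nil
		} else if !os.IsNotExist(err) {
			return "", err
		}
	}
	return "", fmt.Errorf("no tools tarball found for %s", vers)
}

func main() {
	root := filepath.Join(os.Getenv("HOME"), ".juju", "local", "storage")
	path, err := findToolsTarball(root, "1.20.11.1-precise-amd64")
	fmt.Println(path, err)
}
```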
[05:02] wallyworld_: the upgrade step will need to use LegacyRelases or whatever you named it [05:02] yes, changing tha now [05:02] cool [05:04] axw: are you still around? [05:04] I'm running into problems with "AddCharmConcurrently" sometimes failing (about 1 in 3 for me, and it failed on the bot) [05:05] jam: I am [05:05] oh [05:05] and you were the last person to touch the "c.Assert(blobs, gc.HasLen, 10)" line [05:05] yeah, I was trying to fix it :) [05:05] jam: anything useful in the error output? [05:06] http://juju-ci.vapour.ws:8080/job/github-merge-juju/1073/console [05:06] it lists the blobs it did find [05:06] thanks [05:06] which in my case is 9 total blobs, and only 1 of them has "true" [05:11] axw: I *do* see 10 calls to Client.AddCharm [05:12] oddly, I then see 10 "successfully dialed mongo on 127." but only 8 "successfully dialed mongo on ::1" [05:12] that seems a bit WTF to me [05:12] why would we be dialing mongo separately, because of the storage abstraction? [05:13] resource catalog entry created with id "47d6e7812099383e8266f590d37b7f5369ef43931aecc72a4b86674e7052f346a391648209b8e69ea5b649d2c0e86da0" [05:13] jam: possibly because we're Copying the mgo.Session [05:13] concerns me, because it is exactly the same identity [05:13] jam: IIRC that particular id is based on a hash of the content [05:13] and so it's expected [05:14] I do only see 9 "managed resource entry created" calls [05:14] log entries [05:14] yeah, odd. [05:14] and 9 "resource catalog entry created" lines [05:15] axw: ok, weird... I see all of the requests result in a success response [05:16] (you can match up the RequestId 3-12 and see the Response for each one of them) [05:16] axw: I have a theory... that one of those "concurrent" uploads actually doesn't start until the first success finishes [05:16] and thus it doesn't have anything to upload because the content is already in the db [05:17] sounds very plausible... [05:17] yep [05:17] that'd do it [05:18] we should just check <= 10 [05:18] axw: but why would it ever be >10 [05:19] jam: heh, true [05:19] axw: I think we need to check with dimitern on what his thoughts are [05:19] I understand that we want to make sure if there is *some* concurrency, only one entry wins [05:20] axw: but I think line 1019 of apiserver/client/client.go is responsible [05:20] "if stateCharm.IsUploaded(), return nil) [05:20] jam: yep [05:23] axw: so how was it failing before that you were trying to fix it ? [05:26] jam: I misremembered. I just changed it to work with the blobstore-backed charm storage; some bugs were found in the blobstore as a result of running that test, but it didn't end in me changing the test for that purpose [05:30] axw: yeah, I'm looking at your diff, trying to see how the test was intended to work [05:33] jam: IMO we should just remove the count check. I guess it'd be nice to force the concurrency, but seems like a lot more work than it's worth [05:33] would be nice for it to not be random though [05:37] axw: so I'm trying to figure out if the old test was just as wrong, or if it is just that we introduced deduping and now only sometimes it fails [05:42] jam: I'm pretty sure it was broken before. 
the only change, looking at the diff is how the charm archive is stored - in the blobstore as opposed to in provider storage [05:43] and the existence check was there before [05:43] axw: it doesn't help that "git blame apiserver/client/client.go" lists the last modification for "IsUploaded()" lists a revision that "git log" say doesn't include the file [05:43] huh [05:45] axw: I'm guessing that "git blame" is tracking hunks being moved between files while "git log" isn't [05:46] axw: how about a reasonable compromise of asserting that we do have at least 2 entries (so at least *some* concurrency was observed) [05:46] fair enough? [05:48] jam: it's still possible to fail though... I'm not super keen on that [05:48] what if we synchronise 2 of the calls in the storage Put? [05:48] then we at least exercise the concurrency of the state changes [05:48] axw: so it would have to not be concurrent at all on 10 requests, and ATM we've seen at least 9 [05:48] axw: so you mean have Put wait to return until it gets a second request ? [05:49] yeah I guess it's pretty unlikely to fail [05:49] I do like that more [05:49] (I'm all for "deterministic" concurrency) [05:49] I'm not 100% sure how to do that, though [05:49] I guess recordingStore could do it for us? [05:49] jam: yep, if we have a waitgroup or something in there we could use that [05:51] we could just have them all wait actually - the bits after storage.Put don't interact with blob storage [05:51] oh actually I lie [05:51] they can Remove [05:52] axw: I think they have to Remove because they end up as "false" [05:52] right [05:53] and if we sync them all then we lose potential testing of concurrent Put/Remove [05:53] axw: well we can do it as a barrier [05:53] so the code waits at the beginning of Put until it accrues 10 requests and then all of them unblock [05:54] yes that'd be better. I was thinking in terms of having it at the end [05:55] axw: so... hmm. could it be done with a sync.WaitGroup ? [05:56] I don't know if it is evil or not, but rs.Put() could do a "wg.Done(); wg.Wait()" sort of thing? [05:56] so we have 2 WGs, one for "the goroutine is completely done" and another for "each goroutine has gotten to the start of Put()" [05:58] jam: that seems fine to me [06:09] axw: so if I put a time.Sleep(10ms) into the for loop, it fails reliably [06:09] and with putBarrier and the sleep it passes reliably [06:14] axw: http://reviews.vapour.ws/r/297/ [06:15] jam: thanks, just finishing a review will take a look in a sec [06:15] actually this may take a bit, I'll switch now [06:19] jam: LGTM, thanks === urulama___ is now known as urulama [06:43] axw: this makes upgrades look in the old releases dir http://reviews.vapour.ws/r/296/ [06:44] wallyworld_: on it [06:44] ta [07:46] dimitern: morning [08:25] morning folks [08:46] * fwereade has $$merged$$ 987, and is going back to bed for a bit, not feeling so good [08:47] bollocks, forgot to push the last tweaks, at least they're trivials [08:47] I'll do them later [08:50] morning [09:21] morning all [09:24] morning [10:07] wallyworld_: team meeting? [10:07] voidspace: TheMue: team meeting [10:07] jam: I'm in there. [10:08] TheMue: not standup, juju-core-team [10:08] TheMue: https://plus.google.com/hangouts/_/canonical.com/juju-core-team [10:08] jam: oh, ouch, coming. [10:08] voidspace: you as well, if you didn't realize [10:08] fwereade: are you coming ? [10:09] jam: he mentioned going to bed [10:09] oops [10:09] sorry [10:09] I forgot both... 
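To make the concurrency in AddCharmConcurrently deterministic, jam and axw settle above on blocking every storage Put until all of the goroutines have reached it (the putBarrier jam mentions), using one WaitGroup as a barrier and a second one to wait for completion. A rough standalone sketch of that pattern, with a hypothetical storage wrapper standing in for the test's recording storage:

```go
package main

import (
	"fmt"
	"sync"
)

type storage interface {
	Put(name string, data []byte) error
}

type fakeStorage struct{}

func (fakeStorage) Put(name string, data []byte) error { return nil }

// blockingStorage holds every Put at a barrier until all expected callers
// arrive, so the uploads genuinely overlap instead of finishing one by one.
type blockingStorage struct {
	storage
	barrier *sync.WaitGroup
}

func (s blockingStorage) Put(name string, data []byte) error {
	s.barrier.Done() // this goroutine has reached Put
	s.barrier.Wait() // wait for the other callers to reach it too
	return s.storage.Put(name, data)
}

func main() {
	const n = 10
	var barrier, done sync.WaitGroup
	barrier.Add(n)
	done.Add(n)
	store := blockingStorage{fakeStorage{}, &barrier}
	for i := 0; i < n; i++ {
		go func(i int) {
			defer done.Done()
			_ = store.Put(fmt.Sprintf("charm-%d", i), nil)
		}(i)
	}
	done.Wait()
	fmt.Println("all", n, "Puts overlapped")
}
```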
[10:22] we have three m1.small instances running on the ec2 account [10:22] I think they've been there for a few days [10:23] ah no, launched October 30th [10:34] mgz, looks like we just landed some code that doesn't build [10:34] http://juju-ci.vapour.ws:8080/job/github-merge-juju/1081/console [10:36] mattyw: looks like fwereade change https://github.com/juju/juju/pull/987 [10:36] submitted a fix for it: http://reviews.vapour.ws/r/299/ [10:37] axw, you're ocr today right? could you take a look at this ^^? [10:37] mattyw: do the test pass with those alone? [10:37] perrito666, they do [10:37] mattyw: sorry, EOD as soon as this meeting is up. been online for about 10h and back for another meeting later :/ [10:38] axw, ok no problem, it is quite trivial - fixing build failures that have landed [10:38] mattyw: looks like perrito666 already did it anyway [10:39] mattyw: I just lgtmd but you might need davecheney love [10:39] although its a really trivial change [10:39] axw, perrito666 ok cheers guys [10:39] mattyw: but as will said: bollocks, forgot to push the last tweaks, at least they're trivials [10:39] which I presume conains your change [10:40] axw: yeah, was at soccer tonight, see you again in 90 mins [10:52] could I get another review from a "graduated" reviewer so we can get this landed? http://reviews.vapour.ws/r/299/ [10:54] mattyw: done [10:54] TheMue, thanks very much [10:55] yw [11:11] mgz: mattyw: do we know how we were able to land the code on the bot if it doesn't even compile ? [11:27] jam, no idea, I pinged mgz to see if the logs can tell us any more [11:38] jam, this is the offending merge: http://juju-ci.vapour.ws:8080/job/github-merge-juju/1080/console but it shows the context test as passing. Then it fails in the next merge http://juju-ci.vapour.ws:8080/job/github-merge-juju/1081/console [11:40] mattyw: what is really strange is that it is failing inside "cmd/juju" how would that even have visibility into worker/uniter/context/factory_test.go [11:42] jam, there's something odd happening there, because the line that contains uniter/context correctly states the build failed - just the build errors are nowhere near it [11:42] review appreciated on this change that makes it possible to use Go 1.4 with our testing stuff: https://github.com/juju/testing/pull/36 [11:42] jam, that must just be a bug with the way output is captured [11:43] mattyw: it may be a stderr vs stdout buffering issue [11:43] (stdout is oftend buffered and stderr is immediate) [11:43] jam, sounds most likely [11:43] though you have 300s from the time of that error messages and when it gets to uniter build failed, but I guess it isn't that many lines [11:45] mattyw: I think we need someone investigating this, but I'm at a loss for any more information myself [11:46] this feels a *lot* like we can't trust our bot to reject bad patches, which is a bad place to be [11:46] * jam away [11:47] jam, I don't really see how that could land at all. Maybe running tests with -a to force rebuilding all the packages might help? [11:47] jam, that's all I can think of [11:52] mattyw: yeah, I just did a checkout of 2224d55 and it clearly doesn't "go test" here. 
[11:53] jam, it clearly shows it passing in the log of the merge [11:53] jam, even though we know it doesn't [11:54] mattyw: ok github.com/juju/juju/worker/uniter/context 17.402s yeah [11:54] mattyw: this feels a *lot* like we were running the tests on raw "master" rather than the merge of "master" and the new code, but we've had rejections for bad patches before [11:55] jam, we've had this issue before, where broken code lands [11:55] jam, and we've tried landing intentionally broken stuff before to check it gets rejected [11:57] mattyw: so I feel like we are well into territory that we need mgz/someone who set up this bot/ to be doing the investigation [11:57] but we clearly were able to land broken code on trunk, and than is (IMO) a Critical priority bug because it causes us to land Critical breakage accidentally [12:46] mattyw: jam I have seen the delay between error and the test stdout when there is a compilation errror, same happened to me when I encountered an import cycle [12:47] perrito666: sure, I'm not particularly worried about that. I *am* worried that William's patch caused a build failure but it didn't fail the merge, but it did fail the *next* merge. [12:47] and I can checkout the revision that the bot landed and "go test" fails in that directory for me [12:49] do you have the link for the merge job that passed? [12:49] wallyworld_: sorry [12:49] had clicked the button and it was lagging :p [12:49] np :-) [12:49] to answer, yes, I think so [12:49] we'll catch up tomorrow [12:50] it was a good meeting [12:50] cya tomorrow [12:50] yep [13:00] anastasiamac_: hey is http://reviews.vapour.ws/r/300/diff/# released? I see that the launchpad bug is marked as fix released [13:00] perrito666: there was a minor inconvenience with the fix... [13:01] oh ok, so that still needs a review? [13:01] perrito666: actually 2: 1. warning was displayed when not needed 2. upgrade did not clean deleted attribute [13:01] phew, back again after some troubles with my network [13:01] perrito666: would be gr8 :-) [13:23] perrito666: thnx for the review :-) [13:23] np [13:24] perrito666: hope no oxygen tanks at high speed for u today :-) [13:24] anastasiamac_: I would certainly hope a) they deliver once a week only and b) the already know there is a speed bump there for next time [13:25] perrito666: u have oxygen delivered once a week? [13:26] anastasiamac_: I believe they where delivering to a construction nearby, its oxygen used for soldering not for hospitals [13:27] perrito666: ahh! makes sense ... [13:52] ericsnow: since you're ocr, could you take a look at http://reviews.vapour.ws/r/302/ ? it's a critical bug fix for alpha3 [13:52] :-) [13:56] wallyworld_: will do [13:56] thanks [13:59] fwereade: ping [14:02] natefinch: are you able to get on to the tosca call? [14:02] natefinch: I keep getting disconnected when I dail in [14:04] wwitzel3: oops, lost track of time, lemme try [14:05] wwitzel3: works for me [14:05] natefinch: just worked for me too *shrug* [14:06] something tells me that our standup is pushed? 
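On anastasiamac_'s fix discussed above (a warning shown when not needed, and an upgrade that did not clean out a deleted config attribute), the shape of such a cleanup step is roughly the following; the attribute name and helper are hypothetical, since the log does not say which key was involved, and juju's real step would go through state and environs/config:

```go
package main

import "fmt"

// removeDeprecatedAttrs drops attributes that newer versions no longer
// understand, reporting whether the config actually changed.
func removeDeprecatedAttrs(cfg map[string]interface{}, deprecated ...string) bool {
	changed := false
	for _, key := range deprecated {
		if _, ok := cfg[key]; ok {
			delete(cfg, key)
			changed = true
		}
	}
	return changed
}

func main() {
	cfg := map[string]interface{}{
		"name":            "local",
		"some-old-option": "value", // hypothetical deprecated key
	}
	if removeDeprecatedAttrs(cfg, "some-old-option") {
		fmt.Println("config cleaned:", cfg)
	}
}
```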
[14:06] perrito666: presumably it got tosca'ed an hour later [14:07] lol, well if we could toscate :p the calendar appt that would be awesome [14:08] yeah sorry guys [14:08] don't worry everything is awesome [14:08] :p [14:09] I'll get you to see that movie [14:09] I just bought it, actually :) [14:09] I had opportunity to watch it on the way to and from brussels, but I didn't want to watch it without my wife [14:10] natefinch: lol, I watched all the movies on the plane, it was a long flight [14:10] natefinch: you could include your kids too, you know, as an excuse to watch a kids movie === natefinch is now known as natefinch-afk [15:04] natefinch-afk, perrito666, wwitzel3: standup? [15:04] ericsnow: my calendar says like in 2 hrs [15:04] perrito666: my bad [15:34] fwereade: ping [15:42] mgz: ping [15:42] rogpeppe: hey [15:43] mgz: hiya [15:43] mgz: we're just debating bzr stuff [15:43] mgz: if i've done "bzr checkout --lightweight something -r N [15:43] " [15:43] yup [15:44] mgz: is there some way of finding out the revision digest that i've just checked out? [15:44] bzr revno --tree [15:44] mgz: unfortunately that prints: [15:44] bzr: ERROR: Server sent an unexpected error: ('error', 'GhostRevisionsHaveNoRevno', 'Could not determine revno for {roger.peppe@canonical.com-20130430112516-r2iav07y5kqd9cmk} because its ancestry shows a ghost at {roger.peppe@canonical.com-20130913075328-6ok3ntchesxkt04o}') [15:45] you've done a checkout of a remote branch? not a local one? [15:45] mgz: yes [15:45] Don't Do That [15:45] mgz: because we're only interested in the tree (and now, the revision-info of that tree) [15:45] mgz: we don't care about the history at all [15:45] (it's a bug, but you're asking bzr to do basically everything over the api) [15:46] mgz: yeah, ok [15:46] you can fix up the remote branch to not tickle the bug probably, but I'd have to go look up how [15:46] mgz: it needs to work with any branch [15:46] mgz: as this is part of an automated process, ingesting charms/bundles to the charm store [15:47] rogpeppe: bug 1161018 [15:47] Bug #1161018: bzr revno fails on ghost ancestry [15:48] has a workaround (which is slow - needs to fetch some things over http, but fine for just revno I think as that shouldn't touch packs) [15:48] (just less fine for update if that also hits the issue) [15:49] mgz: i don't see a workaround there [15:49] use nosmart+ for the remote branch url [15:50] as in, force bzr not to use the api, but just touch the remote objects directly [15:50] mgz: what would a url using nosmart+ look like? [15:52] natefinch-afk, perrito666 : can someone on moonstone take a look at the request in comment #9 and get it resolved asap: [15:52] https://bugs.launchpad.net/juju-core/+bug/1384001 [15:53] Bug #1384001: Juju doesn't retry hard enough when destroying MAAS environments [15:55] rogpeppe: literally just any other url with nosmart+ on the front [15:55] so, nosmart+lp:... nosmart+http://... [15:56] alexisb: looking [15:56] mgz: interesting. that's about 20 times faster too. [15:57] mgz: (and it works) [15:57] mgz, the version change issue you are working on, is it related to a specific bug? [15:58] mgz, I just want to make sure core is not also trying to work the same bug :) [15:59] alexisb: it looks like it's just an accidental change [15:59] mgz: sorry, that's bollocks [15:59] mgz: but it does work [16:00] mgz, is there a bug number? [16:00] commit 6ae54f8... 
changes version back from alpha3 to alpha2 [16:00] alexisb: filing now [16:01] alexisb: natefinch-afk running now tests before PR the change [16:05] perrito666, mgz thanks guys! [16:09] bug 1387764 filed, putting up a branhc now [16:09] Bug #1387764: Version reverted from alpha3 to alpha2 [16:19] ok anyone ? https://github.com/juju/juju/pull/1004 [16:24] ericsnow, Please take a look ^^ [16:24] perrito666: looks fine to me, what's the reviewboard bits up to atm? [16:25] https://github.com/juju/juju/pull/1005 for the version fix [16:25] alexisb: will do [16:25] mgz, may have already done it :) [16:25] mgz: reviewboard? [16:25] thanks all for pulling together to get these critical bugs nailed [16:26] mgz: http://reviews.vapour.ws/r/306/ [16:29] perrito666: do I need to manually create still? I thought it was magically handled now [16:29] (lgtmed btw) [16:29] mgz: should be magically created [16:29] though you have to add the comment by hand [16:29] meh 1387764 is blocking me [16:30] perrito666: mwahaha, easy way to resolve that :) [16:30] mgz: go to rb [16:30] look for your pr [16:30] make sure its not draft [16:30] if so publish it [16:30] because its not there [16:31] I do not see anything... [16:31] mgz: rbt post it [16:31] plz [16:31] or even better [16:31] hold [16:31] I can rbt post [16:32] mgz: there, you have the lgtm in github... but now I just realized that you need a graduated [16:32] mgz: like this http://www.youtube.com/watch?v=hsdvhJTqLak :p bad joke, I know [16:33] rogpeppe: could you lgtm mgz branch? you have super powers [16:33] perrito666: which branch? [16:33] https://github.com/juju/juju/pull/1005/files [16:34] mgz, perrito666: i don't understand the issue but i trust martin :) [16:35] reviewed [16:35] rogpeppe: someone backed the version number [16:35] by accident I assume [16:35] mgz: merge your thing [16:35] doing [16:36] perrito666: wtf, that trailer is basically the whole film [16:37] mgz: 70s [16:38] anyway, how do I know when CI is unlocked? [16:38] I shall do it as soon as pos [16:39] perrito666: at least a while for it to go through merge though [16:39] I never get if that is automated or someone needs to do something by hand [16:39] the marking is by-hand [16:40] is it the lp search query? [16:40] I'll do that as soon as the build succeeds, rather than waiting for all test jobs though [16:40] yeah, but peoples are the oneses who mark bugs fix released [16:49] review request for the on call reviewer today : http://reviews.vapour.ws/r/307/ [16:50] I did a pre-review just commenting on things that might make the intent clearer. [16:55] mgz: are we there? are we there? [16:58] not yet, needs to go through build revision and publish [17:00] * perrito666 runs in circles [17:05] * perrito666 feels assigned [17:05] natefinch-afk: ericsnow wwitzel3 ? [17:05] standup? [17:08] wwitzel3: standup? === natefinch-afk is now known as natefinch [17:36] perrito666, wwitzel3, ericsnow: sorry guys... 
the halloween party at my kid's preschool ran over [17:37] natefinch: no worries, perrito666 and I talked about restore [17:37] natefinch: ah you have that :p [17:37] I was not aware it was to the level of having school parties [17:37] I thought it was only the disguises + getting candy [17:39] perrito666: this was basically a shortish thing at the end of the day where all the kids dress up and parade around the parkinglot :) [17:40] perrito666: publish-revision running now [17:40] (was slow as eric had sent a change through just before mine, and I didn't really want to interrupt) [17:41] * perrito666 eyes eric [17:41] perrito666: so, eta 20 mins [17:41] heh, how nice it would be to do $$mergeafter##$$ [17:45] I love the way everyone rags on simple streams for putting the "simple" in the name. [17:48] natefinch: I view it as like, refering to someone as simple [17:48] a semi-polite way of saying it's a bit dumb [17:49] lol [17:49] nice [17:52] ericsnow: isnt there a metadata FromJson? [17:53] perrito666: nope [17:53] booo [17:53] :p [17:53] perrito666: I figured if we needed it for restore we could add it then :) [17:57] ericsnow: I dont want to lie to you, at this point I am ranting at my dog for your decission [17:57] perrito666: ┬─┬ ノ( ゜-゜ノ) [18:02] mgz: ? [18:02] looks like it's just finishing [18:03] publish-revision/1104 is what I'm watching [18:14] natefinch: didn't this get superceded: http://reviews.vapour.ws/r/237/ [18:14] natefinch: I.E. you can discard it :) [18:16] mgz: finished \o/ [18:17] itsblue! [18:17] perrito666: go ahead and land [18:17] ericsnow: yeah, thanks for reminding me [18:18] natefinch: it's my job, sir [18:18] * ericsnow shines badge [18:18] ericsnow: its rbt post -v ### right? [18:19] to update [18:19] perrito666: for an update it -r ### (or just use -u) [18:19] ericsnow: rbt post -u? [18:19] perrito666: yeah, it will figure out the right one and ask you [18:40] fwereade: hey, did you ever got a chance to look at charm-sync spec? [19:34] g'night all [20:04] davecheney, mwhudson: email standup today? [20:05] afghahsgfhagsfh I hate local restriction to have juju-local package [20:06] ericsnow: thanks for looking at that review [20:06] jw4: yeah, still working on it :) [20:06] mwhudson,davecheney,menn0: did I miss standup or are we skipping it today? [20:07] ericsnow: ta x2 [20:07] waigani: we haven't had it. I suggested email standup but davecheney and mwhudson aren't around yet. [20:07] menn0: davecheney just popped up [20:08] menn0: sure [20:08] scratch that, davecheney just dropped out [20:08] okay email [20:14] menn0: you dig up more on that local provider issue you were having? [20:14] wwitzel3: no I haven't but that's one of the things I want to do this morning [20:15] wwitzel3: had a number of other things to take care of yesterday [20:15] wwitzel3: and since it appears to just be my machine it seemed less important [20:20] perrito666: you might as well start writing doc strings now, no need for my review to go through first ;) [20:26] menn0: i'm on leave today [20:27] mwhudson: oh yeah, sorry. have a good day off :) [20:28] menn0: just manually testing upgrade for my branch, then saw in your standup notes about Ian's branch. 
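Regarding the metadata FromJson helper perrito666 asks about at 17:52 (ericsnow notes it doesn't exist yet and could be added when restore needs it), here is a hypothetical sketch of what such a constructor might look like using encoding/json; the Metadata fields below are made up for illustration and are not juju's actual backups metadata type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
	"time"
)

// Metadata is an illustrative stand-in for the backup metadata document.
type Metadata struct {
	ID       string    `json:"id"`
	Started  time.Time `json:"started"`
	Finished time.Time `json:"finished"`
	Checksum string    `json:"checksum"`
	Size     int64     `json:"size"`
}

// NewMetadataFromJSON decodes a single metadata document from r.
func NewMetadataFromJSON(r io.Reader) (*Metadata, error) {
	var meta Metadata
	if err := json.NewDecoder(r).Decode(&meta); err != nil {
		return nil, fmt.Errorf("invalid backup metadata: %v", err)
	}
	return &meta, nil
}

func main() {
	doc := `{"id":"20141030-backup","checksum":"abc123","size":42}`
	meta, err := NewMetadataFromJSON(strings.NewReader(doc))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", *meta)
}
```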
[20:29] waigani: yeah, there's a tools related upgrade step that keeps failing [20:29] natefinch: ok ok [20:29] menn0: okay, so we'll wait for the fix [20:29] waigani: i'm checking now to see if it's been fixed [20:40] natefinch: you exagerate, its not that bad [20:41] perrito666: it's not that good if I stop writing comments that are just "docs" [20:42] (because there's too many) [20:42] wwitzel3: I think I see something that could be related to this rsyslog problem [20:43] wwitzel3: can you do this: cd $GOPATH/src/code.google.com/p/go.crypto ; hg heads [20:43] wwitzel3: what do you get? [20:43] natefinch: that might depend on your patience :p (documenting as we speak) [20:44] wwitzel3: for me, I'm not on the version listed in dependencies.tsv. which could be the problem. godeps issue? [20:44] wwitzel3: i'll manually switch to the version in deps.tsv and see if that resolves the issue [20:54] wwitzel3: never mind. it turns out I don't understand mercurial well enough. the right command to see the checked out rev is "hg id" and that shows the version matching deps.tsv [20:54] wwitzel3: i'll keep digging [20:57] waigani: I just did a successful 1.20 to current master upgrade and I can see Ian's fix landed yesterday so you should be good to try your manual test again [20:57] menn0: awesome, thanks for checking that [21:09] perrito666, wwitzel3, katco: sorry, I've been in bed most of the day [21:09] perrito666, I didn't :( [21:10] perrito666, mail me with a link (again I presume...) [21:10] fwereade: feeling better? [21:10] fwereade: you are lucky that my backlog is long :p or that would make absolutely no sense [21:10] perrito666, I'm not planning to do lots of work right *now* [21:11] perrito666, but, yeah, hopefully will be with it again tomorrow [21:22] fwereade: no worries, hope you feel better :) [21:25] wwitzel3, cheers [21:25] fwereade: for whenever you return, we might have broken your uniter context remaining commits [21:25] wwitzel3, what can I do for you? [21:25] perrito666, I had a super-quick look back just now, yeah [21:25] sorry [21:25] perrito666, looks like I broke things pretty hard myself [21:26] perrito666, whatever needed to be done, needed to be done [21:26] perrito666, I will try to catch up [21:26] fwereade: no worries at all. hope you're feeling better [21:28] waigani: here's that EnvironUUID branch. Can you have a look please? http://reviews.vapour.ws/r/308/ [21:29] fwereade: well after rebasing master I broke, I was before doing relationer.Context() and then calling UnitNames() on that context. [21:29] katco, yeah, will try to sleep better tonight :) [21:29] fwereade: that is for the remoteUnit inference method [21:29] wwitzel3, ah, sorry [21:29] menn0: will do [21:30] fwereade: i have some material for you to review which might help put you to sleep :) [21:30] wwitzel3, I think you can just use .ContextInfo().MemberNames [21:30] fwereade: thanks for the recommendations :) [21:30] fwereade: I see that I need to now get the RelationContext to get UnitNames, so was wondering if I should ... oh ok [21:31] wwitzel3, that's what will be called in the factory to determine membership [21:31] fwereade: I will give that a shot right now, thanks [21:31] wwitzel3, or since you're in-package, .state.Members(), but that's a map [21:35] menn0: you have a minute for a chat? [21:35] menn0: done [21:36] waigani: thanks [21:36] wallyworld_: sure [21:36] https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand [21:50] katco, ping [21:50] alexisb: howdy [21:50] howdy! 
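Back on the uniter exchange at 21:25-21:31: wwitzel3's remoteUnit inference only needs the relation's member names (fwereade points him at ContextInfo().MemberNames, or the in-package Members map). A toy sketch of the inference rule, under the assumption that it should only succeed when the choice is unambiguous; the function and its behaviour are guesses for illustration, not the actual uniter code:

```go
package main

import (
	"errors"
	"fmt"
)

// inferRemoteUnit returns the single member of the relation when the caller
// didn't name one explicitly, and complains when the choice is ambiguous.
func inferRemoteUnit(memberNames []string, explicit string) (string, error) {
	if explicit != "" {
		return explicit, nil
	}
	switch len(memberNames) {
	case 0:
		return "", errors.New("no remote units in relation; specify one explicitly")
	case 1:
		return memberNames[0], nil
	default:
		return "", fmt.Errorf("ambiguous: %d remote units, specify one explicitly", len(memberNames))
	}
}

func main() {
	unit, err := inferRemoteUnit([]string{"wordpress/0"}, "")
	fmt.Println(unit, err)
}
```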
[21:50] :) [21:50] what can i do for you? [21:51] I am working on milestone breakdowns, was wondering where you were with the LE work you and fwereade broke down in brussels [21:51] alexisb: working on that right now actually. i'm wrapping up the 1st draft of the spec [21:52] alexisb: then wallyworld_ will give it a once-over to make sure i'm not wasting fwereade's time, and we'll propose it for feedback [21:52] katco, ah ok, so official work has not begun yet [21:52] alexisb: well, i consider the architecture part of the work ;) [21:53] lol [21:53] very true [21:53] but that answers my q I will make sure to account for it in the roadmap [21:53] ericsnow: great comments; I'll respond to some of the points and update the PR. Thanks [21:53] thanks! [21:53] jw4: cool [21:53] alexisb: sure thing! good talking with you again [21:53] dido [21:55] menn0: bug 1387884 raised [21:55] Bug #1387884: juju upgrade from 1.20 to 1.21alpha3 is broken [21:55] wallyworld_: thanks [21:56] waigani: i've just been chatting to wallyworld_. part of the recent upgrade problems is due to the splitting out of upgrade steps by alpha releases [21:56] menn0: oh really> [21:56] ? [21:56] katco, had a brief chat with wallyworld_ about it a day or two ago, I presume he passed on what needed to be? [21:56] waigani: it means when an upgrade from 1.20 to 1.21 alpha3 happens, the steps are really being run in the wrong order [21:56] fwereade: indeed he did, thank you [21:56] waigani: going to merge them back into one list for 1.21 now [21:56] katco, cool -- broadly sane in your estimation? [21:56] * wallyworld_ afk for a bit to have breakfast etc [21:57] * katco has a good... "not manager but manages people" [21:57] menn0: I don't follow. why the wrong order? [21:57] fwereade: yes absolutely [21:57] waigani: if you upgrade from 1.20 to 1.21alpha3 the steps for alpha1 are run, then alpha2, then alpha3 [21:58] katco, excellent, so long as it really is [21:58] waigani: the steps for alpha3 have the machine and instancedata upgrade [21:58] katco, it's your job to tell me when I'm being crazy [21:58] katco, everyone else's too ofc [21:58] fwereade: do i constrain my comments to professional work? [21:58] waigani: which means the steps for alpha1 and alpha2 have trouble because they sometimes need to be able to look up machines [21:58] waigani: because the running software is actually alpha3 [21:58] fwereade: ;) [21:58] waigani: the steps shouldn't be split out by alpha version [21:58] katco, I'm grinning broadly but not yet able to formulate a coherent response [21:59] menn0: got it [21:59] waigani: I'm fixing now. [21:59] waigani: alpha3 is being held up until this is sorted (should be easy though) [21:59] fwereade: haha ok good the joke made it across OK. that pause is always nerve wracking :p === kadams54_ is now known as kadams54 [22:00] menn0: I've hit a really interesting bug, in the process of hunting it down. A test failed once I merged the machines branch with the sequences branch. I'll touch base with you on it later [22:02] waigani: ok cool [22:12] ok EOD, have a nice one people [22:14] wallyworld_: if we have upgrade steps defined for 1.21 (final) they currently won't run when an upgrade to a 1.21 alpha or beta release happens [22:14] wallyworld_: I can tweak the upgrade steps logic to ignore the release tag (i.e. 
alpha/beta) [22:14] wallyworld_: or we can define the upgrade steps against alpha1 [22:14] wallyworld_: I prefer the former [22:15] wallyworld_: but it does reduce our flexibility a little [22:15] wallyworld_: what do you think? [22:20] menn0: my vision was always for the release tag to be ignored, thus we would be testing upgrade steps along the way as if it were final release, so the former [22:20] wallyworld_: ok will do [22:20] we hadn't really got to the point of needed to do this till now [22:20] so it upgrades were sort of done initilly as a mvp [22:20] with improvements as needed [22:21] it should be easy to fix [22:21] and it's just the current crop of upgrade steps [22:21] that are exposing problems [22:21] this is also fallout from the release number change [22:23] yeah === kadams54 is now known as kadams54-away [22:30] wtf [22:32] katco: I love the tabular status output btw. I use it all the time. [22:37] menn0: so glad to hear it! i love it myself :) [22:38] katco: I like the oneliner, thanks :) [22:39] waigani: np :) [22:39] really i just coded it all up. not my ideas... thank you to ecosystems/tanzanite for guidance [22:40] menn0: so a sequence doc is being entered into the db with an old id. I can't track down where from (yet) [22:40] waigani: sounds fun :) [22:40] #$@$ [22:42] wallyworld_: I gave you a ship-it on #302 but you'll need someone else to follow up. :) [22:42] ericsnow: thank you, really appreciate it; the streams stuff can be hairy [22:43] wallyworld_: is it hairy because you pulled your hair out while writing it [22:44] ericsnow: yes, yes i did. and now you've done a review, you are an expert and fair game to write the next simplestreams patch :-P [22:44] wallyworld_: sorry, you're breaking up, I didn't catch that, going into a tunnel now [22:44] lol [22:45] menn0, wallyworld_ davecheney was one of you designated lead in thumper's absence? [22:45] oops [22:46] alexisb: i don't believe so [22:46] not me [22:46] menn0, ack [22:46] sorry wallyworld_ I meant waigani [22:46] ah yeah :-) [22:47] davecheney, we have had a request come in that requires some arm skills, I am going to send you and mwhudson a note [22:47] davecheney, please work with me on priorities compared with other tasks [22:48] wallyworld_: I have a branch to sort out the 1.21 upgrades [22:48] menn0: awesome, i'll review. did you test live? [22:48] wallyworld_: all the steps are currently assigned against 1.21-alpha1 [22:48] wallyworld_: yep, tested an upgrade of a complex env from 1.20 [22:48] not 1.21? [22:48] wallyworld_: that will come in a later change today [22:49] +1 [22:49] wallyworld_: but this change solves the immediate problem [22:49] first priority is to unblock the release [22:49] yep [22:49] wallyworld_: pushing now [22:49] tyvm [22:49] wallyworld_: nothing like impending family commitment to focus your efforts :) [22:50] :-) [22:51] wallyworld_: http://reviews.vapour.ws/r/309/ [22:51] looking [22:57] menn0: +1, ship that sucker [22:57] wallyworld_: thanks. merging now. [22:58] no, thank you [23:25] ericsnow: thnx for the review :-) i agree readibility is important [23:25] anastasiamac: glad to help :) [23:29] waigani, talk to jw4 re sequence documents [23:29] ericsnow: i also agree to not changing code unrelated to my patch... 
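The change menn0 and wallyworld_ agree on above -- run the 1.21 upgrade steps when upgrading to a 1.21 alpha or beta by ignoring the release tag when matching versions -- could look roughly like this; the version struct below is a simplification made for illustration, not juju's actual version.Number:

```go
package main

import "fmt"

// number is a simplified stand-in for a juju-style version number.
type number struct {
	Major, Minor, Patch int
	Tag                 string // "alpha", "beta", or "" for a final release
}

// ignoringTag returns the version with any alpha/beta tag stripped.
func (n number) ignoringTag() number {
	n.Tag = ""
	return n
}

// stepApplies reports whether upgrade steps registered for target should
// run when upgrading to running, ignoring release tags on both sides.
func stepApplies(target, running number) bool {
	return target.ignoringTag() == running.ignoringTag()
}

func main() {
	target := number{Major: 1, Minor: 21}                // steps registered for 1.21
	running := number{Major: 1, Minor: 21, Tag: "alpha"} // a 1.21 alpha agent (iteration elided)
	fmt.Println(stepApplies(target, running))            // prints true
}
```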
[23:29] * fwereade really should sleep now [23:29] fwereade: sorted it, thanks [23:30] anastasiamac: no worries, sometimes it's okay to slip a few little things like that in [23:30] anastasiamac, ericsnow: it's a hard balance, but I will not complain about driveby readability fixes that don't change behaviour [23:30] fwereade: +1 === Guest89108 is now known as bodie_ [23:30] anastasiamac, ericsnow: and if you spot such things, and separate them out into separate PRs, I will be positively delighted [23:31] fwereade: thnx :-) will aim to delight u positively :-) [23:31] anastasiamac, ericsnow: because apart from anything else, separating them out makes it possible to do them that little bit better [23:31] fwereade: (waigani) fwiw, we're using sequence() far less widespread than I thought, and I think we can fold it into the same transaction that we use it in (need to verify though) [23:32] jw4, waigani: my main concern there is that sequence() output ends up user-visible [23:33] fwereade: I see. If it's within the same transaction that seems acceptable [23:33] jw4, waigani: and if we're returning values that depend on sequence() result -- ie "what unit we just created" -- we don't want to unnecessarily burn sequence numbers [23:34] jw4, waigani: actually, dammit, it is more complex [23:34] waigani, you said it was sorted? expand super-quickly please? [23:34] jw4, the issue there is [23:35] jw4, that if we return "someservice/123" from a sequence() call outside a txn [23:35] jw4, we can be sure we really did create someservice/123 [23:36] jw4, if we're doing a sequence call inside a txn [23:36] jw4, we either lose visibility on what we actually created [23:36] jw4, or we end up adding an effective txn-revno assert on the sequence doc [23:37] jw4, thus screwing parallelisability [23:37] jw4, am I making sense? [23:37] fwereade: yes... [23:37] fwereade: but... [23:38] perrito666: did you get the msg i just posted? my connection may have dropped? [23:38] fwereade: I'm thinking we might be able to guess at the next sequence number, set the id's for both the sequence and the usage of it in a multi-try txn loop, and then assert in the transaction that the number is what we predicted [23:39] jw4, so we can probably guess pretty well, that's not the issue [23:39] if the assert doesn't pass it fails the txn loop just like any other contention [23:39] no, I mean guess *before* we use it [23:39] jw4, the issue is that the *existence* of the assert enforces txn serialization [23:40] jw4, thus preventing parallel uses of the same sequence doc [23:40] fwereade: hmm - since all our multi doc transactions are two phase commit's that's what would happen anyway right? [23:40] jw4, well, that's the advantage of assigning sequence numbers outside the txn [23:40] jw4, there may well be [23:41] fwereade: if we want to ensure ordered sequence numbers we give up parallelism (or is it concurrency? i forget) [23:41] jw4, -- in the add-unit case, there certainly are -- [23:41] jw4, other considerations that prevent concurrent db ops [23:42] fwereade: brb - keep going [23:42] wallyworld: I did not [23:42] jw4, but in the general case we need to choose between knowing what sequence number we picked, and allowing parallel ops against the same sequence [23:42] perrito666: hi, bug 1320543 [23:42] did you mean to assign that to yourself? 
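As background for the sequence() thread fwereade and jw4 are working through here (it continues below): allocating the number *outside* the transaction, as the existing helper does, means callers know exactly which number they got, at the cost of burning a number if the surrounding operation later aborts. A rough sketch of that out-of-txn pattern using mgo's findAndModify; collection and field names are illustrative, and the example assumes a local mongod for the demo call in main:

```go
package main

import (
	"fmt"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

type sequenceDoc struct {
	Name    string `bson:"_id"`
	Counter int    `bson:"counter"`
}

// nextSequence atomically increments and returns the previous value of the
// named counter, creating it at zero on first use. No mgo/txn transaction
// is involved, so concurrent callers never contend on an assert.
func nextSequence(sequences *mgo.Collection, name string) (int, error) {
	var doc sequenceDoc
	_, err := sequences.Find(bson.D{{"_id", name}}).Apply(mgo.Change{
		Update:    bson.M{"$inc": bson.M{"counter": 1}},
		Upsert:    true,
		ReturnNew: false, // hand back the pre-increment value
	}, &doc)
	if err != nil {
		return -1, fmt.Errorf("cannot increment %q sequence: %v", name, err)
	}
	return doc.Counter, nil
}

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		panic(err)
	}
	defer session.Close()
	n, err := nextSequence(session.DB("demo").C("sequence"), "myservice")
	fmt.Println(n, err)
}
```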
[23:42] anastasia is working on it already, but it hadn't been assigned to her (sorry) [23:42] Bug #1320543: debug-log uses internal names for filtering [23:43] wallyworld: I did but did not yet work on it so be my guest re-assigning it [23:43] already done :-) [23:43] jw4, wait, actually, I'm not even sure it's a choice [23:43] I just started reading code and was suddenly redirected to fix one of my PRs [23:43] wallyworld: thnx :-) [23:43] jw4, actually that's an overstatement [23:43] jw4, so [23:44] jw4, picking the sequence inside the txn *requires* that we sacrifice parallelism, because we have to be able to assert the state of the sequence doc [23:44] jw4, in some situations that's acceptable [23:44] jw4, ie actions, I think, but push back if you're not 100% convinced [23:46] jw4, it's *coincidentally* acceptable even in the add-unit case [23:46] jw4, because the write to the service doc serialises the txns anyway [23:46] jw4, and unless we get *really* clever that's unavoidable, because refcount [23:47] jw4, even if it's not on the service doc [23:47] jw4, I think I need a refresher on where we're using sequence() :) [23:47] fwereade: good news [23:48] fwereade: not very many places [23:48] jw4, brb myself, keep talking yourself :) [23:49] fwereade: grep -r 'sequence(' yields 3 usages: relations, actions, machines [23:49] fwereade: now, maybe other collections are using the same mechanism not with the state/sequence.go helper func [23:50] need to dig on that one [23:50] * jw4 is juggling bread in the oven - brb [23:50] (figuratively)
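And the in-txn alternative jw4 floats at 23:38 (guess the next number, then assert it inside the transaction) could look roughly like the sketch below. The assert on the counter is exactly what fwereade means by enforced serialisation: two concurrent callers that read the same counter build ops asserting the same value, only one transaction can commit, and the other goes around the usual retry loop. Collection and field names are again illustrative:

```go
package main

import (
	"fmt"

	"gopkg.in/mgo.v2/bson"
	"gopkg.in/mgo.v2/txn"
)

// sequenceOps claims sequence number next for name inside a transaction by
// asserting the counter has not moved since we read it.
func sequenceOps(name string, next int) []txn.Op {
	return []txn.Op{{
		C:      "sequence",
		Id:     name,
		Assert: bson.D{{"counter", next}},
		Update: bson.M{"$inc": bson.M{"counter": 1}},
	}}
}

// addUnitOps shows the ops a caller might bundle with the sequence claim;
// if another transaction bumps the counter first, the assert fails and the
// whole operation is retried with a fresh read of the counter.
func addUnitOps(serviceName string, next int) []txn.Op {
	unitName := fmt.Sprintf("%s/%d", serviceName, next)
	return append(sequenceOps(serviceName, next), txn.Op{
		C:      "units",
		Id:     unitName,
		Assert: txn.DocMissing,
		Insert: bson.M{"_id": unitName},
	})
}

func main() {
	fmt.Printf("%+v\n", addUnitOps("someservice", 123))
}
```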