[00:02] wallyworld_: what is the watcher I look for, for lxc containers on a machine?
[00:02] thumper: machine.WatchContainers()
[00:02] wallyworld_: from state.Machine?
[00:03] where machine is a machine object
[00:03] machine is a state.Machine
[00:03] * thumper nods
[00:04] wallyworld_: umm... no
[00:04] let me check, my memory has failed me perhaps
[00:04] no method there
[00:05] // WatchContainers returns a watcher that notifies of changes to the lifecycle of containers on a machine.
[00:05] func (m *Machine) WatchContainers(ctype ContainerType) *LifecycleWatcher {
[00:05] in state/watchers.go
[00:05] maybe that branch hasn't landed yet, let me check
[00:07] thumper: no, it should be in trunk
[00:07] wallyworld_: which file?
[00:07] not in machine.go
[00:07] ^^^^^^^^^^
[00:07] state/watchers.go
[00:07] if your IDE did method name completion.....
[00:07] WHY?
[00:08] because that's where all the other watchers are defined
[00:08] I really strongly dislike splitting functions for types between different files
[00:08] it's dumb!
[00:08] i just followed convention
[00:09] blame the previous guy :-)
[00:09] anyways, a good IDE makes it a moot point :-P
[00:13] marcoceppi: ping
[00:13] davecheney: o/
[00:13] which method of communication would you like to attempt ?
[00:15] How well does G+ work for you?
[00:15] wfm, two secs
[00:15] ARGH
[00:16] there are two marcoceppis who both work for canonical
[00:16] and BOTH are well dressed !?!
[00:16] davecheney: haha, you can hit either one of them
[00:16] The one with the canonical bullseye is my "work" account
[00:16] i'm going to choose the one with the best moustache
[00:18] gameshow music ... dialing
[00:57] aw, wtf, how is that the time
[00:58] thumper, wallyworld_: despite my shameful failure to review any of your code today, I have one for you: https://codereview.appspot.com/10420045
[00:58] * wallyworld_ looks
[00:58] just finishing another review firat
[00:58] huh
[00:58] first
[00:58] * thumper needs beer
[00:59] or a wine
[00:59] perhaps something italian
[00:59] whine?
[00:59] I'm getting frustrated at some of our tests
[00:59] thumper, wallyworld_: it is not the nicest of branches because it involves getting up to the elbows in state
[00:59] :-(
[00:59] as soon as you add something that depends on what would be expected in a normal environment
[00:59] shit breaks
[01:00] thumper, wallyworld_: but code gets removed, tests get bigger, comments get more detailed, and buried in there are a couple of crucial lines that actually change behaviour
[01:00] yep, that's Go for you
[01:00] thumper, wallyworld_: and all of those are, I believe, reflected explicitly in new test cases
[01:00] fwereade_: cool. sounds good
[01:09] (oh, there are also 4 tests that fail elsewhere that I just saw, but it's obvious why in each case by inspection
[01:09] I'll fix them tomorrow)
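A minimal sketch of consuming the watcher quoted at [00:05], assuming the LifecycleWatcher exposes the Changes/Stop/Err methods that juju's other lifecycle watchers share (Changes() delivering batches of container machine ids) and that state.LXC is the relevant ContainerType constant; handleContainer is a hypothetical callback, not the provisioner's real loop:

    // Assumes: import "launchpad.net/juju-core/state" (the import path at the time).
    func watchLXCContainers(m *state.Machine, handleContainer func(id string)) error {
        w := m.WatchContainers(state.LXC)
        defer w.Stop()
        // Each event on Changes() is a batch of container machine ids whose
        // lifecycle changed; the channel closing means the watcher has died.
        for ids := range w.Changes() {
            for _, id := range ids {
                handleContainer(id)
            }
        }
        return w.Err() // report why the watcher stopped
    }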
[01:47] Can we use Go 1.1 features yet?
[02:00] bigjools: no
[02:00] bigjools: see jam's email about it on juju-dev
[02:01] * bigjools doesn't pay much attention to that list
[02:03] I guess I had better familiarise myself with the new features as the online Go docs refer to 1.1 now
[02:42] wallyworld_: http://paste.ubuntu.com/5785676/
[02:42] wallyworld_: less buggering around with this to get them going
[02:42] \o/
[02:42] wallyworld_: although there is an 'orrible hack
[02:43] time.Sleep(1 * time.Second) :)
[02:43] yuk
[02:43] wallyworld_: as the lxc provisioner tries to get the tools from state before the machiner has set them
[02:43] and the jujud process doesn't yet restart subtasks properly
[02:44] sounds like we need a channel or something
[02:44] it will do when it integrates the worker runner code
[02:44] not really
[02:44] it should just die
[02:44] and get restarted
[02:44] don't add dependencies
[02:44] instead of waiting on an event from the host saying "ready now" or something?
[02:45] I don't want to add extra dependencies right now
[02:46] I wonder why instance-state says "missing" on status
[02:46] what extra dependencies? adding a channel isn't a dependency except on the std lib
[02:46] but it is between goroutines
[02:46] that shouldn't be tied together
[02:46] trust me on this one
[02:47] ok
[02:47] with units: http://paste.ubuntu.com/5785689/
[02:48] don't know about why instance state is missing
[02:48] the code says it is because ec2 couldn't find that instance id
[02:49] not ec2
[02:49] well, duh, obviously
[02:49] our environment
[02:51] wallyworld_: what are you doing again?
[02:51] I'm about to write the end of week email
[02:52] apart from reviews
[02:52] and bug triage
[02:52] OCR today. but main work is to get into state the instance metadata. there's a rework of the data model so it's become complicated
[02:52] 3 failing tests - gotta fix the mega watcher
[02:53] next step - machine characteristics in status
[02:54] i'll also need to look at some other critical bugs thrown my way and the simple streams metadata
[03:02] hey guys, if I want to include a data file as part of my source, and have it required at runtime, what's the best way of doing this?
[03:05] what sort of file?
[03:06] binary
[03:06] it's a trailer for a VHD
[03:06] you want the code to write it out somewhere when it runs?
[03:06] no, it needs to read it and then we send it to Azure
[03:07] (as part of other stuff)
[03:07] i wonder if it should be packaged in the tools tarball?
[03:07] it's a jigsaw piece of a bigger file if you like
[03:07] how big is it?
[03:08] I need it to live in the provider library though, not juju core
[03:08] 512 bytes
[03:08] I'm wondering if there's a way of compiling the data in somehow
[03:08] i'd maybe uuencode it
[03:08] or base64
[03:08] yeah
[03:08] hmmm :)
[03:09] and write it out
[03:09] you're not just an ugly face
[03:09] well, with enough beer perhaps
[03:09] it would need a lot
[03:09] sure, but you'd need more
[03:12] bigjools: you can use "godoc" to run the docs locally, I believe. Which should match the version of go you are using.
[03:13] jam: hey there. ah yes I keep getting confused with godoc
[03:14] bigjools: for a 512-byte content, I would just base64 encode it in source. I think there is a way to get to argv[0] (not sure what it is in go), but there won't be a directory of files like python.
[03:15] yeah base64 does the job nicely
[03:15] 6+ years since I used a compiled language... it's slowly coming back
[03:17] bigjools: for the test suite, there is stuff like "find the source directory", but that doesn't work as well in final binaries :)
[03:17] yeah
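The base64-in-source approach the channel settles on might look like this minimal sketch; the package name, variable names, and the blob itself are placeholders (it is not the real 512-byte VHD trailer):

    package azure // hypothetical home in the provider library

    import "encoding/base64"

    // vhdFooterBase64 holds the 512-byte VHD trailer, base64-encoded so the
    // binary blob can live in ordinary Go source. Placeholder content only.
    const vhdFooterBase64 = "cGxhY2Vob2xkZXI="

    // vhdFooter is decoded once at package init; a corrupt constant is a
    // programming error, so panicking is the right response.
    var vhdFooter = mustDecode(vhdFooterBase64)

    func mustDecode(s string) []byte {
        b, err := base64.StdEncoding.DecodeString(s)
        if err != nil {
            panic(err)
        }
        return b
    }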
[03:48] wallyworld_: your watcher returned the same id twice
[03:48] wallyworld_: when I removed it
[03:48] causing the provisioner to crash :)
[03:48] \o/
[03:49] when it tried to stop the container the second time
[03:49] i just use a lifecycle watcher
[03:49] i'll have to see what's going on with that
[03:52] wallyworld_: it is possible that it is me, I can add more tracing to check, but not today
[03:52] finishing early due to insane meetings last night
[03:52] sure. i'm also knee deep in this other branch
[03:52] hagw
=== tasdomas_afk is now known as tasdomas
[07:04] morning
[07:58] thumper: ping
[07:58] TheMue: hiya
[09:16] wallyworld_: any chance of a review of https://codereview.appspot.com/10364046/ ?
[09:16] wallyworld_: (you reviewed the branch that had it as a prereq)
[09:16] ok, but i'm onto my 3rd beer for the evening :-)
[09:17] might improve the quality of my reviews :-)
[09:18] wallyworld_: thanks for the other review, BTW - the "password" oops was a good catch!
[09:18] now worries :-)
[09:18] no
[09:21] rogpeppe2: done. btw, i got the watcher stuff sorted out thanks to your advice where to look
[09:22] wallyworld_: cool, nice one.
[09:22] wallyworld_: thanks
[09:29] fwereade_: there are 2 untriaged bugs for juju-core which i didn't really feel i could accurately process today. ie they seem important to me but ymmv. could you possibly take a look sometime?
[09:30] wallyworld_, I hope so, I'm just trying to catch up on reviews this morning
[09:30] wallyworld_, thanks for your review last night
[09:30] no hurry as such
[09:30] np, i liked your branch
=== danilos_ is now known as danilos
=== ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: wallyworld | Bugs: 7 Critical, 74 High - https://bugs.launchpad.net/juju-core/
[11:10] fwereade_: would appreciate a review of https://codereview.appspot.com/10449044 - another prereq for getting the agent connected to the API
[11:15] rogpeppe2, looking
[11:19] rogpeppe2, lovely, LGTM
[11:19] fwereade_: thanks!
[11:20] rogpeppe2, if you get a mo, I'd appreciate your thoughts on https://codereview.appspot.com/10420045/
[11:23] fwereade_: looking
[11:24] anyone interested in reviewing https://codereview.appspot.com/10441044 (config-get output)
[11:24] ?
[11:25] fwereade_: btw, had a chance to take a look on the ec2 http reader moving?
[11:25] TheMue, sorry, it's *right* at the bottom of the page
[11:26] TheMue, I'm on the 3rd last though
[11:26] fwereade_: ok, no prob, i've got more in the queue so it can wait ;)
[11:26] TheMue, cool
[11:27] TheMue, I proposed something last night that'll fix a noticeable bug when the cleaner integration lands :)
[11:32] fwereade_: oh, will take a look (when I return from lunch)
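On the duplicate id reported at [03:48]: if watcher events can repeat — as they appeared to there — the consumer has to treat changes idempotently. A hedged sketch of the defensive pattern (stopContainer and the ids batch are illustrative, not the provisioner's actual code):

    // Track containers already stopped so a repeated watcher event
    // becomes a no-op instead of a second Stop that crashes the worker.
    stopped := make(map[string]bool)
    for _, id := range ids {
        if stopped[id] {
            continue
        }
        if err := stopContainer(id); err != nil {
            return err
        }
        stopped[id] = true
    }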
[11:32] fwereade_: so... just trying to get this straight in my head
[11:33] * fwereade_ braces himself for helpfully difficult questions ;)
[11:33] fwereade_: your CL changes things so that a service is only removed when the cleaner gets around to removing its units?
[11:33] fwereade_: i'm trying to understand the role of the cleaner here
[11:33] rogpeppe2, not exactly
[11:34] rogpeppe2, it just runs unit.Destroy for each unit sooner than the unit agents themselves will (potentially, anyway -- and usefully, in cases where there's no agent in play yet)
[11:35] rogpeppe2, the fundamental problem is that the UA was solely responsible for its lifecycle advancement past Dying once it was on a provisioned machine
[11:36] rogpeppe2, but there's really quite a long delay between a machine being provisioned and any assigned unit agents actually starting to run
[11:36] fwereade_: indeed there is
[11:37] fwereade_: so... why can't the initial operation run unit.Destroy when appropriate?
[11:38] rogpeppe2, checking the analysis captured in unit.Destroy is the big one, if that's crack then so is the whole idea
[11:38] rogpeppe2, the idea was that we didn't want to tie the client up destroying everything one by one
[11:39] rogpeppe2, and I think that's still reasonable
[11:39] fwereade_: ah, seems like a good plan
[11:39] fwereade_: i'd have asked the other way if you'd done it like that
[11:39] rogpeppe2, but I don't really mind tying up the cleaner worker, that's its job
[11:39] ;p
[11:39] fwereade_: i think this is the right approach, but i just wanted to get it all straight in my head
[11:40] fwereade_: does the cleaner worker execute everything sequentially?
[11:41] rogpeppe2, yeah
[11:41] fwereade_: if not, i wonder if it might be good to make it execute some operations concurrently (assuming that gives some speed up) in the future
[11:41] rogpeppe2, sounds reasonable, yeah
[11:41] fwereade_: i can imagine that it might be really slow to destroy large services
[11:42] rogpeppe2, indeed, but don't forget the unit agents are still busily killing themselves where possible on service destroy
[11:42] * fwereade_ has food on the table
[11:43] fwereade_: that's true, but if you've accidentally typed an extra 0 onto --num-units and are trying to remove the service ASAP, it might be an issue
[11:43] fwereade_: enjoy!
[11:58] rogpeppe2, yeah, point taken, this is the simplest possible v1 of what will otherwise become a disturbingly big change
[11:58] fwereade_: oh yes, i wasn't suggesting we do it now
[11:58] fwereade_: or even in the near future
[11:59] fwereade_: just that it's something to bear in mind
[11:59] rogpeppe2, however, any that do manage to short-circuit will have to be executed serially anyway, because they all use the service doc
[12:00] fwereade_: not if there are two (or more) services being destroyed, i guess
[12:00] rogpeppe2, very true
[12:00] rogpeppe2, concurrent handling of actual cleanup docs themselves would be nice
[12:01] TheMue, LGTM
[12:02] fwereade_: yeah - otherwise this might tie up the cleaner for a long time when actually all the units are deployed and happy to remove themselves, stopping another service from being cleaned up.
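The concurrency idea floated at [11:40]-[12:00] could be sketched like this, assuming the standard sync and log packages; cleanupDoc and runCleanup are hypothetical names, and the sketch ignores the caveat above that short-circuit destroys contend on the same service doc:

    // Run each pending cleanup in its own goroutine and wait for all of
    // them, instead of the current one-at-a-time loop.
    func runCleanups(docs []cleanupDoc) {
        var wg sync.WaitGroup
        for _, doc := range docs {
            wg.Add(1)
            go func(doc cleanupDoc) {
                defer wg.Done()
                if err := runCleanup(doc); err != nil {
                    log.Printf("cleanup %v failed: %v", doc, err)
                }
            }(doc) // pass doc explicitly: goroutines must not share the loop variable
        }
        wg.Wait()
    }

As [11:59] notes, cleanups touching the same service doc would still serialize on the transaction, so the win is mainly across services being destroyed at once.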
[12:03] fwereade_: reviewed
[12:05] rogpeppe2, thanks, +1 on TransactionChecker, very nice
[12:06] fwereade_: cool
[12:06] rogpeppe2, yeah, the cleanup thing is pretty ghetto at this stage, at least it'll hopefully be relatively simple to massage into shape at some point
[12:07] rogpeppe2, but suboptimal cleanup beats no-cleanup-at-all pretty handily :)
[12:07] fwereade_: indeed so :-)
[12:09] TheMue: reviewed
[12:09] fwereade_: i don't think you published your TheMue LGTM
[12:09] TheMue: would appreciate a review of https://codereview.appspot.com/10449044
[12:09] or from anyone else, for that matter
[12:10] rogpeppe2, TheMue: I was thinking of https://codereview.appspot.com/10296046/ -- looks LGTMed to me
[12:10] fwereade_: ah, i thought you were referring to https://codereview.appspot.com/10441044
[12:10] rogpeppe2, ha, missed that one, thanks
[12:17] TheMue: you have another review
[12:18] TheMue, reviewed https://codereview.appspot.com/10441044 as well
[12:19] fwereade_: do you know if live tests have been fixed now?
=== rogpeppe2 is now known as rogpeppe
[12:19] rogpeppe2, I'm afraid I don't
[12:19] fwereade_: :-(
[12:20] fwereade_: i guess i'll try with trunk and see what happens
[12:20] fwereade_: if i'm to land the agent API stuff, i really need to be able to test live
[12:20] rogpeppe, I have a vague recollection of flinging those at you for verification the other day
[12:20] * fwereade_ looks a little shamefaced
[12:21] fwereade_: flinging what?
[12:21] rogpeppe, a couple of bugs related to those
[12:21] rogpeppe, I may be wrong, that bug weekend is a bit of a blur
[12:22] * rogpeppe starts running TestBootstrapAndDeploy
[12:23] fwereade_: you'll probably be happy to know i got the machine agent actually talking to the API live yesterday.
[12:23] rogpeppe, I saw! awesome news :D
[12:23] rogpeppe, you were gone before I saw it so I couldn't fling virtual underwear in appreciation
[12:24] fwereade_: BTW we seem to have reverted to dialling more often than we should (about 3 times a second)
[12:24] rogpeppe, whaa? :(
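For the redial regression flagged at [12:24] (roughly three dials a second), the usual fix is a capped exponential backoff. A generic sketch, not juju's actual dial loop, assuming only the standard net, time, and log packages:

    // dialWithBackoff retries open() with an exponentially growing, capped
    // delay instead of hammering the server several times a second.
    func dialWithBackoff(open func() (net.Conn, error)) (net.Conn, error) {
        delay := 500 * time.Millisecond
        const maxDelay = 30 * time.Second
        for {
            conn, err := open()
            if err == nil {
                return conn, nil
            }
            log.Printf("dial failed: %v; retrying in %v", err, delay)
            time.Sleep(delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
    }

The loop retries forever, which assumes something above it decides when to give up — consistent with the die-and-get-restarted philosophy from [02:44].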
[12:25] I'm trying to figure out how we might take advantage of the upcoming containerization feature... but I need to test density.
[12:25] Do you think it'd be a decent comparison to just bootstrap juju locally and deploy a bunch of nodes via LXC ?
[12:29] fwereade_: ah, looks like live tests do work on trunk, phew
[12:29] rogpeppe, excellent ;p
[12:29] rogpeppe, would you open a bug for the redialing though please? feels like a bad bug to release with
[12:30] rogpeppe, so I think it's critical until we figure it out
[12:30] FunnyLookinHat, what are you looking to explore in particular?
[12:31] How much density I can get with lamp-stack deployments for a PHP + MySQL application
[12:31] fwereade_, i.e. I want to compare between simply having 100 virtual hosts on a single apache w/ a single mysql in separate directories w/ having 100 lxc container'd hosts including mysql
[12:31] rogpeppe: reviewed
[12:31] TheMue: thanks
[12:32] FunnyLookinHat, ok, so you're thinking, say, 1 node with N containers, each container holding one php app and one mysql?
[12:32] rogpeppe, fwereade_: thx for your reviews
[12:32] fwereade_, Yeah, exactly
[12:32] "Entire Stack" in each LXC - with the purpose of serving a single PHP application.
[12:33] I'm hoping that will make backup and restoration a breeze
[12:33] FunnyLookinHat, ok, cool -- this in contrast to 1 machine with N+1 containers, in which N hold a php app and one holds a single unit of a shared mysql service?
[12:33] FunnyLookinHat, yeah, that sounds sensible
[12:34] FunnyLookinHat, I know thumper has been making good progress there
[12:34] fwereade_, Sort of - yes.
[12:34] fwereade_, Yeah I saw his update this morning :)
[12:34] FunnyLookinHat, I have architecture quibbles but I expect they'll all come out in the wash
[12:35] fwereade_, Realistically I'm trying to figure out "how dense" juju will be able to get me, or if I need to figure something else out... i.e. how much overhead is in LXC
[12:35] FunnyLookinHat, so I would *hope* that trunk will be letting you play with that manually within the next week or so
[12:35] fwereade_, but you're essentially saying that I'd be able to get a decent idea just by running LXC within a Compute Instance or something
[12:35] FunnyLookinHat, yep, makes sense, thanks
[12:35] FunnyLookinHat, I think so yeah
[12:36] fwereade_, Ok thanks - much appreciated :)
[12:36] FunnyLookinHat, the juju agents don't take up too many resources last time I checked
[12:36] Yeah - my only concern is running 250 mysql daemons and 250 apache daemons instead of 1 and 1
[12:36] That's a lot of overhead
[12:36] thus the need to test
[12:36] FunnyLookinHat, quite so
[12:46] ha, don't accidentally type "go get -u" - you'll pull --overwrite your current repo. ouch.
[12:51] LOL
[12:59] rogpeppe: my IDE has local history built in so when i did go get -u one time, it was easy to recover :-)
[13:00] wallyworld_: i thought i'd interrupted it in time, but i hadn't. luckily i had another copy of the branch that i was using to test against go1.0.3. i've proposed https://codereview.appspot.com/10455043 to fix the issue.
[13:01] oh cool :-)
=== wedgwood_away is now known as wedgwood
[13:39] mramm: hi, i almost have a fix for bug 1188815 ready to test. do you know what cloud the bug was seen with, and if i can get some credentials to do a live test?
[13:39] <_mup_> Bug #1188815: security group on quantum/grizzly is uuid, goose chokes on it
[13:39] rogpeppe, responded to https://codereview.appspot.com/10420045
[13:40] rogpeppe, I would be interested if you had any thoughts on the Cleanup-after-service-destroy-in-client idea
[13:40] rogpeppe, it's kinda nasty
[13:40] rogpeppe, *but* it means that new tools will do quick-destroys of old services in old environments
[13:41] rogpeppe, but then maybe we don't want to encourage people to keep their 1.10 deployments going, long-term..?
[13:41] wallyworld_: I am not sure, it is whatever the Landscape team is using... Best to ask beret... He's online now.
[13:41] will do
[13:41] ahasenack, can you lend wallyworld_ a hand with that one?
[13:41] Beret: hi, i almost have a fix for bug 1188815 ready to test. do you know what cloud the bug was seen with, and if i can get some credentials to do a live test?
[13:41] <_mup_> Bug #1188815: security group on quantum/grizzly is uuid, goose chokes on it
[13:41] ahasenack, it's probably dpb that found it
[13:42] fwereade_: i'm not sure exactly what idea you're referring to. is it mentioned in the comments? or are you just talking about our discussion earlier?
[13:42] hm?
[13:42] * ahasenack reads
[13:42] rogpeppe, it's at the end of the CL description
[13:43] wallyworld_: it was seen in our internal serverstack deployment, one that uses quantum for networking, not canonistack, which uses nova networking
[13:43] ahasenack: any chance of getting some credentials so i can test the fix against that?
[13:43] fwereade_: ah, i think i'd probably gone into "tl;dr" mode by that stage and had just dived into the code :-)
[13:43] rogpeppe, not surprised
[13:43] wallyworld_: I'll see what I can do, I'm not sure even I have credentials, that thing was just deployed
[13:44] rogpeppe, sorry about that
[13:44] ahasenack: we have test service doubles i can test with, and also regression test against canonistack, but i want to be sure the issue is really fixed for you too
[13:45] wallyworld_: yeah, canonistack will only help in terms of regression testing, since the bug doesn't happen there
[13:45] we need openstack with quantum for this
[13:45] exactly
[13:46] fwereade_: i think it's not worth calling Cleanup
[13:46] rogpeppe, cool
[13:46] rogpeppe, upgrade tools, upgrade environment, get fast destroys
[13:46] fwereade_: precisely
[13:46] ahasenack: see what you can do and drop me an email. it's late friday evening for me now so no rush as such. i've still got a bit of coding to finish on the issue and will try and get that done over the weekend so i can get a fix committed asap for you
[13:46] rogpeppe, gets people using the code we want them to too :)
[13:46] wallyworld_: where can I get a build/branch?
[13:46] wallyworld_: attached to the bug?
[13:47] fwereade_: we have upgrade - people can use it (and they can downgrade too if there are problems)
[13:47] rogpeppe, if an upgrade has problems I fear for its downgrade too tbh
[13:48] ahasenack: i'll commit the fix to goose trunk when it's done. it's still uncommitted on my harddrive
[13:48] wallyworld_: can you push somewhere, or would you like to test it first yourself?
[13:48] fwereade_: hopefully the upgrade stuff doesn't touch much of the stuff that might be problematic for people
[13:49] fwereade_: BTW i worked out why my second API-connecting machine agent wasn't working - the provisioner wasn't calling SetPassword on the new machine. fingers crossed this live test will work fine...
[13:49] ahasenack: you can certainly pull from my lp branch once i push it up (before it is reviewed and committed to trunk). but i have a little coding work to finish first. it's about 80% done
[13:49] * fwereade_ also crosses fingers
[13:50] wallyworld_: ah, ok
[13:50] ahasenack: i thought getting access to the cloud to test might take a little time so thought i'd ask ahead of time to minimise delays
[13:50] wallyworld_: ok
[13:50] i was hoping to have access or something say monday :-)
[13:50] rogpeppe, yeah -- I just worry that there's some subtle incompatibility in the data that gets stored by the current code
[13:51] ahasenack: i only started on the problem after my EOD today since it was marked as critical :-)
[13:51] and i had other stuff to get done first
[13:51] wallyworld_: ok, thanks
[13:53] even if i can't get access to the cloud, i'm pretty confident it will work if you grab the code and test
[13:53] 8:04.529 PASS: live_test.go:0: LiveTests.TestBootstrapAndDeploy 479.767s
[13:53] yay!
[13:53] * fwereade_ cheers, hoots, flings underwear
[14:01] hmm, plus.google.com hates me again today
[14:05] wallyworld_: if you give me a branch and tell me how to pull it, I can test it
[14:06] wallyworld_: I'm running trunk in that cloud already, so it should be easy to update and overlay your stuff
[14:06] dpb1: awesome thanks. i'll have something done within a day. just finishing a bit more coding
[14:09] wallyworld_: ok, I'm in the US Mountain Time, so if you get something finished before I eod, I'll look at it.
[14:10] dpb1: i'm GMT+10 so it's midnight here and i may not stay awake to get it done. i can send you an email over the weekend with the branch details. i'm really hopeful of getting it done real soon now
[14:11] ffs
[14:12] wallyworld_: hehe, ok, goodnight then (or close to it!)
[14:12] sorry guys, didn't contribute much there
[14:12] TheMue, how's the worker integration looking?
[14:12] wallyworld_: given your nick, I was wondering if you were down under.
[14:12] dpb1: good night
[14:13] yep, am in brisbane australia
[14:13] wallyworld_: ah, cool. :)
[14:16] gaah -- has anyone tried to --upload-tools from current trunk? I'm assuming that the problem is that my network appears to be very flaky today, but it'd be nice to have confirmation that it really does work
[14:17] fwereade_: not yet started, currently working on the review feedback for config-get
[14:18] TheMue, ok, it's become critical for me -- shall I try and pick it up?
[14:19] (btw, just managed to --upload-tools -- it's not you it's me)
[14:19] fwereade_: it worked fine for me earlier
[14:19] rogpeppe, cheers
[14:20] fwereade_: ok, feel free to pick it, then i'll do auto sync tools next
[14:23] fwereade_: in your review of the config-get you said it's a good opportunity to change the way of the format comparison
[14:23] fwereade_: could you explain it more?
[14:24] TheMue, asserting on precise bytes is not great, because YAML can produce all sorts of valid representations of the same data
[14:25] TheMue, we would like it if, were we to (say) tweak the formatting details in cmd/out.go, these tests would continue to pass
[14:25] TheMue, so, rather than asserting precise output, unmarshal that output and compare against the actual data you expect
[14:26] TheMue, it's not foolproof, though, the precise numeric types might be unmarshalled differently by different implementations, say -- but I'm less worried about that
[14:26] TheMue, sane?
[14:26] fwereade_: ah, thx, then i had already understood it right
[14:27] fwereade_: yes, so i can continue as i already started. it's the final change before a repropose :)
[14:27] TheMue, sweet
[14:28] TheMue, can I ask you to do a quick proposal before you do auto sync tools please? I think it demands a little bit of discussion
[14:41] fwereade_: a quick proposal? you mean for the way i will do it?
[14:42] TheMue, yeah, I think it deserves just a little thought before we rush in
[14:42] fwereade_: sure, will come back to you first when I've read a bit more about the problem
[14:42] TheMue, just to figure out the drawbacks of whatever we pick and whether they're worth it
[14:42] fwereade_: sounds fine to me, yes
[14:42] i'm getting a compile problem trying to merge: https://code.launchpad.net/~rogpeppe/juju-core/325-machineagent-api-setpassword/+merge/170788/comments/380527
[14:43] it's really weird, because it builds fine locally
[14:43] and i tried pulling trunk and merging into that and it still builds fine
[14:43] mgz: any ideas what the problem might be?
[14:44] particularly annoying because it's preventing me from proposing my agent-connects-to-state branch
[14:45] jam: ^
[14:45] assignment count mismatch, aha
[14:45] what do those lines look like?
[14:46] ...oooOOO( before navigating through the code )
[14:47] TheMue: ha ha, they're still wrong.
[14:47] TheMue: it *built* fine, but tests don't
[14:47] rogpeppe: not off the top of my head...
[14:47] ah, tests fail on the merged code?
[14:49] mgz: yeah. i think i've sorted it though. there must've been a problem with tests failing after merging with trunk. i was sure i'd tested it properly. but there y'go
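The round-trip comparison fwereade_ describes at [14:25] looks roughly like this in a gocheck test — a sketch assuming the gocheck and goyaml packages juju-core used at the time; runConfigGet, the suite name, and the expected map are all illustrative:

    // Assumes: gc "launchpad.net/gocheck" and "launchpad.net/goyaml".
    func (s *ConfigGetSuite) TestOutputFormat(c *gc.C) {
        stdout := runConfigGet(c) // hypothetical helper capturing the command's stdout
        var got map[string]interface{}
        err := goyaml.Unmarshal([]byte(stdout), &got)
        c.Assert(err, gc.IsNil)
        // Compare data, not bytes: any valid YAML rendering of the same
        // values passes, so tweaking cmd/out.go cannot break the test.
        c.Assert(got, gc.DeepEquals, map[string]interface{}{
            "title":       "My Title",
            "skill-level": 42,
        })
    }

The remaining weakness is the one flagged at [14:26]: a different YAML implementation might unmarshal numerics to a different concrete type, which DeepEquals would reject.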
[14:51] right, another 15 minute wait
[15:49] fwereade__: ping
[15:49] rogpeppe, pong
[15:49] rogpeppe, how's it going?
[15:49] fwereade__: i've got a bit of a testing problem
[15:49] rogpeppe, oh yes?
[15:50] fwereade__: i just propose -wip'd my branch (after endless hassles with dependencies)
[15:50] fwereade__: and going through it, i saw a test-related hack i'd forgotten about
[15:51] fwereade__: which is a time.Sleep(2 * time.Second) inside runStop (this is in cmd/jujud)
[15:51] fwereade__: this is when testing the password changing stuff
[15:51] fwereade__: the issue is that we don't have any way of knowing *when* the agent changes the password
[15:52] rogpeppe, hmm... poll the conf file for changes?
[15:52] rogpeppe, except I can't remember whether that happens before or after
[15:54] fwereade__: hmm, i thought of that and wrote it off because i thought there were cases where we don't actually change the conf file
[15:54] rogpeppe, surely we do if we're changing the password?
[15:54] fwereade__: there are several checks in that test
[15:55] fwereade__: but actually, i think the conf file is changed in all of them
[15:55] fwereade__: which probably means it's not an adequate test, ironically
[15:55] rogpeppe, how about a watcher on the state.Unit, waiting for the old password to give an error?
[15:56] fwereade__: too specific - sometimes it doesn't actually change the password even though it changes the conf file
[15:56] fwereade__: i think i'll go with polling the conf file and see if that works ok
[15:57] rogpeppe, if it's tricky to test, I may be hearing the extract-type bells ringing a little
[15:59] fwereade__: you know, actually i think you're right - i'll just test openAPIState in isolation. there's no particular virtue in testing it in situ, as the other tests would fail if it wasn't being called.
[15:59] rogpeppe, cool
[16:00] fwereade__: thanks for the suggestion
=== tasdomas is now known as tasdomas_afk
=== tasdomas_afk is now known as tasdomas
=== tasdomas is now known as tasdomas_afk
[17:11] fwereade__: https://codereview.appspot.com/10259049/
[17:11] fwereade__: (finally!)
[17:13] if anyone's still around, i'd appreciate a review of the above
[17:14] and that's a good place to stop for the week.
[17:14] see y'all monday!
=== wedgwood is now known as wedgwood_away
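For the record, the conf-file polling rogpeppe considered at [15:56] — before settling on testing openAPIState in isolation — would look something like this hedged helper; readConfPassword and the timing constants are hypothetical:

    // waitForPasswordChange polls the agent's conf file until the stored
    // password differs from the old one, failing the test on timeout.
    // It replaces the blind time.Sleep(2 * time.Second) hack.
    func waitForPasswordChange(c *gc.C, confPath, oldPassword string) {
        timeout := time.After(5 * time.Second)
        for {
            select {
            case <-timeout:
                c.Fatalf("agent never rewrote the password in %s", confPath)
            case <-time.After(50 * time.Millisecond):
                if readConfPassword(c, confPath) != oldPassword {
                    return // the agent has written a fresh password
                }
            }
        }
    }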