thumper | wallyworld_: what is the watcher I look for for lxc containers on a machine? | 00:02 |
---|---|---|
wallyworld_ | thumper: machine.WatchContainers() | 00:02 |
thumper | wallyworld_: from state.Machine? | 00:02 |
wallyworld_ | where machine is a machine object | 00:03 |
wallyworld_ | machine is a state.Machine | 00:03 |
* thumper nods | 00:03 | |
thumper | wallyworld_: umm... no | 00:04 |
wallyworld_ | let me check, my memory has failed me perhaps | 00:04 |
thumper | no method there | 00:04 |
wallyworld_ | / WatchContainers returns a watcher that notifies of changes to the lifecycle of containers on a machine. | 00:05 |
wallyworld_ | func (m *Machine) WatchContainers(ctype ContainerType) *LifecycleWatcher { | 00:05 |
wallyworld_ | in state/watchers.go | 00:05 |
wallyworld_ | maybe that branch hasn't landed yet, let me check | 00:05 |
wallyworld_ | thumper: no, it should be in trunk | 00:07 |
thumper | wallyworld_: which file? | 00:07 |
thumper | not in machine.go | 00:07 |
wallyworld_ | ^^^^^^^^^^ | 00:07 |
wallyworld_ | state/watchers.go | 00:07 |
wallyworld_ | if your IDE did method name completion..... | 00:07 |
thumper | WHY? | 00:07 |
wallyworld_ | because that's where all the other watchers are defined | 00:08 |
thumper | I really strongly dislike splitting functions for types between different files | 00:08 |
thumper | it's dumb! | 00:08 |
wallyworld_ | i just followed convention | 00:08 |
wallyworld_ | blame the previous guy :-) | 00:09 |
wallyworld_ | anyways, a good IDE makes it a moot point :-P | 00:09 |
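
The exchange above pins down the entry point: `machine.WatchContainers(ctype)` on a `*state.Machine`, defined in state/watchers.go and returning a `*LifecycleWatcher`. A minimal consumer loop might look like the sketch below; the `Changes`/`Stop`/`Err` shape is assumed by analogy with the other lifecycle watchers, and the stand-in interface is hypothetical rather than juju's real type.

```go
package main

import "fmt"

// lifecycleWatcher is a hypothetical stand-in for juju's *state.LifecycleWatcher;
// the Changes/Stop/Err shape is assumed, not copied from the real API.
type lifecycleWatcher interface {
	Changes() <-chan []string // batches of container ids whose lifecycle changed
	Stop() error
	Err() error
}

// handleContainerChanges is the loop a provisioner-style worker might run
// over the watcher returned by machine.WatchContainers(...).
func handleContainerChanges(w lifecycleWatcher) error {
	defer w.Stop()
	for {
		ids, ok := <-w.Changes()
		if !ok {
			return w.Err() // watcher stopped; surface the reason
		}
		for _, id := range ids {
			fmt.Printf("container %q changed lifecycle\n", id)
		}
	}
}

func main() {} // placeholder so the sketch compiles standalone
```
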
davecheney | marcoceppi: ping | 00:13 |
marcoceppi | davecheney: o/ | 00:13 |
davecheney | which method of communication would you like to attempt ? | 00:13 |
marcoceppi | How well does G+ work for you? | 00:15 |
davecheney | wfm, two secs | 00:15 |
davecheney | ARGH | 00:15 |
davecheney | there are two marcoceppi's who both work for canonical | 00:16 |
davecheney | and BOTH are well dressed !?! | 00:16 |
marcoceppi | davecheney: haha, you can hit either one of them | 00:16 |
marcoceppi | The one with the canonical bullseye is my "work" account | 00:16 |
davecheney | i'm going to choose the one with the best moustache | 00:16 |
davecheney | gameshow music ... dialing | 00:18 |
fwereade_ | aw, wtf, how is that the time | 00:57 |
fwereade_ | thumper, wallyworld_: despite my shameful failure to review any of your code today, I have one for you: https://codereview.appspot.com/10420045 | 00:58 |
* wallyworld_ looks | 00:58 | |
wallyworld_ | just finishing another review firat | 00:58 |
thumper | huh | 00:58 |
wallyworld_ | first | 00:58 |
* thumper needs beer | 00:58 | |
thumper | or a wine | 00:59 |
thumper | perhaps something italian | 00:59 |
wallyworld_ | whine? | 00:59 |
thumper | I'm getting frustrated at some of our tests | 00:59 |
fwereade_ | thumper, wallyworld_: it is not the nicest of branches because it involves getting up to the elbows in state | 00:59 |
wallyworld_ | :-( | 00:59 |
thumper | as soon as you add something that depends on what would be expected in a normal environment | 00:59 |
thumper | shit breaks | 00:59 |
fwereade_ | thumper, wallyworld_: but code gets removed, tests get bigger, comments get more detailed, and buried in there are a couple of crucial lines that actually change behaviour | 01:00 |
wallyworld_ | yep, that's Go for you | 01:00 |
fwereade_ | thumper, wallyworld_: and all of those are, I believe, reflected explicitly in new test cases | 01:00 |
wallyworld_ | fwereade_: cool. sounds good | 01:00 |
fwereade_ | (oh, there are also 4 tests that fail elsewhere that I just saw, but it's obvious why in each case by inspection | 01:09 |
fwereade_ | I'll fix them tomorrow) | 01:09 |
bigjools | Can we use Go 1.1 features yet? | 01:47 |
thumper | bigjools: no | 02:00 |
thumper | bigjools: see jam's email about it on juju-dev | 02:00 |
* bigjools doesn't pay much attention to that list | 02:01 | |
bigjools | I guess I had better familiarise myself with the new features as the online Go docs refer to 1.1 now | 02:03 |
thumper | wallyworld_: http://paste.ubuntu.com/5785676/ | 02:42 |
thumper | wallyworld_: less buggering around with this to get them going | 02:42 |
wallyworld_ | \o/ | 02:42 |
thumper | wallyworld_: although there is an 'orrible hack | 02:42 |
thumper | time.Sleep(1 * time.Second) :) | 02:43 |
wallyworld_ | yuk | 02:43 |
thumper | wallyworld_: as the lxc provisioner tries to get the tools from state before the machiner has set them | 02:43 |
thumper | and the jujud process doesn't yet restart subtasks properly | 02:43 |
wallyworld_ | sounds like we need a channel or something | 02:44 |
thumper | it will do when it integrates the worker runner code | 02:44 |
thumper | not really | 02:44 |
thumper | it should just die | 02:44 |
thumper | and get restarted | 02:44 |
thumper | don't add dependencies | 02:44 |
wallyworld_ | instead of waiting on an event from the host saying "ready now" or something? | 02:44 |
thumper | I don't want to add extra dependencies right now | 02:45 |
thumper | I wonder why instance-state says "missing" on status | 02:46 |
wallyworld_ | what extra dependencies? adding a channel isn't a dependency except on the std lib | 02:46 |
thumper | but it is between go routines | 02:46 |
thumper | that shouldn't be tied together | 02:46 |
thumper | trust me on this one | 02:46 |
wallyworld_ | ok | 02:47 |
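
thumper's point above is the standard worker pattern: rather than coordinating goroutines with ad-hoc channels, a worker that hits a missing precondition just returns an error and lets a runner restart it. A rough sketch of that shape, with entirely hypothetical names (the real worker/runner code in jujud is more involved):

```go
package main

import (
	"errors"
	"log"
	"time"
)

// errToolsNotSet is a hypothetical error standing in for "the machiner has
// not recorded tools in state yet", the condition discussed above.
var errToolsNotSet = errors.New("tools not set yet")

// runProvisioner is a stand-in worker: instead of sleeping until its inputs
// appear, it fails fast when a precondition is missing.
func runProvisioner() error {
	// ... real work would go here; pretend the precondition check failed.
	return errToolsNotSet
}

// supervise restarts a worker whenever it dies, which is the "just die and
// get restarted" approach described above. It runs forever by design.
func supervise(name string, work func() error, retry time.Duration) {
	for {
		if err := work(); err != nil {
			log.Printf("%s stopped: %v; restarting in %s", name, err, retry)
		}
		time.Sleep(retry)
	}
}

func main() {
	supervise("lxc-provisioner", runProvisioner, 3*time.Second)
}
```

The time.Sleep(1 * time.Second) hack in the paste above is the stop-gap this replaces once the worker runner integration lands.
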
thumper | with units: http://paste.ubuntu.com/5785689/ | 02:47 |
wallyworld_ | don't know about why instance state is missing | 02:48 |
wallyworld_ | the code says it is because ec2 couldn't find that instance id | 02:48 |
wallyworld_ | not ec2 | 02:49 |
thumper | well, duh, obviously | 02:49 |
wallyworld_ | our environment | 02:49 |
thumper | wallyworld_: what are you doing again? | 02:51 |
thumper | I'm about to write the end of week email | 02:51 |
thumper | apart from reviews | 02:52 |
thumper | and bug triage | 02:52 |
wallyworld_ | OCR today. but main work is to get the instance metadata into state. there's a rework of the data model so it's become complicated | 02:52 |
wallyworld_ | 3 failing tests - gotta fix the mega watcher | 02:52 |
wallyworld_ | next step - machine characteristics in status | 02:53 |
wallyworld_ | i'll also need to look at some other critical bugs thrown my way and the simple streams metadata | 02:54 |
bigjools | hey guys, if I want to include a data file as part of my source, and have it required at runtime, what's the best way of doing this? | 03:02 |
wallyworld_ | what sort of file? | 03:05 |
bigjools | binary | 03:06 |
bigjools | it's a trailer for a VHD | 03:06 |
wallyworld_ | you want the code to write it out somewhere when it runs? | 03:06 |
bigjools | no, it needs to read it and then we send it to Azure | 03:06 |
bigjools | (as part of other stuff) | 03:07 |
wallyworld_ | i wonder if it should be packaged in the tools tarball? | 03:07 |
bigjools | it's a jigsaw piece of a bigger file if you like | 03:07 |
wallyworld_ | how big is it? | 03:07 |
bigjools | I need it to live in the provider library though, not juju core | 03:08 |
bigjools | 512 bytes | 03:08 |
bigjools | I'm wondering if there's a way of compiling the data in somehow | 03:08 |
wallyworld_ | i'd maybe uuencode it | 03:08 |
bigjools | or base64 | 03:08 |
wallyworld_ | yeah | 03:08 |
bigjools | hmmm :) | 03:08 |
wallyworld_ | and write it out | 03:09 |
bigjools | you're not just an ugly face | 03:09 |
wallyworld_ | well, with enough beer perhaps | 03:09 |
bigjools | it would need a lot | 03:09 |
wallyworld_ | sure, but you'd need more | 03:09 |
jam | bigjools: you can use "godoc" to run the docs locally, I believe. Which should match the version of go you are using. | 03:12 |
bigjools | jam: hey there. ah yes I keep getting confused with godoc | 03:13 |
jam | bigjools: for a 512byte content, I would just base64 encode it in source. I think there is a way to get to argv[0] (not sure what it is in go), but there won't be a directory of files like python. | 03:14 |
bigjools | yeah base64 does the job nicely | 03:15 |
bigjools | 6+ years since I used a compiled language... it's slowly coming back | 03:15 |
jam | bigjools: for the test suite, there is stuff like "find the source directory", but that doesn't work as well in final binaries :) | 03:17 |
bigjools | yeah | 03:17 |
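
For a 512-byte blob like the VHD trailer, base64-in-source works fine: encode the file once (e.g. `base64 -w0 footer.bin`) and decode it at runtime. Incidentally, argv[0] in Go is `os.Args[0]`, but embedding the data avoids needing the binary's location at all. A minimal sketch, with a placeholder constant rather than the real trailer bytes:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"log"
)

// vhdFooterB64 would hold the base64 encoding of the 512-byte VHD trailer,
// produced once from the original file and pasted in as a Go constant.
// The value here is a tiny placeholder, not real data.
const vhdFooterB64 = "Y29uZWN0aXggLi4u" // placeholder

// vhdFooter decodes the embedded trailer back into its raw bytes.
func vhdFooter() ([]byte, error) {
	return base64.StdEncoding.DecodeString(vhdFooterB64)
}

func main() {
	footer, err := vhdFooter()
	if err != nil {
		log.Fatalf("decoding embedded footer: %v", err)
	}
	fmt.Printf("embedded footer is %d bytes\n", len(footer))
}
```
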
thumper | wallyworld_: your watcher returned the same id twice | 03:48 |
thumper | wallyworld_: when I removed it | 03:48 |
thumper | causing the provisioner to crash :) | 03:48 |
wallyworld_ | \o/ | 03:48 |
thumper | when it tried to stop the container the second time | 03:49 |
wallyworld_ | i just use a lifecycle watcher | 03:49 |
wallyworld_ | i'll have to see what's going on with that | 03:49 |
thumper | wallyworld_: it is possible that it is me, I can add more tracing to check, but not today | 03:52 |
thumper | finishing early due to insane meetings last night | 03:52 |
wallyworld_ | sure. i'm also knee deep in this other branch | 03:52 |
wallyworld_ | hagw | 03:52 |
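
One way to make the provisioner tolerate the duplicate delivery described above, while the watcher itself gets investigated, is to keep the removal handling idempotent on the consumer side. A hedged sketch with placeholder names, not the provisioner's actual code:

```go
package main

import "fmt"

// stopContainer is a placeholder for the provisioner's real stop logic.
func stopContainer(id string) error {
	fmt.Printf("stopping %s\n", id)
	return nil
}

// processRemovals tolerates a watcher delivering the same id more than once
// (the symptom described above) by remembering what it has already stopped.
// This is defensive-consumer style only; the real fix may belong in the watcher.
func processRemovals(batches <-chan []string) {
	stopped := make(map[string]bool)
	for ids := range batches {
		for _, id := range ids {
			if stopped[id] {
				continue // duplicate event: already handled, skip it
			}
			if err := stopContainer(id); err != nil {
				fmt.Printf("stop %s failed: %v\n", id, err)
				continue
			}
			stopped[id] = true
		}
	}
}

func main() {
	ch := make(chan []string, 2)
	ch <- []string{"0/lxc/0"}
	ch <- []string{"0/lxc/0"} // duplicate delivery, handled once
	close(ch)
	processRemovals(ch)
}
```
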
=== tasdomas_afk is now known as tasdomas | ||
TheMue | morning | 07:04 |
rogpeppe2 | thumper: ping | 07:58 |
rogpeppe2 | TheMue: hiya | 07:58 |
rogpeppe2 | wallyworld_: any chance of a review of https://codereview.appspot.com/10364046/ ? | 09:16 |
rogpeppe2 | wallyworld_: (you reviewed the branch that had it as a prereq) | 09:16 |
wallyworld_ | ok, but i'm onto my 3rd beer for the evening :-) | 09:16 |
wallyworld_ | might improve the quality of my reviews :-) | 09:17 |
rogpeppe2 | wallyworld_: thanks for the other review, BTW - the "password" oops was a good catch! | 09:18 |
wallyworld_ | now worries :-) | 09:18 |
wallyworld_ | no | 09:18 |
wallyworld_ | rogpeppe2: done. btw, i got the watcher stuff sorted out thanks to your advice where to look | 09:21 |
rogpeppe2 | wallyworld_: cool, nice one. | 09:22 |
rogpeppe2 | wallyworld_: thanks | 09:22 |
wallyworld_ | fwereade_: there are 2 untriaged bugs for juju-core which i didn't really feel i could accurately process today. ie they seem important to me but ymmv. could you possibly take a look sometime? | 09:29 |
fwereade_ | wallyworld_, I hope so, I'm just trying to catch up on reviews this morning | 09:30 |
fwereade_ | wallyworld_, thanks for your review last night | 09:30 |
wallyworld_ | no hurry as such | 09:30 |
wallyworld_ | np, i liked your branch | 09:30 |
=== danilos_ is now known as danilos | ||
=== ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: wallyworld | Bugs: 7 Critical, 74 High - https://bugs.launchpad.net/juju-core/ | ||
rogpeppe2 | fwereade_: would appreciate a review of https://codereview.appspot.com/10449044 - another prereq for getting the agent connected to the API | 11:10 |
fwereade_ | rogpeppe2, looking | 11:15 |
fwereade_ | rogpeppe2, lovely, LGTM | 11:19 |
rogpeppe2 | fwereade_: thanks! | 11:19 |
fwereade_ | rogpeppe2, if you get a mo, I'd appreciate your thoughts on https://codereview.appspot.com/10420045/ | 11:20 |
rogpeppe2 | fwereade_: looking | 11:23 |
TheMue | anyone interested in reviewing https://codereview.appspot.com/10441044 (config-get output) | 11:24 |
TheMue | ? | 11:24 |
TheMue | fwereade_: btw, had a chance to take a look on the ec2 http reader moving? | 11:25 |
fwereade_ | TheMue, sorry, it's *right* at the bottom of the page | 11:25 |
fwereade_ | TheMue, I'm on the 3rd last though | 11:26 |
TheMue | fwereade_: ok, no prob, i've got more in the queue so it can wait ;) | 11:26 |
fwereade_ | TheMue, cool | 11:26 |
fwereade_ | TheMue, I proposed something last night that'll fix a noticeable bug when the cleaner integration lands :) | 11:27 |
TheMue | fwereade_: oh, will take a look (when I return from lunch) | 11:32 |
rogpeppe2 | fwereade_: so... just trying to get this straight in my head | 11:32 |
* fwereade_ braces himself for helpfully difficult questions ;) | 11:33 | |
rogpeppe2 | fwereade_: your CL changes things so that a service is only removed when the cleaner gets around to removing its units? | 11:33 |
rogpeppe2 | fwereade_: i'm trying to understand the role of the cleaner here | 11:33 |
fwereade_ | rogpeppe2, not exactly | 11:33 |
fwereade_ | rogpeppe2, it just runs unit.Destroy for each unit sooner than the unit agents themselves will (potentially, anyway -- and usefully, in case cases where there's no agent inplay yet) | 11:34 |
fwereade_ | rogpeppe2, the fundamental problem is that the UA was solely responsible for its lifecycle advancement past Dying once it was on a provisioned machine | 11:35 |
fwereade_ | rogpeppe2, but there's really quite a long delay between a machine being provisioned and any assigned unit agents actually starting to run | 11:36 |
rogpeppe2 | fwereade_: indeed there is | 11:36 |
rogpeppe2 | fwereade_: so... why can't the initial operation run unit.Destroy when appropriate? | 11:37 |
fwereade_ | rogpeppe2, checking the analysis captured in unit.Destroy is the big one, if that's crack then so is the whole idea | 11:38 |
fwereade_ | rogpeppe2, the idea was that we didn't want to tie the client up destroying everything one by one | 11:38 |
fwereade_ | rogpeppe2, and I think that's still reasonable | 11:39 |
rogpeppe2 | fwereade_: ah, seems like a good plan | 11:39 |
rogpeppe2 | fwereade_: i'd have asked the other way if you'd done it like that | 11:39 |
fwereade_ | rogpeppe2, but I don;t really mind tying up the cleaner worker, that's its job | 11:39 |
fwereade_ | ;p | 11:39 |
rogpeppe2 | fwereade_: i think this is the right approach, but i just wanted to get it all straight in my head | 11:39 |
rogpeppe2 | fwereade_: does the cleaner worker execute everything sequentially? | 11:40 |
fwereade_ | rogpeppe2, yeah | 11:41 |
rogpeppe2 | fwereade_: if not, i wonder if it might be good to make it execute some operations concurrently (assuming that gives some speed up) in the future | 11:41 |
fwereade_ | rogpeppe2, sounds reasonable, yeah | 11:41 |
rogpeppe2 | fwereade_: i can imagine that it might be really slow to destroy large services | 11:41 |
fwereade_ | rogpeppe2, indeed, but don't forget the unit agents are still busily killing themselves where possible on service destroy | 11:42 |
* fwereade_ has food on the table | 11:42 | |
rogpeppe2 | fwereade_: that's true, but if you've accidentally typed an extra 0 onto --num-units and are trying to remove the service ASAP, it might be an issue | 11:43 |
rogpeppe2 | fwereade_: enjoy! | 11:43 |
fwereade_ | rogpeppe2, yeah, point taken, this is the simplest possible v1 of what will otherwise become a disturbingly big change | 11:58 |
rogpeppe2 | fwereade_: oh yes, i wasn't suggesting we do it now | 11:58 |
rogpeppe2 | fwereade_: or even in the near future | 11:58 |
rogpeppe2 | fwereade_: just that it's something to bear in mind | 11:59 |
fwereade_ | rogpeppe2, however, any that do manage to short-circuit will have to be executed serially anyway, because they all use the service doc | 11:59 |
rogpeppe2 | fwereade_: not if there are two (or more) services being destroyed, i guess | 12:00 |
fwereade_ | rogpeppe2, very true | 12:00 |
fwereade_ | rogpeppe2, concurrent handling of actual cleanup docs themselves would be nice | 12:00 |
fwereade_ | TheMue, LGTM | 12:01 |
rogpeppe2 | fwereade_: yeah - otherwise this might tie up the cleaner for a long time when actually all the units are deployed and happy to remove themselves, stopping another service from being cleaned up. | 12:02 |
rogpeppe2 | fwereade_: reviewed | 12:03 |
fwereade_ | rogpeppe2, thanks, +1 on TransactionChecker, very nice | 12:05 |
rogpeppe2 | fwereade_: cool | 12:06 |
fwereade_ | rogpeppe2, yeah, the cleanup thing is pretty ghetto at this stage, at least it'll hopefully be relatively simple to massage into shape at some point | 12:06 |
fwereade_ | rogpeppe2, but suboptimal cleanup beats no-cleanup-at-all pretty handily :) | 12:07 |
rogpeppe2 | fwereade_: indeed so :-) | 12:07 |
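
The concurrent-cleanup idea floated above could eventually look something like the sketch below: destroy units in parallel but bound the number of in-flight operations. Everything here is hypothetical placeholder code, not the cleaner worker's actual implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// destroyUnit is a placeholder for the real unit.Destroy call the cleaner
// would make for each unit of a dying service.
func destroyUnit(name string) error {
	fmt.Printf("destroying %s\n", name)
	return nil
}

// destroyUnits destroys units of a large service concurrently, but caps the
// number of in-flight operations so state isn't hammered.
func destroyUnits(units []string, maxInFlight int) {
	var wg sync.WaitGroup
	sem := make(chan struct{}, maxInFlight)
	for _, name := range units {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot
		go func(name string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			if err := destroyUnit(name); err != nil {
				fmt.Printf("destroy %s: %v\n", name, err)
			}
		}(name)
	}
	wg.Wait()
}

func main() {
	destroyUnits([]string{"wordpress/0", "wordpress/1", "wordpress/2"}, 2)
}
```
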
rogpeppe2 | TheMue: reviewed | 12:09 |
rogpeppe2 | fwereade_: i don't think you published your TheMue LGTM | 12:09 |
rogpeppe2 | TheMue: would appreciate a review of https://codereview.appspot.com/10449044 | 12:09 |
rogpeppe2 | or from anyone else, for that matter | 12:09 |
fwereade_ | rogpeppe2, TheMue: I was thinking of https://codereview.appspot.com/10296046/ -- looks LGTMed to me | 12:10 |
rogpeppe2 | fwereade_: ah, i thought you were referring to https://codereview.appspot.com/10441044 | 12:10 |
fwereade_ | rogpeppe2, ha, missed that one, thanks | 12:10 |
rogpeppe2 | TheMue: you have another review | 12:17 |
fwereade_ | TheMue, reviewed https://codereview.appspot.com/10441044 as well | 12:18 |
rogpeppe2 | fwereade_: do you know if live tests have been fixed now? | 12:19 |
=== rogpeppe2 is now known as rogpeppe | ||
fwereade_ | rogpeppe2, I'm afraid I don't | 12:19 |
rogpeppe | fwereade_: :-( | 12:19 |
rogpeppe | fwereade_: i guess i'll try with trunk and see what happens | 12:20 |
rogpeppe | fwereade_: if i'm to land the agent API stuff, i really need to be able to test live | 12:20 |
fwereade_ | rogpeppe, I have a vague recollection of flinging those at you for verification the other day | 12:20 |
* fwereade_ looks a little shamefaced | 12:20 | |
rogpeppe | fwereade_: flinging what? | 12:21 |
fwereade_ | rogpeppe, a couple of bugs related to those | 12:21 |
fwereade_ | rogpeppe, I may be wrong, that bug weekend is a bit of a blur | 12:21 |
* rogpeppe starts running TestBootstrapAndDeploy | 12:22 | |
rogpeppe | fwereade_: you'll probably be happy to know i got the machine agent actually talking to the API live yesterday. | 12:23 |
fwereade_ | rogpeppe, I saw! awesome news :D | 12:23 |
fwereade_ | rogpeppe, you were gone before I saw it so I couldn't fling virtual underwear in appreciation | 12:23 |
rogpeppe | fwereade_: BTW we seem to have reverted to dialling more often than we should (about 3 times a second) | 12:24 |
fwereade_ | rogpeppe, whaa? :( | 12:24 |
FunnyLookinHat | I'm trying to figure out how we might take advantage of the upcoming containerization feature... but I need to test density. | 12:25 |
FunnyLookinHat | Do you think it'd be a decent comparison to just bootstrap juju locally and deploy a bunch of nodes via LXC ? | 12:25 |
rogpeppe | fwereade_: ah, looks like live tests do work on trunk, phew | 12:29 |
fwereade_ | rogpeppe, excellent ;p | 12:29 |
fwereade_ | rogpeppe, would you open a bug for the redialing though please? feels like a bad bug to release with | 12:29 |
fwereade_ | rogpeppe, so I think it's critical until we figure it out | 12:30 |
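
A dial loop with exponential backoff is the usual fix for the several-times-a-second redialling described above. A self-contained sketch with made-up delays and a placeholder dial function, not juju's actual connection code:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// dialState is a placeholder for the real connection attempt; it always
// fails here so the retry behaviour is visible.
func dialState(addr string) (interface{}, error) {
	return nil, errors.New("connection refused")
}

// dialWithBackoff retries a failed dial with a growing delay instead of
// hammering the server. Delays and attempt count are illustrative only.
func dialWithBackoff(addr string, attempts int) (interface{}, error) {
	delay := 500 * time.Millisecond
	const maxDelay = 10 * time.Second
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := dialState(addr)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial %s failed (%v); retrying in %s\n", addr, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return nil, lastErr
}

func main() {
	if _, err := dialWithBackoff("example.com:37017", 3); err != nil {
		fmt.Println("giving up:", err)
	}
}
```
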
fwereade_ | FunnyLookinHat, what are you looking to explore in particular? | 12:30 |
FunnyLookinHat | How much density I can get with lamp-stack deployments for a PHP + MySQL application | 12:31 |
FunnyLookinHat | fwereade_, i.e. I want to compare between simply having 100 virtual hosts on a single apache w/ a single mysql in separate directories w/ having 100 lxc container'd hosts including mysql | 12:31 |
TheMue | rogpeppe: reviewed | 12:31 |
rogpeppe | TheMue: thanks | 12:31 |
fwereade_ | FunnyLookinHat, ok, so you're thinking, say, 1 node with N containers, each container holding one php app and one mysql? | 12:32 |
TheMue | rogpeppe, fwereade_ : thx for your reviews | 12:32 |
FunnyLookinHat | fwereade_, Yeah, exactly | 12:32 |
FunnyLookinHat | "Entire Stack" in each LXC - with the purpose of serving a single PHP application. | 12:32 |
FunnyLookinHat | I'm hoping that will make backup and restoration a breeze | 12:33 |
fwereade_ | FunnyLookinHat, ok, cool -- this in contrast to 1 machine with N+1 containers, in which N hold a php app and one holds a single unit of a shared mysql service? | 12:33 |
fwereade_ | FunnyLookinHat, yeah, that sounds sensible | 12:33 |
fwereade_ | FunnyLookinHat, I know thumper has been making good progress there | 12:34 |
FunnyLookinHat | fwereade_, Sort of - yes. | 12:34 |
FunnyLookinHat | fwereade_, Yeah I saw his update this morning :) | 12:34 |
fwereade_ | FunnyLookinHat, I have architecture quibbles but I expect they'll all come out in the wash | 12:34 |
FunnyLookinHat | fwereade_, Realistically I'm trying to figure out "how dense" juju will be able to get me, or if I need to figure something else out... i.e. how much overhead is in LXC | 12:35 |
fwereade_ | FunnyLookinHat, so I would *hope* that trunk will be letting you play with that manually within the next week or so | 12:35 |
FunnyLookinHat | fwereade_, but you're essentially saying that I'd be able to get a decent idea just by running LXC within a Compute Instance or something | 12:35 |
fwereade_ | FunnyLookinHat, yep, makes sense, thanks | 12:35 |
fwereade_ | FunnyLookinHat, I think so yeah | 12:35 |
FunnyLookinHat | fwereade_, Ok thanks - much appreciated :) | 12:36 |
fwereade_ | FunnyLookinHat, the juju agents don't take up too many resources last time I checked | 12:36 |
FunnyLookinHat | Yeah - my only concern is running 250 mysql daemons and 250 apache daemons instead of 1 and 1 | 12:36 |
FunnyLookinHat | That's a lot of overhead | 12:36 |
FunnyLookinHat | thus the need to test | 12:36 |
fwereade_ | FunnyLookinHat, quite so | 12:36 |
rogpeppe | ha, don't accidentally type "go get -u" - you'll pull --overwrite your current repo. ouch. | 12:46 |
FunnyLookinHat | LOL | 12:51 |
wallyworld_ | rogpeppe: my IDE has local history built in so when i did go get -u one time, it was easy to recover :-) | 12:59 |
rogpeppe | wallyworld_: i thought i'd interrupted it in time, but i hadn't. luckily i had another copy of the branch that i was using to test against go1.0.3. i've proposed https://codereview.appspot.com/10455043 to fix the issue. | 13:00 |
wallyworld_ | oh cool :-) | 13:01 |
=== wedgwood_away is now known as wedgwood | ||
wallyworld_ | mramm: hi, i almost have a fix for bug 1188815 ready to test. do you know what cloud the bug was seen with, and if i can get some credentials to do a live test? | 13:39 |
_mup_ | Bug #1188815: security group on quantum/grizzly is uuid, goose chokes on it <serverstack> <Go OpenStack Exchange:Confirmed for wallyworld> <juju-core:Triaged by wallyworld> <https://launchpad.net/bugs/1188815> | 13:39 |
fwereade_ | rogpeppe, responded to https://codereview.appspot.com/10420045 | 13:39 |
fwereade_ | rogpeppe, I would be interested if you had any thoughts on the Cleanup-after-service-destroy-in-client idea | 13:40 |
fwereade_ | rogpeppe, it's kinda nasty | 13:40 |
fwereade_ | rogpeppe, *but* it means that new tools will do quick-destroys of old services in old environments | 13:40 |
fwereade_ | rogpeppe, but then maybe we don't want to encourage people to keep their 1.10 deployments going, long-term..? | 13:41 |
mramm | wallyworld_: I am not sure, it is whatever the Landscape team is using... Best to ask beret... He's online now. | 13:41 |
wallyworld_ | will do | 13:41 |
Beret | ahasenack, can you lend wallyworld_ a hand with that one? | 13:41 |
wallyworld_ | Beret: hi, i almost have a fix for bug 1188815 ready to test. do you know what cloud the bug was seen with, and if i can get some credentials to do a live test? | 13:41 |
_mup_ | Bug #1188815: security group on quantum/grizzly is uuid, goose chokes on it <serverstack> <Go OpenStack Exchange:Confirmed for wallyworld> <juju-core:Triaged by wallyworld> <https://launchpad.net/bugs/1188815> | 13:41 |
Beret | ahasenack, it's probably dpb that found it | 13:41 |
rogpeppe | fwereade_: i'm not sure exactly what idea you're referring to. is it mentioned in the comments? or are you just talking about our discussion earlier? | 13:42 |
ahasenack | hm? | 13:42 |
* ahasenack reads | 13:42 | |
fwereade_ | rogpeppe, it's at the end of the CL description | 13:42 |
ahasenack | wallyworld_: it was seen in our internal serverstack deployment, one that uses quantum for networking, not canonistack, which uses nova networking | 13:43 |
wallyworld_ | ahasenack: any chance of getting some credentials so i can test the fix against that? | 13:43 |
rogpeppe | fwereade_: ah, i think i'd probably gone into "tl;dr" mode by that stage and had just dived into the code :-) | 13:43 |
fwereade_ | rogpeppe, not surprised | 13:43 |
ahasenack | wallyworld_: I'll see what I can do, I'm not sure even I have credentials, that thing was just deployed | 13:43 |
fwereade_ | rogpeppe, sorry about that | 13:44 |
wallyworld_ | ahasenack: we have test service doubles i can test with, and also regression test against canonistack, but i want to be sure the issue is really fixed for you too | 13:44 |
ahasenack | wallyworld_: yeah, canonistack will only help in terms of regression testing, since the bug doesn't happen there | 13:45 |
ahasenack | we need openstack with quantum for this | 13:45 |
wallyworld_ | exactly | 13:45 |
rogpeppe | fwereade_: i think it's not worth calling Cleanup | 13:46 |
fwereade_ | rogpeppe, cool | 13:46 |
fwereade_ | rogpeppe, upgrade tools, upgrade environment, get fast destroys | 13:46 |
rogpeppe | fwereade_: precisely | 13:46 |
wallyworld_ | ahasenack: see what you can do and drop me an email. it's late friday evening for me now so no rush as such. i've still got a bit of coding to finish on the issue and will try and get that done over the weekend so i can get a fix committed asap for you | 13:46 |
fwereade_ | rogpeppe, gets people using the code we want them to too :) | 13:46 |
ahasenack | wallyworld_: where can I get a build/branch? | 13:46 |
ahasenack | wallyworld_: attached to the bug? | 13:46 |
rogpeppe | fwereade_: we have upgrade - people can use it (and they can downgrade too if there are problems) | 13:47 |
fwereade_ | rogpeppe, if an upgrade has problems I fear for its downgrade too tbh | 13:47 |
wallyworld_ | ahasenack: i'll commit the fix to goose trunk when it's done. it's still uncommitted on my harddrive | 13:48 |
ahasenack | wallyworld_: can you push somewhere, or would you like to test it first yourself? | 13:48 |
rogpeppe | fwereade_: hopefully the upgrade stuff doesn't touch much of the stuff that might be problematic for people | 13:48 |
rogpeppe | fwereade_: BTW i worked out why my second API-connecting machine agent wasn't working - the provisioner wasn't calling SetPassword on the new machine. fingers crossed this live test will work fine... | 13:49 |
wallyworld_ | ahasenack: you can certainly pull from my lp branch once i push it up (before it is reviewed and committed to trunk). but i have a little coding work to finish first. it's about 80% done | 13:49 |
* fwereade_ also crosses fingers | 13:49 | |
ahasenack | wallyworld_: ah, ok | 13:50 |
wallyworld_ | ahasenack: i thought getting access to the cloud to test might take a little time so thought i'd ask ahead of time to minimise delays | 13:50 |
ahasenack | wallyworld_: ok | 13:50 |
wallyworld_ | i was hoping to have access or something say monday :-) | 13:50 |
fwereade_ | rogpeppe, yeah -- I just worry that there's some subtle incompatibility in the data that gets stored by the current code | 13:50 |
wallyworld_ | ahasenack: i only started on the problem after my EOD today since it was marked as critical :-) | 13:51 |
wallyworld_ | and i had other stuff to get done first | 13:51 |
ahasenack | wallyworld_: ok, thanks | 13:51 |
wallyworld_ | even if i can't get access to the cloud, i'm pretty confident it will work if you grab the code and test | 13:53 |
rogpeppe | 8:04.529 PASS: live_test.go:0: LiveTests.TestBootstrapAndDeploy 479.767s | 13:53 |
rogpeppe | yay! | 13:53 |
* fwereade_ cheers, hoots, flings underwear | 13:53 | |
fwereade_ | hmm, plus.google.com hates me again today | 14:01 |
dpb1 | wallyworld_: if you give a branch and tell me how to pull it, I can test it | 14:05 |
dpb1 | wallyworld_: I'm running trunk in that cloud already, so it should be easy to update and overlay your stuff | 14:06 |
wallyworld_ | dpb1: awesome thanks. i'll have something done within a day. just finishing a bit more coding | 14:06 |
dpb1 | wallyworld_: ok, I'm in the US Mountain Time, so if you get something finished before I eod, I'll look at it. | 14:09 |
wallyworld_ | dpb1: i'm GMT+10 so it's midnight here and i may not stay awake to get it done. i can send you an email over the weekend with the branch details. i'm really hopeful of getting it done real soon now | 14:10 |
fwereade_ | ffs | 14:11 |
dpb1 | wallyworld_: hehe, ok, goodnight then (or close to it!) | 14:12 |
fwereade_ | sorry guys, didn't contribute much there | 14:12 |
fwereade_ | TheMue, how's the worker integration looking? | 14:12 |
dpb1 | wallyworld_: given your nick, I was wondering if you were down under. | 14:12 |
wallyworld_ | dpb1: good night | 14:12 |
wallyworld_ | yep, am in brisbane australia | 14:13 |
dpb1 | wallyworld_: ah, cool. :) | 14:13 |
fwereade_ | gaah -- has anyone tried to --upload-tools from current trunk? I'm assuming that the problem is that my network appears to be very flaky today, but it'd be nice to have confirmation that it really does work | 14:16 |
TheMue | fwereade_: not yet started, currently working on the review feedback for config-get | 14:17 |
fwereade_ | TheMue, ok, it's become critical for me -- shall I try and pick it up? | 14:18 |
fwereade_ | (btw, just managed to --upload-tools -- it's not you it's me) | 14:19 |
rogpeppe | fwereade_: it worked fine for me earlier | 14:19 |
fwereade_ | rogpeppe, cheers | 14:19 |
TheMue | fwereade_: ok, feel free to pick it, then i'll do auto sync tools next | 14:20 |
TheMue | fwereade_: in your review of the config-get you said it's a good opportunity to change the way of the format comparison | 14:23 |
TheMue | fwereade_: could you explain it more? | 14:23 |
fwereade_ | TheMue, asserting on precise bytes is not great, because YAML can produce all sorts of valid representations of the same data | 14:24 |
fwereade_ | TheMue, we would like it if, were we to (say) tweak the formatting details in cmd/out.go, these tests would continue to pass | 14:25 |
fwereade_ | TheMue, so, rather than asserting precise output, unmarshal that output and compare against the actual data you expect | 14:25 |
fwereade_ | TheMue, it's not foolproof, though, the precise numeric types might be unmarshalled differently by different implementations, say -- but I'm less worried about that | 14:26 |
fwereade_ | TheMue, sane? | 14:26 |
TheMue | fwereade_: ah, thx, then i had it already understood right | 14:26 |
TheMue | fwereade_: yes, so i can continue as i already started. it's the final change before a repropose :) | 14:27 |
fwereade_ | TheMue, sweet | 14:27 |
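
The review suggestion above, in test form: unmarshal the YAML the command produced and compare data rather than bytes, so reformatting in cmd/out.go doesn't break the test. juju's suite uses gocheck, but plain `testing` keeps the sketch self-contained; the output and expected values are invented for illustration, and goyaml is assumed as the YAML package.

```go
package output

import (
	"reflect"
	"testing"

	"launchpad.net/goyaml"
)

// TestConfigGetOutput compares unmarshalled data instead of exact YAML bytes.
// The command invocation is faked; only the comparison technique is the point.
func TestConfigGetOutput(t *testing.T) {
	out := []byte("title: My Title\nskill-level: 9000\n") // pretend command output
	expected := map[interface{}]interface{}{
		"title":       "My Title",
		"skill-level": 9000,
	}
	var got map[interface{}]interface{}
	if err := goyaml.Unmarshal(out, &got); err != nil {
		t.Fatalf("output is not valid YAML: %v", err)
	}
	if !reflect.DeepEqual(got, expected) {
		t.Errorf("got %#v, want %#v", got, expected)
	}
}
```

As noted above, numeric types are the weak spot of this approach: different YAML implementations may pick different Go types on unmarshal, so the expected values need to match whatever the chosen package produces.
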
fwereade_ | TheMue, can I ask you to do a quick proposal before you do auto sync tools please? I think it demands a little bit of discussion | 14:28 |
TheMue | fwereade_: a quick proposal? you mean for the way i will do it? | 14:41 |
fwereade_ | TheMue, yeah, I think it deserves just a little thought before we rush in | 14:42 |
TheMue | fwereade_: sure, will come back to you first when I've read a bit more about the problem | 14:42 |
fwereade_ | TheMue, just to figure out the drawbacks of whatever we pick and whether they're worth it | 14:42 |
TheMue | fwereade_: sounds fine to me, yes | 14:42 |
rogpeppe | i'm getting a compile problem trying to merge: https://code.launchpad.net/~rogpeppe/juju-core/325-machineagent-api-setpassword/+merge/170788/comments/380527 | 14:42 |
rogpeppe | it's really weird, because it builds fine locally | 14:43 |
rogpeppe | and i tried pulling trunk and merging into that and it still builds fine | 14:43 |
rogpeppe | mgz: any ideas what the problem might be? | 14:43 |
rogpeppe | particularly annoying because it's preventing me from proposing my agent-connects-to-state branch | 14:44 |
rogpeppe | jam: ^ | 14:45 |
TheMue | assignment count mismatch, aha | 14:45 |
TheMue | how do those lines look like? | 14:45 |
TheMue | ...oooOOO( before navigating through the code ) | 14:46 |
rogpeppe | TheMue: ha ha, they're still wrong. | 14:47 |
rogpeppe | TheMue: it *built* fine, but tests don't | 14:47 |
mgz | rogpeppe: not off the top of my head... | 14:47 |
mgz | ah, tests fail on the merged code? | 14:47 |
rogpeppe | mgz: yeah. i think i've sorted it though. there must've been a problem with tests failing after merging with trunk. i was sure i'd tested it properly. but there y'go | 14:49 |
rogpeppe | right, another 15 minute wait | 14:51 |
rogpeppe | fwereade__: ping | 15:49 |
fwereade__ | rogpeppe, pong | 15:49 |
fwereade__ | rogpeppe, how's it going? | 15:49 |
rogpeppe | fwereade__: i've got a bit of a testing problem | 15:49 |
fwereade__ | rogpeppe, oh yes? | 15:49 |
rogpeppe | fwereade__: i just propose -wip'd my branch (after endless hassles with dependencies) | 15:50 |
rogpeppe | fwereade__: and going through it, i saw a test-related hack i'd forgotten about | 15:50 |
rogpeppe | fwereade__: which is a time.Sleep(2 * time.Second) inside runStop (this is in cmd/jujud) | 15:51 |
rogpeppe | fwereade__: this is when testing the password changing stuff | 15:51 |
rogpeppe | fwereade__: the issue is that we don't have any way of knowing *when* the agent changes the password | 15:51 |
fwereade__ | rogpeppe, hmm... poll the conf file for changes? | 15:52 |
fwereade__ | rogpeppe, except I can't remember whether that happens before or after | 15:52 |
rogpeppe | fwereade__: hmm, i thought of that and wrote it off because i thought there were cases where we don't actually change the conf file | 15:54 |
fwereade__ | rogpeppe, surely we do if we're changing the password? | 15:54 |
rogpeppe | fwereade__: there are several checks in that test | 15:54 |
rogpeppe | fwereade__: but actually, i think the conf file is changed in all of them | 15:55 |
rogpeppe | fwereade__: which probably means it's not an adequate test, ironically | 15:55 |
fwereade__ | rogpeppe, how about a watcher on the state.Unit, waiting for the old password to give an error? | 15:55 |
rogpeppe | fwereade__: too specific - sometimes it doesn't actually change the password even though it changes the conf file | 15:56 |
rogpeppe | fwereade__: i think i'll go with polling the conf file and see if that works ok | 15:56 |
fwereade__ | rogpeppe, if it's tricky to test, I may be hearing the extract-type bells ringing a little | 15:57 |
rogpeppe | fwereade__: you know, actually i think you're right - i'll just test openAPIState in isolation. there's no particular virtue in testing it in situ, as the other tests would fail if it wasn't being called. | 15:59 |
fwereade__ | rogpeppe, cool | 15:59 |
rogpeppe | fwereade__: thanks for the suggestion | 16:00 |
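
Although the final call above was to test openAPIState in isolation, the poll-the-conf-file option that was considered is a generally useful test pattern: poll with a deadline instead of a fixed sleep. A sketch with arbitrary interval, timeout, and file path:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFileChange polls until the file's modification time moves past the
// given point or the timeout expires. Interval and timeout are arbitrary
// choices for the sketch.
func waitForFileChange(path string, since time.Time, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		info, err := os.Stat(path)
		if err == nil && info.ModTime().After(since) {
			return nil // file was rewritten since `since`
		}
		time.Sleep(50 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s to change", path)
}

func main() {
	// Hypothetical usage: wait up to 5s for an agent to rewrite its conf file.
	if err := waitForFileChange("/tmp/agent.conf", time.Now(), 5*time.Second); err != nil {
		fmt.Println(err)
	}
}
```
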
=== tasdomas is now known as tasdomas_afk | ||
=== tasdomas_afk is now known as tasdomas | ||
=== tasdomas is now known as tasdomas_afk | ||
rogpeppe | fwereade__: https://codereview.appspot.com/10259049/ | 17:11 |
rogpeppe | fwereade__: (finally!) | 17:11 |
rogpeppe | if anyone's still around, i'd appreciate a review of the above | 17:13 |
rogpeppe | and that's a good place to stop for the week. | 17:14 |
rogpeppe | see y'all monday! | 17:14 |
=== wedgwood is now known as wedgwood_away |