[00:01] anyone able to do a (likely quick) review: https://codereview.appspot.com/100620046/ === menn0_ is now known as menn0 [00:12] menn0: this is a purely personal thought, but decodeyaml could escalate that error to decodeyamlfromstdout, which could escalate it up, and so the actual error assertion would be in the test using it [00:13] perrito666: the reason for doing it the way I did is that it avoids boilerplate in each test method [00:14] perrito666: there's no backtrace in the failure output is there? [00:14] menn0: iirc no [00:14] perrito666: my Python background is showing :) [00:15] perrito666: I'll change the tests according to your suggestion. [00:15] oh, I am as pythonish as you, worry not [00:27] menn0: you might want to get a better review, I am relatively new here too [00:28] perrito666: understood. thanks for looking. [01:04] morning axw [01:05] hey wallyworld [01:05] sadly jenkins is not very happy, but the failures are seemingly random, one-off things [01:07] :( [01:07] for now though, there's a couple of bugs that need looking at for a 1.19.3 release. i plan to grab one after i run some gccgo tests, so if you are at a loose end, feel free to grab one also [01:07] sure [01:07] I was going to fix that utopic one first [01:08] then look at gccgo things [01:08] sure ok [01:08] wallyworld: turns out 1.18 *does* use juju-mongodb on local, but *only* on trusty [01:08] should be trusty onwards [01:08] lol ok [01:10] axw: feel free then to look at gccgo stuff after the mongo fix - first thing we need to do is fully understand how many test failures are left to fix [01:10] nps [01:12] hopefully it won't be too much effort to ensure that bugs in lp reflect the situation - one bug tagged ppc64el / test-failure per issue [01:12] then we can drive that count down to 0 [01:47] wallyworld: ping [01:48] wallyworld: JujuConnSuite already calls dummy.Reset() in its TearDownTest. That was basically my fix :( [01:48] ah ok [01:49] well there's still something wrong sadly :-( [01:49] yep [01:50] we can see how often it comes up [01:50] okay, you know where to find me [02:01] wallyworld: https://codereview.appspot.com/99410053/ please [02:01] looking === jcsackett_ is now known as jcsackett === bodie_ is now known as Guest96714 === Guest96714 is now known as bodie_ [03:09] to check whether my dependencies.tsv is suitable for use with gccgo, I can just "make" in the juju-core root directory, right? [03:11] bodie_: are you wanting to check that the dependencies themselves are compatible with gccgo? [03:12] bodie_: AFAICT the Makefile knows nothing about dependencies.tsv, though we should probably fix that [03:13] well, I figured if I've aligned deps with godeps, then if I make with gccgo, I should be able to tell whether the build is broken or not [03:13] I just don't know how to make with gccgo, which I'm sure is something simple :) just not seeing it [03:13] bodie_: so "make" will only switch to gccgo if you aren't on x86, armel or armhf [03:13] yeah, I was just noticing that [03:13] go build -compiler=gccgo launchpad.net/juju-core/... [03:14] okay, thanks [03:14] the "..." 
means recursively [03:14] that's simpler than I expected [03:34] jam: bodie_ that is an accurate summary [03:35] iff you are using trusty you can test with both compilers on your amd64 machine [03:35] just use the -compiler=gccgo flag to switch to gccgo [03:35] if that passes, that is all that we expect [03:37] hmmm [03:37] I thought I was on 14.04, but I'm on saucy [03:37] I'll have to try on my pc [03:41] bodie_: we can probably get the compiler in a backport ppa [03:41] but you probably want to upgrade to trusty pretty soon [03:41] a. it'll be easier [03:41] b. saucy support expires at the end of July? I think [03:42] c. it's the smoothest upgrade i've had, it's very polished from saucy -> trusty [03:42] makes sense :) there was some reason I'd set up our dev remote as saucy... I think there was some kind of weird build issue when we were coming onboard [03:42] and this laptop has gone from P -> Q -> R -> S [03:42] nice [03:42] -> T [04:15] axw: did you take https://bugs.launchpad.net/juju-core/+bug/1321492 ? [04:15] <_mup_> Bug #1321492: provider/openstack: gccgo test failures [04:18] davecheney: I did [04:19] davecheney: working through gccgo bugs now [04:19] sorry, just proposed a fix [04:19] no worries, was trivial anyway [04:24] axw: i'm going to fix all the trivials in the ppc tests today [04:24] then we'll be able to see the underlying problems better [04:24] there are, roughly [04:24] 3 [04:24] one tools related failure [04:24] a timeout with the joyent tests [04:25] and a runtime or compiler crash [04:25] i want to expose those as the only failures [04:25] https://codereview.appspot.com/95550045 [04:25] ok, cool [04:25] the rest are just noise [04:25] davecheney: the one in worker may not be trivial, if that's the runtime/compiler crash you're referring to [04:25] yup [04:25] looks like it's related to the receive loop in init() [04:26] axw: you take a look at https://bugs.launchpad.net/juju-core/+bug/1303583 [04:26] <_mup_> Bug #1303583: provider/azure: new test failure [04:26] i see you've snagged it [04:26] davecheney: already proposed [04:27] link ? [04:27] https://codereview.appspot.com/93520047/ [04:27] axw: do you use lbox propose -bug=NNN [04:28] then it links the bug to the branch [04:28] davecheney: I don't because of the milestone thing [04:28] so when you get an email that the branch is merged [04:28] I usually link manually, but sometimes forget [04:28] you can click through to the bug [04:28] fix the milestone and mark it fix committed [04:30] axw: fwiw i tag all these bugs as gccgo and ppc64el [04:31] davecheney: yup, have been looking at the gccgo tag so far [04:31] i consider them to be synonyms, but others like to track them independently [04:44] I think my MR is mostly ready to roll, but I'm having some trouble when I try to build with GCCGo. I'm not sure if I'm doing something wrong. Can anyone else verify? [04:44] https://codereview.appspot.com/94540044/ [04:44] it adds a few dependencies and one of them doesn't seem to be happy. [04:45] bodie_: which one [04:45] i'll try [04:45] github.com/sigu-399/gojsonpointer [04:45] % cover [04:45] PASS [04:45] coverage: 66.7% of statements [04:45] hmmm ... 
[04:46] % go test -compiler=gccgo [04:46] PASS [04:46] ok github.com/sigu-399/gojsonpointer 0.047s [04:46] lucky(~/src/github.com/sigu-399/gojsonpointer) % [04:46] works for me [04:46] bodie_: if you are using saucy [04:46] you will have gccgo-4.8 [04:46] which is, sadly, not up to the task [04:46] I'm on Trusty on my PC here [04:47] bodie_: can you show what you see, use paste.ubuntu.com [04:47] or install pastebinit [04:47] yeah === vladk|offline is now known as vladk [04:48] http://paste.ubuntu.com/7500153/ [04:48] perhaps it's because I'm using zsh... [04:49] oh [04:49] negative [04:49] same issue using bash [04:49] that is a different problem [04:49] here is what's happened [04:50] 1. you did go get -u -v launchpad.net/juju-core/... [04:50] which is going to fetch all the _current_ juju dependencies [04:50] right [04:50] then you switch to your branch [04:50] which essentially has new dependencies [04:50] which is why godeps is whinging [04:50] the simplest solution for today would be [04:50] go get -u -v github.com/sigu-399/gojsonpointer [04:50] I figured I should switch before doing a godeps so that I get the same version I had in my code [04:50] and the same for the other two new deps [04:50] then godeps should be happy [04:51] oh.... but it's complaining because my code is an older version of core...? [04:51] wouldn't ... oh [04:51] I see [04:51] could I go get lp:~binary132/juju-core/charm-actions ? [04:53] davecheney: I got to the bottom of the issue in worker [04:53] davecheney: seems that closing a channel twice in gccgo will abort the process, even if you recover the error [04:53] panic* [04:54] axw: hmm [04:54] is that worth a bug upstream ? [04:54] one nice thing about gccgo is gdb actually works [04:54] davecheney: http://play.golang.org/p/eUJw-e1GRJ [04:54] davecheney: yeah I think so [04:55] axw: two secs [04:55] bodie_: are you going ok for the moment ? [04:55] how do I patch name, username and id of the current user for testing? I've tried s.PatchEnvironment("USER", "admin001"), but in the test switch still prints "jesse" [04:55] axw: what happens if you make the call stack a bit longer [04:56] i suspect that the main function, or main goroutine might be a bit special in gccgo [04:56] yes, I'm about to head to bed actually (UTC-4) [04:56] davecheney: it wasn't in the main function in the test, that's just a minimal repro [04:56] would the right way to do things from the beginning be to fetch via lp:~binary132/juju-core/charm-actions directly? [04:56] axw: http://play.golang.org/p/VNezaP7ADp [04:56] bang on [04:57] davecheney: FYI it's in worker/simpler.go, the Kill method [04:57] we can work around it, but an upstream bug is warranted [04:57] hrm, gccgo not found on path on trusty either [04:57] axw: do you want to raise it ? [04:57] or i can do it [04:58] I can [04:58] cool, excellent [04:58] surely double closing a channel is a bug [04:58] and using recover to cover for that is a horrid smell [04:58] yeah [04:58] axw: raise the bug on golang.org/issue [04:59] will do [04:59] i don't think the issue tracker on the gofrontend project is used as much [04:59] can I just apt-get install gccgo? [04:59] will that be suitable? [04:59] bodie_: yes [04:59] okay, that's nice [04:59] bodie_: actually, you'll have to do [04:59] apt-get install gccgo-4.9 [05:00] uh oh, already started build. do I need to clean [05:00] ?
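A minimal sketch of the double-close behaviour axw and davecheney are chasing above — a reconstruction of the kind of repro behind the play.golang.org links at [04:54] and [04:56], not those exact snippets. With the standard gc toolchain the second close panics with "close of closed channel" and the deferred recover catches it; per the report above, the gccgo of the day aborted the process despite the recover:

    package main

    import "fmt"

    // closeTwice closes ch twice and recovers the resulting panic.
    // Under gc this returns the recovered error; under the gccgo
    // discussed above it reportedly aborted the process instead.
    func closeTwice(ch chan struct{}) (err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("recovered: %v", r)
            }
        }()
        close(ch)
        close(ch) // second close panics: "close of closed channel"
        return nil
    }

    func main() {
        fmt.Println(closeTwice(make(chan struct{})))
    }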
[05:00] er, it looks like I may have installed gccgo-4.9 by default (since trusty) [05:01] gccgo -v [05:01] % gccgo -v 2>&1 | tail -n1 [05:01] gcc version 4.9.0 20140405 (experimental) [trunk revision 209157] (Ubuntu 4.9-20140406-0ubuntu1) [05:02] http://paste.ubuntu.com/7500172/ [05:02] bodie_: run godeps -u dependencies.tsv [05:02] if it whinges, fix the problem and run it again [05:02] davecheney: filed https://code.google.com/p/go/issues/detail?id=8070 [05:03] davecheney: I can fix it in our code, unless you're already doing that [05:03] added some code [05:03] sorry, tags [05:03] axw: if you're there have a crack [05:03] i'm doing another ppc test run to find some more issues [05:03] all righty [05:04] there we go, seems to be working [05:04] thanks [05:04] and built! [05:04] win [05:04] is the bot running ? [05:04] it's been a long time since I marked that change as accepted [05:04] approved [05:04] bot? [05:05] bodie_: the commit landing bot [05:05] aww crap [05:05] i see the problem [05:07] I need to lbox propose again, right? [05:08] always [05:08] propose early, propose often [05:08] if in doubt.... propose again [05:08] then propose s'more! [05:09] yeah, my last propose was -wip [05:11] and there it should be [05:11] https://codereview.appspot.com/94540044 [05:12] wallyworld: axw what are we going to do about the joyent tests ? [05:12] which bit? [05:12] the time taken to run them? [05:13] i've got pull requests waiting to be merged by upstream [05:13] that will fix the execution time [05:13] is there anything else? [05:15] that's basically it [05:15] they run so slowly on ppc they always timeout [05:15] yeah. i did fix it over a week ago [05:15] just need to get my changes to upstream libs merged in, hopefully this week [05:16] davecheney: axw : so, do you have a feel for how long we need to get the tests running under gccgo and ppc (assuming joyent ones are fixed)? 1 week? 2 weeks? [05:17] wallyworld: today i feel confident that we can close this off in a week [05:17] o/ [05:17] \o/ [05:17] ask me tomorrow, we might have a different answer [05:17] lol ok [05:17] but it's looking pretty good so far [05:18] is there anything outside our control? what's the exposure to compiler issues? [05:18] ask axw, i think it's manageable [05:18] wallyworld: so far seems fine [05:18] compiler issues? [05:18] wallyworld: only one gccgo-specific issue, but that's because our code is a bit crap [05:18] brtb [05:19] ok, so we can fix our code hopefully [05:19] yes I am fixing that one atm [05:20] axw: looking at your address polling branch - it seems that DNSName is not really used with the changes made [05:20] http://paste.ubuntu.com/7500164/ [05:20] some of these have fixes waiting to be landed by the bot [05:20] wallyworld: it's not meant to be anymore [05:21] axw: yeah, i had heard that i think, just wanted to confirm. so we should follow up and remove it i guess [05:21] davecheney: that pastebin doesn't look too bad [05:21] wallyworld: oh sorry, misunderstood [05:21] wallyworld: if it's not used anywhere, definitely [05:22] wallyworld: I thought it might still be used in status [05:22] axw: it doesn't *seem* to be at first glance. not sure about status [05:22] but it wouldn't... 
it uses machine's public address now [05:22] yeah [05:22] wallyworld: I will follow up and wipe it out [05:22] only issue is we would want to give people the dns name to connect to [05:22] rather than an ip address [05:22] we can do that still, with Addresses [05:23] there's an address type [05:23] ah yes, true [05:23] 'night all [05:23] night [05:23] seeya [05:23] axw: ok, i'll get a 2nd opinion on getting rid of dnsname and we can nuke it [05:24] cool [05:30] davecheney: https://codereview.appspot.com/98480046 fixes worker [05:31] man, that is a nasty code smell [05:31] why isn't this code using a tomb ? [05:31] this is exactly what a tomb is for [05:32] * axw shrugs [05:33] tomb seems a little heavy here, it's pretty trivial code [05:33] lol [05:47] wallyworld: dumb question... jc.DeepEquals is not the same as gc.DeepEquals right? [05:47] wallyworld: if I want to use jc.SameContents what import is that? [05:47] jcw4: IIRC it differs based on whether []slice(nil) == []slice() [05:47] jcw4: same end result, but jc.DeepEquals gives far better errors, so use that [05:47] is the empty slice the same as a nil slice [05:48] plus the slice difference, yeah [05:48] jam, wallyworld cool thanks [05:48] gc.DeepEquals is considered deprecated [05:48] what is the import for jc.* [05:48] juju-core? [05:48] launchpad.net/juju-core/checkers i think [05:48] k, tx! [05:49] no [05:49] bad memory [05:49] github.com/juju/testing/checkers [05:50] wallyworld: I see it now... it's in a handful of tests already [05:50] yeah [06:00] hmm; gc.DeepEquals is okay for maps, but use jc.SameContents for slices? [06:01] it looks like even gc.DeepEquals is intended for slices... [06:01] is there any similar comparison tool for maps? [06:01] Or do I need to compare the keys and values separately? [06:02] jam, hey [06:02] morning dimitern [06:03] jam, so everybody other than niemeyer is chickening out of giving me an lgtm for this goamz branch, 3rd day in a row, and i'm starting to get frustrated.. https://codereview.appspot.com/98430044/ [06:03] jcw4: DeepEquals works for slices sure, but it fails if order is different [06:04] and he even gave me a not lgtm, but i fixed what we discussed, and i'll be waiting for him to appear later today and hopefully approve it [06:04] SameContents doesn't care about order, so you can think of it as treating a slice like a set [06:04] well, except sets can't have dupes [06:04] wallyworld: I'm getting an error using SameContents with two maps [06:05] use DeepEquals [06:05] wallyworld: ok [06:05] wallyworld: tx! [06:05] np [06:21] http://paste.ubuntu.com/7500334/ [06:21] axw: getting closer [06:23] nice [06:23] mgz, fwereade : I think this is the last go round: https://codereview.appspot.com/98260043 [06:28] dimitern: I have some comments, but adding reviews just adds more chefs to the pot :). I still feel like you're in the best position to be comfortable with it, and if you are, LGTM [06:29] jam, thank you! [06:52] mornin' all [06:59] hey rogpeppe [06:59] dimitern: yo! [07:00] rogpeppe, just back from holiday? or you're hanging in other channels usually? [07:00] :) [07:00] dimitern: just back from holiday [07:00] dimitern: was in Cyprus for two weeks [07:00] rogpeppe, excellent! [07:01] dimitern: yeah, it was! [07:06] night all [07:06] 'night davecheney [07:10] jam, so no standups today, just the x-team meeting @14.30 utc? 
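A sketch of the checker distinctions discussed from [05:47] onwards: jc.DeepEquals gives far better failure output than gc.DeepEquals and distinguishes a nil slice from an empty one, jc.SameContents ignores order, and for maps plain DeepEquals is the advice given at [06:05]. The import paths are the ones named above; the gocheck wiring is the usual TestingT hookup:

    package example_test

    import (
        "testing"

        jc "github.com/juju/testing/checkers"
        gc "launchpad.net/gocheck"
    )

    func Test(t *testing.T) { gc.TestingT(t) }

    type checkerSuite struct{}

    var _ = gc.Suite(&checkerSuite{})

    func (s *checkerSuite) TestCheckers(c *gc.C) {
        // Like gc.DeepEquals, but with much better error output, and a
        // nil slice is not treated as equal to an empty one.
        c.Assert([]string{"a", "b"}, jc.DeepEquals, []string{"a", "b"})

        // Order-independent slice comparison: the slice treated as a bag.
        c.Assert([]int{3, 1, 2}, jc.SameContents, []int{1, 2, 3})

        // For maps, plain DeepEquals does the job, as advised at [06:05].
        c.Assert(map[string]int{"a": 1}, jc.DeepEquals, map[string]int{"a": 1})
    }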
[07:11] axw: i will add another test before landing - you're right, it's messy to do [07:11] wallyworld_: okey dokey [07:11] i was being lazy :-) [07:12] wallyworld_: I just noticed there's a card "TestEnsureAdminUser is still broken" [07:12] wallyworld_: did CI fail again? [07:12] yeah, the latest test run has it failing, and also several over the past few days [07:12] not every time [07:12] but often enough [07:12] the other failures all seem to be one off [07:12] wallyworld_: latest run of "walk tests" on trusty is buggered, the machine is out of space [07:13] this is the one i looked at just before [07:13] http://juju-ci.vapour.ws:8080/job/walk-unit-tests-amd64-trusty/330/console [07:13] ok [07:13] plus a few others over the past few days [07:13] it seems much better but not quite fully fixed [07:14] * wallyworld_ -> soccer [07:14] later [07:50] dimitern: right now we didn't have the team standup because of the x-team meeting, I might reintroduce it, but we hadn't talked about it, so I'll skip it today [07:52] jam, ok, do any of us need to be at the x-team? [07:52] dimitern: the goal is that if you *can* be there you should, as we'd like people to go to 2 out of 3 of the meetings. [07:53] I will not be able to be there tonight, as it is my 10-year anniversary weekend [07:53] but I'll probably be going to most of them. [07:53] jam, right, I'll try to attend then - it is one of the poor-quality-voip-conference calls right? [07:54] morning [07:54] dimitern: ah sorry, this is the whole team juju meeting at 18:00 UTC [07:54] dimitern: sorry, the x-team meeting at 14:30UTC is not one you have to be at [07:54] we need a smattering of people from Juju to be there. [07:55] jam, ah, ok [07:55] dimitern: the one at 18:00UTC is the replacement for the weekly whole team meeting that we used to have at 10:00UTC on Thursdays [07:55] * TheMue is looking forward to the 1800UTC meeting, see some of the new US colleagues. [07:55] jam, it'll be more difficult to attend the one at 18 utc - it is the only one this week and will rotate next week at another time iirc? [07:56] I think 18:00 is around 20:00 for you, right? [07:56] dimitern: right, next week it will be 8 hours later [07:56] (16 hours earlier, depends on POV) [07:56] jam: it is 20 CEST, yes [07:56] jam, :) so +8h each week? [07:56] dimitern: right [07:56] hey TheMue [07:56] we'd like people to make it to 2 out of 3 [07:56] dimitern: heya [07:57] jam, ok, so 18 utc, 2 utc and 10 utc [07:57] jam: but don’t expect me to join the 200UTC meeting :D only if I cannot sleep [07:57] dimitern: yep [07:57] TheMue: yep. [07:57] It is at 6am here, so I can *just* make it. [07:57] jam, why 2 out of 3 ? [07:58] dimitern: it is an approximate, but hopefully we can make that work [07:58] 6am is hard enough *yawn* [07:58] we want people to see each other as much as we can [07:59] dimitern: it isn't hard and fast, but we used to see everyone every week, but can't actually do that with 21 people [07:59] jam, i see - that way at least for one of the meetings everyone will make an effort to attend at an inconvenient time [08:00] it is possible that we could make it at the start of and end of some people's work day [08:00] but it is too hard to actually try to do the numbers to actually figure out what time of day that would be [08:00] this way at least 1 per 3 weeks should definitely be in your normal TZ [08:01] or perhaps make 4 meetings instead of 3, that way the intended goal is easier to achieve i think [08:01] each one 6h apart [08:02] jam, was that an option? 
[08:03] it wasn't ever brought up, it would be something to consider === vladk is now known as vladk|offline === vladk|offline is now known as vladk === vladk is now known as vladk|offline [08:34] morning all [09:09] dimitern: the next patch in my API versioning saga: https://codereview.appspot.com/97630045 [09:09] jam, looking === vladk|offline is now known as vladk [09:11] * jam takes his dog to the kennel for the weekend, bb < 1hr [09:15] morning [09:18] perrito666: morning [09:47] * jam is back [09:47] morning perrito666 and voidspace [09:52] jam: morning [10:01] have I actually got internet right now... [10:02] mgz: yes you have :-) you ok for 1:1? [10:03] now google is responding, I'm there [10:10] gah, the srvRoot authorizer methods aren't tested [10:10] I've added a new one [10:10] so testing it means working out how to test those methods at all [10:12] unless they're tested elsewhere [10:20] jam: ping [10:20] voidspace: ? [10:20] jam: state/apiserver/root.go [10:20] jam: srvRoot represents a client's connection [10:20] well, it represents the API [10:21] jam: and it is the main implementation of the Authenticator interface [10:21] for clients or agents [10:21] jam: the Authenticator methods on it aren't tested directly that I can find [10:21] and in fact most tests use the FakeAuthorizer [10:21] voidspace: I'm pretty sure they are all tested indirectly, but yes, there are no direct tests of srvRoot behavior [10:21] I've added a (simple) method to this, and the only test is against the FakeAuthorizer (which also now has this method) [10:22] voidspace: so things that test the APIServer directly use FakeAuthorizer, but there are a bunch of client-side tests that go end-to-end that have a real srvRoot in the middle. [10:22] voidspace: but if you can write some direct tests, I'd be happier to see them. [10:22] jam: so the only way to make that work would be to create a client that is neither representing a MachineAgent nor a UnitAgent [10:23] jam: and we struggled to do that as it seemed the way to get the API was "OpenAPIAsMachine" [10:23] voidspace: we have 3 types of entities that access the API, Client, MachineAgent and UnitAgent [10:23] right, but how to get a test client entity wasn't obvious [10:23] voidspace: so opening the API as the admin user is not a machine or unit agent [10:23] ah, ok [10:24] voidspace: things that access the "Client" facade are not machine or units either. 
[10:24] jam, reviewed [10:24] if a direct test is preferable (srvRoot is a private type) can you suggest how I would do that or point me at something that gets at it for testing [10:24] voidspace: I know of nothing that tests it directly, and I'm working on that right now for some API versioning tests [10:25] ah right [10:25] *I* like to have direct Unit tests of each layer [10:25] not everyone felt the same [10:25] namely, the people who implemented this the first time [10:25] yep, me too - it seems wrong to only indirectly test [10:25] felt that it was better to test from the actual client code [10:25] because those tests can be changed or removed as it's not obvious what they're testing [10:25] dimitern: thanks [10:25] unless you add a note [10:25] it's also not obvious where to go to find them / update them [10:26] voidspace, yeah, +1 to direct tests [10:26] well, +1 in theory [10:26] voidspace, it helps with interface sanity more than anything else I'm aware of [10:26] private types that are hard to construct make it difficult [10:26] obviously not built with testability in mind [10:26] voidspace, +1 also to tiny weeny little packages that don't have internal layers that really need tests but don't have them [10:30] voidspace: fwiw I'm currently overhauling srvRoot a *lot* so you may not want to poke at it too much. [10:32] jam: ok, it's literally a one line method [10:33] jam: maybe it's not warranted as there's only one use for it - if I see the need again I'll add it [10:33] jam: and by then it should be easier to test due to your work [10:47] sgtm === vds` is now known as vds === vladk|offline is now known as vladk [10:55] fwereade: don't forget the team leads call in 5min. If you're available, I have a quick question wrt API that I'd like to run by you (if you can join early) [11:00] fwereade: in terms of making GetRsyslogConfig bulky, it takes no params [11:00] fwereade: so is it just a question of renaming to Configs and returning a slice of results? [11:01] voidspace, shouldn't we be asking for the rsyslog config for ourselves, though, much as we watch the config for ourselves? [11:01] dimitern: https://codereview.appspot.com/97570051 implements the "Does this Facade return the exact type I expected" [11:02] fwereade: no, units and state servers all need to log to "all the state servers" [11:02] fwereade: so we want "the global config for everything" in all cases [11:02] voidspace: fwiw ^^ removes some of the coupling so you can create one with apiserver.TestingSrvRoot(State) which would let you write the tests you wanted. (once all this stuff lands) [11:02] jam: ah, cool [11:02] voidspace, the answers for "where should machine 7 log to" and "where should machine 9 log to" may happen to always be the same [11:03] voidspace, but I think they're different questions [11:03] voidspace, and I think it's a bit nicer to keep the object of the sentence implied by the API call explicit rather than implicit in the connection [11:03] voidspace, any evidence of sanity there? 
[11:03] fwereade: it sounds like unnecessary complexity in the name of trying to be consistent [11:04] fwereade: the *explicit task* we are achieving is "make sure everything logs to all the state servers" [11:04] that's the goal of HA - present a single world view with redundancy [11:05] so to provide an api that looks like we could have multiple world views actually breaks that model [11:05] (conceptually) [11:05] voidspace, it's more about (1), yes, consistency, but (2) trying to make it easy to build the API out of easily composable chunks: and methods with implicit params don't really help there [11:05] voidspace, I may have to think about that for a mo, but I'm not sure I agree [11:06] I defer to you of course, but I'm a bit wary of the "make all API calls bulk calls even when not appropriate" approach [11:06] I'd rather we think about whether it makes sense for each endpoint (and yes bearing the future in mind because API churn is a pain) [11:06] but I still defer to you :-) === vladk|offline is now known as vladk [11:07] but we *want* a single logging endpoint for the user - and we want HA to provide redundancy for that *behind the scenes* [11:07] jam, thanks, will look in a bit [11:07] so it seems to me that the model here is a single call [11:07] voidspace, it is certainly less compelling than it originally was, because at long last we're getting api versioning, so the cost and complexity of changing an API is much smaller [11:07] voidspace, there are 2 questions in play though [11:08] versioning isn't a silver bullet - you still have to maintain the obsolete version, which can be a real nuisance [11:08] so better to get it right :-) [11:08] voidspace, one is, should inform the server who's sending the logs when we ask where they should be sent, and I think that's a yes [11:09] "should (?) inform the server" ? [11:09] voidspace, "we" or "the client" or whoever [11:10] voidspace, "where do all logs go always" != "where do $entity's logs go", even if the answers are currently the same [11:10] that's the one I'm disputing I think - we explicitly want all logs to go to the same place [11:10] doorbell - back in 2 [11:11] hello === wwitzel3_ is now known as wwitzel3 [11:11] wwitzel3: hello [11:12] voidspace, the other is "is the cost/benefit of mandating all-bulk calls likely to pay off" and I *think* they actually are: the consistency argument is stronger than you might think, because people do what they see us doing already; and IMO enough calls are worth bulking that it's a win to provide a consistent API that always allows it even if we don't always use it [11:12] fwereade: if we start sending different logs to different places, we'll be breaking the HA model - which is, all logs are always available and it doesn't matter which state server you contact [11:12] fwereade: right, I certainly don't want to fight strongly on that point [11:13] voidspace, strawman: additional external logging targets for units of particular services [11:13] fwereade: so, the args should be a slice of entities and the result a slice of configs? === vladk|offline is now known as vladk [11:13] voidspace, yes please [11:14] fwereade: hmmm... 
[11:14] fwereade: ok [11:14] voidspace, my argument is not that it always makes sense; it's that it often does, and that the bonus from consistency pushes it past the line to just-always-do-this [11:15] fwereade: ok === motter_ is now known as motter === vladk|offline is now known as vladk === vladk|offline is now known as vladk [11:53] fwereade: for bulk calls, is the convention that, assuming no "global error" (causing the whole call to fail) happens, you return "result, nil" - where result is a collection (.Results) with an Error field for individual errors [11:54] voidspace: yes [11:54] so even where this one call has one error, you return nil for the error - but result.Results[0].Error has the real error [11:54] cool [11:55] jam, fwereade: hiya, i was talking earlier with frankban about moving store out of core, and we think that it's a reasonable principle that nothing outside juju-core imports juju-core's TestBase. does that seem right to you? [11:56] rogpeppe, agreed; but ideally we'd be moving stuff to github.com/juju/testing rather than duping or dropping functionality [11:56] fwereade: yup [11:56] rogpeppe, perfect [11:56] fwereade: but TestBase in particular has very core-specific functionality [11:57] fwereade: but packages like filetesting will move too [11:57] rogpeppe, yep, +1 to that [11:58] rogpeppe: in general we'd rather have things not depend on code inside juju-core, and have things nicely pulled out into smaller modules. I'd be willing to be a bit pragmatic about it [11:59] jam: i hope we can be good about it. in particular, i very much hope that we can make a strictly non-cyclic dependency graph between repositories. [12:07] fwereade: https://codereview.appspot.com/100460045/ is the one patch along the way that didn't actually get an LGTM, I think I've finished the work on it that I wanted to do. [12:07] dimitern: ^^ if you want to give that a look as well [12:08] it is *mostly* a mechanical application of the previous patches. [12:08] rogpeppe: I didn't notice that reply came from you [12:08] rogpeppe: thanks, and hi [12:08] voidspace: hiya :-) [12:09] hey rogpeppe wb === psivaa is now known as psivaa-afk [12:09] perrito666: ta! [12:10] fwereade: https://codereview.appspot.com/97630045/ is a followup that Dimiter reviewed, and https://codereview.appspot.com/97570051/ is one that needs review [12:18] jam, reviewed https://codereview.appspot.com/97570051/ [12:18] dimitern: thanks [12:19] * perrito666 wonders why ctrl+w doesn't work on the screen he is looking at instead of the one with focus [12:19] perrito666: heh, I do that all the time [12:38] hah, I wondered why I suddenly had all these failures on our rsyslog branch [12:38] wwitzel3: ping [12:39] wwitzel3: revision 2755 (most recent one) of your rsyslog branch merges a branch of mine that was a dead end [12:39] wwitzel3: I restarted the work in a different branch [12:39] wwitzel3: can you back out that revision? [12:42] wwitzel3: I have further commits so it's harder for me to do [12:44] in the meantime [12:44] * voidspace lunches [13:04] voidspace: "bzr merge -r 2755..2754" should do what you want, fwiw [13:24] voidspace: was eating breakfast and getting Jessa off to work, ping me when you're back, I don't even see that revision on my branch of rsyslog-api [13:32] fwereade, hi [13:33] fwereade, I've started working on moving juju-core/cmd to a separate package (github.com/juju/cmd probably) - what is the process of creating a new juju repo on github? 
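A hypothetical sketch of the bulk-call shape confirmed at [11:53]: entity args in, a .Results slice out with a per-entity Error field, and the method-level error reserved for failures of the whole call. The facade, fields, and lookup here are illustrative stand-ins, not juju-core's actual types:

    package rsyslog

    import "fmt"

    // Illustrative types only; the real params types differ.
    type Entity struct {
        Tag string
    }

    type Entities struct {
        Entities []Entity
    }

    type RsyslogConfigResult struct {
        Error  error  // per-entity error; nil on success
        Config string // stand-in for the real config payload
    }

    type RsyslogConfigResults struct {
        Results []RsyslogConfigResult
    }

    // RsyslogAPI is a hypothetical facade; configs stands in for state access.
    type RsyslogAPI struct {
        configs map[string]string
    }

    // GetRsyslogConfig puts individual failures in Results[i].Error and
    // returns a nil method-level error unless the whole call fails.
    func (api *RsyslogAPI) GetRsyslogConfig(args Entities) (RsyslogConfigResults, error) {
        results := RsyslogConfigResults{
            Results: make([]RsyslogConfigResult, len(args.Entities)),
        }
        for i, entity := range args.Entities {
            cfg, ok := api.configs[entity.Tag]
            if !ok {
                results.Results[i].Error = fmt.Errorf("entity %q not found", entity.Tag)
                continue
            }
            results.Results[i].Config = cfg
        }
        return results, nil
    }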
=== psivaa-afk is now known as psivaa [13:51] tasdomas, I'm a bit surprised by cmd... oh, yeah, the store commands use it now [13:51] tasdomas, are you in the team on github? [13:57] wwitzel3: http://bazaar.launchpad.net/~wwitzel3/juju-core/009-ha-rsyslog-api/revision/2755 [13:58] wwitzel3: is that not your branch? https://code.launchpad.net/~wwitzel3/juju-core/009-ha-rsyslog-api [13:58] voidspace: huh, I just don't see it locally I guess [13:58] heh [13:59] wwitzel3: the command that jam gave is the one you should run [13:59] voidspace: it is indeed my branch, but locally when I look at the log, I see 2747 as the latest [13:59] it would be more painful for me [13:59] hah [14:01] voidspace: ok, try now? .. [14:01] not sure if I should cross my fingers or plug my ears [14:01] both? .. [14:02] hah [14:03] wwitzel3: now I see the latest revision as 2747 (!?) but with the offending revision still in it [14:03] I believe [14:03] when I merged I got no changes anyway [14:03] ok what is the offending revision # now? [14:03] 2747 [14:05] voidspace: ok, pushed up the revert of that (I hope) === hatch__ is now known as hatch [14:22] wwitzel3: ah, I see what happened I think [14:23] wwitzel3: you intended to merge in my changes removing the obsolete check that Port and CACert had changed [14:23] wwitzel3: this was the branch that should have been merged [14:23] https://code.launchpad.net/~mfoord/juju-core/ha-rsyslog-shortcut-removal/+merge/220105 [14:24] wwitzel3: lp:~mfoord/juju-core/ha-rsyslog-shortcut-removal [14:26] ok, merged and pushed === cory_fu2 is now known as cory_fu [14:29] wwitzel3: and rsyslog worker tests are passing again for me! [14:29] and I don't think we lost anything... [14:30] great :) [14:30] hmmm... [14:30] except this on your branch [14:30] wwitzel3: [14:30] Diff: 12160 lines (+2221/-2600) 262 files modified (has conflicts) [14:30] great [14:30] wwitzel3: you may need to merge trunk and resolve conflicts [14:31] :-o [14:31] yep, doing that now [14:32] 11 conflicts encountered. [14:32] awesome [14:32] weird [14:32] wwitzel3: files you've not touched you can resolve with --take-other [14:33] well that is handy :) [14:33] wwitzel3: and in fact, the conflicts are pretty much all in files we've not touched [14:33] voidspace: pushed [14:33] wwitzel3: thanks [14:34] voidspace: we should pair after this call [14:34] wwitzel3: ok [14:37] anyone recognise this error: "missing agent version in environment config" [14:37] in provider/dummy [14:38] actually comes from testing.AssertStartInstance [14:38] voidspace: your config file is being generated without version ? 
[14:38] perrito666: sure, but it's from a test [14:38] sorry, yeah - test failure error [14:38] ah, never mind [14:39] I have build failures [14:39] hah [14:39] priorities [14:39] :P [14:39] in the openstack provider [14:40] I wonder if dependencies.tsv wasn't updated on trunk [14:41] if I merge from trunk it says nothing to do [14:42] but if I switch to trunk and run godeps it *does* update some dependencies [14:42] wwitzel3: you still have an 11000 line diff [14:42] wwitzel3: I think you somehow collapsed a bunch of changes into that single revision === hatch__ is now known as hatch [14:43] or something [14:43] somehow our branch has undone a load of changes [14:43] goddammit [14:44] well, I can undo the merge I did earlier [14:44] where I jumped back a revision [14:44] right, that would bring back the unwanted changes from my branch [14:45] let's look at the revision history and see if we can work out where we want to go back to [14:46] wwitzel3: it seems like you want to go back to 2746 (which I thought was what you'd done previously) [14:46] and then merge trunk in [14:46] which should leave just our changes [14:49] ah [14:50] so the problem is that trunk has already been merged into your branch [14:50] so the reverse merge *undoes* trunk revisions [14:50] merging trunk again says "Nothing to do" because it sees those revisions in the merge history [14:50] mgz: ping [14:51] niemeyer, hey [14:51] dimitern: Heya [14:51] niemeyer, I fixed what we discussed in https://codereview.appspot.com/98430044/, can you please take a look if it's good to land? [14:52] dimitern: Will do, thanks [14:54] fwereade / others, looks like all the libs in question are apache2-licensed in their source code but LICEN(S|C)E[.txt] isn't included in the two deps for gojsonschema. should I open MRs or is it sufficient to have the license in the files themselves? [14:54] wwitzel3: I'm going to try and fix it by branching trunk and merging from revision 2747 of your branch [14:54] voidspace: ok [14:54] wwitzel3: that *seems* to have worked [14:55] wwitzel3: we may need to abandon your one - the history seems to be irrevocably screwed now (?) [14:56] wwitzel3: I'm in moonstone by the way [14:56] dimitern: Does it make sense to have the same domain name for different private IPs? [14:57] dimitern: Ah, nevermind [14:57] dimitern: The loop breaks out on the first entry [14:57] niemeyer, right [14:57] dimitern: Okay, LGTM [14:57] niemeyer, thanks! [14:59] wwitzel3: https://code.launchpad.net/~mfoord/juju-core/ha-rsyslog-good/+merge/220667 [15:06] wwitzel3: https://code.launchpad.net/~mfoord/juju-core/ha-rsyslog-good/+merge/220669 [15:07] voidspace: second link looks right [15:09] someone help me understand what fwereade means by "d" in these comments? 
https://codereview.appspot.com/94540044/diff/60001/charm/actions_test.go [15:10] bodie_: delete [15:15] wwitzel3: https://code.launchpad.net/~mfoord/juju-core/ha-rsyslog-2/+merge/220603 [15:17] voidspace: hey, sorry missed ping earlier [15:17] o/ mgz [15:17] hey bodie [15:18] mgz: we have a screwed bazaar branch that we can't recover [15:18] mgz: we've mostly worked round it now and are abandoning the branch [15:18] mgz: but I thought you might be able to help [15:18] voidspace: fallback is just pull out the changes in the tree into a fresh branch normally [15:19] mgz: right, that's what I'm doing [15:19] mgz: except it's changes from three different branches we had consolidated [15:19] and it was the consolidation that screwed us [15:19] what fun [15:19] and then unfortunately there was a merge back the other way - making it hard to back out changes without also reverting a load of trunk changes [15:19] which is what we did [15:19] and why we're screwed [15:20] we reverted 11000 lines of changes from trunk [15:20] and the history shows those changes as already merged - so we can't bring them back again [15:20] they probably weren't needed anyway [15:20] yeah, probably [15:24] wwitzel3: this is now the full consolidated branch [15:24] https://code.launchpad.net/~mfoord/juju-core/ha-rsyslog-good/+merge/220669 [15:26] voidspace: losing the history shouldn't matter much, as we're going to be losing history anyway next week [15:26] mgz: really, we're just dumping into github rather than converting the repo? [15:26] mgz, losing history? [15:26] what? [15:27] easy to convert with history [15:27] voidspace: we're converting, but that's a lossy conversion [15:27] oh, ok [15:27] we'll just pretend it's git's fault [15:27] I mean, I'm sure nothing like this will ever happen once we're using git [15:28] git is not *at all* renowned for mangling repos into completely unusable states [15:28] voidspace: nope, git users are though [15:28] voidspace, perrito666, bodie_ is anyone planning to land a branch in the next hour? I want to take CI offline to upgrade how it builds test packages [15:29] sinzui: nope [15:29] sinzui I'm hoping to in a couple hours [15:29] hoping to get one in asap, but struggling a little with tests right now. I can push it back [15:29] perrito666: git will happily chainsaw your repo into an unusable mess [15:30] and smile whilst doing it [15:30] sinzui: an hour should be fine [15:30] voidspace: but will blame me [15:30] so technically its my fault [15:30] sure... [15:30] I'll happily blame you too if you like :-D [15:31] bodie_, does the landing bot hate you? [15:31] voidspace: usually when I use git, which I like, I feel like barely above using diff+patch+ftp [15:31] I assume I would know if it did [15:31] heh [15:31] and I don't, so it must not [15:31] fwereade: we've created a new CL as we managed to "break" the other one [15:31] fwereade: https://codereview.appspot.com/91630045/ [15:32] fwereade: that addresses your previous review comments as discussed (I believe) [15:48] jcw4, mgz, I'm not certain I'll be able to have a voice session at 1600 -- michelle will be getting home any minute now and i failed to communicate the plan to her so I might have to do text this time [15:49] I think we're preferring IRC right now anyway [15:49] that's fine [15:49] #jujuskunkworks to reduce the noise here though? 
lets === vladk is now known as vladk|offline === hatch__ is now known as hatch [16:48] fwereade: I'm assuming this is what you wanted with the err handling and return of newActionID: http://bazaar.launchpad.net/~johnweldon4/juju-core/action/view/head:/state/unit.go#L1343 [16:49] mgz: ping [16:52] jcw4, yeah, looks good [16:52] fwereade: ta [16:53] fwereade: so I assume that when we release 1.20, what is currently "upgrades/steps118.go" will be changed to "upgrades/steps120.go" [16:54] fwereade, https://codereview.appspot.com/94540044 [16:54] hopefully this is satisfactory :) [16:55] I made a few tweaks to the regexp that I'd not caught until I added the param name tests you'd requested [16:55] voidspace, no? those are needed for anything pre-1.18 to be valid for 1.18; they don't need to be run again from 1.18 to 1.20; others might, and they'd be 1.20 ones (or I guess 1.19 ones, I hope we'd discover them earlier than 1.20...) [16:56] voidspace: hey [16:57] fwereade: but we don't have steps116.go, I thought we only supported 1.18 -> 1.20 [16:57] mgz: hi, I was going to ask what I'm asking will [16:57] right -- so the deps I added with gojsonschema have apache v2 license in the source files but not repos themselves [16:58] I'd better my changes [16:58] do I need to get the author to put LICENSE.txt in the repos or are we good? [16:58] *revert my changes [17:10] voidspace, the idea is that we *should* be able to support arbitrary upgrades, and that we run them from src->dst in order [17:10] I have a local provider deployment of mysql to kvm that hangs in pending state. any idea if that is fixed by bug 1317197? [17:10] <_mup_> Bug #1317197: juju deployed services to lxc containers stuck in pending [17:10] fwereade: ok, I'll leave the syslog port upgrade step in place [17:11] sinzui, maybe you know ^ [17:13] coreycb, sorry, the bug is unrelated. That bug is about a lock dir used when working with lxc templates [17:15] sinzui, ok need a bug for my issue? [17:17] coreycb, common reasons for localhosts stuck in pending is a bad cloud image cache. I don't know how kvm manages that [17:17] coreycb, I think we do need a bug https://bugs.launchpad.net/juju-core/+bugs?field.tag=kvm doesn't describe the issue [17:30] fwereade, or mgz : if you get a chance to review -- https://codereview.appspot.com/98260043/ [17:30] sinzui, thanks, I opened bug 1322281 [17:30] <_mup_> Bug #1322281: local provider deployment of mysql to kvm hangs in pending state [17:30] thank you [17:37] Ahh, there's nothing like starting the day at 4am with a puking 11 month old. [17:38] natefinch: all uphill from here [17:38] haha, yeah [17:40] I've ended the night with a puking 30 year old before, if that makes you feel any better :P [17:40] wwitzel3: yourself doesn't count :P [17:41] I'd puke, that's fiscally irresponsible [17:41] I don't [17:41] wwitzel3: I don't know... with a 30 yro you can laugh at them [17:42] and... walk away [17:42] I feel like you can laugh at an 11 month old too, doubt they would remember [17:42] haha [17:42] oh, yeah, well the walk away part .. not so much [17:43] At least I didn't have to catch her puke in my hands this time, that was nice. [17:44] I see as a parent your definition of nice has to change slightly [17:44] lol yup [17:44] haha yep [17:51] natefinch: morning [17:52] voidspace: morning :) [17:54] hmmm... 
so adding an attribute to the slice of immutableAttributes in config.go isn't sufficient to make it actually immutable [17:54] I thought that was a bit hopeful [17:54] ah, wait [17:54] I didn't go install first [17:55] maybe it still is [17:55] and it looks like it is === vladk|offline is now known as vladk [17:56] ERROR cannot change syslog-port from 6514 to 3030 [17:56] that's what I wanted :-) [17:56] nice [17:56] ship it! [17:57] I should probably test it [17:57] rogpeppe, you around? [17:57] but that can wait until the morning [17:57] g'night all [17:57] EOD [17:57] o/ === vladk is now known as vladk|offline === vladk|offline is now known as vladk [18:56] natefinch, ping [18:56] alexisb: howdy [18:56] given you are the lead that is awake :) [18:57] heh [18:57] can you take a look at 2 bugs real fast and get them to triaged state [18:57] most likely they are both critical given they are blocking teams but I want someone to take a quick look first [18:58] 1322302 and 1322281 [18:58] ok [18:58] thanks [18:58] natefinch, does that mean you can give some input on whether we should use a dep of relatively unknown quality from github? rogpeppe indicated some concern [18:58] bodie_: [18:58] bodie_: er yep [18:58] bodie_: what's the repo? [18:59] https://github.com/xeipuuv/gojsonschema [18:59] AFAIK, it's the go-to JSON-Schema implementation for Go -- the only other github repo for gojsonschema is a fork of it which was recently merged [18:59] I could loop you into the MR email thread if you'd like [18:59] bodie_: I think I saw part of it at least [19:00] about licensing, right [19:00] I should let you take care of what alexisb wanted, but let me know if you have any input [19:00] bodie_: I'll take a look at it. [19:20] alexisb: I made the first one high, since slow is not the same as "doesn't work" [19:20] alexisb: the second one is "doesn't work" so I made it critical [19:20] alexisb: both need investigation by dev or QA to confirm it's reproducible, though [19:21] natefinch, thank you, can you make sure to comment on any needs that block forward progress from juju-core in the bug? [19:28] alexisb: sure [19:59] natefinch: here is the everybody-is-happy patch https://codereview.appspot.com/92320043 [20:01] perrito666: nice === vladk is now known as vladk|offline [21:22] morning all [21:23] how do you find the current juju user? [21:34] wallyworld_: et al, are folks seeing issues on HP Cloud [21:34] mbruzek: is seeing hex output in juju status [21:34] oh really? 1.18.3? [21:35] wallyworld_: 1.19.3 I think [21:35] I am seeing some very strange output [21:35] http://paste.ubuntu.com/7503161/ [21:35] juju status [21:36] ok, so crappy error message :-( looks like the security group quota may have been exceeded perhaps [21:36] let me have a look to see if i can figure it out [21:40] mbruzek: so, looking at the hp console, it seems your machine 1 reports an " Error(spawning) " [21:40] so at first glance it looks like juju tried to start the machine and got an error back from hp cloud [21:40] and reported the error in juju status but in a crappy way [21:41] you could try destroying the machine in juju [21:41] wallyworld_, I will do that now [21:41] and possibly manually deleting from hp cloud [21:41] perhaps file a bug about the poor error message [21:42] wallyworld_, this is my second time seeing these problems, I believe they will persist after I destroy-environment and re-bootstrap [21:43] mbruzek: it seems though that some machines start ok? 
correct, but there always seems to be one that does not. [21:43] i tried clicking on the console log in the hp cloud admin page, it just hung [21:43] $ juju destroy-machine 1 [21:43] ERROR no machines were destroyed: machine 1 has unit "cassandra/0" assigned [21:43] Do I need to --force? [21:44] you either need to destroy the unit/service first or i think there's now a --force option? [21:44] perhaps the all-machine.log on the start server will give clues [21:45] you could juju debug-log in another terminal window as you try and deploy charms [21:45] and then see what it says if the machine provisioning on the cloud fails [21:45] not start server, state server [21:46] arosales: you meeting with the joyent folks today? [21:47] wallyworld_, I already had juju debug-log running in another window! [21:47] what did it say as machine 1 failed to start? [21:47] pastebin it if you want [21:50] wallyworld_: they deferred till next wed [21:50] Is there somewhere all-machine.log is on my system? I am cut and pasting this to pastebin and it is not very fast [21:50] wallyworld_: any traction on your pull requests? [21:50] arosales: no :-(, still pending. we really need them merged [21:50] mbruzek: the log file is on the state server [21:51] wallyworld_, Here is the top of the juju debug-log command after I started the deployment [21:51] http://pastebin.ubuntu.com/7503212/ [21:51] wallyworld_: ugh. ok I'll ping again. Really they shouldn't have to meet with me to get the pull request merged. I thought they were taking a look. [21:51] wallyworld_, machine-0: 2014-05-22 19:48:53 ERROR juju.provisioner provisioner_task.go:417 cannot start instance for machine "1": cannot run instance: failed to run a server with nova.RunServerOpts [21:52] arosales: part of the issue is that the ppc tests keep timing out because of the issue, and we are under pressure to get those sorted out [21:52] arosales: maybe they are looking at them :-) i'll check later today to see if they're still pending [21:53] mbruzek: so, it's saying that the cloud has reported an error starting the instance but doesn't really say what the error is (eg error code) [21:55] wallyworld_: ok I'll add that into my ping. [21:55] thank you :-) [21:57] mbruzek: did you just destroy environment? [21:57] wallyworld_, Yes I wanted to try a simpler case. [21:58] the console shows just a state server machine now which is green, but also an old machine 1 [21:58] and i think that's the issue [21:58] the old machine 1 isn't getting removed because it's broken [21:58] I am on the hpconsole but not seeing that information what page are you on? [21:58] https://console.hpcloud.com/compute/az-1_region-a_geo-1/servers [21:58] if the old machine 1 is there, juju can't start another one with the same name [21:59] OK I see [21:59] that would be a logical explanation for the error [21:59] Yes [21:59] I just bootstrapped so I should only have 1 server out there. Let me destroy and clean that one up manually [21:59] yep, i reckon that will solve your issue (i hope) [22:00] wallyworld_: pinged again re joyent pull requests [22:00] awesome, thanks [22:07] wallyworld_, Terminating juju-hp-mbruzek-machine-1 seems to have done the trick. My deployment is looking to be in better shape. [22:08] great :-) [22:08] my guess is there was an error starting it in the first place and hence a subsequent destroy env couldn't kill it [22:08] I agree [22:13] hazmat: you around? [22:16] wallyworld_, am now [22:16] waigani, what's up [22:16] hazmat: hey :) [22:16] doh. 
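Back to voidspace's immutable-attribute experiment at [17:54]: a minimal sketch of the pattern as described there — any attribute named in the immutableAttributes slice is rejected on change during config validation, producing the [17:56] error. The names and signature are assumptions for illustration, not config.go's exact code:

    package config

    import "fmt"

    // Assumed contents; the real list in config.go differs in detail.
    var immutableAttributes = []string{
        "name",
        "type",
        "syslog-port", // the entry added in the session above
    }

    // validateImmutable rejects any change to a listed attribute.
    func validateImmutable(old, new map[string]interface{}) error {
        for _, attr := range immutableAttributes {
            if old[attr] != new[attr] {
                return fmt.Errorf("cannot change %s from %v to %v", attr, old[attr], new[attr])
            }
        }
        return nil
    }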
[22:16] :-) [22:17] ah you wanted wallyworld_ [22:17] hazmat: with dan's bug report about lxc staying in pending a long time - that's just because the first machine has to download the lxc image [22:17] wallyworld_, hmm.. [22:17] we added cloning for them so the next one started fast [22:17] wallyworld_, so let me pose it differently [22:18] we could do something like allow juju to copy an image across at bootstrap [22:18] wallyworld_, if i have containers already running (via manual).. it still takes up to a minute for them to show as started [22:18] during which they go through various odd permutations in status agent-state: (started) [22:18] ok, that's a bit different [22:18] it feels like the pinger/heartbeat is a bit schizo there [22:19] that seems like a separate but legitimate issue [22:19] wallyworld_, the fix there would be lxc template hooks respecting proxy settings [22:19] re dan's issue [22:19] that would decrease the time [22:19] alternatively downloading the image while giving feedback to the user [22:19] adding feedback is on the todo list [22:20] ie.. we don't call out 'install' hook running.. we just leave things in pending [22:20] i'm not sure if fetching the template honours the proxy settings [22:20] you think it doesn't? [22:20] it's just doing a wget, but not sure the env variables propagate [22:20] from juju's invocation [22:20] ok, we can check that [22:21] there's no good short term solution though sadly [22:21] apart from the proxy thing [22:22] wallyworld_, any feedback to the user via status would be helpful to users re long running ops [22:22] agree +100, but it's not necessarily trivial, it's on the todo list for sure [22:22] ack [22:23] i hope that dan's group can bear the pain of the initial download and use cloning for speedier deployment for the thers [22:23] others [22:23] i'll comment on the bug [22:24] hazmat: seems also bug 1322281 which was just raised is the same thing [22:24] <_mup_> Bug #1322281: local provider deployment of mysql to kvm hangs in pending state [22:24] "hangs" might mean slow template download [22:25] or no http egress except via proxy perhaps [22:25] so i think if we check the proxy thing, that may be what we can do short term [22:25] i'm sorta hoping we do not currently use the proxy settings so we have something to fix [22:28] wallyworld_, it's more than proxy.. it's respecting a cache [22:28] * wallyworld_ -> breakfast bbiab [22:28] although typically the same [22:28] most of these are orange boxes [22:28] which cache? [22:28] a squid cache? [22:28] which i guess the proxy would point to [22:28] wallyworld_, yeah.. squid-deb-proxy cache.. also orange boxes have full archive mirrors [22:29] which we don't configure/use [22:29] ok, i think we can/should add in proxy support for getting template [22:29] that should be a simple fix [22:29] sounds good [22:29] thanks for the input [22:30] now i am really off to eat and caffeinate myself [23:20] hazmat: i've done some checking. the lxc-create which calls wget is invoked via golang's exec.Command(). this does pass through env vars like http_proxy. so ostensibly, if they were to set http-proxy and/or apt-http-proxy in their env config, those should be passed through and used when the template is fetched [23:21] do you know if they set http-proxy in their env? [23:23] wallyworld_, talking to kirkland atm about setting up a transparent proxy on the orangebox, so everything hits the proxy cache. 
[23:23] wallyworld_, i've heard different reports from different folks, i don't have a solid baseline for analysis [23:23] hazmat: great, so i hope/expect that will solve the bug [23:23] can you let me know how you get on? [23:23] so i can schedule juju work if needed [23:24] even without a transparent cache, setting http-proxy in env config should work
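On the exec.Command point at [23:20]: os/exec gives the child the parent's environment whenever Cmd.Env is nil, which is why http_proxy set for the juju process should reach the wget inside lxc-create. A sketch with an illustrative command line and proxy values, not juju's real invocation:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // With Env left nil, the child inherits os.Environ(), including
        // any http_proxy/https_proxy already set for this process.
        cmd := exec.Command("lxc-create", "-t", "ubuntu-cloud", "-n", "demo")

        // To be explicit instead, copy the environment and append proxy
        // settings taken from the environment config (values made up):
        cmd.Env = append(os.Environ(),
            "http_proxy=http://squid.internal:3128",
            "https_proxy=http://squid.internal:3128",
        )

        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }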