[00:44] davecheney: i've just pushed more wrench changes again. can you have another quick look? [00:44] if the scanner returns an error it's logged now [00:45] and various other things you raised have been addressed (except where I don't think they should be :-p) [00:45] menn0: yup, LGTM, thanks [00:45] davecheney: cheers [01:54] argh [01:54] thumper: [01:54] all this canWatch("") stuff is only so the tests can check the error path [01:54] in the real code they are hard coded to return true [01:54] but in the tests we set them to return false just so we can cause an error which we can then check ... [01:55] hmm... [01:55] seems suspect [01:55] if _, ok := <-watch.Changes(); ok { [01:55] result.NotifyWatcherId = p.resources.Register(watch) [01:55] what the code wants to do is cause this line to fail [01:55] but it can't, instead it causes the line above [01:55] if !canWatch("machine-0") { [01:55] return result, common.ErrPerm [01:55] to fail [01:55] so it gets its error [01:57] thumper: so i can do two things [01:57] 1. supply a dummy, but valid tag [01:58] 2. remove the logic and the test [01:58] as it can never be true in production [01:58] 1. solves today's problem by pushing it off onto someone else later on [01:58] where is the code that always returns true? [01:58] state/apiserver/provisioner:NewProvisionerAPI [01:58] for example [01:59] the tests don't use that function, they construct a provisioner by hand [02:00] all I'd want to see a test for is that the provisioner api end point can watch the environment [02:00] any tests for failures there are a waste IMO [02:01] the auth func tests the tag [02:01] ok, let me prepare a proposal [02:01] axw: morning, you have a minute? [02:01] wallyworld: heya, yep [02:02] although it does seem that there should be a check around whether or not the machine can read the secrets [02:02] davecheney: as that is dependent on the state server job [02:02] thumper: yes, that is different [02:02] ok [02:02] axw: standup hangout?
[02:15] thumper: https://github.com/juju/juju/pull/533 [02:15] i'd appreciate your thoughts [02:16] ok [02:16] this is option (2) [02:16] which is more wholesome [02:16] but more difficult to stomach [02:24] thumper: thanks for the review [02:24] np [02:24] this resolves 80% of the outstanding problems [02:24] 20% coming real soon now [02:24] then this is DONE [02:24] and we can change common.AuthFunc [02:47] thumper: https://github.com/juju/juju/pull/534 [02:58] lucky(~/src/github.com/juju/juju) % juju ssh 0 [02:58] ERROR upgrade in progress - Juju functionality is limited [02:58] grrr [02:59] menn0: http://paste.ubuntu.com/8076514/ [02:59] why did this happen, i thought it was fixed [03:01] davecheney: I believe when you do --upload-tools the initial version in the machine agent's conf file is different from version.Current [03:01] so upgrades still happen [03:01] "upgrade mode" should be pretty short lived [03:01] that sounds odd [03:01] have you got the machine agent log handy [03:01] ?
[03:02] --upload-tools derives the version of the tools from the local juju that just did --upload-tools [03:03] the logs will show the previous and next version, and also when the upgrade steps worker finished [03:03] menn0: http://paste.ubuntu.com/8076553/ [03:03] * menn0 looks [03:03] menn0: there should be no version difference [03:03] there was no previous environment [03:03] i bootstrapped it from the version of juju i just built [03:04] here's the clue: "starting upgrade from 1.21-alpha1-precise-amd64 to 1.21-alpha1.1-precise-amd64" [03:04] menn0: this is going to generate issues from the juju deployer folks if they use --upload-tools [03:05] the agent started at 02:57:34 and upgrades were done at 02:58:14 [03:05] sure [03:05] it is slow [03:05] so "upgrade mode" was 45s long [03:05] but i was able to trigger it in 3 out of 3 cases [03:06] juju bootstrap --upload-tools && juju deploy cs:mysql [03:06] will fail [03:06] if there's an actual upgrade to perform, we need to limit the API (aka "upgrade mode") [03:07] right, but why is there an upgrade [03:07] there is nothing to upgrade from [03:07] no idea [03:07] I didn't write that [03:07] but I'm happy to have a look [03:07] thanks [03:07] i just know it's going to generate more bugs from the juju installer folks [03:07] will log issue [03:08] I suspect it's to do with some functionality that keeps giving you a new patch version every time you run upgrade-juju --upload-tools [03:08] assign it to me [03:08] it would be nice if we could avoid this following bootstrap [03:11] https://bugs.launchpad.net/juju-core/+bug/1358078 [03:11] Bug #1358078: cmd/juju: juju bootstrap --upload-tools on a fresh environment triggers upgrade mode [03:11] menn0: i know CTS always script their juju deployments and they assume if a command returned 0 then it is safe to continue with the next command [03:16] davecheney: although that's ok with most commands I wonder if you can be 100% sure that the machine agent is ready to go immediately after
the bootstrap client command returns [03:17] it's certainly worse now with the restricted API [03:17] so I'll try to fix that [03:17] menn0: it's not the machine agent [03:17] it's being able to create entries in the state database [03:17] maybe that is what you were asking [03:18] yeah but the bootstrap machine agent runs the API server [03:18] did you mean machine agent == provisioner [03:18] sorry, I should have said state server, not machine agent [03:18] if you can connect to the api, the expectation was it would accept commands [03:19] agreed. [03:19] I guess because the client retries API requests it does just tend to work [03:20] the API server doesn't come up immediately when the bootstrap machine agent comes up [03:20] but the client side retries mask that from the user's perspective [03:20] anyway... let me have a look at this problem [03:20] thanks [03:33] menn0: does the above problem affect 1.20? [03:34] i would argue that we don't do upgrade steps if just the build number is different [03:34] * menn0 checks 1.20 tag [03:35] sorry, i could have checked too, thought you may have known otoh [03:35] wallyworld: i think it only affects using --upload-tools [03:36] davecheney: sure, but they use that in 1.20 :-) [03:36] wallyworld: yes it does affect 1.20 [03:36] :-( [03:36] ok, i'll assign the bug to 1.20 also [03:36] wallyworld: it only affects you if you start with an empty environment [03:36] then bootstrap --upload-tools [03:37] which is done for local provider [03:37] BALLS!
[03:37] and it's worse there because my recent work to avoid upgrade mode if it's not required isn't in 1.20 [03:37] ooops [03:37] or at least it's not in 1.20.2 [03:37] we are about to release 1.20.5 [03:38] tomorrow [03:38] this problem has been there all through 1.20 [03:38] but this new issue will need to be fixed for 1.20.6 [03:38] there was a bug raised actually [03:38] but we closed it [03:39] i think it does need some love however [03:40] menn0: i fixed the milestones on the bug - generally juju-core is assigned to a dev milestone; you mark a bug as affecting the 1.20 series and then assign to a 1.20.x milestone for work done on that series [03:41] waigani: a lot of problems? [03:42] thumper: just had to redo everything [03:42] wallyworld: ok thanks. I'm a launchpad newbie [03:42] waigani: why? [03:42] menn0: np at all, just an fyi :-) [03:42] is anyone paying attention to how many times the tests fail in CI and only pass because we retry them with less load? [03:42] davecheney: yes [03:43] wallyworld: good [03:43] there will be an email this week [03:43] wallyworld: right [03:43] thanks, glad to know you're on top of it [03:43] i dunno what you are thinking, but I'm thinking 'remove the retry' [03:43] we can use humans for this [03:43] been busy, would have liked to have initiated something sooner [03:44] davecheney: the plan is to have the people who wrote the tests be responsible for fixing them as a matter of priority; and yes the retry will go as soon as practical; the expectation will be that tests will pass first time, not the other way around [03:45] ie we should be surprised when they fail, not when they pass [03:45] QA have offered to do a report on the failing tests [03:46] will make it easier to see what fails and how often [03:46] thumper: okay it's up [03:47] wallyworld: sgtm [03:47] thumper: shall I jump back in the hangout? [03:48] wallyworld, davecheney: I've targeted bug 1350111 to 1.20 too because we need something there too.
It'll be a somewhat different patch however as the code has changed a lot since 1.20 [03:48] Bug #1350111: machine agent enters "upgrade mode" unnecessarily [03:48] waigani: yeah [03:48] menn0: thanks, will be good to get that sorted in 1.20 also [03:49] wallyworld: do you think we'll have a 1.22 stable release for U? [03:50] supporting 1.20 for the whole of U and backported to T would be unpleasant [03:50] davecheney: i am hoping so [03:50] but there are so many bugs still to fix :-( [03:52] 。・゚゚・(>д<)・゚゚・。 [03:59] davecheney: there there [04:33] wallyworld: do you have time for a quick hangout? [04:33] sure [04:34] wallyworld: https://plus.google.com/hangouts/_/canonical.com/onyx-standup ? [04:35] menn0: waiting in hangout [05:14] * thumper sends off an email and calls it a day [05:14] laters [05:18] davecheney, wallyworld: I have a fix for bug 1350111 (for trunk anyway). Will propose shortly. [05:18] Bug #1350111: machine agent enters "upgrade mode" unnecessarily [05:18] great [05:19] sorry, wrong number. I meant bug 1358078. [05:19] Bug #1358078: cmd/juju: juju bootstrap --upload-tools on a fresh environment triggers upgrade mode [05:27] menn0: sweet [05:28] davecheney, wallyworld: https://github.com/juju/juju/pull/535 [05:28] looking [05:32] wallyworld: thanks for the review [05:32] thank you for the fix [05:33] i've tested local already and am doing EC2 now [05:33] \o/ [05:33] I have to deal with kids for a bit but will be back later [05:48] wallyworld: fyi, I've gone down a bit of a rabbit hole - there's lots of badness in our maas provider code. going to try and fix it a bit while I'm there [05:48] ok [05:48] \o/ [05:48] /o\ [05:48] we don't convert juju architectures to maas ones, so our constraints are just broken [05:48] \o/ [05:49] we don't return hardware characteristics [05:49] we choose arbitrary tools after acquiring a node of an arbitrary arch [05:49] :| [05:52] and nobody noticed? Is any of this tested?
:) [05:53] bigjools: there's a warning and a bug, I'm surprised nobody is kicking up more of a stink. I think we're just lucky because we default to amd64, which is what most people would be using I assume. [05:53] yeah agreed [05:57] morning all [05:59] morning dimitern [06:09] bigjools: it hasn't been fully tested because we have been begging for maas hardware for 9 months [06:10] I offered use of our CI lab as well [06:11] wallyworld: bigjools: actually turns out we were just not passing the subarch, and we happen to have the same arch identifiers. false alarm on that specific bit... [06:12] I was gonna say, that would have been spectacular [06:12] phew [06:12] not surprised about subarch [06:12] you don't *need* it [06:13] axw: how complete is the bit of code to pass the tools to target nodes via cloud init? [06:13] wallyworld: 100% functional, just fixing bits around the edge and will have a bunch of tests to update. [06:14] wallyworld: I have tested local and ec2, I haven't tested manual yet but that shouldn't be an issue [06:14] wallyworld: sorry, I guess not quite 100% because we still need to update metadata... [06:14] morning all [06:14] axw: i'm thinking of cherry picking just that bit for use in 1.20, because the tools url hacking i need for local to use a mounted dir is messy, plus it won't work for kvm [06:15] o/ [06:15] wallyworld: it's not something that can easily be cherrypicked. provider/maas needs updates to return the arch name, for one thing [06:15] morning voidspac_ [06:15] axw: this would just be for local provider [06:16] to eliminate the need to get tools from http server [06:16] wallyworld: it's much more intrusive than that... 
I've had to move all the "EnsureTools" stuff *outside* the providers [06:16] so it's all or none [06:16] rightio [06:27] menn0: I'm not sure that change is right [06:27] menn0: we always bootstrap the same version as the CLI [06:27] then upgrade to the desired version [06:39] axw: is this the --upload-tools bug? [06:40] davecheney: https://github.com/juju/juju/pull/535 [06:40] ok [06:40] so yes, looks like it [06:41] but when I do --upload-tools, why is there a disagreement between the tools I uploaded, and the tools that the env thinks it is running [06:41] dunno, that sounds broken [06:41] davecheney: ah, probably because upload increments the build number on the tools [06:41] axw: i think we have some logic that fudges the tools version uploaded to not match any existing tools [06:42] ie, the .1 that gets stuffed in there [06:42] yeh [06:42] yep [06:42] that's the bug [06:43] gtg get my daughter from school, bbl [07:01] morning dimitern [07:01] morning jam! welcome back :) [07:01] thanks [07:05] dimitern: you just hung, but it might be me, I'll try reconnecting [07:06] jam, i've reconnected as well [07:07] jam, you seem hung in the g+ === uru_ is now known as urulama [07:12] jam1, i've rejoined several times, each time it says you're in the room, but then once I'm in it says waiting for people to join [07:12] jam1, wanna try juju-sapphire g+ instead? [07:12] dimitern: my internet just died for 10 sec [07:13] but I should be able to connect now [07:35] axw: ping? [07:35] menn0: pong [07:36] axw: so this proposed fix...
I think it's ok but you have concerns [07:36] as far as I can tell, when --upload-tools is used [07:36] the wrong version gets written to agent.conf [07:37] so that when the machine agent comes up it thinks it needs to run upgrade steps [07:37] menn0: ignore --upload-tools for a moment [07:37] ok [07:37] menn0: when we bootstrap, we look for the most recent tools that match major.minor of the CLI's tools [07:37] menn0: we bootstrap with the exact same tools the CLI is running, but set agent-version=most recent [07:38] the effect of this is that the machine agent comes up and immediately upgrades to agent-version [07:38] that's what we *want* to happen [07:38] and it does happen like that now [07:39] menn0: IIANM, your change makes it so that the machine agent thinks it's running agent-version already, and so it doesn't run the upgrade steps [07:39] ok [07:39] it'll still replace the binary, it just won't run the upgrade steps [07:39] menn0: the issue with --upload-tools is that it increments the Build number in the tools [07:40] yep, I understand the --upload-tools case [07:40] so what's deployed is never the same as the CLI [07:40] I didn't know about the standard bootstrap case [07:40] I will have to rethink then [07:40] what we want is: [07:40] - normal bootstraps to work as you just described [07:41] - juju upgrade-juju --upload-tools which increments just the build number to still trigger upgrade steps (for developers) [07:41] - juju bootstrap --upload-tools to NOT trigger the upgrade steps [07:42] yup [07:42] menn0: I think it's as simple as not incrementing the build number on bootstrap [07:42] I don't see why we'd ever want to do that [07:42] yep [07:42] that makes sense [07:42] * menn0 looks at code [07:43] * axw should really document bootstrap some time [07:46] axw: I think I know why we still increment the version on bootstrap when --upload-tools is passed [07:46] if the tools storage is shared between envs [07:46] axw: is it?
[07:47] menn0: I don't think you can do that without running into problems [07:47] it's going to change real soon now anyway [07:47] ok then it should be fine [07:47] we're getting rid of provider storage [07:50] axw: I've been hunting through revision history [07:51] do you think incrementing the build number is done to ensure that the uploaded tools are used in preference to any tools in the public streams? [07:52] axw: I'm wondering about another way to skin this cat: [07:52] menn0: ugh, that may be a problem, yeah. [07:52] menn0: TBH, it might be worth waiting till I'm done with my changes... this may be fixed incidentally [07:52] if bootstrap is given --upload-tools, we get that version into the agent.conf, if not use version.Current [07:53] except that won't fix 1.20 [07:53] menn0: can do, but sounds like it might be messy [07:53] how will you convey that information? [07:54] via a field in MachineConfig perhaps? [07:56] axw: so is the Tools field in cloudinit.MachineConfig definitely the target agent version, not the initial one? [07:56] menn0: until my changes go in, each provider's Bootstrap creates its own MachineConfig [07:57] hmm [07:57] menn0: it is the bootstrap tools, but hmmmm [07:57] menn0: I think I may have misunderstood the change [07:57] I'll take another look... [07:57] axw: np [07:58] menn0: sorry, your original change was right :) [07:58] phew! [07:58] it'll bootstrap with those tools [07:58] then it'll see agent-version is different and upgrade [07:58] sorry about that [07:59] no problems [07:59] I'd like to test the non-uploading case but that's kinda hard without having these changes in an official release [08:00] is there another way? (custom streams or something) [08:02] axw: ^^ [08:02] urhm [08:03] you could use sync-tools to generate the metadata [08:03] menn0: simplest way would probably be to tweak version.go, sync-tools, then revert version.go [08:03] axw: cool. I will look in to that tomorrow.
[08:03] I also need to read simplestreams-metadata.txt [08:04] thanks for your help and for being concerned about my change [08:04] no worries :) [08:04] I'd rather have that than bad code getting in [08:04] I'm EOD [08:07] morning [08:27] morning TheMue, voidspac_ [08:28] dimitern: I pushed my change to my repo on Friday, only the tests are missing. but I needed an additional API call to see if a machine is manually provisioned. looks good so far. [08:29] TheMue, sweet! I'm looking forward to seeing it [08:29] dimitern: the current changes are at https://github.com/TheMue/juju/compare/capability-detection-for-networker [08:30] TheMue: dimitern: morning [08:33] voidspac_: hello [08:34] TheMue, looking [08:36] TheMue, looks nice, although for the IsManual API call, I'd implement it a bit differently [08:37] TheMue, like getting the IsManual flag as part of getting the machine's live value [08:37] dimitern: it supports bulk calls [08:37] TheMue, i.e. caching it, so you can return it directly without an extra call [08:37] TheMue, yeah, LifeGetter also supports bulk calls [08:38] dimitern: will take a look [08:39] TheMue, cheers [09:00] TheMue: I'm just finishing up lunch, I'll be a little bit late. [09:00] jam1: ok, just ping [09:00] fwereade_, ping? [09:22] TheMue: I'm in the hangout [10:18] given a package name, how do I tell what files / binaries it provides? [10:18] voidspac_, you can run a godoc server for that package locally [10:19] dimitern: by package I mean ubuntu package, sorry [10:19] voidspac_, ah :) [10:19] dimitern: go doesn't have packages, does it? [10:19] I mean, it doesn't use that term [10:20] it has "dependencies", which can be anything pretty much [10:20] voidspac_, they are packages [10:20] :) [10:20] well, they're not [10:20] voidspac_, for debs: $ apt-cache showsrc juju-core [10:20] they're a hodge-podge of files [10:20] voidspac_, :) [10:20] dimitern: thanks [10:21] dimitern: hmmm...
not quite it, I want to know what files it will put where [10:21] dimitern: is there a deterministic way of knowing that? [10:21] I guess not, as the install scripts execute code [10:23] voidspac_, well, these 3 files are the only ones in the deb archive actually [10:23] dimitern: the tarballs? [10:24] voidspac_, if you want to see the source itself, try apt-get source [10:24] dimitern: so download the package and inspect it [10:24] fair enough [10:24] voidspac_, yeah, and take a look at the debian tarball for hooks I guess [10:25] voidspac_, and rules (which is as readable as any generated Makefile :) [10:27] dimitern: thanks :-) [10:36] fwereade_: morning, you up for a chat about health checks sometime? [10:36] wallyworld, heyhey [10:38] let me know when you have time and we can do a hangout [10:38] wallyworld, what time is it for you? [10:39] 20:30 [10:41] wallyworld, hmm, and how early do you like to get up? [10:41] i'm up around 6 but need to take the kid to school, back around 7 [10:47] jam1, standup? [12:25] * fwereade_ restarting then hopefully with wallyworld [13:49] perrito666: ping [13:49] Command failed: mongodump --dbpath /var/lib/juju/db [13:49] Error: bash: line 9: mongodump: command not found [13:50] perrito666: do you recognise that? No mongodump on my state server.
[14:04] voidspac_: /var/lib/juju/mongodump is installed as part of the tools in a local juju environment [14:04] ericsnow: this isn't local this is an openstack environment [14:04] ericsnow: and that error message was the output from "juju backup", that location is where the backup script was looking === lazyPower_ is now known as lazyPower [14:05] ericsnow: but thanks :-) [14:05] voidspac_: (yeah, drop the "local" part) [14:05] voidspac_: weird [14:06] ericsnow: I just destroyed the environment and will try again [14:06] ericsnow: I think I started the backup too early [14:06] voidspac_: FYI horacio is out today [14:06] ericsnow: hmmm, installed as part of the tools? [14:06] voidspac_: ah, that makes sense [14:07] voidspac_: right [14:07] I wonder if that works when you do --upload-tools [14:07] ericsnow: I don't need upload-tools anyway, as restore is run on the client [14:07] although that might be weird... [14:07] we'll see [14:08] WARNING no prepackaged tools available [14:08] uploading tools [14:08] so I get no choice in the matter :-) [14:09] voidspac_: while restore runs on the client, most of the work is happening on the remote host [14:09] ericsnow: yeah, but it's mostly batch scripts uploaded from the client :-) [14:10] ericsnow: the part I'm *changing* runs on the client [14:10] so that's the bit I want to check works [14:10] voidspac_: disclaimer--most of what I know of restore is from working on backup :) [14:10] hah [14:10] ericsnow: I'm working on hacking a fix into the old plugin anyway [14:10] ericsnow: not touching the shiny new stuff you did :-) [14:10] voidspac_: right [14:10] voidspac_: still polishing :) [14:11] :-) === stokachu_ is now known as stokachu === mup_ is now known as mup [14:19] ericsnow: so when I bootstrap an openstack environment from dev I don't see a mongodump in the uploaded tools [14:20] I might have to blow away dev and try with 1.18/1.20 [14:20] it might be a curiosity of upload-tools [14:20] voidspac_: not sure then
[14:21] voidspac_: I never got around to figuring out for myself where mongodump, etc. came from [14:21] voidspac_: it should be in the same dir as mongod [14:22] ah right [14:22] mongod isn't in the uploaded tools directory either, must be elsewhere [14:22] I'm trying with 1.18 now [14:22] voidspac_: yeah, not the uploaded tools dir [14:23] voidspac_: pretty sure /var/lib/juju holds just the juju-built mongo binaries [14:23] ericsnow: ok, will check when this environment has bootstrapped [14:24] thanks [14:24] (again) [14:24] :-) [14:24] :) [14:24] voidspac_: yeah! I was helpful to someone :) [14:31] ericsnow: /usr/lib/juju/bin [14:32] not /var/lib/juju [14:32] voidspac_: ah [14:41] ericsnow: ok, interesting [14:41] ericsnow: so now with a freshly bootstrapped machine from dev [14:41] I *do* have mongodump in /usr/lib/juju/bin [14:41] but it's not in path [14:41] voidspac_: right [14:41] I'm giving it a chance for bootstrap to complete before I try mongodump [14:41] looks like just a path issue then [14:42] voidspac_: if I recall correctly it tries /usr/lib/juju/bin explicitly first and then falls back to $PATH [14:43] ericsnow: backup and restore CI test is passing, so it's not broken - it just may be hard / impossible to manually test without recreating some of the CI infrastructure [14:43] (they avoid using --upload-tools in tests for other reasons) [14:43] voidspac_: makes sense [14:44] brb [15:35] right [15:35] goodnight === hatch__ is now known as hatch [16:27] i need some help with facades: i see a FacadeCall("ContainerConfig",...), but i can't figure out where "ContainerConfig" is registered as a Facade... [16:31] katco: registration happens with a call to state/apiserver/common.RegisterStandardFacade() (or similar) [16:31] katco: e.g. state/apiserver/client/client.go [16:31] ericsnow: yeah, i gathered that, but a quick grep shows no registrations for "ContainerConfig"... indeed the only _reference_ i can find to that facade is the call...
[16:31] katco: I don't see one under apiserver for ContainerConfig [16:31] katco: is it a facade or something on a facade? [16:32] katco: perhaps it's dead code? [16:33] ericsnow: perhaps... i'm trying to wind my way through a failing test [16:33] ericsnow: it's in api/provisioner/provision.go::CreateConfig() [16:37] katco: looks like FacadeCall is just a wrapper around a facade for calling a method on that facade [16:37] katco: so ContainerConfig is a method on some facade rather than a facade itself [16:38] ericsnow: ah ok... so i'm trying to trace down the facade that's passed in; it's probably a method on there? [16:38] katco: yep [16:38] ericsnow: ok we'll see where this yarn ends then :) [16:38] ericsnow: thank you [16:39] katco: apiserver/provisioner/provisioner.go defines a ContainerConfig method [16:39] ericsnow: yeah i had been looking at that, but it looks correct. i'm probably missing something eyeballing it [17:23] ericsnow: arrrrrrg, it was a test mocking out that call. [17:24] katco: :( [18:01] katco, ericsnow: have another version increment for stable because we are releases 1.20.5 today. https://github.com/juju/juju/pull/537 [18:02] sinzui: thanks, sinzui. very exciting :) lgtm too [18:03] oh, for anyone looking at recent history of CI, Hp and AWS had soem bad hours. The QA team cleanup some machines and restarted the tests. master was *never* broken === urulama is now known as urulama-afk === viperZ28_ is now known as viperZ28 === StoneTable is now known as aisrael === tvansteenburgh1 is now known as tvansteenburgh [21:01] mramm: morning, we hanging out this morning? [21:06] yep [21:06] sorry I am late [21:08] thumper: you still around? [21:08] mramm: aye [21:08] * thumper jumps into hangout === jcsackett_ is now known as jcsackett [21:35] wallyworld_: no worries. 
emacs just crashed when you sent that >.< [22:00] thumper, on and ready when you are [22:00] alexisb: ack, coming [22:00] no rush [22:00] I just have a hard stop at the top of the hour === perrito6` is now known as perrito666 [23:35] thumper, davecheney, waigani_ : regarding that bootstrap version behaviour that you didn't like the sound of [23:35] looking at the code [23:35] it seems like it might be ok [23:35] good [23:35] it's asking the cloud instance itself for the tools metadata it has [23:35] right... [23:35] and therefore it's only checking through tools that are available /on that cloud/ [23:35] yeah... [23:36] but does it make sense for it to automatically upgrade? Just wondering [23:36] and if so, to what version? [23:36] so it's not going to ask the new environment to upgrade to tools that it doesn't have [23:36] if I bootstrap 1.18 [23:36] what happens? [23:36] assuming I have a 1.18.0 client [23:36] from what I have gathered from the code [23:36] it will bootstrap 1.18.0 [23:37] but set the env agent-version to the latest 1.18.X available on the cloud [23:37] so when the bootstrap machine agent comes up it will immediately upgrade to 1.18.X [23:37] if there is no 1.18 available on the cloud [23:38] it'll upload the local tools and the env will be running 1.18.0 (no upgrade) [23:38] does that sound reasonable? [23:38] if not, talk to Andrew [23:38] (I'm just the messenger!) [23:38] hmm... [23:38] but the initial version is the same as the client? [23:39] yes, but only for a short time [23:39] waigani: if you want to be on call reviewer for today, that would be good, we only have brits [23:39] the upgrade (i.e. restart into the new version) happens almost immediately [23:39] and it is only the patch number that is allowed to change? [23:39] so major.minor have to be the same?
[23:39] yes [23:39] davecheney: okay [23:39] it filters on major.minor [23:40] I suppose that is ok [23:40] my guess for the reasoning is to ensure that the env is running with the most bugs fixed [23:42] thumper, davecheney: one problem that just comes to mind is that it has the potential to break the "juju bootstrap && juju deploy foo" use case. [23:43] if the bootstrap machine agent comes up and then restarts soon after [23:43] and then goes in to "upgrade mode" soon after [23:46] well... hopefully the user has an up to date client :)