menn0 | davecheney: i've just pushed more wrench changes again. can you have another quick look? | 00:44 |
---|---|---|
menn0 | if the scanner returns an error it's logged now | 00:44 |
menn0 | and various other things you raised have been addressed (except where I don't think they should be :-p) | 00:45 |
davecheney | menn0: yup, LGTM, thanks | 00:45 |
menn0 | davecheney: cheers | 00:45 |
davecheney | argh | 01:54 |
davecheney | thumper: | 01:54 |
davecheney | all this canWatch("") stuff is only so the tests can check the error path | 01:54 |
davecheney | in the real code they are hard coded to return true | 01:54 |
davecheney | but in the tests we set them to return false just so we can cause an error which we can then check ... | 01:54 |
thumper | hmm... | 01:55 |
thumper | seems suspect | 01:55 |
davecheney | if _, ok := <-watch.Changes(); ok { | 01:55 |
davecheney | result.NotifyWatcherId = p.resources.Register(watch) | 01:55 |
davecheney | what the code wants to do is cause this line to fail | 01:55 |
davecheney | but it can't, instead it causes the line above | 01:55 |
davecheney | if !canWatch("machine-0") { | 01:55 |
davecheney | return result, common.ErrPerm | 01:55 |
davecheney | to fail | 01:55 |
davecheney | so it gets its error | 01:55 |
davecheney | thumper: so i can do two things | 01:57 |
davecheney | 1. supply a dummy, but valid tag | 01:57 |
davecheney | 2. remove the logic and the test | 01:58 |
davecheney | as it can never be true in production | 01:58 |
davecheney | 1. solves today's problem by pushing it off onto someone else later on | 01:58 |
thumper | where is the code that always returns true? | 01:58 |
davecheney | state/apiserver/provisioner:NewProvsionerApi | 01:58 |
davecheney | for example | 01:58 |
davecheney | the tests don't use that function, they construct a provisioner by hand | 01:59 |
thumper | all I'd want to see a test for is that the provisioner api end point can watch the environment | 02:00 |
thumper | any tests for failures there are a waste IMO | 02:00 |
thumper | the auth func tests the tag | 02:01 |
davecheney | ok, let me prepare a proposal | 02:01 |
wallyworld | axw: morning, you have a minute? | 02:01 |
axw | wallyworld: heya, yep | 02:01 |
thumper | although it does seem that there should be a check around whether or not the machine can read the secrets | 02:02 |
thumper | davecheney: as that is dependent on the state server job | 02:02 |
davecheney | thumper: yes, that is different | 02:02 |
thumper | ok | 02:02 |
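
The snippet davecheney pastes above is the pattern in question: an auth func that is hard-coded to allow everything in production, swapped for one returning false in tests purely to reach the ErrPerm branch. Below is a minimal, self-contained sketch of that shape; the names (AuthFunc, ProvisionerAPI, Watch) are stand-ins for illustration, not the actual juju source.

```go
package main

import "errors"

// ErrPerm stands in for common.ErrPerm in the real code.
var ErrPerm = errors.New("permission denied")

// AuthFunc reports whether the caller may act on the given tag.
type AuthFunc func(tag string) bool

type ProvisionerAPI struct {
	canWatch AuthFunc
	// resources, watchers etc. omitted
}

// NewProvisionerAPI mirrors the production constructor being discussed:
// the auth func always returns true, so the ErrPerm branch below can
// never be taken outside of tests.
func NewProvisionerAPI() *ProvisionerAPI {
	return &ProvisionerAPI{canWatch: func(string) bool { return true }}
}

func (p *ProvisionerAPI) Watch() error {
	if !p.canWatch("machine-0") {
		// Reachable only when a test injects canWatch = func(string) bool { return false }.
		return ErrPerm
	}
	// ... register the watcher and return its id ...
	return nil
}

func main() {
	_ = NewProvisionerAPI().Watch()
}
```
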
wallyworld | axw: standup hangout? | 02:02 |
davecheney | thumper: https://github.com/juju/juju/pull/533 | 02:15 |
davecheney | i'd appreciate your thoughts | 02:15 |
thumper | ok | 02:16 |
davecheney | this is option (2) | 02:16 |
davecheney | which is more wholesome | 02:16 |
davecheney | but more difficult to stomach | 02:16 |
davecheney | thumper: thanks for the review | 02:24 |
thumper | np | 02:24 |
davecheney | this resolves 80% of the outstanding problems | 02:24 |
davecheney | 20% coming real soon now | 02:24 |
davecheney | then this is DONE | 02:24 |
davecheney | and we can change common.AuthFunc | 02:24 |
davecheney | thumper: https://github.com/juju/juju/pull/534 | 02:47 |
davecheney | lucky(~/src/github.com/juju/juju) % juju ssh 0 | 02:58 |
davecheney | ERROR upgrade in progress - Juju functionality is limited | 02:58 |
davecheney | grrr | 02:58 |
davecheney | menn0: http://paste.ubuntu.com/8076514/ | 02:59 |
davecheney | why did this happen, i thought it was fixed | 02:59 |
menn0 | davecheney: I believe when you do --upload-tools the initial version in the machine agent's conf file is different from version.Current | 03:01 |
menn0 | so upgrades still happen | 03:01 |
menn0 | "upgrade mode" should be pretty short lived | 03:01 |
davecheney | that sounds odd | 03:01 |
menn0 | have you got the machine agent log handy | 03:01 |
menn0 | ? | 03:01 |
davecheney | --upload-tools derives the version of the tools from the local juju that just did --upload-tools | 03:02 |
menn0 | the logs will show the previous and next version, and also when the upgrade steps worker finished | 03:03 |
davecheney | menn0: http://paste.ubuntu.com/8076553/ | 03:03 |
* menn0 looks | 03:03 | |
davecheney | menn0: there should be no version difference | 03:03 |
davecheney | there was no previous environment | 03:03 |
davecheney | i bootstrapped it from the version of juju i just built | 03:03 |
menn0 | here's the clue: "starting upgrade from 1.21-alpha1-precise-amd64 to 1.21-alpha1.1-precise-amd64" | 03:04 |
davecheney | menn0: this is going to generate issues from the juju deployer folks if they use --upload-tools | 03:04 |
menn0 | the agent started at 02:57:34 and upgrades were done at 02:58:14 | 03:05 |
davecheney | sure | 03:05 |
davecheney | it is slow | 03:05 |
menn0 | so "upgrade mode" was 45s long | 03:05 |
davecheney | but i was able to trigger it in 3 out of 3 cases | 03:05 |
davecheney | juju bootstrap --upload-tools && juju deploy cs:mysql | 03:06 |
davecheney | will fail | 03:06 |
menn0 | if there's an actual upgrade to perform, we need to limit the API (aka "upgrade mode") | 03:06 |
davecheney | right, but why is there an upgrade | 03:07 |
davecheney | there is nothing to upgrade from | 03:07 |
menn0 | no idea | 03:07 |
menn0 | I didn't write that | 03:07 |
menn0 | but I'm happy to have a look | 03:07 |
davecheney | thanks | 03:07 |
davecheney | i just know it's going to generate more bugs from the juju installer folks | 03:07 |
davecheney | will log issue | 03:07 |
menn0 | I suspect it's to do with some functionality that keeps giving you new patch version every time you run upgrade-juju --upload-tools | 03:08 |
menn0 | assign it to me | 03:08 |
menn0 | it would be nice if we could avoid this following bootstrap | 03:08 |
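
For context, the paste above shows an "upgrade" from 1.21-alpha1 to 1.21-alpha1.1: the two versions differ only in the build number that --upload-tools appends. Here is a small sketch of the kind of guard being discussed, using a hand-rolled Number type invented for illustration rather than juju's real version package.

```go
package main

import "fmt"

// Number is a stand-in for a tools version; the fields are assumptions
// for illustration only.
type Number struct {
	Major, Minor, Patch, Build int
	Tag                        string // e.g. "alpha1"
}

// onlyBuildDiffers reports whether two versions are identical apart from
// the build component, which is what --upload-tools bumps.
func onlyBuildDiffers(a, b Number) bool {
	a.Build, b.Build = 0, 0
	return a == b
}

func main() {
	agentConf := Number{Major: 1, Minor: 21, Tag: "alpha1"}          // written at bootstrap
	uploaded := Number{Major: 1, Minor: 21, Tag: "alpha1", Build: 1} // the uploaded tools
	fmt.Println("build-only difference:", onlyBuildDiffers(agentConf, uploaded)) // true
}
```
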
davecheney | https://bugs.launchpad.net/juju-core/+bug/1358078 | 03:11 |
mup | Bug #1358078: cmd/juju: juju bootstrap --upload-tools on a fresh environment triggers upgrade mode <juju-core:New> <https://launchpad.net/bugs/1358078> | 03:11 |
davecheney | menn0: i know CTS always script their juju deployments and they assume if a command returned 0 then it is safe to continue with the next command | 03:11 |
menn0 | davecheney: although that's ok with most commands, I wonder if you can be 100% sure that the machine agent is ready to go immediately after the bootstrap client command returns | 03:16 |
menn0 | it's certainly worse now with the restricted API | 03:17 |
menn0 | so I'll try to fix that | 03:17 |
davecheney | menn0: its not the machine agent | 03:17 |
davecheney | it's being able to create entries in the state database | 03:17 |
davecheney | maybe that is what you were asking | 03:17 |
menn0 | yeah but the bootstrap machine agent runs the API server | 03:18 |
davecheney | did you mean machine agent == provisioner | 03:18 |
menn0 | sorry, I should have said state server, not machine agent | 03:18 |
davecheney | iff you can connect to the api, the expectation was it would accept commands | 03:18 |
menn0 | agreed. | 03:19 |
menn0 | I guess because the client retries API requests it does just tend to work | 03:19 |
menn0 | the API server doesn't come up immediately when the bootstrap machine agent comes up | 03:20 |
menn0 | but the client side retries mask that from the user's perspective | 03:20 |
menn0 | anyway... let me have a look at this problem | 03:20 |
davecheney | thanks | 03:20 |
wallyworld | menn0: does the above problem affect 1.20? | 03:33 |
wallyworld | i would argue that we don't do upgrade steps if just the build number is different | 03:34 |
* menn0 checks 1.20 tag | 03:34 | |
wallyworld | sorry, i could have checked too, thought you may have known ottofh | 03:35 |
wallyworld | ottoyh even | 03:35 |
davecheney | wallyworld: i think it only affects using --upload-tools | 03:35 |
wallyworld | davecheney: sure, but they use that in 1.20 :-) | 03:36 |
menn0 | wallyworld: yes it does affect 1.20 | 03:36 |
wallyworld | :-( | 03:36 |
wallyworld | ok, i'll assign the bug to 1.20 also | 03:36 |
davecheney | wallyworld: it only affects you if you start with an empty environment | 03:36 |
davecheney | then bootstrap --upload-tools | 03:36 |
wallyworld | which is done for local provider | 03:37 |
davecheney | BALLS! | 03:37 |
menn0 | and it's worse there because my recent work to avoid upgrade mode if it's not required isn't in 1.20 | 03:37 |
wallyworld | ooops | 03:37 |
menn0 | or at least it's not in 1.20.2 | 03:37 |
wallyworld | we are about to release 1.20.5 | 03:37 |
wallyworld | tomorrow | 03:38 |
menn0 | this problem has been there all through 1.20 | 03:38 |
wallyworld | but this new issue will need to be fixed for 1.20.6 | 03:38 |
wallyworld | there was a bug raised actually | 03:38 |
wallyworld | but we closed it | 03:38 |
wallyworld | i think it does need some love however | 03:39 |
wallyworld | menn0: i fixed the milestones on the bug - generally juju-core is assigned to a dev milestone; you mark a bug as affecting 1.20 series and then assign to a 1.20.x milestone for work done on that series | 03:40 |
thumper | waigani: a lot of problems? | 03:41 |
waigani | thumper: just had to redo everything | 03:42 |
menn0 | wallyworld: ok thanks. I'm a launchpad newbie | 03:42 |
thumper | waigani: why? | 03:42 |
wallyworld | menn0: np at all, just an fyi :-) | 03:42 |
davecheney | is anyone paying attention to how many times the tests fail in CI and only pass because we retry them with less load ? | 03:42 |
wallyworld | davecheney: yes | 03:42 |
davecheney | wallyworld: good | 03:43 |
wallyworld | there will be an email this week | 03:43 |
davecheney | wallyworld: right | 03:43 |
davecheney | thanks, glad to know you're on top of it | 03:43 |
davecheney | i dunno what you are thinking, but I'm thinking 'remove the retry' | 03:43 |
davecheney | we can use humans for this | 03:43 |
wallyworld | been busy, would have liked to have initiated something sooner | 03:43 |
wallyworld | davecheney: the plan is to have people who wrote the tests be responsible for fixing them as a matter of priority; and yes the retry will go as soon as practical; the expectation will be that tests will pass first time, not the other way around | 03:44 |
wallyworld | ie we should be surprised when they fail, not when they pass | 03:45 |
wallyworld | QA have offered to do a report on the failing tests | 03:45 |
wallyworld | will make it easier to see what fails and how often | 03:46 |
waigani | thumper: okay it's up | 03:46 |
davecheney | wallyworld: sgtm | 03:47 |
waigani | thumper: shall I jump back in hanyout? | 03:47 |
menn0 | wallyworld, davecheney: I've targeted bug 1350111 to 1.20 too because we need something there too. It'll be a somewhat different patch however as the code has changed a lot since 1.20 | 03:48 |
mup | Bug #1350111: machine agent enters "upgrade mode" unnecessarily <juju-core:Fix Committed by menno.smits> <juju-core 1.20:In Progress by menno.smits> <https://launchpad.net/bugs/1350111> | 03:48 |
thumper | waigani: yeah | 03:48 |
wallyworld | menn0: thanks, will be good to get that sorted in 1.20 also | 03:48 |
davecheney | wallyworld: did you think we'll have a 1.22 stable release for U ? | 03:49 |
davecheney | supporting 1.20 for the whole of U and backported to T would be unpleasant | 03:50 |
wallyworld | davecheney: i am hoping so | 03:50 |
wallyworld | but there are so many bugs still to fix :-( | 03:50 |
davecheney | 。・゚゚・(>д<)・゚゚・。 | 03:52 |
menn0 | davecheney: there there | 03:59 |
menn0 | wallyworld: do you have time for a quick hangout? | 04:33 |
wallyworld | sure | 04:33 |
menn0 | wallyworld: https://plus.google.com/hangouts/_/canonical.com/onyx-standup ? | 04:34 |
wallyworld | menn0: waiting in hangout | 04:35 |
* thumper sends off an email and calls it a day | 05:14 | |
thumper | laters | 05:14 |
menn0 | davecheney, wallyworld: I have a fix for bug 1350111 (for trunk anyway). Will propose shortly. | 05:18 |
mup | Bug #1350111: machine agent enters "upgrade mode" unnecessarily <juju-core:Fix Committed by menno.smits> <juju-core 1.20:In Progress by menno.smits> <https://launchpad.net/bugs/1350111> | 05:18 |
wallyworld | great | 05:18 |
menn0 | sorry, wrong number. I meant bug 1358078. | 05:19 |
mup | Bug #1358078: cmd/juju: juju bootstrap --upload-tools on a fresh environment triggers upgrade mode <juju-core:In Progress by menno.smits> <juju-core 1.20:Triaged> <https://launchpad.net/bugs/1358078> | 05:19 |
davecheney | menn0: sweet | 05:27 |
menn0 | davecheney, wallyworld: https://github.com/juju/juju/pull/535 | 05:28 |
wallyworld | looking | 05:28 |
menn0 | wallyworld: thanks for the review | 05:32 |
wallyworld | thank you for the fix | 05:32 |
menn0 | i've tested local already and am doing EC2 now | 05:33 |
wallyworld | \o/ | 05:33 |
menn0 | I have to deal with kids for a bit but will be back later | 05:33 |
axw | wallyworld: fyi, I've gone down a bit of a rabbit hole - there's lots of badness in our maas provider code. going to try and fix it a bit while I'm there | 05:48 |
wallyworld | ok | 05:48 |
bigjools | \o/ | 05:48 |
wallyworld | /o\ | 05:48 |
axw | we don't convert juju architectures to maas ones, so our constraints are just broken | 05:48 |
wallyworld | \o/ | 05:48 |
axw | we don't return hardware characteristics | 05:49 |
axw | we choose arbitrary tools after acquiring a node of an arbitrary arch | 05:49 |
axw | :| | 05:49 |
bigjools | and nobody noticed? Is any of this tested? :) | 05:52 |
axw | bigjools: there's a warning and a bug, I'm surprised nobody is kicking up more of a stink. I think we're just lucky because we default to amd64, which is what most people would be using I assume. | 05:53 |
bigjools | yeah agreed | 05:53 |
dimitern | morning all | 05:57 |
axw | morning dimitern | 05:59 |
wallyworld | bigjools: it hasn't been fully tested because we have been begging for maas hardware for 9 months | 06:09 |
bigjools | I offered use of our CI lab as well | 06:10 |
axw | wallyworld: bigjools: actually turns out we were just not passing the subarch, and we happen to have the same arch identifiers. false alarm on that specific bit... | 06:11 |
bigjools | I was gonna say, that would have been spectacular | 06:12 |
wallyworld | phew | 06:12 |
bigjools | not surprised about subarch | 06:12 |
bigjools | you don't *need* it | 06:12 |
wallyworld | axw: how complete is the bit of code to pass the tools to target nodes via cloud init? | 06:13 |
axw | wallyworld: 100% functional, just fixing bits around the edge and will have a bunch of tests to update. | 06:13 |
axw | wallyworld: I have tested local and ec2, I haven't tested manual yet but that shouldn't be an issue | 06:14 |
axw | wallyworld: sorry, I guess not quite 100% because we still need to update metadata... | 06:14 |
voidspac_ | morning all | 06:14 |
wallyworld | axw: i'm thinking of cherry picking just that bit for use in 1.20, because the tools url hacking i need for local to use a mounted dir is messy, plus it won't work for kvm | 06:14 |
wallyworld | o/ | 06:15 |
axw | wallyworld: it's not something that can easily be cherrypicked. provider/maas needs updates to return the arch name, for one thing | 06:15 |
axw | morning voidspac_ | 06:15 |
wallyworld | axw: this would just be for local provider | 06:15 |
wallyworld | to eliminate the need to get tools from http server | 06:16 |
axw | wallyworld: it's much more intrusive than that... I've had to move all the "EnsureTools" stuff *outside* the providers | 06:16 |
axw | so it's all or none | 06:16 |
wallyworld | rightio | 06:16 |
axw | menn0: I'm not sure that change is right | 06:27 |
axw | menn0: we always bootstrap the same version as the CLI | 06:27 |
axw | then upgrade toe the desired version | 06:27 |
axw | to* | 06:27 |
davecheney | axw: is this the --upload-tools bug ? | 06:39 |
axw | davecheney: https://github.com/juju/juju/pull/535 | 06:40 |
davecheney | ok | 06:40 |
axw | so yes, looks like it | 06:40 |
davecheney | but when I do --upload-tools, why is there a disagreement between the tools I uploaded and the tools that the env thinks it is running | 06:41 |
axw | dunno, that sounds broken | 06:41 |
axw | davecheney: ah, probably because upload increments the build number on the tools | 06:41 |
davecheney | axw: i think we have some logic that fudges the tools version uploaded to not match any existing tools | 06:41 |
davecheney | ie, the .1 that gets stuffed in there | 06:42 |
axw | yeh | 06:42 |
axw | p | 06:42 |
davecheney | that's the bug | 06:42 |
axw | gtg get my daughter from school, bbl | 06:43 |
jam | morning dimitern | 07:01 |
dimitern | morning jam! welcome back :) | 07:01 |
jam | thanks | 07:01 |
jam | dimitern: you just hung, but it might be me, I'll try reconnecting | 07:05 |
dimitern | jam, i've reconnected as well | 07:06 |
dimitern | jam, you seem hung in the g+ | 07:07 |
=== uru_ is now known as urulama | ||
dimitern | jam1, i've rejoined several times, each time it says you're in the room, but then once I'm in it says waiting for people to join | 07:12 |
dimitern | jam1, wanna try juju-sapphire g+ instead? | 07:12 |
jam1 | dimitern: my internet just died for 10 sec | 07:12 |
jam1 | but I should be able to connect now | 07:13 |
menn0 | axw: ping? | 07:35 |
axw | menn0: pong | 07:35 |
menn0 | axw: so this proposed fix... I think it's ok but you have concerns | 07:36 |
menn0 | as far as I can tell, when --upload-tools is used | 07:36 |
menn0 | the wrong version gets written to agent.conf | 07:36 |
menn0 | so that when the machine agent comes up it thinks it needs to run upgrade steps | 07:37 |
axw | menn0: ignore --upload-tools for a moment | 07:37 |
menn0 | ok | 07:37 |
axw | menn0: when we bootstrap, we look for the most recent tools that matches major.minor of the CLI's tools | 07:37 |
axw | menn0: we bootstrap with the exact same tools the CLI is running, but set agent-version=most recent | 07:37 |
axw | the effect of this is that the machine agent comes up and immediately upgrades to agent-version | 07:38 |
axw | that's what we *want* to happen | 07:38 |
axw | and it does happen like that now | 07:38 |
axw | menn0: IIANM, your change makes it so that the machine agent thinks it's running agent-version already, and so it doesn't run the upgrade steps | 07:39 |
menn0 | ok | 07:39 |
axw | it'll still replace the binary, it just won't run the upgrade steps | 07:39 |
axw | menn0: the issue with --upload-tools is that it increments the Build number in the tools | 07:39 |
menn0 | yep, I understand the --upload-tools case | 07:40 |
axw | so what's deployed is never the same as the CLI | 07:40 |
menn0 | I didn't know about the standard bootstrap case | 07:40 |
menn0 | I will have to rethink then | 07:40 |
menn0 | what we want is: | 07:40 |
menn0 | - normal bootstraps to work as you just described | 07:40 |
menn0 | - juju upgrade-juju --upload-tools which increments just the build number to still trigger upgrade steps (for developers) | 07:41 |
menn0 | - juju bootstrap --upload-tools to NOT trigger the upgrade steps | 07:41 |
axw | yup | 07:42 |
axw | menn0: I think it's as simple as not incrementing the build number on bootstrap | 07:42 |
axw | I don't see why we'd ever want to do that | 07:42 |
menn0 | yep | 07:42 |
menn0 | that makes sense | 07:42 |
* menn0 looks at code | 07:42 | |
* axw should really document bootstrap some time | 07:43 | |
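
A rough sketch of the selection axw describes, with invented types rather than the real bootstrap code: filter the tools available on the cloud to the CLI's major.minor, bootstrap with the CLI's own version, and set agent-version to the newest match so the machine agent upgrades as soon as it comes up.

```go
package main

import (
	"fmt"
	"sort"
)

// Version and pickAgentVersion are invented for illustration; the real
// bootstrap code uses juju's own types and lives elsewhere.
type Version struct{ Major, Minor, Patch int }

func (v Version) String() string { return fmt.Sprintf("%d.%d.%d", v.Major, v.Minor, v.Patch) }

// pickAgentVersion filters the tools available on the cloud to those
// matching the CLI's major.minor and returns the newest one; that value
// becomes agent-version, which the machine agent upgrades to immediately.
func pickAgentVersion(cli Version, available []Version) Version {
	var matching []Version
	for _, v := range available {
		if v.Major == cli.Major && v.Minor == cli.Minor {
			matching = append(matching, v)
		}
	}
	if len(matching) == 0 {
		// Nothing suitable on the cloud: upload the local tools and run them as-is.
		return cli
	}
	sort.Slice(matching, func(i, j int) bool { return matching[i].Patch < matching[j].Patch })
	return matching[len(matching)-1]
}

func main() {
	cli := Version{1, 18, 0}
	cloud := []Version{{1, 18, 1}, {1, 18, 4}, {1, 20, 5}}
	fmt.Println("bootstrap with:", cli)                         // 1.18.0 (the CLI's own tools)
	fmt.Println("agent-version:", pickAgentVersion(cli, cloud)) // 1.18.4
}
```
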
menn0 | axw: I think I know why we still increment the version on bootstrap when --upload-tools is passed | 07:46 |
menn0 | if the tools storage is shared between envs | 07:46 |
menn0 | axw: is it? | 07:46 |
axw | menn0: I don't think you can do that without running into problems | 07:47 |
axw | it's going to change real soon now anyway | 07:47 |
menn0 | ok then it should be fine | 07:47 |
axw | we're getting rid of provider storage | 07:47 |
menn0 | axw: I've been hunting through revision history | 07:50 |
menn0 | do you think incrementing the build number is done to ensure that the uploaded tools are used in preference to any tools in the public streams? | 07:51 |
menn0 | axw: I'm wondering about another way to skin this cat: | 07:52 |
axw | menn0: ugh, that may be a problem, yeah. | 07:52 |
axw | menn0: TBH, it might be worth waiting till I'm done with my changes... this may be fixed incidentally | 07:52 |
menn0 | if bootstrap is given --upload-tools, we get that version into the agent.conf, if not use version.Current | 07:52 |
axw | except that won't fix 1.20 | 07:53 |
axw | menn0: can do, but sounds like it might be messy | 07:53 |
axw | how will you convey that information? | 07:53 |
menn0 | via a field in MachineConfig perhaps? | 07:54 |
menn0 | axw: so is the Tools field in cloudinit.MachineConfig definitely the target agent version, not the initial one? | 07:56 |
axw | menn0: until my changes go in, each provider's Bootstrap creates its own MachineConfig | 07:56 |
axw | hmm | 07:57 |
axw | menn0: it is the bootstrap tools, but hmmmm | 07:57 |
axw | menn0: I think I may have misunderstood the change | 07:57 |
axw | I'll take another look... | 07:57 |
menn0 | axw: np | 07:57 |
axw | menn0: sorry, your original change was right :) | 07:58 |
menn0 | phew! | 07:58 |
axw | it'll bootstrap with those tools | 07:58 |
axw | then it'll see agent-version is different and upgrade | 07:58 |
axw | sorry about that | 07:58 |
menn0 | no problems | 07:59 |
menn0 | I'd like to test the non-uploading case but that's kinda hard without having these changes in an official release | 07:59 |
menn0 | is there another way? (custom streams or something) | 08:00 |
menn0 | axw: ^^ | 08:02 |
axw | urhm | 08:02 |
axw | you could use sync-tools to generate the metadata | 08:03 |
axw | menn0: simplest way would probably be to tweak version.go, sync-tools, then revert version.go | 08:03 |
menn0 | axw: cool. I will look in to that tomorrow. | 08:03 |
menn0 | I also need to read simplestreams-metadata.txt | 08:03 |
menn0 | thanks for your help and for being concerned about my change | 08:04 |
axw | no worries :) | 08:04 |
menn0 | I'd rather have that than bad code getting in | 08:04 |
menn0 | I'm EOD | 08:04 |
TheMue | morning | 08:07 |
dimitern | morning TheMue, voidspac_ | 08:27 |
TheMue | dimitern: I pushed my change to my repo on Friday, only the tests are missing. but I needed an additional API call to see if a machine is manually provisioned. looks good so far. | 08:28 |
dimitern | TheMue, sweet! I'm looking forward to seeing it | 08:29 |
TheMue | dimitern: the current changes are at https://github.com/TheMue/juju/compare/capability-detection-for-networker | 08:29 |
voidspac_ | TheMue: dimitern: morning | 08:30 |
TheMue | voidspac_: hello | 08:33 |
dimitern | TheMue, looking | 08:34 |
dimitern | TheMue, looks nice, although for the IsManual API call, I'd implement it a bit differently | 08:36 |
dimitern | TheMue, like getting the IsManual flag as part of getting the machine's life value | 08:37 |
TheMue | dimitern: it supports bulk calls | 08:37 |
dimitern | TheMue, i.e. caching it, so you can return it directly without an extra call | 08:37 |
dimitern | TheMue, yeah, LifeGetter also supports bulk calls | 08:37 |
TheMue | dimitern: will take a look | 08:38 |
dimitern | TheMue, cheers | 08:39 |
jam1 | TheMue: I'm just finishing up lunch, I'll be a little bit late. | 09:00 |
TheMue | jam1: ok, just ping | 09:00 |
mattyw | fwereade_, ping? | 09:00 |
jam1 | TheMue: I'm in the hangout | 09:22 |
voidspac_ | given a package name, how do I tell what files / binaries it provides? | 10:18 |
dimitern | voidspac_, you can run a godoc server for that package locally | 10:18 |
voidspac_ | dimitern: by package I mean ubuntu package, sorry | 10:19 |
dimitern | voidspac_, ah :) | 10:19 |
voidspac_ | dimitern: go doesn't have packages, does it? | 10:19 |
voidspac_ | I mean, it doesn't use that term | 10:19 |
voidspac_ | it has "dependencies", which can be anything pretty much | 10:20 |
dimitern | voidspac_, they are packages | 10:20 |
dimitern | :) | 10:20 |
voidspac_ | well, they're not | 10:20 |
dimitern | voidspac_, for debs: $ apt-cache showsrc juju-core | 10:20 |
voidspac_ | they're a hodge-podge of files | 10:20 |
dimitern | voidspac_, :) | 10:20 |
voidspac_ | dimitern: thanks | 10:20 |
voidspac_ | dimitern: hmmm... not quite it, I want to know what files it will put where | 10:21 |
voidspac_ | dimitern: is there a deterministic way of knowing that? | 10:21 |
voidspac_ | I guess now as the install scripts execute code | 10:21 |
voidspac_ | I mean, I guess not | 10:23 |
dimitern | voidspac_, well, these 3 files are the only ones in the deb archive actually | 10:23 |
voidspac_ | dimitern: the tarballs? | 10:23 |
dimitern | voidspac_, if you want to see the source itself, try apt-get source <package> | 10:24 |
voidspac_ | dimitern: so download the package and inspect it | 10:24 |
voidspac_ | fair enough | 10:24 |
dimitern | voidspac_, yeah, and take a look at the debian tarball for hooks I guess | 10:24 |
dimitern | voidspac_, and rules (which is as readable as any generated Makefile :) | 10:25 |
voidspac_ | dimitern: thanks :-) | 10:27 |
wallyworld | fwereade_: morning, you up for a chat about health checks sometime? | 10:36 |
fwereade_ | wallyworld, heyhey | 10:36 |
wallyworld | let me know when you have time and we can do a hangout | 10:38 |
fwereade_ | wallyworld, what time is it for you? | 10:38 |
wallyworld | 20:30 | 10:39 |
fwereade_ | wallyworld, hmm, and how early do you like to get up? | 10:41 |
wallyworld | i'm up around 6 but need to take the kid to school, back around 7 | 10:41 |
dimitern | jam1, standup? | 10:47 |
* fwereade_ restarting then hopefully with wallyworld | 12:25 | |
voidspac_ | perrito666: ping | 13:49 |
voidspac_ | Command failed: mongodump --dbpath /var/lib/juju/db | 13:49 |
voidspac_ | Error: bash: line 9: mongodump: command not found | 13:49 |
voidspac_ | perrito666: do you recognise that? No mongodump on my state server. | 13:50 |
ericsnow | voidspac_: /var/lib/juju/mongodump is installed as part of the tools in a local juju environment | 14:04 |
voidspac_ | ericsnow: this isn't local this is an openstack environment | 14:04 |
voidspac_ | ericsnow: and that error message was the output from "juju backup", that location is where the backup script was looking | 14:04 |
=== lazyPower_ is now known as lazyPower | ||
voidspac_ | ericsnow: but thanks :-) | 14:05 |
ericsnow | voidspac_: (yeah, drop the "local" part) | 14:05 |
ericsnow | voidspac_: weird | 14:05 |
voidspac_ | ericsnow: I just destroyed the environment and will try again | 14:06 |
voidspac_ | ericsnow: I think I started the backup too early | 14:06 |
ericsnow | voidspac_: FYI horacio is out today | 14:06 |
voidspac_ | ericsnow: hmmm, installed as part of the tools? | 14:06 |
ericsnow | voidspac_: ah, that makes sense | 14:06 |
ericsnow | voidspac_: right | 14:07 |
voidspac_ | I wonder if that works when you do --upload-tools | 14:07 |
voidspac_ | ericsnow: I don't need upload-tools anyway, as restore is run on the client | 14:07 |
voidspac_ | although that might be weird... | 14:07 |
voidspac_ | we'll see | 14:07 |
voidspac_ | WARNING no prepackaged tools available | 14:08 |
voidspac_ | uploading tools | 14:08 |
voidspac_ | so I get no choice in the matter :-) | 14:08 |
ericsnow | voidspac_: while restore runs on the client, most of the work is happening on the remote host | 14:09 |
voidspac_ | ericsnow: yeah, but it's mostly bash scripts uploaded from the client :-) | 14:09 |
voidspac_ | ericsnow: the part I'm *changing* runs on the client | 14:10 |
voidspac_ | so that's the bit I want to check works | 14:10 |
ericsnow | voidspac_: disclaimer--most of what I know of restore is from working on backup :) | 14:10 |
voidspac_ | hah | 14:10 |
voidspac_ | ericsnow: I'm working on hacking a fix into the old plugin anyway | 14:10 |
voidspac_ | ericsnow: not touching the shiny new stuff you did :-) | 14:10 |
ericsnow | voidspac_: right | 14:10 |
ericsnow | voidspac_: still polishing :) | 14:10 |
voidspac_ | :-) | 14:11 |
=== stokachu_ is now known as stokachu | ||
=== mup_ is now known as mup | ||
voidspac_ | ericsnow: so when I bootstrap an openstack environment from dev I don't see a mongodump in the uploaded tools | 14:19 |
voidspac_ | I might have to blow away dev and try with 1.18/1.20 | 14:20 |
voidspac_ | it might be a curiousity of upload-tools | 14:20 |
ericsnow | voidspac_: not sure then | 14:20 |
ericsnow | voidspac_: I never got around to figuring out for myself where mongodump, etc. came from | 14:21 |
ericsnow | voidspac_: it should be in the same dir as mongod | 14:21 |
voidspac_ | ah right | 14:22 |
voidspac_ | mongod isn't in the uploaded tools directory either, must be elsewhere | 14:22 |
voidspac_ | I'm trying with 1.18 now | 14:22 |
ericsnow | voidspac_: yeah, not the uploaded tools dir | 14:22 |
ericsnow | voidspac_: pretty sure /var/lib/juju holds just the juju-built mongo binaries | 14:23 |
voidspac_ | ericsnow: ok, will check when this environment has bootstrapped | 14:23 |
voidspac_ | thanks | 14:24 |
voidspac_ | (again) | 14:24 |
voidspac_ | :-) | 14:24 |
ericsnow | :) | 14:24 |
ericsnow | voidspac_: yeah! I was helpful to someone :) | 14:24 |
voidspac_ | ericsnow: /usr/lib/juju/bin | 14:31 |
voidspac_ | not /var/lib/juju | 14:32 |
ericsnow | voidspac_: ah | 14:32 |
voidspac_ | ericsnow: ok, interesting | 14:41 |
voidspac_ | ericsnow: so now with a freshly bootstrapped machine from dev | 14:41 |
voidspac_ | I *do* have mongodump in /usr/lib/juju/bin | 14:41 |
voidspac_ | but it's not in path | 14:41 |
ericsnow | voidspac_: right | 14:41 |
voidspac_ | I'm giving it a chance for bootstrap to complete before I try mongodump | 14:41 |
voidspac_ | looks like just a path issue then | 14:41 |
ericsnow | voidspac_: if I recall correctly it tries /usr/lib/juju/bin explicitly first and then falls back to $PATH | 14:42 |
voidspac_ | ericsnow: backup and restore CI test is passing, so it's not broken - it just may be hard / impossible to manually test without recreating some of the CI infrastructure | 14:43 |
voidspac_ | (they avoid using --upload-tools in tests for other reasons) | 14:43 |
ericsnow | voidspac_: makes sense | 14:43 |
voidspac_ | brb | 14:44 |
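
A small sketch of the lookup order ericsnow recalls above (prefer the juju-built binaries in /usr/lib/juju/bin, then fall back to $PATH). This is an illustration of that described behaviour, not a copy of the actual backup code.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// findMongoTool prefers the juju-built mongo binaries and falls back to $PATH.
func findMongoTool(name string) (string, error) {
	candidate := filepath.Join("/usr/lib/juju/bin", name)
	if _, err := os.Stat(candidate); err == nil {
		return candidate, nil
	}
	// Not shipped alongside the juju mongod; try whatever is on $PATH.
	return exec.LookPath(name)
}

func main() {
	path, err := findMongoTool("mongodump")
	if err != nil {
		fmt.Println("mongodump not found:", err)
		return
	}
	fmt.Println("using", path)
}
```
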
voidspac_ | right | 15:35 |
voidspac_ | goodnight | 15:35 |
=== hatch__ is now known as hatch | ||
katco | i need some help with facades: i see a FacadeCall("ContainerConfig",...), but i can't figure out where "ContainerConfig" is registered as a Facade... | 16:27 |
ericsnow | katco: registration happens with a call to state/apiserver/common.RegisterStandardFacade() (or similar) | 16:31 |
ericsnow | katco: e.g. state/apiserver/client/client.go | 16:31 |
katco | ericsnow: yeah, i gathered that, but a quick grep shows no registrations for "ContainerConfig"... indeed the only _reference_ i can find to that facade is the call... | 16:31 |
ericsnow | katco: I don't see one under apiserver for ContainerConfig | 16:31 |
ericsnow | katco: is it a facade or something on a facade? | 16:31 |
ericsnow | katco: perhaps it's dead code? | 16:32 |
katco | ericsnow: perhaps... i'm trying to wind my way through a failing test | 16:33 |
katco | ericsnow: it's in api/provisioner/provision.go::CreateConfig() | 16:33 |
ericsnow | katco: looks like FacadeCall is just a wrapper around a facade for calling a method on that facade | 16:37 |
ericsnow | katco: so ContainerConfig is a method on some facade rather than a facade itself | 16:37 |
katco | ericsnow: ah ok... so i'm trying to trace down the facade that's passed in; it's probably a method on there? | 16:38 |
ericsnow | katco: yep | 16:38 |
katco | ericsnow: ok we'll see where this yarn ends then :) | 16:38 |
katco | ericsnow: thank you | 16:38 |
ericsnow | katco: apiserver/provisioner/provisioner.go defines a ContainerConfig method | 16:39 |
katco | ericsnow: yeah i had been looking at that, but it looks correct. i'm probably missing something eyeballing it | 16:39 |
katco | ericsnow: arrrrrrg, it was a test mocking out that call. | 17:23 |
ericsnow | katco: :( | 17:24 |
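
A self-contained sketch of the relationship katco was untangling: the facade (e.g. "Provisioner") is what gets registered, while ContainerConfig is only a method on it, so the string passed to the client-side FacadeCall never shows up in the registration tables. The registry and helpers below are invented for illustration and do not mirror juju's apiserver code.

```go
package main

import "fmt"

// Provisioner is the facade; ContainerConfig is just one of its methods.
type Provisioner struct{}

func (Provisioner) ContainerConfig() string { return "container config for the env" }

// facades stands in for the apiserver registration table: only facade
// names ("Provisioner"), never method names, are registered here.
var facades = map[string]Provisioner{}

func registerFacade(name string, f Provisioner) { facades[name] = f }

// facadeCall stands in for the client-side FacadeCall wrapper: the client
// is already bound to a facade and only supplies the method name.
func facadeCall(facade, method string) (string, error) {
	f, ok := facades[facade]
	if !ok {
		return "", fmt.Errorf("unknown facade %q", facade)
	}
	switch method {
	case "ContainerConfig":
		return f.ContainerConfig(), nil
	default:
		return "", fmt.Errorf("facade %q has no method %q", facade, method)
	}
}

func main() {
	registerFacade("Provisioner", Provisioner{})
	fmt.Println(facadeCall("Provisioner", "ContainerConfig"))
}
```
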
sinzui | katco, ericsnow: have another version increment for stable because we are releasing 1.20.5 today. https://github.com/juju/juju/pull/537 | 18:01 |
katco | sinzui: thanks, sinzui. very exciting :) lgtm too | 18:02 |
sinzui | oh, for anyone looking at recent history of CI, HP and AWS had some bad hours. The QA team cleaned up some machines and restarted the tests. master was *never* broken | 18:03 |
=== urulama is now known as urulama-afk | ||
=== viperZ28_ is now known as viperZ28 | ||
=== StoneTable is now known as aisrael | ||
=== tvansteenburgh1 is now known as tvansteenburgh | ||
thumper | mramm: morning, we hanging out this morning? | 21:01 |
mramm | yep | 21:06 |
mramm | sorry I am late | 21:06 |
mramm | thumper: you still around? | 21:08 |
thumper | mramm: aye | 21:08 |
* thumper jumps into hangout | 21:08 | |
=== jcsackett_ is now known as jcsackett | ||
katco | wallyworld_: no worries. emacs just crashed when you sent that >.< | 21:35 |
alexisb | thumper, on and ready when you are | 22:00 |
thumper | alexisb: ack, coming | 22:00 |
alexisb | no rush | 22:00 |
alexisb | I just have a hard stop at the top of the hour | 22:00 |
=== perrito6` is now known as perrito666 | ||
menn0 | thumper, davecheney, waigani_ : regarding that bootstrap version behaviour that you didn't like the sound of | 23:35 |
menn0 | looking at the code | 23:35 |
menn0 | it seems like it might be ok | 23:35 |
thumper | good | 23:35 |
menn0 | it's asking the cloud instance itself for the tools metadata it has | 23:35 |
thumper | right... | 23:35 |
menn0 | and therefore it's only checking through tools that are available /on that cloud/ | 23:35 |
thumper | yeah... | 23:35 |
thumper | but does it make sense for it to automatically upgrade? Just wondering | 23:36 |
thumper | and if so, to what version? | 23:36 |
menn0 | so it's not going to ask the new environment to upgrade to tools that it doesn't have | 23:36 |
thumper | if I bootstrap 1.18 | 23:36 |
thumper | what happens? | 23:36 |
thumper | assuming I have a 1.18.0 client | 23:36 |
menn0 | from what I have gathered from the code | 23:36 |
menn0 | it will bootstrap 1.18.0 | 23:36 |
menn0 | but set the env agent-version to the latest 1.18.X available on the cloud | 23:37 |
menn0 | so when the bootstrap machine agent comes up it will immediately upgrade to 1.18.X | 23:37 |
menn0 | if there is no 1.18 available on the cloud | 23:37 |
menn0 | it'll upload the local tools and the env will be running 1.18.0 (no upgrade) | 23:38 |
menn0 | does that sound reasonable? | 23:38 |
menn0 | if not, talk to Andrew | 23:38 |
menn0 | (I'm just the messenger!) | 23:38 |
thumper | hmm... | 23:38 |
thumper | but the initial version is the same as the client? | 23:38 |
menn0 | yes, but only for a short time | 23:39 |
davecheney | waigani: if you want to be on call reviewer for today, that would be good, we only have brits | 23:39 |
menn0 | the upgrade (i.e. restart into the new version) happens almost immediately | 23:39 |
thumper | and it is only the patch number that is allowed to change? | 23:39 |
thumper | so major.minor have to be the same? | 23:39 |
menn0 | yes | 23:39 |
waigani | davecheney: okay | 23:39 |
menn0 | it filters on major.minor | 23:39 |
thumper | I suppose that is ok | 23:40 |
menn0 | my guess for the reasoning is to ensure that the env is running with the most bugs fixed | 23:40 |
menn0 | thumper, davecheney: one problem that just comes to mind is that it has the potential to break the "juju bootstrap && juju deploy foo" use case. | 23:42 |
menn0 | if the bootstrap machine agent comes up and then restarts soon after | 23:43 |
menn0 | and then goes in to "upgrade mode" soon after | 23:43 |
thumper | well... hopefully the user has an up to date client :) | 23:46 |