wallyworld | davechen1y: hey, as ocr, could you please look at https://codereview.appspot.com/10858043/ and https://codereview.appspot.com/10854043/. With the latter, thumper and I both would like to keep both assignment policies fwiw | 02:25 |
davechen1y | wallyworld: /me looks | 02:28 |
wallyworld | ty | 02:28 |
davechen1y | wallyworld: i take no position on two policies or one | 02:34 |
davechen1y | as you and thumper are the ones doing the work, i think you get final say | 02:34 |
davechen1y | the rest is between you and your maker | 02:35 |
thumper | ta | 02:35 |
* thumper defers continued work, and goes back to land and fix | 02:35 | |
thumper | local provider pipeline is currently eight branches | 02:35 |
thumper | although the last one is empty | 02:36 |
thumper | time to bring that down a little. | 02:36 |
thumper | a reason not to use iOS: https://twitter.com/satefan/status/354046461383688193 | 02:36 |
* thumper wonders which trusted roots we have in ubuntu | 02:37 | |
davechen1y | about 150 at last count | 02:37 |
thumper | holy shit | 02:37 |
* thumper wonders how many are NSA plants | 02:37 | |
davechen1y | fucktonnes | 02:37 |
thumper | heh | 02:38 |
davechen1y | pretty sure the china post office one is in there | 02:38 |
thumper | haha | 02:38 |
* thumper sighs | 02:38 | |
thumper | wow MITM attacks all round | 02:38 |
thumper | https://code.launchpad.net/~thumper/juju-core/move-cert-gen-to-config/+merge/173117/comments/387671 ?!?! | 02:50 |
davechen1y | out of disk space ? | 02:54 |
thumper | dunno, awaiting jam | 03:01 |
thumper | davechen1y: how do I find the OS that we are running on using go? | 03:05 |
davechen1y | runtime.GOOS | 03:06 |
davechen1y | runtime.GOARCH as well | 03:06 |
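For context, a minimal Go example of what davechen1y is pointing at: runtime.GOOS and runtime.GOARCH are standard-library constants baked in at compile time for the target platform, so they report what the binary was built for rather than probing the machine at run time.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Compile-time constants describing the target platform,
	// e.g. "linux" / "amd64".
	fmt.Println(runtime.GOOS, runtime.GOARCH)
}
```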
thumper | ta | 03:15 |
thumper | wallyworld: do we have existing storage smoke tests? | 03:15 |
thumper | somewhere? | 03:15 |
wallyworld | um. maybe in jujucore tests, or individually in each provider. can't recall | 03:15 |
wallyworld | if there are some in jujucore tests, then perhaps none are explicitly needed | 03:16 |
wallyworld | thumper: Testpersistence exists | 03:18 |
wallyworld | i think your provider will be run with that test if you have plugged in the common env tests | 03:18 |
thumper | wallyworld: where is that? | 03:19 |
wallyworld | environs/jujutest/tests.go | 03:20 |
wallyworld | davechen1y: thanks for the review | 03:22 |
davechen1y | wallyworld: don't thank me, i didn't do a very good job | 03:24 |
wallyworld | davechen1y: with the checkers import, the discussion about it happened after i made the code change | 03:24 |
wallyworld | i left it as is so it could be changed all in one go | 03:24 |
wallyworld | you gave a +1 which is what i needed to unblock, so it was a good job :-) | 03:24 |
wallyworld | thumper: if policy is clean and/or empty and constraint is lxc, and no clean/empty container found, do you think it should create the required new container on an existing instance, or go to the trouble of creating a whole new instance to host the container? | 03:38 |
thumper | wallyworld: that is an interesting question | 03:40 |
wallyworld | and one i need to answer :-) | 03:40 |
wallyworld | me thinks by default it should shove the new container on an existing instance | 03:40 |
wallyworld | and we maybe provide a way to alter that behaviour | 03:41 |
thumper | how would you tell it not to? | 03:41 |
thumper | I think here we are hitting the limit of shit we should care about | 03:41 |
wallyworld | yes | 03:41 |
thumper | and that we should move from there into the world of letting the user decide with their custom deployment script | 03:42 |
wallyworld | i think if the user wants a new instance, they should use that assignment policy | 03:42 |
wallyworld | AssignNewMachine | 03:42 |
wallyworld | if they use AssignClean(Empty), it will create a new container if required on an existing instance | 03:42 |
wallyworld | perhaps we can have a max containers per instance setting? | 03:43 |
thumper | that may make sense | 03:43 |
wallyworld | or just rely on hardware characteristics to help us control it | 03:43 |
thumper | can we define the assignment policy as a deploy-time constraint yet? | 03:43 |
wallyworld | if when an instance is "full", don't add more to it | 03:43 |
wallyworld | not yet. soon :-) | 03:43 |
wallyworld | next branch i think | 03:43 |
thumper | yeah, hardware characteristics I think | 03:44 |
thumper | until we add magic++ | 03:44 |
wallyworld | i do like the max containers idea though | 03:44 |
wallyworld | we may want to limit it even if hardware allows it | 03:44 |
wallyworld | s/we/users | 03:44 |
wallyworld | thumper: wtf. got all these cannot find package errors from the go bot | 03:46 |
thumper | wallyworld: me too | 03:47 |
wallyworld | :-( | 03:47 |
thumper | wallyworld: waiting for jam | 03:57 |
wallyworld | yep. if he isn't online in a bit, i'll ssh in to have a look | 03:57 |
wallyworld | might be a known issue perhaps | 03:57 |
=== thumper is now known as thumper-afk | ||
jam | thumper-afk: fixed | 04:45 |
jam | wallyworld: Hopefully I fixed it. It looks like jujud stopped being able to talk to the master server, and when it came back online, it reinstalled the charm | 04:45 |
wallyworld | ok, will try again, thanks | 04:46 |
dimitern | morning all! | 06:24 |
rvba | jam: Hi, could you please update gwacl in the landing environment? (No backward-incompatible changes have landed.) | 07:14 |
=== thumper-afk is now known as thumper | ||
thumper | fwereade: ping | 07:33 |
thumper | fwereade: I'm going to go organise dinner, will check back later | 07:37 |
=== thumper is now known as thumper-afk | ||
=== JoseAntonioR is now known as JoseeAntonioR | ||
jam | rvba: sorry about the delay, will do | 08:16 |
rvba | jam: no worries, ta! | 08:16 |
jam | rvba: from r146 => r166 | 08:19 |
rvba | jam: great, thanks again. | 08:19 |
jam | hey dimitern, I hope you enjoyed Euro Pycon | 08:44 |
dimitern | jam: hey, yeah it was useful and interesting | 08:44 |
dimitern | jam: and it would've been even better if I hadn't tried upgrading to saucy and bricked my laptop | 08:44 |
dimitern | jam: had to reinstall raring, and now which mongodb did I need to install - which ppa was it? | 08:45 |
jam | dimitern: I still use the tarball, but there is: https://launchpad.net/~juju/+archive/experimental | 08:46 |
jam | ouch on the Laptop. I'm a bit surprised. | 08:46 |
jam | The quality stuff has generally been a lot better for running beta | 08:46 |
dimitern | jam: thanks | 08:47 |
jam | dimitern: I have quite a few infrastructure-y patches related to API and consumers of the API. I'd be happy to get some feedback and go over them with you. | 08:47 |
dimitern | jam: well, it turned out I chose a really bad time to do it - there were problems with video drivers (i'm using the fglrx proprietary ones) and some mixup with proposed packages breaking unity | 08:48 |
jam | I'm hoping some of them will make new workers easier to bring up. | 08:48 |
dimitern | jam: cool, i'd like to know these | 08:49 |
jam | dimitern: yeah, I use proprietary drivers as well, If you don't need 3D then the "radeon" driver is often pretty stable. | 08:49 |
dimitern | jam: btw - I did add-apt-repository ppa:juju/experimental and now I install mongodb - will it get it from there? | 08:49 |
jam | dimitern: so probably the biggest change is doing a bit more standardizing on NotifyWatcher connected to the API, which can then use a api.NotifyWatcher on the client side, and a worker/NotifyWorker which is designed around workers that trigger based on that watcher. | 08:50 |
dimitern | jam: well the thing is - after the upgrade everything went black - even ttys ctrl+alt+f1 didn't respond | 08:50 |
jam | dimitern: for mongodb, you need to 'apt-get update' first | 08:50 |
jam | so it sees the new PPA | 08:50 |
jam | then it should, provided the PPA is "newer" | 08:50 |
jam | dimitern: ouch | 08:50 |
dimitern | jam: ok | 08:50 |
dimitern | jam: i'm thinking of continuing with the deployer stuff - i didn't see progress on that | 08:52 |
jam | dimitern: We want to focus on getting one agent fully onto the API, and deployer works for that. | 08:53 |
jam | I think deployer is NotifyWatcher based as well. | 08:53 |
jam | (old EntityWatcher) | 08:53 |
jam | dimitern: also, this is particularly trivial, and dfc has LGTM'd it on Launchpad: https://codereview.appspot.com/10871045/ | 08:53 |
dimitern | jam: yeah, probably, will take a look | 08:54 |
jam | dimitern: I realize william is in UK, but have you seen him around this morning? | 08:54 |
jam | I wanted to get fwereade's feedback on the basic design I've been doing, to see if it matches what he and I talked about. | 08:55 |
fwereade | jam, dimitern, heyhey | 08:55 |
dimitern | jam: no i haven't | 08:55 |
dimitern | fwereade: hey | 08:55 |
dimitern | fwereade: took the car for a ride on sunday btw :) | 08:55 |
jam | I'm going to restart IRC real quick, as notifications aren't working. bbiab | 08:56 |
fwereade | dimitern, cool, thanks | 08:56 |
jam | fwereade: I trust the wedding was beautiful and your weather is pleasant? | 08:57 |
fwereade | jam, it indeed was, and... yeah, the weather's still ok too :) that's nice | 08:58 |
fwereade | jam, sorry, hadn't looked | 08:58 |
jam | fwereade: so probably the big one as far as design goes is: https://codereview.appspot.com/10978043/ | 08:59 |
jam | which is creating a NotifyWorker structure | 08:59 |
jam | to match the NotifyWatcher | 08:59 |
fwereade | jam, cool, I'm just going through https://codereview.appspot.com/10939043/ now | 09:01 |
jam | fwereade: great. Certainly needed before the other one. | 09:01 |
jam | I did end up consuming the initial event in the API Server, and then triggering a local event in the client Watcher | 09:01 |
TheMue | jam: just reviewing it | 09:02 |
TheMue | jam: already seen the usage inside the cleaner and i like it | 09:02 |
jam | TheMue: thanks. I'm quite happy with how it shaped up. | 09:04 |
jam | The workers had a lot of "interact with the system" logic tied up with "do my actual work", and I'm hoping this decouples it nicely. | 09:04 |
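A rough sketch of the decoupling jam is describing, with invented names rather than the actual juju-core interfaces: the watcher only signals "something changed" with no payload, and a shared loop owns the watch/dispatch plumbing so a concrete worker only supplies the handler callbacks.

```go
package worker

import "errors"

// Hypothetical shapes for illustration; the real NotifyWatcher and
// NotifyWorker in juju-core may differ in detail.
type NotifyWatcher interface {
	Changes() <-chan struct{} // empty event per change; the initial event fires immediately
	Stop() error
}

type WatchHandler interface {
	SetUp() (NotifyWatcher, error) // "interact with the system"
	Handle() error                 // "do my actual work"
	TearDown() error
}

// loop is the shared plumbing: it watches and dispatches, nothing else.
func loop(h WatchHandler, dying <-chan struct{}) error {
	w, err := h.SetUp()
	if err != nil {
		return err
	}
	defer h.TearDown()
	defer w.Stop()
	for {
		select {
		case <-dying:
			return nil
		case _, ok := <-w.Changes():
			if !ok {
				return errors.New("watcher closed its channel")
			}
			if err := h.Handle(); err != nil {
				return err
			}
		}
	}
}
```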
jam | fwereade: by the way, you still have https://code.launchpad.net/~fwereade/juju-core/errors-cleanup/+merge/168928 sitting around. Is it bitrotten/landed in trunk but not in "old trunk", ? | 09:10 |
dimitern | jam: so now (3.5h from now) is the combined blue+core standup, right? | 09:10 |
fwereade | jam, that branch is somewhat bitrotten I'm afraid :/ | 09:10 |
jam | dimitern: my clock says 2.5 hours | 09:11 |
dimitern | jam: oh, right, yes | 09:12 |
jam | mgz: also, just a quick status check on https://code.launchpad.net/~danilo/juju-core/python-env-fails/+merge/171997 | 09:15 |
jam | ISTR it was failing while running on the bot (lovely hangs causing 600+s timeouts). | 09:15 |
=== thumper-afk is now known as thumper | ||
TheMue | jam: you've got a review for the 10978043 | 09:23 |
jam | TheMue: thanks | 09:25 |
dimitern | guys, i'm getting multiple test failures on trunk - http://paste.ubuntu.com/5854870/ | 09:25 |
thumper | dimitern: have you updated gwacl? | 09:25 |
thumper | i think it had work recently | 09:26 |
dimitern | thumper: yes, all of the 3rd parties | 09:26 |
jam | thumper: they just had me update it on the bot today, but the compat one was a while ago. | 09:26 |
jam | dimitern: "Bad record MAC" is what happens when you use a packaged mongo, IIRC. | 09:26 |
jam | You can try putting the tarball mongo in your path, and see if that fixes it. | 09:27 |
dimitern | i had to reinstall raring after a failed upgrade to saucy (but /home was unaffected), so I may still need some packages installed (I already installed what was obvious, like git, mongodb from the ppa) | 09:27 |
jam | dimitern: the env is already bootstrapped is a follow-on failure because the previous test didn't clean itself up. | 09:27 |
dimitern | jam: I did add the ppa, then update, then install mongodb, but it still got the wrong one (the one in the archive is 2.2.4, which is newer than the ppa one - 2.2.3+ssl2) | 09:28 |
dimitern | ha it seems I didn't manage to uninstall the 2.2.4 mongo properly | 09:28 |
dimitern | how can I install the ppa version of mongo? it has the same name, different version | 09:29 |
jam | dimitern: there is some way to force it something like "apt-get install mongodb==2.2.3+ssl2" but I'm not 100% sure on that bit. | 09:30 |
dimitern | jam: hmm apt-get is too strict, I had to specify it as sudo apt-get install 'mongodb=1:2.2.3-0ubuntu2+ssl2' | 09:32 |
jam | fwereade: "A Watch call returns the initial event..." is a bit confusing when an event doesn't have any actual content. To *me* the fact that we are doing "out := w.out" in the loop means we are generating an event locally which wasn't explicitly sent from the remote side. | 09:32 |
fwereade | jam, I think the viewpoints are basically equivalent | 09:33 |
fwereade | jam, I see it as "empty event sent, nothing actually regenerated" | 09:33 |
fwereade | jam, but it's academic really | 09:34 |
jam | fwereade: you're right that it makes more sense on watchers that actually transmit content, as you would have a field and have to save the content for a while until someone polled for it. | 09:38 |
jam | (the Result object has to hold that state, which gets cached a bit on the client Watcher object until it can get rid of it with the out <- content statement.) | 09:39 |
fwereade | jam, yeah, the only reason I'm pushing that viewpoint is to make the similarity clear | 09:39 |
jam | fwereade: I tweaked the comments. Now who can we get to follow up review it ? :) | 09:41 |
fwereade | jam, dimitern's back :) | 09:41 |
jam | dimitern: https://codereview.appspot.com/10939043/ looks more imposing than I think it really is. It moves some code around so that we can pull out Watcher | 09:41 |
jam | as an API object | 09:41 |
dimitern | jam: will look once I resolve my issue with mongo - it seems the ppa doesn't have an amd64 deb for raring (only i386 - other series have both i386 and amd64 available) - there is an option to retry the build on LP, but it says it'll destroy history etc. - should I do it? wish davecheney was around to ask | 09:43 |
jam | dimitern: go ahead and retry, it kills the failure log, etc. But since it failed to build, I think we can just retry it. | 09:46 |
jam | nobody is actively debugging the failure | 09:46 |
jam | it is claimed it was "Cancelled Build" | 09:46 |
jam | dimitern: it is a bit concerning that it took "Finished on 2013-04-19 (took 2 days, 13 hours, 40 minutes, 24.0 seconds)" | 09:46 |
dimitern | jam: scheduled for retry in 16 minutes | 09:46 |
jam | and was cancelled. | 09:46 |
jam | so there may be an explicit problem with that build | 09:46 |
dimitern | jam: probably wasn't "finished" but canceled instead | 09:47 |
jam | dimitern: right. I just mean it sat in the building state for 2 days before someone had to manually notice and cancel it. | 09:48 |
jam | We should try to keep an eye on it. | 09:48 |
dimitern | jam: yeah | 09:48 |
dimitern | jam: will look at your branch now, while waiting | 09:48 |
dimitern | jam: ah, it's william's actually | 09:49 |
dimitern | jam: ah, no - the sidebar of rietveld is confusing sometimes | 09:49 |
dimitern | jam: reviewed | 10:07 |
jam | TheMue: fwereade: I've responded to both of your feedback on https://codereview.appspot.com/10978043/ | 10:17 |
dimitern | jam: i'll take a look at that as well | 10:18 |
dimitern | (while still waiting for the mongodb build) | 10:18 |
jam | fwereade: for 'tools' package. I would be fine calling the top level thing 'agent'. The key bits I care about are: | 10:19 |
jam | a) it is environ agnostic | 10:19 |
jam | b) it hides more of the details from callers, so they don't have to track something like 'dataDir'. | 10:19 |
jam | dimitern: to quote William, "EnvironConfigWatcher should never have been implemented" | 10:20 |
jam | it exposes too many secrets | 10:20 |
jam | that shouldn't be exposed in the API | 10:20 |
jam | things might want to think about stuff that is *in* the environ config | 10:20 |
jam | but they shouldn't get the whole thing out. (AIUI) | 10:20 |
dimitern | jam: ah, i see | 10:21 |
fwereade | jam, responded | 10:24 |
fwereade | jam, and I'm +1 on your (a) and (b), I absolutely think it's a good change | 10:24 |
fwereade | jam, just quibbling about package placement | 10:25 |
fwereade | jam, dimitern: so, on thursday I tried about 3 times to impose some consistency on the params package and every time it felt like it was running away with me | 10:25 |
jam | fwereade: :) | 10:26 |
fwereade | jam, dimitern: but I'm going to try again today | 10:27 |
dimitern | fwereade: +1 | 10:27 |
fwereade | jam, dimitern: and if it ends up a huge ugly conflicty change I can at least get it in front of you for sanity checking | 10:27 |
dimitern | fwereade: sgtm | 10:27 |
fwereade | jam, dimitern: and if I do it ~right then I hope we'll be able to share stuff like the underlying code for machiner.Life and deployer.Life (and for more facades in the future) | 10:28 |
fwereade | dimitern, since you weren't there, one important bit I thought I should run by you is... | 10:29 |
fwereade | dimitern, ...implementing the API in terms of Tag where possible rather than name/id/whatever's relevant to the particular entity | 10:29 |
fwereade | dimitern, is there anything obviously stupid about that? | 10:30 |
dimitern | fwereade: well, it has advantages, but the main thing is we'll need to slightly modify the code that uses the api to accommodate that - not terribly high a price though | 10:31 |
fwereade | dimitern, there's enough dancing back and forth between id and tag in jujud already that I think it might work out a win in the end | 10:31 |
fwereade | dimitern, it's more crap over the wire that we don't really need though | 10:32 |
dimitern | fwereade: and it's still not too late to impose this api-wide change i think | 10:32 |
fwereade | dimitern, yeah, and there STM to be a bunch of things api-wide that don't really look consistent, and I'd like to force a bit of that before we fossilize | 10:32 |
dimitern | fwereade: sounds good | 10:33 |
dimitern | fwereade: what happened with the magical bulk ops at the rpc layer? | 10:33 |
fwereade | dimitern, when rog proposed those I rejected them because it forced domain-object-style at the rpc layer | 10:34 |
dimitern | fwereade: are we still doing the bulk ops as originally agreed: array-of-structs as args, arrays-of-structs as results (incl. errors and results at the same place)? | 10:34 |
fwereade | dimitern, yes, I think so | 10:34 |
fwereade | dimitern, haven't seen any controversy there | 10:34 |
dimitern | fwereade: cool! | 10:35 |
jam | fwereade: well I'm on board with it, and rogpeppe is on vacation :) | 10:38 |
jam | fwereade, dimitern: I would clarify that it is a struct-of-array-of-structs as args and struct-of-array-of-structs as results. | 10:38 |
jam | So it is ApiFunction(args params.Type) (params.ResultType, error) | 10:38 |
jam | not | 10:38 |
dimitern | jam: yes | 10:39 |
jam | ApiFunction(args []params.Type) ([]params.ResultType, error) | 10:39 |
dimitern | jam: of course | 10:39 |
jam | dimitern: well, there have been times when wrapping the array in a struct seemed silly | 10:39 |
jam | but fwereade and I discussed it and it does make sense. | 10:39 |
jam | Since it gives us nice wiggle room if we want to extend the api at all. | 10:39 |
dimitern | jam: and in addition, the rpc layer only supports that - you cannot have (args []type) as it is now | 10:42 |
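A hedged sketch of the agreed shape, using illustrative names rather than the real params definitions: both arguments and results are single structs wrapping slices, and each result slot carries either a value or a per-entity error; the wrapping struct is what leaves room to extend the API later without changing the wire format.

```go
package params

// Illustrative only; the actual juju-core params types may differ.
type Entities struct {
	Tags []string
}

type Error struct {
	Message string
}

type LifeResult struct {
	Life  string
	Error *Error // per-entity error, in the same slot as the result
}

type LifeResults struct {
	Results []LifeResult
}

// The API method shape is then:
//   func (a *SomeAPI) Life(args Entities) (LifeResults, error)
// and never:
//   func (a *SomeAPI) Life(args []string) ([]LifeResult, error)
```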
dimitern | jam: please wait for my review before landing https://codereview.appspot.com/10978043/ | 10:43 |
* fwereade bbiab | 10:44 | |
jam | dimitern: I responded to https://codereview.appspot.com/10939043/ | 10:46 |
jam | I'll do fwereade's requests and then wait for your review. | 10:46 |
dimitern | jam: cheers! | 10:46 |
dimitern | jam: reviewed | 10:51 |
jtv | Reviewers needed, hopefully for a "trivial" vote: https://codereview.appspot.com/10858049 | 10:57 |
dimitern | jtv: looking | 10:57 |
jtv | thanks | 10:57 |
dimitern | jtv: LGTM, trivial | 10:58 |
dimitern | jtv: make simplify does that? seems pretty smart :) | 10:58 |
jtv | Thanks dimitern, for your vote and your compliment. :) | 11:03 |
jtv | It's not particularly smart, but it does free you from the distraction. | 11:03 |
dimitern | jtv: i might give it a go | 11:04 |
jtv | Do it regularly. :) It won't find much, but that makes it all the easier. | 11:04 |
dimitern | jtv: are you familiar with lp-propose and lp-submit aaron did? | 11:04 |
jtv | Or "make format" for a regular formatting run. We may choose to integrate them. | 11:04 |
jtv | Yes, I used to use those... but it's been a while. | 11:04 |
dimitern | jtv: there was a talk in oakland that these two now support rietveld as well, so can be used instead of lbox | 11:05 |
dimitern | jtv: which i'd acclaim | 11:05 |
jtv | A step in the right direction, yes... It may save us some work on the Tarmac integration. | 11:05 |
jtv | Also, I haven't seen those bzr plugins ignore errors like I have lbox. :) | 11:06 |
dimitern | jtv: exactly, and better integration with LP | 11:06 |
jam | fwereade, dimitern: Is this actually better? https://codereview.appspot.com/10858049/patch/1/1003 | 11:07 |
jam | double nesting of brackets tends to be more confusing for me. | 11:07 |
jam | I guess it is more apparent when they are on separate lines. | 11:08 |
dimitern | jam: i think so yes - no need to repeat the type when the slice has it explicitly anyway | 11:08 |
fwereade | jam, yeah, +1 | 11:10 |
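For reference, the simplification under discussion is Go's composite-literal elision (what gofmt -s applies, and presumably what jtv's "make simplify" target wraps): inside a typed slice literal the element type can be dropped. A small standalone illustration with a made-up type:

```go
package main

import "fmt"

type entity struct {
	Tag string
}

func main() {
	// Spelled out: the element type is repeated in every literal.
	long := []entity{
		entity{Tag: "machine-0"},
		entity{Tag: "machine-1"},
	}
	// Simplified: the element type is implied by the slice type.
	short := []entity{
		{Tag: "machine-0"},
		{Tag: "machine-1"},
	}
	fmt.Println(long, short)
}
```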
jam | dimitern: responded | 11:11 |
jam | I think that means enough LGTMs for both api-watchers and notify-worker to land | 11:12 |
dimitern | jam: thanks | 11:14 |
dimitern | mongodb in the ppa for raring/amd64 built successfully | 11:16 |
jam | dimitern: nice... now will it pass the test suite. | 11:17 |
dimitern | jam: hopefully - trying now :) | 11:17 |
jam | dimitern: standup in 10 min, just in case the clock sync wasn't working. | 11:20 |
dimitern | jam: yep, will be there - hangout from the link in the calendar, right? | 11:21 |
jam | yep | 11:21 |
jam | making a coffee myself, should be there on time, though. | 11:21 |
jam | fwereade: re: "tools" as a package. I'm happy to have the ToolsManager as an interface in an agent package. The big concern is having it outside of "environs/". Do you feel it is better just to move environs/agent to a top level package? | 11:27 |
fwereade | jam, +100 | 11:27 |
jam | fwereade: also, now that my queue has been flushed, I'm reminded of the patch I'm having trouble with. | 11:27 |
fwereade | jam, oh yes? | 11:27 |
jam | specifically, trying to test the client side of an Upgrader.WatchAPIVersion() | 11:27 |
jam | afaict, I'm setting up the test identically to the one on the server-side of the api, and the state side | 11:28 |
jam | but afaict the lowest level watcher isn't firing | 11:28 |
jam | (the one with the actual watch on the DB) | 11:28 |
fwereade | jam, is it possible you're using a different *state.State, and so not seeing the effect of a sync? | 11:28 |
fwereade | jam, what if you set the timeout to 6s? | 11:28 |
jam | mgz: poke | 11:33 |
jam | fwereade: I thought JujuConnSuite shares the state | 11:35 |
jam | between the two sides | 11:35 |
jam | I'll have to look into that | 11:35 |
jam | because yes, after 5s the test passes. | 11:35 |
jam | mgz: In case you're just not seeing it: https://plus.google.com/hangouts/_/f497381ca4d154890227b3b35a85a985b894b471 | 11:36 |
TheMue | mramm: ping | 11:59 |
mramm | TheMue: pong | 12:04 |
dimitern | i think this might be an actual bug: http://paste.ubuntu.com/5855203/ (i updated goamz and there were changes) | 12:06 |
TheMue | mramm: Germany's interest in Juju grows. I've been asked if I want to talk at http://webtechcon.de (German) about Juju | 12:06 |
TheMue | mramm: and maybe also write about it in one of the magazines of the publisher behind this conference | 12:07 |
dimitern | installed mongo 2.2.0 from the tarball, as specified in the readme and all other tests pass | 12:07 |
mramm | cool | 12:07 |
mramm | TheMue: sounds awesome | 12:07 |
TheMue | mramm: yep, cloud and devops is a trending topic | 12:08 |
TheMue | mramm: so we'll see how we can get our part of this cake :D | 12:08 |
dimitern | also, anybody seen this? 2013-07-08 12:03:21 WARNING juju.environs.config config.go:429 unknown config field "future" | 12:09 |
TheMue | dimitern: so far not seen, can you isolate the test? | 12:11 |
dimitern | TheMue: trying now | 12:13 |
dimitern | it seems to be related to danilos branch about py-juju and juju-core compatibility checks about the environment config | 12:14 |
jam | dimitern: I've seen a fair number of WARNINGs while the test suite is running. nothing that fails the suite, though. | 12:15 |
dimitern | jam: all these warnings are valid, but I think we need to be able to suppress them for tests | 12:15 |
dimitern | jam: the tests themselves are about checking unknown keys | 12:15 |
jam | dimitern: agreed, I don't think the test suite should put stuff onto stdout/stderr | 12:18 |
dimitern | aha! found it | 12:19 |
dimitern | wallyworld changed goamz in r37 to support EC2_ env vars as fallbacks for AWS_ ones, and the test sets both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to "", but does not set the fallbacks | 12:20 |
dimitern | and i have both set because of openstack tests | 12:21 |
dimitern | i'll file a bug and propose a fix | 12:21 |
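A sketch of the kind of fix being described, assuming the fallback variables goamz r37 consults are named EC2_ACCESS_KEY and EC2_SECRET_KEY (the exact names are an assumption here; the real fix should use whatever the goamz change actually reads):

```go
package ec2test

import "os"

// clearAWSEnv blanks both the primary AWS_* variables and the assumed
// EC2_* fallbacks, so credentials exported for openstack testing cannot
// leak into the ec2 config tests and change their outcome.
func clearAWSEnv() {
	for _, v := range []string{
		"AWS_ACCESS_KEY_ID",
		"AWS_SECRET_ACCESS_KEY",
		"EC2_ACCESS_KEY", // assumed fallback name
		"EC2_SECRET_KEY", // assumed fallback name
	} {
		os.Setenv(v, "")
	}
}
```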
=== tasdomas` is now known as tasdomas_afk | ||
=== tasdomas_afk is now known as tasdomas | ||
fwereade | dimitern, whoops, "future" was my branch | 12:33 |
fwereade | dimitern, didn't realise it pooed to stderr | 12:33 |
dimitern | fwereade: it does only when there's a failure | 12:33 |
dimitern | fwereade: or when you run with -gocheck.v actually | 12:33 |
dimitern | jtv: ping | 12:55 |
jam | dimitern: wrt https://code.launchpad.net/~dimitern/juju-core/062-fix-bug-1198936/+merge/173486 | 13:01 |
jam | Do we need to update other bits of the test suite infrastructure? | 13:01 |
jam | Or do we clear those out elsewhere already? | 13:01 |
dimitern | jam: well, this is the only test that uses this behavior that i can see - all the others pass | 13:02 |
dimitern | i'm having issues with lbox now.. panics on "redirect blocked" after login to rietveld. I had a patch that fixed this, but alas it got lost after the reinstall | 13:03 |
jam | dimitern: I thought that was one of wallyworld's patches to lbox ? | 13:04 |
jam | might also be a go version. | 13:04 |
dimitern | jam: it's a go version issue definitely, still using 1.0.3, but I might need to switch to 1.1.1 just to build lbox | 13:04 |
jam | dimitern: I'm using go 1.0.3 for lbox | 13:05 |
dimitern | jam: haven't seen any patches by wallyworld to lbox in the commit log | 13:05 |
=== teknico1 is now known as tekNico | ||
jam | dimitern: it didn't land, but he had proposed it, IIRC | 13:05 |
jam | dimitern: most likely it was a patch to goetveld | 13:06 |
jam | dimitern: https://code.launchpad.net/~wallyworld/goetveld/auth-cookie-fix perhaps? | 13:07 |
jam | but that one is merged, so maybe not. | 13:07 |
dimitern | jam: i'll just re-get lbox and its reqs to see if it solves it | 13:08 |
jam | fwereade: worker/resumer/resumer.go . Is it worth implementing time.After as a watcher so that it can re-use the NotifyWatcher code? time.After can make a pretty trivial NotifyWatcher if we want to go that route. | 13:09 |
jam | though it is arguably the same amount of code we would be saving | 13:10 |
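A minimal sketch of the idea jam is floating, assuming a Changes/Stop-shaped NotifyWatcher like the one sketched earlier; the names are invented and this is not the real worker/resumer code:

```go
package watcher

import "time"

// periodicWatcher wraps a timer in the notify-watcher shape: it fires an
// initial empty event immediately and then one per period, so a resumer
// built on a generic notify worker could reuse the same loop.
type periodicWatcher struct {
	out  chan struct{}
	stop chan struct{}
}

func NewPeriodicWatcher(period time.Duration) *periodicWatcher {
	w := &periodicWatcher{
		out:  make(chan struct{}),
		stop: make(chan struct{}),
	}
	go w.loop(period)
	return w
}

func (w *periodicWatcher) loop(period time.Duration) {
	defer close(w.out)
	for {
		select {
		case w.out <- struct{}{}: // deliver the (empty) event
		case <-w.stop:
			return
		}
		select {
		case <-time.After(period):
		case <-w.stop:
			return
		}
	}
}

func (w *periodicWatcher) Changes() <-chan struct{} { return w.out }

// Stop ends the watcher; a real implementation would guard against
// being called more than once.
func (w *periodicWatcher) Stop() error {
	close(w.stop)
	return nil
}
```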
jam | fwereade: http://bazaar.launchpad.net/~jameinel/juju-core/upgrader-api-client-watcher/revision/1400# is my answer to make testing not have to wait 5s for the Sync to occur. | 13:41 |
jam | It exposes a helper on DummyEnviron that allows you to trigger e.state.apiState.Sync() | 13:41 |
jam | which is then exposed to JujuConnSuite. | 13:41 |
jam | fwereade: is it terrible to do? | 13:41 |
jam | I felt it was slightly better than giving you raw access to e.state.apiState object. | 13:42 |
jam | (you can click on expand all) | 13:42 |
jtv | dimitern: what's up? | 13:43 |
fwereade | jam, sorry, back from lunch | 13:44 |
dimitern | jtv: can you send me the patch I gave you for lbox to fix the error I got (you got it too) | 13:44 |
jam | fwereade: no rush, I'm done for tonight. | 13:44 |
jtv | dimitern: where is it? | 13:44 |
dimitern | jtv: i think if you do bzr diff in $GOPATH/src/launchpad.net/lbox/ you should see it (if you haven't committed) | 13:45 |
fwereade | jam, offhand it's no worse than anything else in the dummy environ ;p | 13:45 |
fwereade | jam, and at least it's in the service of saner testing :) | 13:45 |
jtv | dimitern: no diff... I'm on r57, last committer is Roger. Not sure what patch you're talking about. | 13:46 |
dimitern | jtv: maybe geotveld then? | 13:46 |
dimitern | geotveld | 13:46 |
jtv | goetveld | 13:47 |
dimitern | yep | 13:47 |
jtv | Never messed with that | 13:47 |
jtv | dimitern: is this a conversation you had with someone else maybe? | 13:48 |
dimitern | jtv: what go version are you using? | 13:48 |
dimitern | jtv: could be, it was some months ago | 13:49 |
jtv | dimitern: go 1.0.2 . | 13:51 |
dimitern | jtv: I see | 13:51 |
dimitern | mramm: can you send me the kanban link please? | 14:02 |
dimitern | (for the hangout) | 14:03 |
dimitern | TheMue: or you? ^^ | 14:04 |
dimitern | fwereade: ^^ ? | 14:09 |
dimitern | are we doing the kanban meeting now? | 14:10 |
TheMue | dimitern: no kanban now | 14:12 |
TheMue | dimitern: only the one at 1:30pm | 14:12 |
dimitern | TheMue: ah, ok, thanks | 14:12 |
TheMue | dimitern: we've consolidated it ;) | 14:12 |
dimitern | TheMue: nice! about time :) | 14:13 |
TheMue | dimitern: yep, I like it too | 14:13 |
dimitern | https://codereview.appspot.com/11002043 | 14:18 |
dimitern | so I managed to fix my broken lbox by compiling it under 1.1.1 | 14:18 |
dimitern | jam: can you please LGTM it in rietveld now? sorry for the trouble | 14:19 |
dimitern | TheMue: can you take a look as well ? ^^ | 14:27 |
TheMue | dimitern: sure | 14:29 |
TheMue | dimitern: lgtm | 14:30 |
dimitern | TheMue: thanks | 14:30 |
mramm | Hey, I just wanted to point out an e-mail thread that needs a response: Chris.Frantz is trying to get juju working | 15:21 |
mramm | https://bugs.launchpad.net/juju-core/+bug/1178328 | 15:22 |
_mup_ | Bug #1178328: error: cannot log in to admin database: auth fails <ui> <juju-core:Incomplete> <https://launchpad.net/bugs/1178328> | 15:22 |
=== tasdomas is now known as tasdomas_afk | ||
ackk | hi, could anyone please have a look at https://code.launchpad.net/~ack/juju-core/uuid-in-environment-info/+merge/173485 ? it's a pretty trivial change | 15:56 |
fwereade | ackk, reviewed, LGTM; dimitern, would you take a quick look please? | 16:17 |
ackk | fwereade, thanks! | 16:18 |
fwereade | ackk, just hold off until there's a second review, then set commit message and approve (or ping one of us if you can't) | 16:18 |
ackk | fwereade, cool, thanks, will it be merged automatically? | 16:19 |
fwereade | ackk, yeah | 16:19 |
dimitern | fwereade: looking | 16:19 |
dimitern | ackk, fwereade: reviewed | 16:20 |
ackk | dimitern, thanks | 16:20 |
ackk | dimitern, fwereade I can't set "approved", could you please do it? | 16:21 |
dimitern | ackk: will do | 16:21 |
fwereade | ackk, done | 16:21 |
dimitern | :) | 16:21 |
ackk | tnx | 16:22 |
dimitern | ackk: have you tried running the full test suite before proposing that change? | 16:46 |
dimitern | ackk: i mean like "go build ./... && go test ./..." in juju-core/ ? | 16:47 |
ackk | dimitern, oh I think I ran a different command, let me try to reproduce the failure | 16:49 |
dimitern | ackk: it was a temporary failure, it's merged now once I reapproved it | 17:04 |
dimitern | anyone willing to look at a really trivial review? https://codereview.appspot.com/10858050/ | 17:08 |
mgz | will have a look dimitern | 17:09 |
dimitern | mgz: hey, cheers | 17:09 |
fwereade | dimitern, LGTM | 17:16 |
dimitern | fwereade: cheers, landing then | 17:17 |
ackk | dimitern, thanks | 17:21 |
ackk | dimitern, I got the same error in trunk before the merge, fwiw | 17:22 |
dimitern | ackk: we occasionally have these intermittent test failures - if you run the full tests multiple times and happen across one that occurs more than once, please file a bug and tag it as "intermittent-failure" (but search first to see if it's already reported) | 17:24 |
ackk | dimitern, ok | 17:25 |
Beret | hmm | 17:55 |
ahasenack | how does juju-core find the bootstrap node ip? I have a case where it has two ips, and juju status is trying to connect to the "wrong" one | 20:06 |
ahasenack | nova list lists the bootstrap node running, and with an IP in each network. I don't see a way to control which IP juju will use | 20:07 |
ahasenack | pyjuju seems to pick the correct one (could be luck) | 20:07 |
mgz | ahasenack: possibly bug 1188126? | 20:13 |
_mup_ | Bug #1188126: Juju unable to interact consistently with OpenStack/Quantum deployment where tenant has multiple networks configured <juju:New> <juju-core:Triaged> <https://launchpad.net/bugs/1188126> | 20:13 |
ahasenack | mgz: looks like it, even though no quantum is being used in this case | 20:14 |
mgz | probably not then | 20:15 |
mgz | similar issue though, currently juju-core is pretty network ignorant | 20:15 |
ahasenack | mgz: I think I found it in the code | 20:19 |
ahasenack | state/api/apiclient.go | 20:19 |
ahasenack | func Open(info *Info, opts DialOpts) (*State, error) { | 20:19 |
ahasenack | // TODO Select a random address from info.Addrs | 20:19 |
ahasenack | // and only fail when we've tried all the addresses. | 20:19 |
ahasenack | // TODO what does "origin" really mean, and is localhost always ok? | 20:19 |
ahasenack | cfg, err := websocket.NewConfig("wss://"+info.Addrs[0]+"/", "http://localhost/") | 20:19 |
ahasenack | it's always the first address that it picks | 20:19 |
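The TODO in that snippet points at the eventual fix; a hedged sketch of what trying each address in turn could look like (illustrative only, not the actual juju-core change; the websocket import path is whichever go.net / x/net package the tree vendors):

```go
package api

import "golang.org/x/net/websocket"

// dialAny tries each candidate API address and returns the first
// connection that succeeds, failing only once every address has failed.
func dialAny(addrs []string) (*websocket.Conn, error) {
	var lastErr error
	for _, addr := range addrs {
		cfg, err := websocket.NewConfig("wss://"+addr+"/", "http://localhost/")
		if err != nil {
			lastErr = err
			continue
		}
		conn, err := websocket.DialConfig(cfg)
		if err != nil {
			lastErr = err
			continue
		}
		return conn, nil
	}
	return nil, lastErr
}
```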
thumper | morning | 21:38 |
ahasenack | good morning | 21:47 |
thumper | fwereade: you up? | 21:57 |
fwereade | thumper, heyhey, more or less | 21:57 |
fwereade | thumper, how's it going? | 21:57 |
thumper | just clearing lots of email :) | 21:57 |
=== JoseeAntonioR is now known as j | ||
=== j is now known as JoseeAntonioR | ||
=== JoseeAntonioR is now known as jose | ||
utlemming | wallyworld: what is the noun for changing the simplestream location in the Juju yaml? | 23:13 |
wallyworld_ | utlemming: is this for an openstack deployment? | 23:14 |
utlemming | wallyworld_: yes, I want to test a merge of HP and EC2 simple streams | 23:14 |
utlemming | wallyworld_: i.e. start publishing the simple streams for HP | 23:14 |
wallyworld_ | utlemming: for openstack, if you want custom simplestreams metadata, you put the json files in a directory "streams/v1/index" off the public bucket | 23:16 |
wallyworld_ | the public bucket is specified using the "public-bucket-url" config key | 23:17 |
wallyworld_ | so where the tools tarballs are, add the "streams/v1/index" container under that location | 23:17 |
arosales | wallyworld_, fyi utlemming is working on the "official" simple streams for HP. | 23:30 |
wallyworld_ | \o/ | 23:31 |
arosales | wallyworld_, utlemming builds the Ubuntu cloud images we put on certified public clouds | 23:31 |
wallyworld_ | arosales: will that include the new tools metadata? | 23:31 |
utlemming | wallyworld_, arosales: looks like I still have a bug to shake out, though | 23:31 |
arosales | utlemming, and wallyworld_ is working on the simple stream implementation in Juju :-) | 23:31 |
arosales | hopefully that helps with introductions :-) | 23:31 |
utlemming | wallyworld_: my beta index is http://people.canonical.com/~ben if you're interested | 23:32 |
* wallyworld_ is interested | 23:32 | |
arosales | wallyworld_, I don't think that index has the tools in it yet | 23:32 |
arosales | wallyworld_, but I think it could if you and utlemming sync up | 23:32 |
wallyworld_ | arosales: ok. i'm keen to progress that bit. scott was away last week so it's stalled till he gets back | 23:33 |
arosales | utlemming, actually if you include the tools in your index we could test out your index file by a simple juju environment yaml update | 23:33 |
arosales | utlemming, is your man then :-) | 23:33 |
arosales | utlemming, also works closely with smoser | 23:34 |
wallyworld_ | arosales: utlemming: we had sort of decided to have potentially separate urls for the tools and image metadata. so two separate index files | 23:34 |
wallyworld_ | the tools and image metadata index files will likely live in the same place but don't have to | 23:34 |
arosales | wallyworld_, would you have another env yaml file for each index? | 23:34 |
arosales | ie public-bucket-url? | 23:35 |
wallyworld_ | arosales: no, potentially a separate config key | 23:35 |
wallyworld_ | yes | 23:35 |
arosales | so perhaps public-bucket-cloud-url and public-bucket-tool-url ? | 23:35 |
arosales | or something like that? | 23:35 |
wallyworld_ | yes, something like that | 23:36 |
wallyworld_ | arosales: for CPC, it will just be the same location for both i *think* | 23:36 |
wallyworld_ | but having separate urls allows different folks to maintain the tools and image metadata if required | 23:36 |
arosales | it could very well live at the same location, but juju would need to find two different index files at that location, correct? | 23:36 |
wallyworld_ | yes | 23:36 |
arosales | ok | 23:36 |
wallyworld_ | arosales: are you ok with that approach? | 23:37 |
arosales | wallyworld_, that seems like a sane approach. Did you envision CPC data living at http://cloud-images.ubuntu.com/ | 23:37 |
arosales | utlemming, sound ok to you ^ | 23:38 |
arosales | wallyworld_, but generally a +1 from me on having two streams for tools and image data | 23:38 |
wallyworld_ | arosales: i think the image metadata would go there but smoser envisaged the tools metadata living in a charms-related url, now that i think about it | 23:39 |
arosales | having the flexibility for that to be the same URL or different ones only seems like a value add plus too | 23:39 |
wallyworld_ | arosales: utlemming: as soon as I have some tools metadata files to work with, i'll do the juju code to plug it in | 23:40 |
arosales | wallyworld_, btw I *think* the current HP public bucket doesn't have AZ2 or AZ3 info init | 23:41 |
arosales | s/init/in it/ | 23:41 |
wallyworld_ | arosales: yes, correct. i just did that as a quick thing to get us going | 23:41 |
arosales | I get a precise image not found when trying to deploy to AZ2 or AZ3 | 23:41 |
arosales | ok | 23:41 |
wallyworld_ | arosales: the correct images can easily be added | 23:42 |
arosales | I ran into that in a project that just had AZ2 configured per HP support suggestion | 23:42 |
arosales | worked around it by using AZ1 | 23:42 |
wallyworld_ | but sounds like we are really close now anyway | 23:42 |
arosales | pending when simplestreams and tools are available, if it is easy it would be worthwhile to add az2 and az3 info | 23:42 |
wallyworld_ | yes indeed. that was my expectation at least - that the image metadata update process would include all required regions | 23:43 |
wallyworld_ | arosales: utlemming: once this is fully done, for CPC, *no* public-bucket-url info will be needed in the user's env yaml | 23:43 |
wallyworld_ | it will "just work" | 23:44 |
wallyworld_ | hopefully :-) | 23:44 |
arosales | wallyworld_, ack | 23:44 |
arosales | juju should already have prior knowledge of what the simplestreams data is for cpc | 23:44 |
wallyworld_ | yes, it looks at cloud-images.... | 23:44 |
wallyworld_ | but public bucket still needed for tools | 23:45 |
wallyworld_ | we also need to improve the tooling for setting up private clouds | 23:45 |
arosales | gotcha | 23:45 |
wallyworld_ | but that is being worked on | 23:45 |
* thumper sighs | 23:57 | |
thumper | damn frustrating tests.. | 23:57 |
* thumper tries to work out just how many hoops to jump through is too many | 23:58 |