[02:08] <thumper> axw, wallyworld: a review plz? https://codereview.appspot.com/55840043/
[02:42] <davecheney> can I bootstrap trusty environments yet with juju
[02:42] <davecheney> serious question
[02:42] <davecheney> i'm having trouble getting a working debootstrap'd chroot
[03:06] <waigani> http://www.amazon.com/Haribo-Gummy-Candy-Sugarless-5-Pound/dp/B000EVQWKC
[03:08] <waigani> axw,wallyworld,thumper http://www.buzzfeed.com/michaelrusch/haribo-gummy-bear-reviews-on-amazon-are-the-most-insane-thin
[06:15] <bigjools> is it intentional that the parameter for --config= gets appended to `pwd` ?
[06:16] <bigjools> oh ha, it looks for a leading / otherwise assumes it's relative.  Which fails when I start my path with ~
[07:00] <dimitern> façadeName string ??
[07:00] <dimitern> c'mon guys :)
[07:01] <dimitern> should I start using cyrillic in variable names as well?
[07:14] <jam> dimitern: I had not seen that. While go is fully UTF-8, I'm pretty sure we want to stick to ascii
[07:14] <dimitern> jam, i thought so as well
[08:33] <jamespage> davecheney, I have been bootstrapping trusty with juju
[08:33] <jamespage> not sure what your schroot problem is - I'm not seeing that (indeed I build the juju-core with gccgo in a sbuild schroot OK)
[08:50] <rogpeppe> wallyworld_: i just tried to get utils/ssh tests to pass with cgo disabled. they failed because user.Current doesn't work. i wonder whether a better approach than user.Current might be to set the owner to the same as the owner of the parent directory.
[08:51] <rogpeppe> trivial review, anyone, BTW? https://codereview.appspot.com/55950043
[08:51] <dimitern> rogpeppe, LGTM
[08:51] <dimitern> (was looking already :)
[08:51] <rogpeppe> dimitern: thanks
[08:51] <rogpeppe> dimitern: i thought that was quick!
[08:52] <dimitern> rogpeppe, just link the bugs to the branch please
[08:52] <dimitern> rogpeppe, and mark them as appropriate
[08:53] <rogpeppe> dimitern: what do you mean by "mark them as appropriate"?
[08:53] <dimitern> mgz, are you about?
[08:54] <dimitern> rogpeppe, i mean fix committed and assign them to yourself?
[08:54] <rogpeppe> dimitern: i'd only mark one of them as fix committed
[08:54] <dimitern> rogpeppe, right, the other one is just skipped
[08:54] <rogpeppe> dimitern: i seem to be unable to link the bugs to the branch
[08:54] <dimitern> rogpeppe, error?
[08:55] <rogpeppe> dimitern: actually, one just succeeded - the search was failing. it worked eventually.
[08:55] <dimitern> rogpeppe, good :)
[08:56] <dimitern> mgz, if you're not working on goamz's vpc support i might pick it up now
[09:14] <jam> mgz: did you get a chance to look into status? Should I ?
[09:33] <rogpeppe> dimitern: ping
[09:33] <dimitern> rogpeppe, hey
[09:33] <rogpeppe> dimitern: i'm investigating a problem with provisionerSuite.TestLifeAsMachineAgent
[09:34] <rogpeppe> dimitern: it tries to destroy a state manager machine
[09:34] <rogpeppe> dimitern: i can't quite see why it wants to do that
[09:34] <dimitern> rogpeppe, to test life changes?
[09:34] <rogpeppe> dimitern: (it was succeeding before now because of a bug in Machine.Destroy)
[09:35] <dimitern> rogpeppe, what bug?
[09:35] <rogpeppe> dimitern: it allowed destruction of a machine with JobManageState
[09:36] <dimitern> rogpeppe, well, for the environ provisioner, there should be a way to test Life() and the only way to do that is to call EnsureDead/Destroy on it
[09:36] <dimitern> rogpeppe, at least in tests
[09:37] <rogpeppe> dimitern: the odd thing is that we're making the machine dead that we're connecting as
[09:38] <dimitern> rogpeppe, because that's how authentication works, right? you need an existing manager machine to login as
[09:38] <rogpeppe> dimitern: yes, but why does it need to be dying?
[09:38] <rogpeppe> dimitern: sorry, dead
[09:39] <dimitern> rogpeppe, so Life() will return something other than alive
[09:40] <rogpeppe> dimitern: i don't quite get what exactly we're trying to test here.
[09:40] <rogpeppe> dimitern: what provisioner functionality are we testing by having the authorized agent dead?
[09:41] <dimitern> rogpeppe, we're testing the api, not the worker
[09:41] <rogpeppe> dimitern: because we *can't* test in exactly the same way as currently, because it's not valid to destroy a state server machine. so i'm trying to work out the best way to fix the test.
[09:41] <dimitern> rogpeppe, you mean the state/api/provisioner tests, right?
[09:41] <rogpeppe> dimitern: no, state/apiserver/provisioner
[09:42] <rogpeppe> dimitern: that's the only suite that has TestLifeAsMachineAgent
[09:42] <dimitern> rogpeppe, ah, it's easier there - there's no requirement to add a JobManageState machine for the test, because we can override the authorizer to think the logged in user is an environ manager
[09:43] <rogpeppe> dimitern: i don't want to change the tests too much, but i'd like to know what provisioner facade functionality we're testing by setting machines[0] to dead
[09:43] <dimitern> rogpeppe, the Life() method
[09:44] <rogpeppe> dimitern: isn't that checked by seeing the return from the other dead machine?
[09:45] <dimitern> rogpeppe, i think the fix is to just delete line 53 in provisioner_test
[09:45] <rogpeppe> dimitern: i don't think that works
[09:45] <dimitern> rogpeppe, we don't really care if machine 0 has JobManageState
[09:46] <rogpeppe> dimitern: because we want to test:
[09:46] <rogpeppe> 	// 2. Environment managers can access any machine without
[09:46] <rogpeppe> 	// a parent.
[09:46] <dimitern> rogpeppe, yes?
[09:46] <dimitern> rogpeppe, it won't have a parent
[09:47] <rogpeppe> dimitern: so if machine 0 isn't an environment manager (which it won't be if we delete line 53) then we won't have an environment manager to call Life as
[09:47] <dimitern> rogpeppe, why are you concerned with line 53?
[09:48] <dimitern> rogpeppe, if we add 3 machines with JobHostUnits, and use EnvironManager: true in the authorizer, it should work
[09:48] <rogpeppe> dimitern: ah yes, i see that now. i thought that we needed to use machines[0] because it was the environ manager
[09:49] <rogpeppe> dimitern: but we're bypassing all that
[09:49] <dimitern> rogpeppe, exactly
[09:49] <dimitern> rogpeppe, i would be more concerned if we had a similar case for the client-side api tests
[09:50] <dimitern> rogpeppe, because there we can't play with a fake authorizer
[09:50] <rogpeppe> dimitern: unfortunately that breaks another test, sigh.
[09:53] <dimitern> rogpeppe, which one?
[09:53] <rogpeppe> dimitern: TestAPIAddresses and TestStateAddresses, which both require a state server
[09:54] <rogpeppe> dimitern: i'm changing the tests to add a state server machine and 3 others, then renumbering everything
[09:54] <dimitern> rogpeppe, is this still in the apiserver tests?
[09:54] <rogpeppe> dimitern: yes
[09:55] <dimitern> rogpeppe, ah, I see - yeah for these two the set up will be a bit different
[09:56] <rogpeppe> oh bugger it, i'll make another test suite
[10:03] <fwereade> dimitern, mgz: meeting
[10:04] <jam> fwereade: so my machine is acting *really* choppy, can you try to run the meeting?
[11:00]  * dimitern lunch
[13:14] <TheMue> rogpeppe: around?
[13:14] <rogpeppe> TheMue: yup
[13:15] <TheMue> rogpeppe: could it be that in case of stopping an application with ctrl-c a defer doesn't run?
[13:15] <rogpeppe> TheMue: that's definitely true
[13:15] <rogpeppe> TheMue: unless you specifically arrange for it
[13:15] <rogpeppe> TheMue: same as calling os.Exit
[13:15] <TheMue> rogpeppe: ok, then I have to do it
[13:16] <rogpeppe> TheMue: in the tests?
[13:16] <TheMue> rogpeppe: how would you do it? by runtime.SetFinalizer?
[13:16] <rogpeppe> TheMue: no, that won't work
[13:16] <rogpeppe> TheMue: is this in the tests, or the production code?
[13:16] <TheMue> rogpeppe: no, debug-log will be stopped by ctrl-c, and in this case the server-side logger isn't stopped
[13:17] <rogpeppe> TheMue: that's easy to arrange; i don't think you want to rely on explicit shutdown for that
[13:17] <TheMue> rogpeppe: if there's a nice way I'm happy
[13:18] <rogpeppe> TheMue: you can register the server side logger as a resource that will be stopped when the connection goes away
[13:18] <TheMue> rogpeppe: so far I had a defer, but that caused trouble
[13:18] <TheMue> rogpeppe: it is registered
[13:19] <rogpeppe> TheMue: so why isn't it stopping when the client drops the connection?
[13:19] <TheMue> rogpeppe: but calling debug-log another time later causes trouble (will make a paste next time)
[13:19] <rogpeppe> TheMue: is this code that's already in trunk?
[13:19] <TheMue> rogpeppe: the logged content somehow looks self-contained (older api call and response data contained in new ones)
[13:20] <TheMue> rogpeppe: no, it's under development
[13:21] <TheMue> rogpeppe: let me just make a fix at one of the branches that is already in review, found my mysterious bug there today, and later I can post the successor
[13:21] <TheMue> rogpeppe: first one is the server side, second one the new debug-log command (at a very early stage)
[13:22] <TheMue> rogpeppe: but yeah, I now get my debug log filtered via the api :D
[13:26] <TheMue> rogpeppe: if you wanna see some of the head-aching shit I get: http://paste.ubuntu.com/6802890/ :/ scroll right and you'll see
[13:27] <rogpeppe> TheMue: wow!
[13:27] <rogpeppe> TheMue: that's some serious backslash escaping
[13:28] <TheMue> rogpeppe: yeah, absolutely, the responses contain other responses
[13:29] <TheMue> rogpeppe: btw, what I wanna change now is really weird too. we've got code like watcher, ok := resources.Get(Id).(state.Watcher)
[13:30] <TheMue> rogpeppe: if not ok we report that the id is unknown
[13:30] <rogpeppe> TheMue: BTW I find launchpad.net/rjson/cmd/rjson useful for unpacking json requests into something readable
[13:30] <TheMue> rogpeppe: but in my case the id has been ok, but the interface hasn't been implemented
[13:30] <TheMue> rogpeppe: thx for the hint, will take a look at it
[13:30] <rogpeppe> TheMue: oh, of course i see why you're seeing responses inside responses
[13:31] <rogpeppe> TheMue: because you're using the API to get the log stream
[13:31] <rogpeppe> TheMue: I think that's actually not a good idea
[13:31] <TheMue> rogpeppe: sure, that has been the idea and my task
[13:32] <rogpeppe> TheMue: i think it would probably be quite a bit better if the log streamed out through a normal http GET request
[13:32] <rogpeppe> TheMue: in a similar way to the way that charms are currently uploaded.
[13:33] <rogpeppe> TheMue: does that make sense?
[13:34] <TheMue> rogpeppe: beside the command the UI guys want the API
[13:34] <rogpeppe> TheMue: the GUI guys can issue a GET request easily too
[13:34] <TheMue> rogpeppe: has been a task by jam and william already started reviewing it
[13:35] <TheMue> rogpeppe: could you tell me what's wrong with a watcher?
[13:35] <rogpeppe> TheMue: watchers are not ideal for streaming large quantities of data when you can't drop messages
[13:36] <rogpeppe> TheMue: they're designed for watching a stateful system (the juju state)
[13:36] <rogpeppe> TheMue: when streaming the log, we want as much bandwidth as possible, otherwise we're limited by network latency
[13:36] <TheMue> rogpeppe: where is the limit?
[13:36] <rogpeppe> TheMue: TCP is ideal for that, and that's what a GET request gives you
[13:37] <rogpeppe> TheMue: the limit on what?
[13:37] <TheMue> rogpeppe: using watchers. sure, TCP may give higher bandwidth, but only if that's actually a requirement.
[13:38] <rogpeppe> TheMue: of course it's a requirement - the log is huge amounts of data. it's really good if clients can keep up.
[13:38]  * TheMue has trouble changing a concept and implementation that multiple people already agreed on, after a short discussion here on irc
[13:39] <rogpeppe> TheMue: i understand that, sorry
[13:39] <rogpeppe> TheMue: i have suggested this implementation multiple times in the past
[13:39] <TheMue> rogpeppe: the log is filtered and limited on server-side, that's the job of the tailer
[13:39] <rogpeppe> TheMue: that's fine - you can still do that
[13:40] <TheMue> rogpeppe: so you start with initially let's say 10 lines and get deltas, filtered for the entity (machine/unit) you want
[13:40] <rogpeppe> TheMue: you'll just send the output to the http.ResponseWriter rather than using the stringswatcher
[13:40] <TheMue> rogpeppe: that reduces the bandwidth problem
[13:41] <rogpeppe> only if you're actually filtering
[13:41] <TheMue> rogpeppe: would you mind discussing this as a design decision with jam and william?
[13:41] <rogpeppe> TheMue: i would be happy to, if either of them was around
[13:41] <TheMue> rogpeppe: can do so later
[13:42] <TheMue> rogpeppe: so I'll continue with the current way, there are right now many stakeholders expecting it that way
[13:42] <rogpeppe> TheMue: the other advantage of this approach is that you don't have to worry about exponentially repeating log requests
[13:43] <rogpeppe> TheMue: ok. but you know what i'll say in the review :-)
[13:44] <TheMue> rogpeppe: you can do so, no prob. then there will be a good discussion. the review has been around for quite a while and william and dimitern took a look at it (only comments by william so far)
[13:45] <rogpeppe> TheMue: what's the link for the review?
[13:45] <TheMue> rogpeppe: will post it here again after the next update to not interfere, have to do one change there before
[13:46] <TheMue> rogpeppe: won't take long, I've already tested it in the successor branch
[13:52] <mgz> sinzui: two things: I retargetted a bunch of bugs, yell at me if you have issues; please review https://codereview.appspot.com/56020044
[13:52] <sinzui> mgz, I saw. I was going to retarget them to 1.17.2 myself
[13:52]  * sinzui has to create it first though
[13:52] <mgz> renaming the milestone would also work as a trick, no?
[13:54] <mgz> (I don't really like the workflow of targeting at the next minor dev release, and bumping and bumping)
[13:54] <mgz> (but it does work kinda with the landing bot flipping the fix committed bit itself)
[14:18] <sinzui> mgz: We are getting pressure from users who want to bleed. We need to release more often...I don't think we will ever know how many point releases are needed to get to stable. maybe we target to 18, and pull the few bugs we care about into the point release.
[14:24] <mgz> sinzui: that would be better I think
[14:26] <mgz> our bleeding users need to love destroy-environment though, dev releases need to break things
[14:42] <rogpeppe> fwereade: ping
[14:42] <fwereade> rogpeppe, meeting, reping in 15 please?
[14:43] <rogpeppe> fwereade: ok
[14:53] <rogpeppe> dimitern: i think i've discovered a bug in worker/firewaller; can i run through it with you please?
[14:53] <dimitern> rogpeppe, sure, what is it?
[14:53] <rogpeppe> (i'm not sure it was the one i'm looking for that causes the cmd/jujud tests to hang up, but it still looks buggy)
[14:54] <rogpeppe> dimitern: in Firewaller.startMachine, if we get an error calling fw.unitsChanged, we return immediately
[14:54] <rogpeppe> dimitern: but we've just added a new machined to the machineds map
[14:54] <rogpeppe> dimitern: so it hasn't got a running watch loop
[14:54] <rogpeppe> dimitern: so when the firewaller tries to quit later, it'll wait forever for that machined
[14:55] <rogpeppe> dimitern: does that make sense to you?
[14:55] <dimitern> rogpeppe, is this something i changed with the api introduction or?
[14:55] <rogpeppe> dimitern: i dunno
[14:56] <rogpeppe> dimitern: perhaps the API introduced some other error which triggered this issue
[14:56] <rogpeppe> dimitern: yes, it looks like it's a recent change
[14:57] <dimitern> rogpeppe, some parts of the code had to be changed, because all api methods return an error as well
[14:57] <dimitern> rogpeppe, so I had to change a few places to either return the error or kill the tomb and return
[14:58] <rogpeppe> dimitern: actually it looks like the bug was still there before
[14:58] <rogpeppe> dimitern: i'll see if fixing it fixes my problem
[14:58] <dimitern> rogpeppe, whew at least it's not my doing
[14:59] <rogpeppe> dimitern: :-)
[14:59] <dimitern> rogpeppe, and i did test it thoroughly on ec2 after merging the api changes
[14:59] <rogpeppe> dimitern: (i know that feeling very well)
[15:00] <dimitern> rogpeppe, actually i had to stop myself from tearing most of the firewaller tests apart and getting rid of 50% duplicated code, i'm sure the main code can be improved in such a manner as well
[15:03] <jamespage> fwereade, sorry to be a pita but can I bring bug 1271941 to your attention
[15:03] <jamespage> https://bugs.launchpad.net/juju-core/+bug/1271941
[15:03] <hatch> yesterday I built juju-core from trunk and it doesn't look like it works to deploy locally on precise. After bootstrapping and deploying the GUI I only get a list of two machine names when I do juju status -e local
[15:04] <hatch> has the functionality of status been changed?
[15:06] <hatch> version is 1.17.1
[15:08] <natefinch> hatch: status is mostly the same.... what information is it missing?
[15:08] <hatch> natefinch https://www.evernote.com/shard/s219/sh/e97b60b5-20e8-4d3e-bded-3c0f62efa179/23d15a7ed70c5bdade557146fc08e377
[15:09] <hatch> natefinch so...all of it? ;)
[15:09] <hatch> lxc-ls shows two machines
[15:10] <hatch> I can tear it down and try again but I wanted to check here first
[15:10] <natefinch> hatch: heh, that does seem somewhat... spartan
[15:10] <natefinch> hatch: it definitely should have all the same information
[15:10] <hatch> ok thanks I'll tear it down and try again, if I get the same issue I'll file a bug
[15:10] <fwereade> jamespage, I'm going to bounce that one straight over to thumper (I'll mail him) -- last time I saw him he said he was going to look into lxc on trusty, and this is highly likely to be related
[15:11] <jamespage> fwereade, I confirmed with lxc upstream folks that the way autostart is specified has changed
[15:11] <jamespage> is now lxc config for the container
[15:11] <fwereade> jamespage, ok, that makes sense, thanks
[15:12] <fwereade> jamespage, I see that's already in the bug, double-thanks
[15:30] <sinzui> fwereade, jam, I was planning on 1.17.1 release today. Are we tracking mramm's request to update goose with a bug?
[15:31] <mramm> sinzui: I think mgz said he had that fix in, but was working on a follow up.
[15:32] <fwereade> sinzui, mramm, mgz: sorry, I'm not clear what *updating goose* gives us
[15:32] <fwereade> sinzui, mramm, mgz: my understanding was that it was a prerequisite for the forthcoming branch
[15:32] <mramm> fwereade: right, it requires both goose and juju changes
[15:32] <mramm> or at least that's what I understood as well
[15:33] <mgz> that's it.
[15:33] <mgz> if we do 1.17.1 today, the networking fix is not making it
[15:34] <sinzui> mgz, I can delay till tomorrow.
[15:35] <fwereade> mgz, and your ETA for reviewable code was later today?
[15:35]  * sinzui wants to stop explaining to people that daily builds won't get them fixes
[15:35] <fwereade> mgz, sinzui: tomorrow sounds smart then
[15:36] <mgz> okay, sounds like a plan.
[15:39]  * rogpeppe goes for lunch
[15:40] <TheMue> fwereade: any chance to take the next look at my branch?
[16:02] <fwereade> TheMue, if not by EOD, I'll do it before bed
[16:06] <TheMue> fwereade: EOD or tomorrow, don't destroy your evening
[16:07] <TheMue> fwereade: I'm currently on the client side and it looks good so far, only one weird behavior left
[16:08] <TheMue> fwereade: but rogpeppe has a different idea about the design, not using API watchers but HTTP requests
[16:09] <TheMue> fwereade: I asked him to discuss it with jam and you
[16:23] <fwereade> TheMue, added a couple of comments, please let me know your responses, expect mine slow, I'll be meeting in a sec
[16:24] <rogpeppe> fwereade: dammit, missed you again :-)
[16:25] <rogpeppe> fwereade: ok, so i think i said this before: i think the logging streaming would be better done outside the main RPC-oriented API interface
[16:25] <rogpeppe> fwereade: either as a long lived http GET request or a unidirectional websocket connection, or whatever is most appropriate
[16:26] <rogpeppe> fwereade: the API is well designed for state-oriented watchers, but not so much for streaming large quantities of data
[16:27] <rogpeppe> fwereade: so I'm suggesting we go for an approach similar to the charm upload stuff (except the other direction, of course)
[16:27] <TheMue> rogpeppe: btw, where is the log in an HA environment? would be an interesting aspect for the implementation too
[16:27] <fwereade> rogpeppe, elaborate on the impact please?
[16:27] <rogpeppe> fwereade: the impact of what?
[16:28] <rogpeppe> fwereade: as far as implementation goes, i think this makes both server and client simpler
[16:28] <rogpeppe> fwereade: as far as runtime performance goes, the download speed will not be latency-limited
[16:29] <rogpeppe> fwereade: (and you'll win quite a bit by losing JSON encapsulation)
[16:35] <fwereade> rogpeppe, I guess what I'm asking is at what point you think the current implementation would actually hurt us, weighed against the benefits of providing an implementation that works for the CLI (probably whole-env) and the GUI (almost certainly unit-/machine-specific)
[16:36] <rogpeppe> fwereade: yes, i think that it will actively affect the efficacy of debug-log in real environments
[16:36] <rogpeppe> fwereade: GUI connections tend to be very high-latency
[16:37] <rogpeppe> fwereade: plus i don't think it would actually be at all difficult to implement - there's actually less logic required than when using a watcher
[16:43] <TheMue> fwereade: thx for review, added notes
[16:59] <rogpeppe> TheMue, fwereade: just looking at the apiserver/debugger implementation. it's really not right, i'm afraid.
[16:59] <rogpeppe> TheMue: there is absolutely no need to drop data on the floor
[17:00] <TheMue> rogpeppe: we already agreed on not doing that
[17:00] <TheMue> rogpeppe: see comments
[17:00] <rogpeppe> TheMue: you don't need to return an error either
[17:00] <rogpeppe> TheMue: we can just wait until the client is ready to read.
[17:00] <TheMue> rogpeppe: if a client doesn't drop the connection but also doesn't call Next()?
[17:01] <rogpeppe> TheMue: yes
[17:01] <TheMue> rogpeppe: but the tailer fills its buffer during this time
[17:01] <rogpeppe> TheMue: with a separate websocket connection, we can drop all the logic from that file
[17:01] <rogpeppe> TheMue: why does that matter?
[17:02] <TheMue> rogpeppe: too much memory consumption?
[17:02] <rogpeppe> TheMue: what? the data is in a file, no?
[17:03] <rogpeppe> TheMue: surely all that's needed here is NewTailer(logfile, websocketConnection, ...) ?
[17:03] <TheMue> rogpeppe: the tailer polls out of it and writes to the writer
[17:03] <TheMue> rogpeppe: so you now also want to change the tailer?
[17:03] <rogpeppe> TheMue: there's no need to change the tailer
[17:04] <rogpeppe> TheMue: the tailer does not use unlimited memory if a call to Write blocks, does it?
[17:04] <TheMue> rogpeppe: afaik it doesn't, yes
[17:05] <TheMue> rogpeppe: it simply calls Write() of the writer
[17:05] <rogpeppe> TheMue: exactly
[17:05] <rogpeppe> TheMue: and that writer can be a websocket
[17:06] <TheMue> rogpeppe: ok, and what would the client side (cli and ui) look like?
[17:06] <rogpeppe> TheMue: so in that case you don't need any of the logic in debugger/logtailer.go at all
[17:06] <rogpeppe> TheMue: the client side would just make a websocket connection exactly the same as it does for the current API
[17:07] <rogpeppe> TheMue: you would need a little bit of extra logic to allow the client to adjust the filter
[17:07] <jcastro> natefinch, hah, I was doing "juju status foo"
[17:07] <jcastro> I forgot the -e!
[17:07] <rogpeppe> jcastro: i saw that!
[17:07] <TheMue> rogpeppe: a second, different way beside the current one. and in an HA env, where do they connect to?
[17:07] <natefinch> jcastro: I was thinking "hey yeah, that would be useful...." and then I was like.... "wait a minute..."
[17:08] <rogpeppe> TheMue: it's similar to the way that the charm upload currently works
[17:08] <jcastro> I was like "man that is a great idea let me try it." but it always returned "local" no matter what
[17:08] <natefinch> jcastro: sounds like the bug is that we don't complain about extra parameters
[17:08] <jcastro> natefinch, it's nice when you ask for things and someone else has thought of it and it's like nearly finished or done by the time you ask, heh.
[17:10] <TheMue> rogpeppe: sounds relatively ok so far (only having another way beside the api doesn't convince me). but to change that, john, william, gary (who is waiting for the api access) and curtis have to agree
[17:11] <rogpeppe> TheMue: i've chatted with gary and the gui team. they think it's a reasonable idea (except they don't want to delay things)
[17:11] <TheMue> rogpeppe: currently cannot estimate the effort
[17:11] <rogpeppe> TheMue: i'm quickly putting together an example so you can see what i'm talking about
[17:12] <TheMue> rogpeppe: thanks
[17:14]  * TheMue is off for today, dinner. will take a look in again later
[17:22] <hatch> I'm trying to build juju-core on another machine and after following the steps in the readme, when I get to using `go install` it throws a few errors saying it can't find crypto, openpgp, websocket, clearsign... any ideas on how to resolve this?
[17:27] <hatch> for example `cannot find package "code.google.com/p/go.crypto/ssh"`
[17:42] <mgz> argh. arghargh.
[18:01] <mgz> rogpeppe: have a moment? by what mechanism is coretesting.CACert treated as valid by the default go http stuff?
[18:02] <mgz> I want a not-valid cert to add a test for an error path
[18:02] <rogpeppe> mgz: it's added to the valid root certificates in the request
[18:02] <rogpeppe> pwd
[18:02] <mgz> where?
[18:03] <rogpeppe> g
[18:03] <rogpeppe> one mo
[18:03] <mgz> :)
[18:03] <rogpeppe> mgz: look in state/api/apiclient.go:/^Open
[18:04] <mgz> specifically, if I get http.DefaultClient, that doesn't complain on an http request against a tls-created server
[18:04] <mgz> I think I'm missing a core idea here
[18:05] <rogpeppe> mgz: https request, right?
[18:06] <mgz> ...probably not, that may well be the issue
[18:06] <mgz> it's pretty confusing http: scheme works against tls servers at all
[18:07] <rogpeppe> mgz: yeah
[18:07] <rogpeppe> TheMue: https://codereview.appspot.com/56100043/
[18:07] <rogpeppe> TheMue: see state/apiserver/log.go which implements all the logic
[18:07] <rogpeppe> TheMue: at least, it might do - i haven't actually run the code
[18:08] <rogpeppe> TheMue: but i *think* that's about all that's needed
[18:08] <mgz> ...this is no longer a simple change
[18:08] <rogpeppe> TheMue: it allows the client to asynchronously change the filtering, and it uses a websocket with one frame for each line of the log
[18:09] <rogpeppe> mgz: what error path are you trying to check?
[18:10] <rogpeppe> fwereade: this is what i was thinking of for the logging interface: https://codereview.appspot.com/56100043/
[18:16] <mgz> rogpeppe: well, that gets me on to an obscure https error "local error: record overflow"... possibly not from inside our code
[18:16] <rogpeppe> mgz: ha
[18:16] <rogpeppe> mgz: what *are* you doing :-)
[18:16] <rogpeppe> pwd
[18:17] <mgz> trying to write some tests for LoadStateFromURL after making it do the right thing
[18:19] <rogpeppe> mgz: can't you just use a different CACert to connect?
[18:20] <mgz> the code for the server (which may be at fault?) is in environs/httpstorage/backend.go
[18:20] <mgz> rogpeppe: I could, at the moment I'm using the same one from coretesting both sides and it's not happy
[18:20] <rogpeppe> mgz: you can't use CACert on the server side
[18:21] <mgz> can I get a better traceback for this error
[18:21] <mgz> rogpeppe: ah. that's what some existing testing did.
[18:21] <rogpeppe> mgz: find out where "record overflow" is generated and add a log print
[18:21] <mgz> rogpeppe: I'm pretty sure it's from inside net/http
[18:22] <rogpeppe> mgz: that's ok, just bung a log print in there
[18:22] <mgz> fair enough
[18:23] <rogpeppe> mgz: (use the standard log package)
[18:23] <rogpeppe> mgz: the reason CACert doesn't work on the server side is that it hasn't got the private key in there. you need to use CAKey
[18:24] <mgz> the server gets setup with both
[18:24] <rogpeppe> mgz: ah, well that should be fine
[18:24] <rogpeppe> mgz: BTW for your debug print, i often find this package useful: code.google.com/p/rog-go/exp/runtime/debug
[18:25] <rogpeppe> mgz: log.Printf("something; callers: %s", debug.Callers(0, 20))
[18:25] <rogpeppe> mgz: it'll print the current stack trace on a single line (just file/line numbers)
[18:26] <mgz> ta
[18:52] <mgz> rogpeppe: proposing what I have in case you have any insight from looking at it
[18:53] <rogpeppe> mgz: ok. i'm afraid i've got no more time tonight
[18:53] <rogpeppe> mgz: i'm trying to get a branch proposed before supper happens in 10 minutes :-)
[18:53] <mgz> :D
[18:54] <mgz> I think our testing stuff is just too naive around tls somewhere. but it's not clear to me how
[18:54] <rogpeppe> mgz: it's entirely possible
[19:03] <mgz> proposed, I may have inspiration going back to it later.
[19:04] <rogpeppe> if anyone cares for a review, there are lot of files in this one, but the change is quite simple: https://codereview.appspot.com/54230044/
[19:04] <rogpeppe> right, that's me
[19:04] <rogpeppe> i smell fooood
[19:04] <rogpeppe> g'night all
[19:17] <hatch> is there anyone besides dimitern who is familiar with the PutCharm code? In testing the GUI implementation I am getting a 405 (Method not allowed) when trying to post the charm on a trunk build of juju-core
[19:31] <natefinch> hatch: I don't know anything about putcharm, unfortunately.  However, the log might have more useful information
[19:34] <hatch> natefinch is there a separate log from the ones in /var/log/juju which may contain more information? (I've already checked those)
[19:38] <natefinch> hatch: no, that's where I'd look..... 405 sounds like the server isn't liking the request
[19:39] <hatch> yeah which is odd because that's what it's supposed to do :) oh well no problem I'll chat with Dimiter when he gets in tomorrow thanks anyways
[19:42] <natefinch> if you overlap with ian, it looks like he's had his fingers in there semi-recently too
[20:06] <_thumper_> fwereade: around?
[20:07] <natefinch> o/ thumper
[20:07] <thumper> hi natefinch
[20:14] <thumper> waigani: https://bugs.launchpad.net/juju-core/+bug/1271941
[20:22] <natefinch> thumper: is there an environment variable that describes where the upstart directory is, or is it just always /etc/init?
[20:22] <thumper> natefinch: always /etc/init
[20:22] <thumper> pretty sure
[20:23] <natefinch> dang... trying to mock out upstart stuff so I can test the code I wrote (which interacts with upstart)
[20:26] <axw> natefinch: I'm pretty sure in the juju-core/upstart package you can specify the init dir
[20:26] <axw> not sure if that helps you or not
[20:27] <axw> yeah - the local provider in trunk changes it in testing (I'm about to change all that though)
[20:27] <natefinch> axw: you can change it per-service that gets created, but that doesn't help from outside a function that I want to test. There's no global override.  Which pretty much means I have to make one.... not the end of the world, but kind of ugly to expose
[20:27] <axw> ah ok
[20:27] <axw> yep
[20:35] <natefinch> ....wow, I can't believe that worked on the first try
[20:59] <natefinch> thumper: I made a bunch of changes and then realized I was on the wrong branch.  Nothing committed yet. How can I switch branches and keep the local edits?
[21:00] <thumper> natefinch: I think if you just do 'bzr switch' it should just work
[21:00] <natefinch> thumper: ahh, cool
[21:00] <thumper> alternatively, use 'bzr shelve' change, then 'bzr unshelve'
[21:02] <natefinch> thumper: thanks, bzr switch just worked
[21:02] <thumper> coolio
[21:46] <wallyworld> thumper: https://pastebin.canonical.com/103503/