[00:27] <arosales> do folks know which ports juju client needs to be open?
[00:27] <arosales> also I am working with mxc in #juju and he is hitting http://pastebin.com/AQArivJz
[00:28] <arosales> doing a comparison, the next logical step from juju should have been juju.provider.azure environ.go:406 picked tools
[00:28] <arosales> but that is where the bad request comes from
[00:29] <arosales> does anyone know why juju.provider.azure environ.go:406 would fail in a bad request on bootstrap?
[00:31] <thumper> arosales: um...
[00:31] <thumper> arosales: the api/state ports to the server
[00:32] <thumper> arosales: i'd poke axw when he starts about the bootstrap issue
[00:38] <arosales> thumper, thanks i'll see if axw has any insights when he comes online
[00:39] <arosales> thumper, re the api/state ports to the server
[00:39] <thumper> yes...
[00:39] <arosales> on the client
[00:40] <arosales> does juju-core binary need to call out on any specific port?
[00:40] <thumper> probably also needs access to the charm store (not sure which port) if deploying standard charms
[00:41] <arosales> aside from deploying, and even at that the client wouldn't do the deploying
[00:41] <thumper> the default port for state is 37017 and api is 17070
[00:41] <arosales> client in this context is where juju commands are being issued from
[00:41] <thumper> right, the juju binary talks to the bootstrap node primarily over the api now (17070), but a few still use state directly (37017)
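
A minimal way to check that reachability from the client side - a sketch in Go, with a hypothetical bootstrap-node hostname and the default ports thumper cites:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        host := "bootstrap.example.com" // hypothetical bootstrap node address
        // default ports per the discussion above: 17070 (API), 37017 (state)
        for _, port := range []string{"17070", "37017"} {
            conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 5*time.Second)
            if err != nil {
                fmt.Printf("port %s unreachable: %v\n", port, err)
                continue
            }
            conn.Close()
            fmt.Printf("port %s reachable\n", port)
        }
    }
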
[00:53] <arosales> thumper, gotcha, thanks for the info
[00:58] <arosales> axw, hello
[00:58] <axw> arosales: howdy
[00:59] <arosales>  I am working with mxc in #juju and he is hitting http://pastebin.com/AQArivJz
[00:59] <arosales>  doing a comparison, the next logical step from juju should have been juju.provider.azure environ.go:406
[01:00] <arosales> do you know why that section of code would fail with a 400: bad request
[01:00] <arosales> axw, ^
[01:00] <axw> arosales: not sure, looking
[01:01] <arosales> axw thanks
[01:01] <arosales> I wasn't able to reproduce the issue he was seeing
[01:05] <arosales> axw, I am going to step away from the computer but any insights you have would be appreciated.  I think at this early stage in the process he may have a cert config issue, but I am uncertain how to confirm that.
[01:05] <axw> arosales: no worries, I will see what I can see
[01:06] <axw> arosales: is there a bug or something I can update?
[01:06] <axw> no mxc on #juju atm
[01:07] <arosales> axw, unfortunately no, he only posted http://askubuntu.com/questions/388419/juju-bootstrap-fails-in-azure-badrequest-the-affinity-group-name-is-empty-or
[01:07] <arosales> he was in #juju but left
[01:07] <arosales> he also had a post to the juju mailing list
[01:08] <arosales> axw, don't spend too much time; I thought I would just check with you on why that section of code was failing, as a clue to what may have been going on.
[01:08] <axw> okey dokey, I will update the list if I find something, or file a bug if there is one
[01:08] <arosales> axw, thanks
[01:08] <axw> arosales: sure, nps
[02:00] <jam> hazmat: There will be a couple of new APIs coming in 1.18 still, but we don't expect any existing ones to be removed. (status is added, PutCharm/Deploy/UpgradeCharm added, etc) MachineConfig is likely to stick around as is (though wasn't in 1.16)
[02:04] <axw> uh oh, I think my hard drive is failing
[02:06]  * axw fscks, bbs
[03:26]  * thumper heads to the supermarket
[04:31] <dimitern> thumper, did you get my mail yesterday about cadmin?
[04:31] <thumper> hi dimitern
[04:31] <dimitern> thumper, hey
[04:32] <thumper> dimitern: yeah, I got it, but not looked yet sorry
[04:32] <dimitern> thumper, no worries
[04:34] <dimitern> hatch, hey, you around?
[04:34] <hatch> dimitern sortof :) what's up?
[04:35] <dimitern> hatch, I wanted to check with you about the charms upload implementation
[04:35] <hatch> oh ok cool, so where are we on that?
[04:36] <dimitern> hatch, basically, we settled on no API calls, just a POST /charms?series=<series> and a multipart/form-data body containing a single zip file with the charm, and returning a json response in the form {"code":<int status code>, "error": "message", "charmUrl": "the url"}
[04:37] <hatch> what about the auth?
[04:37] <dimitern> hatch, if there is an error, code and error are populated and charmUrl is missing; on success charmUrl is populated and error and code are missing
[04:37] <dimitern> hatch, basic http auth with a user-tag and password
[04:38] <dimitern> hatch, i'm finishing the proposal later today
[04:38] <hatch> hmm
[04:39] <hatch> so https post with username and secret?
[04:39] <dimitern> hatch, the implementation allows further development, like implementing GET /charms?url=<charmUrl>&file=icon.svg, but not implemented for now
[04:40] <dimitern> hatch, the same creds as for Login() - tag and password, where the tag must be a "user-xxx"
[04:41] <hatch> and what does this give us over the previously discussed pattern?
[04:41] <dimitern> hatch, it's the simplest approach
[04:43] <hatch> The only part I'm concerned about is the http auth stuff
[04:45] <hatch> not that we can't do it
[04:45] <hatch> I can't remember how the auth data is handled across the various login methods
[04:45] <hatch> I'll have to check when I'm back at my computer
[04:46] <dimitern> hatch, well, it's standard "Authorization" header containing `Basic realm="juju" <base64-encoded "tag:password" plain text string>`
[04:46] <hatch> right but I'm not sure if we store that information or not
[04:46] <dimitern> hatch, ah, sorry `Basic <base64-part>` no realm
[04:47] <hatch> it's been quite a while since I looked at the login code
[04:47] <dimitern> hatch, it should be almost exactly the same
[04:47] <hatch> I mean that the user will already be logged in
[04:47] <hatch> so if we don't store their creds we will need to ask for it again
[04:48] <dimitern> hatch, hmm, yes, but you can generate the base64 token and store it in a cookie or something i guess
[04:48] <hatch> right yeah
[04:49] <dimitern> hatch, anyway, all that is just a heads up for what's coming, not set in stone, we'll change accordingly to accommodate the gui as needed
[04:50] <hatch> right, yeah thanks, I've got this all down and when I'm not so darn tired I'll take a look at what we have/need - but it sounds good :)
[04:50] <hatch> thanks
[04:51] <dimitern> hatch, sure, sorry to bug you so late btw :)
[04:51] <hatch> dimitern can you cc me on the proposal so I can take a look in the morning?
[04:51] <hatch> haha no problem :) thanks for keeping me up to date
[04:52] <dimitern> hatch, I will update the document and send you a link
[04:52] <hatch> great thanks
[04:52] <hatch> have a good one!
[04:53] <dimitern> hatch, you too!
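
Putting dimitern's description together, the upload would look roughly like this - the endpoint, series parameter, multipart zip body, user-tag basic auth, and response fields come from the proposal above, while the host, credentials, and file names are hypothetical:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "io"
        "log"
        "mime/multipart"
        "net/http"
        "os"
    )

    func main() {
        // package the charm zip into a multipart/form-data body
        var body bytes.Buffer
        w := multipart.NewWriter(&body)
        part, err := w.CreateFormFile("charm", "mycharm.zip") // field/file names hypothetical
        if err != nil {
            log.Fatal(err)
        }
        f, err := os.Open("mycharm.zip")
        if err != nil {
            log.Fatal(err)
        }
        if _, err := io.Copy(part, f); err != nil {
            log.Fatal(err)
        }
        f.Close()
        w.Close()

        // POST /charms?series=<series>, per the proposal
        req, err := http.NewRequest("POST",
            "https://state-server.example.com:17070/charms?series=precise", &body)
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("Content-Type", w.FormDataContentType())
        // basic auth is base64("tag:password"), the tag being a "user-xxx" tag
        req.SetBasicAuth("user-admin", "secret")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        // on failure code and error are set; on success only charmUrl is
        var result struct {
            Code     int    `json:"code"`
            Error    string `json:"error"`
            CharmURL string `json:"charmUrl"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%+v\n", result)
    }
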
[05:06]  * thumper has a "fuck yeah!" moment
[05:10] <dimitern> thumper, \o/
[05:11] <thumper> dimitern: yay tests is all I can say
[05:11] <thumper> dimitern: http://pastebin.ubuntu.com/6549442/ are the tests that are passing :-)
[05:12] <dimitern> thumper, :) looking
[05:12] <thumper> dimitern: working on the server side of juju-run
[05:12] <dimitern> thumper, great!
[05:12] <thumper> dimitern: so you can do something like this on any machine hosting a unit
[05:13] <dimitern> thumper, looking forward to trying it out when done
[05:13] <thumper> juju-run my-unit/4 "some magic"
[05:13] <thumper> inside cron
[05:13] <thumper> so runs in a hook context etc...
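
A crontab entry along those lines might look like this (illustrative schedule and command; the juju-run interface was still being built at this point):

    # run a command in my-unit/4's hook context every five minutes
    */5 * * * * juju-run my-unit/4 "some magic"
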
[05:13]  * thumper pops the stack and writes tests for uniter.RunCommands
[05:14] <dimitern> nice!
[05:15] <thumper> actually, now might be a good time to stop for the day
[05:15] <thumper> finish on a high and all that
[05:15]  * thumper does some admin bits
[05:17] <dimitern> :)
[05:18] <davecheney> axw: i hope that failure is not related to not having a tty
[05:18] <axw> davecheney: which failure is that?
[05:19] <axw> davecheney: the azure/ec2/etc. bootstrap failure?
[05:19] <davecheney> the same
[05:19] <davecheney> my money is still on timeouts
[05:19] <axw> davecheney: nfi, the gpgv thing is weird
[05:20] <axw> I've found a bunch of bug reports on this, and all the resolutions are "oh I just re-ran apt-get update and it fixed it"
[05:20] <davecheney> axw: do you pass ssh -t when dialing the bootstrap node ?
[05:20] <axw> davecheney: yes, for sudo
[05:20] <davecheney> why would sudo require that
[05:20] <davecheney> ubuntu user doesn't need a password
[05:21] <axw> davecheney: it's needed for manual provisioning/bootstrap
[05:21] <axw> could be disabled for cloud provider
[05:21] <davecheney> nonsense
[05:22] <axw> davecheney: nonsense? it's needed because the ubuntu user doesn't necessarily exist on a machine you're manually bootstrapping
[05:22] <davecheney> axw: seriously ?
[05:22] <davecheney> oh ffs
[05:22] <davecheney> that's just peachy
[05:23] <axw> davecheney: why does that matter?
[05:23] <davecheney> i guess if you pass -t to ssh
[05:23] <davecheney> that is all we need
[05:23] <axw> yup
[05:25] <thumper> night all
[07:57] <fwereade_> jam, so, looks like we can pull the trigger on 1.16.5?
[07:57] <jam> fwereade_: I believe so. I don't have anything stuck in my head we're waiting for
[08:00] <fwereade_> jam, \o/
[08:00] <jam> 1.17.0 on the other hand ... :)
[08:04] <axw> does anyone know if there are cloud (ec2, azure, ...) mirrors for cloud-archive?
[08:04] <axw> seems that installing mongodb-server from cloud archive is what takes so damn long on azure
[08:06] <fwereade_> jam, yeah :(
[08:06] <jam> axw: I don't believe it is mirrored by anyone, ATM.
[08:06] <fwereade_> jam, actually, hmm, I should make force-destroy-machine delegate actual removal to the provisioner
[08:07] <fwereade_> jam, axw, who should we be talking to about that? ben howard?
[08:07] <axw> jam: okey dokey
[08:07] <jam> fwereade_: I think it is possible, but hard to mirror stuff that isn't in the central archive.
[08:08] <jam> I actually think the juju-mongodb proposal, possibly with V8 stripped out will get us big wins here
[08:08] <jam> as we can make the package that gets installed a lot smaller
[08:08] <jam> (today we install 'mongodb' which gives us client and server, and client is 60-90MB)
[08:08] <jam> while I agree local mirrors will still be faster, I'm not sure how that works with non Ubuntu archives
[08:09] <jam> (when we say "add cloud-archive:tools" how is that found in a mirror?)
[08:30] <axw> jam, fwereade_: https://bugs.launchpad.net/juju-core/+bug/1259453   -- marked as Low, feel free to increase if you think it's worth pursuing
[08:30] <_mup_> Bug #1259453: Bootstrap is significantly delayed by installing mongodb-server from cloud-archive <juju-core:Triaged> <https://launchpad.net/bugs/1259453>
[08:30] <axw> it won't be a problem with Trusty
[08:39] <fwereade_> axw, I think precise is still pretty important, but it sounds like it may be hard to do
[08:39] <axw> yeah, probably
[09:58]  * fwereade_ wrote some code!
[09:59] <fwereade_> rogpeppe1, you're OCR -- https://codereview.appspot.com/39970043/ and https://codereview.appspot.com/37610044/
[09:59] <fwereade_> rogpeppe1, they're identical
[09:59] <rogpeppe1> fwereade_: looking
[10:04] <jam> fwereade_: dimitern did comment on it. if you set safe-mode: true will this actually stop instances?
[10:04] <fwereade_> jam, safe mode distinguishes between "stopping" and "unknown"
[10:05] <axw> if anyone has spare cycles, this could do with a review sooner rather than later (it fixes a 1.17 critical): https://codereview.appspot.com/39940043/
[10:05] <jam> fwereade_: did you actually test this?
[10:05] <dimitern> fwereade_, so safe-mode is selectively destructive then :)
[10:05]  * axw eods
[10:06] <fwereade_> jam, I'm running it now, I *think* I'm being clever and parallel rather than slapdash ;p
[10:09] <jam> fwereade_: so this is 'juju status' in manual provisioning when you try to add a machine; it can create the machine in the DB but then calls DestroyMachine http://paste.ubuntu.com/6550335/
[10:10] <jam> that is what I was saying about it ending up 'pending but dying'
[10:11] <jam> but never alive
[10:12] <fwereade_> jam, so what is it that failed after the machine was created in the db?
[10:12] <jam> fwereade_: well, I haven't finished the manual stuff, so MachineConfig failed in the middle, and then it couldn't set up upstart, but the theoretical "something after allocating a machine id but before the agent is actually running"
[10:12] <jam> we try to clean up
[10:13] <jam> but I don't *quite* see the point, as the cleanup doesn't actually work
[10:13] <fwereade_> jam, yeah, the problem is (I think) that an instance id is assigned so we don't fast-forward the destroy
[10:14] <fwereade_> jam, I'm not really comfortable with differentiating based on the "manual:" bit
[10:14] <jam> fwereade_: well, we're in manual provisioning code right then, if we have a way to actually signal it should be destroyed
[10:14] <fwereade_> jam, just because I'm sure that one day some provider will give us a real instance id starting with "manual:"
[10:14] <jam> we *could* call something to remove the instance id as we clean up
[10:15] <fwereade_> jam, well, yeah, we can write the code :)
[10:16] <fwereade_> jam, but, generally, removing an instance id is a pretty surprising thing to do
[10:16] <fwereade_> jam, and I'm reluctant to normalise it
[10:17] <fwereade_> jam, really we should have flagged manual instances in some way that wasn't just a magic string embedded in the instance id
[10:21] <fwereade_> jam, plausibly we could start doing that now, though..?
[10:21] <jam> fwereade_: given that this is all about compatibility with 1.16 direct DB access, I'm not planning on changing DB internals just yet :)
[10:21] <fwereade_> jam, rogpeppe1: fwiw, yes, the provisioner works as expected live
[10:21] <jam> I think it is fine that in the "cleanup we have an error while manual provisioning" for us to do the steps we need to clean it up
[10:21] <jam> fwereade_: great
[10:22] <jam> fwereade_: we'll need to think what that looks like in the API-only case as well, because that is also broken in trunk
[10:22] <jam> so there is certainly "it is out of scope for today" but I think it is a bug to be fixed
[10:22] <rogpeppe1> fwereade_: sorry, some context?
[10:23] <fwereade_> jam, yeah, indeed -- an easy fix here is fine -- but for trunk, and going forward, I think we should be a bit more precise about specific machines' providers
[10:23] <fwereade_> rogpeppe1, jam asked if I'd tested it live, I just finished doing so
[10:23] <rogpeppe1> fwereade_: ah, cool
[10:24] <rogpeppe1> fwereade_: i *thought* that's what you meant, but just checking
[10:24] <fwereade_> jam, the trouble is that we still don't have an explicit concept of provider in state, it's still all gummed up with the environment
[10:25] <jam> fwereade_: anyway, I'm at the "file a bug, and get on with what I'm actually trying to solve" point.
[10:32] <fwereade_> jam, yeah, that sounds reasonable
[10:34] <jam> fwereade_: so there isn't any way to actually get rid of the machine in 1.16.3, right? (We might be able to call ForceDestroyMachines if it was available)
[10:38] <fwereade_> jam, I think that is so, yes
[10:39] <fwereade_> jam, and with my change there still won't be in 1.16.5
[10:40] <fwereade_> jam, it'll get to Dead but won't actually be removed, I think
[10:40] <fwereade_> jam, would you bug it for 1.18 please?
[10:40] <jam> fwereade_: and I think we would still want to distinguish "I did get it set up, so I want the agent to clean up after itself" from "I created the record, but the agent will never come up"
[10:40] <jam> fwereade_: https://bugs.launchpad.net/juju-core/+bug/1259496
[10:40] <_mup_> Bug #1259496: juju add-machine ssh: may not clean up properly on failure <manual-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1259496>
[10:41] <fwereade_> jam, definitely, it's a very specific situation
[10:41] <fwereade_> jam, I worry most about the code being abused once it exists
[10:42] <jam> fwereade_: bug #1259490 ... It sounds to me like he's saying "juju debug-log" is too verbose, but what he actually means is that the only way he has to see what is going on is an overly-verbose-for-debugging log
[10:42] <_mup_> Bug #1259490: juju-log in debug mode is too verbose <debug-log> <juju-core:New> <https://launchpad.net/bugs/1259490>
[10:45] <fwereade_> jam, maybe -- the hook-tool invocation line is probably more a DEBUG-level thing -- but it sounds like what he actually wants is to set the log level to INFO..?
[10:45] <jam> It *might* be that he just wants to be able to INFO level filter the debug-log, but it sounds mostly like he's having trouble getting appropriate feedback.
[10:48] <dimitern> jam, rogpeppe1, standup?
[11:44] <jam> natefinch: still up for 1:1?
[11:44] <natefinch> jam: yep
[11:44] <jam> k, I'm there
[12:03] <jam> natefinch: your feed is paused, is it working for you?
[12:10] <natefinch> wow, had to hard-reset my laptop
[12:10] <jam> welcome back natefinch
[12:10] <jam> natefinch: I sent you an email, I'm not sure there's much to finish up
[12:11] <natefinch> jam: ok, cool. I sorta figured
[12:58] <TheMue> rogpeppe1: https://codereview.appspot.com/36540043/ is in again, looks and feels better now.
[12:59] <TheMue> rogpeppe1: thx for your review hints, helped a lot.
[12:59] <rogpeppe1> TheMue: cool. looking.
[13:01] <rogpeppe1> TheMue: hmm, seems like you didn't use bzr mv, which is a pity as i can't easily see the diffs from my last review
[13:01] <TheMue> rogpeppe1: I used bzr mv
[13:01] <rogpeppe1> TheMue: weird
[13:01] <TheMue> rogpeppe1: and rietveld discovers it correctly
[13:01] <rogpeppe1> TheMue: oh well, it's probably just a rietveld thing
[13:02] <TheMue> rogpeppe1: our good old friend :D
[13:04] <TheMue> rogpeppe1: oh, interesting - after changing the patch sets in the display I see your troubles :(
[14:11] <mgz> okay, I'm back around reliably again now
[14:27] <dimitern> fwereade_, jam: charm upload's state operations https://codereview.appspot.com/40160043
[14:41] <smoser> hey.
[14:41] <smoser> general question...
[14:41] <mgz> generic answer...
[14:41] <smoser> what would people think about having an 'environment' file for jujud
[14:42] <mgz> that does what?
[14:42] <smoser> my motivation for this is right now 'lxc-create' uses 'ubuntu-cloudimg-query', which *can* read an environment variable to set a mirror.
[14:42] <smoser> but there is no way to get an environment variable into juju
[14:42] <smoser> same is true for 'http_proxy' and 'https_proxy'
[14:42] <smoser> which would be respected by utilities if they could be set.
[14:43] <smoser> but putting stuff in /etc/environment doesn't percolate through to daemons started with upstart.
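
For illustration, an upstart job only sees variables declared in the job file itself, so proxy settings for a juju agent would have to look something like this sketch (path, values, and exec line are illustrative):

    # /etc/init/jujud-machine-0.conf (illustrative)
    description "juju machine agent"
    # upstart does not read /etc/environment, so anything the agent's
    # child processes (lxc-create, apt, ...) should see must be
    # declared per-job:
    env http_proxy=http://proxy.example.com:3128
    env https_proxy=http://proxy.example.com:3128
    exec /var/lib/juju/tools/machine-0/jujud machine --machine-id 0
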
[14:59] <mgz> smoser: I agree some way to chuck in extra cloud-init values seems really useful
[14:59] <mgz> though, that also means the bypass that manual provisioning currently does needs to start using cloud-init...
[15:00] <smoser> mgz, well, i wasn't talking about cloud-init specifically
[15:00] <smoser> but for the manual provisioning, my solution for that is to actually use cloud-init
[15:01] <smoser> (and cloud-init provides a consistent interface to "run cloud-init now")
[15:01] <mgz> yeah, that would be good
[15:01] <mgz> smoser: I'd prefer cloud-init extra config to random envvars
[15:02] <smoser> well, cloud-config wouldn't even suffice here.
[15:02] <smoser> mgz, well... outside of somewhat abusing it.
[15:02] <smoser> cloud-init can set http-proxy for apt
[15:03] <smoser> but doesn't do it for /etc/environment
[15:03] <smoser> the only way to solve this would be to provide a boothook that dpkg-divert'd the lxc-create to a wrapper.
[15:06] <mgz> some guy hacked around this previously by modifying his base image to have HTTP_PROXY set for the ubuntu user... but it seems like a really fragile way of saying you need to go out through a gateway for your cloud
[15:13] <mgz> rogpeppe1: any hints on debugging "panic: Session already closed" in tests for status cmd after I've been fiddling with the api bits?
[15:14] <rogpeppe1> mgz: what does the traceback look like?
[15:15] <mgz> I shall pastebin
[15:16] <mgz> rogpeppe1: http://paste.ubuntu.com/6551589/
[15:17] <rogpeppe1> TheMue: you've got a review BTW
[15:19] <rogpeppe1> mgz: i think i'd start by trying to find out when the state.State was closed
[15:20] <rogpeppe1> mgz: as i think that's probably the only way that that panic can happen
[15:20] <rogpeppe1> mgz: usual drill: add some printfs...
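
One way to do that hunting - a sketch of temporary instrumentation that logs a stack trace on every close, so the test output shows exactly who closed the session:

    package main

    import (
        "log"
        "runtime/debug"
    )

    // closeTracer wraps a close function and logs the call stack each
    // time it runs; dropping something like this into state.State.Close
    // (or around the mgo session's Close) pinpoints the caller.
    func closeTracer(close func() error) func() error {
        return func() error {
            log.Printf("Close called from:\n%s", debug.Stack())
            return close()
        }
    }

    func main() {
        c := closeTracer(func() error { return nil })
        c() // logs this call site's stack
    }
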
[15:21] <mgz> it's a begger as it's one giant table test, so I can't really do a minimal run...
[15:22] <TheMue> rogpeppe1: just seen, thx
[15:22] <rogpeppe1> mgz: you could see if you get the issue with all but two of the tests omitted
[15:29] <mgz> rogpeppe1: heh, if I only use the fallback path, it works
[15:29] <rogpeppe1> mgz: the fallback path?
[15:29] <mgz> probably something in the testing reset state logic breaks the api
[15:29] <mgz> fallback to direct state access
[15:29] <rogpeppe1> mgz: ah yes
[15:29] <rogpeppe1> mgz: yeah, probably
[15:31] <mgz> kills some pingers and calls JujuConnSuite.Reset()...
[15:39] <mgz> rogpeppe1: okay, it involves how I'm using conn from inside the apiserver...
[15:40] <rogpeppe1> mgz: oh yes?
[15:40] <mgz> if I just don't close it after creating one, everything is fine... but that seems like a leak?
[15:41] <mgz> rogpeppe1: http://paste.ubuntu.com/6551726/ how bogus is that?
[15:42] <rogpeppe1> mgz: were you closing the Conn?
[15:43] <rogpeppe1> mgz: if so, that was definitely the reason for the problem
[15:43] <rogpeppe1> mgz: NewConnFromState just shares the State that's passed into it
[15:43] <mgz> I was, doubtingly, and not doing so does indeed help. but if I'm calling New... every api call, what's doing the c... okay, ace
[15:44] <mgz> thanks!
[15:44] <rogpeppe1> mgz: np
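
A toy model of the sharing rogpeppe1 describes - none of these types are the real juju-core ones, it just shows why closing the Conn broke later uses of the State:

    package main

    import (
        "errors"
        "fmt"
    )

    // session stands in for the mgo session a state.State holds
    type session struct{ closed bool }

    func (s *session) Close() { s.closed = true }

    type st struct{ s *session }

    func (x *st) use() error {
        if x.s.closed {
            return errors.New("Session already closed") // the panic mgz hit
        }
        return nil
    }

    // conn mirrors a Conn built via NewConnFromState: it shares the
    // State it is given instead of opening its own connection
    type conn struct{ state *st }

    func newConnFromState(state *st) *conn { return &conn{state} }

    // closing the conn therefore closes the shared session too
    func (c *conn) Close() { c.state.s.Close() }

    func main() {
        state := &st{&session{}}
        c := newConnFromState(state)
        c.Close()                // tidy-looking cleanup...
        fmt.Println(state.use()) // ...but the caller's State is now dead
    }
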
[16:32] <rogpeppe1> dimitern: you've got a review https://codereview.appspot.com/40160043/
[16:33] <dimitern> rogpeppe1, cheers!
[16:38] <mgz> another fun mystery...
[16:38] <mgz> switching to the api has led to a test mismatch, where setting a machine's agent state is expected to be:
[16:39] <mgz> "agent-state":"started"
[16:39] <mgz> but comes out as:
[16:39] <mgz> "agent-state":"down", "agent-state-info":"(started)"
[16:39] <rogpeppe1> mgz: is that a function of instance state?
[16:41] <mgz> it's all entwined in a bunch of declarative testing stuff, but I can't quite see how I have affected it at all with the api...
[16:44] <mgz> may well just be a fixup error I made somewhere, in which case I'm happy for the test... but it's not obvious where the problem lies
[16:49] <mgz> hm, more likely, the provider bit is just not working
[16:54] <mgz> what could cause AgentAlive to be unhappy...
[17:15] <dimitern> rogpeppe1, fwereade_, next (and last for today) CL https://codereview.appspot.com/40290044
[17:16]  * dimitern reached eod
[17:16] <rogpeppe1> dimitern: looking
[17:16] <dimitern> rogpeppe1, thanks
[17:16]  * fwereade_ supper, might pop backonlater
[17:17] <rogpeppe1> dimitern: i'd like to see a version of the PutCharm document that encapsulates the current proposal
[17:17] <rogpeppe1> dimitern: i'm not sure the history matters so much
[17:17] <dimitern> rogpeppe1, well, see it - i've updated the doc and even sent you (and the others) a mail
[17:18] <dimitern> rogpeppe1, last section "Chosen Implementation"
[17:19] <rogpeppe1> dimitern: ah cool. perhaps that could go at the top.
[17:27] <dimitern> rogpeppe1, feel free to edit to your heart's content :)
[17:27] <rogpeppe1> dimitern: okeydokey :-)
[18:03] <webbrandon> I am getting an ERROR Get : 301 response missing Location header.  It started happening after I destroyed the environment.  I hadn't done anything to juju since the previous bootstrap that would cause this, except a system update
[18:03] <webbrandon> I tried removing and reinstalling juju-core but the error still exists
[18:09] <mgz> rogpeppe1, (et al): have posted https://codereview.appspot.com/40350043 with current progress and cry for help
[18:09] <rogpeppe1> mgz: will look after i've finished with dimitern's
[18:11] <mgz> thanks!
[18:37] <rogpeppe1> dimitern: reviewed
[18:37] <dimitern> rogpeppe1, thank you
[18:54] <rogpeppe1> mgz: you've got a review
[18:54] <mgz> rogpeppe1: thanks!
[18:57] <rogpeppe1> mgz: i don't quite understand your CL description
[18:57] <rogpeppe1> mgz: what are the #1, #3 etc referring to?
[19:03] <mgz> rogpeppe1: in cmd/juju/status_test.go statusTests is a list of test() things, which have several expect{} asserts within
[19:04] <rogpeppe1> mgz: yeah, i'm looking at it currently
[19:04] <rogpeppe1> mgz: are the #1 etc referring to steps of test 0 ?
[19:04] <mgz> yeah, I didn't 0-index
[19:04] <rogpeppe1> mgz: oh, confusing.
[19:04] <mgz> and the first test() fails
[19:05] <mgz> see the string for which one it actually is if the numbering is confusing :)
[19:05] <dimitern> rogpeppe1, mgz, when I wrote those tests, each test() was a separate case
[19:06] <dimitern> rogpeppe1, mgz, and it seems it still is
[19:06] <rogpeppe1> mgz: i'm finding it difficult to parse: "
[19:06] <rogpeppe1> The test #1 expect #3 which does SetAgentAlive
[19:06] <rogpeppe1> passes
[19:06] <rogpeppe1> "
[19:06] <mgz> right
[19:06] <dimitern> rogpeppe1, mgz, it's just written so that you can also build cases incrementally with multiple expects
[19:06] <mgz> and the following one fails
[19:07] <rogpeppe1> mgz: should that be "expects" ?
[19:07] <dimitern> mgz, once you're using the api, the setagentalive tests are meaningless
[19:07] <dimitern> mgz, because once you connect to the api as a machine or a unit the agent is set to alive
[19:08] <rogpeppe1> dimitern: surely they're not meaningless?
[19:08] <rogpeppe1> dimitern: 'cos we're connecting as a client
[19:08] <mgz> status still needs to care, is the issue
[19:08] <dimitern> rogpeppe1, oh, right
[19:08] <rogpeppe1> mgz: i see step #6 fail FWIW
[19:09] <rogpeppe1> mgz: ah, you're counting expects!
[19:09] <mgz> rogpeppe1: yeah, it's step #6, but expect #4
[19:09] <rogpeppe1> mgz: ... using 0-based counting for steps, but 1-based for expects :-)
[19:10] <dimitern> mgz, if it helps, split the test()s so that each one has a single expect(), then you can comment out the rest and run them one by one
[19:10] <dimitern> mgz, this will introduce some code duplication, because the first test case relies on setting and testing stuff as it goes
[19:15] <rogpeppe1> mgz: it's weird - i've just verified that SetAgentAlive is called and AgentAlive initially returns true but returns false a few moments later
[19:15] <dimitern> rogpeppe1, i had that issue before
[19:16] <rogpeppe1> dimitern: oh yeah?
[19:16] <rogpeppe1> dimitern: do you remember what the issue was?
[19:16] <dimitern> rogpeppe1, it was something to do with the pinger being killed at some point
[19:16] <mgz> rogpeppe1: right, that's why the cry for help :)
[19:16] <dimitern> rogpeppe1, or maybe it was related to startsync on the right state - BackingState or State in the suite
[19:18]  * dimitern can't type any more? time to call it a night
[19:18] <dimitern> g'night all, see you tomorrow guys
[19:18] <mgz> night dimitern :)
[19:23] <rogpeppe1> mgz: ah, i think i know what the issue might be
[19:23] <rogpeppe1> mgz: the api's state presence hasn't seen the agent becoming alive yet
[19:24] <rogpeppe1> mgz: dimiter is right - if you startsync in the api's state, it should fix the issue.
[19:24] <mgz> ah, interesting
[19:25] <rogpeppe1> mgz: the logic in startAliveMachine is doing the right thing but on the wrong State
[19:25] <rogpeppe1> mgz: i have a feeling that it's there because of the issue that dimiter encountered before (the same issue)
[19:27] <rogpeppe1> mgz: hope that helps enough to get you through it.
[19:27] <mgz> rogpeppe1: hopefully! thanks!
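
In code, the shape of the fix is roughly this fragment (not runnable on its own; the field holding the API server's State is illustrative - see GetStateInAPIServer below):

    // in the status test helper (sketch):
    pinger, err := m.SetAgentAlive()
    c.Assert(err, gc.IsNil)
    // sync the State instance the API server reads presence from,
    // not just the suite's own State, or status reports "down"
    s.apiServerState.StartSync()
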
[19:28] <rogpeppe1> mgz: perhaps this is a case for moving those tests closer to the implementation
[19:28] <mgz> rogpeppe1: indeed
[19:28] <rogpeppe1> mgz: and making the status tests in cmd/juju just a smoke test
[19:38] <mgz> rogpeppe1: how do I get a reference to the api, given that it's created with NewApiClientFromName on each Run invocation?
[19:51] <sinzui> mgz, I am seeing a test failure in 1.16 tip. I think something in my own configuration is interfering. any insights? http://pastebin.ubuntu.com/6552812/
[19:51] <mgz> sinzui: looking
[19:51] <mgz> that looks like joy
[19:52] <sinzui> I am on trusty BTW, though I did a major git reconfiguration last week
[19:53] <mgz> it seems like you may have personal git config that breaks juju-core's expectations of how git does things
[19:53] <mgz> which is pretty naive
[19:53] <rogpeppe1> mgz: ISTR there's a method on the dummy provider
[19:53] <rogpeppe1> mgz: that returns the State used by the API server
[19:58] <mgz> rogpeppe1: GetStateInAPIServer sounds good
[20:23] <mgz> okay, all fixed
[21:22] <abentley> thumper: Is there an equivalent to all-machines.log for the local provider?  So far, I've only found individual agent logs.
[21:22] <thumper> abentley: yes, all-machines.log
[21:23] <thumper> abentley: well, in trunk
[21:23] <thumper> abentley: there were changes recently to have the local provider use rsyslog too
[21:23] <thumper> but not in the 1.16 branch
[21:23] <abentley> thumper: Oh, cool.  Yes, I was using the 1.16 branch.
[21:25] <abentley> jcsackett: the big changes are done for splitting upgrade and deploy into separate tests.
[21:44] <natefinch> thumper: you know anything about the uniter test failures I mentioned in email?
[21:44] <thumper> hi natefinch
[21:44] <thumper> ah... which email?
[21:44]  * thumper looks at email
[21:44] <natefinch> thumper - very recent
[21:45] <thumper> no, not seen it
[21:45] <thumper> different git?
[21:45] <thumper> seems only to be leading #
[21:46] <natefinch> yeah. that's what I was thinking
[21:47] <natefinch> thumper, what's your git version?  I'm 1.8.5
[21:47] <thumper>  Installed: 1:1.8.3.2-1
[21:48] <natefinch> so maybe that's it.  I'm too bleeding-edge for Juju :)
[21:48] <jcsackett> abentley: ack, thanks.
[21:52]  * thumper takes a deep breath
[21:52]  * thumper exhales slowly
[22:26]  * wallyworld -> dentist, yay :-(
[23:15] <thumper> hi wallyworld