#juju-dev 2012-03-12
<fwereade_> morning wrtp, TheMue
<wrtp> fwereade_: goat moanin'
<TheMue> fwereade_, wrtp: morning
<wrtp> TheMue: hiya
 * TheMue is just fighting with a network equipment change
 * wrtp managed to wire up the ethernet properly over the weekend. file server in the loft, GB ethernet throughout and no need to rely on dodgy wireless device drivers...
<fwereade_> wrtp, cool
<wrtp> (and i learned a new skill: how to wire an RJ45 jack)
<fwereade_> wrtp, handy :)
<fwereade_> TheMue, in my recent review, did my ramblings about naming make any sense?
<TheMue> fwereade_: sorry, had only been able to do a first scan. will discuss it later, currently my network doesn't do anything right
<TheMue> fwereade_: around?
<fwereade_> TheMue, heyhey
<fwereade_> TheMue, having lunch shortly but good to chat for now
<fwereade_> TheMue, ...or maybe not
<TheMue> fwereade_: ok, can wait
<fwereade_> TheMue, laura's making it complicated ;)
<TheMue> fwereade_: already had lunch ;)
<fwereade_> TheMue, heyhey, all fed and peaceful now :)
<fwereade_> TheMue, what can I do for you?
<TheMue> fwereade_: ha, i know that feeling
<TheMue> fwereade_: discussing about the retry mode
<fwereade_> TheMue, ah, yes, cool
<fwereade_> TheMue, what do you think of "resolution" as associated noun?
<fwereade_> TheMue, I'm against RetryMode now because, really, only one of the 3 states involves retrying anything
<TheMue> fwereade_: if that's the semantics behind it i'm fine with "resolution"
<TheMue> fwereade_: sadly inside the node the content is "retry = ..."
<fwereade_> TheMue, heh, yeah, I'm afraid we're stuck with that
<TheMue> fwereade_: but i think that's a weakness we can live with
<TheMue> fwereade_: i selected -1 for NotResolved because 0 is the default for an unset int, and i don't want this value by accident. -1 is more expressive here.
<fwereade_> TheMue, hm, it seemed to me that NotResolved would be a sensible default value for an unset var
<TheMue> fwereade_: NotResolved is ok for an unset reply, yes, but returning an uninitialized int by accident with this value may lead to a wrong handling later.
<TheMue> fwereade_: so with one of -1, 1000 or 1001 it's more clear
<fwereade_> TheMue, ok, makes sense, I'm comfortable with that :)
<TheMue> fwereade_: what do you say to the change after rog's idea of giving it its own type?
<fwereade_> TheMue, yeah, can't see any harm in that
<TheMue> fwereade_: i like it, even if it has just three valid values.
<fwereade_> TheMue, you can tack the Validate func onto that type too
<TheMue> fwereade_: oh, haven't i? damn
<fwereade_> TheMue, heh, maybe you have, I didn't check
<TheMue> fwereade_: the new/old naming is ok, will change it
<fwereade_> TheMue, cool
<fwereade_> TheMue, just makes it a little easier for me to follow
<TheMue> fwereade_: it already has the type, phew
<fwereade_> TheMue, cool, sorry about that :)
<TheMue> fwereade_: np
<TheMue> fwereade_: btw, which kind of errors are handled with the Resolved functions of Unit?
<fwereade_> TheMue, any state transition error
<fwereade_> TheMue, ie hooks
<TheMue> fwereade_: ic, thx
<fwereade_> TheMue, but also potentially things that might go wrong before a hook starts
<fwereade_> TheMue, failing to download the new version of a charm while upgrading for example
<fwereade_> TheMue, anything that puts us into an error state
<TheMue> fwereade_: and in this case the "resolved" flag is set?
<fwereade_> TheMue, you do "juju resolved <broken-thing>" and tack on a --retry if you want to run the hook again
<TheMue> fwereade_: does that command set the flag?
<niemeyer> Good morning everybody
<TheMue> niemeyer: morning
<fwereade_> TheMue, yeah, to whichever value, controlled by --retry
<fwereade_> heya niemeyer
<TheMue> fwereade_: and then? i still don't see the workflow behind it. something fails and that needs to be resolved. after it's done (manually?) who clears that "resolved"?
<fwereade_> TheMue, the agent reads the value, clears it, and tries to do what it was asked
<fwereade_> TheMue, think of the clearing as the unit agent taking over responsibility
<TheMue> fwereade_: ok, i only wondered about the passive version "resolved" instead of "need error resolution"
<niemeyer> TheMue: That's intentional.. if you type "juju resolved" you're saying "the problem has been resolved and it's fine to continue"
<fwereade_> TheMue, not quite sure I follow
<fwereade_> TheMue, what niemeyer said :)
<niemeyer> TheMue: It's a statement saying it's good to go now, rather than "fix it for me"
<TheMue> niemeyer: ok, this way it sounds reasonable. thx
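The typed mode discussed above could look roughly like this in Go. This is a hedged sketch, not juju's actual code: the 1000/1001 values and -1 come from the chat, but the constant names, Validate body, and the printed demo are illustrative assumptions.

```go
package main

import "fmt"

// ResolvedMode is a sketch of the "resolution" value discussed above.
// NotResolved is deliberately -1 rather than 0, so an accidentally
// uninitialized int can never be mistaken for a valid mode.
type ResolvedMode int

const (
	NotResolved        ResolvedMode = -1
	ResolvedRetryHooks ResolvedMode = 1000
	ResolvedNoHooks    ResolvedMode = 1001
)

// Validate rejects anything outside the three valid values.
func (m ResolvedMode) Validate() error {
	switch m {
	case NotResolved, ResolvedRetryHooks, ResolvedNoHooks:
		return nil
	}
	return fmt.Errorf("invalid resolved mode: %d", m)
}

func main() {
	var m ResolvedMode // zero value: never valid, by design
	fmt.Println(m.Validate() != nil, ResolvedRetryHooks.Validate() == nil)
}
```

The zero value failing Validate is exactly the accident-proofing TheMue argues for: a forgotten initialization surfaces as an error instead of silently meaning "not resolved".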
<niemeyer> TheMue: So, go-state-continued-machine is the bottom of your queue, right?
<TheMue> niemeyer: just took a look, yes, it is
<niemeyer> TheMue: Ok, I see it still has comments from fwereade_ unaddressed, though
<niemeyer> TheMue: Are you working on those now?
<TheMue> niemeyer: after re-proposing go-state-continued-unit a few moments ago i'll now continue with that one.
<niemeyer> fwereade_: Btw, you've recommended: "It'd be quite nice to move this out into its own little internalMachineId func"
<fwereade_> niemeyer, yep; disagree?
<niemeyer> fwereade_: Ah, sorry, nevermind.. I totally missed what you actually meant
<fwereade_> niemeyer, ah ok, sorry, unclear :)
<niemeyer> fwereade_: No.. I misunderstood.. and talking to myself helped :-D
<TheMue> niemeyer: *lol*
<niemeyer> fwereade_: It was clear, I was on crack
<niemeyer> TheMue: Is go-state-continued-unit dependent on the go-state-continued-machine, or is it an independent branch that I can review on its own?
<TheMue> niemeyer: it doesn't depend
<niemeyer> TheMue: Super, thanks
<fwereade_> need to pop out briefly, bbs
<niemeyer> TheMue: You've got a review
<niemeyer> fwereade_: Will pick one of yours when I'm back from lunch
<rogpeppe> niemeyer, fwereade_, TheMue: small review for you: https://codereview.appspot.com/5754086
<TheMue> niemeyer: thx
<TheMue> niemeyer: do you know why remove_machine_state returns a boolean whose only use is that the sole caller raises an exception when it's false?
<niemeyer> TheMue: Isn't that the case we talked about last week?
<niemeyer> rogpeppe: LGTM, cheers
<rogpeppe> niemeyer: thanks
<fwereade_> whoops, eod, I think cath could use some help; nn all, and I'd love it if I came back to some reviews tomorrow ;)
<niemeyer> fwereade_: You'll have some of them
<niemeyer> :)
<niemeyer> fwereade_: Sorry for not getting to those today
<niemeyer> TheMue: You just got a LGTM, though
<TheMue> niemeyer: ah, fine
<rogpeppe> niemeyer: http://paste.ubuntu.com/880728/
<rogpeppe> niemeyer: but i gotta go 5 minutes ago
<rogpeppe> niemeyer: see y'all tomorrow
<TheMue> niemeyer: the topic above is the case from last week. before i change it i just wanna make sure there's no deliberate idea behind it.
<niemeyer> rogpeppe: Please ping me when you have some time..
<niemeyer> rogpeppe: And enjoy your ROD
<niemeyer> fwereade_: and you got a review as well
<TheMue> niemeyer: last propose before submit is in
<niemeyer> TheMue: Checking
<niemeyer> TheMue: The comment for Resolved is still bogus
<niemeyer> TheMue: Arguably, the improvement is small, but since you said "Done", I'm not sure if you didn't appreciate the suggested comment or just overlooked it
<TheMue> niemeyer: mom, taking a look
<niemeyer> TheMue: "mom" isn't a great abbreviation for "moment", btw ;-D
<TheMue> niemeyer: hmm, i know it as a common abbrev: "mom pls" for "one moment please"
<niemeyer> TheMue: https://www.google.com/search?q=define%3Amom
<TheMue> niemeyer: found my mistake, i only looked at the wrong naming that resulted from the search'n'replace
<niemeyer> TheMue: np.. please submit it
<niemeyer> TheMue: (with the fix)
<TheMue> niemeyer: ok, will go in now
<TheMue> niemeyer: so, it's in, time to leave
<niemeyer> TheMue: Have a good time
<TheMue> niemeyer: thx, have a good ROD too
<niemeyer> fwereade_: One of your branches seems to be affected by the problem rogpeppe pointed out..
<niemeyer> fwereade_: https://codereview.appspot.com/5786051/
<fwereade_> niemeyer, huh, I'll take a look at the testutil weirdness
<niemeyer> fwereade_: I'm fixing a.. hmm.. questionable behavior I implemented in goetveld
<fwereade_> niemeyer, about the go-hook-package branch (if you have a mo?)
<niemeyer> fwereade_: Not sure if for some reason they started to forbid it
<niemeyer> fwereade_: I do
<fwereade_> niemeyer, (1) I suspect Env probably actually shouldn't be separate from context, it seemed separate at the time; in practice I think it'll make sense to tack it onto context
<fwereade_> niemeyer, (2) the temporary path addition is because it seems better to just make meaningless commands entirely unavailable, rather than available-but-guaranteed-to-error
<fwereade_> niemeyer, "this command doesn't exist" seems like the clearest way to denote that there's no good reason to call it
<fwereade_> niemeyer, and I like the idea of minimising pollution of command space
<niemeyer> fwereade_: I'm not following (2).. missing more foundational reasoning about why we're using "command" there at all
<fwereade_> niemeyer, same general idea as not importing stuff you don't use
<niemeyer> fwereade_: It's a function to call a hook.. which "command" is that?
<fwereade_> niemeyer, it's called via jujuc but it's still fundamentally a command line tool
<fwereade_> niemeyer, (that is, everything called from a hook is)
<niemeyer> fwereade_: Yeah, but why is this relevant at all to the function that calls a hook?
<fwereade_> niemeyer, the function that calls the hook makes (1) the env vars and (2) the relevant "executables" available for use by the hook
<niemeyer> fwereade_: Precisely.. why is (2) being done at all?
<fwereade_> niemeyer, for the reasons above... what's the benefit of making available a whole bunch of tools that can't be used?
<niemeyer> fwereade_: Having the command that runs another command create symlinks as it calls the other command is a bit novel and unexpected
<fwereade_> niemeyer, do the 3 uses of command there refer to agent, hook, hook-tool respectively?
<niemeyer> fwereade_: It refers to Linux commands in general
<niemeyer> fwereade_: Unset your DISPLAY environment variable and call "xeyes"
<niemeyer> fwereade_: You'll get an error like this: Error: Can't open display:
<niemeyer> fwereade_: The environment wasn't set properly for it to run
<niemeyer> fwereade_: We don't hide xeyes away just because we can't run it if there's no good environment for it to run
<niemeyer> fwereade_: The commands that are available to a hook are normal Linux executables, and they will be found in the PATH
<fwereade_> niemeyer, but if you could guarantee that the environment was bad, what benefit would there be to making it available?
<fwereade_> niemeyer, is there a situation in which it would be a good idea to let people hack up a JUJU_CLIENT_ID and call these tools out-of-band?
<niemeyer> fwereade_: a) Because that's how everything in Linux works; b) Because that won't write and remove several inodes on disk on every access; c) Because we'll eventually run these commands out of band as we discussed previously
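For concreteness, the environment-based approach niemeyer argues for (tools found via PATH, context via env vars, no per-run symlinks) might look like this. A sketch only: the tool directory path and the JUJU_CONTEXT_ID variable name are assumptions for illustration, not the actual implementation.

```go
package main

// Sketch of launching a hook with the juju tool directory on PATH,
// rather than symlinking tools into place on every hook run.

import (
	"fmt"
	"os"
	"os/exec"
)

func runHook(hookCmd, toolDir, contextID string) error {
	cmd := exec.Command("/bin/sh", "-c", hookCmd)
	// The hook finds relation-set etc. via PATH, and the tools find
	// their hook context via the environment.
	cmd.Env = append(os.Environ(),
		"JUJU_CONTEXT_ID="+contextID,
		"PATH="+toolDir+":"+os.Getenv("PATH"),
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Stand-in "hook" that just checks its environment was set up.
	err := runHook(`test -n "$JUJU_CONTEXT_ID"`, "/usr/lib/juju/tools", "unit-mysql-0:install")
	fmt.Println("hook error:", err)
}
```

Nothing is written to disk per invocation; a tool run out-of-band fails only because its environment is missing, exactly like the DISPLAY/xeyes analogy above.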
<fwereade_> niemeyer, heh, I had the impression that we explicitly restricted that idea to making other relations available to hooks
<fwereade_> niemeyer, do they ever make sense entirely outside hooks?
<niemeyer> fwereade_: That's always been the idea, at least
<niemeyer> fwereade_: relation-set $RELATION system-on-fire=true
<niemeyer> fwereade_: Think about what happens once the system reaches stability..
<niemeyer> fwereade_: all hooks were run
<niemeyer> fwereade_: If we don't allow out-of-band changes in relations, we're severely restricting what people can do via relations
<niemeyer> fwereade_: It basically means a working system can't change relations after started, unless it breaks
<fwereade_> niemeyer, so relation-set could be called entirely outside juju's control?
<niemeyer> fwereade_: Let's paint a brighter picture.. relation-set is always under juju's control.. :-)
<fwereade_> niemeyer, haha, ok, let me restate: relation-set can be *called* as part of an unrelated script on a system, and that should be meaningful?
<niemeyer> fwereade_: Even then, that's not very relevant to this discussion, to be honest.. even if that wasn't the case, it'd still not make sense to be creating and removing several symlinks *every time a hook is run*
<niemeyer> fwereade_: Yes, I hope we can make that meaningful in a clean way
<niemeyer> fwereade_: and I don't think it's hard even
<fwereade_> niemeyer, well IMO this is the crux of it, please expand; that'll probably convince me to drop it entirely
<niemeyer> fwereade_: Which statement is "this" referring to?
<fwereade_> niemeyer, sorry, the meaningful relation-set not inside a hook
<fwereade_> niemeyer, I don't see what could cause that to happen
<niemeyer> fwereade_: Anything that happens on the machine
<niemeyer> fwereade_: Right now the usefulness of relations is severely restricted to being modifiable inside a hook only
<niemeyer> fwereade_: Which means all events in a relation happen as a side-effect of the relation going up, or the relation going down
<fwereade_> niemeyer, a use case would be a great help to me here
<niemeyer> fwereade_: Everything else in between, which is where the application will hopefully spend most of its time, can't make use of relations because there's no out-of-band changing of relations
<niemeyer> fwereade_: I just described one above..
<niemeyer> fwereade_: relation-set $MONITORING_RELATION the-system-is-on-fire=1
<niemeyer> fwereade_: relation-set $RELATION load-is-too-high=1
<niemeyer> fwereade_: relation-set $RELATION I-got-a-new-user=1
<niemeyer> fwereade_: relation-set $RELATION I-need-another-database=customers
<niemeyer> fwereade_: relation-set $RELATION send-an-sms-to-fwereade
<niemeyer> :)
<niemeyer> Anything, really.. there are many more out-of-band use cases than there are for setup/teardown
<fwereade_> niemeyer, ok, this involves a somewhat different conception of a charm to what I'd had before... I'm still a bit unclear about the mechanism in play
<fwereade_> niemeyer, can we look at load-is-too-high in a bit more detail?
<niemeyer> fwereade_: Sure.. what's the question, more precisely?
<fwereade_> niemeyer, it seems we're talking about a charm that includes some sort of monitoring of the service, and which can call a shell script in response to ...something happening
<fwereade_> niemeyer, I'm unclear what that something would be
<niemeyer> fwereade_: We're talking about software that can execute a command when something happens..
<niemeyer> fwereade_: This sounds pretty straightforward
<niemeyer> fwereade_: What are you unclear about there?
<fwereade_> niemeyer, sure, not an overwhelmingly original concept when put like that :)
<niemeyer> :-)
<fwereade_> niemeyer, ok, I think that scotches the idea, which rested entirely on my conception that hook commands would not be generally useful
<fwereade_> niemeyer, I maintain that, assuming the above, PATH manipulation is a perfectly normal thing and the cost of writing a few symlinks is not significant in the context of a hook execution, but that's neither here nor there
<fwereade_> niemeyer, huh, I just realised how that sounded
<fwereade_> niemeyer, yes, but ASSUMING there was a dragon in your bed it was perfectly reasonable to set off the fire extinguisher
<niemeyer> fwereade_: PATH manipulation is.. writing a handful of symlinks every single time one of the hooks is run sits at the questionable side for me
<fwereade_> niemeyer, it rested entirely on the assumption that the set of meaningful commands was context-dependent and potentially subject to change, which was what I took away from the recent discussions
<fwereade_> ah well :)
<niemeyer> fwereade_: A lot of /usr/bin/* is context dependent..
<niemeyer> fwereade_: Just think about how many of those only work under X, or after you started LXC, etc
<fwereade_> niemeyer, seems different somehow... similar to how you wouldn't even bother to install a gui tool for managing your database on a headless server
<niemeyer> fwereade_: Many others put binaries under /usr/lib/ that are only ever run by themselves
<fwereade_> niemeyer, but starting LXC makes commands meaningful in a range of contexts that the LXC-starting cannot control
<niemeyer> fwereade_: Ok.. let's go the other way around then.. please find me a command that creates symlinks on demand ;-)
<fwereade_> niemeyer, I'm not sure I buy the idea that if a tool is sometimes relevant it should *always* be available -- just that a tool should always be available in any context in which it *might* make sense
<fwereade_> niemeyer, er, `ln` :p
<fwereade_> (sorry)
<niemeyer> fwereade_: Haha
<fwereade_> niemeyer, but still, it remains a derail due to my own missed context
<fwereade_> niemeyer, maybe one day I'll find a situation in which it really does make sense
<fwereade_> niemeyer, btw, I'm now worrying that you'll disapprove of the general idea of the parallel pipeline -- which makes jujuc *purely* a proxy which sends command lines back to the agent for interpretation, and gets back output+return code
<rogpeppe> fwereade_: i'm interested to see gustavo's point of view on this :-)
 * rogpeppe stands well back
<fwereade_> niemeyer, the tradeoff is in *slightly* more complexity in cmd, in exchange for removal of (what feels to me like) unnecessarily complex plumbing in the python, and surprisingly neat reuse of supercommand
<niemeyer> fwereade_: That sounds unrelated, even though if I have to be honest I'd say it's a bit suspect to have the juju server parsing command lines for its clients
<fwereade_> niemeyer, it's something I did recently that you haven't reviewed yet ;)
 * fwereade_ looks resigned
 * fwereade_ steps up gamely anyway
<fwereade_> niemeyer, well, can we agree to begin with that all the meaningful work in a hook command is actually performed by the agent?
<niemeyer> :-)
<niemeyer> fwereade_: Yes
<fwereade_> niemeyer, and the total work is: (1) figure out what the hook's asking for; (2) do it by interacting with the hook context in some way; (3) give the result back
<fwereade_> niemeyer, in python we have a whole lot of plumbing -- server with 8 methods, client with 8 methods, declaration of params and return types for all those methods, and individual executables that turn a command line into params and results back into output/exit codes
<niemeyer> fwereade_: You'll need pretty much all of that somewhere
<niemeyer> fwereade_: Reuse is great.. you can reuse in either side
<fwereade_> niemeyer, we get all the same functionality with one method -- "run this command line" -- with one result type -- out, err, code
<fwereade_> niemeyer, and that allows you to write all the commands as Command implementations that interact directly with the context
<niemeyer> fwereade_: You can write all commands as command implementations interacting with the context either way, and you have to do line parsing either way
<fwereade_> niemeyer, where the line parsing happens only matters in that *if* we do it in jujuc we need a lot more plumbing to interact with the context
<niemeyer> fwereade_: Why?
<niemeyer> fwereade_: What's that lot?  I think that's the crux there
<fwereade_> niemeyer, whereas doing it in the agent allows us to use Command implementations that already have the relevant context directly accessible
<fwereade_> niemeyer, mainly juju.hooks.protocol
<fwereade_> niemeyer, which feels to me like a lot of code whose only effect is to obfuscate what's actually happening :/
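The single-method shape fwereade_ describes, "run this command line" in one direction and out/err/code back, could be as small as this. The type and field names are illustrative guesses, not juju's actual wire protocol, and the agent side is stubbed out.

```go
package main

import "fmt"

// Request is the one message jujuc would send to the agent: the raw
// command line plus enough to identify the hook context.
type Request struct {
	ContextID string   // which hook invocation this belongs to
	Command   string   // e.g. "relation-set"
	Args      []string // remaining command-line arguments
}

// Response carries everything jujuc needs to relay: out, err, code.
type Response struct {
	Stdout []byte
	Stderr []byte
	Code   int // exit code for jujuc to propagate
}

// run stands in for the agent side, which would dispatch Command to a
// Command implementation holding the relevant hook context.
func run(req Request) Response {
	switch req.Command {
	case "relation-set":
		return Response{Code: 0}
	default:
		return Response{
			Stderr: []byte(fmt.Sprintf("unknown command: %s\n", req.Command)),
			Code:   2,
		}
	}
}

func main() {
	resp := run(Request{ContextID: "unit-wordpress-0:config-changed",
		Command: "relation-set", Args: []string{"loaded=true"}})
	fmt.Println("exit code:", resp.Code)
}
```

All argument parsing then happens agent-side, next to the context, replacing the per-command client/server method pairs in the python juju.hooks.protocol plumbing.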
<niemeyer> fwereade_: Ok, I'm fine to see the branch implementing it.. if it's indeed considerably simpler, I'm game
<fwereade_> niemeyer, I fear I'm going to put you off irretrievably when I say it's 4 branches... but I really did try to make each as focused and independent as possible
<niemeyer> fwereade_: That doesn't put me off at all
<niemeyer> fwereade_: I'm all for several small branches
<fwereade_> niemeyer, good-oh
<fwereade_> niemeyer, the only trouble with that approach is that it opens the early branches up to a lot of "but why?" questions that are best answered "because $followup", and that rather detracts from the benefits of the several-small-branches approach
<niemeyer> fwereade_: Well, that's what the description is for :)
<niemeyer> fwereade_: But it has to be a reasonable way to walk towards the achievement
<fwereade_> niemeyer, I might take another look at those, the first one feels rather more what-than-why in hindsight
<niemeyer> fwereade_: What causes discussions is that when rallying to implement a larger portion, it's easy to forget about some of the details
<niemeyer> fwereade_: That's what happened with today's branches, for instance
<niemeyer> fwereade_: It's quite useless without more stuff
<niemeyer> fwereade_: BUT
<niemeyer> fwereade_: It's easy to see where the feature is going.. the debate was about how that was being done, not why
<fwereade_> niemeyer, yes indeed :)
<fwereade_> niemeyer, btw, tyvm for the clarification on out-of-band hook commands, was edifying
<niemeyer> fwereade_: You're very welcome. Glad to have had it too
<fwereade_> niemeyer, btw, if a branch is already WIP, I presume I still need to `lbox propose -prep` to prevent it being promoted?
<niemeyer> fwereade_: Right
<fwereade_> niemeyer, cheers, just checking :)
<niemeyer> fwereade_: np
<niemeyer> fwereade_: Just in case you're still around, would you mind running "apt-get update; apt-get install lbox" and "lbox propose" in that bogus branch again?
<fwereade_> niemeyer, sure, just a mo
<niemeyer> fwereade_: Superb, thanks a lot
<niemeyer> fwereade_: I've removed a silly hack I had, which might actually be considered a bug
<niemeyer> fwereade_: It used to embed the revision information at the top of the file
<niemeyer> fwereade_: I'm not sure if that's what confused Rietveld somehow after some upgrade they made
<niemeyer> fwereade_: Either way, that means we'll now see which files _actually_ changed across different "propose" commands, which is awesome
<fwereade_> niemeyer, https://codereview.appspot.com/5786051
<fwereade_> niemeyer, there's a surprising [revision details] in there now
<fwereade_> niemeyer, but OTOH at least you can see the diffs
<niemeyer> fwereade_: Yeah, the surprising revision details isn't so surprising for me ;-)
<niemeyer> and another review..
<fwereade_> niemeyer, thanks
<fwereade_> niemeyer, I'm not strongly inclined to put all the `var _ = Suite(...`s into init()s in *this* branch -- that feels like a separate trivial to me
<fwereade_> niemeyer, sensible?
<bigjools> hi folks
<bigjools> can anyone point me at some docs help me out, I am trying to work out the sequence of events that happens during bootstrap and deployment
<fwereade_> bigjools, I'm not sure about docs, but I may be able to help?
<bigjools> fwereade_: ah cool
<bigjools> so for the maas stuff I have bootstrap saying it's worked - I see a node start up on the maas side
<bigjools> but then trying to use deploy results in it complaining that zookeeper is not running
<bigjools> I thought I'd better try and understand exactly what needs to happen to be able to debug this
<fwereade_> bigjools, hmm; it's definitely not that zk just isn't running *yet*?
<bigjools> oh how long does it take?
<bigjools> and what initiates that?
<fwereade_> bigjools, once you have an instance, just about everything that happens should be covered by the cloud-init stuff
<bigjools> ah ok, so bootstrap kicks it all off
<bigjools> I'll ssh in to the node and see if it's starting up
<fwereade_> bigjools, yeah; that's the best place to start
<fwereade_> bigjools, I'm afraid I know nothing about debugging cloud-init though
<bigjools> you and me both :)
<fwereade_> bigjools, I have a feeling something useful should be logged somewhere, but... yeah, that's not very helpful ;)
<bigjools> heh
#juju-dev 2012-03-13
<niemeyer> fwereade_: Yeah, sorry, I wasn't actually suggesting that
<niemeyer> fwereade_: Just to put it in its own line, out of the block
<fwereade_> niemeyer, cool, that was what I took it to mean, but best to check ;)
<niemeyer> fwereade_: Thanks for checking out :)
<bigjools> niemeyer or fwereade_, what starts up zookeeper?
<bigjools> I am still not sure
<niemeyer> bigjools: I believe it starts itself.. cloud-init installs it
<niemeyer> bigjools: I might be wrong, though.. the cloud-init setup will tell for sure
<bigjools> thanks, I was afraid of that
<bigjools> it's not installed on my testing odev thing :(
<bigjools> niemeyer: is it triggered via the kickstart file used in the cobbler's juju profile?
<niemeyer> bigjools: It's triggered via cloud-init
<bigjools> niemeyer: there's cloud-init stuff at the bottom of the kickstart
<bigjools> and there's a different profile for non-juju nodes
<bigjools> just trying to piece this together, it's hard
<niemeyer> bigjools: Yeah, I can imagine, sorry about that
<niemeyer> bigjools: The whole cobbler thing, though, was intended to simply run cloud-init, in a way attempting to make it a bit saner
<bigjools> niemeyer: do you know if it relies on preseed data from juju to install it?
<niemeyer> bigjools: So most of the magic there is intended to get closer to what EC2 would do
<niemeyer> bigjools: It relies on the cloud-init user-data which we indeed deliver via kickstart
<bigjools> ok
<bigjools> niemeyer: does it do that when bootstrapping?
<niemeyer> bigjools: Does it do what, more precisely?
<bigjools> niemeyer: supply the zookeeper info to cloud-init for bootstrap
<bigjools> or is it done later?
<niemeyer> bigjools: All of the process is a side-effect of "juju bootstrap"
<bigjools> ok thanks
<niemeyer> bigjools: np
<bigjools> niemeyer: ok I think I have it now - smoser changed cloud-init to pull all this from the metadata service instead of passing loads of stuff in the preseed.
<bigjools> thanks again
<niemeyer> bigjools: Right, so that we can stay closer to EC2 and keep working as the logic advances
<TheMue> morning
<wrtp> TheMue: hiya
<niemeyer> Good morning jujuers!
<wrtp> niemeyer: hiya
<niemeyer> wrtp:  Ho
<wrtp> niemeyer: i updated lbox, but it didn't seem to make a difference to my checksum error.
<fwereade_> heya niemeyer
<niemeyer> wrtp: Hmm :(
<niemeyer> fwereade_: Heya!
<niemeyer> wrtp: What's the output again?
<wrtp> hmm, it's just gone and worked!
<niemeyer> wrtp: Hah.. love when that happens :)
<wrtp> niemeyer: this was the output earlier this morning: error: Failed to send patch set to codereview: can't upload base of environs/ec2/internal_test.go: ERROR: Checksum mismatch.
<fwereade_> niemeyer, heh, if anything that sort of thing utterly creeps me out ;)
<wrtp> niemeyer: oh
<wrtp> niemeyer: i thought it had worked, but it's still borked
<wrtp> https://codereview.appspot.com/5796078/diff/1/environs/ec2/ec2.go
<niemeyer> wrtp: I suggest deleting the CL, removing the reference to it from the description in the merge proposal, and running lbox again
<wrtp> niemeyer: i did that last night, but i'll try again
<niemeyer> wrtp: The new lbox might help
<wrtp> niemeyer: i think it's possible it has something to do with a contents conflict in the history
<wrtp> niemeyer: will try again
<wrtp> niemeyer: same error
<wrtp> niemeyer: error: Failed to send patch set to codereview: can't upload base of environs/ec2/internal_test.go: ERROR: Checksum mismatch.
<niemeyer> wrtp: What's the new CL?
<wrtp> niemeyer: it didn't tell me. but... i'll have a look in codereview
<niemeyer> wrtp: Can you please paste the error then?
<wrtp> niemeyer: http://codereview.appspot.com/5798076/
<wrtp> niemeyer: error: Failed to send patch set to codereview: can't upload base of environs/ec2/internal_test.go: ERROR: Checksum mismatch.
<niemeyer> wrtp: This CL was not created by the new lbox
<wrtp> niemeyer: the branch is at lp:~rogpeppe/juju/go-zk-connect if you want to clone it and try for yourself
<niemeyer> wrtp: Can you please update and try again?
<wrtp> niemeyer: i did apt-get update, and apt-get install says i've got the newest version
<wrtp> niemeyer: ok
<niemeyer> wrtp: Well, your new version is not the new version that fwereade_ installed last night somehow
<wrtp> niemeyer: running apt-get update again
<wrtp> (what *is* it doing when it says "Waiting for headers"?)
<wrtp> still updating...
<wrtp> niemeyer: it says lbox is already the newest version
<niemeyer> wrtp: What's the version you've got?
<wrtp> niemeyer: how do i tell that?
<niemeyer> wrtp: dpkg -l lbox
<wrtp> niemeyer: 1.0-42.58.37.1
<niemeyer> 37.1 or 37.10?
<wrtp> 37.1
<wrtp> niemeyer: i could try go getting it.
<niemeyer> wrtp: What does "which lbox" tell you?
<wrtp> niemeyer: bingo!
<niemeyer> wrtp: Are you using a locally installed one?
<wrtp> niemeyer: i'd done go get lbox before i guess
<niemeyer> wrtp: Aha, cool
<niemeyer> wrtp: Not sure if this will really resolve the issue, though.. but let's see
<wrtp> niemeyer: sorry, same error
<niemeyer> wrtp: Can you please paste the command output again?
<wrtp> niemeyer: do you want the verbose version?
<niemeyer> wrtp: Yeah, that'd be helpful
<niemeyer> wrtp: Please let me know of the final CL as well
<wrtp> niemeyer: here's the CL it created BTW
<wrtp> http://codereview.appspot.com/5795079/
<wrtp> although i'm just about to delete that again, so hold on
<wrtp> niemeyer: http://paste.ubuntu.com/881866/
<wrtp> niemeyer: and the CL is this: http://codereview.appspot.com/5797083/
<niemeyer> wrtp: Thanks
<niemeyer> wrtp: When you mentioned there was a conflict in the history, what was that about?
<wrtp> niemeyer: the file i referred to has been deleted and recreated in the branch history
<wrtp> niemeyer: the branch i'm trying to push is not a direct descendant of the branches i've previously pushed, although it's been merged with them.
<niemeyer> wrtp: Ouch
<niemeyer> wrtp: Does the file exist at tip?
<wrtp> niemeyer: yes, i think so
<niemeyer> wrtp: So why was it removed and added?
<wrtp> niemeyer: at some points in the history, there was no need for any internal tests, hence internal_test.go was removed.
<niemeyer> wrtp: I see
<niemeyer> wrtp: Hmm
<wrtp> niemeyer: if i do a bzr diff, it shows internal_test.go being removed and then created again with exactly the same content.
<niemeyer> wrtp: Yeah, that means the file was removed and re-added.. it's unfortunate that it loses the whole history when doing that
<wrtp> niemeyer: yeah, it is
<wrtp> niemeyer: and history is the reason i'm wanting to submit the branch as is - i have a sentimental attachment to some of the historical pieces...
<niemeyer> I'm now just wondering if Rietveld itself is breaking down when it notices that the file has the same checksum before and after
<niemeyer> Ah, maybe not..
<niemeyer> wrtp: The file was removed and added in the same commit
<wrtp> niemeyer: how is that possible?
<wrtp> niemeyer: i guess i must have done bzr rm and then changed my mind
<niemeyer> wrtp: I don't know, but that's what history shows
<niemeyer> removed:
<niemeyer>   environs/ec2/internal_test.go  internal_test.go-20120215173103-mc3cbfukg0redhy0-1
<niemeyer> added:
<niemeyer>   environs/ec2/internal_test.go  internal_test.go-20120222161404-ukvwz2ypn0g6plcc-1
<wrtp> niemeyer: interesting question: what's the best way to undo a bzr rm ?
<niemeyer> wrtp: That's exactly what I'm looking for right now
<wrtp> niemeyer: most edits between commits are ephemeral...
<niemeyer> wrtp: This seems to work: % bzr revert -r 101 ./internal_test.go
<niemeyer> +N  environs/ec2/internal_test.go
<niemeyer> wrtp: Please try to do this:
<wrtp> niemeyer: i'm trying now
<niemeyer> wrtp: cp internal_test.go{,.tmp}
<niemeyer> wrtp: bzr revert -r 101 ./internal_test.go
<niemeyer> wrtp: mv -f internal_test.go{.tmp,}
<niemeyer> wrtp: bzr commit
<niemeyer> Not sure if it'll work, actually
<niemeyer> Nope
<wrtp> no indeed
<wrtp> lunch
<niemeyer> wrtp: I believe I fixed it
<niemeyer> wrtp: When you're back, please try to "bzr pull lp:~niemeyer/juju/go-zk-connect-fixup"
<niemeyer> wrtp: Then, lbox propose
<niemeyer> wrtp: If that works, I'll explain how I got it fixed, and how to avoid the problem in the future
<wrtp> niemeyer: bzr: ERROR: These branches have diverged. Use the missing command to see how.
<wrtp> oops
<niemeyer> wrtp: I guess you have committed something else in your local branch?
<wrtp> wrong directory!
<wrtp> erm
<wrtp> niemeyer: ah yes, i did a commit to try and solve the problem earlier
<wrtp> (as suggested)
<wrtp> i'll uncommit it
<fwereade_> gents, I'm really tired; I'll be around again at some stage later today but right now I need a walk and a lie down
<fwereade_> laters
<wrtp> fwereade_: ok. enjoy the walk...
<wrtp> niemeyer: ok, that worked, thanks!
<wrtp> niemeyer: so... how did you fix it?
<niemeyer> fwereade_: Have some good rest man
<niemeyer> wrtp: In general, reviving a file is trivial: revert -r N file_name and bang.. we have the file back
<niemeyer> wrtp: The problem in this case is that the file has been added back with a different id
<niemeyer> wrtp: So when we attempt to "bring it back", bzr actually picks the latest id used for that name
<niemeyer> wrtp: The trick for getting the real original file back was to create another branch that still had the old file
<niemeyer> wrtp: Then, remove the file locally, and add it back again with the new content, but using bzr add --file-ids-from $OLD_BRANCH
<niemeyer> wrtp: This tricked bzr into looking at the original revision
<wrtp> niemeyer: ah. i had no idea about --file-ids-from
<wrtp> interesting
<wrtp> niemeyer: BTW the CL now seems to have a file called "[revision details]"
<niemeyer> wrtp: Yeah, that was intentional
<niemeyer> wrtp: this used to be at the head of every diff, but unfortunately this makes Rietveld get lost in terms of diffing the diffs
<niemeyer> wrtp: Now we'll be able to see which files actually changed between revisions, as usual
<wrtp> niemeyer: i'm not sure i understand - what exactly is that file showing?
<niemeyer> wrtp: Have you clicked on it?
<wrtp> niemeyer: yeah. i get two revision ids
<niemeyer> wrtp: That's what this file is showing :-)
<wrtp> niemeyer: what do the revision ids refer to? i'd've expected that one would refer to me, given i've created the latest revision in that branch.
<niemeyer> wrtp: Sorry, I'm not sure I understand your question, since the revision ids are labeled
<niemeyer> wrtp: They show the old revision id and the new revision id..
<wrtp> niemeyer: ah! of course. you created the latest revision!
<niemeyer> wrtp: :-)
<niemeyer> wrtp: This information enables any patch set to be recovered exactly as proposed
<wrtp> niemeyer: hmm:
<wrtp> % bzr diff --old themue@gmail.com-20120312190709-6cki3f36c8clo63t
<wrtp> % bzr diff --old gustavo@niemeyer.net-20120313144703-jz03znqyo2qt56qu
<wrtp> %
<wrtp> or maybe i can't use a revision id as a --old target
<niemeyer>   --old=ARG             Branch/tree to compare from.
<niemeyer> wrtp: you want -r
<wrtp> i've not used revision ids in bzr before.
<wrtp> i wonder why it didn't give an error
<niemeyer> wrtp: Yeah, curious indeed
<wrtp> niemeyer: ah, this worked:  bzr qdiff -r revid:themue@gmail.com-20120312190709-6cki3f36c8clo63t
<wrtp> i was missing the revid: prefix
<wrtp> and the -r of course
<niemeyer> wrtp: bzr diff -r themue@gmail.com-20120312190709-6cki3f36c8clo63t
<niemeyer> wrtp: This works for me
<wrtp> niemeyer: so it does.
<wrtp> (for me too)
<wrtp> niemeyer: thanks, that's useful.
<niemeyer> wrtp: np
<niemeyer> wrtp: I'll see if I take that lbox hackathon and implement the pre-req stuff later
<niemeyer> after some further reviews
 * wrtp is hoping to get this branch reviewed today :-)
<wrtp> niemeyer: here's the cherry on the cake: https://codereview.appspot.com/5754103
<wrtp> it feels very good to have that finally proposed!
<niemeyer> wrtp: I've delivered a LGTM review
<wrtp> niemeyer: yay!
<niemeyer> wrtp: but I'm a bit unsure about what's going on there.. do we really have only a couple of changes to tests in that branch?
<niemeyer> wrtp: Doesn't seem to match the description
<wrtp> niemeyer: all it does is connect to the zookeeper that's already started as a result of the userdata changes in the previous branch.
<niemeyer> wrtp: The description seems bogus then.. you're changing tests only
<niemeyer> wrtp: It should say something like "Fix lack of tests!" :-)
<TheMue> niemeyer: next step for the continued machine branch, will now use william's 'presence' for my 'agent' draft.
<wrtp> hmm. you're right!
<TheMue> niemeyer: oh, forgot the link, https://codereview.appspot.com/5690051/
<niemeyer> wrtp: That TestBootstrap should be reverted too
<wrtp> niemeyer: TestBootstrap can't work against the ec2test server
<wrtp> hmm
 * wrtp goes to look
<wrtp> niemeyer: yeah, maybe we should have TestBootstrap and TestBootstrapAndOpen
<wrtp> niemeyer: and avoid running the latter against ec2test
<niemeyer> wrtp: It certainly feels pretty bad that we won't be testing all of that logic locally anymore
<wrtp> niemeyer: that will make the live tests take a lot longer though
<niemeyer> wrtp: Isn't that the whole purpose of having two different sets of tests, one that opens and runs all tests, and one that does not?
<niemeyer> wrtp: I feel like we're getting lost in that mess of suites
<wrtp> niemeyer: well, the point of the Live tests is that they're designed to be run against amazon. and as an added bonus we can run most of them locally as well.
<niemeyer> wrtp: That's not what I'm talking about.. we have 4 suites, right?
<niemeyer> wrtp: Why?
<wrtp> two cross-provider suites; two ec2-specific suites.
<wrtp> niemeyer: each for live vs local
<wrtp> niemeyer: that Bootstrap logic is also tested inside the non-live test suite
<wrtp> niemeyer: running the live suite locally is just a nice thing to do when we can. but in this case we can't.
<niemeyer> wrtp: Running the live suite locally is the base strategy we've been using for testing juju's ec2 provider
<niemeyer> wrtp: If we drop those tests, logic is effectively going untested
<wrtp> niemeyer: yeah, we do rely on some of those tests, but we don't really want to duplicate the code.
<wrtp> niemeyer: maybe it would be better the other way around - if LiveTests selectively imported from LocalTests.
<niemeyer> wrtp: I'm starting to dislike that organization of tests altogether
<niemeyer> wrtp: We're getting lost very easily
<wrtp> niemeyer: yeah, it's not ideal. but i think the ideal of having some provider-independent tests is a good one.
<niemeyer> wrtp: Yeah, maybe we should try to have the zk running in the local tests as well..
<niemeyer> wrtp: But we'll face a road block very soon
<niemeyer> wrtp: So I'm not really sure it's worth it
<wrtp> niemeyer: yeah, i dunno.
<niemeyer> wrtp: I think LiveTests should bootstrap just once
<niemeyer> wrtp: Unless we're testing something that knowingly destroys the environment
<niemeyer> wrtp: We have to avoid that fear that introducing new tests will slow things down
<niemeyer> wrtp: Otherwise we'll stop writing tests..
<niemeyer> wrtp: So the bootstrap itself would happen at SetUpSuite
<wrtp> niemeyer: what about the tests that don't require a bootstrapped environment?
<wrtp> niemeyer: maybe it should happen on demand, but once only.
<niemeyer> wrtp: Right, that sounds nice
<wrtp> niemeyer: it's a pity in a way that gocheck runs tests in alphabetical order. it would be nice to arrange things so that the weaker tests run first.
<niemeyer> wrtp: In fact, we already do that..
<niemeyer> wrtp: That was the whole idea behind t.Open
<niemeyer> Somehow we lost it
<wrtp> niemeyer: Open is quite different from Bootstrap though
<niemeyer> wrtp: Maybe..
<wrtp> niemeyer: but i agree, that kind of thing should work
<niemeyer> wrtp: I'll have to step out for lunch as Ale is waiting.. let's continue that conversation in 1h
<wrtp> niemeyer: sounds good
<robbiew> niemeyer: fyi...movement on the charm store RT...they are requesting a call with you
<TheMue> fwereade_: arg, thx, when creating a function it indeed should be used. will fix it.
<fwereade_> TheMue, cheers
<niemeyer> robbiew: Awesome!
<niemeyer> wrtp: So, tests..
<wrtp> niemeyer: yeah
<wrtp> niemeyer: i've just added a bootstrap flag
<niemeyer> wrtp: flag?
<wrtp> niemeyer: so that live tests can call Bootstrap and it'll only bootstrap if another test hasn't already bootstrapped
<wrtp> niemeyer: a state variable rather than a flag, really
<niemeyer> wrtp: This is a test-specific detail..
<niemeyer> wrtp: It's a problem for the suite to handle
<wrtp> niemeyer: the variable is inside the suite
<niemeyer> wrtp: So by "state" you mean "suite"?
<wrtp> niemeyer: yeah. i meant state in the generic sense of a variable that records the state of something (in this case whether we've bootstrapped already)
<niemeyer> wrtp: Ah, ok.. we have too many states :-)
<wrtp> :-)
<niemeyer> wrtp: I think the method in the suite should be something like BootstrapOnce, to differentiate from the Bootstrap method we already have in the Tests suite
<wrtp> niemeyer: i'm also adding a flag inside jujutest.LiveTests which indicates whether it's possible to connect to the juju state.
<niemeyer> wrtp: Ok
<wrtp> BootstrapOnce sounds good. (although it might do it again if you deliberately call Destroy)
<wrtp> niemeyer: BTW i'm interested to hear ideas as to how to better structure the test suite.
<niemeyer> wrtp: Yeah, not sure yet.. the complexity is just a bit overwhelming at the moment
<niemeyer> wrtp: 2 base suites, 4 suites in ec2, multiple scenarios, etc etc
<wrtp> niemeyer: yeah. perhaps the multiple scenario stuff should go.
<wrtp> niemeyer: (although it did catch some useful errors earlier on)
<wrtp> niemeyer: the other stuff seems difficult to avoid though.
<niemeyer> wrtp: I think we can just pick the harsher scenario and go with it
<wrtp> niemeyer: not always easy to know which is "harsher". different scenarios can expose different bugs.
<wrtp> niemeyer: but i think i'll just go with "extra-instances"
<niemeyer> wrtp: I find it easy to see which one is likely to yield bugs
<wrtp> niemeyer: i don't think we care too much if instances come up in initial running state.
<niemeyer> wrtp: Pre-existing instances is trickier to handle than an empty environment
<wrtp> niemeyer: yeah, that one's an easy call.
<niemeyer> wrtp: Handling an instance in an intermediate state before running is also less trivial than getting instances running upfront
<wrtp> niemeyer: another one though, that we might want to do in the future, is eventual consistency. but perhaps we just always test for maximum eventual consistency delay.
<niemeyer> wrtp: Yeah.. I think we should have a reasonable delay that is likely to yield bugs but does not cause the suite to be extremely boring
<niemeyer> wrtp: Then, we can have something like Go's -cpu setting for tests
<niemeyer> wrtp: So that we can tweak that value for specific runs
<wrtp> niemeyer: seems plausible
<wrtp> niemeyer: (it'd be nice if gocheck could run tests concurrently BTW)
<niemeyer> wrtp: Yeah, it will eventually
<niemeyer> rebooting.. apparently Unity is now fixed and won't pop up the HUD all the time.. Woohay.
<niemeyer> Heh.. the upgrade *removed* unity..
<niemeyer> But the HUD issue is gone indeed
<wrtp> niemeyer: this is why we need better error messages from state:
<wrtp> /home/rog/src/go/src/launchpad.net/juju/go/environs/jujutest/livetests.go:72:
<wrtp>     c.Assert(err, IsNil)
<wrtp> ... value syscall.Errno = 0x2 ("no such file or directory")
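The complaint above is that a bare `syscall.Errno` says nothing about where it came from. A generic sketch of the alternative (ordinary Go error wrapping, not juju's actual fix; the function name and path are hypothetical):

```go
package main

import (
	"fmt"
	"os"
)

// openConfig annotates the low-level error with what was being attempted,
// so a test failure reads "cannot read environments config ..." instead of
// a bare "no such file or directory".
func openConfig(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return fmt.Errorf("cannot read environments config %q: %v", path, err)
	}
	f.Close()
	return nil
}

func main() {
	fmt.Println(openConfig("/nonexistent/environments.yaml"))
}
```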
<wrtp> niemeyer: i've gotta go
<wrtp> niemeyer: PTAL; i haven't double-checked it, but it might be ok :-)
<wrtp> https://codereview.appspot.com/5754103
<niemeyer> wrtp: This is why we need better *error messages*
<niemeyer> wrtp: Let's not take the leap from there onto saying we *have* to implement a given approach
<niemeyer> wrtp: We already discussed improving error messages, and haven't done it yet
<wrtp> niemeyer: sure. i'm not saying that my suggested approach is the only one that'll work.
<wrtp> anyway, i'm  being shouted for!
<niemeyer> wrtp: Have fun :)
<wrtp> niemeyer: see you tomorrow
<niemeyer> wrtp: These error messages look like an issue in your local setup, btw.. there are no elements of "launchpad.net/goamz/aws" in /home/rog/src/go/src
<niemeyer> wrtp: Sorry, that was about a pretty old paste you made
<niemeyer> wrtp: ECONTEXT
<niemeyer> hazmat: ping
<hazmat> niemeyer, pong
<niemeyer> hazmat: Yo
<niemeyer> hazmat: I'm reviewing the Go branch that implements RemoveMachine in state
 * hazmat nods
<niemeyer> hazmat: I recall we had some good exchanges around the idea that machines shouldn't simply disappear
<niemeyer> hazmat: Have you ever touched that area in the Python incarnation?
<hazmat> niemeyer, that api should still be present for the final removal, but the machine agent shouldn't be killed by just removing its state and killing the machine. instead there should be state communication/coordination around its termination, so it can shut down gracefully before being terminated permanently
<hazmat> ie. we don't kill identity of an actor before it has a chance to shutdown gracefully
<hazmat> but after it does or a suitable timeout, we do go ahead and remove the state
<hazmat> so the state api for removal should still be present
<hazmat> but the act of stopping a machine via the api uses a state protocol before the provisioning agent uses removemachine
<niemeyer> hazmat: Yeah, I recall that.. but have you ever implemented anything?
<hazmat> niemeyer, no.. it's the last bug marked critical for 12.04. i'm going to get a spec to the list after i return from the pycon sprints tomorrow
<niemeyer> hazmat: Ah, cool
<hazmat> niemeyer, i've got a draft in lp:~hazmat/juju/unit-stop
<niemeyer> hazmat: RemoveMachine will certainly change.. it has to be split in two
<hazmat> niemeyer, i figure just a different api ... like stopmachine.. that will in turn remove machine when its done
<niemeyer> hazmat: "stop machine" implies something else.. "stop" has the "start" counterpart
<hazmat> true
<niemeyer> hazmat: Well, I guess we could model it that way even..
<niemeyer> hazmat: stop => remove.. or stop => start..
<niemeyer> hazmat: How's subordinates going, btw?
<niemeyer> TheMue: You've got another review
<TheMue> niemeyer: thx, done a first scan. your critique of the comments is ok. i'm just used to using them sometimes for visual block building, but i can live w/o them here.
<niemeyer> TheMue: Thanks!  I feel a bit on the other side.. if there's a comment, I'll always read it while skimming through the code.. the fact the comment says nothing is distracting
<TheMue> niemeyer: i'll remove them and be more keen on better helping comments if i feel they are needed
<niemeyer> TheMue: Thanks a lot
<niemeyer> TheMue: On the bright side, there are only trivial details really.. the branch is very nice
<TheMue> niemeyer: thx, william's argument regarding RemoveMachine() has been good
<niemeyer> TheMue: Please don't forget to lbox propose again (just got the review Dones)
<TheMue> niemeyer: done the propose
<niemeyer> TheMue: LGTM!
<niemeyer> Woohay, one more branch..
<TheMue> niemeyer: great, thx, will submit it
<niemeyer> TheMue: Have you done any changes in https://codereview.appspot.com/5782053/ that are ready for review?
<niemeyer> TheMue: I'm asking only because you mentioned something earlier on
<niemeyer> TheMue: I know it's late for you, so just wondering if you have something that is already ready
<TheMue> niemeyer: the proposal works, but currently w/o the presence solution of william. i'll integrate it tomorrow.
<niemeyer> TheMue: Ok, super
<TheMue> niemeyer: but for today i say good night ;)
<niemeyer> TheMue: have a good night!
<TheMue> niemeyer: thx, cu tomorrow
<andrewsmedina> niemeyer: hi
<niemeyer> andrewsmedina: Hey
<andrewsmedina> niemeyer: tonight I will send more code for review related to lxc
<andrewsmedina> niemeyer: and I have a question
<niemeyer> andrewsmedina: Sure, what's up?
<andrewsmedina> niemeyer: do I need to create another branch to work on the local environment, or can I use the same branch that I'm using for lxc?
<niemeyer> andrewsmedina: One branch, one proposal, one submit
<andrewsmedina> niemeyer: will there be one proposal for lxc and another for the local environment?
<niemeyer> andrewsmedina: The smaller and more self contained a proposal/branch is, the better for us to review, and the better for you to fix it for inclusion
<niemeyer> andrewsmedina: Your initial lxc/local branch is great
<niemeyer> andrewsmedina: But it's pending some fixes
<andrewsmedina> niemeyer: I know
<niemeyer> andrewsmedina: It has to be fixed accordingly, and updated ("lbox propose" again)
<andrewsmedina> niemeyer: I have already fixed those issues, and I added other methods to the lxc container type
<niemeyer> andrewsmedina: Other methods should be in a different proposal
<niemeyer> andrewsmedina: Once a proposal is made, the issues pointed out should be fixed, so that we can include it
<andrewsmedina> niemeyer: hm..
<niemeyer> andrewsmedina: Otherwise we get into a never-ending cycle
<andrewsmedina> niemeyer: you're right
<andrewsmedina> niemeyer: thanks
<niemeyer> andrewsmedina: Thank you
<niemeyer> Heading out for dinner with some friends.. back later
#juju-dev 2012-03-14
<fwereade_> wrtp, are you free for a vague rambling discussion about hook contexts?
 * wrtp enjoys vague rambling discussions.
<wrtp> fwereade_: certainly
<wrtp> mornin' BTW
<fwereade_> wrtp, ok: I think I've figured out how Context should look, but I'm not sure what path I should take to get there
<fwereade_> wrtp, mornin' :)
<wrtp> fwereade_: what general direction are you aiming in?
<fwereade_> wrtp, I'm pretty sure that we only need a single `type Context struct`, with methods that map 1:1 to hook commands, which is cool
<wrtp> fwereade_: ah, the Context that was called Env, right?
<fwereade_> wrtp, yeah
<wrtp> fwereade_: so this is from the agent's perspective?
<fwereade_> wrtp, (I retain a feeling that it's not insane to have an Env interface and a Context interface, just for clarity of testing, but that's by the by)
<fwereade_> wrtp, yeah
<fwereade_> wrtp, it seems that the way to do that is to have fields for local unit, unit relation, relation members, and remote unit
<wrtp> i'm not sure i understand
<fwereade_> wrtp, and simply to error sensibly when someone asks for a capability we cannot provide
<wrtp> oh, i see, i think.
<wrtp> you think it's possible to make all hook commands context insensitive.
<fwereade_> wrtp, so RelationSet, for example, just won't work if we don't have a .relation
<fwereade_> wrtp, it sounds like that's on the roadmap
<fwereade_> wrtp, and thinking about it from that perspective leads to IMO a nicer implementation right now
<wrtp> in which case, yeah, i could see that working. each field determines the behaviour of one or more callback methods.
<fwereade_> wrtp, exactly
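The per-field design agreed above (each optional field determines whether a capability works, and we "error sensibly" otherwise) might be sketched like this; the type and names are hypothetical, not the final juju API:

```go
package main

import (
	"errors"
	"fmt"
)

// Relation stands in for the unit relation state discussed above.
type Relation struct{ settings map[string]string }

// Context holds optional fields; a nil field simply means that capability
// is unavailable in the current hook. Only relation is shown here.
type Context struct {
	relation *Relation
}

// RelationSet errors sensibly when no relation is in scope, instead of
// assuming every hook runs inside a relation context.
func (c *Context) RelationSet(key, value string) error {
	if c.relation == nil {
		return errors.New("relation-set: no relation in this context")
	}
	c.relation.settings[key] = value
	return nil
}

func main() {
	bare := &Context{}
	fmt.Println(bare.RelationSet("foo", "bar")) // errors: no relation available

	rel := &Context{relation: &Relation{settings: map[string]string{}}}
	fmt.Println(rel.RelationSet("foo", "bar")) // succeeds
}
```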
<fwereade_> wrtp, ...hm ok I think I now know what I need to do in the exec branch
<wrtp> fwereade_: in fact, the callback methods need not be defined on the Context.
<fwereade_> wrtp, my original plan was not to have them on the context
<fwereade_> wrtp, I think it works out nicer if they are though
<wrtp> fwereade_: you can still use your approach of "remote command execution" if you want to.
<fwereade_> wrtp, yep, that's almost entirely orthogonal
<wrtp> fwereade_: yeah
<wrtp> fwereade_: but i think it would look nicer if they were there, from an implementation perspective. they wouldn't need to be exported.
<fwereade_> wrtp, the only difference is that JUJU_CONTEXT_ID maps back to a context which already has some of those fields filled in
<fwereade_> wrtp, and we need to be able to optionally override which state we're working with on the command line
<wrtp> fwereade_: so the agent proactively fills in those fields rather than wait for the callback before reacting?
<wrtp> i'm not sure i understand your last remark
<fwereade_> wrtp, sorry, it's somewhat speculative
<wrtp> what's "state" here?
<fwereade_> wrtp, for now it's not necessary: it's just so we have a path to `relation-set $SOME_RELATION foo=bar`
<fwereade_> wrtp, state in that case is $SOME_RELATION
<wrtp> hmm, still at sea
<fwereade_> wrtp, it's far enough in the future that I can't predict exactly how it will look: for now, pre-filled contexts are exactly what we need
<fwereade_> wrtp, sorry, I'll try to build it up a bit more logically
<wrtp> fwereade_: i'd like to understand your speculation...
<fwereade_> wrtp, proximate goal: given a known, er "platonic" context for a given hook, we can construct a Context that knows how to get to the correct state.Unit/Service/UnitRelation in order to extract or manipulate it
<fwereade_> wrtp, we can then implement Commands that just have the appropriate Context set on creation, and which can manipulate the context directly, concerning themselves only with input/output handling
<wrtp> fwereade_: the Commands are agent-side?
<fwereade_> wrtp, ultimate goal: make the Commands capable of handling additional args that allow them to construct their own Contexts
<fwereade_> wrtp, yes
<wrtp> hrmm
<fwereade_> wrtp, that's not something I'm offering a *plan* for at the moment
<wrtp> i don't see why they'd need to do that.
<fwereade_> wrtp, it's just a door I don't want to close
<wrtp> fwereade_: can't you ensure that there's always a context for a command?
<fwereade_> wrtp, so that relation-set $SOME_RELATION can work out-of-band
<wrtp> fwereade_: even out-of-band commands may have a context, no?
<wrtp> fwereade_: (they've got to know how to talk to the agent, for a start)
<fwereade_> wrtp, they do, but that context is essentially "what unit/relation are we working with here?"
<fwereade_> wrtp, yep, so JUJU_AGENT_SOCKET may end up having to be global, IMO that's not an exceptionally big deal
<fwereade_> wrtp, but I don't think we can know what relation relation-set wants to affect before it's called
<wrtp> fwereade_: i think that is a big deal when we allow multiple unit agents in the same container
<fwereade_> wrtp, huh, yeah, true
<fwereade_> bah
<wrtp> fwereade_: so given that, i think we can always assume JUJU_CONTEXT_ID, and hence some context.
<wrtp> niemeyer: hiya
<fwereade_> wrtp, I think they're different problems
<niemeyer> Heya!
<niemeyer> Mornings
<fwereade_> wrtp, out-of-band execution means we *can't* depend on JUJU_CONTEXT_ID
<fwereade_> morning niemeyer
<wrtp> fwereade_: i don't think it does
<wrtp> fwereade_: environment variables get exported
<wrtp> fwereade_: out-of-band doesn't mean no-relation-to.
<fwereade_> wrtp, get exported to where? my understanding is that we eventually intend `relation-set foo bar=baz` to work even when called in response to some action taken by the service itself in an entirely unpredictable environment
<wrtp> fwereade_: once a hook context has "expired" (when the hook has finished executing), we could map the old context id to a generic "out of band" context.
<wrtp> fwereade_: i don't think that can possibly work
<fwereade_> wrtp, I don't see how that's any different to just not having a context id
<wrtp> fwereade_: maybe it's not. JUJU_AGENT_SOCKET is the important thing. but...
<fwereade_> wrtp, there's the how-do-we-talk-to-the-right-agent problem but that's probably soluble even if it doesn't turn out especially elegant
<wrtp> fwereade_: perhaps we might want to make callbacks within the context of a given hook (even if the hook has completed), still have some hook-specific behaviour
<niemeyer> fwereade_: <fwereade_> wrtp, out-of-band execution means we *can't* depend on JUJU_CONTEXT_ID
<niemeyer> fwereade_: Kind of.. JUJU_CONTEXT_ID is what says it's out-of-band in the first place, right?
<fwereade_> niemeyer, lack thereof, yes
<wrtp> niemeyer: i think there are two kinds of out-of-band here
<wrtp> niemeyer: 1) called after the hook has returned
<wrtp> niemeyer: 2) called from an independent environment
<wrtp> niemeyer: i'm thinking about 1), and i think we can rule out 2)
<fwereade_> wrtp, in (1), the old hook context is potentially *wrong*, so I think that's the case we should rule out
<wrtp> in 1), you've always got a JUJU_CONTEXT_ID
<fwereade_> wrtp, ...which maps to a context with a potentially out-of-date member set
<wrtp> fwereade_: at least we know when the context is wrong, and can do something appropriate
<fwereade_> wrtp, how's that any different to just not having a context and knowing we have to "do something appropriate"? ;)
<fwereade_> wrtp, ie somehow obtain the correct context to work with
<wrtp> fwereade_: because i imagine that the behaviour might differ depending on the original hook context.
<fwereade_> wrtp, why?
<niemeyer> wrtp: I don't understand what you mean by that
<niemeyer> wrtp: 1 or 2
<niemeyer> fwereade_: Agree regarding the out-of-date member set
<niemeyer> fwereade_: But, it's a bit of a tricky situation..
<wrtp> ok... here's a tangential question: which hook callbacks are not ok to call in which hooks?
<niemeyer> fwereade_: Why is a command spawned in the background by a hook different from a command spawned in the background by something else?
<fwereade_> wrtp, *at the moment* you can't do relation-set,-get,-list outside a relation hook
<wrtp> fwereade_: ok, but we'd change that, right?
<wrtp> fwereade_: 'cos it's silly if we allow them out-of-band but not in certain hooks
<fwereade_> wrtp, and attempting a relation-set in a departed hook will error out (which I think we should also change -- no reason not to be able to set, even if nobody's going to read it)
<wrtp> niemeyer: that's an easy answer - because the command spawned in the background by something else can't know what unit it's executing in
<fwereade_> wrtp, yep, which is why this is a bit of a digression ;)
<fwereade_> wrtp, but why should it care?
<fwereade_> wrtp, ok, it should care because AGENT_SOCKET
<wrtp> fwereade_: exactly
<wrtp> fwereade_: and relation-get returns something different depending on what unit it's in
<niemeyer> wrtp: A command spawned in the background isn't necessarily executing in any unit..
<wrtp> niemeyer: no?
<fwereade_> wrtp, indeed
<niemeyer> wrtp: Sorry, bad terminology on my part
<niemeyer> wrtp: What I mean is that it's not necessarily executing in any relation.. you can spawn a server, and have it executing relation-set reactively, for instance
<wrtp> niemeyer: yeah, that seems fine.
<niemeyer> It shouldn't blow up just because it was started within a hook rather than by the init scripts
<niemeyer> In that sense, it's out-of-band as well, and it should work
<wrtp> niemeyer: i think we're leading towards the idea that all the callbacks should work independently of whether they're called inside a hook or not.
<wrtp> niemeyer: and all callbacks should be valid all the time.
<fwereade_> wrtp, exactly so
<wrtp> except...
<fwereade_> wrtp, I hadn't picked up on this until I spoke to niemeyer ?yesterday?
<wrtp> what about races?
<wrtp> i.e. within a hook, you know that relation-get is consistent
<wrtp> but outside a hook, it might be constantly changing.
<wrtp> ha
<wrtp> maybe there could be some way of *asking* for a context id
<niemeyer> wrtp: Indeed. We can allow the creation of contexts at some point, for providing the same guarantees a hook has, but that's a distant future. Commands don't even work out of hooks yet.
<wrtp> JUJU_CONTEXT_ID=$(get-context-id)
<fwereade_> wrtp, yeah, was just pondering same
<wrtp> niemeyer: yeah, this all goes back to some current design decisions which were being influenced by this speculation...
<fwereade_> wrtp, we'd need some way to retire/discard them as well, but details details
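The lookup and retirement being discussed (JUJU_CONTEXT_ID mapping back to a live context in the agent, with a way to discard expired ones) might be sketched as a small registry; all names here are assumptions for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Context is a stand-in for the hook context type under discussion.
type Context struct {
	Id       string
	UnitName string
}

// registry maps JUJU_CONTEXT_ID values back to live contexts on the agent
// side. Retire discards a context once its hook has expired.
type registry struct {
	mu   sync.Mutex
	ctxs map[string]*Context
}

func newRegistry() *registry { return &registry{ctxs: make(map[string]*Context)} }

func (r *registry) Put(c *Context) {
	r.mu.Lock()
	r.ctxs[c.Id] = c
	r.mu.Unlock()
}

func (r *registry) Get(id string) (*Context, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	c, ok := r.ctxs[id]
	if !ok {
		return nil, errors.New("no context for id " + id)
	}
	return c, nil
}

func (r *registry) Retire(id string) {
	r.mu.Lock()
	delete(r.ctxs, id)
	r.mu.Unlock()
}

func main() {
	r := newRegistry()
	r.Put(&Context{Id: "ctx-1", UnitName: "wordpress/0"})
	c, _ := r.Get("ctx-1")
	fmt.Println(c.UnitName)
	r.Retire("ctx-1")
	_, err := r.Get("ctx-1")
	fmt.Println(err != nil)
}
```

Something like `get-context-id` would then amount to Put plus handing the new id back to the caller.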
<niemeyer> Might be interesting to marry context id and the socket
<niemeyer> fwereade_: ^
<fwereade_> niemeyer, hm, yeah
<niemeyer> fwereade_: While we still have time :)
<wrtp> niemeyer: i was thinking the same
<fwereade_> niemeyer, I don't think that door is closed
<niemeyer> fwereade_: JUJU_AGENT_SOCKET=<path>[@<n>]
<fwereade_> niemeyer, neither context id nor agent socket are actually intrinsic to the context, I think
<fwereade_> niemeyer, context id is needed to look up a context, but I think the only reason it's stored in the context is for convenience
<fwereade_> niemeyer, (...when constructing the env vars for a hook)
<niemeyer> fwereade_: Right
<wrtp> JUJU_CONTEXT=<n>:path ?
<fwereade_> niemeyer, similarly for the agent socket: they'll both need to be controlled by something other than the context itself
<niemeyer> fwereade_: It's also provably not necessary right now, which means we can make it optional
<niemeyer> fwereade_: Actually, now that I think of it, it's only optional because we don't have anything else but context-rich execution
<fwereade_> niemeyer, yeah, I don't want this to affect the implementation right now, but I don't want to make assumptions that will make it *harder* to implement this when the time comes
<niemeyer> fwereade_: So once we introduce out-of-band commands, context-rich execution needs the context info
<wrtp> niemeyer: yes
<niemeyer> fwereade_: I suggest taking the context stuff out
<niemeyer> fwereade_: I mean, the context id, more specifically
<niemeyer> fwereade_: Unless you're already making use of it in a way that moves us closer to these ideas we're talking about
<fwereade_> niemeyer, so Exec takes a context id and an agent socket in addition to the context itself, which doesn't itself know about them?
<niemeyer> fwereade_: Hmm.. context id and the agent socket are part of the context right now..
<fwereade_> niemeyer, feels like a bit of a speculative interface complication to me... not sure it makes the putative changes any easier
<wrtp> don't we need a context id to stop inappropriate execution of hook commands after the hook has finished?
<niemeyer> fwereade_: I was just suggesting dropping context id from the context, until we have a reason to hold it
<fwereade_> niemeyer, OTOH having them on the context potentially encourages people to use them inappropriately
<fwereade_> wrtp, we need it but it doesn't have to live on the context
<niemeyer> fwereade_: What's "context" in this sentence.. it feels like we're talking about different things?
<fwereade_> niemeyer, the type which (1) ultimately supplies env vars to Exec and (2) exposes useful methods to the hookcommand implementations
<fwereade_> niemeyer, the actual execution environment the hook runs in needs a context id ATM, no argument there
<niemeyer> fwereade_: Those feel like different things..
<fwereade_> niemeyer, well, I thought so too, hence separation between Env and Context
<fwereade_> niemeyer, but I'm starting to think otherwise
<niemeyer> fwereade_: Regarding your second sentence above, ok, so let's keep ContextId in Context (perhaps calling it Id, though?)
<fwereade_> niemeyer, yeah, that's how my current latest sketch looks
<niemeyer> fwereade_: Cool
<wrtp> fwereade_: hmm, i have to say i was quite convinced by our earlier discussion
<niemeyer> fwereade_: Exec needs a type that it can use to execute a command.. this feels isolated from the aspect of hook commands themselves, at least as far as I can foresee at the moment
<wrtp> fwereade_: i think Context works well combined with Env
<wrtp> type Context struct {
<wrtp> 	Id int
<wrtp> 	Vars map[string]string
<wrtp> 	Relations map[string]string
<wrtp> 	Unit *Unit
<wrtp> }
<wrtp> or whatever
<wrtp> but... yeah
<fwereade_> niemeyer, but it's true that as you suggested everything in Env can be derived from stuff we have in the context
<wrtp> the Env is independent, because we might have a context without running a hook
<fwereade_> niemeyer, and there's not much justification for a separate type/interface
<niemeyer> fwereade_: Right
<fwereade_> niemeyer, except in that testing becomes a bit easier
<niemeyer> fwereade_: I see two things, but they're not the same two things that were in the original branch
<wrtp> fwereade_: that's an excellent point though
<fwereade_> niemeyer, heh, I can't precisely remember the original branch now... I thought I had a Context that exposed Env()... and the Commands() thing is no longer really relevant to the conversation
<niemeyer> fwereade_: It's quite possible as well that you had a perfect grand vision, and I just assumed things incorrectly from looking into part of it.
<niemeyer> fwereade_: Context was lost in that branch. It may make sense for another concept, but it doesn't belong into that branch, or in that file even
<fwereade_> niemeyer, definitely not perfect, our discussion the other day modified it a certain amount, but I think the broad strokes are similar enough
<niemeyer> fwereade_: What we need is Exec and ExecVars (your Env, just avoiding the name conflict)
<fwereade_> niemeyer, ah yes, it was just Env which I was semi-expecting the eventual Context type to implement, but wasn't yet sure about: hence just an easy-to-independently-test interface
<fwereade_> niemeyer, I'm actually not so sure about that: I think we just need Context to have `Vars() []string` and leave it at that
<niemeyer> fwereade_: I can easily buy into that too
<fwereade_> niemeyer, whether we ultimately pass a Context or an []string to Exec depends on what makes sense in the final context
<niemeyer> fwereade_: I don't think it makes sense to pass a context to Exec..
<fwereade_> niemeyer, well, it needs Vars and a way to get to the actual hook, which IMO it's sensible to calculate from Context.CharmDir
<wrtp> fwereade_: i think that both Context and Env can be concrete types.
<fwereade_> niemeyer, we could ofc just have context take (vars []string, hook string) where hook is an absolute path calculated elsewhere
<niemeyer> fwereade_: Agreed. This whole design may actually change a bit once we see the rest of what's necessary
<niemeyer> fwereade_: It's strange to have CharmDir in Context, for example.
<fwereade_> niemeyer, agreed, now you mention it :)
<niemeyer> fwereade_: Many of those properties aren't properties of a context.. they are properties of a unit
<niemeyer> fwereade_: So it feels a bit like we're trying to reverse engineer how the bottom looks like by looking at the tip
<niemeyer> fwereade_: Which IMO is why we're a bit lost in that back and forth debate
<fwereade_> niemeyer, true... but I'm not sure there's such a thing as a context without a unit, even when out-of-band
<niemeyer> fwereade_: There's no context without a unit.. but a unit has multiple contexts
<fwereade_> niemeyer, true
<wrtp> hmm, i wonder what happened there
<wrtp> fwereade_: i missed everything after "niemeyer, we could ofc just have context take (vars []string, hook string) where hook is an absolute path calculated elsewhere"
<wrtp> (if there was anything)
<fwereade_> <niemeyer> fwereade_: Agreed. This whole design may actually change a bit once we see the rest of what's necessary
<fwereade_>  fwereade_: It's strange to have CharmDir in Context, for example.
<fwereade_> <fwereade_> niemeyer, agreed, now you mention it :)
<fwereade_> <niemeyer> fwereade_: Many of those properties aren't properties of a context.. they are properties of a unit
<fwereade_> <-- wrtp has quit (Read error: Operation timed out)
<fwereade_> <niemeyer> fwereade_: So it feels a bit like we're trying to reverse engineer how the bottom looks like by looking at the tip
<fwereade_>  fwereade_: Which IMO is why we're a bit lost in that back and forth debate
<fwereade_> <fwereade_> niemeyer, true... but I'm not sure there's such a thing as a context without a unit, even when out-of-band
<fwereade_> <niemeyer> fwereade_: There's no context without a unit.. but a unit has multiple contexts
<fwereade_> <fwereade_> niemeyer, true
<niemeyer> fwereade_: To solve the current debate and make some progress, I suggest we move on with your suggestion. Let's make Exec trivial.. Exec(vars []string, path) as you suggested.
<wrtp> fwereade_: thanks
<fwereade_> niemeyer, sounds good to me
<niemeyer> fwereade_: I suspect the correct thing to do will be a lot more obvious once we reach these fundamental layers
<fwereade_> niemeyer, yeah
<wrtp> vars map[string]string, presumably
<niemeyer> fwereade_: I also suggest sleeping on the idea of how a context relate to a unit, and what's the proper way to organize the relationship between those concepts
<niemeyer> wrtp: That'd work, yeah
<fwereade_> wrtp, not sure it matters much either way, not unreasonable to pass vars around in os.Environ style
<fwereade_> wrtp, once they're fixed anyway
<wrtp> fwereade_: i think os.Environ might be a map now...
<fwereade_> wrtp, ofc they're easier to manipulate as map[string]string
<wrtp> fwereade_: no, it's not
<wrtp> fwereade_: there was *something* like that :-)
<fwereade_> niemeyer, wrtp: I'm just going to try to flesh out my current sketch a little and see if any new enlightenment dawns
<fwereade_> niemeyer, wrtp: well *actually* it seems I'm going to eat lunch
<wrtp> lol
<fwereade_> ttyl :)
<wrtp> fwereade_: luncheon enlightenment?
<fwereade_> haha
<fwereade_> by light alone
<wrtp> fwereade_: you read that yet?
<niemeyer> fwereade_: Lunch.. hmm.. still several hours to go here, how unfortunate :)
<niemeyer> fwereade_: Enjoy :)
<wrtp> niemeyer: []string is better actually. that's the type of the Env field in exec.Cmd.
<niemeyer> wrtp: Right, but it may be easily assembled within Exec.. your suggestion is likely a better interface to work with from within the package
<wrtp> niemeyer: it's possible. but i suspect that in practice all the calls will look like: Exec([]string{"FOO="+foo, "BAR="+bar}, ...)
<wrtp> niemeyer: that is, i wonder how much we'll actually "work with" the environment variables.
<niemeyer> wrtp: That looks like a map to me
<wrtp> niemeyer: no more than os.Environ or exec.Cmd.Env :-)
<niemeyer> wrtp: Never built one of those by hand.
<wrtp> niemeyer: i'm happy to minimise impedance mismatch where it doesn't make much difference.
<wrtp> niemeyer: but i don't mind too much (i did suggest a map originally, after all)
<niemeyer> wrtp: :-)
<TheMue> wrtp: i would use a type of my own based on map[string]string here, having methods to pass it as []string
<TheMue> wrtp: so the usage is simple, and so is passing the content to Exec()
<wrtp> niemeyer: one possible place where it's actually easier to work with a slice is if your env vars are mostly constant, with one changing. then you can do Exec(append(constantVars, "ONE_VAR="+...))
<niemeyer> wrtp: m["ONE_VAR"] = ...
<wrtp> niemeyer: that doesn't work if you don't want the map to be changed for everyone
<wrtp> niemeyer: you'd have to copy the map first
<niemeyer> wrtp: That's not how exec works..
<TheMue> wrtp: could be done with my type too. even more comfortable in a method myEnv.WithNewVariable(k, v string) []string
<TheMue> wrtp: w/o changing it, using the append internally
<wrtp> i think all this is overthinking it. i think in practice the usage will be ultra simple.
<niemeyer> Let's please move on. Whatever fwereade_ picks is fine.
<wrtp> agreed
<niemeyer> wrtp: Yes, and you've been redefining how to work with a map above..
<wrtp> niemeyer: ?
<niemeyer> wrtp: !
<wrtp> [11:21] <niemeyer> wrtp: That's not how exec works..
<wrtp> depends whether the map is used in a concurrent context
<wrtp> anyway, let's leave it.
<niemeyer> wrtp: Both structures are mutable.. a process environment isn't mutable.
<wrtp> niemeyer: i didn't say it was. i was referring to m["ONE_VAR"] which will change m for everyone unless you create it every time. it's not a big issue.
<niemeyer> wrtp: So will append..
<niemeyer> fwereade_: Please use a map.. let's see how it looks like.
<wrtp> niemeyer: not if the slice has cap == len. but perhaps that's too fragile.
<niemeyer> wrtp: Indeed it is. append mutates the slice.
<wrtp> niemeyer: only if cap != len
<TheMue> wrtp: take a look at http://paste.ubuntu.com/883111/
<niemeyer> wrtp: Indeed it is fragile.
<TheMue> wrtp: untested quick hack
<niemeyer> wrtp: Gosh..
<niemeyer> wrtp: These arguments really consume our time for no good reason.
<wrtp> yeah
<wrtp> let's not
<wrtp> sorry, it's my fault!
 * TheMue sighs
<TheMue> lunchtime
<fwereade_> niemeyer, I think that Exec does actually need a Context... we'll need to flush settings writes once the hook's complete
<wrtp> fwereade_: can't that be done by the caller of Exec?
<niemeyer> fwereade_: Agree with wrtp's question
<niemeyer> fwereade_: Also, who calls Exec
<fwereade_> wrtp, well, it could; but `Exec(vars map[string]string, path, dir string)` is actually so trivial a wrapper around os/exec that I'm not sure it's meaningful any more
<wrtp> niemeyer, fwereade_: is it ok if i leave the log prints in the live test until we hook up logging to the tests?
<niemeyer> fwereade_: I don't have the full picture, so don't really have any strong recommendations at this point.. I'd just try to work out a way to keep the concepts nicely isolated
<wrtp> fwereade_: we had too long a discussion about this earlier. go with what you feel like :-)
<fwereade_> wrtp, I question the value of logging those tests at all, really, but I don't feel all that strongly about it
<fwereade_> niemeyer, wrtp: cool, cheers
<wrtp> fwereade_: the logging only appears in verbose mode. sometimes it'll hang forever and it's useful to know where.
<niemeyer> wrtp: Hooking up logging to the tests is done by log.Target = c in SetUpTest
<niemeyer> wrtp: We already do that in some suites
<fwereade_> wrtp, ok, cool, that makes sense then
<niemeyer> wrtp: I'm happy with the branch to move forward as it is, though, and have that in a separate one
<wrtp> niemeyer: i can do that, but i'd need to add log.Printf calls too. i'd prefer to do it in a separate branch
<wrtp> yeah
<niemeyer> fwereade_: Just as a nice realization, everything in that branch *is* a thing wrapper on top of Exec..
<niemeyer> s/thing/thin
<fwereade_> niemeyer, agreed, but imo it's sane for "hook.Exec" to know about the particular idiosyncrasies of hook execution
<fwereade_> niemeyer, it'd be a different matter if it were "util.Exec" or something
<niemeyer> fwereade_: Agreed
<niemeyer> fwereade_: But there are none there, yet..
<niemeyer> fwereade_: I see your point, though.. it may well make sense to have a context there
<wrtp> fwereade_, niemeyer: submitted! thanks. 144 commits it took :-)
<fwereade_> wrtp, that's gross
 * fwereade_ tries to keep a straight face
<niemeyer> wrtp: 2 of those are mine! ;-)
<wrtp> niemeyer: 2 of your commits say "committer: Roger Peppe" ?
<niemeyer> wrtp: Nah, nevermind
<wrtp> anyway, i'm very happy to have got there in the end!
<fwereade_> aww, I was hoping for at least a groan of pain from someone, it's rare to get such a perfect opportunity for a really ugly pun
 * wrtp lets the pun sail way over his head.
 * wrtp lunches
<niemeyer> fwereade_: So, turns out I lied to you.. proposing again with lbox won't mark the branch as Needs Review
<niemeyer> fwereade_: I'm fixing that now, but just wanted to mention in case you reproposed some branch before that I'm not seeing!
<fwereade_> niemeyer, ah-ha, I suspect I have a branch or two languishing then, let me check
<fwereade_> niemeyer, nope, go-add-cmd-context is still NR (but then I don't think I ever WIPed it); I'll bear it in mind though
<fwereade_> niemeyer, thanks
<andrewsmedina> good morning
<niemeyer> andrewsmedina: Heya
<niemeyer> fwereade_: np
<niemeyer> fwereade_: Already fixing it, so hopefully won't bite in the future
<fwereade_> niemeyer, sweet, tyvm
<fwereade_> andrewsmedina, good morning
<andrewsmedina> niemeyer: I saw the reviews made for you and rog
<niemeyer> fwereade_: I hope to fix pre-req right after
<fwereade_> niemeyer, <3
<niemeyer> fwereade_: *that* will be a real win :)
<fwereade_> niemeyer, maybe even <4
<niemeyer> andrewsmedina: Super.. sorry for some confusion there
<niemeyer> fwereade_: LOL
<niemeyer> fwereade_: We should totally start using <4 as a convention :)
<fwereade_> niemeyer, I have a vague feeling I stole it from kingdom of loathing
<andrewsmedina> niemeyer: I really need to parse the lxc error output?
<niemeyer> andrewsmedina: Not really parsing.. the basic idea is to include it in the error.. but it'd be nice to strip out the usage message in case it's there
<niemeyer> andrewsmedina: so
<niemeyer> output, err := ...
<andrewsmedina> niemeyer: I'm already doing it
<niemeyer> if i := bytes.Index(output, []byte("\nUsage: ")); i > 0 {
<niemeyer>     output = output[:i]
<niemeyer> }
<niemeyer> andrewsmedina: That's all really
<andrewsmedina> niemeyer: I will do it now
<niemeyer> andrewsmedina: Thanks a lot
<niemeyer> I promise I won't ask why setStatus is its own method in a merge_proposal object, while everything else is just changing fields
<niemeyer> fwereade_, wrtp: I'll also invert the meaning of -same-log, and make it the default behavior, as suggested by wrtp before.
<niemeyer> wrtp: You were right.. showing the logs in all cases is boring, and people don't update the log even if it's in their face.
<fwereade_> niemeyer, I update the log (er, sometimes)
<wrtp> niemeyer: ECONTEXT
<fwereade_> niemeyer, but I can bear to type a few more chars when I need to ;)
<niemeyer> wrtp: Fixing lbox
 * wrtp can't remember what command -same-log is a flag to...
<wrtp> ah, i remember now
<niemeyer> fwereade_: It will continue to be shown on submit too, so that's always a good hooking point to get the log right
<wrtp> log==description
<fwereade_> niemeyer, excellent
<wrtp> i've said this before, but i think it's a pity that so much functionality is shoehorned into "lbox propose"
<wrtp> niemeyer: i know the conception of lbox is as a general interface to launchpad, but will it actually end up as any more than propose+submit?
<niemeyer> wrtp: Yes
<wrtp> niemeyer: BTW it would also be great if we could somehow reduce the volume of emails that are sent as a result of lbox.
<wrtp> each change generates about 3 emails.
<niemeyer> wrtp: Each change generates two emails, one from codereview, one from Launchpad. I don't know of any way to avoid the email from Launchpad without also losing the history in the merge proposal itself.
<wrtp> niemeyer: there's also the two emails that are sent when you actually send the comments on the codereview page...
<niemeyer> wrtp: Exactly.. each change generates two emails.
<wrtp> niemeyer: it means i can't respond to some comments with some changed code without generating 4 emails.
<niemeyer> wrtp: I apologize for the trouble. I don't know how to solve the issue.
<wrtp> niemeyer: yeah, i know. maybe i should just filter out all emails that simply say "please take a look"
<wrtp> niemeyer: because they're often noise, and the real ones are those sent direct from codereview
<niemeyer> wrtp: They're not noise for me, but whatever works for you is certainly fine for me.
<niemeyer> wrtp: Many times people don't answer the feedback, and simply do an "lbox propose". You'd miss those.
<wrtp> niemeyer: if there was a way of saying "please upload these changes *and* publish my codereview comments", that would be nice...
<niemeyer> wrtp: That's already what it does..
<wrtp> niemeyer: yeah, i know.
<wrtp> niemeyer: really?
<niemeyer> wrtp: I believe..
<niemeyer> wrtp: Yep.. try it out and let me know :)
<wrtp> niemeyer: so lbox propose publishes pending codereview comments, you think?
<wrtp> hmm, that would be good, if slightly unexpected.
<wrtp> if i could upload changes without sending mail (and also without marking the branch as "work in progress") that would work too.
<wrtp> along the lines of go's hg upload.
<niemeyer> wrtp: Probably not, actually
<niemeyer> wrtp: But it might be doable
<niemeyer> wrtp: I have message_only set to true
<wrtp> i think a lot of the noise is because the default behaviour is to mail, where i think that sending mail should be a considered action, executed deliberately.
<niemeyer> wrtp: There's no reason for people to be uploading changes to Rietveld when they don't intend those changes to be reviewed
<niemeyer> wrtp: If all you want is to push the branch, just "bzr push"
<niemeyer> wrtp: lbox propose will.. well.. propose
<wrtp> niemeyer: i usually do want those changes to be reviewed, but i'll usually push them and have a quick check, before clicking Publish+Mail.
<wrtp> niemeyer: or lbox mail :-)
<wrtp> if it existed.
<niemeyer> wrtp: I'll reserve lbox mail to the moment when lbox becomes an email client.
<wrtp> niemeyer: lbox announce, maybe
<niemeyer> wrtp: What you want already exists.. lbox propose -prep..
<wrtp> niemeyer: but -prep marks the branch as "work in progress"
<wrtp> niemeyer: and taking it out of -prep mode sends a mail.
<niemeyer> wrtp: Which seems to be exactly what you want.. if you're reviewing your own changes, it's not yet ready for review
<wrtp> niemeyer: the problem is that it's not possible to publish codereview remarks and upload code ready for review without sending 4 emails.
<wrtp> niemeyer: (i just verified that lbox propose does not submit pending codereview comments)
<niemeyer> wrtp: I can fix that easily
<wrtp> niemeyer: and also that the default should be to not announce.
<wrtp> niemeyer: that way it's harder to make unnecessary noise.
<niemeyer> wrtp: We already talked about that. The command "lbox propose" should propose. This won't change.
<wrtp> niemeyer: it is editing the proposal. it doesn't have to announce that edit.
<wrtp> niemeyer: it should always be good practice to check your proposal before sending mail to people about it, and the default usage of the tool should reflect that IMHO
<wrtp> niemeyer: it would be great to fix the codereview comments issue BTW
<niemeyer> andrewsmedina: Splitting the container names on spaces as you are doing is fine. Please once you're happy with the error stuff, just lbox propose again
<andrewsmedina> niemeyer: I'll finish it in time for lunch
<niemeyer> andrewsmedina: Thank you
<andrewsmedina> niemeyer: Do you have any deadline for the release of the juju Go port?
<wrtp> niemeyer: indeed it's a bug in lxc-create. i wondered where that 'b' directory had come from:
<wrtp> /usr/bin/lxc-create:123
<wrtp> mkdir -p $lxc_path/$lxc_name
 * wrtp wishes that the bourne shell had got quoting right.
<niemeyer> andrewsmedina: We have some plans, but they're not firm yet
 * fwereade_ looks owlishly at allhands.canonical.com and tries to resist the temptation to add himself as a peer reviewer
 * wrtp can't log in to allhands.canonical.com
<wrtp> fwereade_: you can definitely use the error from Cmd.Run, BTW, rather than doing the separate Stat call.
 * hazmat wishes he could have slept on the plane
<hazmat> niemeyer, i did work out how to get py juju to handle session expiration without restart, impl in progress
<hazmat> basically tracking watches and ephemerals with the existing expiration trap handlers
<fwereade_> wrtp, it strikes me as kinda tedious to do different things on different errors after the fact
<wrtp> fwereade_: you're doing just that on the Stat result, no?
<fwereade_> wrtp, and I'm not sure why we should bother setting up and running a process if we can guarantee it's not going to work and provide tailor-made errors
<wrtp> fwereade_: depends how expensive the setup is, i guess
<fwereade_> wrtp, hmm, I guess I trust the Stat error to be relevant but not the Run one, which is maybe irrational
<wrtp> fwereade_: it is - it works just fine (assuming the path has a separator in)
<wrtp> fwereade_: http://paste.ubuntu.com/883400/
<fwereade_> wrtp, the path passed to Command?
<niemeyer> hazmat: Expiration trap handlers?
<wrtp> fwereade_: yeah
<niemeyer> hazmat: Greetings, btw :)
<niemeyer> hazmat: Back from PyCon?
<hazmat> niemeyer, greetings! good to be back
<wrtp> fwereade_: (it's documented in exec.Command)
<hazmat> niemeyer, indeed. i took the red eye, just got back in a few minutes ago
<fwereade_> wrtp, ah, ok; and I guess we get sensible errors for non-executability too?
<wrtp> fwereade_: yeah, you'll get "permission denied" which seems fine to me.
<fwereade_> wrtp, cool, should be a never-happen anyway I think
<wrtp> fwereade_: although you can test with os.IsPermission too if you want
<fwereade_> wrtp, we check hook permissions on charms anyway IIRC
<wrtp> right
<hazmat> niemeyer, there's a session expiration callback, tracking ephemeral nodes and extant watches, allowing us to recreate them on session expiration without disturbing the watch callbacks. we can also trap/cb on connectionloss/sessionexpired errors for operations that are attempted while this connection state exists (expired session)
<wrtp> fwereade_: also, isn't most of the setup done *before* calling Exec?
<hazmat> in the latter case we re-establish a session and the prior ephemeral nodes/watches and allow the op to retry
<hazmat> i think i still need to work out some details on the last one for ops made during bad connection state
<niemeyer> hazmat: You mean recreating watch/ephemeral/etc state without telling the call sites about what is actually going on behind the scenes?
<hazmat> niemeyer, yes
<niemeyer> hazmat: I'm a bit skeptical about being able to do that in a reliable manner
<hazmat> fair enough
<hazmat> niemeyer, effectively the call sites will see a watch fire, with the current state, so they will be informed about the current view
<hazmat> er. state
<niemeyer> hazmat: Yeah, but the whole world may have changed.. every single watch will have to fire somehow
<niemeyer> hazmat: Since the event they were waiting for may have occurred meanwhile..
<hazmat> niemeyer, indeed
<niemeyer> hazmat: Or it may not..
<hazmat> niemeyer, we can fire the watch when it's re-established
<niemeyer> hazmat: I know.. we can do all sorts of things to try to hide the fact that, indeed, the connection has died.. or, we can prepare the application for having the application dying on its face and reestablishing state
<hazmat> niemeyer, we've already done the latter
<hazmat> but you've advocated that we can do better
<niemeyer> hazmat: Kind of..
<fwereade_> wrtp, admittedly there's not much setup
<niemeyer> hazmat: The unit agent having to reestablish a connection with zookeeper doesn't mean it's fine to disturb the actual software that is running
<niemeyer> hazmat: This is the tricky part
<niemeyer> hazmat: As I understand it, and please correct me if I'm wrong, what the approach you suggest would do to work around this is to preserve state in memory so that the charm isn't disturbed. Do I get it right?
<hazmat> niemeyer, well the world may have changed by the time we re-establish, we have no way of knowing, triggering the watch gives the application a chance to reconsider the global state in the context of its last knowledge of the current state
<hazmat> niemeyer, yes
<hazmat> niemeyer, we can also robustly die ;-) and restart
<hazmat> which is what i was planning on.. it's quite a bit simpler
<hazmat> but i think this might approach has some merit as well
<hazmat> er..
<hazmat> s/might/
<niemeyer> hazmat: I hope we can robustly die, and restart, and not disturb the charm.
<niemeyer> hazmat: and by not disturbing I mean not marking the unit as offline to the relation peers, etc
<niemeyer> hazmat: That said, you certainly have freedom for experimentation on that area..
<niemeyer> hazmat: I'm just starting to wonder a bit how it'll all fit in time for 12.04, as per the mail, but no please-mom in this case, as you say.
<hazmat> okay.. that's fine as well re die without affecting the charm.. it has some of its own edge cases, as the failure to record the transition may attempt to write zk state, which will also fail.. needing zk/disk state reconciliation, but that's needed anyways to prevent a termination at the moment between those two writes from causing inconsistencies
<hazmat> niemeyer, gotcha, i have some concerns as well, but more around the bugs than the features. i think the subordinate work was in good progress. the constraints work should be okay. we need to yank some of the ghetto constraints out of the env file.. i guess we have more time for bug fixes though
<wrtp> fwereade_: if there's a place for the Stat, i think it's in the code that initially analyses the charm and can decide which hooks exist, once only.
<wrtp> fwereade_: although it's probably premature optimisation; i can't see this being a bottleneck.
<niemeyer> hazmat: Yeah, it's tight
<TheMue> fwereade_: i've got two probs with the presence package. a standalone test crashes and the usage of Alive() for a non-existent path leads to true.
<niemeyer> Okay.. lunch time here. Will be back in a bit.
<fwereade_> wrtp, fair enough, it doesn't look too bad as it is
<fwereade_> TheMue, hum, this is somewhat disturbing
<fwereade_> TheMue, you're definitely using the right gozk?
<TheMue> fwereade_: which is "the right" gozk?
<TheMue> fwereade_: or better, which is the "wrong" one?
<fwereade_> TheMue, the current one should be fine, AFAIK
<fwereade_> TheMue, the wrong one is any version before rog's locking fixes
<TheMue> fwereade_: i updated it together with weekly …03-13
<TheMue> fwereade_: now done a fresh go get -u and i still have the same problem
<wrtp> fwereade_, TheMue: i may need to do the tagging
<wrtp> fwereade_: hold on
<wrtp> TheMue: hold on
 * TheMue holds on
<TheMue> wrtp: btw, seen my little env proposal at http://paste.ubuntu.com/883111/ ?
<wrtp> TheMue: yeah, i thought it was a bit overkill tbh. i'm happy going with the []string as in fwereade_'s proposal.
<TheMue> wrtp: 13 lines are overkill?
<wrtp> TheMue: any additional abstraction can be overkill :-)
<TheMue> wrtp: now we got such a fine language to abstract data types w/o much trouble and he calls it overkill ;)
<wrtp> TheMue: abstraction is abstraction, and the less of it the better :-)
<TheMue> wrtp: so why did you want an extra type for the retry values?
<TheMue> wrtp: and less is not always better, if it leads to more code duplication and possible failures
<wrtp> TheMue: that abstraction already existed, it just didn't have a name :-)
<fwereade_> wrtp, I'm not quite sure I see where you're coming from on Flush... you seem to be arguing that we should do as little as possible in Exec, while my perspective is that hook.Exec should not leave hook execution incomplete for some putative future client to handle
<wrtp> fwereade_: my perspective is that the caller of Exec can have sole responsibility for the context.
<TheMue> wrtp: is the tagging done?
<wrtp> TheMue: yeah, i think so
<fwereade_> wrtp, ie that contexts aren't really anything to do with hook execution?
<wrtp> fwereade_: that too. the Exec function doesn't care about the context, other than to flush it.
<TheMue> wrtp: yep, thx, no more errors
<wrtp> TheMue: cool. i must make a script that automatically does the tagging after a submit.
 * TheMue still likes abstraction that helps. maybe it's my smalltalk history
<wrtp> TheMue: oh yeah, don't get me wrong, i love a good abstraction in the right place.
<fwereade_> wrtp, you seem to be saying that hook execution can be considered to be complete before its changed state has been persisted
<fwereade_> wrtp, I don't think I agree with that
<TheMue> wrtp: and here it is the right place. an env is an env, not only a slice of strings
<wrtp> TheMue: for exec.Cmd and os, it's a slice of strings.
<TheMue> fwereade_: if i'm doing an Alive() on a non-existing node i expect a false. am i wrong here?
<wrtp> TheMue: and i'm happy to go along with that view unless it's making things harder
<fwereade_> TheMue, no, you're perfectly correct there
<wrtp> fwereade_: i'm not saying that
<TheMue> wrtp: yeah, sadly, but that doesn't make it better. time is also only a number, but thankfully they changed it to a better type.
<fwereade_> wrtp, then... that Exec should not be responsible for executing the hook?
<TheMue> fwereade_: strange, because i'm getting a true
<fwereade_> wrtp, that it can just kinda half do it?
<wrtp> fwereade_: no, Exec executes the hook. when the hook returns, we flush its context.
<fwereade_> TheMue, is this in a test?
<wrtp> fwereade_: but the caller of Exec can create the context, call Exec, then flush it
<TheMue> fwereade_: in my test. ;) but pls wait, will test it with helpful print statements, not that i'm testing the wrong node
<fwereade_> wrtp, in my view context flushing is part of execution -- it's the completion of any relation-sets that happen to be called
<fwereade_> TheMue, are you sure you're cleaning zk up properly between tests?
<wrtp> fwereade_: schematic: http://paste.ubuntu.com/883487/
<fwereade_> TheMue, we don't seem to have a proper nuke-everything-in-zk function anywhere
<wrtp> fwereade_: except of course, the context id would be passed to Exec.
<fwereade_> wrtp, why defer Flush()
<fwereade_> wrtp, I'm pretty sure that a broken hook shouldn't mess with state
<wrtp> fwereade_: yeah, true. it could easily do the flush only if Exec didn't return an error.
<fwereade_> wrtp, I still don't see the benefit of putting the flush somewhere else
<wrtp> fwereade_: i don't see the benefit of passing a callback into Exec when it's trivial to do the call directly.
<wrtp> fwereade_: it makes for a better separation of concerns, i think
<wrtp> fwereade_: Exec is solely responsible for running the hook
<wrtp> fwereade_: the caller of Exec is responsible for the context management
<fwereade_> wrtp, running a hook will often change state somewhere, right? what's so special about relation-set that it should be handled outside the hook execution?
<wrtp> fwereade_: running a hook changes state via the context callbacks only, right?
<fwereade_> wrtp, running a hook can do anything it wants to local machine state...
<TheMue> fwereade_: even w/o nuke-everything i'm talking here about only one node which gets nuked. and my tests before worked.
<wrtp> fwereade_: which, as i think we've agreed, are managed by the caller of Exec
<wrtp> fwereade_: we don't flush local machine state...
<fwereade_> wrtp, no, because we don't need to do it ourselves
<wrtp> fwereade_: and it's flushing that we're concerned with here
<fwereade_> wrtp, you're still saying that some of the things done in a hook should be handled before exec returns and some shouldn't
<fwereade_> wrtp, ...aren't you?
 * wrtp is thinking
<wrtp> fwereade_: i'd say that the context is flushed *after* the hook returns
<wrtp> fwereade_: the relation-set is "done" before exec returns, but relation-set doesn't actually change the relation until the hook returns.
<fwereade_> wrtp, in the sense of "after the hook-as-written-by-the-user exits", that's true
<fwereade_> wrtp, well, it does, as far as the hook author knows
<wrtp> fwereade_: well, that wouldn't change. (actually it's probably possible to observe it in fact)
<fwereade_> wrtp, relation-set followed by relation-get should do the apparently sane thing
<wrtp> fwereade_: relation-set; sleep 10; relation-set; wouldn't result in any changes visible in another unit...
<fwereade_> wrtp, no, but they should be visible locally
<wrtp> well, not the first set anyway
<wrtp> fwereade_: sure
<wrtp> fwereade_: so i think it's logical to have Exec mirror the "hook as written by the author" executing.
<fwereade_> wrtp, and I think it's logical to have Exec-having-returned mean that the exec is complete and all necessary changes have been persisted
<fwereade_> wrtp, I don't think it's reasonable to draw an arbitrary line which says that some direct consequences of hook execution are part of exec and some aren't
<fwereade_> wrtp, if you can explain why that line's not arbitrary you may convince me
<fwereade_> wrtp, just a thought: why do you assume the caller of exec will necessarily be the creator of the context?
<fwereade_> wrtp, in python, some contexts are reused for more than one hook
<wrtp> fwereade_: it doesn't matter - Exec can be entirely context-agnostic, and that's a good thing
<wrtp> fwereade_: here's how i imagine Exec: http://paste.ubuntu.com/883519/
<wrtp> fwereade_: a very very simple wrapper around exec.Command
<fwereade_> wrtp, ctx.Vars?
<wrtp> fwereade_: oh yeah, forgot that bit, one mo
<wrtp> fwereade_: http://paste.ubuntu.com/883521/
<fwereade_> wrtp, all you're doing is smearing responsibility for... hook execution... into different places
<wrtp> fwereade_: i really don't think so
<wrtp> fwereade_: Exec *executes the hook*. something else *creates and flushes the context*.
<wrtp> fwereade_: no need for a context interface
<wrtp> fwereade_: of course, it might be that we want to fold context creation and flushing *into* the Exec call. i'd be good with that.
<fwereade_> wrtp, what's the connection between creation and flushing?
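This edit was merged into the conversation below; see the combined correction.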
<wrtp> fwereade_: presumably the context becomes invalid after the hook has finished execution?
<fwereade_> wrtp, why?
<fwereade_> wrtp, we reuse contexts in python
<wrtp> fwereade_: what happens if some background process uses it later?
<wrtp> fwereade_: it'll get the context for a different hook
<fwereade_> wrtp, I don't see how that's a danger
<fwereade_> wrtp, sometimes we know we want to run 2 hooks immediately after one another
<fwereade_> wrtp, and so the context gets reused
<fwereade_> wrtp, flushing doesn't invalidate a context
<fwereade_> wrtp, but contexts don't live beyond the time they're relevant either
<wrtp> fwereade_: that's all fine. if we don't want to create the context each time, fine.
<fwereade_> wrtp, so why is there a connection between creation and flushing? why should the same thing be responsible for both?
<wrtp> fwereade_: we might want to delay flushing until after the second hook has run too...
<wrtp> fwereade_: i'm not saying it should.
<wrtp> fwereade_: it's outside the purview of Exec
<fwereade_> wrtp, so an error in one hook should affect the results of an earlier hook? surely not
<wrtp> fwereade_: erm, i think you'd have a problem with that anyway
<wrtp> fwereade_: if you reuse a context
<wrtp> fwereade_: then the dodgy settings from the failed hook will continue on into the next hook
<fwereade_> wrtp, if the hook fails we won't continue on to the next one
<fwereade_> wrtp, you just suggested we should delay the flush
<fwereade_> wrtp, so you get hook 1 writing state, hook 2 failing, and hook 1's apparent success not really sticking
<wrtp> fwereade_: ah, i see.
<fwereade_> wrtp, you seemed to be arguing from a position in which flushing and creation were linked somehow -- did I misunderstand that?
<wrtp> fwereade_: no, i'm arguing that Exec shouldn't concern itself with the context at all
<wrtp> fwereade_: because it's trivial for Exec's caller to do that.
<wrtp> fwereade_: it makes Exec simpler at no cost, AFAICS
<fwereade_> wrtp, it just moves code around, at the cost of having an Exec that smears the effect of any relation-set calls out beyond the lifetime of that call
<wrtp> fwereade_: it's more than just moving code - it removes code (the ExecContext type and that argument to Exec)
<fwereade_> wrtp, it's 1 less param and 1 fewer lines in a not-very-big function, in exchange for an arbitrary division of responsibilities
<fwereade_> wrtp, ExecContext is only there because Context is not yet implemented
<fwereade_> wrtp, I *do* kinda want to keep it around beyond this branch, for simplicity of testing, but that's not critical really
<fwereade_> wrtp, and that one saved line has to be called somewhere else anyway
<wrtp> fwereade_: i guess i really don't see it as an arbitrary division of responsibilities. i think it makes a lot of sense to have Exec as a simple wrapper around Command, and the context stuff dealt with elsewhere. then you don't *need* the ExecContext interface for testing, because Exec doesn't use it.
<fwereade_> wrtp, ok, why have an Exec function at all? if you just want a trivial wrapper around Command, why not inline the whole thing at the call site (whatever that may be)?
<wrtp> fwereade_: and if you look at the code without considering the hook abstraction, why are we passing in the context? it's so that it can be called when Exec returns. we know how to do that - we call Exec, then call Flush...
<fwereade_> wrtp, so that it can *sometimes* be called when hook returns
<wrtp> fwereade_: sure. we'll be checking the error anyway
<fwereade_> wrtp, and no it's not just dependent on whether there's an error returned
<wrtp> fwereade_: no?
<fwereade_> wrtp, if the hook doesn't exist we don't bother to flush
<fwereade_> wrtp, but that's not an error
<wrtp> fwereade_: interesting point.
<fwereade_> wrtp, ok, we *could* just always flush, it'd basically be a no-op if no state changed
<wrtp> fwereade_: it's a performance issue
<wrtp> fwereade_: but if that *is* an issue, then yes, it's a good reason for flush to stay inside Exec.
<wrtp> fwereade_: although...
<fwereade_> wrtp, I doubt it's significant, I imagine ConfigNode is smart enough not to round-trip if there have been no changes
<wrtp> fwereade_: as i said earlier, i think that we shouldn't be calling Exec at all if the hook doesn't exist.
<fwereade_> wrtp, ah, sorry, I missed that
<fwereade_> wrtp, why make the rest of the code care about hook existence?
<TheMue> fwereade_: if i get it right StartPinger() creates a non-existent node (if the parent exists). i'm wondering, because i get a -101 (no node)
<wrtp> [16:00] <wrtp> fwereade_: if there's a place for the Stat, i think it's in the code that initially analyses the charm and can decide which hooks exist, once only.
<fwereade_> wrtp, yeah, I've already dropped the pre-checking
<wrtp> [16:00] <wrtp> fwereade_: although it's probably premature optimisation; i can't see this being a bottleneck.
<fwereade_> wrtp, still, why does it matter at all to anything other than the hook executor whether or not the hook exists?
<wrtp> fwereade_: anyway, i think Exec still serves a useful role - it sets up environment variables and it knows where hooks live &c
<wrtp> fwereade_: just a matter of efficiency, i think.
<wrtp> fwereade_: maybe Exec should be a method on a Charm type.
<niemeyer> fwereade_: I suggest dropping Context from that branch entirely, so we can submit it, and then bring on another branch
<wrtp> fwereade_: then the Charm type could do some initial introspection when created.
<wrtp> niemeyer: +1
<fwereade_> wrtp, but you've also moved responsibility for env var setup outside exec in your last paste
<wrtp> fwereade_: all the stuff in ExecInfo.Vars would still be around
<niemeyer> fwereade_: I'm not suggesting it doesn't make sense.. if you still think it makes sense by the time Context is introduced, just add the argument again and let's evaluate it with more context
<niemeyer> Erm.. overuse of "context" there, but you see what I mean
<wrtp> fwereade_: it's just that ExecInfo would also have ExtraVars (say) giving env vars that aren't inferrable from ExecInfo
<wrtp> it's true, we are discussing without knowing what the surroundings are like.
<fwereade_> wrtp, as advised against on "data-bag" grounds by niemeyer...
<wrtp> fwereade_: isn't that what ExecContext.Vars did?
<niemeyer> wrtp: ExtraVars isn't necessary.. anything that would be put in ExtraVars can be introduced via another field in ExecInfo.. after all, its sole reason for existence is as a holder for those vars.
<wrtp> niemeyer: yeah, that sounds right.
<niemeyer> fwereade_: But really, just drop context and let's move on.. you know more about what's coming than we do, so you may be right and we just don't know it.
<fwereade_> niemeyer, ok, I'll try to find some way to do so sanely :/
<fwereade_> I still just do not get this insistence that Exec shouldn't actually finish hook execution
<niemeyer> fwereade_: Exec in that branch is a self-contained problem, very well isolated.
<niemeyer> fwereade_: There's an interface being provided, that is part of something else that doesn't yet exist. We don't have visibility yet onto the caller of Exec either. It's quite reasonable to have Flush being called by whoever calls Exec, after all, that's not about *executing* anything.
<niemeyer> fwereade_: That said, I'm just trying to satisfy your curiosity. I'm not even suggesting yet that this is the right approach. I just think we've had enough discussion, and dropping Context from that branch is a trivial way to move forward.
<niemeyer> fwereade_: Bring it back in the next branch, if need be
<fwereade_> niemeyer, ok then
<Pikkachu> what's juju?
<wrtp> Pikkachu: https://juju.ubuntu.com/
<Pikkachu> ah ok nevermind
<TheMue> fwereade_: got my question regarding StartPinger() above?
<fwereade_> TheMue, whoops, sorry
<TheMue> fwereade_: i didn't expect the ZNONODE
<fwereade_> TheMue, StartPinger doesn't necessarily create a node
<fwereade_> TheMue, but, yes, it should if it doesn't exist
<TheMue> fwereade_: yes, that's what i expected when reading your code
<fwereade_> TheMue, and, yes, it depends on the parent existing
<TheMue> fwereade_: the parent exists, but the node itself not, yes
<fwereade_> TheMue, but there aren't any errors?
<TheMue> fwereade_: i only get the ZNONODE back, i'll now take a look if the parent really exists. but it should, there's only so few code. ;)
<fwereade_> TheMue, ZNONODE back from where?
<TheMue> fwereade_: from StartPinger()
<fwereade_> TheMue, it *looks* like the only place that could be coming from is the Create in changeNode.change, can you confirm that?
<wrtp> anyone interested in helping me discuss a problem i've got with the ec2test server?
<TheMue> fwereade_: just tested explicitly, the parent exists
<wrtp> (a theoretical problem - i'm not sure what to do...)
<fwereade_> TheMue, is the Create the source of the error?
<TheMue> fwereade_: will have to add some print statements, one mom pls
<fwereade_> wrtp, niemeyer: btw, https://codereview.appspot.com/5753057 is now context-free
<TheMue> fwereade_: it's at the create. it's called twice and the first time w/o an error, the second time with "no node"
<niemeyer> fwereade_: LGTM, sorry for the pain on this one.
<fwereade_> niemeyer, no worries, there's clearly something I haven't managed to communicate but I still can't quite figure out what ;)
<niemeyer> fwereade_: We're moving forward.. it's all good
<fwereade_> niemeyer, yeah :)
<wrtp> fwereade_: LGTM, likewise on the pain.
<fwereade_> wrtp, cheers :)
<wrtp> niemeyer: i've come up against a difficult issue doing the delayed ec2test update stuff.
<niemeyer> fwereade_:
<niemeyer> % lbox submit -adopt
<niemeyer> Branches: lp:~fwereade/juju/go-tweak-supercommand => lp:juju/go
<niemeyer> Requires: lp:~fwereade/juju/go-add-cmd-context
<niemeyer> Proposal: https://code.launchpad.net/~fwereade/juju/go-tweak-supercommand/+merge/96136
<niemeyer> error: Pre-requisite lp:~fwereade/juju/go-add-cmd-context not yet merged.
<niemeyer> fwereade_: One side is done.. :)
<fwereade_> niemeyer, yay!
<wrtp> yay! +1
<andrewsmedina> niemeyer: I'm having a problem with lbox
<niemeyer> wrtp: What's up?
<niemeyer> andrewsmedina: What's up?
<niemeyer> :)
<wrtp> niemeyer: here's a summary: http://paste.ubuntu.com/883633/
<andrewsmedina> niemeyer: http://dpaste.org/eEmUR/
<wrtp> niemeyer: basically it's hard to know how eventual consistency works
<wrtp> niemeyer: and i'm not sure that my simplistic idea for modelling it is sufficient
<niemeyer> andrewsmedina: Authentication is failing for some reason
<niemeyer> andrewsmedina: I suggest removing ~/.lpad_oauth and ~/.goetveld_* and trying again
<niemeyer> wrtp: What's that text reflecting, more precisely?
<TheMue> fwereade_: ah, changing the test order helped, the first time it works, then it fails. does the Pinger have to be stopped explicitly before you can start another one on the same node?
<niemeyer> wrtp: Assumptions, experiments, ..?
<wrtp> niemeyer: a sequence of actions.
<niemeyer> wrtp: I can tell that part.. but it's still a number of bulleted points.. what's the introduction to those actions.
<wrtp> niemeyer: difficult to experiment when we're concerned with non-deterministic behaviour
<niemeyer> wrtp: I'm just trying to reverse engineer what you're showing me.. :)
<wrtp> niemeyer: ok, i'm implementing the transaction stuff - instead of acting directly on the server state, each action looks at the current server state, but creates a delta which will be applied some time later.
<wrtp> niemeyer: "transaction" isn't the right word there
<fwereade_> TheMue, hmm, I don't *think* it *should* be a problem to have more than one pinger on a node, but I never really considered the possibility
<wrtp> niemeyer: there's a queue of deltas, and one gets applied on each server action, so the visible state of the server always lags behind the state the actions have "modified"
<andrewsmedina> niemeyer: same problem
<niemeyer> wrtp: Ok, I don't think this is reasonable: "client: create g2 referring to g1 (succeeds because g1 exists in the visible version)"
<niemeyer> andrewsmedina: You've reauthenticated, and it's failing even then?
 * wrtp refers niemeyer to the last paragraph.
<andrewsmedina> niemeyer: yes :(
<fwereade_> TheMue, you should certainly be explicitly stopping/killing the pinger if you don't want it to hang around
<wrtp> niemeyer: i can't see how i can make something which can reconcile both views - one fully consistent and the other not.
<wrtp> niemeyer: (with just a simple queue)
<niemeyer> wrtp: Let's do this.. let's ignore this for the moment
<niemeyer> wrtp: and let's move on..
<wrtp> niemeyer: ignore this set of changes to ec2test?
<wrtp> niemeyer: i'm happy to do that. it seems... difficult.
<wrtp> niemeyer: i can shelve what i've done for now.
<niemeyer> wrtp: This is a *huge* rabbit hole.. trying to simulate an interface that is arbitrarily broken in ways we can't really predict is not likely to yield anything good.
<wrtp> niemeyer: yeah, i think so.
<wrtp> niemeyer: and i think that's why i've been finding it difficult to make good progress with it recently.
<niemeyer> wrtp: Our live tests are more likely to be relevant than anything we come up with, and this will save us countless hours of pain.
<wrtp> niemeyer: agreed.
<wrtp> niemeyer: in that case, i think my next, smallish change will be to add better error messages to the zookeeper package.
<TheMue> so, off for today, bye
<wrtp> TheMue: night night
<niemeyer> TheMue: Have a good evening!
<niemeyer> wrtp: Sounds fantastic
<wrtp> niemeyer: i thought about renaming Error to ErrorCode and having type Error struct {Code ErrorCode; Path string}
<niemeyer> andrewsmedina: Ok.. hmm
<niemeyer> andrewsmedina: What's the "date" command telling you right now?
<andrewsmedina> I'm using the last weekly
<andrewsmedina> released yesterday
<niemeyer> wrtp: Sounds reasonable
<wrtp> fwereade_: you missed this minor... // Exec executes the named hook in the environment defined by ctx and info.
<fwereade_> wrtp, blast, sorry
<wrtp> fwereade_: 's fine... next time.
<fwereade_> wrtp, cheers :)
<fwereade_> gents, I'm off for the night, take care
<wrtp> fwereade_: g'night
<niemeyer> fwereade_: Have a good one indeed
<niemeyer> <niemeyer> andrewsmedina: What's the "date" command telling you right now?
<andrewsmedina> niemeyer: Wed Mar 14 14:07:51 BRT 2012
<niemeyer> andrewsmedina: The error message is right.. your system time is bogus
<andrewsmedina> :(
<niemeyer> andrewsmedina: Just fix it.. should be easy :)
<andrewsmedina> niemeyer: yes
<niemeyer> Breaking for a moment
<wrtp> i'm away for the evening.
<wrtp> see y'all tomorrow
<niemeyer> wrtp: Apparently the submit-with-comments is actually working with current lbox: https://codereview.appspot.com/5753057/
<niemeyer> wrtp: Erm, propose-with-comments
<niemeyer> Oh, or maybe not.. this is a different code path
<niemeyer> Ah, not really.. it's the same mechanism.. should be working indeed
<andrewsmedina> niemeyer: https://codereview.appspot.com/5764043/
<niemeyer> andrewsmedina: Looks great, thank you
<niemeyer> andrewsmedina: I suspect "usage" there needs a capital "U"
<andrewsmedina> niemeyer: tonight I will start a local environment
<andrewsmedina> niemeyer: from lxc no
<niemeyer> andrewsmedina:
<niemeyer> % lxc-start --blah
<niemeyer> lxc-start: unrecognized option '--blah'
<niemeyer> Usage: lxc-start --name=NAME -- COMMAND
<andrewsmedina> niemeyer: http://dpaste.org/kKOdM/
<niemeyer> andrewsmedina: Ouch
<niemeyer> andrewsmedina: Looks like we'll need to parse for both..
<niemeyer> andrewsmedina: Also, Output() there won't work, as it's dropping stderr
<niemeyer> andrewsmedina: You can use CombinedOutput() instead
<andrewsmedina> niemeyer: I can use Run and set stderr an stdout
<niemeyer> andrewsmedina: CombinedOutput does that for you
<andrewsmedina> niemeyer: nice!
<andrewsmedina> niemeyer: can you run "lxc-version" and send me the output?
<niemeyer> andrewsmedina: 0.7.5
<andrewsmedina> niemeyer: also here
<andrewsmedina> but I'm using centos
<niemeyer> andrewsmedina: The issue is that the usage message isn't consistent
<niemeyer> andrewsmedina: I also have both locally
<andrewsmedina> yep
#juju-dev 2012-03-15
<bigjools> niemeyer: around?
<bigjools> I need some help from someone please - I am testing maas deployment and juju deploy seems to work but the provisioning agent blows up with the wrong auth data for maas
<bigjools> I need to work out what has corrupted the auth data, it seems to have turned a colon-separated string into a list of the substring parts
<wrtp> TheMue: a review for you: https://codereview.appspot.com/5834047/
<wrtp> TheMue: good morning BTW. lovely and sunny here.
<TheMue> wrtp: will take a look. is it regarding the zk interface?
<wrtp> TheMue: it's a preliminary before adjusting state.Initialize
<TheMue> wrtp: good morning from a sunny northern germany too ;)
<wrtp> TheMue: (in the light of recent changes to the zk package, yes)
<TheMue> wrtp: where is the cleanup of the temporary directory?
<wrtp> TheMue: Server.Destroy removes its directory
<TheMue> wrtp: does srv.Destroy() handle it?
<TheMue> wrtp: ah, fine
<wrtp> TheMue: it always has - it's just that before you couldn't create a server in an existing directory.
<wrtp> TheMue: so we needed to remove the directory created by TempDir
<TheMue> wrtp: ic
<wrtp> TheMue: do you think this is small enough that it should be ok to push without niemeyer's LGTM?
<wrtp> fwereade__: ^
<fwereade__> wrtp, I think it duplicates something I just merged
<wrtp> fwereade__: oh, that's odd. i could've sworn i did a merge from trunk
<fwereade__> wrtp, just merged like 5 mins ago :(
<wrtp> fwereade__: no email about it. will try merging from trunk again.
<TheMue> wrtp: currently i won't do so. but we should define a process to discharge niemeyer, a kind of 4-eyes-lgtm
<fwereade__> wrtp, I got the merged mail at 13:08, and your proposal at 13:10
<wrtp> fwereade__: hmm, still no mail.
<fwereade__> wrtp, hm, I don't think you reviewed that one, but surely you'd still see *something*
<wrtp> fwereade__: yeah, i usually do.
<wrtp> fwereade__: hmm, i see it now, after searching for it. i think i'm probably just blind.
<wrtp> fwereade__: anyway, that's fine. better actually. i'll just abandon.
<fwereade__> wrtp, sorry collision :)
<wrtp> fwereade__: that's fine, i should've remembered.
<fwereade__> wrtp, no worries; anyway, lunch :)
 * fwereade__ is down to 1 unmerged python branch \o/
<wrtp> fwereade__: enjoy
<wrtp> fwereade__: yay!
<wrtp> TheMue: so, as discussed some time ago, i'm changing state.Open so that it waits for the state to be initialized (so i can join some more dots up and make the juju command work for real). it's not a big change, but i'm wondering how this should fit with the tests. i'm thinking of making TestPackage call Initialize. do you think that'll work?
<wrtp> TheMue: or perhaps every suite should call Initialize in SetupSuite and zkRemoveTree of everything on teardown?
<wrtp> TheMue: that's probably cleaner, to my mind.
<TheMue> wrtp: what exactly will Open() wait for?
<wrtp> TheMue: the /initialized node to exist
<TheMue> wrtp: and whose task will it be to create this node (and the other ones)?
<wrtp> TheMue: that's the task of the bootstrap node
<wrtp> TheMue: it starts zookeeper and does the initialisation.
<TheMue> wrtp: but the code of it shall be provided by the state package?
<wrtp> TheMue: yeah.
<wrtp> TheMue: this is what i've got currently: http://paste.ubuntu.com/884722/
<TheMue> wrtp: so in this case it should also provide a kind of Terminate()
<wrtp> TheMue: i'm thinking that Terminate (or Clean or Uninitialize) could be provided by the testing package.
<wrtp> TheMue: which would remove everything in the zk tree so we can start with a clean slate each time.
<TheMue> wrtp: how do we clean up environments? or doesn't juju provide it?
<wrtp> TheMue: because i don't think there's ever a time outside of testing when you want to do that.
<wrtp> TheMue: we call environ.Destroy
<wrtp> TheMue: but that destroys the whole zk installation.
<wrtp> TheMue: so it's not necessary to remove the individual nodes.
<TheMue> wrtp: ok, radical enough ;)
<wrtp> :-)
<TheMue> wrtp: yeah, in this case a Cleanup() provided in the testing package should be enough
<wrtp> TheMue: cool. it can be called in different places in the different suites (e.g. once per test for StateSuite and once per suite for TopologySuite)
<TheMue> wrtp: haven't there been some open questions about extendability from fwereade__ ?
<wrtp> TheMue: sorry, don't understand the question.
<TheMue> wrtp: iirc there have been some questions regarding this topic by fwereade__
<fwereade__> TheMue, I'm strongly in favour of adding a nuke-zk function to the testing package
<wrtp> fwereade__: good.
<wrtp> fwereade__: testing.zkClean() ?
<fwereade__> TheMue, and the only cases I can think of in which we *don't* want to start a test with an initialized state are in tests for Initialize itself and jujud initzk
<fwereade__> wrtp, make it public, but yeah
<TheMue> fwereade__: ok
<wrtp> fwereade__: maybe CleanZk, to mirror StartZkServer?
<fwereade__> TheMue, wrtp: that is still 2 cases in different packages in which we want to be able to start from a clean ZK but it may actually be easier to *always* initialize, and just CleanZK() at the start of the tests
<wrtp> fwereade__: that's what i'm thinking
<wrtp> fwereade__: except... i'm not sure that testing should be reliant on the state package
<wrtp> fwereade__: i think i'm happier if StartZkServer just does that.
<fwereade__> wrtp, sounds good to me
<fwereade__> wrtp, anything that needs a state can just set it up
<wrtp> fwereade__: yeah
<fwereade__> wrtp, and we avoid any need for silly little init-nuke-test dances
<wrtp> fwereade__: yeah.
<TheMue> fwereade__, wrtp: sounds fine
<wrtp> fwereade__: hmm, is there a way of telling which zk nodes are "native" (i.e. created automatically) so i don't try to delete them. or will i just hard code the name (can't remember it now!)
<fwereade__> wrtp: heh, I didn't realise they existed... what would be the negative consequences of a kill 'em all approach?
<wrtp> fwereade__: i dunno. maybe i'll get a permission denied error though.
<fwereade__> wrtp, the presence tests take the kill-'em-all approach and seem to work ok
<fwereade__> wrtp, that said, I ignore errors
<wrtp> :-)
<fwereade__> wrtp, but then so do the other state tests
<fwereade__> wrtp, thinking on it, that's a problem
<wrtp> gotta go for lunch
<fwereade__> wrtp, enjoy :)
<TheMue> wrtp: enjoy
<fwereade__> TheMue, did you track down the issue with presence nodes?
<fwereade__> TheMue, sorry, I meant to follow up on that this morning...
<TheMue> fwereade__: part of it, yep. had to do with a different test environment i had to fix
<fwereade__> TheMue, cool, I didn't *think* presence nodes were utterly broken, but one does worry ;p
<TheMue> fwereade__: now only a non-firing watch is left (after i killed the pinger)
<fwereade__> TheMue, hmm, weird, I thought I'd covered everything in the tests
<TheMue> fwereade__: order is (a) watch creation w/o pinger (b) start pinger, watch is fine (c) kill pinger, watch doesn't fire the change
<fwereade__> TheMue, how long do you wait to detect the death?
<fwereade__> TheMue, kill should be picked up almost immediately though
<fwereade__> TheMue, hmm
<fwereade__> TheMue, worst-case stop detection is 4x timeout
<TheMue> fwereade__: i tried different times from 25 to 100 ms
<fwereade__> TheMue, sorry, 4x period, i.e. 2x timeout
<fwereade__> TheMue, and you have 25ms period?
<TheMue> fwereade__: no, 50, mom pls, will try with different timings
<fwereade__> TheMue, you can't have my mom, but I'm not here to judge what you do with other  peoples' :p
<TheMue> fwereade__: hmm, period is now 25 and i'm waiting 110
<fwereade__> TheMue, would you push a branch I can take a look at?
<fwereade__> TheMue, wait, did you rewatch after the change?
<TheMue> fwereade__: strange how you and niemeyer don't know "mom pls" as abbrev for "one moment please".
<fwereade__> TheMue, it's modelled after ZK watches: one change, and that's it
<fwereade__> TheMue, I use "mo"
<TheMue> fwereade__: i know it since earliest irc days
<fwereade__> TheMue, less disturbing ambiguity, and over a lifetime you'll save literally dozens of characters
<TheMue> fwereade__: yeah, it's best to use no more abbreviations
<TheMue> fwereade__: only one change? oh
<fwereade__> TheMue, yeah, sorry -- the idea was to write it as much as possible as though it were a building block as exposed by zk
<TheMue> fwereade__: wrtp and i changed it that way that a "Watcher" always refreshes the watch and you get an independent channel
<fwereade__> TheMue, that makes perfect sense for a Watcher type
<fwereade__> TheMue, but I'm not even sure there's any justification for an AgentWatcher
<TheMue> fwereade__: ok, then i know where to change it
<fwereade__> TheMue, what's the use case?
<TheMue> fwereade__: having a watcher which notifies you if the node, here the agent node, is created or removed. you can query the status as well as wait for a change in any direction.
<fwereade__> TheMue, but what would actually use that?
<fwereade__> TheMue, we can already query status; and I don't think we actually ever watch an agent except when waiting for one to exist
<fwereade__> TheMue, at which point we take some action and we're done
<fwereade__> TheMue, the use cases I'm aware of are essentially Connected() and WaitConnected()
<TheMue> fwereade__: so the original naming is very misleading
<fwereade__> TheMue, could very well be I'm afraid
<TheMue> fwereade__: one doesn't watch a node but waits for creation
<fwereade__> TheMue, I *think* so
<fwereade__> TheMue, let me check again
<fwereade__> TheMue, yeah, the only clients are the ssh and debug-hooks commands
<fwereade__> TheMue, each of which just wait for the agent to show up before they continue
<fwereade__> TheMue, also it's an interesting semantic issue
<fwereade__> TheMue, in ZK, "watch" really does mean "watch until there's a single change, and that's all"
<fwereade__> TheMue, doesn't really fit with colloquial usage but in ZK context I think it's the right term to use
<TheMue> fwereade__: yeah, and so the idea behind the watcher has been to monitor for a change, possibly gather additional information (if needed in some use cases) and provide it, continuously
<fwereade__> TheMue, yep: there are plenty of cases where we do want watchers but IMO they're unwanted complexity until they actually are required
<TheMue> fwereade__: they are so simple, no real complexity
<fwereade__> TheMue, but if they're not needed then they still each represent a little extra cognitive load for no benefit
<TheMue> fwereade__: one little implementation based on its own type and a behaviour to plug in based on interfaces, it's really dead simple and could be reused everywhere
<TheMue> fwereade__: standard for more than 20 years
<TheMue> fwereade__: reuse of code, clean separation of infrastructure and logic, standard concurrent behavior
<fwereade__> TheMue, my point is purely that it's not going to be used even once, so reuse is a moot point
<TheMue> fwereade__: we'll see
<fwereade__> TheMue, and if it is one day used, which is fine, then we can implement it -- and like you say it'll be nice and simple :)
<TheMue> fwereade__: already in the state package there are so many watches
<fwereade__> TheMue, and that's OK I think, it's the ZK-level metaphor; in some cases we want more sophisticated behaviour layered on top so we implement a watch*er* that wraps a series of watch*es*
<TheMue> fwereade__: ok, as long as we're only reimplementing non-concurrent software it shall be fine. maybe i'm just too used to thinking in larger 24/7 systems.
<fwereade__> TheMue, I'm not sure what you're getting at there, expand please
<fwereade__> TheMue, given the known use cases for agent-watching, what good will the extra layer do us?
<TheMue> fwereade__: it's no extra layer, it's only a different kind of implementation
<fwereade__> TheMue, you seem to be implying that one-shot watches are somehow not appropriate for concurrent systems, and I don't follow that
<TheMue> fwereade__: no, i didn't say that
<fwereade__> TheMue, it seems to me that they're perfectly good building blocks, and can be used to do more sophisticated things when a need becomes apparent
<fwereade__> TheMue, all I'm saying is that we don't need any more sophistication on agent watches yet, and that we shouldn't add it until we do
<niemeyer> Good mornings!
<fwereade__> niemeyer, heyhey
<TheMue> fwereade__: i only said that my thinking about concurrent components comes out of a different area of systems
<niemeyer> fwereade__: +1, whatever that means :-)
<TheMue> niemeyer: moin (that's a regional greeting here, could be used all day long ;) )
<fwereade__> TheMue, sorry, I misunderstood, no offence intended :)
<niemeyer> TheMue: Hehe, I like that one
<fwereade__> TheMue, very handy :)
<TheMue> fwereade__: np, i still have trouble expressing it the right way. but this evening i'm again at the Oldenburg English Club where we speak english all the time. ;)
<fwereade__> TheMue, cool :)
<TheMue> fwereade__: but thx for this discussion, it helped a lot to get a better insight into how watches are used in juju
<TheMue> fwereade__: so my proposal for the agent mixin will contain a semantically different naming just to see if it better expresses how it has to be used
<fwereade__> TheMue, cool -- I would personally recommend `Connected() (bool, error)` and `WaitConnected() error`
<fwereade__> TheMue, IMO they match current usage nicely
<TheMue> fwereade__: yep, exactly
<fwereade__> TheMue, but I'll wait and see :)
<andrewsmedina> niemeyer: morning :D
<fwereade__> TheMue, cheers
<wrtp> TheMue: i think i'm with fwereade__ on this. i think we should leave the agent watcher until it's actually used.
<wrtp> niemeyer: hiya
<wrtp> niemeyer: the better zk error messages branch is ready for review, BTW: https://codereview.appspot.com/5835045/
<TheMue> wrtp: yep
<niemeyer> wrtp: Cheers
<wrtp> fwereade__: the directory /zookeeper seems to be special. i guess i'll just special-case that.
<fwereade__> wrtp, ah, cool, good to know
<andrewsmedina> niemeyer: https://codereview.appspot.com/5764043/
<wrtp> andrewsmedina: that looks better, but i'm still not sure that it's worth returning the standard output from commands that don't produce any (what are we going to do with the return value of container.create for example?)
<wrtp> andrewsmedina: i'd separate stdout from stderr too, so that it's not possible that the output of list will be corrupted by a log message printed to stderr.
<andrewsmedina> wrtp: I think that we can remove it if we're not going to use the output
<wrtp> andrewsmedina: why would we ever use the output? it doesn't produce any! i'd prefer to return the output (in a more useful form than []byte) if we decide it has something we need.
<andrewsmedina> wrtp: but, I think that the best way is first create the local environment and then refactor the lxc lib
<wrtp> andrewsmedina: AFAICS the lxc lib is an abstraction around the lxc commands. i think the methods in it should reflect the abstraction we want to provide, rather than the commands that are being run underneath.
<wrtp> andrewsmedina: you're writing functions without knowing how they're going to be used. best to start as simple as possible, i think.
<andrewsmedina> wrtp: but isn't the lxc lib simple for you?
<wrtp> andrewsmedina: BTW despite the bogus "return first line as error" bit, the code i gave as runLXCCommand in my comment pretty much reflects what i'd expect to see.
<wrtp> andrewsmedina: sorry, i don't understand.
<andrewsmedina> wrtp == rog?
<wrtp> andrewsmedina: yes
<wrtp> (sorry, unexpected IRC user names!)
<andrewsmedina> wrtp: I do not understand what you expect
<andrewsmedina> wrtp: you expect that returns should be more readable?
<wrtp> andrewsmedina: i expect that the functions should return useful values.
<wrtp> andrewsmedina: create will always return an empty byte array AFAIK
<wrtp> andrewsmedina: something like this for runLXCCommand, BTW: http://paste.ubuntu.com/884842/
<andrewsmedina> wrtp: yes, lxc-create returns either "'foo' already exists" or "'foo' created"
<wrtp> andrewsmedina: if that happens, we want to return that as the error.
<andrewsmedina> wrtp: but when will I define the custom stdout?
<wrtp> andrewsmedina: because it means the create has failed
<niemeyer> Apparently Rietveld is now eventually consistent as well
<wrtp> andrewsmedina: like this, for example: http://paste.ubuntu.com/884853/
<niemeyer> Add comment > Comment disappears > Reload, reload, reload, reload > Nothing > Type comment again > Save > Old comment shows up
<andrewsmedina> wrtp: in other cases we only need the error output
<wrtp> andrewsmedina: exactly.
<wrtp> niemeyer: i get that quite a bit.
<andrewsmedina> wrtp: now I understand
<wrtp> niemeyer: i force the comment to appear just by entering a blank comment.
<niemeyer> wrtp: I guess we live in the future, and people decided that things making sense is too boring..
<wrtp> andrewsmedina: thanks for bearing with me :-)
<niemeyer> wrtp: Ah, so you mean it consistently shows up if you save another comment on top?
<wrtp> niemeyer: yes
<niemeyer> wrtp: Good to know, thanks
<andrewsmedina> wrtp: I will improve the runLXCCommand :D
<niemeyer> wrtp: Hold on.. but today it's _seriously_ broken
<niemeyer> There must be something going on
<niemeyer> wrtp: I've just mailed you a review, and it disappeared!
<niemeyer> and now IT'S THERE!
<wrtp> niemeyer: weird
<niemeyer> This must be about that "Highly Available Data Store" that they've been introducing in App Engine
<niemeyer> wrtp: Anyway, you just got an eventually consistent review..
<wrtp> niemeyer: thanks
<niemeyer> wrtp: It's like a sink.. I can save 10 comments in the same location and all of them disappear immediately.. :)
<wrtp> niemeyer: i haven't seen that before
<niemeyer> wrtp: It wasn't like that even yesterday
<wrtp> niemeyer: do they come back eventually? slightly smelly?
<fwereade__> wrtp, niemeyer: it's always *sometimes* been like that for me
<fwereade__> er, if that makes any sense
<niemeyer> fwereade__: Yeah :)
<niemeyer> wrtp: They do come back eventually, all at once
<wrtp> annoying
<niemeyer> andrewsmedina: Thanks, I'm submitting your branch just now
<niemeyer> wrtp: Disagree regarding separating stdout and err, btw
<niemeyer> andrewsmedina: ^
<niemeyer> wrtp: If you separate the two streams, the error output from commands is often incomprehensible
<wrtp> niemeyer: really? isn't that the whole point of the distinction?
<wrtp> niemeyer: particularly for short-running commands
<niemeyer> wrtp: I haven't talked to the guy that created the distinction, so I can't tell what the whole point is, but I can say that I have practical experience with incomprehensible error messages.
<niemeyer> wrtp: If you take stderr out of context, you don't know what failed anymore.
<andrewsmedina> lunch time
<wrtp> niemeyer: in this case, i'm more worried about stdout being corrupted by prints to stderr
<niemeyer> andrewsmedina: I'm submitting your branch as-is, thanks.. there are a few other details that we may polish over time, but nothing huge
<wrtp> niemeyer: if the command prints to stderr and exits without an error, we'll have dud list output for example.
<niemeyer> wrtp: Ok, so let's fix that, but most of these commands don't care about stdout
<wrtp> niemeyer: that's true. so the stderr won't need any stdout to be understandable, because there isn't any
<niemeyer> Right
<niemeyer> wrtp: Or there is, but it's informational output
<wrtp> niemeyer: so separating the streams makes sense in this case, i think.
<niemeyer> wrtp: Which we want interleaved with stderr, in case problems happen
<wrtp> niemeyer: really? i didn't see any.
<wrtp> niemeyer: but i didn't look too hard, right enough.
<wrtp> niemeyer: anyway, i'm happy that if stdout is nil, it gets merged with stderr
<wrtp> niemeyer: but it shouldn't be merged for list.
<niemeyer> wrtp: Agreed
<niemeyer> Separate branch.. we've banged on andrewsmedina enough for this one
<wrtp> niemeyer: sure.
<wrtp> andrewsmedina: sorry about all the requested changes. we're still hammering out our conventions and package structuring between ourselves...
<wrtp> andrewsmedina: thanks for working on this
<niemeyer> mthaddon: Welcome!
<mthaddon> o/
<niemeyer> mthaddon: So, charm store
<mthaddon> indeed...
<niemeyer> mthaddon: The package you want is golang-weekly
<niemeyer> mthaddon: PPA details in https://wiki.ubuntu.com/Go
<mthaddon> k, thx - and in terms of the bzr branch for the code itself and the build commands you want us to run?
<niemeyer> mthaddon: Build commands:
<niemeyer> go get -u launchpad.net/juju/go/store/charmd
<niemeyer> go get -u launchpad.net/juju/go/store/charmstore
<niemeyer> Erm..
<niemeyer> Sorry
<niemeyer> go get -u launchpad.net/juju/go/store/charmd
<niemeyer> go get -u launchpad.net/juju/go/store/charmload
<niemeyer> mthaddon: You'll need to export GOPATH to a path where you want the packages and binaries to be put in
<niemeyer> mthaddon: The two commands above will do everything necessary, and will get you two binaries under $GOPATH/bin
<mthaddon> ok, thx - I'll have a play with that and let you know if I have any more questions (I'm sure I will)
<niemeyer> mthaddon: Of course
<niemeyer> mthaddon: Please just ping us if you need anything
<mthaddon> will do
<wrtp> niemeyer: responded.
<niemeyer> wrtp: "it is necessary, i think, otherwise we'll return when err is nil."
<niemeyer> wrtp: You're right, my bad
<wrtp> niemeyer: good. my crack habit isn't showing too badly then.
<mthaddon> niemeyer: just to be clear, that version of golang-weekly is needed to both build and run the code, or just to build it?
<niemeyer> wrtp: Good stuff, LGTM
<fwereade__> wrtp, niemeyer: beginnings of hook.Context at https://codereview.appspot.com/5832045
<niemeyer> mthaddon: Just build it.. take the two binaries home after that ;-)
<wrtp> niemeyer: cool. i'm happy too.
<mthaddon> niemeyer: so we don't need this package installed on the charmstore server itself? ok, thx
<niemeyer> mthaddon: That's right
<mthaddon> k
<niemeyer> mthaddon: All the nightmares of the security folks regarding static builds are now transformed into dreams for you
<mthaddon> heh
<wrtp> fwereade__: you've got a review
<fwereade__> wrtp, cheers :)
<niemeyer> There's a bug in lbox submit.. just pushing a new package build now
<fwereade__> wrtp, the constructors are more about documentation and clarity than anything else
<wrtp> fwereade__: i'm not convinced they help. you can document the fields of the type just as easily.
<fwereade__> wrtp, maybe the term would be "literateness" rather than documentation
<wrtp> fwereade__: and it's nicer to make a type with named fields.
<wrtp> fwereade__: still not sure i buy it.
<fwereade__> wrtp, hmm, also in my mind there's a certain "it's ok to set these fields" implication in a type which exposes them
<wrtp> fwereade__: isn't it ok to set them?
<niemeyer> fwereade__: I don't have the context, but that doesn't hold in general
<wrtp> that's true too
<fwereade__> wrtp, well, I can't see any good reason for anyone to mutate them
<niemeyer> fwereade__: If you set a field, there's a goal.. if you're setting fields improperly, bad things will happen
<fwereade__> niemeyer, true
<fwereade__> niemeyer, and so if there's never any good reason to set those fields after creation, why expose them?
<niemeyer> fwereade__: In general, the line I try to draw is whether this is an implementation detail or not, and whether an interface for such type might be useful or not
<wrtp> niemeyer: +1
<fwereade__> niemeyer, well, the fact that we happen to store names is an implementation detail really
<niemeyer> fwereade__: As I said, those are general statements.. I don't have the context. Can you point me to the code being debated?
<fwereade__> niemeyer, https://codereview.appspot.com/5832045/
<niemeyer> fwereade__: Which line/type?
<fwereade__> niemeyer, Context
<fwereade__> niemeyer, https://codereview.appspot.com/5832045/diff/1/hook/context.go#newcode19
<wrtp> [15:40] <fwereade__> niemeyer, well, the fact that we happen to store names is an implementation detail really
<wrtp> fwereade__: is it something we'd ever want to change? if not, there's no harm in binding ourselves to that detail IMHO.
<fwereade__> wrtp, I can all too easily imagine a future review saying "why bother storing the names when we could store the objects", for example
<fwereade__> wrtp, and my only argument against it would be that it's an arbitrary rearrangement of code that makes one bit simpler at the cost of making another bit more complex
<fwereade__> wrtp, and I don't seem to have much luck convincing people with that line of argument ;)
<niemeyer> fwereade__: I agree with wrtp on that one.. there are already three different constructors, which do nothing else than assigning to all possible fields of Context in different setups, with no processing.
<niemeyer> fwereade__: And we already have a Members accessor that returns members..
<niemeyer> fwereade__: In practice, I believe we'll have very few call sites building such a context
<fwereade__> niemeyer, ok then
<fwereade__> wrtp, niemeyer: I'll tack UnitName and RelationName onto ExecInfo as well, ok?
<niemeyer> fwereade__: That said, again this isn't set on stone
<wrtp> fwereade__: sounds good.
<fwereade__> niemeyer, frankly it may as well be
<niemeyer> fwereade__: Often times I went back and fixed bad assumptions I made in the interface of values, either hiding things that were too exposed, or turning methods that were really data into fields
<niemeyer> fwereade__: It's hard to foresee the perfect way to use a type before actually using it
<niemeyer> fwereade__: So take those suggestions with a grain of salt at all times, please :)
<fwereade__> niemeyer, I don't remotely claim my design is perfect but I'm starting to feel there's no point my thinking about design at all
<wrtp> that's why it's lovely having a statically typed language - you can make changes in such a way that the compiler will tell you where you need to fix things...
<niemeyer> fwereade__: :(
<fwereade__> niemeyer, ok, I overstated that
<wrtp> :-( too
<fwereade__> niemeyer, wrtp: it seems like we do spend a lot of time discussing how the code "should" be without really having enough context to make these decisions
<wrtp> fwereade__: i try to start things off minimal and build up only as necessary.
<wrtp> fwereade__: which is always the context i'm coming from: "how can we make this smaller, simpler, easier to understand etc?"
<wrtp> fwereade__: i'm very sorry if my reviews come across as overly critical
<niemeyer> fwereade__: Agreed.. I feel like we went overboard on that in the last couple of days too.
<fwereade__> wrtp, don't worry, I'm not feeling hostility in either direction
<wrtp> fwereade__: yeah, but it sounds as if you're frustrated, which isn't good.
<niemeyer> fwereade__: In a way, I feel like we're building a shared agreement about conventions
<niemeyer> fwereade__: Which is a bit exhausting at times
 * wrtp would agree with that.
<fwereade__> wrtp, that is true, I guess at least I'm talking about it and so I expect we'll fix it ;)
<niemeyer> fwereade__: But I see that as a hump.. once the agreement is clear, it's easier for both parties to dismiss bad claims, or to not even make them in the first place
<fwereade__> niemeyer, yes :)
<fwereade__> niemeyer, wrtp: I think I'm fundamentally feeling railroaded into implementing something in which hook execution is entirely divorced from its context
<fwereade__> niemeyer, wrtp: and that doesn't feel right to me
<wrtp> fwereade__: i'm thinking that you'll have a slightly higher-level function that will bring the two together nicely.
<niemeyer> fwereade__: I don't think this is the case, to be honest. The way I see it is that we're trying to keep things simpler, while we can.
<fwereade__> wrtp: yeah -- my feeling was that Exec should be that higher-level function, but it's been cut down to the point where it may as well be inlined into whatever that higher-level function turns out to be
<niemeyer> fwereade__: That may well be true.. I think we have a different feeling about what merging logic means.
<fwereade__> niemeyer, could very well be
<niemeyer> fwereade__: Merging is just a statement of "Ok, this is a nicely done little thing that makes sense."
<niemeyer> fwereade__: It's kind of an agreement that "so far, so good"
<niemeyer> fwereade__: The next branch can corrupt that entirely with different needs, though
<niemeyer> fwereade__: Because "so far, so good"  by *then*, can mean something else
<wrtp> fwereade__: perhaps the dilemma goes away a little bit when we start to think about a Context as a potentially independent thing, not *necessarily* inside a hook. then the separation of hook execution from context starts to feel more natural, perhaps.
<fwereade__> niemeyer, yeah, that makes sense, but I'll need to think about it a mo
<niemeyer> fwereade__: We may be able to enlighten you about how much Exec make sense as an independent unit, and the opposite may well happen: a different design that has Exec taking Context may be so much more elegant.
<fwereade__> wrtp, heh, I see the execution as necessarily being inside a Context, but that a Context is a legitimate entity in itself without ever having to execute a hook
<niemeyer> fwereade__: Maybe Exec is even a method on Context?   I don't know.. I'm not closed to any of that.
<mthaddon> niemeyer: I setup an lxc container to test the build command, and got https://pastebin.canonical.com/62406/ - is that okay, or a problem in any way?
<fwereade__> niemeyer, I guess the problem is that I have a tendency to pre-design things in my mind, and so the code I deliver ends up somewhat skewed by the plan
<wrtp> mthaddon: those are just warnings. you can ignore them.
<mthaddon> k, thx
<fwereade__> niemeyer, I think this is exacerbated by having the python code sitting in the background -- ofc I want to do better if I can, but it does shape my thinking
<wrtp> fwereade__: i think we all do that.
<fwereade__> niemeyer, heh, I had been wondering about Context.Exec, which could indeed be very nice
<wrtp> fwereade__: that could indeed be nice.
<niemeyer> There we go.. :)
<fwereade__> niemeyer, but the trouble is I don't really feel free to deliver something with Context.Exec, because I can't *necessarily* justify it without an excess of speculative context
<wrtp> fwereade__: it's trivial to mutate the current Exec into Context.Exec i think
<wrtp> fwereade__: (when/if it's ready for it)
<fwereade__> niemeyer, wrtp: and Context.Exec would *certainly* deal with the flushing ;)
<wrtp> :-)
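The `Context.Exec` idea floated above — making hook execution a method on the context it runs in, so the flush can never be forgotten — might look something like this. All names here (`Context`, `Exec`, `flush`, the hook signature) are hypothetical; the real `hook.Context` is in the CL under review, not this sketch:

```go
package main

import "fmt"

// Context is a hypothetical stand-in for hook.Context: the environment a
// hook executes in, which must be flushed after each hook run.
type Context struct {
	UnitName string
	dirty    bool
}

// Exec runs the given hook function inside the context and always flushes
// pending state changes afterwards, even if the hook returns an error.
func (ctx *Context) Exec(hook func(*Context) error) error {
	defer ctx.flush()
	return hook(ctx)
}

// flush writes out any changes the hook made; here it just clears a flag.
func (ctx *Context) flush() {
	if ctx.dirty {
		fmt.Println("flushing context state for", ctx.UnitName)
		ctx.dirty = false
	}
}

func main() {
	ctx := &Context{UnitName: "wordpress/0"}
	err := ctx.Exec(func(c *Context) error {
		c.dirty = true // the hook mutated some relation settings
		return nil
	})
	fmt.Println("err:", err)
}
```

The `defer ctx.flush()` is the point fwereade__ makes: with execution as a method on the context, flushing is structural rather than a caller obligation.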
<hazmat> meeting time
<wrtp> fwereade__: i think part of my reason for trying to keep things stripped down to the bare minimum is that it makes forward progress more agile.
<niemeyer> fwereade__: That sounds fine, FWIW
<wrtp> fwereade__: less infrastructure to shift out of the way
<niemeyer> hazmat: I think we're one hour early still.. and I'm about to have lunch!
<wrtp> niemeyer: i'd be happier to have it now - i've got to leave 30 mins early today.
<hazmat> niemeyer, time zone changes
<fwereade__> wrtp, it is trivial to mutate: and I think that's the problem, because I've felt that review suggestions have started to tend towards the trivial-to-change-if-it-turns-out-to-be-a-good-idea
<niemeyer> wrtp: Ale is heading to a restaurant right now from her work place..
<niemeyer> hazmat: No, no time zone changes here :)
<wrtp> nor here
<hazmat> niemeyer, :-) i'm fine with +1 hr
<niemeyer> Ok.. biab
<wrtp> fwereade__: i think it's harder to remove concepts than add them, in general.
<fwereade__> wrtp, heh, interesting, I think my gut feeling goes the other way
<fwereade__> wrtp, well, not exactly
<fwereade__> wrtp, hmm, it's subtler than that, not sure I can articulate it well...
<fwereade__> wrtp, IME I think that unnecessary concepts have a tendency to wither away quite cleanly as a design evolves
<fwereade__> wrtp, to the point where finally killing them becomes trivial
<wrtp> fwereade__: i wish they did...
<fwereade__> wrtp, while realising that something is *missing* has a tendency to hit a large amount of code
<wrtp> fwereade__: did you ever see this, BTW: http://www.youtube.com/watch?v=7RJmoCWx4cE
<fwereade__> wrtp, haha, nice
<wrtp> fwereade__: a nice demonstration of the fact that it's *always* easy to add levels of abstraction...
<fwereade__> wrtp, it's *easy* but frequently distressingly noisy and heavyweight
<wrtp> fwereade__: indeed.
<fwereade__> wrtp, ok, I think there's a tension between "how can I make the code reflect the problem space" and "how can I make this code trivial to apprehend for someone who hasn't been thinking about the specific problem"
<wrtp> fwereade__: i think some of the most interesting things happen when the top down design collides with the bottom-up implementation.
<wrtp> fwereade__: yeah
<wrtp> fwereade__: i think ideally it should work both ways...
<fwereade__> wrtp, well, yes :)
<wrtp> fwereade__: and *that*'s the beauty of choosing the right abstraction, i think
<wrtp> fwereade__: one which resolves that tension in a satisfactory way
<wrtp> fwereade__: and that's the origin of the "90% solution" too, i think.
<wrtp> (i've probably remembered the wrong percentage)
<wrtp> fwereade__: i guess i do usually weight the bottom-up concerns higher than the top-down concerns.
<fwereade__> wrtp, ok, but I end up in a position where I can't help having to figure out how the code "should" be in the broader context, and then try to break it up into "simple" concepts that don't fit so well but are easier to pick up at first glance, and then change them *again* when the next piece of the puzzle slots into place
<fwereade__> wrtp, the problem may well be rooted in that "can't help" above
<fwereade__> wrtp, I should probably meditate more often ;)
<wrtp> fwereade__: well, i usually start from: "i'd like to be able to do *this* (high level thing)", then factor it into something that i can start implementing a piece of, then i find "actually, given this nice low level abstraction, perhaps it would be nicer if my high-level thing looked like *this*", and so on
<wrtp> so i *try* (not always successfully!) to let the low level details guide how the high level stuff works.
<fwereade__> wrtp, I feel like that's veering uncomfortably close to the Great Object Orientation Reuse Fallacy ;p
<fwereade__> wrtp, the perfect hammer is not perfect in all contexts, as it were
<wrtp> fwereade__: but perhaps the reason we discuss so much about these initial steps is that building software is a bit like crystallisation - once you've got the right concepts, everything crystallises around them
<fwereade__> wrtp, very true
<wrtp> fwereade__: no indeed - part of my point, i think, was that you're redesigning the hammer (and the context) as part of an on-going process
<wrtp> fwereade__: so i'm wanting us to crystallise around the *right* (simple-as-possible) concepts :-)
<fwereade__> wrtp, and my point is that I try to deliver code that will fit nicely with the next step, which is frequently 3/4 done already *because* I've been trying to kick the implementation around as an aid to minimally-flawed crystallisation
<mthaddon> niemeyer: ok, so I have local copies of the binaries - how do I actually run them? do we have any initscript wrappers for these? and do they need any config files to know (for instance) which port to listen on, which mongodb instance to connect to, etc.?
<wrtp> mthaddon: niemeyer's just gone for lunch. i don't *think* there are any config files necessary, but there may be required arguments to pass.
<wrtp> fwereade__: yeah, i know that feeling. and we're trying to push you in a different direction to where you'd ended up :-)
<mthaddon> ok, I'll catch him when he gets back
<fwereade__> wrtp, and so when (eg) you say that something is overly complex I get all grumpy because I feel like you're pushing me towards local maxima that are actually distractions
<fwereade__> wrtp, exactly
<wrtp> fwereade__: i think that being able to push multiple successive branches at once will be a good help, because then we can "look into the future" to see where you're heading.
<wrtp> fwereade__: and still say "let's go a different direction" anyway :-)
<fwereade__> wrtp, haha, yeah
<fwereade__> wrtp, "your destination is stupid" is a much easier criticism to take than "I don't know where you're going, but you're doing it wrong"
<fwereade__> wrtp, because it's *much* easier to have a concrete argument about it and discover that, yeah, I'm on crack ;)
<fwereade__> wrtp, or occasionally not, as the case may be
<andrewsmedina> I'm back
<wrtp> fwereade__: yeah. it's difficult for us to make any other kind of criticism though currently :-)
<fwereade__> wrtp, indeed, and "trust me, guys, I know what I'm doing" really doesn't, and shouldn't, carry much weight ;)
<wrtp> fwereade__: if it's any consolation, my recent succession of 7 branches had changed beyond all recognition when it got to the end...
<fwereade__> wrtp, yeah, it's the way of these things :)
<wrtp> mthaddon: charmload takes the mongo address as its first argument; charmd also takes the http address to listen on as its second argument.
<wrtp> fwereade__: the final result was much better, mind, although the process was painful at times :-)
<mthaddon> wrtp: what about logging, pidfiles, etc.? I'm thinking of an initscript ideally
<fwereade__> wrtp, ok, I think I've reached a resolution, and it's actually not very different to the grumpy statement that started this all off; but it's not actually so grumpy :p
<mthaddon> wrtp: I guess I could try massaging start-stop-daemon to DTRT here
<fwereade__> wrtp, I *do* need to stop thinking about design when I'm developing functionality
<wrtp> mthaddon: logging goes to stdout. neither daemonise themselves.
<fwereade__> wrtp, and separate "implementation" branches from "intergation" branches
<andrewsmedina> wrtp: I will start work on the local provider
<wrtp> andrewsmedina: cool.
 * fwereade__ hates it when he misspells a word in quotes
<wrtp> lol
<fwereade__> wrtp, does that sound plausible?
<wrtp> fwereade__: aren't all branches "implementation" branches?
<fwereade__> wrtp, in a limited sense, yes; I'm still casting around for the right words
<wrtp> fwereade__: "joining the dots" ?
<wrtp> fwereade__: that's what i'm trying to do now, actually.
<fwereade__> wrtp, "implementation" is the word I really need a replacement for
<wrtp> fwereade__: bringing environs and state and cmd together
<wrtp> fwereade__: independent functionality?
<wrtp> fwereade__: unit implementation?
<fwereade__> wrtp, I think I probably like "independent"
<wrtp> fwereade__: yeah, perhaps that expresses how i feel about building things up.
<fwereade__> wrtp, my gut feeling is still that it involves extra work, which feels unnecessary; but practical experience is that it's a lot cheaper than arguing about the details ;)
<wrtp> fwereade__: sorry about that :-)
<fwereade__> wrtp, if it leads me to a better way of working I'm all for it :)
<wrtp> fwereade__: but if it's any consolation, i think we should end up with a really lovely piece of s/w at the end :-)
<fwereade__> wrtp, it's been a good discussion, I don't need consoling any more :)
<wrtp> fwereade__: good. i was a bit concerned by your original remark...
<wrtp> fwereade__: i know it can be difficult sometimes.
<fwereade__> wrtp, yeah, I was feeling pretty concerned myself when I made it :p
<fwereade__> wrtp, better to talk about it than let it fester though ;)
<wrtp> fwereade__: definitely. pity we can't go for a beer now.
<fwereade__> wrtp, I'll have a virtual beer with you later all the same ;)
<niemeyer> mthaddon: charmload takes a single argument which is the mongodb address:port, and charmd takes that plus the HTTP listen address
<fwereade__> wrtp, well, it'll be a real beer, but... yeah, I appear to have lost the gift of coherence
<wrtp> fwereade__: sounds good. i'm going out tonight, so will raise a glass to you!
<niemeyer> mthaddon: If you run either without any arguments they'll tell you that as well
<fwereade__> wrtp, cheers :)
<fwereade__> are we meeting-time now?
<mthaddon> niemeyer: ok, thx - any recommendations on how to run them as a service? start-stop-daemon?
<mthaddon> (well, charmd - I guess charmload is a cronjob)
<TheMue> fwereade__: new approach for agent is in
<niemeyer> mthaddon: Yeah.. it currently spits logs in stdout
<fwereade__> TheMue, cool
<niemeyer> mthaddon: and doesn't fork in the background..
<niemeyer> mthaddon: Anything that can deal with that is fine
<niemeyer> mthaddon: If you need changes in the way it runs, I can do them too
<mthaddon> ok, will see what I can do
<wrtp> i've got to go in 23 minutes, so would be good if meeting was sooner rather than later.
<wrtp> hazmat, niemeyer: ^
<niemeyer> wrtp: That's going to be rushed, and I actually just found out that there's a conflicting meeting which I'm in right now, so I'm happy to delay/postpone that one
<wrtp> niemeyer: ok.
<wrtp> i guess we could have the meeting tomorrow
<hazmat> that's fine
<fwereade__> hazmat, may I merge the maas auth fix? IMO it's trivial, but since you're here I thought I'd check :)
<hazmat> didn't see that
<hazmat> fwereade__, looks good
<mthaddon> niemeyer: how would we monitor the service - ideally a nagios check type thing that lets us know charmd is up and running and doing what it should be but is relatively lightweight
<hazmat> fwereade__, [a=author] on the commit msg
<fwereade__> hazmat, gaah, hit go *just* as your notification came up
<niemeyer> mthaddon: We can pick one of the charm URLs and check to see if it's alive
<fwereade__> hazmat, sorry :((
<niemeyer> mthaddon: This will go up to the database, so will be a more relevant check
<niemeyer> mthaddon: Once the server is up I can provide you with such a URL
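A lightweight nagios-style check along the lines niemeyer describes — fetch a known charm URL from charmd and treat any network error or non-200 response as failure — might look like this. The function name and the URL in `main` are placeholders; the real check URL is the one niemeyer will provide once the server is up:

```go
package main

import (
	"fmt"
	"net/http"
)

// checkCharmd performs the liveness check discussed above: it fetches a
// charm URL served by charmd (which exercises the database behind it) and
// reports failure on any network error or non-200 status.
func checkCharmd(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("charmd check failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// Placeholder URL: substitute the charm URL provided for monitoring.
	if err := checkCharmd("http://localhost:8080/charm-info"); err != nil {
		fmt.Println("CRITICAL:", err)
		return
	}
	fmt.Println("OK")
}
```

Because the checked URL goes through to mongodb, a 200 here means the whole charmd stack is serving, which is the "more relevant check" mentioned above.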
<mthaddon> ok
<mthaddon> niemeyer: while I'm in questioning mode, what URL do we want for this? charmstore.ubuntu.com or something?
<fwereade__> ok gents, since we're not meeting I'm done for the day; good nights all :)
<niemeyer> mthaddon: We actually have a URL for that that is already expected by the client.. I *think* it is store.juju.ubuntu.com
<niemeyer> mthaddon: let me confirm
<mthaddon> k
<niemeyer>         repo = RemoteCharmRepository("https://store.juju.ubuntu.com")
<niemeyer> mthaddon: ^
<mthaddon> k, thx
<niemeyer> robbiew: ARGH.. I just submitted the objectives form by mistake, with a single objective entered.
<robbiew> niemeyer: lol...hazmat did as well..not very user tested
<niemeyer> robbiew: Sorry :(
<robbiew> niemeyer: just send me what you want and I can add them, then you can countersign it
<wrtp> i'm off. see y'all tomorrow.
<niemeyer> wrtp: Have a good one
<niemeyer> robbiew: Will do
<niemeyer> robbiew: Sent
<robbiew> niemeyer: cool, thx
<niemeyer> robbiew: Sorry for the trouble
<robbiew> niemeyer: it's no trouble...no worries
<robbiew> niemeyer: done...just need you to countersign...and then you can begin the awesome task of self review :)
<niemeyer> robbiew: Thanks. I'll look for a mirror.
<robbiew> heh
<robbiew> niemeyer:  http://www.youtube.com/watch?v=-DIETlxquzY
<niemeyer> robbiew: ROTFL
<robbiew> niemeyer: :)
<hazmat> niemeyer, btw jimbaker's enhanced relation spec is in need of a review from you
<jimbaker> hazmat, niemeyer, that would be good - i'm back from pycon tomorrow so i can work on that then
<niemeyer> jimbaker: Sure, will review it now
<jimbaker> niemeyer, thanks
<niemeyer> jimbaker: This looks a bit like a bag-of-changes spec.. these changes being described in the summary look quite unrelated to each other
<jimbaker> niemeyer, ok
<niemeyer> jimbaker: There's a pretty clear hint of that in the topic of the spec actually: "Modified support for working with relations"
<jimbaker> niemeyer, you are absolutely right about that title, it was very intentional
<niemeyer> jimbaker: As in, intentionally vague? Why?
<jimbaker> niemeyer, this is what i was asked to work on
<jimbaker> nonetheless, there's only this common theme
<jimbaker> that it is working on a number of features about relations
<niemeyer> jimbaker: on something intentionally vague? Sorry, I'm not following what you mean..
<niemeyer> jimbaker: Yeah, but you being asked to work on a number of things is something unrelated to having a spec that describes a number of unrelated things
<niemeyer> jimbaker: We can reach a different agreement on each one of those things
<niemeyer> jimbaker: and this single spec will be blocked while we can't reach an agreement on all of them
<jimbaker> it's not meant to be intentionally vague, but it does cover a range of useful topics
<jimbaker> if you want to consider it a set of separate proposals, that probably works
<jimbaker> and we can work on each one on in turn
<niemeyer> jimbaker: I'll go over it because I guess that's the shortest path to action by now, but it's better to have more clearly defined specs and goals. You could have some of those bits already done by now, for example, if you had started with a single one of them
<jimbaker> niemeyer, ok
<niemeyer> jimbaker: Actually, that's not true..
<niemeyer> jimbaker: We can probably get approval on some of those aspects very quickly
<niemeyer> jimbaker: and you might start working on them right away
<jimbaker> niemeyer, that's good to hear
<niemeyer> jimbaker: Can you please do something very quick: take those independent concerns apart and put them in separate branches, and repush for review
<jimbaker> niemeyer, ok
<niemeyer> jimbaker: I'll review all of them in turn right away
<niemeyer> jimbaker: But I expect we'll be able to get some of them in quickly, while others will require more back and forth
<niemeyer> jimbaker: You can work on the initial ones first, while we reach agreement in the rest
<jimbaker> niemeyer, sounds good
<niemeyer> jimbaker: Thanks, and sorry for the trouble
<jimbaker> niemeyer, np
<hazmat> SpamapS, re local provider surviving restarts.. i'd really like to use an upstart 1.4 feature for setuid/setgid.. do you think that's sensible re compatibility though for older distro releases?
<flacoste> can we have a review of https://code.launchpad.net/~julian-edwards/juju/fix-maas-config-serialization please?
<niemeyer> hazmat: ^
<hazmat> flacoste, its already merged
<flacoste> wow!
<flacoste> thanks!
<flacoste> sorry for the noise, i must have had a stale page :-)
<niemeyer> hazmat: Man, that's the first time I've seen a review request responded to in a negative amount of time.
<flacoste> !
<niemeyer> jimbaker: https://codereview.appspot.com/5836049/ has a review
<SpamapS> hazmat: I don't think we can reasonably support the local provider in pre-11.10 releases.. but 11.10 needs to work...
<SpamapS> hazmat: two options.. either do a version check before emitting setuid/setgid, or just use start-stop-daemon
<niemeyer> jimbaker: and you have a review for https://codereview.appspot.com/5837051/ as well
<niemeyer> hazmat: Might be good to have your opinion on that one too ^
<niemeyer> I'm stepping out for some exercising
#juju-dev 2012-03-16
<hazmat> bcsaller, lots of comments on the rel type branch..
<bcsaller> hazmat: thanks, cool, I'll check it out
<hazmat> nothing structural
<hazmat> not finished
<bcsaller> hazmat: yeah, didn't post yet. I do like that it will IM me
<hazmat> bcsaller, oh? i didn't know that it IMs
<bcsaller> hazmat: when you log in with your google account I think it's an account setting, you have to allow it access to your contacts or something
<bcsaller> and then it shows up in google chat
<hazmat> nice
<hazmat> review out
<bigjools> anyone around who can help me debug stuff in the cloudinit provider please?
<TheMue> fwereade: morning
<fwereade> TheMue, heyhey
<TheMue> fwereade: new agent approach is in, now even more simplified
<fwereade> TheMue, reviewed, basically LGTM but with handwavey architectural notes
<TheMue> fwereade: what does "handwavey" mean?
<fwereade> TheMue, that I can't really justify my claims, even if they feel right to me, so I'm waving my hands around in an attempt to convince by distraction
<TheMue> fwereade: hehe
<TheMue> fwereade: i'm currently testing a bit with private interface methods. so that at initialization only the surrounding instance has to be passed.
<fwereade> TheMue, yeah, I don't think there's anything wrong with your choices: I think you'll end up changing them in the future, but IMO nothing there is unclear or unpleasant and it can mutate to fit in with the rest of the app when we know *how* it needs to mutate
<TheMue> fwereade: the period and timeout are first shots, that's why i put them into constants. makes the tuning easier. could even be configurable in future
<fwereade> TheMue, yeah: those timings are good for testing
<TheMue> fwereade: and the embeddable type surely can be public if wanted. do you know any place today where the mixin is used outside the state lib?
<fwereade> TheMue, I may be wrong, but I thought the agent types themselves would use it and I was unjustifiably expecting them to live outside state
<fwereade> TheMue, btw total quibble: I'm not sure you need "embed" in the name -- it's a perfectly good agent class on its own, even if it doesn't have any behaviour other than connect/disconnect
<TheMue> fwereade: i chose it because it is not an agent itself, it only provides something to embed for the work with/in agents
<TheMue> fwereade: like the current type is called AgentMixin
<fwereade> TheMue, like I said, total quibble :)
<fwereade> TheMue, yeah, I'm not saying "you must change this", just "I would have done it differently"; maybe some of my arguments will strike you as good things but I don't think they're dealbreakers :)
<TheMue> fwereade: yeah, currently there's no established way in go to name a type which is intended to be embedded only
<TheMue> fwereade: otoh AgentEmbed sounds indeed strange.
<fwereade> TheMue, random thought: AgentCore?
<TheMue> fwereade: core isn't used yet in the go port, but base. so i would prefer agentBase. and btw, todays mixin is only used inside state. so it's ok to let it be private
<fwereade> heya rog
<fwereade> er, wrtp :)
<wrtp> fwereade: mornin'
 * wrtp doesn't seem to have a hangover but can't quite believe it yet.
<fwereade> wrtp, it's probably waiting in the wings :p
<fwereade> brb
<wrtp> fwereade: that's what i'm worried about. am continuing to drink water...
<wrtp> TheMue: i still don't understand quite how you intend agentEmbed to be used. on the one hand, it has a Pinger, which is something used by the agent itself; but on the other hand it calls Alive, which is something to be used by something outside the agent.
<wrtp> TheMue: i'm not sure that it makes sense to put both of these things in the same type.
<wrtp> TheMue: good morning BTW!
<TheMue> wrtp: moin ;)
<TheMue> wrtp: it's just used like the old AgentStateMixin
<wrtp> TheMue: i'm not sure that we need to faithfully copy every detail of the python structuring.
<TheMue> wrtp: it's embedded into Service, Machine, Unit (and maybe some more)
<wrtp> TheMue: as it is, it just looks to me like a very thin wrapper around the presence package.
<wrtp> TheMue: (presence + one method that does a timeout)
<TheMue> wrtp: dunno if any architectural changes are planned. afaik the first goal is to catch up with the python implementation and later make changes
<wrtp> TheMue: i think we're copying behaviour, not internal architecture.
<wrtp> TheMue: after all, this is quite a different language, and different structuring is often appropriate.
<TheMue> wrtp: it is, yes. but while presence is a technique regarding zk, the wrapping is a more semantic encapsulation. otherwise we don't need a state; every state user could use zk itself.
<wrtp> TheMue: i look at this code and i think "what does this actually *do*?".
<TheMue> wrtp: currently it's only demonstrating an approach for discussion
<wrtp> TheMue: as far as i can see, this is a convenience for stuff *inside* state, not an important visible abstraction.
<TheMue> wrtp: later that agentEmbed will be embedded in at least Service, Machine and Unit. And with that step they all will get the agent functionality.
<wrtp> TheMue: yeah, ok, i think i see it now
<TheMue> wrtp: it's just a pretty simple encapsulation that doesn't hurt but bundles logical behavior instead of reimplementing those four methods inside each type which needs it
<wrtp> TheMue: how about calling the type "agent" rather than agentEmbed
<wrtp> TheMue: then the various unit types can be UnitAgent, ServiceAgent, etc
<wrtp> TheMue: and they'll embed it
<TheMue> wrtp: fwereade and i already discussed the naming. agentEmbed is indeed ugly, but just agent is also wrong. it "is" no agent. so i would prefer agentBase (or fwereade's agentCore).
<TheMue> wrtp: the embedding types are just state.Service, state.Machine, state.Unit
<wrtp> TheMue: what about agentPresence?
<wrtp> TheMue: after all, that's what the type implements
<TheMue> wrtp: no, in this case all state types would have to be called UnitZK, ServiceZK, ...
<TheMue> wrtp: no information about "how" please, only about "what"
<wrtp> TheMue: i don't understand. isn't the type about the presence or absence of an agent?
<wrtp> TheMue: regardless of the fact it uses the presence package.
<TheMue> wrtp: the usage of those methods is btw spread out. has_agent (now AgentConnected) is e.g. called by juju.control.status
<TheMue> wrtp: it's about whether an agent is connected or not. this naming was a proposal by fwereade
<fwereade> TheMue, wrtp: I'm starting to lean towards just state.Agent myself
<wrtp> TheMue: connected==present, no?
<wrtp> fwereade, TheMue: i'm wondering if rather than embedding this type, each type (Service, Machine etc) could have a field or method which returns the Agent.
<TheMue> wrtp: yep
<wrtp> fwereade: then you'd call, e.g. svc.Agent().Connected()
<wrtp> rather than svc.Connected()
<fwereade> wrtp, that reads very nicely, yeah
<TheMue> wrtp: what are your concerns regarding embedding (aka mixin in py)?
<TheMue> wrtp: you /only/ win the additional ().
<wrtp> TheMue: i think the fact that the methods are named AgentConnected, WaitAgentConnected etc hints to me that this is really an independent type struggling to get free :-)
<wrtp> TheMue: if we don't embed, then we can use the more natural Connected(), WaitConnected(), Connect() and Disconnect()
<TheMue> wrtp: i can remove those Agent parts ;)
<TheMue> wrtp: every embedded type is a type of its own, nothing else, only "embedded"
<wrtp> TheMue: yeah, but when the methods are directly on Unit etc then it's not entirely obvious that they're about the agent associated with the type.
<TheMue> wrtp: that's why i put Agent into the name ;)
<wrtp> TheMue: i'm not sure it's a big win to have it embedded. it means that, for instance, the agent methods will be documented several times.
<wrtp> (i think - i'll just check)
<TheMue> wrtp: your arguments apply to all embedded types. they could always be put in as an extra field
<wrtp> TheMue: not when the embedded type is there to satisfy an interface, which is the usual reason for using them
<TheMue> wrtp: yeah, documented exactly where they are matching, at a Service, Unit, Machine
<TheMue> wrtp: … which is one (of many) use cases for type embedding, but it's not restricted to only this one.
<TheMue> wrtp: type embedding is the good old composition, nothing more
<wrtp> TheMue: i think that using them to satisfy interfaces is the usual reason. otherwise why embed rather than just adding a field or a method?
<wrtp> s/them/embedding/
<wrtp> TheMue: BTW AFAICS if you have an embedded type, the embedded methods aren't documented at all. i thought they'd fixed that, but it seems not.
<TheMue> wrtp: but as for embedding vs. using an accessor, we maybe should discuss with gustavo how this topic should be handled in future. the provided methods of today's mixin are only used in tests and at about 5 places in code. ;)
<wrtp> TheMue: sounds like a reasonable reason for not embedding it to me - no particular need to privilege those methods.
<TheMue> wrtp: good to know (the doc info). if you embed a type you don't need an accessor method. and if you alternatively make the field public anyone could write to it (which could be unintended).
<wrtp> TheMue: that's not unusual. just because something is public doesn't mean you're allowed to write to it. (there are lots of examples in the core Go tree)
<TheMue> wrtp: it seems to me that your problem is more that it is embedded than whether it is logically working the right way or should maybe be organized a different way?
<TheMue> wrtp: which doesn't make it better (those examples) for long-term maintainability
<wrtp> TheMue: yeah, i'm happy with the functionality. i think it would work well (and be better documented) as an independent type.
<wrtp> TheMue: i'm concerned that if we embed it: a) we won't see any documentation for the Agent* methods at all or b) we'll get the same documentation comments repeated several times.
<TheMue> wrtp: btw, it IS an independent type also. there's no problem (besides the weird naming) with using it as a field
<wrtp> TheMue: cool. in which case why not just call it Agent, rather than assuming it's going to be embedded?
<TheMue> wrtp: the first argument is indeed bad, yes. multiple times wouldn't bother me, because i don't have to navigate to a different type to get the doc.
<TheMue> wrtp: is it an agent?
<TheMue> wrtp: or does it represent an agent?
<wrtp> TheMue: the latter, i think.
<wrtp> TheMue: it's all that the state package knows of agents, AFAICS
<TheMue> wrtp: what btw is an agent exactly?
<wrtp> TheMue: an agent is the program that runs on the machines or units etc and implements the connection between the state and the actual juju behaviour.
<wrtp> TheMue: so, Agent wouldn't *be* the agent, but it would represent one.
<TheMue> wrtp: still not absolutely happy with it
<TheMue> wrtp: i would like to know more about the motivation why the original code chose a mixin (as opposed to a regular type).
<TheMue> wrtp: if that had no real reason i'm fine with those changes you've mentioned
<wrtp> fwereade: what do you think? was there a particular reason why the original code chose to use a mixin rather than a regular type?
<fwereade> wrtp, TheMue: it's common behaviour across a few types that aren't otherwise related; don't think there was any deeper reason than that
<wrtp> fwereade: do you think Agent as a name for the type seems reasonable?
<wrtp> func (u *Unit) Agent() *Agent
<fwereade> wrtp, yeah, I think so, package naming should take care of type ambiguity
<fwereade> wrtp, but I feel that this remains somewhat academic until we have a client
<wrtp> fwereade: agreed.
<wrtp> fwereade: i think it'll simplify the documentation though. rather than "agentEmbed is a helper type to embed into those state entities which..." we can have "Agent represents a juju agent."
<wrtp> TheMue: does that sound reasonable to you?
<TheMue> wrtp: it's ok for me to name it Agent (even if i'm still not absolutely happy with it). but mixins/embeds are a nice way for me to encapsulate related functionality and weave it transparently into multiple types
<TheMue> wrtp: naming and doc change is ok
<wrtp> TheMue: embedding could still work ok if we embedded the Agent as a public field. i'm not sure that'll be the best idea, but it would allow the same access pattern.
<wrtp> TheMue: cool. sorry for my usual recalcitrance :-)
<TheMue> wrtp: no, please no public field. then i would rather prefer an accessor
<wrtp> TheMue: fair enough. unit.Agent().WaitConnected() it is then :-)
<TheMue> wrtp: will change the proposal but will also ask gustavo
<wrtp> TheMue: sounds good
 * TheMue still likes type embedding even w/o the need for an interface implementation
 * wrtp has been trying to look for examples where it's used like that.
<TheMue> fwereade: changed the timeout value, but that's bad for tests. so i have to change it to be configurable
<TheMue> wrtp: we are 'before' Go 1, the language is young. so it will still take a long time until all good patterns/idioms are found
<wrtp> TheMue: true.
<wrtp> TheMue: but there is lots of good code already...
<TheMue> wrtp: yep
<TheMue> wrtp: and also lots of bad code
<wrtp> TheMue: of course, but not much in the Go tree as far as i've seen. that's my "gold standard"...
<TheMue> wrtp: and both will grow, with some/few pearls and a lot of bollocks
<wrtp> TheMue: sure.
<fwereade> bigjools, are you there?
<wrtp> TheMue, fwereade: trivial review for you: https://codereview.appspot.com/5845045/
<TheMue> wrtp: LGTM, but what does IsError() additionally do?
<wrtp> TheMue: additionally?
<wrtp> TheMue: you mean "what does it do as well as check the particular zk error code?"
<wrtp> TheMue: or just "what does it do?"
<TheMue> wrtp: ok, better: what does it do more than err == ...?
<wrtp> TheMue: zk errors are no longer simple integer types.
<wrtp> TheMue: check out the new version of zk.
<TheMue> wrtp: but error types, aren't they?
<TheMue> wrtp: will look later, have to leave for about two hours
<TheMue> wrtp: cu
<wrtp> TheMue: yeah, the thing you want to compare is the error code, not the struct containing it
<wrtp> TheMue: have fun
<TheMue> wrtp: ah, ok, thx
<wrtp> TheMue: i did originally name the function "HasErrorCode" but niemeyer suggested IsError
<wrtp> TheMue: which is a better name i think.
<fwereade> wrtp, LGTM
<wrtp> fwereade: i think i'm gonna submit as it's so trivial
<fwereade> wrtp, sounds good to me
<fwereade> wrtp, were you working on an extension to testing to get States?
<fwereade> wrtp, or did you decide to leave it?
<wrtp> fwereade: i decided that testing shouldn't rely on the state package, but i am working on an extension to testing to clean the zk tree.
<fwereade> wrtp, sweet
<wrtp> fwereade: i'm wondering about exporting a ZkRemoveTree function from testing too, so we don't have *three* implementations...
<fwereade> wrtp, +10
<fwereade> wrtp, how does https://codereview.appspot.com/5832045 look to you?
<wrtp> fwereade: it seems a little odd that the ContextId can't be inferred from the Context...
<fwereade> wrtp, I see ContextId as only meaningful in a map[string]*Context
<fwereade> wrtp, it's not like the Context ever uses the id itself
<wrtp> fwereade: i was thinking that perhaps it would work well if the context id was created in (and stored inside) the context itself.
<wrtp> fwereade: makes it easy for methods on the context to print it out in log messages too, which might be useful
<fwereade> wrtp, I don't think the context id will be helpful in a log; the unit and relation definitely are, but the context id tells us nothing
<fwereade> wrtp, the members *might* be but that should be left up to the user, we don't want to print an arbitrarily long list of members every time
<wrtp> fwereade: can't we potentially have several contexts for a given unit/relation pair?
<fwereade> wrtp, the only meaningful way in which they will differ is in the members list
<wrtp> fwereade: they've also got state too, right?
<fwereade> wrtp, and what temporary/cached settings they happen to have
<wrtp> fwereade: yeah
<wrtp> fwereade: *shrug*
<fwereade> wrtp, yeah, but any State object will do; it's about the state of the env, not something unique to the context
<wrtp> fwereade: not quite, because there's temporary state that gets flushed (or not), right?
<fwereade> wrtp, (heh, where "env" refers to the global ZK environment)
<fwereade> wrtp, yeah, that's the map[string]*ConfigNode stuff I mentioned
<fwereade> wrtp, which remains, technically, speculation
<wrtp> fwereade: just seems like it might be a nice place to store the context id. we're always gonna have a 1-1 relationship of context<->id, no?
<fwereade> wrtp, why duplicate the information? it'll be in a map of context ids to contexts anyway
<wrtp> fwereade: in the past when i've had map[id]something, i've usually ended up putting the id in the something eventually.
<fwereade> wrtp, then we can put it in as soon as there's a need :)
<wrtp> fwereade: if you do that, you'll remove the field from ExecInfo, right?
<fwereade> wrtp, ofc
<wrtp> (that's a good enough reason for me, BTW)
<fwereade> wrtp, cool :)
<fwereade> wrtp, TheMue: I want to add something to testing, and I need opinions
<fwereade> wrtp, TheMue: specifically I want to move the test charms repo into testing and add some methods that always return paths to copies of those charms (which should remain inviolate)
<wrtp> fwereade: sounds like a good idea to me
<wrtp> fwereade: make testing pull its weight a little more :-)
<fwereade> wrtp, TheMue: have we made a semi-firm decision to keep dependencies out of testing?
<wrtp> fwereade: not necessarily
<fwereade> wrtp, TheMue: because what'll actually be *convenient* 9 times out of 10 is "gimme a Charm", not a path-to-a-charm
<wrtp> fwereade: does that mesh well with the current charm testing code?
<fwereade> wrtp, depends: in go/charm itself, sometimes but definitely not always
<fwereade> wrtp, in terms of tests for state, and also for hooks (which will be using state)
<fwereade> wrtp, ...pretty much always, I think
<wrtp> fwereade: i'm happy either way. the testing package is all about convenience, and dependencies don't matter too much (although it's nice to keep them down as usual)
<fwereade> wrtp, well, ok, both charms and paths-to-charms will be useful in both situations
<fwereade> wrtp, cool, thanks
<wrtp> fwereade: you've got a review
<fwereade> wrtp, cheers
<wrtp> fwereade, TheMue: another review for your entertainment and delight:  https://codereview.appspot.com/5841047
<fwereade> wrtp, what would you think of renaming TestingT to Fatalfer?
<wrtp> lol
<wrtp> i wondered about that.
<fwereade> wrtp, my brain got tangled up trying to figure out what was actually in play there
<fwereade> wrtp, and since a gocheck.C would work just as well...
<fwereade> wrtp, ...the T feels wrong
<wrtp> fwereade: agreed
<wrtp> fwereade: but i couldn't think of a better name
<fwereade> wrtp, Fatalfer is ugly but effective
<wrtp> fwereade: true, and no one is ever gonna name it :-)
<fwereade> wrtp, IMO
<wrtp> ok, i'll change it
<fwereade> wrtp, yeah
<fwereade> wrtp, cool
<fwereade> wrtp, reviewed
<wrtp> fwereade: "Can we really delete "/" without getting an error?"
<wrtp> hmm, seems so!
<fwereade> wrtp, it's not in the tests
<wrtp> fwereade: ah, good point
<wrtp> fwereade: will add it
<fwereade> wrtp, cool, thanks
<wrtp> niemeyer: hiya!
<niemeyer> Morning jujuers!
<fwereade> niemeyer, heyhey
<TheMue> niemeyer: moin ;)
<wrtp> TheMue: any particular reason why some methods are on *TopologySuite and some on TopologySuite?
<TheMue> wrtp: have to look, normally no reason.
<wrtp> TheMue: ok, was thinking of editing to make them consistent.
<fwereade> wrtp, LGTM
<wrtp> TheMue: another thing: why is TopologySuite in internal_test.go? i haven't looked too hard, but it looks like most of what it does is using public methods. you could maybe get away with putting readTopology into export_test.go, perhaps?
<TheMue> wrtp: it's only needed in SetUpTest() due to field modification
<wrtp> TheMue: what is?
<TheMue> wrtp: it's there because in the first approach the methods were private
<wrtp> TheMue: ah, so it could go back outside internal_test? that would be a good thing to do, i think.
<TheMue> wrtp: hehe, that's the problem of asking a second question while the first isn't answered yet
<wrtp> TheMue: ah you mean *TopologySuite is only needed in SetUpTest?
<wrtp> :-)
<TheMue> wrtp: yep
<wrtp> i think i'd just make 'em all pointer-receiver
<wrtp> TheMue: it's pretty conventional to do that.
<TheMue> wrtp: could do this, yep, would change nothing besides the fact that any test could create side effects by modifying the suite data by accident
<wrtp> TheMue: i don't mind that
<TheMue> wrtp: btw, the agent entity in the test will be removed if we agree that this approach is the right one. it has been there only for testing.
<wrtp> TheMue: ok.
<TheMue> wrtp: as i don't mind not always using reference types
<TheMue> wrtp: otherwise, why aren't all automatically reference types, e.g. like in java?
<wrtp> TheMue: i usually go all one or all the other.
<TheMue> wrtp: what do you win with that approach?
<wrtp> TheMue: i usually use reference types when the struct is bigger than a few words.
<wrtp> TheMue: dunno really, just seems conventional.
<TheMue> wrtp: regarding agent, i would still like to keep it private. it's not a standalone data type and will only be used directly inside the state package
<wrtp> TheMue: that's only true if it's embedded. i'm thinking that it'll be returned as an object in itself.
<wrtp> [09:44] <wrtp> func (u *Unit) Agent() *Agent
<TheMue> wrtp: one can call myUnit.Agent().Connected() w/o problems
<wrtp> TheMue: if the Agent method is exported, i think its return type should be.
<wrtp> TheMue: otherwise you can't declare a variable to assign it into.
<TheMue> wrtp: you don't need an extra variable
<wrtp> TheMue: you might want one...
<TheMue> wrtp: why?
<TheMue> wrtp: what's the use case?
<wrtp> TheMue: so you can use it some way that's independent of the unit it's associated with.
<wrtp> TheMue: having a publicly exported method with a private return type seems a bit... rude.
<wrtp> TheMue: and then you won't see the docs for the methods either.
<TheMue> wrtp: that's why i preferred the embedded approach first
<wrtp> TheMue: i don't see why we can't just export the Agent type. seems to work ok for me.
<niemeyer> wrtp: Which methods would be in the Agent?
<TheMue> wrtp: so anyone can do an agt := &state.Agent{st, "/some/crude/path", nil} ?
<wrtp> niemeyer: https://codereview.appspot.com/5782053/diff/15001/state/agent.go
<TheMue> niemeyer: it's the reincarnation of the old AgentStateMixin
<wrtp> TheMue: only if you export the fields.
<TheMue> niemeyer: my second approach has been to create it as type that can be embedded
<TheMue> wrtp: ok, agt := &state.Agent{}
<wrtp> TheMue: that's true of any exported type.
<wrtp> TheMue: including State, etc
<TheMue> wrtp: yep, that's why i'm careful with exporting
<TheMue> wrtp: Agent only makes sense together with its state entity
<wrtp> TheMue: i disagree. it makes perfect sense for a unit agent to get the Agent from its Unit and use that independently.
<TheMue> wrtp: and forget about the unit state?
<wrtp> TheMue: (if it feels like it)
<wrtp> TheMue: yeah, why not?
<wrtp> TheMue: it'll be doing other things with the unit state, but it's not nonsense to use the Agent type independently.
<niemeyer> TheMue: I suggest having those methods on the unit itself..
<TheMue> niemeyer: hehe
<niemeyer> TheMue: StartAgentPinger, IsAgentAlive, WaitAgentAlive..
<TheMue> niemeyer: that's what i had with embedding
<niemeyer> TheMue: You don't need Disconnect.. just return the pinger
<niemeyer> TheMue: No need to embed either.. this is the unit..
<TheMue> niemeyer: only the unit? today it's also in machine and service
<niemeyer> TheMue: pinger should be returned, rather than kept within the agent/unit
<niemeyer> TheMue: Ah, bad assumption on my part.. nevermind then
<wrtp> niemeyer: you might prefer TheMue's original implementation: https://codereview.appspot.com/5782053/diff/7001/state/agent.go
<TheMue> niemeyer: we had a longer discussion about different variants this morning
<niemeyer> TheMue: Yeah, just export Agent then..
<wrtp> niemeyer: i thought that if this functionality was going to be used in several different places, that it would make sense as an independent type rather than embedding it.
<niemeyer> TheMue: Sounds good, sorry for jumping in.. I'm clearly missing details
<niemeyer> wrtp: +1
<niemeyer> wrtp, TheMue: What's there looks quite good, FWIW
<wrtp> niemeyer: hence the current proposal.
<wrtp> niemeyer: the latest version? or the earlier one?
<TheMue> niemeyer, wrtp: ok, will go in in a few moments as new proposal
<niemeyer> wrtp: What's in https://codereview.appspot.com/5782053/diff/15001/state/agent.go
<wrtp> niemeyer: cool
<wrtp> niemeyer: i'm thinking that with the agent type exported, it'll work well.
<wrtp> niemeyer: but i'm not sure that TheMue agrees :-)
<TheMue> wrtp: no prob, 3 to 1 ;)
<wrtp> TheMue: i'm hoping you'll agree too when you see it being used :-)
<TheMue> wrtp: nah, there have only been small doubts. i simply like embedded types.
<wrtp> me too... in their place :-)
<TheMue> wrtp: so i now can also remove the test type and integrate agent into unit as a first run
<wrtp> TheMue: sounds good
 * wrtp goes to lunch.
<niemeyer> fwereade: ping
<fwereade> niemeyer, pong
<niemeyer> fwereade: Do you have a moment for a call?
<fwereade> niemeyer, sure
<fwereade> niemeyer, g+?
<niemeyer> fwereade: Sounds good
<fwereade> niemeyer, invited
<niemeyer> fwereade: You too :-D
<fwereade> niemeyer, ha, sorry, I'll join yours; ok?
<niemeyer> fwereade: Ok, I'm moving, ok?
<niemeyer> :)
<niemeyer> HAHA
<niemeyer> fwereade: Choose a random number, now! :-)
<fwereade> 1
<wrtp> back
<wrtp> are we having a meeting now?
<wrtp> niemeyer, hazmat: ^
<fwereade> wrtp, an hour from now I think
<wrtp> fwereade: my calendar said now.
<wrtp> fwereade: but i'm happy to go with an hour from now too
<fwereade> wrtp, niemeyer said "lunchtime" ;)
<hazmat> doh
<wrtp> fwereade: ah, that's fairly conclusive :-)
<hazmat> meeting in 1hr
<hazmat> i pushed it to the same time not taking into account aforementioned timezone shifts
<hazmat> on the calendar
 * niemeyer waves
<niemeyer> hazmat, fwereade, wrtp, bcsaller, jimbaker: Party time?
<fwereade> woooooo!
<wrtp> niemeyer: yay!!!
<hazmat> indeed
<jimbaker> sorry, i need to find a better place. didn't realize we had a meeting now
<hazmat> invites should be out
<niemeyer> I used to hangout with people.. nowadays I hangout with extras..
<fwereade> haha
<wrtp> bizarre, i wonder where my windows have been disappearing to
<wrtp> they've all gone, including my editor (this IRC window, until i quit it from the sidebar and restarted it), my music player, etc
<wrtp> but they're still running!
<wrtp> guess i'll try restarting unity
<wrtp> again
<wrtp> weird, i did that and they've all come back .... except for Chrome which has now disappeared again
<wrtp> sigh
<wrtp> time to reboot, i think
<wrtp> and i can't type into any of them except this one!
<wrtp> niemeyer: that error will never happen unless something is doing a concurrent delete alongside ZkRemoveTree
<wrtp> niemeyer: because we'll get an error earlier when calling zk.Children
<niemeyer> wrtp: That error will never happen unless it happens.. you've documented it to have a given behavior, so either implement the behavior or remove the comment.
<wrtp> niemeyer: fair enough
<fwereade> wrtp, I strongly favour removing the comment
<fwereade> wrtp, I'd rather have that test fail than other tests work with weird inconsistent data
<wrtp> fwereade: i think it's important to be able to do ZkRemoveTree("/x") and have it work when /x doesn't currently exist
<wrtp> fwereade: it means that a test can be correctly torn down even if the path was not created as expected.
<fwereade> wrtp, should we not specifically be ignoring ZNONODEs though?
<niemeyer> fwereade: The end result is the same if the node doesn't exist
<niemeyer> fwereade: Right, that's what's documented
<fwereade> wrtp, if there's any other error we want to know about it
<fwereade> niemeyer, ^
<niemeyer> fwereade: Yeah
<wrtp> fwereade: yeah. i am, but i hadn't taken into account the possibility that something might be deleting nodes concurrently with ZkRemoveTree
<wrtp> fwereade: that's what i do, i think
<fwereade> wrtp, ohhh... sorry, yes, you never get to delete if you got a ZNONODE out of children
<wrtp> fwereade: yup
<fwereade> wrtp, I guess we're unlikely to call that any time other than test teardown so it's unlikely anything else would be working concurrently
<wrtp> fwereade: that was my thought, but i'm checking anyway now, for completeness' sake.
<fwereade> wrtp, yeah, seeing two "unlikely"s in the same sentence makes me nervous ;)
<wrtp> fwereade: i'm not going to add a test though 'cos it's hellish difficult to test.
<fwereade> wrtp, I think I'm ok with that :)
<wrtp> fwereade: phew
<niemeyer> wrtp: It's already quite awesome that we have testing for the testing functions at all :)
 * wrtp smiles and nods.
<wrtp> it was easy to do and i wanted to check the thing worked!
<wrtp> niemeyer: submitted. thanks for the review.
<fwereade> wrtp, just saw a transient failure in TestStartAndClean: "cannot start ZooKeeper server: cannot listen on port 21812: listen tcp 127.0.0.1:21812: address already in use"
<wrtp> fwereade: hmm
<wrtp> fwereade: i think perhaps there is a race because the server might hang on to the port for a little longer than it should after Destroy.
<fwereade> niemeyer, wrtp: gtg before cath becomes vexed, but: https://codereview.appspot.com/5845051 (and happy weekends!)
<wrtp> fwereade: nice.
<wrtp> fwereade: and a very happy weekend to you too
<wrtp> fwereade: perhaps we should not always start the server on the same port.
<wrtp> fwereade: i was considering making that change anyway.
<wrtp> fwereade: because it eliminates a nasty category of difficult-to-diagnose error.
<wrtp> fwereade: the problem is that we can't reliably pick a good port to use, because if we use a socket to check it, there's a possibility that the port allocation lingers for a little while even after the connection's been closed.
<niemeyer> fwereade: Cheers!
<niemeyer> fwereade: Great way to close the week! :-)
<wrtp> niemeyer: i'm off too. have a good weekend!
<niemeyer> wrtp: Thanks, a good one for you as well
<niemeyer> I'll actually break now.. will be back in an hour or so for more reviews
<hazmat> fwereade, i remember talking to you about this bug https://bugs.launchpad.net/juju/+bug/920000 a while ago.. i believe you said it was invalid?
<fwereade> hazmat, as I recall it implies some can't-happeny environment weirdness -- it's clearly somehow related to the twisted change but we couldn't figure out exactly how
<hazmat> fwereade, okay.. so not fixable without reproduction, which shouldn't be possible?
<fwereade> hazmat, yes, well put
<hazmat> fwereade, thanks
<hazmat> niemeyer re status output..     relations:
<hazmat>         db: [blog1, blog2]
<hazmat> if a client has multiple relations to the mysql db in this case..  it would show up multiple times in that list without any contextual information to differentiate
<niemeyer> hazmat: Yo!
<niemeyer> hazmat: Sorry, that took longer than expected
<niemeyer> hazmat: We have two options: 1) mention as many times as necessary, with more context; 2) mention a single time, and if you want to know how e.g. blog2 is connected to mysql, well, look at blog2
<niemeyer> hazmat: Option 2 is more concise
<hazmat> niemeyer, option 2 feels like a cleaner ui, if any of the connections from it are failing then it shows under relation-error:
<niemeyer> hazmat: +1
#juju-dev 2012-03-17
<niemeyer> fwereade: Wow, you're still up and kicking..
<fwereade> niemeyer, I should probably sleep, but I've been enjoying myself ;)
<niemeyer> fwereade: Hehe :)
<niemeyer> fwereade: I've been toying with the idea of introducing gocheck.TempDir(), to clean up the testing stuff further
<fwereade> niemeyer, I think the hook.Context turned out really well, tyvm again for that discussion
<niemeyer> fwereade: I'm a bit uncertain right now, though.. will sleep on it
<niemeyer> fwereade: My pleasure, and thanks as well. I enjoyed the insight into your thinking.
<fwereade> niemeyer, and on that note, I think I really will go to bed ;)
<fwereade> niemeyer, thanks :)
<niemeyer> fwereade: Sounds like a good plan :-)
<niemeyer> fwereade: It's almost time for me to go too!
<niemeyer> fwereade: Enjoy the weekend
#juju-dev 2013-03-11
<arosales> mramm: do you guys have a juju-tools tarball for 1.9.11 that I can use to make a control bucket on HP's cloud?
<mramm> arosales: sorry, was in a meeting and then went to lunch without seeing your message, looking...
<arosales> mramm: thanks
<arosales> mramm, I found http://juju-dist.s3.amazonaws.com/tools/juju-1.9.11-precise-amd64.tgz
<mramm> cool
<mramm> I am at the airport waiting for my flight, and haven't had time to look
<mramm> I added two new tickets:
 * arosales is going to see if that works for hp, and if so send my notes to the list.
<mramm> one for adding HP and private openstack clouds to our "tools just work" story.
<arosales> mramm, hmm I don't see those yet @ http://goo.gl/ but perhaps I am looking at the wrong search
<mramm> https://canonical.leankit.com/boards/view/103148069/104185973
<mramm> https://canonical.leankit.com/boards/view/103148069/104185972
<mramm> I do not expect them to be done this week
<mramm> but hopefully next week before 1.9.12
<mramm> I will get them estimated and on the calendar this week -- but I expect that they will both be small
<arosales> mramm, ah ok I was looking for an lp bug.  Thanks
<mramm> yea, we are using LP for features users ask for, and for actual bugs
<mramm> but are tracking everyday development tasks and internally generated stuff on the kanban board (which is a better fit for our team's processes)
<arosales> question: should "juju destroy-environment" also remove my control bucket in go-juju 1.9.11?
<wallyworld__> arosales: yes, the control bucket gets deleted
<arosales> wallyworld__, hmm ok
<wallyworld__> not what you were expecting?
<arosales> wallyworld__, I guess not as I manually made the control bucket, but perhaps that is a work around . . .
<wallyworld__> arosales: yes, normally the control bucket gets made automatically and is considered part of the environment. you needed to make it manually since we don't yet have a well defined public bucket in which to put tools, like for ec2
<arosales> wallyworld__, ya for ec2 I don't see the deleting control bucket message
<arosales> on destroy-environment, thta is.
<arosales> just for hpcloud
<wallyworld__> arosales: for ec2, the control bucket is deleted also, perhaps the logging is different
<wallyworld__> hmm, same logging
<arosales> wallyworld__, ah ok. perhaps cause I got a failed to delete control-bucket message from hp
<arosales> although it did delte
<arosales> *delete
<wallyworld__> that can happen sometimes according to a comment in the code
<wallyworld__> not sure why
<wallyworld__> we should fix that
<arosales> wallyworld__, http://pastebin.ubuntu.com/5606299/
<wallyworld__> thanks, that error code is useful, i'll see what the openstack docs say
<wallyworld__> we may be able to catch that specific error and deal with it
<wallyworld__> so, even with that error, the container was indeed deleted?
<arosales> wallyworld__, since on hp cloud the control bucket has to be made manually, do you have any suggestions for a workaround if juju destroy-environment deletes it?
<arosales> wallyworld__, correct with that error the control bucket was still deleted.
<wallyworld__> arosales: i normally just do a upload-tools each time i bootstrap, but i'm guessing you don't have the source to do that?
<arosales> wallyworld__, it looks like the compute instances were also terminated, which is good as I wasn't able to successfully process juju destroy-environment on hp cloud
<wallyworld__> yes, destroy nukes compute nodes and control bucket
<arosales> wallyworld__, correct, I was trying to get around building from source, but perhaps I should just go the source route
<arosales> wallyworld__, sure I was just wondering if the error on deleting the control-bucket stopped terminating the compute instances.
<wallyworld__> arosales: you can create a separate bucket and add it to the config as public-bucket-url. i'll work up an example and tell you what the correct config is
<arosales> but it did not, compute instances were also terminated (just as a side note).
<wallyworld__> arosales: what's your timezone there, will you be around for a bit?
<arosales> us central
<arosales> so getting close to end of day for me.
<wallyworld__> arosales: the bucket deletion is done last, and the error is ignored as such
<arosales> wallyworld__, good to know on destroying the resources, thanks.
<wallyworld__> arosales: would it be ok if i emailed you the setup notes?
<arosales> wallyworld__, no rush.  an email works fine for an example.
<wallyworld__> i'll do it now, but not sure how long it will take to get right
<arosales> no worries, just if you have a spare moment.
<wallyworld__> need to spin it up to test etc
<arosales> wallyworld__, I can make a public URL control bucket
<arosales> wallyworld__, https://region-a.geo-1.objects.hpcloudsvc.com:443/v1/49578719334052/juju-6de995f1bc7f4d72a1275f25f7a1ef1a/tools/juju-1.9.11-precise-amd64.tgz
<wallyworld__> not quite - when we decide how/where to make a public bucket available with all the tools in (like for ec2), you add public-bucket-url as a config item
<wallyworld__> in the above example, you add something like @public-bucket-url: https://region-a.geo-1.objects.hpcloudsvc.com:443/v1/49578719334052/juju-6de995f1bc7f4d72a1275f25f7a1ef1a@ to your yaml
<wallyworld__> s/@/"
<wallyworld__> so long as the bucket is one you made manually and is separate to your control bucket, you put tools there and it won't be deleted
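Pieced together from the discussion, an environments.yaml fragment for this setup might look like the following. The environment name and bucket values are placeholders, and `type: openstack` plus the overall layout are assumptions; `public-bucket-url` and `default-series: precise` are the keys mentioned above.

```yaml
environments:
  hpcloud:                 # placeholder environment name
    type: openstack
    # separate, manually created public bucket holding the tools;
    # juju appends "tools" to this URL automatically
    public-bucket-url: https://region-a.geo-1.objects.hpcloudsvc.com:443/v1/<tenant-id>/<public-bucket>
    # private bucket, created (and deleted on destroy) by juju itself
    control-bucket: <your-unique-bucket-name>
    # needed when the local machine runs a different series (e.g. quantal)
    default-series: precise
```

As wallyworld__ explains later in the log, the public bucket is shareable, while the control bucket is generated under your own credentials and is private to your environment.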
<arosales> wallyworld__, ok I can give that a try
<wallyworld__> arosales: the issue with openstack is that the url contains the tenant id, unlike ec2, and we need to figure out a good way to publicise an acceptable public bucket url
<arosales> wallyworld__, yup http://pastebin.ubuntu.com/5606323/
<arosales> I ran into that
<wallyworld__> arosales: public-bucket-url
<wallyworld__> not public-control-bucket
<arosales> sorry I had "public-control-bucket"
<wallyworld__> np
<arosales> wallyworld__, do I need to specify the tools path in the url I pass to my yaml file?
<arosales> http://pastebin.ubuntu.com/5606337/
<wallyworld__> arosales: let me check
<arosales> wallyworld__, trying with appending /tools and /tools/juju-1.9.11-precise-amd64.tgz didn't help with finding the tools
<wallyworld__> arosales: juju appends "tools" automatically to the public bucket url. at first glance, what you have seems correct.
<wallyworld__> arosales: are you running quantal locally?
<arosales> wallyworld__, correct
<arosales> running quantal locally
<wallyworld__> arosales: so you need default-series: precise in your yaml
<wallyworld__> to tell juju not to just try and use the same series you are running locally
<arosales> yup got that     default-series: precise
<wallyworld__> oh ok, and still failing
<arosales> wallyworld__, I got juju to work on hpcloud with this control bucket just had the removal of the control-bucket which I was seeing if I could make easier for other folks wishing to test on hpcloud
<wallyworld__> arosales: it would be better if we could just point folks at a public bucket, even if it has a tenant id in it (imho). i'll plug in your bucket above into my setup and see what i need to do to get it to work
<arosales> wallyworld__, no worries, seems the way forward is to build from source
<wallyworld__> i'd like to get this working ie it should work
<wallyworld__> i'll figure out what needs to be done and let you know, may only take a few minutes so i'll ping you if you are still around, otherwise email
<arosales> ya, I was hoping to give my public bucket to others to test, if they wish for an easier setup . .  .
<arosales> wallyworld__, sounds good.
<arosales> please don't let this pre-empt any of your more critical work though :-)
<wallyworld__> indeed. this should work, it's been tested before, just a doc issue i think unless something has broken
<wallyworld__> oh no, to me this is critical
<wallyworld__> you are the first guinea pig to try this. we only just got it working last week with the control bucket trivk
<wallyworld__> trick
<arosales> wallyworld__, ok, thanks.
<arosales> wallyworld__, I am happy to test or write up bugs, just let me know :-)
<wallyworld__> thanks. appreciate your patience
<arosales> ah no worries, part of the fun :-)
<wallyworld__> yes. we were so excited last thursday when "juju deploy" wordpress worked for the first time on hp cloud :-)
<arosales> yup I also saw it working here with the control bucket removal
<arosales> \o/ :-)
<wallyworld__> indeed
<arosales> wallyworld__, I'll take a look at email tonight, but need to log off for a bit.
<arosales> wallyworld__, thanks for the help
<wallyworld__> sure, np. will email as soon as i figure it out.
#juju-dev 2013-03-12
<wallyworld__> arosales: hi, did you try it again?
<arosales> wallyworld__, I am actually giving it a go :-)  but my previous juju destroy-environment didn't exit properly. So bootstrapping to that environment is giving me issues.
<arosales> wallyworld__, thanks for the instructions, btw
<arosales> I could just create a new environment . . .
<wallyworld__> np, if you had an error code from the failed destroy?
<wallyworld__> just manually delete your bucket etc using hp cloud console
<wallyworld__> or try destroy again, does it work?
<arosales> wallyworld__, http://pastebin.ubuntu.com/5606834/
<arosales> I am not sure if this is a result of a defunct previous environment or not . . .
 * arosales will try with a different environment 
<wallyworld__> never seen that error before, sounds like provider-state is fooked. try deleting it and the container it lives in manually
<wallyworld__> could very well be related to previous env destroy
<arosales> wallyworld__, new environment also fails: http://pastebin.ubuntu.com/5606837/
<arosales> I'll delete them manually via the hp console.
<wallyworld__> looks like control bucket url is wrong
<wallyworld__> yeah, just delete everything - compute nodes, control bucket - and try again
<arosales> I have     public-bucket-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/AUTH_365e2f1e-aea0-44c5-93f4-0fe2eb1f2bcf
<arosales> wallyworld__, manually destroyed environment
<arosales> redeployed and got http://pastebin.ubuntu.com/5606848/
<wallyworld__> i can't see that public bucket content
<wallyworld__> well now it can see your tools etc, so that's a win
<arosales> I am actually using your bucket
<wallyworld__> um, you may have left over security groups from previous runs
<wallyworld__> you can list those and delete
<wallyworld__> nova secgroup-list i think
<wallyworld__> maybe you can see them from hp cloud console, not sure
<wallyworld__> yes, you can
<wallyworld__> i had a look with my creds, and there's a few there, maybe 20
<wallyworld__> not sure what the limit is
<wallyworld__> did you have many groups created?
<arosales> deleted all juju created security groups
<arosales> re-bootstraped
<arosales> http://pastebin.ubuntu.com/5606856/
<arosales> still getting a resource access error when using your control bucket. .  .
<arosales> wallyworld__, perhaps I have to have my own control bucket?
<wallyworld__> yes definitely
<wallyworld__> you need credentials to read/write to it
<wallyworld__> but my public bucket can be shared
<arosales> oh no worries.
<arosales> that was my mistake I thought it was public.  I should have checked via the browser first.
<wallyworld__> your control bucket url looks wrong though
<wallyworld__> it seems to be missing a bit
<arosales> I was using the one you gave me via email.
<wallyworld__> maybe you can paste your environments.yaml file
<wallyworld__> i didn't give you a control bucket url
<wallyworld__> the control bucket url is generated internally
<wallyworld__> you got a public bucket url from me
<arosales> sorry I thought step 3 was a public-bucket-url to try
<wallyworld__> let me check my email
<wallyworld__> yes, step 3 is the public bucket url
<wallyworld__> but you said control bucket above
<wallyworld__> i think
<wallyworld__> in any case, the logged messages seem to show an incorrect control bucket url
<wallyworld__> which seems strange
<wallyworld__> if you paste your environments.yaml file, i can see if it looks ok (remove any passwords if they are in there)
<arosales> wallyworld__, does that mean public control bucket's can't be shared?
<wallyworld__> there's no such thing as a public control bucket
<wallyworld__> there's a public bucket url which can be shared
<wallyworld__> and a control bucket which is private to your env
<arosales> ah
<wallyworld__> the control bucket url is generated internally from the bucket name in config
<wallyworld__> the public bucket url is gotten from the published url of a public bucket
<wallyworld__> so you create a public bucket (ie container) using the cloud console or equivalent
<wallyworld__> and then put the tools there
<wallyworld__> and then give people the url
<wallyworld__> for the control bucket, that's created by juju
<wallyworld__> it uses the control bucket name in config and creates it under your own credentials
<wallyworld__> ie private to you
<arosales> wallyworld__, finally success http://15.185.118.228/wp-admin/install.php
<wallyworld__> \o/
<wallyworld__> so you used my public bucket
<wallyworld__> and your own control bucket name
<arosales> sorry about the misstep on confusing the public-bucket-url with the control-bucket
<arosales> wallyworld__, correct
<wallyworld__> no problem at all, it can be and is confusing
<wallyworld__> i'm really glad it worked
<arosales> wallyworld__, one last question
<wallyworld__> sure
<arosales> I was going to send to the list on getting set up on hpcloud
<arosales> do you have a preference on using your public bucket?
<wallyworld__> that's the $64000000 question
<wallyworld__> i set up my bucket using a shared account
<arosales> I can also create one as we verified the steps here.
<wallyworld__> i'm not sure what our policies around all this are
<wallyworld__> the credentials i use were provided by someone else in canonical
<arosales> ah
<arosales> I'll go ahead and create a public bucket off my account (not shared)
<wallyworld__> we sort of need a true public bucket, but who pays for it if you know what i mean
<arosales> ya there is the cost part
<arosales> I am fine with incurring that cost for now for any users that want to try out hpcloud
<wallyworld__> is that something canonical would just do for the long term so people can use juju on hp cloud?
<arosales> wallyworld__, many thanks
<wallyworld__> np, really pleased you got it working
<wallyworld__> for private clouds, the issue goes away
<arosales> wallyworld__, possibly I'll need to confirm longterm logistics.
<wallyworld__> ok, thanks. if you find out something, can you let us know (us = blue squad and juju-core folks etc)
<arosales> wallyworld__, will do. I'll need to investigate a bit, but I will definitely let you know.
<wallyworld__> excellent, thanks. it's been an open question for us, but till now, only of theoretical value since it wasn't working yet :-)
<arosales> wallyworld__, I guess this is treated a little different in aws?
<wallyworld__> on ec2, there's a public bucket http://juju-dist.s3.amazonaws.com/
<wallyworld__> someone must  pay for that i think?
<wallyworld__> it has been there for ages, so i'm not sure who/how it was set up
<arosales> wallyworld__, seems logical, but I am not sure what account it comes out of :-)
<wallyworld__> me either
<wallyworld__> maybe i was told at one point but cannot recall now
<arosales> wallyworld__, I'll check with a few folks to see if I can find that info. Be good to know
<wallyworld__> yeah, i'm sure the juju-core folks would know
<wallyworld__> i'll ask them
<TheMue> Morning
<mgz> let's not do the juju team meeting at 7:00 GMT next week, as google calendar has today's (cancelled) one down for
<jam> mgz, wallyworld_, if you guys want to say hello on mumble, we can, though I don't expect you to be working today yet :)
<wallyworld_> jam: i've been working :-) i have a few questions, so let me grab my headphones
<jam> wallyworld_: you scared mumble
 * TheMue is at lunch
 * TheMue is back again.
<sidnei> hi folks, any chance that https://bugs.launchpad.net/juju/+bug/1097015 is fixed in juju-core?
<_mup_> Bug #1097015: "juju status" slows down non-linearly with new units/services/etc <canonical-webops> <juju:Confirmed> < https://launchpad.net/bugs/1097015 >
<mgz> good question, and on that we can hopefully answer for certain shortly when we do some scale testing
<sidnei> doesn't need too much of a scale fwiw, it's taking me in excess of 60s to run juju-status with about 12 units, one subordinate on each.
<sidnei> with pyjuju still that is
<mgz> sidnei: that much should be fixed
<sidnei> i guess i should give it a try then
<mgz> hmm, annoyingly the review for the fix doesn't include why kapil found it didn't work, and I don't recall
<mgz> hazmat: what exactly was borked with lp:~hazmat/juju/big-oh-status-constant again?
<sidnei> hazmat: ^?
<hazmat> sidnei, openstack?
<sidnei> hazmat: yup
<hazmat> sidnei, i'll take a look.. long term solution is hiding this behind the api and divorcing provider queries from state queries.. ie answer from state, and behind the api can cache response.
<mgz> hazmat: specifically, your branch that aimed to make the O() better for juju status ended up not being effective, but I can't remember why
<hazmat> mgz, for openstack, its because the state api is doing queries to resolve constraints
<mgz> we still ended up doing network operations per-machine, but I don't remember the specifics
<hazmat> mgz, the efficiency came from asking for instances collectively instead of one by one
<thumper> morning folks
<hazmat> mgz, constraints afaicr
 * thumper is back and only minorly jet-lagged now
<thumper> morning hazmat, mgz
<hazmat> mgz, the constraints lookup in openstack isn't cached and ends up being quite overdone.. for flavors
<mgz> we can, and do, cache flavor lookups for constraints, what else do we need?
<hazmat> thumper, greetings
<mgz> if the cache isn't working we should fix that/
<hazmat> mgz, for the openstack query that should be the majority
<hazmat> sidnei, can you do the status with -v and pastebinit
<hazmat> mgz, the rest is fixed overhead around state (still O(n)) that needs an api to resolve
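The flavor caching mgz mentions is plain memoization. A minimal, hypothetical sketch of the idea (the names flavorCache and RAM are made up for illustration and are not juju's actual API; a real cache shared across goroutines would also need a mutex):

```go
package main

import "fmt"

// fetchFlavor stands in for a round trip to the OpenStack API.
type fetchFlavor func(name string) (ram int, err error)

// flavorCache memoizes lookups so constraint resolution during
// "juju status" hits the provider once per flavor, not once per
// machine. Not goroutine-safe as written.
type flavorCache struct {
	fetch fetchFlavor
	seen  map[string]int
}

func newFlavorCache(f fetchFlavor) *flavorCache {
	return &flavorCache{fetch: f, seen: make(map[string]int)}
}

func (c *flavorCache) RAM(name string) (int, error) {
	if ram, ok := c.seen[name]; ok {
		return ram, nil
	}
	ram, err := c.fetch(name)
	if err != nil {
		return 0, err
	}
	c.seen[name] = ram
	return ram, nil
}

func main() {
	calls := 0
	c := newFlavorCache(func(name string) (int, error) {
		calls++
		return 2048, nil
	})
	for i := 0; i < 12; i++ { // twelve machines, one flavor
		c.RAM("standard.small")
	}
	fmt.Println("API calls:", calls) // one, not twelve
}
```

The same shape applies to the batching point: one collective instance query replaces n per-machine ones, turning O(n) network round trips into O(1).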
<sidnei> hazmat: sure
<mgz> I know we do zookeeper work per-machine, but I'm pretty certain you told me about something else after you retested when we landed that branch
<mgz> (and I guess we do mongo work per-machine, so have traded one kind of performance characteristic for another in that regard currently on juju-core)
<mgz> hey thumper
<mgz> ...man, I wish #juju was logged, or that I didn't suck and had logs myself
<mgz> memory not quite as grepable
<hazmat> mgz, it is logged
<hazmat> irclogs.ubuntu.com
<mgz> sidnei: so, if it turns out to be something obvious with constraints, that's fixable
<mgz> hazmat: #juju-dev #juju-gui but no #juju
<hazmat> hmm
<mgz> would have been shortly after 2012-08-09 when it got landed that discussion happened
<bac> mgz: just open an RT and IS will log it
<bac> no good for retroactive, though.  :)
<hazmat> mgz, that's so sad.. there was an rt for it
<hazmat> when we switched from ensemble to juju
<hazmat> looks like it was never acted upon
 * hazmat files one
<hazmat> mgz, cc'd you
<mgz> ta
<hazmat> mgz, incidentally i don't see juju-dev there either
<hazmat> the logs that is
<mgz> should be if you selected a new enough year
 * thumper runs go test on trunk in raring to see where we are
 * thumper added a card to kanban to fix the tests
 * thumper fires up the vm again
<thumper> so...
<thumper> what is the damn param to only run a subset of the tests again?
<thumper> some -gocheck flag
 * thumper doesn't remember
<mgz> run `go test -bogusflag` to get flag help
<mgz> -gocheck.f takes a regexp of... something
<thumper> mgz: thanks
<thumper> mgz: we should really write that in to the damn help docs somewhere
<thumper> hi mramm
<mramm> thumper: hey!
<thumper> mramm: hey, just sent you an email about the kanban board
<mramm> I'm on my way to pgday at pycon
<thumper> I kinda forgot that you were online...
<thumper> oh, yeah
 * thumper is jealous
<thumper> one day I'll get to go to pycon
 * thumper preps a talk for next year
<mramm> thumper: we will figure out a way to get you there next year
<thumper> "using juju to deploy your django app"
<mramm> sure
<thumper> now we just have to make it awesome
<mramm> something like that would be nice
<mramm> right
<sidnei> hazmat: is this of any help? http://paste.ubuntu.com/5608923/
<mgz> sidnei: that took less than 2 seconds
<sidnei> mgz: you mean minutes right? :)
<mgz> ha
<mgz> so, no, we need the provider side log I guess
<mgz> 4 seconds to auth and get bootstrap node details, 7 seconds to ssh there and connect to zookeeper, *76* seconds doing stuff remotely, 2 seconds listing all servers with openstack api, 4 seconds formatting and exiting
<mramm> thumper: kanban board change looks great, I'm definitely +1 on that!
<thumper> mramm: coolio
<mramm> looks like I'm about to run out of power here on the plane -- see you all on the other side!
<hazmat> mgz, its all client side
<hazmat> mgz, that's pure zk overhead from the client
<hazmat> sidnei, so this is basically fetch raw data to the client overhead .. this only gets better with the api
 * hazmat heads out for a bit
<m_3> davecheney: pounce
<davecheney> m_3: booja!
<m_3> hey man
<davecheney> m_3: wazzup ?
<m_3> so are you on all day today?
<davecheney> how you doing ?
<davecheney> imma here all week, try the fish
<m_3> good, just munching on the queue
<m_3> ha
<davecheney> queue ? linkage ?
<m_3> ok, so I wanted to do another round of debugging testing after a bit if you're up for it
<davecheney> sure
<davecheney> you were going to tell me how to find the jenkins instance after it gets torn down and rebuilt
<davecheney> m_3: two secs, relocating up stairs
<m_3> oh way
<m_3> s/way/wait/
<m_3> :)
<m_3> bout to change locations
<m_3> can't really work on it til after food... just wanted to check your schedule and plan accordingly
<davecheney> m_3: no probs
<davecheney> do you want to do a quick voice call (at your leisure) to sync up
<m_3> ok, so I'll ping you in an hour or so and we can get rolling on that
<davecheney> m_3: kk
<m_3> that sounds great
<davecheney> works for me
<m_3> danke sir
<davecheney> thumper: morning sir
<thumper> hi davecheney
<davecheney> thumper: just checking out your smart bool branch
<thumper> davecheney: cool
<thumper> it isn't rocket science :)
<davecheney> yeah
<davecheney> i've never seen the X status on a card before
<thumper> blocked :)
<thumper> magic communication
#juju-dev 2013-03-13
<thumper> davecheney: I'd love a sanity check on the series email just sent to the list
<thumper> davecheney: it is that classic thing of getting home, and going "ah... wat?"
<davecheney> thumper: that is to be expected
 * davecheney reads
<thumper> davecheney: I've summarised what I think I should do at the bottom of the email, but really want to check before diving in
<hazmat> m_3, davecheney can i get a dial in on that voice.. or has the clock passed
<davecheney> hazmat: m_3 is down with hardware issues
<davecheney> but lets schedule it for tomorrow
<hazmat> davecheney,sadly no clock unites you, i, m3 and mgz
<hazmat> mail it must be
<davecheney> le sigh
<thumper> davecheney: so... I'm trying to understand the "go test" process
<davecheney> right
<thumper> davecheney: so with gocheck, how does it add stuff to the standard testing calls?
<davecheney> there will be a file that bridges between the go test expected function
<davecheney> and the gocheck one
 * davecheney finds an example
<davecheney> you can figure out which one it is
<davecheney> it is the name printed when you run go test -v
<thumper> davecheney: var _ = Suite(&MainSuite{}) does the magic register
<davecheney> that is part of the story
<davecheney> that registers the Suite with gocheck
<thumper> right
<davecheney> but there is a bridge to jump from go test to gocheck
<davecheney> both are necessary
<thumper> yeah, I'm missing that bridge bit
<davecheney> from cmd/cmd_test.go
<davecheney> func Test(t *stdtesting.T) { TestingT(t) }
<davecheney> the TestingT is actually gocheck.TestingT
<davecheney> but we import gocheck into the packages' own namespace
<thumper> ah...
<thumper> davecheney: so how often do you need to do that?
<davecheney> once per package
 * thumper nods
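The two pieces davecheney describes (init-time suite registration plus one standard test function as the bridge) can be modelled with a toy, stdlib-only sketch. This is not gocheck itself; Suite and TestingT here are simplified stand-ins to show why `var _ = Suite(...)` plus one `func Test(t *testing.T) { TestingT(t) }` per package is enough:

```go
package main

import "fmt"

// suites collects everything registered at init time, the way
// gocheck's internal registry collects Suite(...) calls.
var suites []func() string

// Suite records a suite and returns a throwaway value so it can
// be used in a package-level `var _ = Suite(...)` declaration,
// which runs before main (or before go test's Test functions).
func Suite(f func() string) int {
	suites = append(suites, f)
	return 0
}

var _ = Suite(func() string { return "MainSuite ran" })

// TestingT stands in for gocheck.TestingT: the single bridge
// entry point that fans out to every registered suite.
func TestingT() []string {
	var out []string
	for _, f := range suites {
		out = append(out, f())
	}
	return out
}

func main() {
	fmt.Println(TestingT())
}
```

In the real thing, the bridge line quoted above from cmd/cmd_test.go is the only per-package boilerplate; the dot-import of gocheck is what lets it be written as bare `TestingT(t)`.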
<m_3> davecheney: ok, back
<m_3> davecheney: damn... that totally sucked
<m_3> davecheney: so yeah, I'd love to bump til tomorrow if that's cool with you
<davecheney> m_3: no worries
<davecheney> will be online at 8am
<davecheney> should be 2/3pm our time, give or take
<thumper> davecheney: what level of hooks are there in go during package load?
<thumper> davecheney: is there a way to get something run as the package is loaded?
<davecheney> thumper: there is the init() hook, which is executed pre main
<davecheney> there is no concept of package loading
<thumper> when is the init() hook run? and for which package?
<davecheney> it is run pre main()
<davecheney> for all packages, in dep order
<thumper> ah...
<thumper> interesting
<davecheney> so if A depends on B then B's init() gets run before A's
<davecheney> ___but____
<davecheney> inside a package you can have multiple init() functions
<davecheney> but there is no ordering on those
<thumper> but all init() methods are called?
<davecheney> yes
<thumper> isn't there name clashes?
<davecheney> init is a magic method
<thumper> so if I have three different files, a.go, b.go and c.go and all are in package d
<davecheney> when it is compiled each gets a sequence number appended to avoid collisions
<thumper> and each file has an init() func
<thumper> they are all called?
<davecheney> they are all called
<thumper> ok
<davecheney> the order in which they are called is not known
<thumper> when are global vars created in relation to the init methods?
<davecheney> http://play.golang.org/p/kYYu_ZSixq
<davecheney> thumper: before
<davecheney> vars are handled in init methods
<davecheney> that you don't create
<thumper> are they guaranteed to run before the init methods I define?
<thumper> they sound special
<davecheney> yes
<davecheney> top level initialisation is special
 * thumper nods
 * thumper is thinking magic
<davecheney> http://play.golang.org/p/NjZU1tGibR
<thumper> so... vars in package d are initialized before a defined init() method in a dependent package?
<thumper> global vars that is
<davecheney> package level vars yes
<davecheney> there are no global vars
<davecheney> for each packge it goes
<thumper> ok, that is what I meant
<davecheney> vars, init()s, then the packages that depend on that package
<davecheney> all the way back up to main
<thumper> oh... :(
 * thumper thinks
<thumper> actually, I think that still works
<davecheney> the language _should_ prevent you from seeing an uninitialised variable
 * thumper thinks more
<davecheney> thumper: care to share ?
 * thumper has just worked out why "go test ./..." often has weird delayed output
<thumper> buffered i/o
<fwereade> mornings
<TheMue> Good morning
<hazmat> has anyone been able to get a vpc default/only account?
<mgz> jam, dimitern: standup?
<dimitern> mgz: in a sec
<rogpeppe1> hi all
<dimitern> rogpeppe: hiya
<TheMue> Hi all
<jam> mgz:  did you go through the standup already?
<mgz> jam: yup, we were fast
<jam> k, I did have some things to specifically bring up, but I can bring them up on IRC
 * TheMue needs inspiration regarding JUJU_HOME and testing. Anyone interested?
<benji> TheMue: sounds like fun... what is JUJU_HOME?
<TheMue> benji: It's a new env variable controlling the location of the juju home directory, today hard-coded to ~/.juju.
<benji> makes sense
<TheMue> benji: When changing the variable, or a global variable representing it, for a test and multiple tests run concurrently they may get into conflict.
<TheMue> benji: As it is a global state.
<benji> yep; global state is a killer in tests (and everywhere else, really)
<TheMue> benji: Exactly. We already have some tests where $HOME is changed.
<benji> I don't know how the internals of go's testing infrastructure work, but my first thought would be to channel all reads of JUJU_HOME into a function and then make tests use that instead of accessing the real variable
<TheMue> benji: The problem stays.
<TheMue> benji: Test A needs its home and sets it, test B too, now in test A it is read and it gets the - internal or external - value B had set.
<benji> indeed
<benji> each test would then need to create a temporary directory and set its local idea of home to that
<TheMue> benji: It's btw a func returning JUJU_HOME or as default ~/.juju
<TheMue> benji: Yes, and if the tested function calls JujuHome(), what will it get then?
<benji> how about making the default function return an error in tests, so you don't have accidental reading of the shared state, and then have each test make a process- (or thread-) local version
<fwereade> TheMue, surely the problem only appears when run with explicit test.parallel > 1?
<TheMue> fwereade: Yep, only then. Otherwise it's simple. ;)
<fwereade> TheMue, panic in test setup if that's set then?
<benji> as I understand it, test.parallel defaults to the value of GOMAXPROCS, which may be greater than 1
<benji> (another instance of global state being a bad actor)
<fwereade> benji, but *that* is only >1 when explicitly set, right?
<TheMue> fwereade: We very often rely on the home internally, so all tests that may access it, even nested, would have to panic.
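A sketch of the function benji suggests, with a swap helper for tests. The names jujuHome and setJujuHomeForTest are hypothetical, not juju's actual API, and as the discussion notes, this is still package-level state: it is only safe while tests don't run in parallel.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// jujuHome is the single point through which the home directory
// is read, so tests can redirect it without touching the real
// environment variable.
var jujuHome = func() string {
	if h := os.Getenv("JUJU_HOME"); h != "" {
		return h
	}
	home, _ := os.UserHomeDir()
	return filepath.Join(home, ".juju") // hard-coded default
}

// setJujuHomeForTest swaps the lookup and returns a restore func
// for the test to defer. Still global state: unsafe if tests
// that touch it run concurrently.
func setJujuHomeForTest(dir string) (restore func()) {
	old := jujuHome
	jujuHome = func() string { return dir }
	return func() { jujuHome = old }
}

func main() {
	restore := setJujuHomeForTest("/tmp/test-juju-home")
	defer restore()
	fmt.Println(jujuHome())
}
```

Each test would pair this with its own temporary directory, as benji suggests, so tests never share an on-disk home either.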
<gary_poster> fwereade, rogpeppe hi.  Would you be available for hopefully a quick G+ in about 4 minutes (top of hour) to resolve lingering confusion about how to proceed with the annotations changes from frankban and Makyo?  We could meet in http://tinyurl.com/guichat
<benji> fwereade: I assume so.
<fwereade> gary_poster, sorry, we have a meeting coming right up -- how about 30 minutes later?
<gary_poster> fwereade, we have a meeting then. :-) 60 minutes later instead, at the top of the next hour?
<fwereade> gary_poster, sgtm if it sgt rogpeppe
<gary_poster> thanks fwereade
<benji> TheMue: does go have thread-locals?  That is where I would stash slightly-less-than-global globals.
<TheMue> benji: One moment, meeting.
<mgz> hm, so I don't really need to be in ~gophers, and can land things on juju-core and goose regardless,
<mgz> but now can't triage goose bugs for instance, as ~gophers still owns the project
<gary_poster> ~gophers was deactivated entirely, apparently
<dimitern> i'm still not clear why all that was even needed? changing gophers and all
<gary_poster> rogpeppe, are you up for a call in 13 minutes?
<gary_poster> 12 :-)
<rogpeppe> gary_poster: yup
<gary_poster> cool, thanks
<gary_poster> fwereade, rogpeppe, frankban, Makyo call in http://tinyurl.com/guichat in 2 minutes.  I'm there already :-)
<gary_poster> I will have to bow out early; I'm taking the role of facilitator
<mramm> gary_poster: mind if I poke my head in?
<gary_poster> mramm, no, please do
<gary_poster> fwereade, rogpeppe, the conversation is so scintillating in our chat room that I'm sure you want to join in...
<rogpeppe> fwereade: how about something like this? http://paste.ubuntu.com/5610973/
<rogpeppe> fwereade: except... what to call "EntityName"?
<fwereade> rogpeppe, I'm not quite sure about that Entity in there
<fwereade> rogpeppe, I would honestly like to keep them separate
<fwereade> rogpeppe, Refresh is only relevant to AuthEntity anyway, right?
<rogpeppe> fwereade: ok, so you can ask for an annotator by name, but you can't ask the name of the annotator when you've got it?
<Makyo> fwereade, Service has Refresh as well, correct?
<rogpeppe> fwereade: isn't Refresh relevant to all state entities with a doc?
<fwereade> rogpeppe, I am saying that having independent *Name methods that happen to return the same string is a price I consider worth paying to separate these bits of functionality
<fwereade> rogpeppe, Refresh is not relevant to any Annotator AFAICT
<rogpeppe> fwereade: Machine is an Annotator
<rogpeppe> fwereade: and has Refresh
<fwereade> rogpeppe, the fact that most annotators can be refreshed is not the point
<fwereade> rogpeppe, and files can be closed as well as written
<rogpeppe> fwereade: so you'd return a simple Annotator from AnnotatorEntity?
<fwereade> rogpeppe, I think so -- maybe I'd even just call it Annotator(string) (Annotator, error)
<rogpeppe> fwereade: and document that the syntax of the name is... what?
<fwereade> rogpeppe, specific to annotations
<fwereade> rogpeppe, by happy coincidence it is the same as that used for corresponding objects that can log in
<rogpeppe> fwereade: duplicate the documentation?
<fwereade> rogpeppe, roughly speaking, but the sets of relevant entities are different so it won't be *exactly* the same
<rogpeppe> fwereade: this better? http://paste.ubuntu.com/5611029/
<fwereade> rogpeppe, sorry
<fwereade> rogpeppe, yeah, I think so
<rogpeppe> fwereade: promise? :-)
<fwereade> rogpeppe, I'm pretty sure that fits what I've been after since the beginning
<fwereade> rogpeppe, if I'm being inconsistent then I should absolutely be called on it, but I have been trying not to be
<rogpeppe> fwereade: so, the plan is to leave EntityName around as it currently is, rather than changing it everywhere, but would you prefer we retro-fixed it to... something else?
<rogpeppe> fwereade: i.e. have you got a better name for it?
<fwereade> rogpeppe, hmm, I didn't see an EntityName in the things you suggested
<rogpeppe> fwereade: yup, it's not needed by the things that use those interfaces, as far as i could make out
<fwereade> rogpeppe, I had kinda assumed that was orthogonal, and that my views on that specific name had been made clear
<rogpeppe> fwereade: yeah, i think it's orthogonal, and can be treated as a separate issue to be fixed independently
<fwereade> rogpeppe, ok, fair enough, I'm very happy to see the separate interfaces and can live with a name I don't like
<rogpeppe> fwereade: if you think of a sensible name, the change should be quite simple when we decide to do it.
<rogpeppe> fwereade: one thought we had: "ExternalName" (i'm not that keen but just throwing it out there)
<fwereade> rogpeppe, I still think the interfaces would be *better* if they included *Name methods but as I said I can live with EntityName
<fwereade> gents, I need to come back later to talk with thumper, so I'm knocking off early in the interest of having a bit of an evening before then
<rogpeppe> fwereade: okeydokey
<TheRealMue> fwereade: ping
<TheRealMue> fwereade: Have to step out but will ping you again later. ;)
<rogpeppe> mgz: have you used bzr pipelines?
<mgz> rogpeppe: I have, but abentley is the real expert
<rogpeppe> mgz: ah. i'm hoping they'll "just work" with cobzr, but i wanted to check before potentially destroying my repo
<mgz> they'll just work, cobzr is just creaky porcelain around some core bzr features
<rogpeppe> mgz: that's what i'm thinking
<rogpeppe> mgz: if i've got a few branches that already form a logical pipeline, can i link them together, or will i have to create new branches and merge my existing ones into them?
<mgz> I've never tried to do that
<rogpeppe> mgz: (i find myself in a position where the functionality that pipes provide looks like it'll be really useful)
<mgz> you probably need to create the pipes, but should be able to just pull in the existing branches
<rogpeppe> mgz: ok, i'll just start from scratch and merge. shouldn't be a problem.
<rogpeppe> mgz: yeah
<mgz> well, you shouldn't need to merge is the point, just pull in the revs
<abentley> rogpeppe, mgz: in fact, you can just use the existing branches.  bzr add-pipe will accept a location (i.e. the path to a branch) as its input.
<rogpeppe> abentley: thanks!
<rogpeppe> abentley: that didn't actually work for me - i got this error: 'bzr: ERROR: A control directory already exists: "file:///home/rog/src/go/src/launchpad.net/juju-core/.bzr/cobzr/242-allwatcher-handle/".'
<rogpeppe> abentley: but i just created the branch and merged in. no problem.
<abentley> rogpeppe: What was the command you typed?
<rogpeppe> abentley: bzr add-pipe name-of-existing-branch
<abentley> rogpeppe: locations work, names don't.  "bzr add-pipe ~/src/go/src/launchpad.net/juju-core/.bzr/cobzr/242-allwatcher-handle" should have worked.
<abentley> rogpeppe: Or whatever equivalent path.
<rogpeppe> abentley: ah, ok. i didn't realise there was a difference.
<rogpeppe> abentley: it's no matter any more anyway
<abentley> rogpeppe: cool.
<rogpeppe> abentley: first pipeline operations completed successfully - one pump, one interactive back merge. very useful!
<abentley> rogpeppe: great.
<rogpeppe> gary_poster, dimitern, fwereade, anyone else: here are the next two branches in the allwatcher task: https://codereview.appspot.com/7727044/ https://codereview.appspot.com/7727045/
<gary_poster> yay rogpeppe thanks!  how many left after that?
<rogpeppe> gary_poster: probably three or four
<gary_poster> rogpeppe, ok cool, thanks.  I'll look forward to reviewing soon.
<rogpeppe> gary_poster: thanks a lot
<rogpeppe> gary_poster: bzr pipes are going to make my life a *lot* easier here, i'm hoping...
<gary_poster> lol, I bet
<rogpeppe> right, that's me for the day
<rogpeppe> see y'all tomorrow
<thumper> fwereade: morning
<thumper> fwereade: around?
<fwereade> thumper, heyhey
<fwereade> thumper, little bit caught up, 5 mins maybe?
<thumper> fwereade: I'm just reading your email response
<thumper> sure
<fwereade> thumper, cool
<benji> this is the second time I have gotten this error from "lbox propose":  error: ERROR: Failed to update bug task: Server returned 400 and body: milestone_link: Constraint not satisfied.
<fwereade> benji, I haven't seen that one, but it's rare for me to remember to attach bugs ;)
<benji> :)
<fwereade> thumper, right, I think I've unfucked my logic
<fwereade> thumper, although that was what I thought when I proposed originally, so...
<fwereade> thumper, anyway
<thumper> fwereade: so hangout?
<fwereade> thumper, free for a G+?
<thumper> aye
 * fwereade starts one
<thumper> ah poos
<thumper> hi davecheney
<thumper> davecheney: you are not related to that first msg
<thumper> :)
<davecheney> morning
<davecheney> i could be a poo
<davecheney> you never know
 * m_3 definitely a pooh
<m_3> davecheney: I'm just crunching on the queue... available to g- at your convenience
<m_3> the charm queue is quite sisyphean this week
<davecheney> m_3: eep
<davecheney> ok, will send you a hangout in the next 30 mins
<m_3> cool... good for another three hours or so
<davecheney> m_3: https://plus.google.com/hangouts/_/be405f06d5b6f273d4eb071ded2408bf336e502a?authuser=1&hl=en
<m_3> ack
<m_3> omw
#juju-dev 2013-03-14
<davecheney> m_3: ec2 is having a less pissy day
<davecheney> precise-ec2-charm-alice-irc passed !
<m_3> davecheney: nice!
<davecheney> two for two
<davecheney> going to try with juju-core 1.9.12
<davecheney> m_3: and now ec2 is having a sad period
<davecheney> 3 failures in a row
<davecheney> m_3: aww crap
<davecheney>         status: error
<davecheney>         status-info: 'hook failed: "install"'
<davecheney> + grep -v -q error
<davecheney> ^ not spotting the problem
 * davecheney steps out for lunch
<m_3> davecheney: ack
<m_3> davecheney: gonna go get dinner
<m_3> davecheney: see you on the flipside
<thumper> davecheney: review done
<thumper> davecheney: I think the version tests should move into the cmd package
<thumper> that's about it
<davecheney> thumper: thanks mate
<davecheney> will take another crack now
 * thumper is done for today
<thumper> ciao
<wallyworld_> davecheney: hi, i'm half way through fixing the raring tests and i noticed you have the card assigned to you. have you started on it yet?
<davecheney> nah, grab it
<wallyworld_> awesome thanks
<davecheney> 4
<wallyworld_> davecheney: it turns out the git issues can be fixed by setting env vars in setup test. but if we ever deploy on that later version, we will need to ensure the git config is correctly written
<davecheney> wallyworld_: i could not figure out what was wrong with tims' setup
<wallyworld_> also, the later version of git in raring has different error messages, and we are doing exact string matches in the tests
<davecheney> i tried in a fresh raring vm
<davecheney> and couldn't replicate the problem
<davecheney> wallyworld_: ahh, i have a fix for that
<wallyworld_> did you have a .gitconfig already?
 * davecheney searches
<davecheney> not the .gitconfig one, i couldn't replicate the problem
<wallyworld_> i just changed the tests to use a regexp
<davecheney> that'll do
<davecheney> that is what I did too
<wallyworld_> i have never used git before, and i can replicate it
<davecheney> go with your version
<davecheney> mine didn't work :)
<wallyworld_> i think git has a few ways of knowing your email address
<davecheney> wallyworld_: thing is
<wallyworld_> and since neither tim nor i have any of those, the tests fail
<davecheney> in a fresh raring vm
<davecheney> there was no problem
<wallyworld_> hmmm, ok
<davecheney> i cannot explain what thumper did to his machine
<davecheney> and so was unable to fix it
<wallyworld_> i did a dist upgrade from quantal
<wallyworld_> anyways, adding set envs in the test setup fixes it
<davecheney> brave man
<wallyworld_> i like to live on the edge
<dimitern> morning all
<fwereade> dimitern, mgz, jtv1: mornings
<dimitern> fwereade: hey
<fwereade> dimitern, thanks for the 1152717 review, I've loosened it up a little with a more detailed explanation; about to repropose, would be grateful if you would think of ways to break it ;p
<jtv1> Hi fwereade
<fwereade> dimitern, how's the upgrade-charm stuff going?
<dimitern> fwereade: will do :)
 * jtv keeps misreading that nick as "dim intern"  :(
<fwereade> haha
<dimitern> fwereade: i'll finish the dreaded 010-upgade-charm-... branch today I hope - still fiddling with tests
<dimitern> jtv: lol we're the first definitely
<dimitern> jtv: s/we/you/
<jtv> I find that hard to believe!
<jtv> Maybe I'm just the  first who dared to say it.
<dimitern> :D
<fwereade> dimitern, cool -- if the morning goes well I might be free to pair after lunch, if that would be useful to you
 * jtv may be going just a little bit dyslexic.
<dimitern> fwereade: that will be best and fastest actually
<fwereade> dimitern, ok, great
<dimitern> fwereade: although it's still a bit of a mess at home, so maybe I can come to yours?
<fwereade> dimitern, surely, cath and laura will be out most of the afternoon I think
<dimitern> fwereade: sweet, just give me a shout then - we can probably have lunch @cuba or something as well
<mgz> ...seems like a long way to go for lunch...
<fwereade> dimitern, when's your standup again?
<fwereade> mgz, ;p
<dimitern> fwereade: 12:30
<fwereade> dimitern, ok, lunch at 1:30 then?
<dimitern> mgz: oh, it's maltaspace you know - everything is but a wormhole away :D
<dimitern> fwereade: sgtm
<fwereade> dimitern, reproposed https://codereview.appspot.com/7591044
<dimitern> fwereade: already looking
<fwereade> <3
<dimitern> jtv: sorry about my bitching about using lbox propose - i didn't realise you forked the maas provider and you're working on it separately
<dimitern> fwereade: so if we have the same situation s.doc.UnitCount ==1 in local (stale) state, then $gt will fail and we'll retry
<fwereade> dimitern, yep
<dimitern> fwereade: looks ok to me
<fwereade> dimitern, cool
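The retry pattern under review can be sketched without mongo (all names here are hypothetical stand-ins, not juju's types): a transaction asserts against the locally cached unit count, and when the assertion fails against the live document, the stale local copy is refreshed and the operation retried, much like an mgo/txn assert such as `{"unitcount": {"$gt": 1}}` aborting and forcing a retry.

```go
package main

import (
	"errors"
	"fmt"
)

var errAborted = errors.New("transaction aborted: assertion failed")

// svc stands in for a service document.
type svc struct {
	unitCount int
	removed   bool
}

// runTxn applies op only if the live count still matches what the
// caller asserted - a toy version of a transaction assertion.
func runTxn(live *svc, asserted int, op func(*svc)) error {
	if live.unitCount != asserted {
		return errAborted
	}
	op(live)
	return nil
}

// remove retries once after refreshing a stale cached count.
func remove(live *svc, cached int) bool {
	for i := 0; i < 2; i++ {
		err := runTxn(live, cached, func(s *svc) { s.removed = true })
		if err == nil {
			return true
		}
		cached = live.unitCount // refresh the stale local copy and retry
	}
	return false
}

func main() {
	live := &svc{unitCount: 2}
	// Stale cache says 1 unit: the first attempt aborts, the retry
	// with refreshed state succeeds.
	fmt.Println(remove(live, 1))
}
```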
<rogpeppe> mornin' all
<jam> so, does anyone know what writes "/var/lib/juju/agents/bootstrap/agent.conf" ? I'm playing with the Windows port, and the bootstrap node fails to find that file.
<jam> I'm guessing it is a bootstrap => cloud-init issue.
<jam> fwereade: as part of the "pass series into environ", is Arch on the table for poking at? As it seems to only use the juju client Arch (amd64/i386) but I'm not sure that actually makes sense.
<rogpeppe> jam: yes, the cloudinit code should write that file
<jam> (we only have 64-bit tools available, so why not start a 64-bit instance always)
<fwereade> jam, arch will be incorporated alongside constraints
<rogpeppe> fwereade: can i presume that https://codereview.appspot.com/7809043 was instigated by you?
<fwereade> rogpeppe, well, we kinda agreed the business value was insufficient to make it a focus now, but thumper only remembered that when he'd done it
<rogpeppe> fwereade: the problem is that it means that it's a right pain if you want to bootstrap from Go code now
<rogpeppe> fwereade: for instance it breaks builddb
<fwereade> rogpeppe, and I'm +1 on little steps that make environs clearer and less weirdly coupled, so I think it's still a good move
<rogpeppe> fwereade: that was the main consideration behind the current design
<fwereade> rogpeppe, hmm
<fwereade> rogpeppe, builddb can't create a cert independently?
<rogpeppe> fwereade: it *could*, but then it would be duplicating the code in cmd/juju/bootstrap
<rogpeppe> fwereade: if nothing else, the code to generate certs appropriately for an environment should be factored out into environs, so it's at least available to other Go code that wants to bootstrap.
<rogpeppe> fwereade: FWIW i hammered this out for ages with gustavo
<fwereade> rogpeppe, I agree the cert generation should not be inside cmd/juju, but I think it's the appropriate place to invoke it
<jam> rogpeppe: so the environs.agent code appears to use "path.Join()" to do these sorts of operations, which means it is trying to shell out to stuff like "bootstrap\agents.conf" rather than "bootstrap/agents.conf"
<fwereade> rogpeppe, so keeping the functionality in environs sgtm, but I remain -1 on keeping it *inside* environs.Bootstrap
<rogpeppe> fwereade: yeah, i'm not intrinsically opposed to needing two steps to bootstrap (i think that was my favoured choice originally)
<jam> so.... sigh, lots of code that would act differently on Windows for bad reasons.
<rogpeppe> jam: you mean filepath.Join ?
<jtv> dimitern: I guessed.  :)  We don't think of it as forking, really, more as a feature branch.  Eventually we'll submit the whole thing for review.  (We'll probably chop it into smaller pieces for easier reviewing)
<jam> rogpeppe: well explicitly path.Join
<jam> maybe that one is ok?
<fwereade> rogpeppe, incidentally, do you recall why builddb lacks tests?
<rogpeppe> jam: path.Join should never produce \
<rogpeppe> fwereade: because it's a quick hack command that gustavo wrote
<dimitern> jtv: :) I see
<fwereade> rogpeppe, that's an important part of our infrastructure?
 * fwereade sighs gently
<rogpeppe> jam: i was originally trying to be very careful about path hygiene (path vs filepath) but got told that it wasn't worth it - we'd need another pass later, so, yes, everything will break under windows.
<rogpeppe> fwereade: it is?
<rogpeppe> fwereade: it just compiles mongo.
<rogpeppe> fwereade: tbh, it could be a regular charm.
<fwereade> rogpeppe, doesn't that qualify?
<fwereade> rogpeppe, true
<rogpeppe> fwereade: the only real test is running it. there's no input and only one possible output.
<rogpeppe> jam: your best bet is to grep for all occurrences of (path|filepath)\.Join and inspect on a case-by-case basis.
<fwereade> rogpeppe, weeeell... that'd be a test for the charm, right?
<jam> rogpeppe: well as a start, I need to figure out if this is exactly why it is failing
<rogpeppe> fwereade: yup. but it's not in the charm store. when would that test ever have run?
<rogpeppe> jam: and it's quite an expensive test (n hours on an ec2 instance)
<jam> rogpeppe: I think you meant fwereade :)
<rogpeppe> jam: i did :-)
<fwereade> rogpeppe, surely there are plenty of things that could be tested without actually running the charm? eg that the code compiles, that the charm is actually a valid charm, etc?
<rogpeppe> fwereade: possibly. i think there are more important considerations. if that command fails, it can be dealt with elsewhere.
<rogpeppe> jam: what failure are you currently seeing?
<fwereade> rogpeppe, if it's not important enough to test it's probably not important enough to exist then?
<rogpeppe> fwereade: probably
<rogpeppe> fwereade: i think gustavo wrote it as more of a proof of concept than anything else.
<jam> rogpeppe: looks like agent.tools is ok, but trivial.ShQuote is not
<jam> > '\var\lib\juju\agents\machine-0\agent.conf'
<jam> not sure what file that will actually create :)
<fwereade> rogpeppe, if you're reviewing that branch, maybe tell thumper to correspond with davecheney and figure out if it's ok to drop it?
<rogpeppe> fwereade: ok
<fwereade> rogpeppe, ideally mongossl ends up in cloudarchive anyway
<jam> nm, not ShQuote, but c.Dir()
<rogpeppe> jam: that's not a ShQuote issue.
<jam> maybe??
<jam> rogpeppe: that is using path.Join() as near as I can tell
<rogpeppe> jam:
<rogpeppe> func (c *Conf) File(name string) string {
<rogpeppe> 	return filepath.Join(c.Dir(), name)
<rogpeppe> }
<rogpeppe> jam: to be honest, i think if we changed all filepath imports to use path, everything would probably work
<jam> rogpeppe: I haven't found a 'filepath' yet
<jam> I can see that c.Dir() is returning '/var/lib'...
<rogpeppe> jam: in environs/agent/agent.go
<jam> but by the time it gets put together into WriteCommands it is \var...
<jam> right, I guess that code is mixing "where will I read" with "where will I write" and not paying attention to the fact that "where will I write" is running on another OS
<jam> but as you say, / works on Windows anyway, and for now, the 'where will I read' is always *nix anyway
<jam> well prob always Ubuntu even
<rogpeppe> jam: for the time being, yes
<jam> rogpeppe: so changing that looks good so far, I have to wait for the instance to start up
<dimitern> how can I get the caller of a function at run time?
<jam> rogpeppe: and step 2, mongo doesn't start because the upstart file that gets written was using 'filepath' as well. :)
<rogpeppe> jam: just change "filepath" to "path" throughout the code
<dimitern> rogpeppe: ^^ ?
<jam> yeah, that's what I'm thinking as well
<rogpeppe> jam: there will be one or two places where it uses filepath.Walk, and you'll have to fix those, but i think everything else should probably just work
<jam> dimitern: we used that in the Hooks code for goose
<rogpeppe> dimitern: ?
<jam> dimitern: http://golang.org/pkg/runtime/
<jam> runtime.Caller()
<dimitern> jam: oh, cool 10x
<rogpeppe> ah
<rogpeppe> dimitern: i use a little helper function that gives me the source locations of all callers, all formatted on a single line
<dimitern> rogpeppe: great! can I have this pls?
<rogpeppe> dimitern: yeah, one mo
<dimitern> I'm getting an obscure panic and trying to see which call caused it
<rogpeppe> dimitern: import "code.google.com/p/rog-go/exp/runtime/debug"
<rogpeppe> // Callers returns the stack trace of the goroutine that called it,
<rogpeppe> // starting n entries above the caller of Callers, as a space-separated list
<rogpeppe> // of filename:line-number pairs with no new lines.
<rogpeppe> func Callers(n, max int) []byte {
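The doc comment and signature pasted above come from rogpeppe's rog-go package; a minimal self-contained sketch of such a helper (returning a string rather than `[]byte`, and simplified) might look like:

```go
package main

import (
	"fmt"
	"runtime"
)

// callers returns up to max stack entries, starting n frames above
// its caller, as a single space-separated line of file:line pairs.
func callers(n, max int) string {
	var s string
	for i := n + 1; i < n+1+max; i++ {
		// skip i = n+1 means frame 0 is the caller of callers itself.
		_, file, line, ok := runtime.Caller(i)
		if !ok {
			break
		}
		if s != "" {
			s += " "
		}
		s += fmt.Sprintf("%s:%d", file, line)
	}
	return s
}

func inner() string { return callers(0, 4) }

func main() {
	// Prints something like: /path/to/main.go:28 /path/to/main.go:32 ...
	fmt.Println(inner())
}
```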
<dimitern> rogpeppe: tyvm
<rogpeppe> dimitern: BTW doesn't the panic give you a stack trace?
<dimitern> rogpeppe: it does, but it's in another goroutine and it's not helpful
<rogpeppe> dimitern: ah yes
<rogpeppe> dimitern: don't you get *all* stack traces?
<dimitern> rogpeppe: I do but it's still confusing - there are like 80 goroutines
<rogpeppe> dimitern: what i tend to do in that case is search for functions i'm interested in
<rogpeppe> dimitern: in acme (you can probably do something similar in your editor) i do ,x/(.+\n)*/v/function-i'm-interested-in/d
<rogpeppe> dimitern: which removes all stack traces that don't mention the given function name
<dimitern> rogpeppe: aha, I can try this
<dimitern> rogpeppe: actually it's still weird - it's something to do with the watchers - wanna take a look?
<rogpeppe> dimitern: sure - paste away. is this against trunk?
<dimitern> rogpeppe: no, against the branch I'm working on, but I merge trunk regularly (just did this morning) - http://paste.ubuntu.com/5613199/
<rogpeppe> dimitern: can you reproduce the problem in trunk?
<dimitern> rogpeppe: no
<dimitern> rogpeppe: but then again the tests in the uniter/filter will be different
<dimitern> anyway I still don't get why "watcher was stopped cleanly" is a panic?
<rogpeppe> dimitern: if you get eof from a watcher, it should be because it stopped because of an error.
<dimitern> rogpeppe: istm there's no way to stop a watcher without a panic
<rogpeppe> dimitern: the only time there's no error is when the watcher was stopped deliberately
<dimitern> rogpeppe: that is, if you then call musterr
<dimitern> and I still don't get how this is the only place where err == nil is a panic
<rogpeppe> dimitern: musterr is used when we know that the watcher is never stopped.
<rogpeppe> dimitern: can you push your branch?
<dimitern> rogpeppe: ok
<dimitern> rogpeppe: lp:~dimitern/juju-core/010-uniter-handle-config-upgrades
<rogpeppe> dimitern: from the look of those stack traces, *loads* of watchers have been stopped unexpectedly.
<rogpeppe> dimitern: (look at all the calls to MustErr that are in progress)
<dimitern> rogpeppe: I saw that, but still no clue what i did wrong
<rogpeppe> dimitern: looks like you might not be cleaning up properly or something
<rogpeppe> dimitern: which test fails?
<dimitern> rogpeppe: not sure, let me run it with -vv
<dimitern> rogpeppe: START: context.go:0: FilterSuite.TearDownTest, just after that is the panic
<rogpeppe> dimitern: ah yes, i just worked that out
<rogpeppe> dimitern: ah, i see the problem
<rogpeppe> dimitern: you're not deferring f.Stop
<dimitern> rogpeppe: I can't see the difference, since it's near the end anyway
<dimitern> rogpeppe: no, what - which file/line?
<rogpeppe> dimitern: well, it fixes your panic
<dimitern> rogpeppe: filter_test.go:300 ?
<rogpeppe> dimitern: no :243
<rogpeppe> dimitern: just after newFilter, in TestConfigEvents
<dimitern> rogpeppe: I'm not, but I'm calling f.Stop() explicitly later
<rogpeppe> dimitern: yes, but you're getting a test failure which means it never gets called
<dimitern> rogpeppe: right!
<dimitern> rogpeppe: I was thinking something like that
<dimitern> rogpeppe: anyway, 10x :)
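The lesson from the panic hunt above, in a self-contained sketch (the `worker` type here is a hypothetical stand-in for the uniter's filter, not juju's code): an explicit `Stop()` at the end of a test body is skipped when an earlier assertion fails, leaving watchers running; `defer` runs the cleanup either way.

```go
package main

import "fmt"

// worker stands in for something that must be stopped to release
// its watchers.
type worker struct{ stopped bool }

func (w *worker) Stop() error { w.stopped = true; return nil }

// runTest simulates a test body that fails partway through. With
// useDefer, Stop is deferred right after creation and runs even on
// failure; without it, the explicit end-of-test Stop is never reached.
func runTest(useDefer bool) (cleaned bool) {
	w := &worker{}
	defer func() {
		recover() // swallow the simulated test failure
		cleaned = w.stopped
	}()
	if useDefer {
		defer w.Stop() // cleanup runs even if the test fails
	}
	panic("simulated test failure before the explicit Stop call")
}

func main() {
	fmt.Println(runTest(false), runTest(true))
}
```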
<rogpeppe> dimitern: np. BTW, i'm not entirely sure why we get the error we do (i'd have thought that we'd get an error when the state is shut down underneath us)
<dimitern> rogpeppe: yeah, the fix is easy, but understanding the error is hard :)
<rogpeppe> dimitern: feel free to dig in and improve it. you'll get a better understanding of what's going on too if you do :-)
<dimitern> rogpeppe: once I get it I will :)
<rogpeppe> dimitern: perhaps try to write a little piece of code that reproduces the problem
<dimitern> rogpeppe: yeah, i have something in mind already
<dimitern> fwereade: can you take a look please - https://codereview.appspot.com/7425044 - the tests now pass, but I'm surely missing something?
<fwereade> dimitern, looking
<fwereade> dimitern, looking pretty good, sent some more comments -- should be able to polish it off today
<dimitern> fwereade: great to hear!
<dimitern> mgz: standup?
<mgz> ta
<rogpeppe> system hung up on me
<rogpeppe> fwereade: any chance you could have a look at these CLs? https://codereview.appspot.com/7727044/ https://codereview.appspot.com/7727045/
<rogpeppe> (or anyone else, for that matter (thanks already dimitern!))
<dimitern> rogpeppe: if you stop the watcher, won't you get this when trying to read on the channel?
<rogpeppe> dimitern: get what?
<dimitern> rogpeppe: https://codereview.appspot.com/7727045/diff/5001/state/megawatcher.go#newcode105
<dimitern> rogpeppe: detect the watcher was stopped
<rogpeppe> dimitern: sorry, i'm not sure i understand the question
<dimitern> rogpeppe: I mean having reply chan struct{} instead
<fwereade> rogpeppe, looking
<dimitern> rogpeppe: you mean having a bool instead gives you the possibility to send 2 distinct msgs: watcher closed and ?
<rogpeppe> dimitern: the allWatcher replies with either true (request accepted) or false (the StateWatcher has been stopped)
<dimitern> rogpeppe: ah, I see
<rogpeppe> dimitern: actually true means "request has been processed, and now contains the reply"
<dimitern> rogpeppe: thanks for clarifying this
<rogpeppe> dimitern: np. i'll try to clarify the comments appropriately.
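The protocol rogpeppe describes can be sketched generically (these are illustrative types, not juju's actual allWatcher code): a request carries a `chan bool`, and the server replies `true` once the request has been processed or `false` when it has been stopped.

```go
package main

import "fmt"

// request carries its own reply channel: true means "processed, the
// reply payload is now filled in"; false means "the watcher stopped".
type request struct {
	reply chan bool
}

func serve(reqs chan *request, stop chan struct{}) {
	for {
		select {
		case r := <-reqs:
			// ... fill in the reply payload here ...
			r.reply <- true
		case <-stop:
			// Refuse any requests still queued, then exit.
			for {
				select {
				case r := <-reqs:
					r.reply <- false
				default:
					return
				}
			}
		}
	}
}

func main() {
	reqs := make(chan *request)
	stop := make(chan struct{})
	go serve(reqs, stop)

	r := &request{reply: make(chan bool)}
	reqs <- r
	fmt.Println(<-r.reply) // true: request was processed

	close(stop) // later requests would be answered with false
}
```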
<rogpeppe> fwereade: thanks
<fwereade> rogpeppe, sorry I didn't finish those yesterday -- both LGTM
<rogpeppe> fwereade: cool, thanks
 * dimitern lunch
<fwereade> rogpeppe, if you have a mo I have https://codereview.appspot.com/7591044/ and https://codereview.appspot.com/7715046/ out for review
 * fwereade lunch
<rogpeppe> fwereade: ok, thanks, will have a look in a bit
<benji> When you guys get a chance I have a small branch up for review: https://codereview.appspot.com/7554046/
<rogpeppe> fwereade, mramm: hangout?
<mramm> rogpeppe: sorry, did not wake up for the alarm
<fwereade> mramm, we're pretty much done actually
<rogpeppe> mramm: we're just about done, but still there if you want to join
<mramm> seems like you finished before I got there... :)
<fwereade> rogpeppe, https://codereview.appspot.com/7755045 might be trivial -- it's certainly small
<rogpeppe> fwereade: not trivial, i think - i'm having to think a bit
<fwereade> rogpeppe, ok, definitely not a priority then, I'll leave it lying around unreferenced in kanban and just drop it in a few days unless someone really likes it
<rogpeppe> fwereade: i *think* it's right, but there might have been a good reason i didn't do that before (rather than just not thinking of it)
<fwereade> rogpeppe, one thing that crosses my mind is that the highest-version behaviour in environs is incompatible -- the highest version (as deployed) will (surely?) be immediately downgraded to agent-version once an agent starts running
<rogpeppe> fwereade: i don't think so
<rogpeppe> fwereade: note line 177
<fwereade> rogpeppe, not following -- `binary.Number = vers`?
<rogpeppe> fwereade: yeah
<fwereade> rogpeppe, I'm probably being thick, please explain
<rogpeppe> fwereade: ah, i might have been misreading your remark
<fwereade> rogpeppe, ISTM that environs gets the highest compatible tools and deploys with those
<fwereade> rogpeppe, but that the agent that then runs checks for the agent-version in env config
<rogpeppe> fwereade: i can't remember where agent-version gets set originally.
<fwereade> rogpeppe, and up/down/whatever-grades to that immediately
<fwereade> rogpeppe, hmm, good question
<rogpeppe> fwereade: in general, if agent version is set, we want to use that version.
<fwereade> rogpeppe, tools we bootstrap with, or it defaults to version.Current
<fwereade> rogpeppe, I think :)
<rogpeppe> fwereade: agent-version is important - that's how we have any control over what version the agents are running
<fwereade> rogpeppe, indeed
<fwereade> rogpeppe, I think environs should be a bit more careful with it
<rogpeppe> fwereade: i'm thinking that the tools selection logic ... yeah
<fwereade> rogpeppe, but I think thumper's heading in that direction anyway
<rogpeppe> fwereade: AgentVersion defaults to the current version, BTW
<fwereade> rogpeppe, I think I said that
 * rogpeppe continues to study the logic
<rogpeppe> fwereade: i *think* the provisioner should pass the current agent version into StartInstance
<rogpeppe> fwereade: (optionally)
<fwereade> rogpeppe, +-1, I'm not sure any of that stuff is the provisioner's business
<fwereade> rogpeppe, if anything should know the env config it's the env ;)
<rogpeppe> fwereade: hmm, good point, the environ should already *know* the agent version!
<rogpeppe> fwereade: but currently it can't tell if it's been set explicitly
<rogpeppe> fwereade: because i think that only if it's not set explicitly should the environ pass HighestVersion in the FindTools flags
<rogpeppe> fwereade: last thing you saw?
<fwereade> rogpeppe, I said "although I'm now less inclined to take a hard line on that -- if the provisioner were to be running a locatoralike, that would probably be best"
<rogpeppe> fwereade: ah, last thing i saw was:
<rogpeppe> [15:07:15] <fwereade> rogpeppe, if anything should know the env config it's the env ;)
<rogpeppe> fwereade: last thing you saw from me?
<fwereade> rogpeppe, in between the two I said " just like the state/pi info we pass in"
<fwereade> rogpeppe, nothing from you since "(optionally)"
<rogpeppe> [15:07:50] <rogpeppe> fwereade: hmm, good point, the environ should already *know* the agent version!
<rogpeppe> [15:08:05] --> teknico_ has joined this channel (~quassel@93-42-34-107.ip84.fastwebnet.it).
<rogpeppe> [15:08:24] <rogpeppe> fwereade: but currently it can't tell if it's been set explicitly
<rogpeppe> [15:09:19] <rogpeppe> fwereade: because i think that only if it's not set explicitly should the environ pass HighestVersion in the FindTools flags
<fwereade> rogpeppe, hmm, yeah, maybe
<fwereade> rogpeppe, I'm not totally wild about the "highest" behaviour because of the conflict with what the agent upgrader does
<fwereade> rogpeppe, "highest" STM like a good default for what to set it to when we do upgrade-juju
<rogpeppe> fwereade: i think highest is fine when deploying without a specified version
<rogpeppe> fwereade: if the agent version is not set, that is
<fwereade> rogpeppe, hmm, ok, if there's nothing in agent-version at *all*, that could make sense
<fwereade> rogpeppe, but is there ever such a situation?
<rogpeppe> fwereade: that is almost always the case currently
<fwereade> rogpeppe, I think we set one up at bootstrap time if none is set
<rogpeppe> fwereade: i think
<fwereade> rogpeppe, environs/config.go:179
<fwereade> rogpeppe, seems strange not to aim to have everything on the same version in general
<rogpeppe> fwereade: yeah, i'm not sure about that.
<rogpeppe> fwereade: the main case for not doing so is when bootstrapping
<fwereade> rogpeppe, a random smorgasbord of versions, until one is set explicitly, seems more likely to confuse and upset than anything else
<fwereade> rogpeppe, I'd be fine with bootstrapping to the highest available when not otherwise set, I think
<rogpeppe> fwereade: i think if you bootstrap, you should get the latest version, regardless of the client
<fwereade> rogpeppe, ok, sgtm
<rogpeppe> fwereade: so this brings us back to Bootstrap needing to know if the agent version is explicitly set or not
<rogpeppe> fwereade: i *think* i'd be ok if config.Config never looked at version.Current. then all agent-version setting is done explicitly where necessary.
<rogpeppe> fwereade: and then perhaps upgrader would just do nothing if agent-version was not set
<rogpeppe> fwereade: although that should never happen in practice.
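The policy converged on above can be sketched as follows (hypothetical names; real version ordering is semantic, the lexical compare here is only a stand-in that happens to work for the sample values): an explicitly set agent-version always wins, and only an unset version, which should only occur at bootstrap, falls back to the highest available tools.

```go
package main

import "fmt"

// pickVersion: an explicitly set agent-version wins; otherwise fall
// back to the highest available tools version.
func pickVersion(agentVersion string, available []string) string {
	if agentVersion != "" {
		return agentVersion
	}
	highest := ""
	for _, v := range available {
		if v > highest { // lexical compare stands in for real version ordering
			highest = v
		}
	}
	return highest
}

func main() {
	tools := []string{"1.9.10", "1.9.12", "1.9.11"}
	fmt.Println(pickVersion("1.9.11", tools)) // explicit: 1.9.11
	fmt.Println(pickVersion("", tools))       // unset: highest available
}
```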
<rogpeppe> fwereade: gone again?
<fwereade> rogpeppe, sorry, only half looking at my own screen
<rogpeppe> fwereade: np
<fwereade> rogpeppe, yeah, I think I'm ok with that
<fwereade> rogpeppe, I agree that the upgrader should never see a missing agent-version
<rogpeppe> fwereade: BTW i'd like to know your thoughts on https://codereview.appspot.com/7554046/diff/1/state/apiserver/api_test.go#newcode404
 * fwereade looks
<fwereade> rogpeppe, heh, tricky
<fwereade> rogpeppe, no, wait, we have a state.State
<fwereade> rogpeppe, we can implement the test by adding via state and trying to remove via api
<fwereade> rogpeppe, yesno?
<fwereade> rogpeppe, I'm -1 on dropping those tests, I think they will become very meaningful again when we start doing the internal API
<rogpeppe> fwereade: i don't want to drop all those tests, just the Client ones.
<rogpeppe> fwereade: and if we ever had any other logic behind them (say different users could do different things), we'd need more again.
<rogpeppe> fwereade: i'd drop all the Client tests except one
<fwereade> rogpeppe, I think there's still value in checking that all the client methods work the same
<rogpeppe> fwereade: ok. but we really are just testing that single "if" statement in every one of them, because they're all gated through that method.
<fwereade> rogpeppe, even if it's obvious from the implementation that they do, that only applies today and not in the face of unknowable future changes
<rogpeppe> fwereade: if those changes come, we'll be needing to change the tests anyway.
<fwereade> rogpeppe, I have a great fondness for the refactor-with-axe, run-tests, see-what-failed approach
<fwereade> rogpeppe, if we keep those tests then when I do that I can at least see that the bits of implementation that are important enough to care about still act as they did before
<rogpeppe> fwereade: FWIW dropping all but one of those tests shaves 8s off the api test runtime
<fwereade> rogpeppe, that is to me an argument for making them faster, not for just dropping them ;p
<rogpeppe> fwereade: do you see any way we can make them faster?
<rogpeppe> fwereade: fundamentally we are pretty slow at mutating state
<rogpeppe> fwereade: perhaps there's a magic switch we haven't given to mongod
<fwereade> rogpeppe, tbh, no, nothing that isn't deeply hackish
<fwereade> rogpeppe, (magically poke everything into state at once!)
<rogpeppe> fwereade: i don't think that'll help here.
<rogpeppe> fwereade: we're not spending the time in setUpScenario
<rogpeppe> fwereade: (even if we could do that, of course)
<fwereade> rogpeppe, ah, ok, I thought that was where you'd had problems in the past
<fwereade> rogpeppe, I bet we *could*, but I'm pretty sure it'd be a bad idea
<rogpeppe> fwereade: yeah, that's why i took it out of the loop and made functions return an "undo" function
<rogpeppe> fwereade: if we were doing setUpScenario, the tests would take ages and ages to run
<fwereade> rogpeppe, if the problem is the operations themselves then I don't see a way out then
<rogpeppe> fwereade: indeed. that's why i think that removing most of the tests which aren't actually testing any existing logic is a reasonable way forward.
<fwereade> rogpeppe, I dunno, I've been bitten before by changing stuff and assuming that the lack of test failures indicated smart tests and a good change
<fwereade> rogpeppe, when in fact it was just that nobody had written them in the first place
<rogpeppe> fwereade: to change this behaviour, we'd have to change the Client method - if we put a comment there, i think we'd be ok.
<rogpeppe> fwereade: BTW, if we were running setUpScenario each time, it would add about 1
<rogpeppe> 3 minutes to the test time
<rogpeppe> (that's 3, not 13)
<fwereade> rogpeppe, ouch :/
<fwereade> rogpeppe, still, if we can't test authorization behaviour directly I think we have no choice but to do so indirectly
<fwereade> rogpeppe, and suck up the costs
<rogpeppe> fwereade: actually, we *could* theoretically provide an entry point into rpc that allowed testing access to a particular rpc object without actually calling a method on that object.
<rogpeppe> fwereade: i'd much prefer it if we could make operations on the state faster though.
<rogpeppe> fwereade: so we wouldn't be worrying about this.
<fwereade> rogpeppe, +1 in at least theory, but I'm hoping to hear something about things that hurt real-world performance from dave before I fret too much in that direction
<rogpeppe> fwereade: i guess so. it might take 15 minutes just to add 10000 units to state, but it's probably not that huge a deal
<fwereade> rogpeppe, yeah, we shall see the actual numbers
<rogpeppe> fwereade: well, you can get a best-case scenario by simply calling AddUnit 10000 times on a local state server.
<rogpeppe> fwereade: i suspect it'll be about 10-15 minutes to do that
<fwereade> rogpeppe, I'm about to head off -- if you have a chance to take a look at https://codereview.appspot.com/7715046/ and/or https://codereview.appspot.com/7591044/ I would be most grateful
<rogpeppe> fwereade: ok, will do
<fwereade> rogpeppe, cheers
<fwereade> gn all
<rogpeppe> right, that's me for the day. good night.
<thumper> morning
<hatch> morning thumper
<m_3> thumper: hey... what's the best way to fix `bzr info lp:charms/juju-gui` in-place (without deleting and re-pushing any branches)?
<thumper> m_3: what do you mean by fix?
<m_3> thumper: well the stacking seems screwed up
 * thumper takes a look
<m_3> thumper: and we discussed how to _prevent_ such a thing in the future
<m_3> but I don't think I ever asked how to fix it after the fact
<thumper> are people creating stacked branches locally ? like on their own machines?
<thumper> because that way lies insanity
<m_3> thumper: dunno, I think this is done when people shortcut the promulgation process we discussed last week
<m_3> i.e., to promulgate, they pull the user branch like lp:~juju-gui/charms/precise/juju-gui/trunk
<thumper> m_3: you have a copy of it locally?
<m_3> and then push that same branch up to lp:~charmers/charms/precise/juju-gui/trunk without first creating this repository as a non-stacked branch
<m_3> yes, I have it locally now
<m_3> (just merged an MP)
<m_3> the cleanest way I know of to fix the stacking is to pull a fresh lp:charms/juju-gui, then delete the lp:~charmers owned branch... then push it back up to lp:charms/juju-gui
<m_3> but that feels heavy-handed, and probably just dumb
<thumper> m_3: I'm looking...
<thumper> m_3: even if I find a way, it is not likely to be easy...
<m_3> thumper: I mainly wanted to check if there was a nice `bzr reconfigure --unstack lp:charms/juju-gui` that should work on the lp repo
<thumper> well...
<m_3> but I've only seen that work locally
<thumper> kinda, but it breaks if the branch is broken
<thumper> although I have an idea
<m_3> ack
<m_3> I think we have like a dozen or so broken ones... I can count
<thumper> m_3: try the reconfigure --unstacked
<thumper> m_3: it may be that just the stacked on url is buggered
 * thumper no longer has write access to every branch on LP
<thumper> gave that up ages ago
<m_3> nope, same... http://pastebin.ubuntu.com/5614925/
<m_3> most of the others could probably be brute-force fixed... this one'll probably be more difficult b/c of stuff branched _from_ this
<m_3> dunno tho
<m_3> thumper: well nothing urgent... it's not killing anything afaik... just ugly
<m_3> and we'll hopefully stop making them this way soon
<thumper> davecheney: what is the builddb command?
<thumper> davecheney: rogpeppe suggested to remove the bootstrap from it, but I have no idea what the command is supposed to be
<davecheney> builddb is a command that wraps a charm that builds the mongodb that we use in juju
<davecheney> once we have mongo in a package
<davecheney> we can remove it
<thumper> hmm... ok
<thumper> davecheney: do you agree that builddb shouldn't do a bootstrap?
<thumper> I'm removing the env cert gen from the environs.Bootstrap function
<thumper> the other place this is called is from the builddb command
<thumper> we have two choices...
<thumper> put the cert gen in there too
<thumper> or as rogpeppe suggests, remove the bootstrap from the builddb command
<davecheney> do the second
<thumper> ok
<thumper> simple enough
<davecheney> builddb bootstraps an environment
 * thumper will have the branch tweaked and ready for a rereview soon
<thumper> davecheney: here is some leankit mojo for you
<davecheney> we can live without that feature if it spawns a charm in an existing charm
 * davecheney listens
<thumper> davecheney: put a link to the codereview on the card
<davecheney> using the advanced tab ?
<thumper> davecheney: see my card for the cert gen
<thumper> davecheney: yeah
<thumper> so I added one for "review" with the url
<davecheney> ahh nice, i once tried using the general 'link' field
<thumper> so anyone can go from the board, seeing that there are reviews needed, and go right to the review using the right click, "link to" entry
<davecheney> but it was unsatisfying
<davecheney> oh thank fuck
<thumper> we should tell everyone
<thumper> it would be helpful if you are using the board to drive work (which we should be)
<davecheney> i think some folks use the LP review queue
<davecheney> but that is just historical
<davecheney> not an indication of best practice
<thumper> yeah... but everyone should have the board open
<thumper> I should talk about this more in oakland
<thumper> I found it very helpful when working with my LP team
<thumper> not so useful with PS
<thumper> as there wasn't buy-in from the devs
<thumper> which was a shame
<davecheney> what did PS use for planning ?
<davecheney> (or do I not want to know)
<thumper> planning?
<sidnei> davecheney: does go juju work with canonistack?
<thumper> you really don't want to know
<davecheney> sidnei: in theory yes, in practice, no
<sidnei> LOL
<thumper> heh
<sidnei> what's the blocker?
<davecheney> due to our requirements for public IPs and the lack of said in canonistack
<davecheney> IPv4
<sidnei> really? it needs public ips?
<davecheney> yes
<thumper> TODO: IPv6 everywhere
<sidnei> yesplease
<davecheney> thumper: do you remember what we decided in atlanta ?
<davecheney> was it going to be a small or large piece of work to separate the concept of instance and public ip ?
<thumper> davecheney: for canonistack?
<thumper> I think jam had a ssh tunnel magic thing that worked
<davecheney> that was mgz's 'instance addressing card', from memory
<davecheney> oh yeah, stunnel
<sidnei> sshuttle i guess
<sidnei> (eod)
<davecheney> that's the one
<davecheney> thumper: are you going to do another propose on https://codereview.appspot.com/7809043/ ?
<thumper> davecheney: yes
<thumper> that is what I'm working on now
<davecheney> ok, will hold off
<thumper> davecheney: what is a LoggingSuite, and why would I want it for the tests?
 * davecheney can't remember
<thumper> davecheney: another thing
<thumper> I have two test types I want
<thumper> one tests internal package functions
<thumper> the other tests the exported function
<thumper> so I want two different _test.go files
<thumper> so...
<thumper> do we have a standard
 * thumper tries to remember the defined standard for _test files that magically don't appear in the built files
<davecheney> export_test.go ?
<davecheney> thumper: the Juju practice of testing outside the package is considered unorthodox compared to the majority of Go code out there
<thumper> what is export_test?
<davecheney> just a hack to expose private symbols to external test code
<davecheney> thumper: i overheard you talking about this to william in Atlanta
<davecheney> it sounded like he didn't have much support for the idea of external testing
<thumper> was talking with rogpeppe about this too
<thumper> I think we should have both types
<thumper> testing only at the export boundary is not so good
<thumper> so...
<davecheney> one suggestion, not fully considered, $PKG/*_test.go << internal tests
<thumper> I have added a file environs/cert.go, and I want environs/cert_test.go to be in the environs package and test the internals, and environs/cert_something_test.go to test the public func
<davecheney> $PKG/test/*_test.go << external tests of pkg $PKG
<thumper> +1 if $PKG/test is $PKG/tests as there should be more than one :)
<davecheney> sure
<davecheney> it was just a throwaway idea
<thumper> actually I like it
<thumper> but I think we should go for general acceptance
<thumper> do you have any suggestion for just getting this branch in?
<thumper> I want the external test func in environs_test package
<thumper> is it all "*_test.go" files that are ignored?
<davecheney> yes, _test.go is ignored during go build/install
<thumper> so...
<thumper> cert_blackbox_test.go?
 * davecheney reads your branch, i'm not sure what the problem is you are hitting
 * thumper does that
<thumper> davecheney: you don't see the new changes
<thumper> let me do this and it will all be obvious :)
<davecheney> ok
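The export_test.go hack davecheney mentions is a real Go convention: a `_test.go` file compiled in the *same* package can alias unexported symbols so that external (`package foo_test`) tests can reach them. Collapsed into a single runnable file here for illustration (the file split in the comments is how it would actually be laid out):

```go
package main

import "fmt"

// In a real package this would be split across files:
//
//   environs/cert.go          - the unexported implementation
//   environs/export_test.go   - `var GenerateCert = generateCert`,
//                               compiled only into the test binary
//   environs/cert_test.go     - external tests in package environs_test
//
// generateCert is a hypothetical unexported "internal" function.
func generateCert(name string) string {
	return "cert-for-" + name
}

// GenerateCert is the export_test.go-style alias that lets external
// test code reach the unexported symbol.
var GenerateCert = generateCert

func main() {
	fmt.Println(GenerateCert("bootstrap"))
}
```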
<thumper> davecheney: there aren't enums in go are there?
<davecheney> no
<davecheney> there are consts, and iota defined consts
<davecheney> but no enums
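What Go offers instead of enums, as davecheney says, is a defined type plus iota-generated constants; a minimal sketch (names are made up):

```go
package main

import "fmt"

// State is a defined type standing in for an enum. The compiler won't
// stop you mixing in arbitrary ints, but the type documents intent.
type State int

const (
	Stopped State = iota // 0
	Started              // 1
	Errored              // 2
)

func main() {
	fmt.Println(Stopped, Started, Errored)
}
```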
<thumper> I want a two value result, but bool doesn't feel right
<thumper> davecheney: so I have a method called EnsureCertificates which returns error right now
<thumper> but I want it to also return whether it created the cert
<thumper> so like (bool, error)
<thumper> but bool doesn't really have a meaning with Ensure...
<thumper> what does true mean?
<thumper> perhaps we just need to put meaning on it
<thumper> true means ensure was all good
<thumper> false means we had to generate the certs
 * thumper does that
<davecheney> thumper: closest we have http://play.golang.org/p/UJV5ff9fCg
<davecheney> thumper: sounds good enough for the moment
<thumper> davecheney: http://play.golang.org/p/zgPWMG_qyj
<thumper> davecheney: I prefer to be explicit
<thumper> davecheney: how does that look for a public interface?
<davecheney> thumper: sgtm
<davecheney> bools are always hard
<thumper> cool
<davecheney> i dislike function calls with f(bool, bool, int, bool, bool)
<davecheney> they encourage people to demand named args
<davecheney> thumper: anyway, this is low level nitty gritty shit
<davecheney> so i think it is fine to be explicit
<thumper> davecheney: well, we do work for pedantical :)
 * davecheney rimshot
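The playground links above aren't reproduced here, so this is only a guess at the "explicit" shape thumper preferred: instead of an ambiguous `(bool, error)`, return a small defined type whose values document themselves. `CertResult` and `ensureCertificates` are hypothetical stand-ins for the EnsureCertificates method under discussion:

```go
package main

import "fmt"

// CertResult makes the two outcomes self-describing, avoiding the
// "what does true mean?" problem with a bare bool.
type CertResult int

const (
	CertAlreadyPresent CertResult = iota // nothing to do
	CertGenerated                        // a new cert had to be written
)

// ensureCertificates is a simplified sketch, not the real method.
func ensureCertificates(haveCert bool) (CertResult, error) {
	if haveCert {
		return CertAlreadyPresent, nil
	}
	// ... generate and store the certificate here ...
	return CertGenerated, nil
}

func main() {
	r, err := ensureCertificates(false)
	fmt.Println(r == CertGenerated, err)
}
```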
#juju-dev 2013-03-15
<davecheney> arguably this function is serving two masters
<davecheney> so there will be some uglyness
<wallyworld_> thumper: tests pass in raring now :-D https://code.launchpad.net/~wallyworld/juju-core/fix-raring-tests/+merge/153488
<thumper> wallyworld_: seriously?
<thumper> you're my hero if it works
 * thumper grabs the branch
<wallyworld_> thumper: works for me, ymmv
<wallyworld_> i had the same issues as you, so it was easy to see whether it was fixed or not
 * thumper wonders if his raring branches for goamz and goose are recent enough
<thumper> I'd love not to have to work in that VM
<thumper> it is a PITA
<wallyworld_> thumper: i have goose trunk (tip) and whatever goamz i got a while ago and it's all good for me
<davecheney> wallyworld_: thumper i fixed all the issues I could find in atlanta
<davecheney> nobody can replicate your git problem
<davecheney> which is le shit
 * wallyworld_ can
<davecheney> ORLY
<thumper> davecheney: wallyworld_ has
<wallyworld_> davecheney: https://code.launchpad.net/~wallyworld/juju-core/fix-raring-tests/+merge/153488
<davecheney> there was the one with the message difference
<wallyworld_> fixed :-)
<wallyworld_> fixed too
 * wallyworld_ now has a clean test run under raring
<thumper> davecheney: I think https://codereview.appspot.com/7809043/ is ready now
<davecheney> thumper: ta
<thumper> wallyworld_: umm... your tests have hung
<wallyworld_> some do take a while for me
<thumper> go test ./... is hanging
<wallyworld_> but they always have
<wallyworld_> the uniter ones take ages
<thumper> this has been over 5 minutes
<thumper> and stuck in one place
<thumper> normally faster than that for me
<wallyworld_> which place?
<thumper> whatever comes after cmd/charmload
<thumper> which may not be where it is stuck
<wallyworld_> the changes are/should be very unobtrusive
<thumper> as it is using buffered i/o
<wallyworld_> just setting some env vars
<wallyworld_> mine has always "stalled" after charmload
<wallyworld_> even before the changes
<wallyworld_> thumper: these ones for me always take a while
<wallyworld_> ok      launchpad.net/juju-core/cmd/juju        65.754s
<wallyworld_> ok      launchpad.net/juju-core/cmd/jujud       38.378s
<wallyworld_> they are logged always just after charmload
<wallyworld_> also these
<wallyworld_> ok      launchpad.net/juju-core/state   29.673s
<wallyworld_> ?       launchpad.net/juju-core/state/api       [no test files]
<wallyworld_> ok      launchpad.net/juju-core/state/api/params        0.006s
<wallyworld_> ok      launchpad.net/juju-core/state/apiserver 59.152s
<wallyworld_> ok      launchpad.net/juju-core/worker/uniter   67.571s
<thumper> I'm pulling latest bits, and will try again
<thumper> your branch doesn't look wrong
<thumper> must be something else
<wallyworld_> did you try rerunning?
<wallyworld_> thumper: funny, i just ran the juju tests with gocheck.vv and it took ~40 seconds even with all the extra trace, as opposed to 65s above
 * thumper runs with gocheck.vv
<wallyworld_> you can't do that at the top level and recurse afaik
<davecheney> .vv == vroom vroom ?
<thumper> theoretically .vv means unbuffered I/O
<thumper> but I think it lies
<thumper> very verbose :)
<wallyworld_> yeah
<thumper> seems to be stuck in the same place
<thumper> normally the command tests use all my cores
<thumper> but running at virtually 5%
<wallyworld_> blocked on i/o i guess
<thumper> no, just not doing anything
<davecheney> *cough* sleep *cough*
<thumper> wallyworld_: have you tested in raring?
<davecheney> back in lisbon a lot of test failures were hit with the sleep hammer
<thumper> because this is broken for me
<wallyworld_> thumper: i am running raring
<thumper> hmm...
<wallyworld_> and i could reproduce your git failures
<thumper> wondering if I've hit a different race condition
 * thumper has a fast machine
 * wallyworld_ has a slower machine and a smaller penis
<wallyworld_> thumper: maybe try and underclock your cpus and see what happens?
<thumper> wallyworld_: how would I do that?
<wallyworld_> power settings - battery mode?
<wallyworld_> or cpu freq select?
<thumper> hmm...
<thumper> poos
<thumper> still stuck
<thumper> killed it
<thumper> at 5 minutes of wall time
<thumper> I wish there was a way to send it a signal and get it to dump where it was at
<wallyworld_> yeah
<wallyworld_> maybe paste the vv output?
<thumper> nothing to see
<thumper> ?   	launchpad.net/juju-core/cmd/charmload	[no test files]
<thumper> that was the last output
<thumper> davecheney: should I have any mongod process running after the tests have finished?
<thumper> I have one
<thumper> should I kill it?
<wallyworld_> thumper: you need to cd to just the juju tests
<wallyworld_> to use gocheck.vv
<wallyworld_> since ./... and gocheck args don't work together
<thumper> seriously?
 * thumper sighs
<thumper> which ones were failing before?
<wallyworld_> thumper: the git tests
<wallyworld_> are the easiest to see
<davecheney> thumper: no, but they do leak
<wallyworld_> also the uniter ones
<wallyworld_> and deployer ones
<davecheney> i go through and clean them up
<davecheney> or learn not to ^C the tests
<wallyworld_> davecheney: question, this had me stumped - cmd/jujud/unit_test.go:20: s.agentSuite.SetUpSuite(c)  <--- i had to add this and didn't know why the call just didn't drop through
<wallyworld_> why does aliasing the import fix it?
 * davecheney scrolls back to find the CL
<davecheney> wallyworld_: it's not aliasing
<davecheney> hang on
<davecheney> which file are we talking about ?
<wallyworld_> unit_test.go
<wallyworld_> i misread the diff a bit
<thumper> wallyworld_: stuck here http://pastebin.ubuntu.com/5615399/
<wallyworld_> davecheney: you wanted testing.GitSuite renamed to GitSuite
<davecheney> wallyworld_: right, sorry was looking at wrong CL
<davecheney> it's not renaming, it is giving it a name
<davecheney> at the moment both fields in the struct are anonymous
<davecheney> which embeds them
<wallyworld_> but i don't understand why that is needed and why i had to add the explicit call throughs
<davecheney> however that creates an ambiguity on who defines the embedded UnitSuite.SetUpSuite
<wallyworld_> thumper: you have the wrong mongo
<thumper> ah... wat?
<wallyworld_> thumper: you need to install the one from the tarball and set your path to that one
<wallyworld_> thumper: since ssl is not supported in the packaged version
<davecheney> thumper: yes, don't use mongo 2.0 from raring
<thumper> Installed: 1:2.2.3-0ubuntu1
<wallyworld_> and something about lawyers says we can't add ssl
<davecheney> thumper: bzzt, that one does not have ssl enabled
<thumper> wallyworld_: I think the lawyers are still arguing
<thumper> clucking bell
<thumper> juju dev sucks the big kumara
<wallyworld_> yeah, sucks balls
<thumper> didn't bigjools have a ppa somewhere
<wallyworld_> this is out of our control sadly
<thumper> I don't like the tar ball answer
<wallyworld_> yes, believe so
<davecheney> wallyworld_: by giving testing.GitSuite a name, you break the ambiguity
<bigjools> I do
<thumper> wallyworld_: nothing is out of our control
<thumper> we control EVERYTHING
<thumper> bigjools: you may need to rebuild
<thumper> bigjools: plzfix
 * davecheney controls both the horizontal and the vertical
<wallyworld_> davecheney: ah thanks, i'll re-read the code with that thought in mind. i need to think about it so i fully grasp it
<bigjools> thumper: hmmm?
<davecheney> thumper: that isn't bigjools' version, it's coming from raring
<davecheney> wallyworld_: no worries
<bigjools> mine has SSL
<thumper> yeah.. I know
<thumper> bigjools:  needs to make a new one
<bigjools> pebkac
<davecheney> the other part of the solution is, when you embed a structure, its field name is the name of the structure
<bigjools> thumper: what's up with it?
<thumper> hmm...
<wallyworld_> thumper: so now you can +1 my mp when i fix davecheney's issues
<davecheney> so by naming testing.GitSuite, GitSuite, none of the rest of the code notices, as that was what it called it originally
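The ambiguity davecheney describes can be shown with hypothetical suite types. With two anonymous embeds that both define `SetUpSuite`, Go refuses to promote either one, so the outer suite must define its own method and call through explicitly (as wallyworld added), or give one embed a field name so it no longer competes for promotion:

```go
package main

import "fmt"

// calls records the order the hooks run in, so the behaviour is visible.
var calls []string

type agentSuite struct{}

func (agentSuite) SetUpSuite() { calls = append(calls, "agentSuite") }

type GitSuite struct{}

func (GitSuite) SetUpSuite() { calls = append(calls, "GitSuite") }

// UnitSuite embeds both anonymously. Without the method below, calling
// UnitSuite{}.SetUpSuite() is a compile error ("ambiguous selector").
type UnitSuite struct {
	agentSuite
	GitSuite
}

// The explicit call-throughs resolve the ambiguity.
func (s UnitSuite) SetUpSuite() {
	s.agentSuite.SetUpSuite()
	s.GitSuite.SetUpSuite()
}

func main() {
	UnitSuite{}.SetUpSuite()
	fmt.Println(calls)
}
```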
<wallyworld_> :-)
<thumper> perhaps the ppa has been disabled
<davecheney> thumper: bigjools packaged 2.2.2
<bigjools> apt-cache policy .... tells you all
<davecheney> raring offers 2.2.3
<davecheney> ^ sorta guess
<thumper> and the ppa was disabled on upgrade
<thumper> bigjools: make a new one and I'll buy you a beer
 * wallyworld_ is glad he just used the barball
<wallyworld_> tarball
<bigjools> arf
<davecheney> wallyworld_: right first time
<wallyworld_> hah
<thumper> but I don't want to use the dumb tarball
<davecheney> my favorite mongo feature is the 60hz timer it sets up
<thumper> wallyworld_: where are the instructions?
<davecheney> specifically to waste your battery
<davecheney> thumper: they _should_ be in the README
 * davecheney scratches head
<wallyworld_> thumper: not sure, i just figured it out - untar to /opt (say) and update your path
<thumper> it just feels SO wrong, that I'm resisting
<davecheney> thumper: they would have to be in the README (checking)
<thumper> until we get a better solution
<davecheney> we onboarded so many juju devs in Atlanta
 * bigjools agrees with thumper
 * wallyworld_ agrees too
<wallyworld_> but we encountered some resistance trying to change it
 * davecheney also agrees, but points to the phalanx of lawyers that lies between us and our desired solution
 * thumper gathers all his toys in one corner
<bigjools> yeah I only built for quantal
<davecheney> bigjools: that would be it
<bigjools> I'll do another
<thumper> bigjools: awesome, ta
<thumper> davecheney: who can I poke to make this go faster?
<thumper> there is that long email thread
<thumper> but I'm prepared to go higher
 * thumper is sick of it
<davecheney> thumper: antonio and jamespage have identified themselves as the owner of the issue
<bigjools> go thumper, go thumper, go thumper.... oooo yeaahhhh go girl
<davecheney> the last status update I got said they were waiting on 10gen to do something for them
 * thumper goes to look them up on the directory
<davecheney> so, we're waiting on the good graces of the 10gen lawyers
<davecheney> GLWT
<thumper> davecheney: in which cases they'll be waiting forever
<davecheney> exactly
<thumper> ...
<davecheney> thumper: are you cc'ed on that email thread
<thumper> davecheney: yeah, and I'm about to reply and start kicking tires
<davecheney> go thumper go
<thumper> ok, it seems that james did create a version for raring, but it has been superseded by the raring release
 * thumper emails everyone
<bigjools> thumper: an ssl version?
<thumper> yeah
<thumper> according to the email
<bigjools> I'll build anyway
<bigjools> hmmm
<thumper> bigjools: how long will it take?
<bigjools> thumper: iirc about 30-60 mins
<thumper> ok... I'll wait
<thumper> and use your one
 * bigjools can bump ppa priority :)
<thumper> \o/
<bigjools> in fact there's a little known changelog trick to get more priority too
<wallyworld_> davecheney: thanks for the review - with the empty SetupSuite() methods, I cargo culted those from elsewhere. So I guess those other places should be fixed too at some point
<davecheney> wallyworld_: stick a card in leankit to refactor that shit
<wallyworld_> will do
<davecheney> the contract for gocheck.Suite() takes an interface{}
<davecheney> so there is no requirement for every Suite to have those methods
<davecheney> they are only needed if used
<davecheney> and empty ones don't really help
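Because gocheck's `Suite()` takes an `interface{}` and finds hooks by reflection, a suite only needs the hooks it actually uses; empty stubs add nothing. A minimal imitation of that lookup (not gocheck's real code, which also passes a `*C` argument to each hook):

```go
package main

import (
	"fmt"
	"reflect"
)

// callSetUpSuiteIfPresent invokes SetUpSuite only if the suite defines
// it, reporting whether a hook was found - the reason empty stubs are
// unnecessary.
func callSetUpSuiteIfPresent(suite interface{}) bool {
	m := reflect.ValueOf(suite).MethodByName("SetUpSuite")
	if !m.IsValid() {
		return false // no hook defined: nothing to call, and no error
	}
	m.Call(nil)
	return true
}

type bareSuite struct{} // no empty stubs needed

type fullSuite struct{}

func (fullSuite) SetUpSuite() { fmt.Println("setting up") }

func main() {
	fmt.Println(callSetUpSuiteIfPresent(bareSuite{}))
	fmt.Println(callSetUpSuiteIfPresent(fullSuite{}))
}
```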
<davecheney> apart from that, get thumper to LGTM
<wallyworld_> yeah, i thought so too, but figured it must have been needed for some reason/convention
<davecheney> then you'll be in sweet sweet raring heaven
<wallyworld_> oh yeah :-)
<davecheney> wallyworld_: i think you were right when you said cargo cult
<wallyworld_> it happens if one is a bit unsure when implementing something and one assumes that what was done before was correct :-)
<wallyworld_> and yet we have two +1's required
<wallyworld_> so it seems stuff still slips through
<davecheney> wallyworld_: i don't think you want to argue for quorum on a change request
<davecheney> :)
<wallyworld_> sure :-) but my point is with two reviewers, perhaps "obvious" stuff could be caught
<davecheney> buddha says "when arguing with loved ones, don't bring up the past"
<wallyworld_> not meant to be negative, just an observation
<davecheney> an observation well made
<wallyworld_> davecheney: i only found one place to fix as it turns out - LoggingSuite has empty Setup/TearDownSuite() and a few calls to those - i'll just fix it up as a drive by
<davecheney> +1
<wallyworld_> thumper: as soon as you +1 my mp, i can land the raring test fixes :-)
<thumper> link?
<bigjools> thumper: it's building https://launchpad.net/~julian-edwards/+archive/mongodb/+packages
<wallyworld_> thumper: https://codereview.appspot.com/7677044/
<thumper> wallyworld_: acked
<thumper> do it
<thumper> land it
<thumper> now
<wallyworld_> \o/
<wallyworld_> thumper: just doing a quick driveby to remove some redundant code, will be done in a few minutes
 * davecheney away -- lunch
<wallyworld_> davecheney: i've removed the empty setupsuite stuff, but am thinking - perhaps it was done that way so that stuff could be added if required without needing to then go and add setupsuite calls to all the structs which embed the logging test suite
<davecheney> wallyworld_: maybe leave the drivebys for another day
<wallyworld_> yeah, i just reverted them :-)
<davecheney> i'll just leave this here http://www.youtube.com/watch?v=Ktbhw0v186Q
<davecheney> afk for reals this time
<wallyworld_> you are a sadistic man
<bigjools> thought it was going to be a rickroll
<wallyworld_> or worse
<wallyworld_> thumper: can you feel it?
<thumper> can I feel what?
<bigjools> haha
<wallyworld_> that's not the first time someone has said that to me
 * bigjools lunches
<wallyworld_> thumper: if you give it a pull, it will come
<bigjools> ...
<wallyworld_> the raring test fixes i mean
<jtv1> oh
<jtv1> not his lunch?
<wallyworld_> you all have dirty minds
<thumper> wallyworld_: I need bigjools's mongo branch first
<bigjools> well given who was saying it ...
<thumper> and I'm kinda busy actually working
<wallyworld_> sure, just letting you know :-)
<jtv> wallyworld_: except Julian, who has dirty hands.
<wallyworld_> lol
 * bigjools wipes
<jtv> ...
<wallyworld_> bigjools: on the curtains?
<jtv> That's it.  No food for me this lunchtime.
<bigjools> thumper: at least 30mins left to build
<bigjools> it takes an hour
 * bigjools heads off
<jtv> Meanwhile, a question for the juju experts: we need to implement EnvironProvider.InstanceId() for the maas provider, but the EnvironProvider has no idea what maas it's supposed to talk to.
<jtv> It's not like the ec2 provider where you have a fixed IP address for the metadata service.
<wallyworld_> jtv: in the vm world, it is the machine id on which the provider is running
<jtv> So... the bootstrap node?
<wallyworld_> yeah
<wallyworld_> and others too i think
<jtv> :(
<wallyworld_> the id is used to update the state
<wallyworld_> so the agent can see that the machine has been provisioned
<wallyworld_> i think
<jtv> The fact that the method lives on EnvironProvider suggests that no provider needs to be running.
<jtv> Otherwise it'd be on Environ, where we have a real chance of getting the information required.
<wallyworld_> yes
<wallyworld_> this is done before that happens
<jtv> Damn.
<wallyworld_> on ec2 and openstack, there's the metadata service
<wallyworld_> which provides this
<jtv> On MAAS too, but at this point we don't know its address.
<wallyworld_> :-(
<wallyworld_> i *think* the update of the state can be delayed
<wallyworld_> till the provider starts
<wallyworld_> but you'd need to talk to william
<jtv> pleasesaygrantpleasesaygrantpleasesaygrant
 * jtv is in a timezone far, far from the UK
<wallyworld_> readereadereadereade
<jtv> buggerbuggerbuggerbugger
<wallyworld_> the main other person who would know is also in the uk
<thumper> haha
<jtv> uk...  ISO country code for:
<jtv> Ukraine
<jtv> That helps
<thumper> no it isn't
<jtv> ?
<wallyworld_> i meant england
<jtv> gb == united kingdom of great britain & northern ireland
<jtv> wallyworld_: I figured.  Just doing some wishful thinking
<wallyworld_> jtv: right now, i can't see why it has to be the environprovider which provides that info - i think some refactoring could fix the problem
<wallyworld_> but i don't fully appreciate the finer details of the workflow to offer practical advice and understanding of the consequences
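A rough illustration of the refactoring wallyworld is gesturing at, using hypothetical types rather than the real juju-core interfaces: an EnvironProvider carries no configuration, so it cannot know which MAAS server to ask, while an opened Environ does, so an instance-id lookup sits more naturally there:

```go
package main

import "fmt"

// EnvironProvider only knows how to open environments; it has no config
// of its own, which is exactly jtv's problem with InstanceId living here.
type EnvironProvider interface {
	Open(cfg map[string]string) (Environ, error)
}

// Environ holds the environment's config, so it can reach the provider's
// metadata service (e.g. a MAAS server address) to ask where it runs.
type Environ interface {
	InstanceId() (string, error)
}

// maasEnviron is an invented example implementation.
type maasEnviron struct{ server string }

func (e maasEnviron) InstanceId() (string, error) {
	return "node-0@" + e.server, nil // stand-in for a real metadata query
}

func main() {
	var env Environ = maasEnviron{server: "maas.example.com"}
	id, _ := env.InstanceId()
	fmt.Println(id)
}
```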
<jtv> That would mean the world to us at this stage.  We're currently stuck on this.
<wallyworld_> jtv: maybe send an email to the list - william is good at responding
<jtv> Yes, he is.  Thanks, I'll do that.
<wallyworld_> hopefully can be sorted for next week
<jtv> We have until...  This evening.  :/
<jtv> I'm putting together the email now.
<bigjools> jtv: iso country codes are fucked
 * bigjools relurks
<thumper> jtv: UKRAINE   A2 = UA    A3 = UKR
<thumper> jtv: however UK is not a 2 character country code for anything
<jtv> !
<jtv> Damn.
<jtv> Well, I apologize.
 * thumper was bored so looked it up
<thumper> given that domains for the uk finish with .uk
<thumper> you'd think it was the iso code
<thumper> but you were right on the .gb
<thumper> which I don't think anyone uses
<thumper> jtv: perhaps ukraine and UK got together and agreed that neither should have 'UK'
<thumper> just to piss everyone off
<davecheney> thumper: you asked for it, http://www.youtube.com/watch?v=rNu8XDBSn10
<thumper> davecheney: that's ok
<thumper> davecheney: I did think for a minute that you were rick rolling me
<davecheney> didn't the ukraine used to be called something else ?
<thumper> davecheney: like the USSR?
<davecheney> thats the one
<thumper> before it got broken into little pieces
<jtv> Well they were a republic within the USSR.
<jtv> Not an "independent republic" - those were parts of Russia and did not have the theoretical right of secession.
 * davecheney does not want to give the northern territory its own top level domain
<jtv> .aunt?
<davecheney> next thing you know we'd give QLD .xxxx
<jtv> ?
<davecheney> http://www.xxxx.com.au/
<jtv> Phew.  I was taking a risk by loading up that one...
<jtv> A neighbour had a visit from the MIB about having looked at Wikileaks.
<jtv> But this ain't so bad.  :)
<thumper> which leads me to the joke: why do australians put four X's on their beer?
<davecheney> jtv: GTFO
<thumper> a) because they can't spell beer
<jtv> Sounds good.  Go on.
<davecheney> ? there is a second answer ?
<thumper> jtv: seriously, MIB?
<jtv> Well no, I don't know what they were actually wearing.  But they were from the government and they were not friendly.
<thumper> I heard that amazon have bought up .book .author and something else
<thumper> wow
<davecheney> .cloud ?
<thumper> don't recall
<thumper> bigjools: built?
<jtv> I always wanted to have two gov't agencies, visi and ostro, here: ostro.go.th & visi.go.th.
<davecheney> i remember the freeforall back in '98 when you could buy a $.com without having to own the trademark
 * bigjools points thumper to the fine url from earlier
 * thumper scrolls back looking for the url
<bigjools> published an hour ago
<bigjools> mmm lunchtime dip in the pool was very nice
<thumper> hmm...
<thumper> doesn't fix the problem
<thumper> tests still sit doing nothing
<wallyworld_> thumper: error message?
<thumper>  unknown option sslOnNormalPorts
<wallyworld_> you sure the new one is being used?
<wallyworld_> that implies it's still using the old one
<thumper>  Installed: 1:2.2.3-0ubuntu2+ssl
<wallyworld_> hmmm
 * wallyworld_ has no idea
<thumper> hmm, just installed mongodb, think i need the other three packages
<thumper> I thought it would do that
<wallyworld_> i just untarred the tarball for mine - perhaps everything was included
<wallyworld_> but i could have sworn there was just a single mongodb dir in there
<wallyworld_> jtv: you sent your email to the list?
<jtv> wallyworld_: you didn't see it?  I may have had the wrong email account active.  Let me retry.
<thumper> wallyworld_: working now
<wallyworld_> thumper: \o/
<wallyworld_> what did you change, just install the extra packages?
<wallyworld_> jtv: yeah, didn't see it so thought i'd ask in case you mis sent it
<thumper> yeah, just also installed mongodb-clients mongodb-dev mongodb-server
<wallyworld_> is that all?
<thumper> I thought that installing mongodb would bring in the others
<thumper> yes
<wallyworld_> i was trying to be sarcastic :-)
<thumper> wallyworld_: I do get some failures though
<wallyworld_> :-(
<thumper> but I think they are timing related
<wallyworld_> so re running, different results?
<thumper> cmd/jujud OOPS: 29 passed, 1 skipped, 2 FAILED, 3 FIXTURE-PANICKED, 14 MISSED
<wallyworld_> 5 failures, not good
<jtv> Thanks for checking, wallyworld_
<wallyworld_> np
<thumper> wallyworld_: is it in trunk now?
<thumper> I'll get trunk and try again
<wallyworld_> yah
<thumper> nope, still failing tests for me
 * thumper runs all again and pastebins...
 * thumper takes kids to get fud
<wallyworld_> pastebin?
 * wallyworld_ off to parent teacher interview, bbiab
<rogpeppe> fwereade: you've got a couple of reviews
<fwereade> rogpeppe, lovely, tyvm
<rogpeppe> fwereade: i've got the next stage in the allWatcher out for review, if you fancy a look. it's all easy after that one. https://codereview.appspot.com/7594048/
<fwereade> rogpeppe, I don't think I'll manage to do that properly before this afternoon, I'm a little bit involved with environs at the moment
<rogpeppe> fwereade: np
<fwereade> rogpeppe, sorry, I had the beginnings of a go, but I need to use all my RAM to understand the whole thing clearly
<rogpeppe> fwereade: thanks. yeah, it's a little bit involved. not bad in the end, i think, but needs some thought.
<rogpeppe> fwereade: i have reasonable confidence it's well tested, at any rate :-)
<rogpeppe> (that was by no means a backhanded reference to the destroy service tests BTW)
<fwereade> rogpeppe, haha, np, I didn't take it as such ;p
<rogpeppe> fwereade: are you planning to start designating reviewers,
<rogpeppe> BTW?
<fwereade> rogpeppe, mramm has assigned that card to himself
<rogpeppe> fwereade: ah, cool
<rogpeppe> fwereade: i hadn't seen the card :-)
<fwereade> rogpeppe, dimitern added it, mramm grabbed it -- it's like a well-oiled machine :)
<dimitern> :)
 * rogpeppe was well-oiled last night
<rogpeppe> :-)
<dimitern> fwereade: tickets confirmed
<fwereade> dimitern, awesomeness, tyvm
<dimitern> fwereade: I'll update the wiki as well
<fwereade> dimitern, yu rock
<dimitern> fwereade: no worries :)
<dimitern> have you had a chance to look at yesterday's CL ?
<dimitern> fwereade: ?
<fwereade> dimitern, whoops, sorry
<fwereade> dimitern, I don't see it in +activereviews, from what I saw I think it's ready to repropose without -wip
<dimitern> fwereade: hmm.. I did not specify -wip last time - once you specify it does it stick? -wip=false is needed or smth?
<fwereade> dimitern, hm, it's just at the bottom of the page for no clear reason
<fwereade> dimitern, sorry
<dimitern> fwereade: np
 * fwereade lunch
<rogpeppe> dimitern: some code to review, if you wanna: https://codereview.appspot.com/7594048/
<dimitern> rogpeppe: on it
<rogpeppe> dimitern: thanks!
<wallyworld_> rogpeppe: fwereade: i'd love a chat on the logging stuff. i hate email exchanges in code reviews, so impersonal
<rogpeppe> wallyworld_: yes, that would be great
<fwereade> wallyworld_, rogpeppe: sgtm, when is good? now?
<rogpeppe> now would be good for me
<wallyworld_> good for me, then i can go to bed
<fwereade> rogpeppe, wallyworld_: ok, I'll start a hangout
<fwereade> rogpeppe, wallyworld_: https://plus.google.com/hangouts/_/09c5551027db88904e9a62bd3da34b261f46407b?authuser=0&hl=en
<dimitern> wallyworld_: fwiw I agree with the suggestions about splitting the syslogd support from log.*f() refactoring and replacing all around the code - it needs to be done carefully
<wallyworld_> it's already split
<mgz> can I join and lurk too?
<dimitern> wallyworld_: oh, great then
<dimitern> fwereade: ping
<fwereade> dimitern, pong -- sorry, it's still not done, I'm going to have to do reviews after the meeting
<dimitern> fwereade: ah, the meeting, yes - np
<bac> rogpeppe: ping
<dimitern> bac: they have a kanban meeting at the moment
<bac> dimitern: thanks.  no rush.
<fwereade> rogpeppe, is https://codereview.appspot.com/7417051/ abandoned?
<rogpeppe> fwereade: no, i'll fix the file close issue and submit
<rogpeppe> fwereade: i think it's still worth doing
<fwereade> rogpeppe, +1
<fwereade> dimitern, basically LGTM, but the SetCharm move is important, ping me if there's uncertainty
<fwereade> rogpeppe, how about https://codereview.appspot.com/7610044/ ?
<rogpeppe> fwereade: ah, i thought i'd submitted that one.
<rogpeppe> fwereade: it really needs to go in.
<rogpeppe> fwereade: oh, i remember, i started running tests and got diverted
<bac> hi rogpeppe, would you have time to look at a branch that is behaving oddly?  perhaps we could have a hangout to chat.
<rogpeppe> bac: definitely.
<rogpeppe> bac: anytime
<rogpeppe> fwereade: both those branches now submitted
<bac> rogpeppe: it is at https://codereview.appspot.com/7610046
<fwereade> rogpeppe, cheers
<rogpeppe> bac: ha ha, you've got the .THIS deletion too.
<bac> yep
<bac> rogpeppe: starting hangout
<dimitern> fwereade: cheers
<rogpeppe> fwereade: could you join us for a moment to verify some stuff about DestroyRelation, please? https://plus.google.com/hangouts/_/ade743831e17fc93a3d824612a93e8004534fcdd?authuser=0&hl=en
 * dimitern lunch
<fwereade> rogpeppe, bac: working as intended, a unit of wordpress enters scope in setupScenario and thus has a reference to the relation
<rogpeppe> fwereade: ah! of course, that's the difference between the two places
<rogpeppe> fwereade: i'm stupid - i should have realised that
<fwereade> rogpeppe, took me a little while to twig :)
 * fwereade is relieved
<rogpeppe> fwereade: always good to review the code again, eh? :-)
<fwereade> rogpeppe, yeah :)
<fwereade> bac, so go ahead and propose -- maybe add a comment to that effect so it's clear for readers
<rogpeppe> fwereade: 1
<rogpeppe> +1
<fwereade> benji, ping
<benji> fwereade: hi
<fwereade> benji, https://codereview.appspot.com/7460047/ and https://code.launchpad.net/~benji/juju-core/1130173/+merge/151555 don't agree on whether it's been submitted -- would you look into it please?
<benji> sure; looking
<fwereade> benji, tyvm
<benji> fwereade: I figured it out.  I had a thinko in which I attempted to lbox propose the wrong branch and it reopened the merge proposal.  I verified that the code is indeed on the trunk and set the MP to Merged.
<fwereade> benji, cheers
<fwereade> niemeyer, btw you have a couple of approved branches
<dimitern> I have a panic on store/ tests in trunk, just pulled
<dimitern> anybody seen this? http://paste.ubuntu.com/5616924/ I'm just about to file a bug
<dimitern> it's intermittent - I run it again now and it's ok
<dimitern> I filed bug 1155681 for it
<_mup_> Bug #1155681: intermittent failure (panic) in store/ tests TestBlitzKey <intermittent-failure> <juju-core:New> < https://launchpad.net/bugs/1155681 >
<hatch> Is there anywhere that documents the relationship between a LP repo and CS path?
<niemeyer> fwereade: Thanks!
<dimitern> fwereade: moving f.SetCharm before fetching the charm in u.deploy() cause the steady state upgrade uniter tests to fail
<fwereade> dimitern, ah, damn, I expected we'd have missed one of them
<fwereade> dimitern, I presume they're just using the wrong things to wait for? or seriously broken?
<dimitern> fwereade: they're waiting in vain for some time and then it times out
<fwereade> dimitern, for what? paste me maybe?
<dimitern> fwereade: just looking at the log, trying to figure out when.. i'll paste it
<fwereade> dimitern, cheers
<dimitern> fwereade: http://paste.ubuntu.com/5617031/
<dimitern> fwereade: the funny thing (which I've seen before) is that after that test failed, the test runner seems stuck and no more test cases are executed (had to ^C it)
<fwereade> dimitern, well, that should also be fixed ;p
<fwereade> hatch, https://juju.ubuntu.com/docs/internals/charm-store.html has the "Publishing a Charm" section; and m_3 might be in a good position to tell you more
<hatch> fwereade: thanks - rogpeppe actually pointed me to the code responsible as well
<hatch> thanks
<dimitern> fwereade: i agree, but it's something to do with how the uniter tests are executed in general it seems - once all pass it's ok, most of the time when one fails others are not affected - it's probably due to the waiting code
<fwereade> dimitern, ISTM that the tests are waiting for the wrong behaviour -- we did change it a bit
<fwereade> dimitern, now an attempted upgrade will have the unit's charm set to the target url not the source
<dimitern> fwereade: yeah, i figured that much
<dimitern> fwereade: so we need to carefully refactor all upgrade tests
<dimitern> fwereade: *OR* just pass charm: 1 in waitUnit{} where it gets stuck
<fwereade> dimitern, yeah, if you look at the following test case I think it's clear the omission is just a bug
<dimitern> fwereade: :) comments are useful in these cases
<dimitern> fwereade: running the tests again now
<fwereade> dimitern, if you can think of a comment that would have helped, please feel free to add one :)
<dimitern> fwereade: well, not now - but before :)
<dimitern> fwereade: now all cases are similar (charm: 1) and all
<fwereade> dimitern, yeah, I think it was just a bug
<fwereade> dimitern, fundamentally racy, but we were lucky for some reason
<dimitern> fwereade: adding charm:1 fixed that test, there was another failure in error upgrade tests - same cause, fixed - running again
<m_3> hatch: currently lp:~<launchpad-id>/charms/precise/<charm-name>/trunk is the required format for it to show up in the store and then be deployed via `juju deploy cs:~<launchpad-id>/<charm-name>`
<hatch> yeah I am going to file a ticket about that - I would like to be able to deploy a charm right from the repo using an absolute path
<m_3> hatch: unless you have a compelling reason, stick with LTS charms atm please
<m_3> hatch: by all means... there was a bug for that somewhere, dunno if it made it to juju-core
<hatch> I'm working on the gui charm right now so requiring me to deploy to a specific branch isn't ideal - it's doable, just not ideal :)
<m_3> hatch: we often use a local repo... ~/charms/precise/mycharm is available via `juju deploy --repository ~/charms local:mycharm`
<benji> hatch: I'm pretty sure there is a way to deploy the charm against a particular branch of the GUI
<hatch> http://paste.ubuntu.com/5617026/
<hatch> is apparently the code in question
<hatch> doesn't look like it
<hatch> ohh gui yes
<hatch> charm no
<hatch> It's just a minor inconvenience that would be awesome if one could specify an absolute path to a remote branch
<m_3> +1
<bac> fwereade: thanks for looking.  i'm unclear about your explanation, though.
<fwereade> bac, ah, I'm sorry
<fwereade> bac, in setupScenario, we call EnterScope on a RelationUnit for wordpress/0
<fwereade> bac, this mimics the unit agent joining the relation
<fwereade> bac, and adds a reference to the relation
<fwereade> bac, I would not be opposed to a setupScenario tweak that immediately leaves scope after entering
<bac> so there are more parties to the relationship than just the services, thus causing it to hang around?
<bac> fwereade: ok, i'll investigate doing that
<fwereade> bac, the reason to enter/leave, rather than to do nothing, is because it's a subordinate relation: subordinate units are created on-demand, when there's a relevant principal participating in the relation
<rogpeppe> fwereade: i wouldn't mind that, i think.
<fwereade> rogpeppe, cool, I have done that in one or two places in the past
 * rogpeppe still doesn't really understand what "entering scope" implies
<fwereade> rogpeppe, becoming visible to ones counterparts
<fwereade> rogpeppe, by creating a document whose _id encodes what counterparts ought to be able to see it
<rogpeppe> fwereade: ... and creating subordinates, presumably
<fwereade> rogpeppe, yes: the txn also includes the creation of a relation unit settings doc; and sometimes a subordinate unit, when one is required for sanity and does not yet exist
<rogpeppe> fwereade: so that's not the usual way that subordinate units get created?
<fwereade> rogpeppe, that's the only way subordinate units get created
<rogpeppe> fwereade: ah, i misread "required for sanity" then
<dimitern> fwereade: whoohoo! all passed (after fixing 3 more places with the same issue)
<fwereade> rogpeppe, the idea is that each principal is entirely responsible for its own subordinate(s)
<fwereade> rogpeppe, the alternatives are icky
<mgz> go dimitern
<fwereade> rogpeppe, create 100k units when we add a relation?
<fwereade> dimitern, yay!
<rogpeppe> fwereade: yeah, seems reasonable.
<dimitern> mgz: I'm very nearly there - go func() {} - multitasking :)
<dimitern> fwereade: reproposed with the changes, if you think it's ready, I'll submit it (have 2 LGTM from TheMue already earlier)
<fwereade> dimitern, tbh that first LGTM is no longer valid, there have been a lot of changes
<fwereade> dimitern, better to work on merging it into the followup and shaking out what you can
<dimitern> fwereade: yeah, I know - I wanted to ask him again, but it seems he's off today
<fwereade> dimitern, yeah, back monday
<dimitern> rogpeppe: maybe you can take a look? https://codereview.appspot.com/7425044
<rogpeppe> dimitern: deep in debugging currently, but will do in a short while
<dimitern> rogpeppe: cheers
<rogpeppe> fwereade, dimitern: here's the next in line in the allWatcher branches; not too far off now: https://codereview.appspot.com/7815044
<dimitern> rogpeppe: will look in 5m
<rogpeppe> dimitern: now looking at your branch
<rogpeppe> dimitern: ta!
<dimitern> rogpeppe: it seems the diff is screwed
<dimitern> rogpeppe: (mine, that is) - will repropose it now
<rogpeppe> dimitern: thanks - was just about to say
<dimitern> rogpeppe: can you see it now?
<rogpeppe> dimitern: yup, thanks
<dimitern> I suspect this issue occurs when you bzr push (or lbox propose) from a subdir - but don't have enough info to confirm this
<rogpeppe> dimitern: reviewed
<rogpeppe> right, time for me to go
<dimitern> rogpeppe: tyvm
<rogpeppe> have a great w/e everyone
<dimitern> rogpeppe: g'nite!
<rogpeppe> dimitern: toi aussi!
#juju-dev 2013-03-17
 * thumper waits for davecheney
<thumper> hi davecheney
<thumper> davecheney: got time for a chat?
<thumper> davecheney: I want to set up live tests...
<davecheney> thumper: sure, lemmie get my headset
<davecheney> thumper: dialing ...
<thumper> davecheney: to what?
<davecheney> skype
<davecheney> go test launchpad.net/juju-core/environs/ec2 -amazon
<davecheney> i think ...
<davecheney> two secs
<davecheney> environs/ec2/suite_test.go
<davecheney> defines the -amazon test
<thumper> http://www.eaglegenomics.com/2011/05/elasticfox-aws-ec2-add-on-for-firefox-4-with-t1-micro-instance-support/
<davecheney> thumper: pass -test.timeout=900s
<davecheney> or 1200s
<thumper> davecheney: http://paste.ubuntu.com/5615616/
<davecheney> thumper:  https://s3.amazonaws.com/juju-dist/tools/mongo-2.2.0-quantal-amd64.tgz
<davecheney> ^ le mongo
#juju-dev 2014-03-10
<waigani> morning axw
<axw> waigani: morning
<waigani> can I ask you a quick golang question?
<axw> certainly
<waigani> I'm being stupid again (somewhere!): http://play.golang.org/p/lIn3HvcBNI
<waigani> I thought interface{} was like a type wildcard?
<axw> you can assign any type to interface{}, but this is different... *thinks how to phrase*
<davecheney> axw: you can assign any value to a variable of type interface{}
<axw> yes, value not type...
<davecheney> waigani: you can't do what you tried there because map[x]x and map[y]y are different types
<davecheney> in real terms, they have different layouts in memory
<davecheney> what you tried to do won't work because go doesn't have type polymorphism
<waigani> right so the map as a whole is taken as one type?
<axw> yes
<davecheney> yup
<davecheney> map, slice, array, chan
<waigani> hmmm
<waigani> so is there a smarter way to do what I was trying to do?
<axw> copy each key/value
<thumper> davecheney: https://code.launchpad.net/~thumper/juju-core/no-proxy/+merge/210098 for you local issue
<davecheney> thumper: ta
<axw> waigani: just do a range and assign each k/v
<davecheney> waigani: you can delete from a map as you range over it
<axw> waigani: what are you actually trying to do though?
<davecheney> you could write a very long complicated function that does that via reflect, or just write
<davecheney> for k := range m { if k == "whoot" { delete(m, k) } }
<waigani> right, so you're saying don't bother writing the generic function
<axw> pretty much
<waigani> okay
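The range-and-copy approach discussed above can be sketched as follows; the copyWithout helper is purely illustrative, not a juju or stdlib function:

```go
package main

import "fmt"

// copyWithout returns a copy of src, skipping the given key.
// A plain range loop is the idiomatic substitute for a "generic"
// map copy, since each map[K]V is its own distinct type.
func copyWithout(src map[string]int, skip string) map[string]int {
	dst := make(map[string]int, len(src))
	for k, v := range src {
		if k == skip {
			continue
		}
		dst[k] = v
	}
	return dst
}

func main() {
	m := map[string]int{"a": 1, "whoot": 2}
	fmt.Println(copyWithout(m, "whoot")) // map[a:1]
}
```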
<waigani> davecheney: "you can assign any value to a variable of type interface{}" - any value of any type? I'm missing something?
<waigani> davecheney: will that variable then have the type of the new value?
<davecheney> waigani: eh ?
<davecheney> var v interface {}
<davecheney> v = something
<davecheney> v's type remains interface{}
<waigani> okay, cool got you.
<waigani> thanks
<waigani> last question, if something was of type string and I tested v to be a string - would that be true or false?
<waigani> davecheney: ^
<axw> _, ok := v.(string)   -- ok will be true
<axw> if you assigned a string
<axw> the interface value remembers the type of the assigned value
<davecheney> waigani: http://play.golang.org/p/qsXniVhWSa
<davecheney> s/remembers/stores
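A minimal standalone version of the point axw and davecheney are making: the variable's static type stays interface{}, but the stored value keeps its dynamic type, which a type assertion checks:

```go
package main

import "fmt"

func main() {
	var v interface{}
	v = "hello" // v's static type is still interface{}; it now stores a string value
	s, ok := v.(string)
	fmt.Println(s, ok) // succeeds: the stored dynamic type is string
	n, ok := v.(int)
	fmt.Println(n, ok) // fails with the zero value: the stored type is not int
}
```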
<thumper> axw: standup
<axw> coming
<axw> storage, memory, same thing.
<davecheney> remembers == ooh spookey magic
<davecheney> stores == how it really works
<davecheney> :)
<thumper> waigani: https://launchpad.net/juju-core/+milestone/1.18.0
<waigani> thumper: thanks
<thumper> wallyworld_: here is the golxc branch https://code.launchpad.net/~thumper/golxc/clone/+merge/209790
<wallyworld_> ok
<wallyworld_> thumper: there are no tests for non nil extra args. should there be a test that at least checks that the args are assembled correctly ie "... extra args -- template args"
<thumper> hmm...
<thumper> ok
<thumper> wallyworld_: I wish I had the cleanup suite for that
<thumper> ...
<wallyworld_> thumper: do you agree we should have the extra test?
<thumper> I'd like to see a test that shows the command line being called
<thumper> we have some in juju
<thumper> where we patch PATH, and create a command that echoes the args
<thumper> so you can have a test that the command is being called with the right args
<wallyworld_> yeah, that's what i'd like
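The PATH-patching trick thumper describes can be sketched roughly like this; runWithFake and the fake lxc-create command are illustrative stand-ins, not juju's actual test helpers, and a POSIX shell is assumed:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// runWithFake installs a fake "lxc-create" that records its arguments,
// prepends its directory to PATH, runs the command, and returns what
// the fake was called with.
func runWithFake() (string, error) {
	dir, err := os.MkdirTemp("", "fakebin")
	if err != nil {
		return "", err
	}
	defer os.RemoveAll(dir)
	logFile := filepath.Join(dir, "calls.log")
	// The fake is a tiny shell script that appends its args to a log file.
	script := fmt.Sprintf("#!/bin/sh\nprintf '%%s\\n' \"$*\" >> '%s'\n", logFile)
	if err := os.WriteFile(filepath.Join(dir, "lxc-create"), []byte(script), 0o755); err != nil {
		return "", err
	}
	os.Setenv("PATH", dir+string(os.PathListSeparator)+os.Getenv("PATH"))
	// "Code under test": shells out to lxc-create; the fake intercepts it.
	if err := exec.Command("lxc-create", "-t", "ubuntu", "--", "extra").Run(); err != nil {
		return "", err
	}
	out, err := os.ReadFile(logFile)
	return strings.TrimSpace(string(out)), err
}

func main() {
	args, err := runWithFake()
	if err != nil {
		panic(err)
	}
	fmt.Println(args) // the fake saw: -t ubuntu -- extra
}
```

A test can then assert on the recorded argument string instead of actually creating containers.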
 * thumper thinks
<thumper> perhaps I should make "github.com/juju/testing"
<thumper> that has the cleanup suite and checkers from trunk
<wallyworld_> that would be useful
<wallyworld_> why should juju-core get all the toys
<thumper> well, more goose, goaws, etc
 * thumper goes to make the repo
<wallyworld_> that's what i mean - why should juju-core be the only project with the cool stuff
<thumper> haha
<thumper> I already made the repo
<thumper> just haven't stuck anything in there
<waigani> How do you get a logger.Warningf message in testing? I need to assert it.
<waigani> I  saw ctx := testing.Context(c), but it seems that captures the output from a cli command.
<waigani> here is my test: http://pastebin.ubuntu.com/7065333/
<waigani> c.GetTestLog() works! YAY :)
<thumper> wallyworld_: https://github.com/juju/testing
<thumper> doesn't have checkers yet
<wallyworld_> ok, give me a minute to finish a mp response comment
<wallyworld_> great. let the mass migration begin :-)
<thumper> that's fine, I'll look to add a function for the "make me a fake exec" that we've used a lot
<waigani> thumper: https://codereview.appspot.com/73360043
<waigani> thanks for ploughing through the monster branch axw
<axw> np
<waigani> thumper: https://codereview.appspot.com/73390043 (I've got a question in there)
<thumper> got a question where?
<thumper> oh, that's a different one...
<thumper> waigani: can I get you to consider suspend and resume commands?
<thumper> for juju-local
<waigani> thumper: okay ...
<thumper> waigani: :-)
<thumper> don't worry, it'll be fun
<thumper> although there are a few interesting edge cases
<waigani> it always is ..
<thumper> :)
<waigani> there always are ..
<waigani> thumper: did you want to put the details in the review?
<thumper> https://code.launchpad.net/~thumper/juju-core/autostart-containers-after-creation/+merge/210099
<thumper> waigani: this ^^ is the one we should land first
<waigani> thumper: I forgot to take out AutoRestart() - I was just using it to debug/test
<waigani> thumper: are you asking me to review your branch, I'm not sure if I'm ready for that :/
<thumper> waigani: no
<thumper> waigani: I'm asking you to wait :-)
<thumper> wallyworld_: can I get you to look at this? https://code.launchpad.net/~thumper/juju-core/no-proxy/+merge/210098
<wallyworld_> sure
<wallyworld_> thumper: i would have looked but i thought you were getting someone else to look
<thumper> wallyworld_: looks like dave didn't
<wallyworld_> np
<dimitern> jam, hey, i'll be 10m late for the 1-1 sorry
<dimitern> axw, hey
<dimitern> axw, what do you suggest as an appropriate place for these 2 scripts? utils/ssh/testing ?
<axw> dimitern: hey. I think it should be close to environs/manual, if not inside
<axw> dimitern: why not just expose the scripts from environs/manual, and add the testing code in environs/manual/testing?
<dimitern> axw, ok, i'll try that - i was having issues with import loops
<axw> ah
<axw> I did try that at one point, maybe that's why I didn't finish it.. can't remember now
<dimitern> axw, the whole point of the change was to get rid of gocheck deps in production code - because of that we were importing gocheck's gnuflags, and since they're global juju commands were affected
<axw> oops :\
<jam> dimitern: I actually didn't expect you on today, since you get 1-2 days off for sprinting
<axw> dimitern: I understood the intent (didn't realise that problem), the only issue I have is putting non-ssh-related things in utils/ssh
<dimitern> jam, oh, i wasn't sure about that :) i actually did some work yesterday, but perhaps I'll take a swap day this friday
<dimitern> jam, should we have the 1-1 anyway?
<jam> dimitern: we can if there's stuff you want to talk about, we did just see each other 3 days ago :)
<dimitern> jam, yep, let's skip it then :)
<dimitern> wwitzel3, hey, just so you know - it's not needed to call lbox propose after each commit, what I usually do is go review the comments, do them all, answer them, doing one or more commits and finally do lbox propose (without posting the draft comments, as lbox does that for you)
<wwitzel3> dimitern: yeah, it wasn't intentional .. I just kept seeing things I wanted to fix after I self-reviewed it.
<dimitern> wwitzel3, i know the feeling :)
<dimitern> fwereade, wwitzel3, voidspace, mgz, jam, standup?
<jam> dimitern: I think most people are away, but I'll join you in just a sec
<wwitzel3> dimitern: I can't seem to get in to the call
<dimitern> jam, yeah, i guess so - just pinging
<mgz> I'll be in
<voidspace> dimitern: coming
<voidspace> I need to find headset though
<natefinch> wwitzel3: it can be tricky, try logging out of your gmail and log into canonical
<wwitzel3> natefinch: I did that, I even opened up a private browser session and tried that.
<voidspace> according to hangouts I'm the only person here
<voidspace> am I in the right place?
<natefinch> voidspace: nope
<dimitern> wwitzel3, are you logging in with your @canonical account?
<natefinch> voidspace:  retry
<voidspace> natefinch: I used the link from the calendar
<voidspace> will retry
<natefinch> voidspace:  thats the one, but sometimes it's finicky
<voidspace> ah, so dimitern sent me a link
<voidspace> it says "you're not allowed to join this video call"
<voidspace> from the calendar it uses my canonical id - but goes to the wrong place
<dimitern> voidspace, using your @canonical account?
<voidspace> from a direct link it uses the wrong account
<voidspace> dimitern: well, I'm logged into both
<voidspace> I will try a different browser
<dimitern> voidspace, log off from both and login in the @c first, that way it's the default
<voidspace> dimitern: I am logged into multiple google services - making canonical my default would be very inconvenient :-)
<voidspace> I'll use another browser
<dimitern> voidspace, yeah, that's the alternative
<voidspace> for online services the calendar links always worked fine so I didn't need to
<voidspace> dimitern: natefinch: same result - only logged into Canonical
<voidspace> "you aren't allowed to join this video call"
<dimitern> voidspace, i'll invite you
<voidspace> going to plus.google.com shows me definitely logged in with my canonical id
<voidspace> dimitern: cool - thanks
<voidspace> yay
<voidspace> no audio though, fiddling with the settings :-)
<voidspace> I have audio
<voidspace> :-)
<wallyworld_> mgz: did my email about joyent make sense?
<mgz> wallyworld_: I must confess to having opened it in a tab, gone, "that's long, I'll come back to it later"... and not having read it properly yet
<wallyworld_> lol, np :-)
<wallyworld_> tl;dr; we still have work to do
<mgz> okay, so, it's actually pretty good progress
<wallyworld_> i got it starting a bootstrap instance but ssh ing in to run the scripts fails
<wallyworld_> and i had to hack the header date
<mgz> you had to fix a bunch of things, but are at the bootstrap starts an instance stage, next is working out why ssh in doesn't work
<wallyworld_> otherwise it barked with a timestamp skew error
<wallyworld_> i also didn't see any tools metadata issue, which was the reason i was asked to help
<wallyworld_> but yes, it's progress :-)
<wwitzel3> added a test for TargetRelease if someone can take a look https://codereview.appspot.com/72270044/
<mattyw> fwereade, ping?
<natefinch> mattyw: pretty sure he's out today
<mattyw> natefinch, ah ok - no problem, thanks
<natefinch> mattyw: welcome
<mattyw> natefinch, in one of my code reviews fwereade suggested that I should pull out the admin user name "admin" into a const - I was going to ask if he had any idea where it should live - do you have any opinion?
<natefinch> mattyw: depends on where and how it is used
<mattyw> natefinch, I'm working on api calls to add/remove users - but there are places where the admin user is going to be a special case at the moment
<mattyw> natefinch, it's only related to the apiserver at the moment I think
<natefinch> mattyw: if it's only used in a single package, just make it a non-exported const for now, it just makes it easier to refactor, and make sure it doesn't get typoed.
<mattyw> natefinch, do you have much experience with the client side of the api? do you know what william might mean by unpacking the error?: https://codereview.appspot.com/61620043/patch/110001/120007
<natefinch> mattyw: I'm not that familiar with the client API code, but looking at other examples, it looks like if you expect a maximum of one error from the api, you can call OneError() on the params.ErrorResults that is returned, and it'll return an error interface, which you can then return from your function
<natefinch> mattyw: like this: func (m *Machine) EnsureDead() error {
<natefinch> 	var result params.ErrorResults
<natefinch> 	args := params.Entities{
<natefinch> 		Entities: []params.Entity{{Tag: m.tag}},
<natefinch> 	}
<natefinch> 	err := m.st.caller.Call("Machiner", "", "EnsureDead", args, &result)
<natefinch> 	if err != nil {
<natefinch> 		return err
<natefinch> 	}
<natefinch> 	return result.OneError()
<natefinch> }
<mattyw> natefinch, the OneError() function returns an error if there's not one error in the result though doesn't it - so if there are > 1 errors (because it's part of a bulk call) then you won't get the error you expect?
<natefinch> mattyw: yeah. it's only valid for non-bulk calls.. .but it looks like add user and remove user are just single calls
<mattyw> natefinch, I guess they are actually - the server supports bulk calls but the client only allows single calls at the moment
<mattyw> natefinch, that hadn't actually crossed my mind - thanks for helping out
<natefinch> mattyw: Probably best to just follow the pattern the other API methods use.
<mattyw> natefinch, that's normally a good plan
<jamespage> fwereade, sinzui: gonna give me a clue as to when I might expect to have to go ask the release team for a freeze exception for 1.18?
<sinzui> jamespage, 4 to 9 days. We will release 1.17.5 in a 1+ days. If we think it is stable enough, it becomes 1.18.0
<jamespage> sinzui, ok
<sinzui> jamespage, I don't think it will be, I still see regressions, So I believe 1.18.0 will be 1.17.5 + some discrete fixes
<jamespage> sinzui, ack
<sinzui> which reminds me, time to rename 1.18.0 to 1.20.0. The new 1.18.0 will have only the crucial regressions and tests targeted
<wwitzel3> I just spent 50 minutes on a failing test that was = vs := .. because := inside the for loop was shadowing the function-scoped variable of the same name. lolcry
<natefinch> wwitzel3: you'll get used to the := vs. = .... it's tricky at first, but after a while it becomes a lot more obvious, and you just pay more attention to := inside conditionals.
<wwitzel3> natefinch: yeah, I figured as much, it just made me feel dumb when I finally noticed it.
<natefinch> wwitzel3: pretty much everyone who writes Go code has done the same thing one time or another.
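The shadowing bug wwitzel3 hit can be reproduced in a few lines; the shadowed/fixed function names are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

func shadowed() error {
	var err error
	for i := 0; i < 3; i++ {
		// Bug: ":=" declares a NEW err scoped to the loop body,
		// shadowing the outer err, which therefore stays nil.
		err := errors.New("boom")
		_ = err
	}
	return err // always nil
}

func fixed() error {
	var err error
	for i := 0; i < 3; i++ {
		err = errors.New("boom") // "=" assigns to the outer err
	}
	return err
}

func main() {
	fmt.Println(shadowed(), fixed()) // <nil> boom
}
```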
<thumper> mramm: morning
<thumper> mramm: seems like you had daylight savings
<thumper> 8am isn't such a good time for me :)
<mramm> sure
<mramm> I have another meeting in 15 min
<mramm> can we push ours back another hour?
<mramm> to 1:15 from now
<mramm> thumper: ^^^^
<thumper> sounds good
<natefinch> thumper: daylight savings time in the spring is nice.  I get up at the same time, but the clock says it's an hour later.
 * thumper growls
<thumper> 10 minutes chasing a dumb bug because I used "print" in the bash script instead of "printf"
<natefinch> haha
<natefinch> thumper: ..... bash script ....      <- there's your problem
<hatch> the bigger problem there is 'bash script' ;)
<hatch> lol ^5
<natefinch> nice
<thumper> it is being tested in Go, does that help?
<hatch> :)
<natefinch> thumper: not really
<thumper> haha
 * thumper ignores natefinch
<natefinch> :)
<mwhudson> could be worse, could be csh!
<mwhudson> or perl
<natefinch> thumper: serious question, though.  Any idea why I wouldn't be able to connect to an Amazon juju environment?  bootstrap says it's timing out trying to ssh in.. though if I manually make an amazon instance I can ssh in just fine.
<thumper> natefinch: nope, sorry
<dimitern> thumper, hey, I've got a review for you when you have 15m :) https://codereview.appspot.com/72860045/
 * dimitern steps out
 * natefinch EODs
<thumper> wallyworld: can you kick the bot?
<wallyworld> ok
<wwitzel3> thumper: you know why this isn't being picked up by the bot? https://code.launchpad.net/~wwitzel3/juju-core/lp-1289316-lxc-maas-precise/+merge/209974 ?
<thumper> bot is fubard
<wwitzel3> ahh
<thumper> wwitzel3: wallyworld was going to kick it
<wwitzel3> thumper: I thought it might have fixed itself since I saw the merge for your branch.
<thumper> oh, yay it landed
<wwitzel3> :)
 * thumper actually looks
<thumper> wwitzel3: it takes a few minutes for the tests to run
<thumper> perhaps 15-20
<thumper> so wait a bit
<thumper> it was fubared
<wwitzel3> wwitzel3: Ok, I set it approved a couple hours ago, didn't know the bot was fubar then
<wwitzel3> oops thumper
<wallyworld> thumper: i did kick it
<thumper> yeah I see that
<thumper> thanks
<thumper> wallyworld: wwitzel3 didn't realize and was waiting for his branch to land
<wwitzel3> wallyworld: it seems to be working now
<wallyworld> ah ok
<wallyworld> \o/
<wwitzel3> thumper: I am getting a failure on the bot, juju-core/replicaset: error getting replset config : no reachable server
<wwitzel3> thumper: but I am unable to replicate that behavior locally
 * thumper sighs
<thumper> yeah
<thumper> intermittent failure
<thumper> I get it sometimes
<thumper> you have two choices
<thumper> fix it
<wwitzel3> thumper: ok, I will ticket it or + the bug
<thumper> or click approve
<wwitzel3> thumper: click approve where? I've created a bug for the intermittent failure and I'm going to approve this.
<thumper> on the merge proposal near the top, from needs-review to approved
<wwitzel3> thumper: ahh ok, just to put back in the queue
 * thumper nods
#juju-dev 2014-03-11
<thumper> wallyworld: I finally have tests for the golxc clone stuff
<wallyworld> yay
<thumper> wallyworld: required me putting the checkers into "github.com/juju/testing"
<thumper> slightly earlier than I thought
<thumper> but, hey, it needed doing
<wallyworld> not a bad thing
<thumper> right
<thumper> just pushing now
<thumper> wallyworld: https://code.launchpad.net/~thumper/golxc/clone/+merge/209790 is now up to date
<wallyworld> ok
<thumper> it isn't perfect, but better than nothing IMO
<thumper> wallyworld: ta
<wallyworld> np
<thumper> wallyworld: also, I have another that you reviewed first, and I moved stuff
<wallyworld> ok
<thumper> wallyworld: https://codereview.appspot.com/72210043/
<thumper> wallyworld: I think it was me moving the lsb-release function
<thumper> which I did
<wallyworld> ok
<wallyworld> thumper: how about extracting a common method to parse the lsb-release file, taking as a param the name of the field (DISTRIB_CODENAME or DISTRIB_RELEASE) to extract
<thumper> wallyworld: seriously?
 * thumper grumbles
<wallyworld> well it is cut and paste code
<wallyworld> but feel free not to i guess
<wallyworld> your call
 * thumper will do it if we touch a third time, how's that?
<wallyworld> ok
<wallyworld> suppose
<thumper> geez
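[editor's note] The extraction wallyworld suggests might look like this. It is a self-contained sketch: the field name is a parameter, and the lsb-release content is inlined as a string where the real code would read `/etc/lsb-release`.

```go
package main

import (
	"fmt"
	"strings"
)

// Sample /etc/lsb-release content, inlined so the sketch is self-contained.
const lsbContent = "DISTRIB_ID=Ubuntu\nDISTRIB_RELEASE=14.04\nDISTRIB_CODENAME=trusty\n"

// lsbField extracts one KEY=value field from lsb-release style content.
// The field name (DISTRIB_CODENAME or DISTRIB_RELEASE) is a parameter,
// so both call sites share the parsing logic instead of cut-and-paste.
func lsbField(content, field string) string {
	prefix := field + "="
	for _, line := range strings.Split(content, "\n") {
		if strings.HasPrefix(line, prefix) {
			// Values may be quoted, e.g. DISTRIB_DESCRIPTION="Ubuntu 14.04"
			return strings.Trim(strings.TrimPrefix(line, prefix), `"`)
		}
	}
	return ""
}

func main() {
	fmt.Println(lsbField(lsbContent, "DISTRIB_CODENAME")) // trusty
	fmt.Println(lsbField(lsbContent, "DISTRIB_RELEASE"))  // 14.04
}
```

Whether this is worth doing for two call sites is exactly the judgment call in the exchange above; thumper's "third time we touch it" rule is a common compromise.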
<davecheney> wut ?!?
<davecheney> http://paste.ubuntu.com/7070987/
<thumper> davecheney: which provider?
<axw> davecheney: in environs/tools/tools.go, does "toolsConstraint.Arches" contain ppc64?
<axw> thumper: that bit should be provider independent I think, as it's uploading tools
<thumper> axw: the reason we use the combined output is for testability
<thumper> axw: we have the hook thing to replace the output
<thumper> axw: it doesn't do one for stdout/stderr just yet
<axw> thumper: I guess that's fine since you should never have stdout & stderr at the same time, though I still think it'd be good to add the string to the error
<thumper> I agree it would be good
<thumper> perhaps we add a new test function :-)
<axw> thumper: I was thinking back to ssh things, but in that case stdout/stderr were mixed - not relevant here
 * thumper nods
<thumper> I did steal one of your ideas though...
<thumper> https://github.com/juju/testing/blob/master/cmd.go, with PatchExecutableAsEchoArgs and AssertEchoArgs
<axw> cool :)
<thumper> waigani: standup
<thumper> axw: could I get you to take a look at this too? https://codereview.appspot.com/73300043/
 * thumper goes to review ian's branch
<axw> sure
<davecheney> axw: local provider
<davecheney> hang on
<davecheney> maybe my ppc change didn't land
 * davecheney checks
<davecheney> axw: thumper could I get a set of eyeballs on https://codereview.appspot.com/69980044/
<axw> sure, just after thumper's one
<davecheney> kk
<axw> davecheney: it's already reviewed - anything in particular you want looked at?
<axw> or it's landed and still not working?
<axw> no not landed...
<davecheney> not landed, should be trivial review
<axw> thumper: reviewed yours
<thumper> axw: ta
<axw> davecheney: "uname -m" returns what?
<davecheney>  uname -m
<davecheney> ppc64le
<davecheney> WHAT!
<davecheney> we've all been calling this ppc64el !!
<davecheney> hmm, might have to fsck with that regex
<davecheney> going to whinge in a channel
<davecheney> bbsd
<axw> davecheney: el is just for dpkg I think, kernel calls it le (which makes more sense to me...)
<mwhudson> el is a funny computron joke, right?
<mwhudson> at least i always assumed that was the explanation for armbe/armel
<axw> hah
<axw> never thought of it like that :)
<davecheney> yup, that is the joke
<axw> so blame it on humourless kernel developers ;)
<davecheney> i think it comes from byte order marks
<davecheney> but probably goes back decades before that
<davecheney> mwhudson: i'm going to complain to those guys in that channel about this
<davecheney> the archive is called ppc64el ffs!
<mwhudson> davecheney: my side of the fence just has aarch64 vs arm64, at least they are easier to tell apart :-)
<axw> davecheney: also, the GOARCH is ppc64?
<axw> heh
<mwhudson> davecheney: i don't actually know which channel you mean, but have fun :)
<davecheney> right, as expected i've been told i'm holding it wrong
<davecheney> mwhudson: axw sorry mate, https://bugs.launchpad.net/juju-core/+bug/1290654
<_mup_> Bug #1290654: juju must not rely on the output of uname -m <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1290654>
<axw> okey dokey
<axw> davecheney: although, we may want to support non-dpkg OSes at some point, which renders that bug invalid
<davecheney> axw: please don't shoot the messenger, i'm as unhappy about this as you are
<davecheney> axw: given how non dpkg os's keep getting pushed off indefinitely, lets leave that til it actually is a problem
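[editor's note] The naming clash behind bug #1290654 - the kernel reports `ppc64le` while the Ubuntu archive and dpkg call it `ppc64el` - is the kind of thing a small normalization table handles. This is an illustrative sketch, not juju's actual code:

```go
package main

import "fmt"

// unameToUbuntuArch maps `uname -m` output to the Ubuntu/dpkg
// architecture name. The kernel reports little-endian POWER as
// "ppc64le", but dpkg and the archive use "ppc64el"; similar
// renames apply to the x86 and ARM names.
var unameToUbuntuArch = map[string]string{
	"x86_64":  "amd64",
	"i686":    "i386",
	"aarch64": "arm64",
	"ppc64le": "ppc64el",
}

// ubuntuArch normalizes a `uname -m` value, passing through anything
// it does not recognize.
func ubuntuArch(uname string) string {
	if arch, ok := unameToUbuntuArch[uname]; ok {
		return arch
	}
	return uname
}

func main() {
	fmt.Println(ubuntuArch("ppc64le")) // ppc64el
	fmt.Println(ubuntuArch("x86_64"))  // amd64
}
```

As axw notes, this bakes in the dpkg naming; supporting non-dpkg OSes would mean choosing the target naming scheme per platform rather than hardcoding one table.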
<davecheney> so, the fire alarm has been going off for quite a while now
<davecheney> nobody appears too concerned
<thumper> axw: trying to use utils.CopyFile, but it makes my test fail :-(
<thumper> axw: http://paste.ubuntu.com/7071362/
<thumper> axw: without that diff, tests pass, with that diff, fail
<thumper> http://paste.ubuntu.com/7071364/
<axw> thumper: back to front?
<axw> dest first
<thumper> ah...
<thumper> ffs
<thumper> why?
 * axw shrugs
<axw> asm? :)
<thumper> well that fixed it :-)
<axw> wallyworld_: oops, thanks for picking up the copy and paste error :)
<wallyworld_> np :-)
<axw> wallyworld_: if I were to make a helper for the path, then probably all the tests should change. I'd rather leave that to another MP if you don't mind...
<wallyworld_> sure
<axw> need to do a bunch of tidying up I think
<wallyworld_> yeah
<thumper> grr!!!!!
<thumper> my branch has failed to land about 5 times so far
<thumper> with different intermittent failures
<thumper> All related to mgo starting AFAIKT
<thumper> s/K/C/
<wallyworld_> been there done that :-(
<thumper> no idea what causes the failures
<wallyworld_> thumper: you happy with the changes to my branch now?
<thumper> wallyworld_: just looking
<wallyworld_> ok :-)
<thumper> emailed
<axw> wallyworld_: is the juju bot running tests for gwacl? seems to be lacking a branch for gocheck
<wallyworld_> axw: it was. but maybe when it was set up again that was left out
<wallyworld_> thumper: ta. yes, find tools will change for sure i think. but there's an api now :-)
<thumper> right
<thumper> axw: https://codereview.appspot.com/73300043 in order to better match the config file, I also moved the mounting of the logdir to after creation (it needed to be done anyway)
<axw> sounds good, looking
<axw> wallyworld_, thumper: I don't have the bot creds, would one of you please branch launchpad.net/gocheck into  /home/tarmac/gwacl-trees/src/launchpad.net/gocheck on the bot?
<wallyworld_> ok
<axw> thanks
<thumper> I don't have creds either :-(
<thumper> wallyworld_: while you're there, care to "go get github.com/juju/testing" ?
<wallyworld_> suppose
<thumper> ta
<thumper> omg, my branch landed sixth time lucky
<wallyworld_> axw: done
<axw> thanks wallyworld_
<wallyworld_> np
<wallyworld_> thumper's is done too
<thumper> wallyworld_: you the man!
<wallyworld_> sometimes
<thumper> axw: are you ok with me using the combined output for testing purposes? https://codereview.appspot.com/73310043/
<thumper> wallyworld_: meaning sometimes you are the woman?
<wallyworld_> why yes
<wallyworld_> how did you know
<axw> thumper: yeah sorry, I will lgtm
<thumper> ta
<davecheney> axw: i'd really like to land this sukka today, https://codereview.appspot.com/69980044/
<axw> davecheney: looking
<axw> davecheney: lgtm
<davecheney> axw: ta
<davecheney> landing
<jam> wwitzel3, thumper: FWIW when you get a test suite failure about replicaSet, that *usually* is what breaks the bot (I believe the test is waiting for mongod to start, but times out and then leaves it alive, which leaves the flock open which blocks future test runs)
<davecheney> or not ... replicaset_test.go:223: c.Assert(err, gc.IsNil)
<davecheney> ... value *errors.errorString = &errors.errorString{s:"Error getting replset config : no reachable servers"} ("Error getting replset config : no reachable servers")
<jam> davecheney: known intermittent test failure, you should reapprove, I'll check that the bot is running ok
<jam> davecheney: maybe you already did, since the bot is working on that branch as of :37 which is just after you commented
<rogpeppe> mornin' all
<dimitern> rogpeppe, mornin :)
<rogpeppe> dimitern: hiya
<dimitern> rogpeppe, can you take a look at this? https://codereview.appspot.com/72860045/
<rogpeppe> dimitern: will do in a little bit
<dimitern> rogpeppe, ta
<rogpeppe> dimitern: (when i've had more than 20 seconds to catch up on my email :-])
<dimitern> :) sure
<dimitern> rogpeppe, ping
<rogpeppe> dimitern: i'm looking at your review now, BTW
<voidspace> how do I update dependencies.tsv?
<voidspace> when adding github.com/juju/ratelimiter as a new dependency
<rogpeppe> voidspace: go get launchpad.net/godeps
<voidspace> rogpeppe: running
<rogpeppe> voidspace: then make sure that all your deps are currently up to date, by running godeps -u dependencies.tsv
<rogpeppe> voidspace: then run godeps -t $(go list launchpad.net/juju-core/...)  > dependencies.tsv
<voidspace> rogpeppe: and that has added the new dependency
<voidspace> rogpeppe: thanks
<rogpeppe> voidspace: cool
<voidspace> and changed the order of another in the list
<voidspace> right, I need to stash that instruction somewhere
<rogpeppe> voidspace: the godeps -t line is in the CONTRIBUTING file
<voidspace> rogpeppe: ah, I read the README but not that one
<voidspace> rogpeppe: my bad, thanks
<rogpeppe> dimitern: in general, i would have much preferred it if you had separated that large CL into several smaller ones
<voidspace> I've finally got vim killing trailing whitespace on save
<rogpeppe> voidspace: some people find it nicer just to gofmt on save
<voidspace> rogpeppe: that's not a bad call
<rogpeppe> voidspace: or, better, run goimports on save (that way you rarely have to worry about adding or removing imports)
<voidspace> rogpeppe: does goimports run go fmt too?
<rogpeppe> voidspace: yes
<voidspace> rogpeppe: sounds like the option to go for then
<rogpeppe> voidspace: (it kind of *is* gofmt, but with additional import-related functionality)
<voidspace> rogpeppe: right
<rogpeppe> voidspace: in general, any tool that transforms go source code will do the equivalent of gofmt on output
<voidspace> rogpeppe: this one? https://godoc.org/code.google.com/p/go.tools/cmd/goimports
<voidspace> rogpeppe: there seem to be several forks
<rogpeppe> voidspace: yup
<rogpeppe> voidspace: that's the canonical one
<dimitern> rogpeppe, I know, I was thinking of that
<voidspace> rogpeppe: cool, thanks
<dimitern> jam, rogpeppe, if we're dropping 1.14 compatibility, then bug 1235217 can be closed and NewEnvFromName code simplified
<_mup_> Bug #1235217: old environments should be given .jenv files <jenv> <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1235217>
<mattyw> dimitern, rogpeppe this merge proposal: https://codereview.appspot.com/51450047/ has totally changed. The meaning of the juju login and juju whoami commands has changed totally in the last couple of days so it's been decided to remove them from this mp - and by some accident of history there should also be a fix for https://bugs.launchpad.net/juju-core/+bug/1285256 in it - do you think I should restart the codereview - or keep it as it
<mattyw> is?
<_mup_> Bug #1285256: bootstrapping juju from within a juju deployed unit fails <bootstrap> <juju-core:Triaged> <https://launchpad.net/bugs/1285256>
<rogpeppe> mattyw: i'll have a look when i've finished the review i'm on
<mattyw> rogpeppe, many thanks
<rogpeppe> dimitern: could you explain to me the reasoning behind the configstore.Exists function, please?
<dimitern> rogpeppe, that's the way we detect if the env is bootstrapped early, so we can report it consistently across all commands except bootstrap and sync-tools
<rogpeppe> dimitern: it's only used in one place, right? (in juju.newAPIClient)
<dimitern> rogpeppe, well yes
<rogpeppe> dimitern: why not just do a ReadInfo in that place?
<dimitern> rogpeppe, we might not have a store yet to ReadInfo from
<rogpeppe> dimitern: how could that happen?
<rogpeppe> dimitern: the only way that i could see is if ~/.juju doesn't exist yet, but that should not happen
<dimitern> rogpeppe, well i'd like to be defensive there
<rogpeppe> dimitern: i don't understand
<rogpeppe> dimitern: if we couldn't create a config store object, how could we ever write an info?
<rogpeppe> dimitern: if the NewDisk call fails, we'll return with an error anyway, which is sufficiently defensive, i think
<dimitern> rogpeppe, Default() will panic if there's no JujuHome set, which is awkward
<rogpeppe> dimitern: huh?
<rogpeppe> dimitern: Exists panics in that case too
<rogpeppe> dimitern: if juju home isn't set, we *want* to panic - it's something that every juju client program should do
<rogpeppe> dimitern: Exists is also wrong in this case because it uses configstore.Default, which at some point in the future may not return a disk-based configstore
<dimitern> rogpeppe, exists can be patched for tests
<rogpeppe> dimitern: that is, it assumes that configstore.Default returns the same thing as configstore.NewDisk(osenv.JujuHome()), which is breaking the Default abstraction
<dimitern> rogpeppe, and let's worry about default not returning a disk-based store when we get there
<rogpeppe> dimitern: let's not willfully break abstractions that don't need to be broken, please
<rogpeppe> dimitern: i don't understand why we need to patch Exists.
<rogpeppe> dimitern: i don't understand the PatchValue in bootstrapEnv
<rogpeppe> dimitern: it's easy to "patch" Exists without diving under the covers - we can just create the environ info
<wallyworld_> we having a standup? I'm more than happy to skip it :-)
<rogpeppe> wallyworld_: probably
<rogpeppe> wallyworld_: joining
<jam> mgz: poke for standup ?
<jam> wallyworld_: ^^ ?
<wallyworld_> jam: i'm there
<wallyworld_> wrong hangout maybe
<jam> wallyworld_: probably, as I don't see you in here
<jam> wallyworld_: https://plus.google.com/hangouts/_/calendar/am9obi5tZWluZWxAY2Fub25pY2FsLmNvbQ.mf0d8r5pfb44m16v9b2n5i29ig
<voidspace> rogpeppe: a bad photo
<voidspace> rogpeppe: https://www.dropbox.com/s/tkmkun6etxju83l/my-setup.jpg
<rogpeppe> voidspace: nice!
<voidspace> rogpeppe: room for a couple more if I reorganise... ;-)
<jam> voidspace: you have a 7th mini monitor if I see it correctly
<voidspace> jam: I have a chumby clock
<jam> voidspace: one exists inside my home somewhere
<voidspace> jam: which is not really a monitor as I only ever run the clock app...
<voidspace> jam: heh :-)
<mattyw> voidspace, is that a kinesis keyboard I see?
<voidspace> mattyw: yep, kinesis advantage. My favourite keyboard of all time. :-)
<mattyw> voidspace, I've always wanted to give one  a go - but could never find someone who had one to try out - and I don't feel I can justify the cost
<mattyw> ^^ unless I knew I was going to be able to use it
<voidspace> mattyw: yep, they're ridiculously expensive - but like a good chair, worth the investment
<voidspace> mattyw: I tried one at PyCon one year
<voidspace> mattyw: you *have* to touch type (or have a miserable time)
<voidspace> mattyw: but if you can touch type (or mostly touch type - it will make you learn fast) then they're awesome
<voidspace> mattyw: reach every key without moving your hands
<voidspace> mattyw: just rest your palms on the keyboard (rests provided) and the "well" design means you can reach every key with your fingers
<mattyw> voidspace, hopefully I'll be able to try yours at some point - I was convinced to buy a mechanical keyboard about 12 months ago - the perfect soundtrack
<voidspace> mattyw: when I travel I tend to bring my kinesis split keyboard instead
<voidspace> mattyw: the advantage is a bit expensive and bulky to travel with
<voidspace> mattyw: but yes, the mechanical switches make them very nice to type with
<voidspace> but a bit clacky
<mattyw> is fwereade around today?
<rogpeppe> mattyw: i don't think so
<jam> wallyworld_: you have a review for https://code.launchpad.net/~wallyworld/juju-core/simplestreams-ordering/+merge/210324
<wallyworld_> thanks
<wwitzel3> jam: on the verbose output / fix for the replica set , I didn't see a reitveld , should I just comment on the diff in lp?
<jam> wwitzel3: yeah, I didn't get lbox setup on the new machine, so just LP is fine
<wallyworld_> jam: i think sorting is not strictly necessary but "nice"
<wallyworld_> hence i put it in the code
<voidspace> right, coffee
<voidspace> mattyw: Will won't be around until Thursday
<dimitern> rogpeppe, are you still on it? :)
<rogpeppe> dimitern: yeah
<rogpeppe> dimitern: i'll publish my comments so far though
<dimitern> rogpeppe, thanks
<rogpeppe> dimitern: why the new manual/testing package?
<dimitern> rogpeppe, to drop gocheck imports in real code
<rogpeppe> dimitern: why not just put the code inside a _test.go file inside environs/manual?
<dimitern> rogpeppe, because it's used in environs/manual and provider/manual tests
<rogpeppe> dimitern: really?
<rogpeppe> dimitern: i only see a manual/testing import inside environs/manual tests
<dimitern> rogpeppe, sorry, not in provider/manual - let me take a look
<dimitern> rogpeppe, it seems that's true now, ok will move it in environs/manual/fakessh_test.go
<rogpeppe> dimitern: thanks
<jam> natefinch, rogpeppe: can we have a hangout later tonight to chat about what we need to do for HA ?
<rogpeppe> jam: sure
<rogpeppe> jam: what time's good for you?
<jam> I have family time until about 16:00 UTC, so after that would be ok
<jam> rogpeppe: would 16:00 work for you? I think it shouldn't be a problem for Nate
<rogpeppe> jam: seems fine for me.
<rogpeppe> jam: i guess it might be lunchtime for nate
<natefinch> rogpeppe, jam:  1600 is a little tricky because my wife will be getting my daughter from preschool, so I'll have the baby. but 16:30 would work.
<jam> natefinch: fine for me
<jam> you can also meet with one hand, right? :)
<jam> natefinch: rogpeppe: bumped to 16:30
<rogpeppe> jam, natefinch: SGTM
<natefinch> jam: cool
<natefinch> jam: sometimes the kids require three hands :)
<rogpeppe> dimitern: rest of review done
<dimitern> rogpeppe, tyvm
<rogpeppe> dimitern: np
<voidspace> rogpeppe: why did you make ratelimit.New Capacity an int64 - expecting some *really* big buckets?
<rogpeppe> voidspace: because i wanted to be able to use it for bytes-per-second transfer rates
<rogpeppe> voidspace: and 4GB per second isn't beyond the bounds of possibility
<voidspace> rogpeppe: heh, ok
<voidspace> rogpeppe: so the answer is yes
<rogpeppe> voidspace: :-) yeah
<sinzui> Looks like bootstrap is broken on precise
<sinzui> aws and hp fail
<sinzui> as does azure
<sinzui> trusty passed
<natefinch> sinzui: how is it failing?
<sinzui> Not much information http://ec2-54-84-137-170.compute-1.amazonaws.com:8080/job/aws-deploy/880/console
<sinzui> hp is no better http://ec2-54-84-137-170.compute-1.amazonaws.com:8080/job/hp-deploy/822/console
<sinzui> I will try to gather a --debug level bootstrap
<natefinch> sinzui: I suspect those are the failures I was seeing on trusty yesterday
<sinzui> but trusty passed on CI
<natefinch> sinzui: it fails for me, though.  I wonder why.  I had the same bootstrap failed: rc: 1  error
<sinzui> interesting
<sinzui> I am going to wait for CI to finish. I will know all the places it failed
<natefinch> sinzui: for me, I did some debugging and it was timing out trying to ssh into the instance... I don't know if you're seeing the same problem
<natefinch> sinzui: I had to add quite a bit of additional code to figure that out, since that RC 1 error is not exactly informative as-is
<sinzui> natefinch, was it a timeout, or do you believe there were network issues preventing success
<natefinch> sinzui: it was a timeout.  I was able to manually create an amazon instance and ssh into it.  I'm about to try juju bootstrapping a machine and then manually ssh'ing into it.
<sinzui> oh, even manual deploy failed
<natefinch> sinzui: weird, yeah, I can ssh into the machine just fine, but juju's attempt fails.... this call is essentially what I did manually to have it work:
<natefinch> 2014-03-11 13:52:33 DEBUG juju.utils.ssh ssh_openssh.go:147 running: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i "/home/nate/.juju/ssh/juju_id_rsa" -i "/home/nate/.ssh/id_rsa" "ubuntu@107.21.197.67" '/bin/bash'
<natefinch> sinzui: I just did $ ssh -i ~/.ssh/id_rsa ubuntu@107.21.197.67
<sinzui> interesting
<natefinch> sinzui: I have to get on a UDS meeting in 5 minutes, but afterward I can do some more investigation.   Other people seem not to be having this problem, so maybe there's an environmental problem that is making Juju unhappy.
<natefinch> sinzui: evidently had the days mixed up, so no UDS thing for me (at least not for me to be in).  Lemme see if I can figure out what's going on.
<sinzui> natefinch, thanks. CI replayed the deploy tests for precise and all failed. They failed 5 times in a row
<voidspace> right, lunch
 * voidspace lurches
<hatch> http://askubuntu.com/questions/432679/getting-git-error-on-juju-charm-upgrade <--- this seems like a bug to me. The user shouldn't have to configure git to use juju....?
<natefinch> hatch: I know we use git for charm upgrades, but I wouldn't think we'd require git to be configured in any particular way, especially on juju-deployed machines.
<natefinch> rogpeppe, mgz, dimitern: can you guys bootstrap aws etc?  my bootstrap times out trying to connect to the instance
<hatch> natefinch yeah that's what I was thinking. I've also never run into that issue and I've done a lot of bare-bones installs....I was just trying to get an idea if I should comment that he should file a bug
<sinzui> natefinch, I am testing hp cloud...and it is looking better. I am wondering if the issue relates to user setup. CI runs under a very restricted user
<sinzui> natefinch, nm, it just bailed
<natefinch> sinzui: doh
<natefinch> sinzui: honestly, it's probably better that way.  Some weird user setup is harder to figure out than a general failing
<sinzui> HP died at
<sinzui> Installing package: --target-release 'precise-update/cloud-tools' 'git'
<natefinch> that's odd
<sinzui> natefinch, I think cloud-tools might be at issue. That would explain why trusty is happy
<natefinch> sinzui: I think I figured out part of my own problem... some debugging code I had put in to debug a different problem was inadvertently screwing up SSH.
<sinzui> natefinch, I updated bug with the info I just learned https://bugs.launchpad.net/juju-core/+bug/1290890
<_mup_> Bug #1290890: juju 1.17.5 RC cannot deploy to precise <ci> <deploy> <precise> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1290890>
<mattyw> does anyone know why a merge proposal might suddenly go insane? https://codereview.appspot.com/51450047/
<sinzui> natefinch, Hey, we are probably using different tools. I am testing using the 1.17.5 release candidate tools that were placed in each cloud. Are you using upload tools?
<natefinch> sinzui: I'm using trunk without upload tools.  I think I've used upload tools without it making a difference.
<sinzui> natefinch, I agree, I just wanted you to know how juju got the tools in the ci tests and my one tests
<natefinch> sinzui: cool
<natefinch> mattyw: what's insane about that?  (not sure what it should look like).   I do know that sometimes rietveld gets confused, and you need to repropose.
<mattyw> natefinch, I'll try that - I was expecting 5 files changed - I got almost all of core
<sinzui> natefinch, I don't see git in the archive. http://ubuntu-cloud.archive.canonical.com/ubuntu/pool/main/g/
<natefinch> sinzui: what is in the archive and how it gets there is a mystery to me
<jam> natefinch: rogpeppe, I'm going to try to be back in time for our hangout, but I have to go do some grocery shopping. If I'm not back in time, please start talking about it without me, I'll be on as soon as I can
<rogpeppe> jam: ok
<natefinch> jam: k
<sinzui> natefinch, the team that owns the archive, which includes jamespage, backport new software to precise so that our cloud tools are always modern.
<sinzui> natefinch, I don't think anyone asked for git to be backported, and it probably wasn't. Juju just assumes if it needs git, it needs to get it from the cloud-archive. Just like it does for mongodb-server
<natefinch> sinzui: ahh interesting
<natefinch> sinzui: doesn't seem like that should have changed recently, though
<sinzui> natefinch, I am going to bootstrap hp with 1.17.4 and see if it gets git from a different location
<sinzui> natefinch, I updated Bug #1290890. 1.17.4  didn't install git from cloud-tools. The regression is in code that always want to install precise packages from cloud-tools
<_mup_> Bug #1290890: juju 1.17.5 RC cannot deploy to precise <ci> <deploy> <precise> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1290890>
<natefinch> sinzui: interesting, thanks for doing all that work.
<voidspace> yay, my tests compile
<voidspace> they don't *do* anything, but they compile ;-)
<voidspace> actually, they start a loop with a testInstanceGetter instead of a real environ and then quit, so not true that they do *nothing*
<wwitzel3> it's the small victories
<wwitzel3> I decided to make my maas configuration "better" .. by using a bridged virtual interface that used port forwarding and NAT via iptables so that as I moved locations, I wouldn't have issues with my maas getting confused by the network setup.
<natefinch> wwitzel3: you're a braver man than I
<natefinch> wwitzel3: last time I touched my iptables I couldn't print for a month
<voidspace> haha :-)
<rogpeppe> natefinch: https://plus.google.com/hangouts/_/canonical.com/discuss-ha?authuser=1
<bodie_> is there a diagram illustrating the basic architecture of juju anywhere or should I just break down the source for myself?
<voidspace> rogpeppe: ping
<voidspace> rogpeppe: you able to help with a go question?
<rogpeppe> voidspace: in a call right noew
<voidspace> rogpeppe: okey-dokey
<voidspace> natefinch: fancy helping me with a go issue?
<natefinch> voidspace: in the same call as Roger, sorry :)
<voidspace> natefinch: heh, ok
<voidspace> I'll figure it out
<wwitzel3> voidspace: now I'm curious :)
<voidspace> hah
<voidspace> wwitzel3: 	 I get the following build error
<voidspace> wwitzel3: receive from send-only type chan<-
<voidspace> and I can't see how the declaration/creation of the channels are incorrect
<wwitzel3> branch or pastebin?
<voidspace> I'm about to resort to google, I assume it's a simple error
<voidspace> the error is from line 184 of this diff:
<voidspace> https://code.launchpad.net/~mfoord/juju-core/instancepoller-aggregate/+merge/209966
<voidspace> instanceInfoReq is defined on line 59
<natefinch> chan type without an arrow is send/receive, chan type with arrow is either receive only <-chan or send only chan<-
<bodie_> there's no syntax like arbitrary syntax
<voidspace> natefinch: ah, the channel is incorrectly defined I believe
<voidspace> or I'm using it incorrectly
<voidspace> surely for a channel to be *useful* someone has to be able to send on it and someone has to be able to receive
<natefinch> voidspace: arrow points in direction of data flow  (into or out of channel), always pointing left
<voidspace> natefinch: what good is a channel I can write to, but no-one can read from?
<voidspace> I guess I'm missing a piece of the puzzle :-)
<natefinch> voidspace: you can't start with a read or write only channel, but you can return one from a function or or pass one into a function, to restrict what someone else can do with it
<voidspace> ah...
<bodie_> interesting
<natefinch> voidspace: so you start with a read/write, return it to someone else defined as read only, then you know that person can't send on it
<voidspace> natefinch: thanks, for now I've relaxed the declaration as I don't care about stopping other people doing things
<wwitzel3> voidspace: so was the fix changing line 71 of that diff?
<wwitzel3> 61
<voidspace> wwitzel3: yep, 61
<voidspace> cool, it compiles, runs and blows up in interesting ways!
<wwitzel3> ship it
<voidspace> which is right as I configured the test data for it to return
<voidspace> hah
<voidspace> wwitzel3: the other fix would have been to create the channel and then set it on the instanceInfoReq struct
<voidspace> wwitzel3: because I'm creating it in the constructor the compiler creates one of the type specified
<voidspace> which I then can't read from
<voidspace> yay for type inferencing
<wwitzel3> :)
<arosales> Hello, could I get confirmation that the joyent provider is being pulled into the 1.18 release?
<dstroppa> mgz: has the joyent-provider-storage mp landed?
<rick_h_> arosales: very interested in that answer for quickstart as well if you can ping when you hear
<mgz> dstroppa: no, there was some bot issues, and I forgot to go back and land after fixing
<mgz> dstroppa: I'll do it now
<dstroppa> mgz: thanks
<sinzui> natefinch, r2403 is probably the issue. That revision changed cloud init to address issues for precise on maas. CI is running the previous revision . We will know in 30 minutes if CI loves it
<natefinch> sinzui: ahh, good, I hope it helps
<sinzui> natefinch, r2403 is the bad rev. CI passed aws, hp, manual, and local
<natefinch> sinzui: awesome
<sinzui> wwitzel3, Juju CI hates you. Don't take it personally. r2403 introduced by your branch to fix bug 1289316 prevents precise deployments.
<_mup_> Bug #1289316: lxc not installed from ubuntu-cloud.archive on precise <lxc> <maas> <precise> <regression> <juju-core:Fix Committed by wwitzel3> <https://launchpad.net/bugs/1289316>
<sinzui> wwitzel3, Bug #1290890 documents that juju is attempt to install git and other packages from the cloud-archive. They should come from ubuntu.
<_mup_> Bug #1290890: juju 1.17.5 RC cannot deploy to precise <ci> <deploy> <precise> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1290890>
<voidspace> can you define types in a closure in Go?
<voidspace> or methods?
<voidspace> really I want a method to use a closure
<voidspace> I can just add more state to the struct to mimic it if not
<sinzui> wwitzel3, natefinch: The safe choice is to revert r2403, but maybe you see how to ensure just mongodb and lxc are installed from the cloud archive (though there may be other tools that need to come from there too_
<wwitzel3> sinzui, natefinch: ok, I remember talking about it and it was determined that any LTS release should just have everything come from cloud archive, but it is easy enough to change.
<sinzui> wwitzel3, I think that is a sensible decision, but did some arrange for each package to be there?
<sinzui> We need to maintain a list of packages that the server team needs to copy or backport. They may have rules against copies
<sinzui> ^ jamespage is it problematic to copy a list of packages from ubuntu to the cloud archive to make it easy for juju to choose where to install packages from
<wwitzel3> sinzui: no, I didn't know that I needed to ensure that the list of packages was updated.
<wwitzel3> sinzui: so is the fix getting those packages on to cloud archive for precise or reverting r2403?
<sinzui> wwitzel3, The issue is we want to release at any hour. We cannot because your rev simply doesn't work. There is no point in testing the revs that are about to land. reverting is best, but just installing mongodb-server and lxc from cloud-archive is also a fix
<mgz> a straight revert for now sounds fine
<wwitzel3> sinzui, mgz: ok, so revert will address 1290890, then I need to reopen 1289316, and make the changes for only lxc and mongo-server to install from cloud archive?
<bodie_> is the preconfigured vagrant juju box suitable for core dev?
<wwitzel3> mgz: I also have no idea how to do a revert :)
<voidspace> I have a sanely failing test - the loop is running, receiving my request, and correctly returning the error from the testInstanceGetter
<sinzui> wwitzel3, yes, that is a viable path
<voidspace> now to make testInstanceGetter do something other than return an error...
<voidspace> which means I need something implementing instance.Instance, yay for interface embedding I guess
<wwitzel3> sinzui: or revet, which fixes the regression, reopen and have it pending the other packages get added to cloud archive?
<wwitzel3> revert
<sinzui> wwitzel3, revert, reopen. We can ask the server team if they can put all packages in the archive. This might not be good though if every new dep needs to be vetted by another team and scheduled to their own cadence.
<wwitzel3> sinzui: ok, sounds good, thanks
<wwitzel3> mgz: can you walk me through the process of reverting?
<mgz> wwitzel3: basically, you merge the inverse of the change, then propose that (and get approved, or self approve)
<mgz> so, make a new branch, `bzr merge co:master -r2403..2402`
<mgz> check that, commit, propose
<mgz> then to reland later, you want to merge trunk back into your feature branch, reject the changes, and commit, but that can be done later
<bodie_> ugh.  anyone know why i'm getting this? http://pastebin.centos.org/8371/
<bodie_> (using the juju box as suggested)
<bodie_> the virtualbox GUI didn't have anything useful afaict
<bodie_> is there some config step I'm missing...?
<bodie_> bleh
<wwitzel3> mgz: can you take a look https://code.launchpad.net/~wwitzel3/juju-core/lp-1290890-revert-2403/+merge/210474
<rogpeppe> voidspace: sorry about lack of reply - we've only just finished on the HA estimation
 * rogpeppe must stop now
<rogpeppe> g'night all
<voidspace> g'night from me too
<jam> natefinch: I know you were looking at changing the mongodb dependency to probe for the upstart job on the server rather than the client, is that scoped on the Board ?
<jam> and/or is it in a state that we can hand it to someone else to have you focus on HA ?
<jam> natefinch: also, I set up the board in 3 distinct HA lanes under TODO, which maps to how I think about what the implementation flows would be, can you look over it and see what you think?
<jamespage> sinzui, sorry - not quite sure what you need
<sinzui> jamespage, The devs decide juju should install all precise deps (git, cpu-checker, etc) from the cloud archive. They never asked if the server team would support that
<jamespage> sinzui, well that was a bad decision
<sinzui> jamespage :)
<jamespage> sinzui, heres the list
<jamespage> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/cloud-tools_versions.html
<sinzui> The the revision is being reverted
<jamespage> sinzui, ack
<sinzui> jamespage, I updated the bug with that list thanks. I think an informed decision will be made now
<natefinch> jam: the mongodb stuff is done and in the code I am working on landing.
<natefinch> jam: the lanes look good to me
<jamespage> sinzui, np
<jamespage> sinzui, no one is doing anything on the juju-mongodb -> juju-db rename are they?
<jamespage> if we do that it needs to be co-ordinated
 * thumper frowns
 * thumper looks at wwitzel3
<thumper> wwitzel3: you couldn't have known, but you have broken my branch
<thumper> wwitzel3: you put the machine config back into AddAptCommands
<thumper> wwitzel3: I have been trying to keep it independent from MachineConfig because I'm using it elsewhere...
<thumper> wwitzel3: I can pull it back out by passing the target release in as a param...
<thumper> wwitzel3: also, re: https://code.launchpad.net/~wwitzel3/juju-core/lp-1290890-revert-2403/+merge/210474 you were missing a commit message so the bot was ignoring it (I fixed that for you)
<sinzui> jamespage, I have not done anything. I think natefinch is landing changes that assume the db is named juju-mongodb
<natefinch> sinzui: my code doesn't care what the package name is, just what the filename is.  as long as it's installed in /usr/lib/juju/bin/mongod, my code doesn'tcare
<sinzui> natefinch, excellent
<wwitzel3> thumper: well my revert will land and I can make the fix for not passing in MachineConfig
<thumper> wwitzel3: I see the reversion change
<wwitzel3> thumper: you mean the increase? yeah, I just: bzr merge -r2403..2402 which reverted back to r2402 but under 2404
<wwitzel3> thumper: if there is a better way, I can redo it
<thumper> no, that is the right way
<thumper> I'll just wait for it to land before I continue with my particular change
<wwitzel3> thumper: rgr, thanks for fixing the commit message as well
<thumper> np
<wwitzel3> thumper: does -m with bzr not set that?
<thumper> yes, but not on the merge proposal
<thumper> there is a subtlety there
<wwitzel3> thumper: ahh right, ok, before I used lbox which did it for me
<wwitzel3> thumper: got it, thanks
<thumper> the description is what people use to describe the change
<thumper> actually, lbox only sets the description
<thumper> you need to set the commit message on launchpad when approving
<thumper> otherwise the bot ignores it
<natefinch> wwitzel3: forgetting the commit message is a mistake I made for the first like 4 months here
<thumper> most likely this will all change when we move the code anyway
<wwitzel3> thumper: got it, thanks :)
<wwitzel3> natefinch: hah, I suspect it will happen to me a few more time
<wwitzel3> coming from git and this being my first time with bzr, I actually quite like the workflow of it, I imagine you could have the same flow with git, but in my experience most don't
<wwitzel3> sinzui: the revert has landed in trunk re lp:1290890
<wwitzel3> oh, go bot updated the bug for me .. well that was nice of it
 * thumper goes to pull trunk again
<sinzui> thank you wwitzel3
<wwitzel3> thumper: ok, so now I want to address the MachineConfig being passed in that broke you, on my old branch do I just revert the revert? Fix and push?
<wwitzel3> sinzui: np
<thumper> wwitzel3: the reversion you just landed un-broke me
<thumper> wwitzel3: I just suggest that for the function pass in the series not the entire machine config
<thumper> and that _should_ be fine
<thumper> remember that it will be called from somewhere that doesn't have a machine config object
 * thumper is about to write the plugin
<hazmat> thumper, ping.. got time for a pre-implementation call?
<thumper> natefinch: any idea how to fix the intermittent replicaset bug?
<thumper> hazmat: sure
<natefinch> thumper: jam had a fix for it, I think it's proposed, lemme go look
<wwitzel3> thumper: the revert caused 1289316 to be broken again, so I guess I'm wondering what the easiest way is to get my branch back to 2403 and fix the issues?
<wwitzel3> thumper: can I just revert the revert? or checkout to a specific revision?
<thumper> wwitzel3: are you not testing first?
<natefinch> thumper: https://code.launchpad.net/~jameinel/juju-core/replicaset-test-timeout-1290588/+merge/210355
<natefinch> thumper: I just approved the MP, we'll see if it merges
<wwitzel3> thumper: what do you mean? this is more a workflow and source tree question, than anything. If there is a prefered way for me to handle something like this within bzr.
<thumper> wwitzel3: I mean firing up an environment, local and ec2 and actually deploy
<hazmat> wwitzel3, i believe that's a reference to actually using the product
<wwitzel3> thumper: not for every change no, just been using go test ./...
<thumper> wwitzel3: when touching stuff like this where we are changing how real envs install stuff, yes, I'd test that live
<wwitzel3> thumper: for this change I bootstrapped and deployed mysql/mediawiki to my local maas
<wwitzel3> thumper: I already see what I did wrong though, so never mind :P .. I forgot upload-tools
<bodie_> I'm setting up a dev box.  should I go with 12.04 or 13.10?
<thumper> wallyworld_: https://code.launchpad.net/~thumper/juju-core/update-golxc-version/+merge/210492 ?
<thumper> v small.
<thumper> bodie_: I'd use trusty
<thumper> but hey, that's just me :)
<wwitzel3> thumper: I see you made the update in that branch, thank you
<wwitzel3> thumper: actually, that just isn't rebased yet, I need to make the update. Thought actually I am not sure the original update actually resolved the issue since I was testing without using upload-tools
<thumper> :)
<wallyworld_> thumper: looking
<wallyworld_> thumper: shiteveld diff screwed, approced lp mp with a suggestion
<bodie_> thanks thumper :)
<bodie_> that's what I'm using on my laptop, just not confident it'll serve for dev work.  but if it works, it works
#juju-dev 2014-03-12
<thumper> wallyworld_: I think the bot is wedged
<wallyworld_> ffs. ok
<thumper> wallyworld_: trust yours to be picked up first
<wallyworld_> i know right :-D
<thumper> WTF?
<thumper> the bot won't land my branch because it had trouble loading the prereq
<thumper> FFS
<wallyworld_> thumper: if it's any consolation, i updated golxc yesterday and last night had to patch the lxc stuff in juju core for it to compile, pending your stuff landing
<thumper> wallyworld_: all you had to do was 'cd ~/go/src/launchpad.net/golxc; bzr revert -r 7'
<thumper> and it would have been fine
<wallyworld_> yeah i know
<wallyworld_> was just trying to make you feel better
<thumper> oh, ok
<thumper> thanks
<thumper> fuck yeah!
 * thumper pokes around a bit
<thumper> hazmat: hey...
<thumper> hazmat: wordpress works fine with aufs
<thumper> hazmat: you had me all concerned about nothing
<hazmat> thumper, hmm
<thumper> hazmat: talked with hallyn about it
<thumper> hazmat: he suggested to use aufs if btrfs doesn't work
<thumper> hazmat: sorry, if not btrfs backed
<thumper> hazmat: and shake out bugs :-)
<thumper> but should all be good
<hazmat> thumper, ah.. that's my issue.. i have btrfs under aufs
<hazmat> thumper, cool.. glad thats resolved
<thumper> so, fast and small for all \o/
<hazmat> thumper, so your able to install and relate wordpress to mysql?
<hazmat> thumper, i always get this error on aufs on the db relation.. 2014-03-12 01:41:35 INFO db-relation-changed rm: fts_read failed: Stale NFS file handle
<thumper> hazmat: yep, and looked at the web on 10.0.3.x
<hazmat> thumper, cool
<thumper> all good
<davecheney> umm, ubuntu@winton-02:~/src/launchpad.net/juju-core$ juju bootstrap -v --upload-tools
<davecheney> Flag --verbose is deprecated with the current meaning, use --show-log
<davecheney> 2014-03-12 01:50:08 WARNING juju.cmd.juju common.go:34 ignoring environments.yaml: using bootstrap config in file "/home/ubuntu/.juju/environments/local.jenv"
<davecheney> 2014-03-12 01:50:08 ERROR juju.cmd supercommand.go:296 environment has no bootstrap configuration data
<davecheney> this broke overnight
<wallyworld_> davecheney: i just tried bootstrapping local from trunk, seems to work
<wallyworld_> but i didn't have a jenv file lying around
<wallyworld_> i know that warning is new, but i don't know of any logic changes
<davecheney> ubuntu@winton-02:~/src/launchpad.net/juju-core$ juju status
<davecheney> ERROR Unable to connect to environment "local".
<davecheney> Please check your credentials or use 'juju bootstrap' to create a new environment.
<davecheney> Error details:
<davecheney> not bootstrapped
<davecheney> environment is not bootstrapped
<sinzui> davecheney, I can fix leankit
<wallyworld_> davecheney: you can try destroying your env and start again, that will clear any old jenv file
<sinzui> davecheney, you were definitely deleted
<sinzui> oh sweet, someone paid for more seats. There is not need to delete the old users
<wallyworld_> thumper: if you have a moment - https://codereview.appspot.com/74330044
<wallyworld_> thumper: you're not having much luck with your branch :-(
<thumper> no...
<rick_h_> anyone able to give me a hint on "BadRequest - The affinity group name is empty or was not specified." when deploying to azure. I've created a storage group in East US and that's the location set in my env.yaml
<rick_h_> ah, seems I'm hitting https://bugs.launchpad.net/juju-core/+bug/1259350
<_mup_> Bug #1259350: juju bootstrap fails in Azure (BadRequest - The affinity group name is empty or was not specified.) <azure-provider> <bootstrap> <juju-core:Triaged> <https://launchpad.net/bugs/1259350>
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1291165
<_mup_> Bug #1291165: juju bootstrap local cannot bootstrap  <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1291165>
<davecheney> this has proved hard to unfuck
<wallyworld_> can you try removing the jenv file?
<davecheney> ubuntu@winton-02:~/src/launchpad.net/juju-core$ cat /home/ubuntu/.juju/environments/local.jenv
<davecheney> ubuntu@winton-02:~/src/launchpad.net/juju-core$ file /home/ubuntu/.juju/environments/local.jenv
<davecheney> /home/ubuntu/.juju/environments/local.jenv: empty
<davecheney> brilliant
<davecheney> wallyworld_: yup, that fixed it
<wallyworld_> davecheney: i would have hoped destroy-env would have been able to do that
<wallyworld_> maybe it can't handle an empty file
<davecheney> that sounds like a good resolution to the bug
<davecheney> --force should remove the .jenv, with prejudice
<davecheney> otherwise CTS will shank us
<axw> wallyworld_: when you have a moment, please: https://code.launchpad.net/~axwalk/gwacl/deleteservice-media/+merge/210534
<wallyworld_> sure
<wallyworld_> axw: are you going to delete the obsolete delete code?
<wallyworld_> in juju
<axw> wallyworld_: I am going to in my new implementation
<wallyworld_> great
<axw> wallyworld_: actually, DestroyHostedService is in gwacl
<axw> which is what does all this manually
<axw> I will remove it once the Juju side is updated
<wallyworld_> do maybe add to your branch
<wallyworld_> ok
 * thumper is starting to get real fucked off with the landing bot
<thumper> no...
<thumper> getting fucked off at that intermittent failing test
<thumper> that fails more often than not
<thumper> wallyworld_: you'll be happy to know that I came up with some good tests for this new code :)
 * thumper is pretty happy with them
<wallyworld_> great
<thumper> just running 'make check' before proposing
<wallyworld_> ok
 * thumper waits with baited breath to see if the fucking test fails again
 * wallyworld_ gets the popcorn
<thumper> GRRRR!!!!
 * thumper approves again
<thumper> I think this branch is almost a record
 * thumper taps his fingers...
 * thumper approves it again
<thumper> wallyworld_: https://codereview.appspot.com/74370044 if you have time
<wallyworld_> ok
 * thumper wanders off for a bit
<thumper> will check on the bot in about 20 minutes
<jam1> thumper-afk: I think the bot has been failing because it failed enough to start running out of disk space
<jam1> at least, it was at 6 out of 8GB consumed
<thumper-afk> :(
<thumper-afk> jam: can I get you to email me the bot creds again? or point me to where to get them from?
<thumper-afk> that way I can look myself instead of annoying wallyworld_
<jam> thumper-afk: sure, though I'm poking at it right now myself
<thumper-afk> thanks in advance
<axw> thumper-afk: FYI, just found this: https://bugs.launchpad.net/juju-core/+bug/1291207
<jam> I'm trying to sort out the failures
<_mup_> Bug #1291207: juju-run symlink is broken after upgrade-juju <run> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1291207>
<thumper-afk> :-(
<thumper-afk> will look tomorrow (most likely
<axw> looks like it'll be easy to fix.
<wallyworld_> thumper-afk: you hold the record
<thumper-afk> fcking permissions
<thumper-afk> wallyworld_: for annoying you?
<thumper-afk> \o/
<wallyworld_> that too :-)
<wallyworld_> i mean for failed landing attempts
<jam> thumper-afk: I have to sort out the original creds for nova list, but for now the IP address is 10.55.61.118 and your launchpad SSH keys can log in as 'ubuntu'
<thumper> I'm going to sign off now, but I'll keep poking the landing bot
<thumper> kk
<thumper> ta
<jam> bot is in limbo right now while I debug
<thumper> kk
<dimitern> rogpeppe, hey
<rogpeppe> dimitern: yo!
<dimitern> rogpeppe, I decided to drop one of the incomplete fixes - the one about .jenv detection
<rogpeppe> dimitern: sounds like a good plan - it needed some more work
<dimitern> rogpeppe, after rummaging for a while in the code I realized you're right and it deserves its own CL as it'll blow up this out of proportion
<dimitern> rogpeppe, re registering a file:// protocol on utils/http by default
<dimitern> rogpeppe, it seems the manual provider is using "file:///var/lib/juju/storage/tools.tar.gz" in the provider-state-url file when provisioning
<dimitern> rogpeppe, and without that it fails to fetch the provider-state and claims it's not bootstrapped
<dimitern> rogpeppe, (or something similar - i'm testing again now to see if that's the case)
<rogpeppe> dimitern: how does it manage to work currently?
<dimitern> rogpeppe, it's quite fragile in my experience
<rogpeppe> dimitern: but how can it work at all if the file protocol isn't registered?
<dimitern> rogpeppe, i had to set up a vm with "ubuntu" user to make it work for example - it was always trying ubuntu@bootstrap-host and it was failing to use the bootstrap-user i specified
<dimitern> rogpeppe, well, that's the thing - simplestreams *does* register the file:// protocol for testing purposes
<dimitern> rogpeppe, in some init() func
<rogpeppe> dimitern: this isn't testing code though, is it?
<dimitern> rogpeppe, but that code path doesn't seem to run in some cases
<dimitern> rogpeppe, no it's production code
<dimitern> rogpeppe, simplestreams.go:379
<rogpeppe> dimitern: in which dir?
<dimitern> rogpeppe, envs/ss/
<dimitern> rogpeppe, but since RegisterProtocol there uses http.DefaultTransport it wasn't working when ssl-hostname-validation was set to false (and a non-validating tls transport was used)
<rogpeppe> dimitern: hmm, i start to see
<wwitzel3> rogpeppe: what is the command you want tested for 512-maas-bootstrap-bridge-utils?
<dimitern> rogpeppe, I'll run a series of tests of manual bootstrap and local bootstrap + manual provisioning to before reproposing without the file:// proto registration
<rogpeppe> wwitzel3: we need to check that we can deploy units to the bootstrap node and that we can connect to those units
<rogpeppe> dimitern: the fact that simplestreams is registering the file protocol seems a bit wrong. i need to think about it for a little bit.
<dimitern> rogpeppe, it *definitely* seems wrong like this, unconditionally
<wwitzel3> rogpeppe: ok, I'm on your branch and I've bootstraped a node in my maas .. so I can just deploy anything?
<rogpeppe> wwitzel3: try: juju deploy --to lxc:0 ubuntu
<wwitzel3> rogpeppe: ok, got it, when I "fixed" the lxc not being installed from cloud-tools before, I was testing not using --upload-tools. So my fix actually wasn't a fix and broke the CI build.
<rogpeppe> wwitzel3: good point
<wwitzel3> rogpeppe: so I'm actually fixing it now :P .. and I will test your branch with that test.
<rogpeppe> wwitzel3: thanks
<wwitzel3> rogpeppe: I also managed to get my maas configured in such away that I can easily destroy environ without having to rebuild the nodes from scratch. By snapshotting them at the right time, I can just restore to the snapshot.
<wwitzel3> rogpeppe: so testing is a lot faster now :)
<rogpeppe> wwitzel3: nice
<dimitern> rogpeppe, so without the file:// proto registration I get this:
<dimitern> rogpeppe, 2014-03-14 08:04:54 ERROR juju.cmd supercommand.go:296 cannot load state from URL "file:///var/lib/juju/storage/provider-state" (read from "/tmp/provider-state-url"): Get file:///var/lib/juju/storage/provider-state: unsupported protocol scheme "file"
<dimitern> rogpeppe, with manual bootstrap
<dimitern> rogpeppe, if you're against registering a file protocol handler, I can check for file:// schema in the environs/bootstrap LoadStateFromURL() and try to read it directly instead
<rogpeppe> dimitern: i'm still trying to think it through
<rogpeppe> dimitern: the thing that makes me most uncomfortable is the disconnected nature of the fix here - we have two places a long way apart in the code (utils/http vs environs/simplestreams) that are both intimately connected - that feels pretty sleazy
<dimitern> rogpeppe, it does, doesn't it
<rogpeppe> dimitern: i'd be happier if everything used a Client from utils/http
<rogpeppe> dimitern: then we could have a function to register schemes in there, rather than using http.DefaultClient
<dimitern> rogpeppe, in fact this error only seems to happen when you manual bootstrap with ssl-hostname-verification: false, tested just now
<dimitern> rogpeppe, and that's due to simplestreams
<rogpeppe> dimitern: that sounds right - if ssl-hostname-verification is true, we use http.DefaultClient
<dimitern> rogpeppe, so I'll drop the file:// proto registration for this CL and file a bug about unifying http clients
<rogpeppe> dimitern: doesn't that leave trunk broken?
<rogpeppe> dimitern: i guess it's already broken though
<dimitern> rogpeppe, yeah - it's no more broken than it was
<voidspace> rogpeppe: in juju-core/instance/instance.go where is Address defined?
<voidspace> is it a built-in type
<rogpeppe> voidspace: in instance/address.go
<voidspace> ah, same package name
<voidspace> just not the same file
<rogpeppe> voidspace: if in doubt, grep for 'type Foo'
<voidspace> gd doesn't work on my desktop machine either
<voidspace> must work out why
<rogpeppe> voidspace: yeah - i find it invaluable
<rogpeppe> voidspace: you could find out if it works ok just running it from the command line
<voidspace> rogpeppe: right, good call
<rogpeppe> voidspace: the only built-in types are mentioned here: http://golang.org/ref/spec#Predeclared_identifiers
<voidspace> rogpeppe: yeah, I just went there to check :-)
<voidspace> thanks
<rogpeppe> voidspace: (which is the definitive list of predeclared identifiers)
<rogpeppe> voidspace: if you haven't already, the spec is well worth a read
<rogpeppe> voidspace: being unusually readable for such things
<voidspace> rogpeppe: right, I haven't
<voidspace> rogpeppe: ok, so my godef wasn't properly installed in vim (the bundle wasn't working - I've now manually installed godef.vim)
<voidspace> rogpeppe: so now it is not working in much more interesting and potentially fixable ways... :-)
<rogpeppe> voidspace: lol
<voidspace> rogpeppe: it only looked like it was sort of working because "gd" is the default vim "goto local definition" vim command anyway
<rogpeppe> voidspace: so how's it failing now?
<voidspace> For example
<voidspace> parseLocalPackage error: no more package files found^@godef: no declaration found for Address
<rogpeppe> voidspace: that ^@godef thing is weird
<voidspace> rogpeppe: yeah, I'd better check godef.vim I guess
<rogpeppe> voidspace: godef has got a -debug flag, which might or not produce some useful info in this case
<voidspace> godef --help is "terse"
<voidspace> and [flags] expr does not explain the arguments it takes *particularly* well
<dimitern> rogpeppe, filed bug 1291292
<_mup_> Bug #1291292: use a utils/http client for all HTTP(S) calls across the codebase <manual-provider> <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1291292>
<rogpeppe> voidspace: yeah, could do better :-)
<rogpeppe> voidspace: "expr" is a go expression
<voidspace> rogpeppe: so I did "godef instance.Address"
<voidspace> and got
<voidspace> godef: cannot read : open : no such file or directory
<rogpeppe> voidspace: try: godef -f somefile.go instance.Address
<voidspace> rogpeppe: cool, thanks
<rogpeppe> voidspace: where somefile.go is the file you're going from
<voidspace> right
<rogpeppe> voidspace: standup: https://plus.google.com/hangouts/_/canonical.com/juju-core?v=1394218410
<voidspace> rogpeppe: and it works
<voidspace> hmmm....
<rogpeppe> voidspace: hmm
<voidspace> rogpeppe: which is good news, just need to figure out the vim integration
<rogpeppe> voidspace: yeah
<dimitern> jam, mgz_, standup?
<mgz_> I'm here
<voidspace> rogpeppe: is this the canonical vim-godef? https://github.com/dgryski/vim-godef
<rogpeppe> voidspace: i think so
<voidspace> rogpeppe: thanks
<voidspace> rogpeppe: if I start vim from the launchpad.net directory it works - so it's not coping with shorter relative paths I think
<rogpeppe> voidspace: interesting
<voidspace> rogpeppe: if I start vim from launchpad.net/juju-core/worker (for example) it fails
<voidspace> rogpeppe: so yay, it works! :-)
<rogpeppe> voidspace:cool
<voidspace> rogpeppe: hmmm... maybe not, it works once and then the next call doesn't work :-/ odd
<rogpeppe> voidspace: i'd add some debug prints to the source
<rogpeppe> voidspace: see what arguments are actually being passed
<voidspace> rogpeppe: ok, outside work time I think
<rogpeppe> voidspace: then try calling godef directly with those arguments and see if you can repro the failure
<rogpeppe> voidspace: probsa
<voidspace> rogpeppe: my guess is that it's mainly a path issue
<rogpeppe> voidspace: it may well be
<dimitern> rogpeppe, last look over https://codereview.appspot.com/72860045/ before i land it?
<rogpeppe> dimitern: will do
<jam> vladk: It would seem that you made it to IRC, am I chatting to the right vlad ?
<vladk> yes
<voidspace> mgz_: so I need coffee and then we should talk
<mgz_> voidspace: me too
<jam> vladk: welcome again to the team
<jam> natefinch: FWIW my last round of "If we get EOF, Refresh + Ping" seems to have landed cleanly, and allowed thumper's branch to land
<jam> it is possible that the fix was that I now *always* call Ping after replSetReconfig
<jam> to detect if we're actually going to get an EOF
<jam> that would otherwise have been missed
<jam> and then Refresh() and Ping again
<jam> natefinch: I'm not sure, but hey, 2 branches landed back to back is great news for thebot
<dimitern> so ever since we decided to shorten the standup to 15m it started getting 30-45m each time :)
<natefinch> jam: that's great.
<natefinch> jam: that may have been the fix... there seems to be a lot of chicken waving in getting the replicaset stuff working correctly
<jam> natefinch: given the "wait for us to wake up and be ready" it certainly does make you wave some chickens
<dimitern> rogpeppe, should be good to land, right?
 * rogpeppe remembers to go back and look
<rogpeppe> dimitern: i don't think DetectionScript and CheckProvisioned should be exported - they could be added to export_test.go so that local tests can access them
<rogpeppe> dimitern: they aren't used in production code external to the package AFAICS
<jam> if people here could Ping when they set an MP to approved, I'd like to keep an eye on the bot, it looks like it is semi-healthy again
<wwitzel3> so in the case where a var of a package is private, but I want to use it in a test, do I just make it public .. seems not what I want to do.
<wwitzel3> the var is a slice that is modified by the package I am testing and I want to assert that those modifications are successful
<rogpeppe> wwitzel3: is the slice a global?
<natefinch> wwitzel3: are you doing this from another package, or from inside the package with the slice?
<wwitzel3> rogpeppe, natefinch: the slice is a global, it is being modified from side the functions of a struct in the same package.
<wwitzel3> s/side/inside
<rogpeppe> wwitzel3: in general I'm skeptical of functions that modify global state, but it may be ok in this instance
<rogpeppe> wwitzel3: what does the slice hold?
<wwitzel3> rogpeppe: fair, this is the requirePackages slice in the lxc/initalisation.go
<wwitzel3> requiredPackages .. helps if I could type this morning
<natefinch> wwitzel3: if your tests are in the same package as the slice, you can just modify it directly.  THat is, put your tests in the same package (as opposed to <package>_test) and then you can access non-exported data
<wwitzel3> natefinch: Ok, and that is not taboo?
<rogpeppe> wwitzel3: why does that slice need to be modified?
<rogpeppe> wwitzel3: it's a judgement call
<wwitzel3> rogpeppe: so that --target-release is passed to the AptGetInstall that consumes the slice
<natefinch> wwitzel3: it's not taboo, that's how you get nice limited-scope tests to verify specific behavior.
<rogpeppe> wwitzel3: the other approach is to have an export_test.go file in the same package that exports some variables just for tests
<wwitzel3> rogpeppe: I like the export_test approach
<natefinch> rogpeppe: export_test is an abomination... there seems to be no reason to do it other than "all the rest of my tests are external, and I don't want to make another file for internal tests"
<wwitzel3> natefinch: I like that it is explict
<rogpeppe> natefinch: i wouldn't word it so strongly
<rogpeppe> natefinch: there's a trade-off here
<natefinch> rogpeppe: it just means that your tests now use something that looks public that doesn't actually exist in the real package
<natefinch> rogpeppe: at least with internal tests, it's clear you're using package internals
<wwitzel3> natefinch: that's true
<natefinch> rogpeppe: there's nothing that says you can't have both internal tests and external tests
<natefinch> sorry, kids are up, I gotta go.   I wanted to have this discussion last week, but it was too hard over the hangouts.
<wwitzel3> well it seems like there is less debate over just making the tests be part of the package, so I will go that route :)
<rogpeppe> natefinch: with internal tests, you can't tell whether any call is public or private
<wwitzel3> and then people can pick on me in the code review
<rogpeppe> wwitzel3: i think natefinch is about the only one with that particular view :-)
<rogpeppe> wwitzel3: although i don't mind internal tests much either
<wwitzel3> rogpeppe: maybe so, but I have to pair with him today, so he wins :P
<rogpeppe> wwitzel3: ha ha
<rogpeppe> wwitzel3: BTW, i think that modifying the global slice is almost certainly the wrong approach in this case
<dimitern> rogpeppe, agreed I changed that as you suggested
<wwitzel3> rogpeppe: yeah, I am actually going to send a different one instead of modifying
<rogpeppe> dimitern: thanks
<dimitern> rogpeppe, and I'm waiting for your LGTM
<rogpeppe> dimitern: i'm presuming you haven't re-proposed the changes yet
<voidspace> is it common when comparing two structs of the same type to get a runtime error "comparing uncomparable type" from c.Assert
<voidspace> https://pastebin.canonical.com/106327/
<rogpeppe> dimitern: LGTM
<rogpeppe> voidspace: you probably want to be using gc.DeepEquals
<voidspace> rogpeppe: sounds good, thanks
<rogpeppe> voidspace: not all types in Go are comparable
<rogpeppe> voidspace: specifically, slices and function pointers aren't
<voidspace> rogpeppe: but you can compare type and compare members
<voidspace> rogpeppe: slices are uncomparable !?
<voidspace> that's the issue
<voidspace> DeepEquals is at least pointing me to the slices it can't compare
<voidspace> runtime error: comparing uncomparable type []instance.Id
<rogpeppe> voidspace: yes, slices are uncomparable because it's not clear how the comparison should work
<voidspace> compare length and if they're the same length compare members are equal
<rogpeppe> voidspace: there are at least three possible ways
<voidspace> what other way would make sense?
<rogpeppe> voidspace: slices also have additional values after the length
<voidspace> capacity
<rogpeppe> voidspace: yeah
<voidspace> hmmm
<rogpeppe> voidspace: and another possibility is to compare by pointer equality
<voidspace> an identity check
<rogpeppe> voidspace: yeah
<voidspace> that's a different check in my opinion
<rogpeppe> voidspace: it's not clear which one == should use though
<rogpeppe> voidspace: if i compare two pointers, it doesn't compare their contents
<voidspace> rogpeppe: that's an identity check not an equality check though right (using Python terminology)
<rogpeppe> voidspace: there's no distinction between the two in Go currently
<dimitern> rogpeppe, yeah, I wanted to address any comments with one proposal
<dimitern> rogpeppe, thanks!
<voidspace> rogpeppe: but the very idea of pointers introduces the distinction
<rogpeppe> voidspace: and if we compare members recursively, there's the possibility that we might get an infinite loop in an equality check
<voidspace> rogpeppe: only if your equality implementation is dumb
<voidspace> :-)
<voidspace> rogpeppe: but yeah, maybe making arbitrary types automatically comparable is problematic
<voidspace> rogpeppe: how do you work around it for tests?
<rogpeppe> voidspace: DeepEquals
<voidspace> rogpeppe: except that doesn't work for types with slices as members
<rogpeppe> voidspace: sure it does
<rogpeppe> voidspace: DeepEquals is recursive
<voidspace> rogpeppe: hah
<rogpeppe> voidspace: (and deals with cycles too)
<voidspace> rogpeppe: the DeepEquals works
<voidspace> rogpeppe: and the error I *now* have is from the next Assert
<voidspace> I just didn't notice
<voidspace> so DeepEquals is fine and I'll stop complaining :-)
<rogpeppe> voidspace: if you use jc.DeepEquals (from testing/checkers) then it will tell you where the comparison failed too
<voidspace> yep
<rogpeppe> voidspace: which is great if you're comparing a large chunk of data
<voidspace> I'm using gc.DeepEquals (from gocheck) and it is giving me enough information at the moment
<voidspace> ah, but not the member name that failed, just the comparison that failed
<rogpeppe> voidspace: yeah
<voidspace> which for this type is sufficient as it only has two members
<voidspace> rogpeppe: thanks
<rogpeppe> voidspace: there's one other distinction between gocheck.DeepEquals (which uses reflect.DeepEqual) and checkers.DeepEquals - the former treats a nil slice as distinct from an empty slice; the latter does not.
<rogpeppe> voidspace: since in general we treat a nil slice exactly the same as an empty slice, the latter can be useful
<voidspace> rogpeppe: right, useful to know
<dimitern> wallyworld_, still there?
<wallyworld_> sorta
<dimitern> wallyworld_, so should i assign bug 1290684 to myself?
<_mup_> Bug #1290684: cannot perform multiple upgrades <upgrade-juju> <juju-core:In Progress by wallyworld> <https://launchpad.net/bugs/1290684>
<wallyworld_> i have a mp up for that bug
<wallyworld_> https://code.launchpad.net/~wallyworld/juju-core/old-agentconf-datadir/+merge/210526
<dimitern> wallyworld_, I can't see a linked branch
<wallyworld_> i haven't linked it sorry
<wallyworld_> will do so
<dimitern> wallyworld_, ok, so after yours lands, I can file a separate bug about migrating Jobs in 1.18 config?
<wallyworld_> yeah, and anything else that needs doing
<wallyworld_> maybe it's just Jobs
<wallyworld_> can't recall right now
<wallyworld_> before i do, i just want to ensure local provider is covered
<dimitern> sure
<dimitern> I also have some comments on your CL
<wallyworld_> ok
<wallyworld_> i'm off to bed, i'll look tomorrow
<jcastro> you guys ready for the UDS update?
<natefinch> jcastro: glad you reminded me
<jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYdJ0WjYraULj4VVnIhg2-wan1zM_Q6nzudEY4WfC6p1-8aoWw?authuser=0&hl=en&hcb=0&lm1=1394632583536&hs=75&hso=0&heeid=tvJzNRrPTJA&ssc=WyIiLDAsbnVsbCxudWxsLG51bGwsW10sbnVsbCxudWxsLG51bGwsbnVsbCxudWxsLDc1LG51bGwsbnVsbCxudWxsLFsxMzk0NjMyNTgzNTM2XSxudWxsLFsiaG9hZXZlbnQiLCJBUDM2dFlkSjBXallyYVVMajRWVm5JaGcyLXdhbjF6TV9RNm56dWRFWTRXZkM2cDEtOGFvV3ciXSxbXSxudWxsLG51bGwsbnVsbCxudWxsLG51bGwsbnVsbCxudWxsLG5
<jcastro> 1bGwsbnVsbCxbMF0sW10sbnVsbCwidHZKek5SclBUSkEiLG51bGwsW10sbnVsbCxudWxsLG51bGwsW10sbnVsbCxudWxsLFtdXQ..
<jcastro> whoa!
<jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYdJ0WjYraULj4VVnIhg2-wan1zM_Q6nzudEY4WfC6p1-8aoWw?authuser=0&hl=en
<jcastro> try that one
<natefinch> rogpeppe, mgz_, dimitern, jam: what the hell have we delivered and do we intend to deliver?
<natefinch> besides HA.... joyent provider, I guess
<dimitern> natefinch, for 1.18?
<natefinch> dimitern: uh sure, or trusty
<dimitern> natefinch, for trusty definitely more than for 1.18
<dimitern> natefinch, for 1.18 mostly lots of critical/high bug fixes and regressions
<natefinch> dimitern: trusty is probably what people will care about
<dimitern> natefinch, well HA, container networking (somewhat - ec2 + maas and basic support at that)
<dimitern> natefinch, joyent, smoother upgrades (preferably major version upgrades, but perhaps not schema upgrades)
<dimitern> natefinch, and better versioning (1.2.3-b1, -rc1, etc.)
<natefinch> cool
<jcastro> #ubuntu-uds-servercloud-1
<rogpeppe> dammit, i missed the hangout
<rogpeppe> hmm, this is worrying: http://paste.ubuntu.com/7079579/
<wwitzel3> I joined it, then realized I probably joined it the wrong way
<wwitzel3> so I just sat with my mic and camera on mute hoping no one noticed
<natefinch> haha
<natefinch> wwitzel3: it's ok.  I'm sure no one noticed :)
<natefinch> wwitzel3: it's not like it was being recorded or anything ;)
<wwitzel3> natefinch: hah, thanks
<natefinch> jcastro:  (from juju help add-machine):  juju add-machine ssh:user@10.10.0.3   (manually provisions a machine with ssh)
 * rogpeppe goes for lunch
<jcastro> yeah, the thing is we should explain in the docs how to use that cleverly.
<jcastro> I can add it.
<natefinch> jcastro: absolutely
<rogpeppe> my lunch will be slightly extended today, as the sun is out and i need some exercise. will work a bit later.
<natefinch> rogpeppe: have fun!
<wwitzel3> natefinch: so, I run go install -v juju-core/... and then I bootstrap with --upload-tools , but it would appear I am still just using the standard install
<natefinch> wwitzel3: is your GOPATH at the front of your path or the back?
<wwitzel3> i'm dumb, thanks
<wwitzel3> natefinch: I assume it should be at the back?
<natefinch> wwitzel3: I put mine at the front
<natefinch> wwitzel3: Go > all the things
<natefinch> wwitzel3: also that way, stuff I build overrides stuff I install, which is usually what you expect
<wwitzel3> natefinch: right, ok
<sinzui> dimitern, is bug 1291400 fallout from wallyworld_ 's fix for bug 1290684?
<_mup_> Bug #1291400: migrate 1.16 agent config to 1.18 properly (DataDir, Jobs, LogDir) <regression> <upgrade-juju> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1291400>
<_mup_> Bug #1290684: cannot perform multiple upgrades <upgrade-juju> <juju-core:Fix Committed by wallyworld> <https://launchpad.net/bugs/1290684>
<dimitern> sinzui, it's a follow-up on his fix
<sinzui> thank you dimitern
<dimitern> sinzui, the local provider 1.16->1.18 broke, so I'm fixing that
<sinzui> rogpeppe, when do you think your fix for bug 1271144 will be merged?
<_mup_> Bug #1271144: br0 not brought up by cloud-init script with MAAS provider <cloud-installer> <landscape> <local-provider> <lxc> <maas> <regression> <juju-core:In Progress by rogpeppe> <https://launchpad.net/bugs/1271144>
<wwitzel3> natefinch: yeah my local version is 1.17.5, my which command is pointing to the right binary and GOPATH is correct, but when I upload-tools the server is still using 1.17.4.1
<wwitzel3> natefinch: oh nevermind, I found it, I had two GOBINs apparently
<natefinch> wwitzel3: you don't need gobin set, actually.  Go will figure it out as needed
<natefinch> wwitzel3: the only environment setting I set manually for go is GOPATH
<natefinch> wwitzel3: oh,and gomaxprocs, I guess, but that's more optional
<frankban> juju-core devs: I am getting this error while trying to bootstrap an ec2 environment using 1.17.4-trusty-amd64: http://pastebin.ubuntu.com/7079940/
<natefinch> frankban: I think that's due to a bug we had in 1.17.4 when you have juju-mongod installed locally.
<frankban> natefinch: juju-mongodb is installed indeed
<frankban> natefinch: so, is this going to be solved in the next trusty release?
<natefinch> frankban: if you rename it or remove it, it should fix things.
<natefinch> frankban: yep
<frankban> natefinch: so, that's installed as a dependency of juju-local. we changed quickstart to install juju-local in place of the specific packages (e.g. lxc, mongodb-server). I see two choices: 1) wait for the next quickstart release until trusty includes a fixed version that works well with juju-local or 2) make quickstart install mongodb-server before juju-local. The latter seems suboptimal. Do we have an estimate for when 1.17.5 or 1.18 will be released?
<natefinch> sinzui: 1.17.5 is passing CI, right?  When is that getting released?
<sinzui> natefinch, I wish I knew. We agreed to target bugs that block the release of 1.17.5 https://launchpad.net/juju-core/+milestone/1.17.5
<sinzui> but we are not making progress on them
<sinzui> I want to release 1.17.5 tomorrow. We might need 1.17.6
<sinzui> natefinch, r2410 looks good there is just one outstanding azure test
<natefinch> frankban: ^^ That's the best I have.  Tomorrow, I guess. Sorry about the broken bootstrap.  Totally my fault.
<frankban> natefinch: no problem, and thanks!
<wwitzel3> natefinch: so I got the --target-release being properly sent to the apt-get command, but exec isn't liking it for some reason? http://paste.ubuntu.com/7080227/
<wwitzel3> natefinch: if I copy and paste the Running: line sans [ ] .. the command runs just fine
<natefinch> wwitzel3: this is probably the problem: "--target-release precise-updates/cloud-tools lxc"   you have to separate out the strings, you can't pass them as one string, otherwise the command treats them as one single argument that it doesn't understand
<natefinch> wwitzel3:  so like "--target-release",  "precise-updates/cloud-tools", "lxc"
<natefinch> wwitzel3: I think everyone makes that mistake when using go to execute commands.  There's no shell parsing the arguments, they're just passed to the executable as-is, so it would see arg0 as "--target-release precise-updates/cloud-tools lxc"  rather than seeing it as three separate arguments
<niemeyer> Hey
<niemeyer> How's the migration to GitHub going?
<natefinch> niemeyer: slow
<niemeyer> natefinch: Any dates settled yet?
<natefinch> niemeyer: not at all. We moved a few packages there, but there's basically no timeline for getting juju-core over there at the moment.  We'll have to set up a landing bot and stuff.  I know Jam was looking into it, but we've had a lot on our plates, so it hasn't gotten too far.
<niemeyer> natefinch: Thanks for the details
<natefinch> niemeyer: welcome.  Jam would have more details, since he's really the one who decided to take charge of it.
<niemeyer> natefinch: You're still using lbox for rietveld reviews, right?
<natefinch> niemeyer: while we're still on bazaar, yes.  We're not really sure right now what we'd use when we move to github.
<wwitzel3> natefinch: thank you
<niemeyer> natefinch: Hopefully it'll be unnecessary
<natefinch> niemeyer: well, github's reviews are somewhere between terrible and nonexistent depending on your definition, so we'll probably need something for reviews outside github.  And same for controlling branches landing.
<niemeyer> natefinch: Not sure why you feel that way
<niemeyer> natefinch: Can you point me to reviews you've done there which provided you with that feeling?
<mgz_> rogpeppe: what's the right thing to compare errgo errors in tests? as the errorWrapper objects have different identities
<mgz_> hm, seems Deeo
<natefinch> niemeyer: there's no side by side diffs, which makes large diffs very difficult to understand.  It sends one email per inline comment on the code.
<mgz_> *DeepEquals does work now
<niemeyer> natefinch: Can you point me to reviews you've done there so I can have an idea?
<natefinch> niemeyer: sure, one sec
<natefinch> mgz_: errors.Cause(err) should return the underlying error that is what we used to return raw
<natefinch> niemeyer: this isn't a review, just an example of a diff that is made a lot more difficult to read because it's not side by side: https://github.com/travis-ci/travis-ci.github.com/pull/437/files#diff-dca34899b9ea26aa8d1b388f6eab933dR5
<natefinch> niemeyer: the email thing I can't really show per se, but when doing this review, each time I put in a single minor comment, both roger and I received emails.  I much prefer reitveld's approach where all the comments are mailed all at once, when the reviewer indicates they're done.  https://github.com/juju/ratelimit/commit/78b2ece8f84a196d02c5b3505dd79cf1ba8d7702
<natefinch> niemeyer: believe me, I'd much rather use github for everything, so it's all in one place, but reviews are pretty important, and if the tool makes them significantly more painful, then I don't think it's a good tradeoff.
<niemeyer> natefinch: Okay, I was specifically wondering about *your* reviews, so I understand where your frustration comes from
<niemeyer> natefinch: This is just a unified and colored diff.. I've been reading the diff -u output for long enough to not to have any issues with those
<natefinch> niemeyer: unified diffs make it hard to see subtle changes that are made glaringly obvious in a nice side by side diff.
<niemeyer> natefinch: So, do you have any reviews you have done in GitHub you can point to so we can be on the same page?
<natefinch> niemeyer: this is mostly true when multiple lines are changed at once, and grouped together.  So if in the middle of three lines has a 2014 instead of a 2013, you can easily miss it in a unified diff
<natefinch> niemeyer: No, I don't.
 * rogpeppe is back from a rather-longer-than-intended bike ride
<natefinch> rogpeppe: welcome back.  I'm jealous, it's still pretty frigid here.
<rogpeppe> sinzui: we think we have a fix - we're working on trying to test it live before landing it
<niemeyer> natefinch: So "github's reviews are somewhere between terrible and nonexistent depending on your definition" is based on the opinion of someone that has no reviews in GitHub?
<sinzui> rogpeppe, great.
<niemeyer> natefinch: Uh.. awkward :)
<rogpeppe> sinzui: sinzui has a local maas setup which hopefully will be able to test the fix
<natefinch> niemeyer: I don't need to have done a lot of reviews on github to know that one email per comment is annoying, or that I dislike unified diffs :)
<sinzui> rogpeppe, I wish I had a local mass setup. If I did, CI would use it.
<natefinch> sinzui: get one.  Seems crazy we don't have one for CI.  How much can it cost, a few grand?
<sinzui> I had one for 2 days, CI ran.
<natefinch> niemeyer:  I could live with the emails, but not having a side by side diff feels like going back in time 15 years for no reason.
<wwitzel3> niemeyer: I've reviewed plenty of things on github and not having side by side diffing makes it painful for large commits. The output of diff, unified or not, is intended to be consumed by programs. Not people.
<rogpeppe> mgz_: to compare errgo errors, either compare the strings, or for equality
<rogpeppe> mgz_: it was always wrong to compare errors.New errors with DeepEquals IMHO
<sinzui> natefinch, I have been negotiating with the server team. they signal where to test and we run the tests there...but the tests are on future maas, not a released maas
<wwitzel3> rogpeppe: I can test your stuff right now if you want, I finally ironed out the --target-release for lxc stuff
<sinzui> dimitern, Isn't bug 1282690 near completion. I see code was merged?
<_mup_> Bug #1282690: ensure joyent provider gets included in 1.18 release <joyent-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1282690>
<rogpeppe> wwitzel3: that would be very useful if you could
<niemeyer> wwitzel3: That's far from true. The output of diff is optimized so it can be understood.. if we were worried only about programs, we could do much better than diff.
<dimitern> sinzui, looking
<wwitzel3> niemeyer: understood and for human consumption are very different
<wwitzel3> niemeyer: xml can be understood, it isn't intended for human consumption
<niemeyer> wwitzel3: Understood by a human..
<niemeyer> No matter, such strong opinions of how something is terrible should hopefully not come from people that never used the tool.
<niemeyer> This is unhelpful, and creates some bias for the next time an opinion shows up..
<dimitern> sinzui, i can't say it can be closed, because joyent is still commented out in provider/all, so it's not enabled by default, and afaik there are other things yet to finish
<dimitern> dstroppa, ^^ can you confirm please?
<natefinch> niemeyer: How would you even define a review on github? They don't even really have such a thing.  You comment on commits / pull requests.  That's it.
<sinzui> thank you dimitern
<wwitzel3> niemeyer: consumption assumes usage, not just understanding .. anyway, if it had side by side diffs and some better keyboard shortcuts, I'd be happy enough with it.
<niemeyer> natefinch: Please try to use it, then complain.
<dstroppa> dimitern, sinzui: still not closed, but getting closer to completion
<natefinch> niemeyer: I was complaining about the diffs, which I have used quite a bit.  I even gave you an example that I felt showed how unified diffs are bad.  The emails thing is easy to extrapolate into the future.
<wwitzel3> rogpeppe: you want to hangout and run through this together?
<rogpeppe> wwitzel3: sure
<rogpeppe> wwitzel3: https://plus.google.com/hangouts/_/canonical.com/juju-core
<niemeyer> <natefinch> niemeyer: well, github's reviews are somewhere between terrible and nonexistent depending on your definition
<natefinch> niemeyer: Yes.  I think email notifications and diffs are integral to a review system.
<jam> fkom
<niemeyer> natefinch: I think it could be much better, but I'd rather debate that with someone that has used it at all.
<natefinch> niemeyer: I've used it for two reviews.  The things I complained about aren't going to change if I do 200 more reviews.  Feel free to find someone else who has done more reviews to talk to.
<voidspace> https://codereview.appspot.com/74900044
<wwitzel3> https://codereview.appspot.com/72270044
<natefinch> voidspace: looking
<voidspace> natefinch: thanks
<bodie_> what happens if the state service and the real service layer diverge?  e.g. a network outage in a datacenter or node
<bodie_> maybe I'm misunderstanding it a bit as a state map where it's more a system for choreographing a service network and then deploying it
<natefinch> bodie_: not really sure what you're asking.  The state server checks on the status of the system continuously to see if machines are up.  Are you asking what happens if the state server can't contact a deployed machine, but that machine is still up and healthy?
<bodie_> well, let's say it goes dark for whatever reason.  so the state service DOES track the state of active nodes
<bodie_> like, let's say I'm running Riak, where if a single node goes offline it's not a big deal, but I probably want to replace it when I can
<bodie_> I'm just trying to map out in my head how the flow for that situation would look
<natefinch> bodie_: the juju state server will  bring up a replacement unit when it sees one is down
<bodie_> is that in mainline?
<bodie_> I'm asking around in a few places to get my bearings here before muscling into core, according to marco it doesn't respond but does see the outage
<voidspace> natefinch: going running, if you leave any comments on the mp I'll see them when I return
<natefinch> voidspace: cool
<voidspace> there is some small chance that tomorrow I will be ubuntu native
<voidspace> my PC build is "in progress"
<natefinch> voidspace: woo hoo
<bodie_> :D
<voidspace> it's "put everything in the case" time, followed by install ubuntu
<voidspace> followed by "debug driver issues"...
<voidspace> :-)
<bodie_> I just switched over from Debian last night.... so much cleaner to get rolling with juju, sadly
 * rogpeppe looks for a feather to put in voidspace's headband
<voidspace> rogpeppe: I feel more like the cowboy than the indian...
<voidspace> to be fair, with 13.10, which is on kernel > 3.10, everything *should* be fine
<voidspace> at least until I try with three monitors ;-)
<natefinch> bodie_: sorry, I'm wrong.  It doesn't automatically bring up replacements.  You'd have to do a juju status to see that one was down, and then juju deploy a replacement.
<bodie_> Gotcha
<bodie_> good to know ^^
<bodie_> I bet some really interesting charms could be put together with custom software to do things like that
<voidspace> although I do only have one ethernet cable upstairs - so I either add another router (sounds like a recipe for pain with double NAT?) or go wireless on one of the machines
<voidspace> we'll see I guess
<natefinch> voidspace: can't you just use a switch?
<voidspace> natefinch: I have a router and not a switch
<voidspace> I don't *think* I have a switch
<voidspace> have to check my bits boxes
<voidspace> if I only have to go wireless for a day or two and I order one in it wouldn't be the end of the world
<natefinch> voidspace: 4 port switches are pretty cheap these days
<voidspace> natefinch: yeah
<voidspace> natefinch: some routers can be reconfigured as switches too
<voidspace> natefinch: I'll have to see what I've got
<wwitzel3> rogpeppe: http://paste.ubuntu.com/7080846/ it seems to have worked
<wwitzel3> bbiab
<rogpeppe> wwitzel3: you need to try: juju add-unit --to lxc:2 ubuntu
<wwitzel3> rogpeppe: ok, running that now then
<wwitzel3> rogpeppe: I can ssh to the 2/lxc/0 container
<rogpeppe> wwitzel3: brilliant
<rogpeppe> wwitzel3: if you can do the negative checks on trunk, then we'll be good to land it
<wwitzel3> rogpeppe: yep, will do that then
<rogpeppe> wwitzel3: thanks
<natefinch> o/ thumper
<thumper> hi natefinch
 * thumper looks around for the power supply
<natefinch> thumper: could make for a short day
<thumper> found it
<voidspace> so I'm EOD
<voidspace> g'night all
<rogpeppe> thumper: hiya
<thumper> o/ rogpeppe
<thumper> hmm... no fwereade
<natefinch> thumper: I think he's out until tomrorow
<thumper> rogpeppe, natefinch: I don't suppose either of you know the progress that william has around removing git from the charm process?
<rogpeppe> thumper: he seemed to be making good progress last week, but i don't know where he ended up
 * thumper nods
<thumper> hmm...
<thumper> seems our gobot is no longer doing the right thing with new deps
 * thumper bumped the dependency on golxc and the bot didn't update it
 * thumper goes to look...
 * rogpeppe is done
<rogpeppe> g'night all
<thumper> damn...
<thumper> rogpeppe: don't suppose you are still here?
<thumper> natefinch: ping
<natefinch> thumper:  sorta, what's up?
<thumper> natefinch: the bot has stopped updating dependencies
<natefinch> thumper: I blame mgz_
<thumper> I vaguely remember that it should now be done with juju
<thumper> but I don't have the creds...
<thumper> and it seems dumb
<natefinch> thumper: me neither
<thumper> go get -u only brings in new deps
<thumper> how do I get go to update what it has?
<natefinch> thumper: go get -u should update what's there too, I think
<thumper> natefinch: not according to the help
<thumper> The -u flag instructs get to use the network to update the named packages
<thumper> and their dependencies.  By default, get uses the network to check out
<thumper> missing packages but does not use it to look for updates to existing packages.
<thumper> ah...
<natefinch> thumper: isn't that what the -u is for?
<thumper> the by default bit
 * thumper is dumb
<natefinch> thumper: ha
<natefinch> thumper: ok, gotta go pick up thai food
<thumper> kk
<wallyworld_> thumper: the bot never did update deps - it was always done by hand AFAIR. CI uses the dependencies file though
<thumper> ah
<thumper> bugger
<wallyworld_> yeah :-(
<thumper> we should add it in then...
<wallyworld_> i wish Go used dep management
<wallyworld_> or had nice tooling for it
<thumper> just use semantic versioning and all your problems go away *
<wallyworld_> sigh
 * thumper does it manually
<thumper> hi fwereade
<thumper> you really here?
<thumper> wallyworld_: you fixed this, right? https://bugs.launchpad.net/juju-core/+bug/1291400
<_mup_> Bug #1291400: migrate 1.16 agent config to 1.18 properly (DataDir, Jobs, LogDir) <regression> <upgrade-juju> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1291400>
<wallyworld_> thumper: yes, but there's more to do and dimiter is doing the rest
<thumper> oh, more?
 * thumper sighs
<wallyworld_> i did datadir. but there's other new attrs like Jobs
<wallyworld_> and local provider is also a bit different
<wallyworld_> and dimiter did the 1.18 stuff originally so he's running with it
<thumper> ok
<bodie_> anyone know what's causing this: src/launchpad.net/juju-core/charm/testing/mockstore.go:17:2: no buildable Go source files in /home/bodie/go/src/launchpad.net/gocheck
<bodie_> (when I go get launchpad.net/juju-core/...
<bodie_> nvm, looks like i'm good.  just cleared it out and re-downloaded
<jcw4> anyone know about a type casting bug in the azure provider in juju-core?
<jcw4> conversion between gwacl.ConfigurationSet and gwacl.OSVirtualHardDisk
<bodie_> I think I'm getting something similar: http://pastebin.centos.org/8396/
<bodie_> is this normal?
<bodie_> probably something to do with the ellipses
<waigani> wallyworld_: you about?
<wallyworld_> yeeees?
<waigani> hehe, can I annoy you?
<wallyworld_> yeeees?
<waigani> just merged trunk got lots of test failures
<wallyworld_> did you update golxc
<waigani> to do with lxc container stuff, so I switched to trunk and ran the tests again
<waigani> ahhhh
<waigani> no
<waigani> I was going to say trunk fails as well, but that would explain it
<wallyworld_> go get -u launchpad.net/golxc
<wallyworld_> or something like that
<waigani> yep, will do, cheers
<wallyworld_> np. let me know if you have problems
#juju-dev 2014-03-13
<wwitzel3> wallyworld_: will re-running go get on juju-core ensure the proper dependencies get updated? Or is there an easy way to go get -u all of the entries in dependencies.tsv?
<wallyworld_> go get -u on launchpad.net/juju-core should do deps, but
<wallyworld_> go has no proper dependency management like python etc
<wallyworld_> i think it just pulls trunk
<wwitzel3> right, thank you
<wallyworld_> the dependencies.tsv file is used with tooling by the CI
<wallyworld_> not by go itself
<wwitzel3> ahh ok, makes sense, was wondering what that was for
<wallyworld_> a lot of us are very disappointed with go's dep management
<wallyworld_> the designers think it's ok to build against trunk of all the deps
<wallyworld_> which is so crackful i don't know where to start
<bodie_> heh
<bodie_> crackful
<bodie_> so if I want to update ALL THE THINGS can I just go get -u launchpad.net/... ?
<bodie_> or will that go get everything on launchpad.net ?
<bodie_> I tried asking in #go-nuts, but they were snarky because they didn't know :P
<wallyworld_> not sure actually
<wallyworld_> not sure if it just looks on disk
<bodie_> dyou have a way to update a local tree?
<wallyworld_> i normally use bzr to get the required rev
<wallyworld_> by hand :-(
 * bodie_ offers wallyworld_ a consolation beer
<wallyworld_> cause i don't trust just building against trunk of all the deps
<bodie_> hm
 * wallyworld_ slurps the virtual beer
<wallyworld_> from what i recall, the go guys said that dependency management is hard so let's just ignore it :-(
<bodie_> looks like go get -u <thing>/... works
<wallyworld_> great
<wwitzel3> wallyworld_: yeah, I've seen projects using two accounts on github, -dev and -stable, which I guess works for ensuring you are working off some random rev of master
<wwitzel3> are not
<wallyworld_> yeah
<_thumper_> waigani: next you could do the tests for optional autostart for lxc
<_thumper_> waigani: although we shouldn't land this until we have "suspend" and "resume"
<_thumper_> otherwise we have no way to restart a non-autostarting environment
<waigani> _thumper_: okay, so where do I get started with the "suspend" and "resume" stuff
<waigani> wallyworld_: what was the bug you mentioned I could look at?
<wallyworld_> it's on the 1.17.5 milestone page, let me check
<waigani> _thumper_: i.e. is there any relevant code or am I introducing this as a new feature, in which case I'll go play with the cli lxc suspend and resume
<wallyworld_> waigani: https://launchpad.net/bugs/1291165
<_mup_> Bug #1291165: juju bootstrap local cannot bootstrap  <ppc64el> <juju-core:Incomplete> <https://launchpad.net/bugs/1291165>
<waigani> wallyworld_: cheers
<wallyworld_> ah someone has marked it incomplete
<wallyworld_> but still
<wallyworld_> if there's an empty jenv file, we should be able to nuke it
<thumper> davecheney: hey
<thumper> davecheney: how's things going?
<thumper> wallyworld_: https://codereview.appspot.com/74370044/
<wallyworld_> looking
<thumper> axw: that azure branch is a monster
<thumper> axw: no changes in gwacl?
<thumper> ah, can see it
<thumper> yes there is
<thumper> axw: but I guess all the gwacl stuff needed is reviewed and landed already
<thumper> wallyworld_: I did add some tests around the autostart for the template and the clone
<wallyworld_> ok, i missed those then :-)
<thumper> https://codereview.appspot.com/74370044/diff2/1:20001/container/lxc/lxc_test.go
<wallyworld_> great
<davecheney> thumper: o/
<davecheney> still fighting with proxy prolems
<davecheney> just adding some more debug to juju
<thumper> davecheney: what sort of problems?
<davecheney> not obeying no_proxy
<davecheney> i guess
<davecheney> you have to be very explicit in the no_proxy list
<axw> thumper: sorry, was in the zone
<thumper> axw: np
<axw> yes, monstrous in several ways
<thumper> davecheney: but no_proxy is at least being propagated?
<thumper> axw: I was going to do the initial review for your branch
<thumper> and trying not to get put off by the sheer size :-)
<axw> thumper: sorry :(
<thumper> axw: :-)
<axw> I've tried to make it as minimal as I can
<thumper> hey, at least it is just four files :)
<axw> hehe
<thumper> reasonably isolated
<axw> I just rewrote those four files :)
<thumper> axw: so this branch doesn't put multiple machines into a single cloud service yet?
<axw> thumper: no. well, state servers will - but we don't spawn multiple of them yet
<thumper> hah
<thumper> ok
<thumper> just didn't want to get confused more than usual reading the code
<jcastro> hey thumper
<axw> thumper: let me know if you need any explanations on the azure concepts
<thumper> o/ jcastro
<jcastro> theorize with me.
<thumper> axw: sure
<thumper> jcastro: oh... kay...
<jcastro> so you know how you can add servers via the manual provider
<jcastro> what's to stop me from adding some servers from aws, some from azure, some from hp
<jcastro> and doing a poor man's cross environment relations?
<thumper> jcastro: they won't be able to talk to each other
<thumper> jcastro: the units use the internal private address
<axw> jcastro: they won't have the same internal addresses
<thumper> which is cloud (or region) private
<axw> what thumper said
<jcastro> so even if I was to try and trick them with say openvpn or whatever
<jcastro> bah! so close!
<thumper> yes, close
<axw> if you set up all the routing yourself you could do it
<thumper> I foresee a time when we use our own SDN for juju, so this would work
<thumper> but this isn't in the near future
<thumper> perhaps 6-12 months if we're lucky
 * jcastro nods
<waigani> thumper: lxc-freeze/lxc-unfreeze == suspend/resume?
<davecheney> thumper: i think so
<davecheney> i need to find out which host is getting knocked back by the proxy
<waigani> davecheney: cheers
<thumper> waigani: no... I don't think so
<thumper> waigani: because when a machine stops or is rebooted
<thumper> waigani: the machine is in a stopped state, not a frozen state
<thumper> so better to be consistent
<thumper> perhaps after the initial approach
<thumper> we could use freeze/unfreeze in addition to start/stop
<thumper> if necessary
<thumper> but start/stop are bare minimum necessary
<waigani> thumper: oh right, I assumed we already had that and you wanted suspend/resume in addition
<thumper> waigani: no...
<thumper> it's complicated
<axw> thumper: I think it's best to leave this Azure stuff till after 1.18.0 in case it destabilises - what do you think?
<thumper> hmm...
<thumper> well, it would only destabilize azure
<thumper> and they have kinda asked for it :-)
<thumper> I'll punt the question to mramm
<thumper> and extras
<axw> ok
<thumper> I'll put it on the minutes for tonight's meeting
<thumper> and we'll make a decision
<axw> okey dokey
<waigani> wallyworld_, thumper: is there a way to look at trunk without switching to it?
<wallyworld_> you can browse from launchpad
<waigani> from the filesystem?
<wallyworld_> from the browser
<waigani> it would be handy to have trunk in a sublime window
<wallyworld_> or you can branch it locally somewhere
<wallyworld_> in that case, just branch locally
<waigani> ah okay
<davecheney> thumper:         resp, err := utils.GetNonValidatingHTTPClient().Do(req)
<davecheney>         if err != nil {
<davecheney>                 return nil, fmt.Errorf("cannot upload charm: %v", err)
<davecheney> ^ this http client doesn't observe the proxy
<davecheney> or doesn't obey the proxy exclusion, or something
<davecheney> no
<davecheney> wait
<davecheney> its somewhere else
<davecheney> 2014-03-13 03:43:40 ERROR juju.cmd supercommand.go:296 failed to add remote charm "cs:precise/ubuntu-4": cannot upload charm to provider storage: 403 403 Forbidden
<thumper> hmm..
 * thumper is still in axw's first file
<davecheney> thumper: still chasing
<davecheney> it's in storage.Put
<davecheney> and this must be posting to the little web server the local provider stands up
<davecheney> i just need to find out which localhost addr it is sitting on
<axw> davecheney: should be in lxcbr0
<thumper> 10.0.3.1 is the host
<thumper> davecheney: can you do 10.0.3.0/24 ?
<thumper> waigani: what'cha up to?
<thumper> waigani: because I have a few refactorings I'd like to see landed ASAP
<thumper>  :)
 * thumper sighs
<thumper> two down, two to go
<davecheney> thumper: i don't think so
<davecheney> i might be able to do 10.0.3
<davecheney> i think it's just a substring map
<davecheney> thumper: i already added 10.0.3.1 to the list
<waigani> thumper: girl just got off the ice. I have to head home and do dinner/homework daddy thing. After that?
<waigani> or email me and I'll pick it up this evening
<thumper> davecheney: http://unix.stackexchange.com/questions/23452/set-a-network-range-in-the-no-proxy-environment-variable
<davecheney> thumper: interesting
<axw> thumper: was it as good for you as it was for me?
<thumper> axw: probably a lot less of a pain for me
<axw> reviews can be quite tiring
<wallyworld_> thumper: me is sad
<wallyworld_> ERROR juju.cmd supercommand.go:293 empty http-proxy in environment configuration
<wallyworld_> save me looking, any idea?
<wallyworld_> there is no http-proxy in my env config
<wallyworld_> never has been
<wallyworld_> ah fark, wrong environment
<wallyworld_> that was with local
<dimitern> hey thumper
<waigani_> thumper: I accidentally cancelled during a commit. branch is locked. bzr break-lock: The lock for [branch] is in use and cannot be broken.
<waigani_> thumper: bzr info: Could not acquire lock "...go/src/launchpad.net/juju-core/.bzr/checkout/dirstate": [Errno 11] Resource temporarily unavailable
<mattyw> waigani_, can you try doing this? bzr break-lock bzr+ssh://example.com/bzr/foo with the bzr+ssh pointing to the remote branch
<mattyw> ^^ah right during a commit - not a push - I suspect my "advice" is worthless here
<waigani_> mattyw: yeah during commit $#@$
<waigani_> axw? wallyworld_? ^
<axw> waigani_: I did that once, and thumper had to walk me through fixing it... I can't remember the details sorry
<waigani_> hmmmm
<waigani_> thumper: never mind, I fixed it
<waigani_> wallyworld_: can I bzr switch -b [branch] in a new terminal while lbox proposing in another?
<waigani_> axw: ^ ?
<axw> waigani_: no
<waigani_> okay, np
<jam> dimitern: good morning
<dimitern> jam, morning!
<jam> dimitern: I'd like to have vladk do some work with you to get up to speed on juju-core
<jam> dimitern: do you think you can spend some time pairing with him today ?
<dimitern> jam, sure, after the weekly meeting perhaps? I need to get this fix done first.
<jam> dimitern: that should be fine, I think he's still got infrastructure stuff to do today
<jam> vladk: did you need something ? It sounded like you were just saying something as I closed the window
<thumper> dimitern: o/
<dimitern> thumper, hey - seen my mail?
<thumper> yes, will get on it
<jam> fwereade: standup ?
<jam> mramm: welcome
<dimitern> mramm, fwereade, rogpeppe, mgz_, meeting?
<rogpeppe> dimitern: ah yes
<dimitern> voidspace, ^^
<wallyworld_> fwereade: i really want to get this in for 1.18 because API https://codereview.appspot.com/75330043
<dimitern> wallyworld_, are you working on bug 1291207?
<_mup_> Bug #1291207: juju-run symlink is broken after upgrade-juju <run> <upgrade-juju> <juju-core:Triaged by wallyworld> <https://launchpad.net/bugs/1291207>
<wallyworld_> dimitern: no
<voidspace> oh, is there a meeting now
<dimitern> wallyworld_, ok
<wallyworld_> dimitern: axw was going to i think
<wallyworld_> voidspace: come and join us :-)
<voidspace> wallyworld_: coming.
<voidspace> I did not know we had meetings at a different time on Thursdays!
<wallyworld_> Thursdays are special
<dimitern> voidspace, it's just this one
<axw> I wasn't planning to work on that bug...
<axw> I just reported it :)
<voidspace> dimitern: you mean just today, or just Thursday?
<dimitern> axw, wallyworld_, ok so if I can get to it I'll fix it today
<jam> voidspace: thursday is everyone, so we go a bit earlier to make it work for NZ
<axw> (currently deep into Azure)
<axw> thanks dimitern
<wallyworld_> dimitern: \o/
<dimitern> voidspace, every thursday
<voidspace> dimitern: thanks
<voidspace> so my code review suggests I should add global state and reduce test coverage!
<voidspace> amongst some more sensible comments ;-)
 * dimitern raises 2 crossed fingers in disgust...
<dimitern> hey sinzui, what's your plan for the 1.17.5 release? immediately after fixing the critical issues?
<voidspace> ah, the suggestion to reduce test coverage *does* suggest removing the code that was being tested too, so not entirely crazy :-)
<dimitern> axw, re bug 1291292 - the manual bootstrap writes provider-state-url file with a hardcoded file:///<storage-path>/provider-state
<_mup_> Bug #1291292: use a utils/http client for all HTTP(S) calls across the codebase <manual-provider> <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1291292>
<dimitern> axw, and it does that because I guess p-s-url being an url, it's read with bootstrap.LoadStateFromURL
<axw> dimitern: it should be http:// though. I will try to remember to take a look later
<axw> machine-0 starts an httpstorage listener
<dimitern> axw, it starts an sshstorage in either bootstrap or provisioning mode (can't remember)
<axw> only command line should be using sshstorage - could be an issue around there
<dimitern> axw, seemed so to me as well
<dimitern> fwereade, i've assigned to myself all bugs i'm working on today - basically i need to get upgrades from 1.16 going smoothly
<fwereade> dimitern, ok, excellent; you and mgz will need to do vlans as a matter of some urgency though
<wallyworld_> jam: fwereade: i've tested this live upgrading from all of 1.16, 1.17, 1.18. I would love to get it in to 1.18 so we can delete lots of code post 1.18 https://codereview.appspot.com/75330043
<voidspace> mgz_: I'm just butchering our mp in response to the review from rogpeppe
<wwitzel3> rogpeppe: did you see my comment on the br0 bug?
<jam> dimitern: rogpeppe: vladk: Dimiter has mentioned that he's a bit swamped with some critical bug fixes. Roger, is it possible for you to spend some time with vladk looking over your shoulder at the work you're on?
<rogpeppe> jam: sure
<rogpeppe> wwitzel3: no
<voidspace> mgz_: although rogpeppe and natefinch suggest adding global state to the tests
<rogpeppe> voidspace: ??
<rogpeppe> voidspace: what global state?
<voidspace> rogpeppe: using a global variable instead of a hard coded exception
<voidspace> rogpeppe: I solved it a different way instead :-)
<rogpeppe> voidspace: that's not global state actually
<voidspace> rogpeppe: which you *also* hinted at in a different part of the review
<rogpeppe> voidspace: but i also suggested another way
<rogpeppe> voidspace: yeah
<wwitzel3> rogpeppe: short version, I did the same set of tests we did yesterday using trunk and it all worked.
<rogpeppe> wwitzel3: bugger
<rogpeppe> oh for a real maas
<voidspace> rogpeppe: using aggregator.instanceInfo instead of the reqc channel directly improves the tests a lot
<voidspace> rogpeppe: so thanks for that
<rogpeppe> voidspace: cool
<voidspace> rogpeppe: just going through making all those changes
<voidspace> rogpeppe: and I have a question about the removal of the loop exit - but it's in my reply to the review so you can read it and answer when I'm done
<rogpeppe> voidspace: the only test coverage i suggested removing was inappropriately messing with private state - if we're going to do that, then there are *many* more tests we can do :-)
<thumper> night all
<rogpeppe> thumper: g'night
<voidspace> rogpeppe: we have code to handle the channel dying
<rogpeppe> voidspace: i suggested removing that
<voidspace> rogpeppe: right
<rogpeppe> voidspace: because we never actually close the channel
<voidspace> rogpeppe: however, my follow up question is
<voidspace> rogpeppe: inside newAggregator we have the inner func which wraps the loop call in tomb.Kill with a deferred tomb.Done
<voidspace> rogpeppe: which we didn't see an easy way to test
<voidspace> rogpeppe: as loop exit is now not *possible* shouldn't that code be removed too?
<rogpeppe> voidspace: the loop does exit when the tomb is killed
<voidspace> right, there's still the tomb exit
<voidspace> I forgot about that return
<voidspace> ok
<rogpeppe> voidspace: when that happens, tomb.Done will be called
<rogpeppe> voidspace: and if that's not called, then Wait won't return
<rogpeppe> voidspace: so we are testing it AFAICS
<voidspace> rogpeppe: I thought we were removing the only return from the loop, but we aren't
<voidspace> so fine, that's my answer - thanks
<rogpeppe> voidspace: cool
<gnuoy> Hi, I just got stung by Bug#1243827 , is there a work around for it or a fix in the pipeline ?
<_mup_> Bug #1243827: juju is stripping underscore from options <canonical-webops> <config> <juju-core:Triaged by adeuring> <juju-deployer:Invalid by hazmat> <https://launchpad.net/bugs/1243827>
<voidspace> I'm very happy to remove tests if we also remove the code being tested
<rogpeppe> voidspace: it occurred to me that there are actually no tests that the requests are issued at the expected rate
<adeuring> gnuoy: put the data into quotation marks
<voidspace> rogpeppe: that gatherTime is honoured?
<rogpeppe> voidspace: yeah
<voidspace> rogpeppe: well, that's a ratelimiter detail - so that should be tested inside ratelimiter
<gnuoy> adeuring, I'll give that a try, thanks
<voidspace> rogpeppe: we test that they're batched by the ratelimiter
<voidspace> rogpeppe: we don't test that our gatherTime is *passed* to the ratelimiter, but if they're batched (which we do test) I think it's fair to just assume that
<rogpeppe> voidspace: kinda. we test some batching, but we don't really check that we're using the ratelimiter appropriately.
<adeuring> gnuoy: ...or use double underscores. But that will require a change in the config once the bug is fixed. (and I am quite close to having a fix)
<voidspace> rogpeppe: so I could test that the time taken for the batched request to return is greater than gatherTime
<rogpeppe> voidspace: i believe it works, but i woke up this morning thinking that it didn't, and had to write a little piece of code to convince me it did
<voidspace> rogpeppe: a simple assertion that timeTakenToBatch > gatherTime then?
<voidspace> rogpeppe: I don't want to test the semantics of the ratelimiter inside the aggregator tests - the ratelimiter should be tested inside ratelimiter
<rogpeppe> voidspace: i'd actually prefer something that calls instanceInfo several times (say 100 times at 1ms intervals, with gatherTime say to 10ms) and check that the total number of requests issued is no more than 10
<voidspace> rogpeppe: that does sound like testing the ratelimiter works
<rogpeppe> voidspace: i do hear you there, but it's actually just testing one incarnation of the ratelimiter, and that we are using it appropriately.
<rogpeppe> voidspace: it would be quite easy to do it wrongly
<rogpeppe> voidspace: that said, i believe the code works, so i'm happy to go forward with the existing tests if you're happy with them
<voidspace> rogpeppe: I could add a (threadsafe *sigh*) counter to testInstanceGetter
<voidspace> rogpeppe: so it wouldn't be hard to write
<voidspace> rogpeppe: just fire off a bunch of requests, ensure none of them error and that testInstanceGetter.counter <= 10
<rogpeppe> voidspace: yeah
<rogpeppe> voidspace: sync/atomic can be useful for thread-safe counters, if you haven't already added a mutex
<voidspace> rogpeppe: the existing batch test also ensures that the right response goes back to the right request - so it still has value "as is"
<rogpeppe> voidspace: definitely
<voidspace> rogpeppe: I don't think we need a mutex in the place you suggested one
<rogpeppe> voidspace: ok
<voidspace> rogpeppe: as we're doing a blocking read so we *can* actually know there is no concurrency issue in that specific place
<voidspace> rogpeppe: the batched requests that set instanceGetter.ids can only happen after the first request completes
<rogpeppe> voidspace: that does assume a particular implementation, but i'm happy with that
<rogpeppe> voidspace: we don't do black-box testing
<voidspace> rogpeppe: well, it assumes that we can only get a reply after the request has been sent
<voidspace> rogpeppe: so we're assuming a particular implementation of the *universe*
<voidspace> ;-)
<voidspace> as we wait for the reply before making the next request
<voidspace> I don't *think* that makes assumptions about the implementation
<rogpeppe> voidspace: it assumes that the loop doesn't call Instances again after sending the reply
<voidspace> rogpeppe: we know there are no other requests coming in
<rogpeppe> voidspace: but that would indeed be a hard mistake to make
<rogpeppe> voidspace: i was not seriously suggesting adding a mutex, in all honesty
<voidspace> ok
<rogpeppe> voidspace: just pointing out the potential issue - it's a common pattern to see, and raises warning signs in my head
<voidspace> I've worked with threads before
<voidspace> so I'm no stranger to concurrency pain
<rogpeppe> voidspace: :-)
<rogpeppe> voidspace: given the above, you probably won't need a thread-safe counter either
<rogpeppe> voidspace: as you know that instances is always called from a single thread
<voidspace> no, we need to batch 100 calls concurrently as we can't use instanceInfo synchronously if we want batching
<voidspace> and a plain increment won't be thread safe
<voidspace> so I think we will
<voidspace> although you may be right - let me think about it
<rogpeppe> voidspace: we can make 100 calls concurrently, but Instances will only be called from a single thread
<voidspace> ah yes
<voidspace> we make the requests from 100 different goroutines - but the instance is accessed just from the loop goroutine
<rogpeppe> voidspace: yeah
<voidspace> cool
<rogpeppe> voidspace: that said, we really are assuming an implementation here - the previous implementation of instanceInfo *would* have called Instances concurrently
<rogpeppe> voidspace: so probably best to use atomic.AddInt32 or whatever , just to be on the safe side
<rogpeppe> voidspace: then you can easily verify that the tests fail against the old version of the code if you wish
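The counter shape being suggested can be sketched in a few lines: n goroutines bump a shared counter, and sync/atomic makes the increment safe without a mutex (toy code, not juju's testInstanceGetter):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countConcurrent bumps a shared counter from n goroutines at once;
// atomic.AddInt32 makes the increment safe even if the caller (unlike
// the aggregator's single loop goroutine) runs concurrently.
func countConcurrent(n int) int32 {
	var calls int32
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt32(&calls, 1) // a plain calls++ here would race
		}()
	}
	wg.Wait()
	return atomic.LoadInt32(&calls)
}

func main() {
	fmt.Println(countConcurrent(100)) // → 100
}
```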
<voidspace> fair enough
<voidspace> a big part of my work with paymentservice was implementing correct locking to solve our concurrent database access problems - that was *worse* though because the concurrent access was from different processes, and actions that acquired the lock could trigger other actions in other processes that needed the lock too
<voidspace> so we had to add locking without deadlocking
<voidspace> we ended up with a state machine to handle the actions (state transitions) and also do the locking in basically *one place*
<voidspace> but before then we had to add careful locking everywhere we found a race condition
<voidspace> coffee
<rogpeppe> voidspace: yeah, mutexes work ok in simple situations, but for more complex situations, a single thread makes things easier
<voidspace> and William and I have some fun horror stories about threading from Resolver Systems - we changed the main calculation engine to work in a background thread (to not block the UI and allow multiple concurrent calculations)
<rogpeppe> voidspace: database locking is awkward because you don't really want a monitor process either
<voidspace> but syncing the results back to the UI had to happen on the UI thread
<rogpeppe> voidspace: ha ha
<voidspace> we spent *days* finding and fixing race conditions
<voidspace> including multiple iterations of "I can *prove* that what just happened can't happen"
<rogpeppe> voidspace: a nice idiom in Go can be to send a function on a channel, which then gets executed by the monitor thread
<voidspace> rogpeppe: yes, being able to send functions on channels is very nice
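The idiom in question — a monitor goroutine solely owns some state, and other goroutines send it functions to run rather than sharing the state under a mutex — looks like this (toy sketch, not juju code):

```go
package main

import "fmt"

// startMonitor launches a goroutine that solely owns a map and executes
// whatever functions are sent to it on the ops channel.
func startMonitor() chan func(map[string]int) {
	ops := make(chan func(map[string]int))
	go func() {
		state := make(map[string]int) // owned by this goroutine only
		for op := range ops {
			op(state)
		}
	}()
	return ops
}

func main() {
	ops := startMonitor()
	// Mutate the state by sending a closure; no mutex needed.
	ops <- func(s map[string]int) { s["x"] = 42 }
	// Read it back the same way.
	result := make(chan int)
	ops <- func(s map[string]int) { result <- s["x"] }
	fmt.Println(<-result) // → 42
	close(ops)
}
```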
<voidspace> similar to InvokeOnThread (or whatever it's called) in C# - which kinda-sorta has function pointers in the form of delegates
<voidspace> actually, its functional programming support improved massively after I stopped using it
<rogpeppe> voidspace: yes, except that InvokeOnThread sounds like you can tell some code to execute on any arbitrary thread
<voidspace> only if there's an event loop running
<voidspace> so for the main GUI event loop you can pass a delegate and say "execute this delegate on the GUI thread and return the results to me"
<rogpeppe> voidspace: right
<voidspace> which yes, isn't the same as sending a function across a channel - just reminded me of it I guess
<rogpeppe> voidspace: if you have generalised "event loops" it makes sense
<voidspace> yep, you could run multiple event loops in different threads
<voidspace> anyway, coffee time for me
<rogpeppe> voidspace: enjoy
<voidspace> :-)
<rogpeppe> vladk: hi
<rogpeppe> vladk: when you feel that you've caught up with what you want to do, perhaps we could hang out together, as john suggested
<vladk> rogpeppe: may we hangout in an hour? I'm a little busy right now
<rogpeppe> vladk: sounds good
<rogpeppe> vladk: ping me when you're ready
<vladk> rogpeppe: ok
<wwitzel3> rogpeppe: https://codereview.appspot.com/72270044/
<wwitzel3> rogpeppe: I made those changes we discussed yesterday
<rogpeppe> wwitzel3: looking
<wwitzel3> rogpeppe: also I was able to replicate the error from your ticket (1271144)
<wwitzel3> rogpeppe: but am not able to replicate it on your branch :)
<wwitzel3> rogpeppe: it is intermittent on trunk, but I can make it happen
<adeuring> gnuoy: does the string where the "_" is removed start with a "-" or "+" (just to confirm that I'm on the right track)
<rogpeppe> wwitzel3: great!
<rogpeppe> wwitzel3: that's a relief
<wwitzel3> http://paste.ubuntu.com/7084302/
<gnuoy> adeuring, yes, it starts with a "-"
<adeuring> gnuoy: great, thanks!
<gnuoy> np
<wwitzel3> rogpeppe: haha, I just looked at the diff for that fix
<rogpeppe> wwitzel3: loadsa code :-)
<wwitzel3> rogpeppe: well for what it's worth, after the testing, LGTM
<rogpeppe> wwitzel3: you have a review
<bodie_> o/
<voidspace> rogpeppe: so no issues with doing test checks inside a goroutine?
<voidspace> rogpeppe: I assume you do check rather than assert because you shouldn't assert inside a goroutine
<rogpeppe> voidspace: Check is considered ok; Assert is not
<rogpeppe> voidspace: yeah
<voidspace> rogpeppe: but a check can happily add a failure whereas an Assert can't just stop
<voidspace> right
<rogpeppe> voidspace: in fact it doesn't matter, but this is the party line
<voidspace> ah, ok
<voidspace> it does change the semantics of the test slightly (using Check rather than Assert)
<voidspace> in that we won't fail early
<voidspace> but no harm
<rogpeppe> voidspace: yeah, but i don't think it matters
<voidspace> rogpeppe: I'm intrigued as to how it doesn't matter
<sinzui> dimitern, yes, as soon as the blocking bugs are fixed. I may release without the fix for cloud-init in maas, release that in 1.18.0
<voidspace> rogpeppe: if you do an assert does the test runner kill all running goroutines and terminate the test method?
<voidspace> in fact I wonder how the test runner works at all without exceptions - does it use a panic?
<rogpeppe> voidspace: check out the implementation of Assert
<rogpeppe> voidspace: it calls runtime.Goexit
<voidspace> rogpeppe: interesting
<rogpeppe> voidspace: which is kinda like a panic
<rogpeppe> voidspace: except that it doesn't actually panic
<voidspace> heh
<voidspace> is it "magic" implemented by the compiler, or does it use standard go mechanisms?
<rogpeppe> voidspace: but if we used Assert, it wouldn't stop any of the other goroutines involved in the test
<voidspace> right
<rogpeppe> voidspace: runtime.Goexit is magic
<rogpeppe> voidspace: as is everything in the runtime package
<voidspace> heh
<voidspace> ok, well maybe fair enough then
<voidspace> it's "exposed magic"
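What gocheck's Assert relies on can be seen directly: runtime.Goexit terminates only the calling goroutine, after running its deferred calls, without panicking and without touching any other goroutine in the test. A minimal demonstration:

```go
package main

import (
	"fmt"
	"runtime"
)

// goexitRunsDefers shows the behaviour under discussion: runtime.Goexit
// stops the calling goroutine, but its deferred calls still run first.
func goexitRunsDefers() string {
	done := make(chan string, 2)
	go func() {
		defer func() { done <- "deferred ran" }()
		runtime.Goexit()
		done <- "unreachable" // never executes
	}()
	return <-done
}

func main() {
	fmt.Println(goexitRunsDefers()) // → deferred ran
}
```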
<voidspace> rogpeppe: so your suggested code to avoid using the channel directly when testing the batching of calls
<voidspace> rogpeppe: spins up two goroutines and uses a WaitGroup
<rogpeppe> voidspace: yes
<voidspace> rogpeppe: but in the final assertion we're depending on the order of those calls
<rogpeppe> voidspace: ah, interesting
<voidspace> rogpeppe: which is strictly incorrect now I guess
<rogpeppe> voidspace: yeah
<rogpeppe> voidspace: you could sort 'em
<voidspace> rogpeppe: sounds good
<voidspace> or use a set
<voidspace> what's the set type?
<mgz> wouldn't sets be nice
<voidspace> hah
<rogpeppe> voidspace: sorting's probably a little easier
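Sorting before comparing makes the assertion independent of which goroutine's call landed first, as suggested above. A sketch (hypothetical helper name; the real test's ids come from the fake instance getter):

```go
package main

import (
	"fmt"
	"sort"
)

// canonicalIDs copies and sorts a slice of ids so a test can compare
// them without depending on goroutine scheduling order.
func canonicalIDs(ids []string) []string {
	out := append([]string(nil), ids...) // copy: don't mutate the input
	sort.Strings(out)
	return out
}

func main() {
	got := []string{"instance-1", "instance-0"} // order depends on scheduling
	fmt.Println(canonicalIDs(got)) // → [instance-0 instance-1]
}
```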
<bodie_> anyone have feeling on whether cobzr should be used?  I'm new to working on launchpad stuff...
<voidspace> and you can't use mappings with keys of arbitrary types, so you can't easily simulate them
<bodie_> Git kid
<mgz> bodie_: I say no
<mgz> but you do need some alternative setup to make go happy with its layout quirks
<bodie_> happy little quirks
<bodie_> taking the bob ross approach today
 * rogpeppe quirks happily
<voidspace> quirk as a verb
<voidspace> what fun
<bodie_> these kids these days with their selfies and their "quirking"
<rogpeppe> voidspace: the other possibility is just to avoid asserting on the ids passed to Instances
<voidspace> rogpeppe: that's the assert that the calls were batched
<voidspace> rogpeppe: so I'd rather leave it in...
<voidspace> rogpeppe: I could just assert that the length is two
<voidspace> rogpeppe: as that's what we care about
<rogpeppe> voidspace: you know that the right results have come back, so the right info has flowed through to instances; all you really care about is the number of calls to Instances (and the number of ids in those calls)
<rogpeppe> voidspace: yeah
<voidspace> rogpeppe: right, so the length of testGetter.ids is the relevant part
<rogpeppe> voidspace: yeah
<mgz> # launchpad.net/juju-core/state/apiserver/keymanager_test
<mgz> state/apiserver/keymanager/keymanager_test.go:20: inconsistent definition for type gocheck.C during import
<mgz> I guess that's some version skew issue...
<mgz> I should godepitup
<voidspace> oh yeah, if I restart the vm then my ssh sessions die too...
<jam> mgz: can you help me track down a bug I just ran into ?
<jam> https://bugs.launchpad.net/juju-core/+bug/1291967
<_mup_> Bug #1291967: cannot destroy-environment with 1.17.4 of a trunk bootstrapped env "ftp-proxy" <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1291967>
<jam> I think this is from thumper's changes
<jam> he added some new fields, that end up being the empty string, and 1.17.4 says that is a broken environment, and prevents me from destroying it.
<jam> In *itself* we don't have to be compatible with 1.17.4, but I don't want us to end up where 1.18.0 will refuse to destroy a 1.18.1 environment.
<mgz> jam: ugh, that looks bad
<jam> mgz: you can ignore the Panic() I got, because I think it *might* be unrelated, but handling "empty strings treated as invalid config" is bad.
<jam> I thought I saw something from axw about "allow unknown fields"
<jam> mgz: I just don't have time to finish tracking it down.
<jam> mgz: for extra fun times, every time I run "juju destroy-environment" it lists a different key as being invalid.
<jam> (hash map ordering? )
<mgz> okay, I'll see if I can repro
<jam> mgz: thanks  "go install launchpad.net/juju-core/..."; ~/dev/go/bin/juju bootstrap -e local; /usr/bin/juju destroy-environment local
<jam> mgz: reproduces it here, FWIW
<fwereade> mgz, jam: you are wonderful, I was just coming to undirectedly complain about exactly that
<mgz> fwereade: have I ever said how much I love config?
<fwereade> mgz, *everybody* loves config</deadpan>
<jam> mgz, fwereade: it might also be something waigani was working on, but I don't see it in +activereviews or the recent commits to trunk
<rogpeppe> mgz: i was looking at your comment on this review. https://codereview.appspot.com/72230045/
<mgz> rogpeppe: looking
<rogpeppe> mgz: i wondered if you'd be able to join a hangout to talk about it
<rogpeppe> mgz: as there are a few issues that i'm not sure about
<mgz> rogpeppe: minimally, you can just copy the extra arg junk from the other file
<rogpeppe> mgz: that doesn't solve the target-release issue though
<rogpeppe> mgz: which is a little more tricky
<mgz> rogpeppe: do you use your canonical account for hangouts or the other one?
<rogpeppe> mgz: either. in the current one i'm using canonical
<mgz> dialin'
<wwitzel3> rogpeppe: https://codereview.appspot.com/72270044/
<rogpeppe> https://plus.google.com/hangouts/_/7acpjqgade93ubac2gple0elsc?authuser=1&hl=en
<rogpeppe> mgz: ^
<mgz> that's non-pasteable for me, can you invite me?
<rogpeppe> mgz: it might work if you change authuser=1 to authuser=0
<mgz> rogpeppe: as in, it's a pain to type out a url that long
<rogpeppe> mgz: ok, np
<mgz> ubuntu on arm doesn't do hangouts
<rogpeppe> mgz: one mo
<rogpeppe> mgz: https://plus.google.com/hangouts/_/canonical.com/juju-core?authuser=1
<rogpeppe> mgz: that's the standup hangout
<jam> rogpeppe: mgz's IRC machine is a different machine than his Google Hangout machine
<rogpeppe> i couldn't work out how to invite people :-)
<rogpeppe> mgz: https://codereview.appspot.com/72270044/
<natefinch> rogpeppe: I figured out some of the problems I was having with bootstrapping, which then let me actually test my code, and now I realize there's a problem... the bootstrap command on the bootstrap node tries to connect to state before it starts the machine agent, and since the machine agent is now the one starting mongod, things don't work.
<rogpeppe> wwitzel3: are you around to join a hangout?
<fwereade> natefinch, the machine agent can handle starting up with mongod already there, though, right?
<marcoceppi> why are we depricating the -e flag for destroy-environment?
<wwitzel3> rogpeppe: yep
<rogpeppe> marcoceppi: an excellent question :-)
<rogpeppe> wwitzel3: https://plus.google.com/hangouts/_/canonical.com/juju-core
<fwereade> natefinch, it *should* just be a matter of doing it in bootstrap as well, right?
<natefinch> fwereade: yeah, we can do it there, too, that's a good point
<marcoceppi> rogpeppe: just got a warning about using destroy-environment with the -e flag
<natefinch> marcoceppi, rogpeppe: I don't think we need to deprecate the flag, it's just optional
<marcoceppi> also, rogpeppe juju bootstrap doesn't take a positional environment, it's like the only one that does
<natefinch> marcoceppi, rogpeppe: what's not optional is the environment name
<marcoceppi> natefinch: I'll file a bug about the user warning then
<bodie_> What is the point of this line in addmachine_test.go?  it seems redundant
<bodie_> http://paste.ubuntu.com/7084701/
<voidspace> I assume that the following does ten iterations
<voidspace> for i := 0; i < 10; i++ {
<voidspace> and not 9
<natefinch> voidspace: yes. Just like in every other language :)
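voidspace's reading is right; a quick check:

```go
package main

import "fmt"

// iterations confirms the loop above: i takes the values 0 through 9,
// so the body executes exactly ten times.
func iterations() int {
	n := 0
	for i := 0; i < 10; i++ {
		n++
	}
	return n
}

func main() {
	fmt.Println(iterations()) // → 10
}
```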
<voidspace> natefinch: I normally use languages with less opaque loop syntax ;-)
<natefinch> voidspace: I thought you did .Net stuff?  isn't that the exact same syntax?
<voidspace> natefinch: that was about six years ago!
<voidspace> but yes
<natefinch> voidspace: ahh
<natefinch> voidspace: I went C -> C++ -> C#, so it's the exact same syntax I've used since I started programming
<voidspace> maybe only five
<bodie_> today I learned: for loop is opaque
<bodie_> kids these days
<voidspace> if you weren't a programmer, what would this mean: "for i := 0; i < 10; i++"
<bodie_> find a programmer
<bodie_> :D
<voidspace> "for i in range(10)" I reckon even my non-programmer friends could hazard a guess...
<bodie_> hmm, good point
<natefinch> bodie_: the thing I think is funny is the people complaining about having to write 3 line loops instead of having some magic one liner to do the exact same thing
<voidspace> natefinch: well, it depends on the level of magic
<bodie_> one line + {} = 3 lines
<voidspace> natefinch: I do love list comprehensions
<voidspace> natefinch: and Go *could* gain slice comprehensions
<natefinch> voidspace: very very simple ones are ok, but they tend to get abused into huge monstrosities
<natefinch> voidspace: line returns are not evil :)
<bodie_> list comps are swanky, i'll give ya that
<voidspace> natefinch: well, no - but you can write bad code with every construct
<voidspace> natefinch: so I never find "but people write bad code" a very convincing argument in itself
<natefinch> voidspace: yes, but some constructs encourage hard to read code, and some discourage it
<voidspace> natefinch: and yes, I always break up large comprehensions into an explicit loop
<bodie_> you have to admit syntax like x = [i*i for i in x] is pretty handy
<voidspace> loop and filter in a single expression is nice - especially that it is an expression
<voidspace> f(x for x in y if x)
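The loop-and-filter comprehension above has a direct Go spelling: an explicit loop with append. A minimal sketch (the function name is made up for illustration, not juju-core code):

```go
package main

import "fmt"

// squaresOfPositive is the Go equivalent of the Python
// comprehension [i*i for i in x if i > 0]: an explicit
// range loop with a filter, built up with append.
func squaresOfPositive(x []int) []int {
	out := make([]int, 0, len(x))
	for _, i := range x {
		if i > 0 {
			out = append(out, i*i)
		}
	}
	return out
}

func main() {
	fmt.Println(squaresOfPositive([]int{-2, 1, 2, 3})) // [1 4 9]
}
```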
<bodie_> I hail from the mystical land of Perl so I'm definitely a little bit twisted in the head
<bodie_> I mean, there are ungainly list comprehensions, and then there's Perl...
<natefinch> bodie_: your poor poor soul
<voidspace> natefinch: I do like that Go is small, it certainly makes it easier to learn
<voidspace> except the "you don't need to care about pointers except when you need to care about pointers"
<voidspace> I foxed Martin with that yesterday and I'm *pretty sure* the same code also mildly foxed Roger in his review :-)
<voidspace> he suggested changing a slice of pointers to a slice of "non-pointers" but I don't think we can do that because we need pointer receivers
<natefinch> voidspace:  the smallness of go is hugely handy.  There's no "Oh crap, let's go look up how <somekeyword> works"
<bodie_> that is something I really appreciate
<bodie_> as a relative newcomer
<voidspace> I *bet* Go in five years is substantially bigger though (although not to the extent of other languages I'm sure)
<natefinch> voidspace: C# was well into that territory when I left.  They basically threw everything in there.  Dynamic types?  Sure!  Contracts? Sure!  Async calls?  Sure!
<natefinch> voidspace: I don't know... it's one of their core tenets to keep the language small
<voidspace> natefinch: the C# full language spec was *already* huge when I started with it in 2006
<voidspace> natefinch: I like that they're very conservative
<voidspace> natefinch: but I'd still wager good money
<natefinch> voidspace: it'll be bigger, I don't know about substantially bigger.  I think they're pretty happy keeping it small, and there's not a lot that's missing other than generics.  Most everything else is just syntactic sugar
<voidspace> yeah, when generics arrive it will be nice
<voidspace> but who doesn't like things just that little bit sweeter - it's *very* hard to resist the call altogether
<natefinch> voidspace: I don't know.... they're some pretty grumpy old unix guys.  They don't even like syntax highlighting in their editors :)
<voidspace> :-)
<natefinch> voidspace: I bet we'll see a lot of improvements to the performance and behavior, a lot of improvements in tooling - they're talking about building their own debugger, because GDB isn't a good fit for Go... that's the one thing I'd love to have, is a better debugger.
<voidspace> natefinch: a real repl would be good too
<voidspace> anyway, I stand by my claim (five years is an age in programming - even going really slowly a lot can change). But we'll see.
<bodie_> I'm not sure how I feel about a gorepl
<bodie_> it's so trivially quick to compile and so small that it doesn't feel like I have an excuse to want one
<natefinch> voidspace: real repls in compiled languages are tricky.  .Net can sorta do it because it's compiled to an intermediary form, but for truly native code, it's tricky.  Being able to query values is usually enough... being able to change them is really nice, but might not be feasible
<voidspace> natefinch: right
<natefinch> bodie_: it helps a lot when you're 5 minutes into a 10 minute bootstrap and something fails
<bodie_> word
<voidspace> natefinch: but the support you need for a repl and the support you want for a debugger are not so different
<bodie_> I was actually looking at the bug for lxc-clone
<natefinch> voidspace: true
<bodie_> I think that would be really handy, but I'm not too sure where to start
<natefinch> voidspace: there are some go repls out there.... basically recompile after each line you enter, which is kinda cool.  I haven't tried them, though, since it's usually pretty trivial to do the recompile yourself
<bodie_> https://bugs.launchpad.net/juju-core/+bug/1203291
<_mup_> Bug #1203291: local provider moar awesome with lxc-clone <local-provider> <papercut> <performance> <juju-core:Triaged> <https://launchpad.net/bugs/1203291>
<bodie_> ^ that one
<voidspace> natefinch: yeah, although the compiler really fights against you I guess
<voidspace> natefinch: unused variables, unimported modules
<voidspace> although all those are fixable with code transforms (automatically generate the imports and fake out the variable usage)
<bodie_> I think that what seems finicky about Go on the surface (grumpy compiler) is actually really useful for figuring out what's wrong with my code
<voidspace> yeah, proper compile errors can be great
<natefinch> the modules thing you can get around... usually it's just fmt that comes and goes, so you can do var _ = fmt.Printf  and it'll be quiet
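natefinch's blank-identifier trick looks like this in context (a sketch; the commented-out call stands in for debug code that comes and goes during exploratory work):

```go
package main

import "fmt"

// The blank identifier keeps the fmt import compiling even while
// every real use of fmt is commented out mid-debug, avoiding the
// "imported and not used" error.
var _ = fmt.Printf

// double exists only so this sketch has something runnable.
func double(n int) int { return n * 2 }

func main() {
	scratch := "temporary debug value"
	_ = scratch // same trick for "declared and not used" while iterating
	// fmt.Printf("debugging: %v\n", scratch)
	fmt.Println(double(21))
}
```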
<voidspace> but they can also make exploratory coding and writing tests harder
<bodie_> heh... emphasis on proper
<natefinch> THere's definitely been times when the unused variables error has shown me a bug in my code
<bodie_> maybe a linter that runs on every update would be handy, something like what's in gosublime
<bodie_> although the compile is already so snappy
<voidspace> natefinch: makes a great typo catcher
<bodie_> you could also write a hook to build on save
<bodie_> but again slightly too involved
<voidspace> flake8 does the same for Python - it's the most trivial kind of thing to catch
<natefinch> there's a gofmt variant that'll automatically add/remove imports
<voidspace> yep, rogpeppe mentioned that - sounds useful
<natefinch> bodie_: one thing you can check out is that gofmt will only modify the code if it has proper syntax, so if you save and it doesn't gofmt, you know there's a compile error.  It doesn't catch *all* errors, but it catches many.
<bodie_> neat :)
<bodie_> what do you think of golint?
<natefinch> bodie_: I haven't tried it
<bodie_> I think it's nice
<natefinch> oh, one more cool thing gofmt does is space out your code according to order of operations, so 5  +  7  /  2  +  8  gets formatted into 5 + 7/2 + 8  to make it more obvious what actually will happen
<bodie_> that is cool
<natefinch> that also saved me a hard to find bug
<mgz> (+ 5 (/ 7 2) 8)
 * natefinch scowls at mgz
<natefinch> fwereade: of course, making bootstrap create mongo is easy.... *testing* that it does so is a PITA
<voidspace> gc.LessThan?
<natefinch> voidspace: does it not exist?  If not, look for it in juju-core/testing/checkers
<voidspace> I was asking if it exists - I assume it does
<natefinch> voidspace: try it and find out :)
<voidspace> the compiler won't tell me because I'm making an assert on a member that I haven't yet written
<voidspace> the compiler will tell me in a minute
<voidspace> once I fix my syntax errors!
<natefinch> heh
<natefinch> voidspace: autocomplete?
<voidspace> natefinch: not yet
<voidspace> hopefully at the weekend or when I get a spare hour or two I will get vim properly configured
<voidspace> and work out why godef isn't working properly for me
<voidspace> etc
<natefinch> voidspace: ahh, sadface.
<voidspace> gc.LessThan does not exist the compiler tells me
<rogpeppe> 5 7 2 / + 8 +
<rogpeppe> voidspace: it's in testing/checkers
<voidspace> rogpeppe: checkers.LessThan ?
<voidspace> rogpeppe: thanks, I will look anyway
<rogpeppe> voidspace: yeah, although we usually import it as jc
<natefinch> voidspace: the convention is to import the package as jc (to mirror gc for gocheck)
<voidspace> natefinch: cool
<voidspace> js.Saves
<voidspace> *jc.Saves dammit
<natefinch> voidspace: it's not a pointer type
 * natefinch ducks
<voidspace> :-)
<voidspace> rogpeppe: so the batch test passes
<voidspace> rogpeppe: but I couldn't remember what type you suggested for the counter, so I just incremented an int
<voidspace> rogpeppe: can you remind me please
<voidspace> rogpeppe: make 100 requests, wait for them to complete
<voidspace> ah, we probably only made 2 calls, I need a pause in between calls
<adeuring> thedac: could you have a look at my comment in bug 1243827? I have doubts that using double underscores ever worked
<_mup_> Bug #1243827: juju is stripping underscore from options <canonical-webops> <config> <goyaml:In Progress by adeuring> <juju-core:Invalid by adeuring> <juju-deployer:Invalid by hazmat> <https://launchpad.net/bugs/1243827>
<mramm> core rep on the cross team meeting?
<mramm> fwereade: rogpeppe: dimitern: ^^^^
<dimitern> mramm, if possible, i'd like to skip it to finish off the 2 critical bugs i'm working on
<thedac> adeuring: hi, I originally submitted that bug thinking it was against juju-deployer. In that context (a juju deployer config) using the doubled underscores worked. I have never tested that with juju core by itself. Also see jacekn's example of nagios_check_https_params: "-H 127.0.0.1 --ssl -p {https_port}" Would this include strings that begin with '-'?
<natefinch> mramm: I can get on it if others aren't available
<mramm> looks like william is on it
<natefinch> awesome
<voidspace> lunch
 * voidspace lurches
 * rogpeppe is also lunching
 * natefinch reboots
<adeuring> thedac: yes, it's about strings that start with a '+', '-' or a digit. And thanks for confirming that using double underscores in juju-deployer worked as a workaround. Seems that I need to dig a bit deeper to avoid possible upgrade problems...
 * thedac nods
<natefinch> is it bad that I want to change the font in my editor because my : and = don't line up nicely?
<mgz> natefinch: surely having them not line up well is a feature
<mgz> make it stick out more
<TheMue> natefinch: as long as the reboot hasn't been needed due to the font change ;)
<natefinch> bodie_: btw, about colocated branches for bzr & go... I think it's really the only sane way to code in go, but others appear to disagree with me.
<natefinch> TheMue: haha, no, just updates
<natefinch> mgz: I guess I can know that whenever I get that twitch in my eye, it's because I'm looking at := rather than just =
<mgz> exactly!
<natefinch> mgz: ironically, the non-monospaced ubuntu font (which I use in IRC) lines them up quite nicely, but the monospaced version does not
<natefinch> mgz: I merged some changes from trunk, and forgot I had some local changes, too.  Is there any way to separate out the merge from my local changes?
<mgz> natefinch: it's a little painful unfortunately
<mgz> as it's a diff-on-diff operation
<mgz> try just doing `bzr shelve` and shelving your local changes
<mgz> then commiting the (cleaned) merge
<mgz> 2014-03-13 15:24:42 ERROR juju runner.go:220 worker: exited "environ-provisioner": no state server machines with addresses found
<mgz> ~hm...
<rogpeppe> voidspace: how is lbox propose failing for you? i have a few inline comments i'd like to make
<mgz> that's not something I wanted to see with a fresh bootstrap --upload-tools on trunk
<natefinch> mgz: yuck
<rogpeppe> mgz: hmm, that's not good
<natefinch> bbiab, picking up my daughter from preschool
<mgz> rogpeppe: how should I manually restart it to see if it'll work now
<rogpeppe> mgz: manually restart what?
<mgz> oh, it did it
<mgz> or at least status now says started, not down
<rogpeppe> mgz: without a bit more context i'm not sure i can help
<mgz> rogpeppe: when I first ran bootstrap just then, `juju status` reported the agent-state of machine-0 was down
<mgz> last line in machine-0.log was that
<mgz> but rechecking just then, nothing else in log, but agent-state was started
<rogpeppe> mgz: hmm, odd
<rogpeppe> mgz: i'm not entirely surprised by seeing the machine as down
<mgz> so, presumably it got to a point where it could find its own address and the worker restarted okay
<rogpeppe> mgz: but i'd have thought there would always be an address in machine 0
<mgz> yeah
<mgz> I'd also expect more log after it came back up...
<rogpeppe> mgz: that doesn't surprise me
<rogpeppe> mgz: the started status is mediated by the presence poller
<rogpeppe> mgz: the environ-provisioner exiting shouldn't cause the whole machine agent to go down
<rogpeppe> mgz: or even the API worker
<rogpeppe> mgz: was the "worker: exited" line the very last line in the log?
<mgz> rogpeppe: yes
<voidspace> rogpeppe: sorry, just seen your message
<voidspace> rogpeppe: pastebin about to follow
<voidspace> maybe I just need to merge trunk again
<voidspace> it's the branch check that is failing
<rogpeppe> voidspace: the branch check?
<voidspace> rogpeppe: https://pastebin.canonical.com/106421/
<rogpeppe> mgz: usually the Runner code prints "restarting "..." in 3s" immediately after the worker: exited msg
<voidspace> brb
<rogpeppe> voidspace: looks like it's found a genuine error in your code
<mgz> rogpeppe: that's not his code, it's trunk
<rogpeppe> mgz: ah, well it's bogus anyway, and shouldn't have got through govet
<rogpeppe> mgz: i suspect whoever committed the code has disabled govet
<mgz> 2366.14.2  tim.pen |            logger.Errorf("unexpected output: ", out)
<mgz> so, real issue, but a pre-existing one
<rogpeppe> mgz: yeah - i think tim has disabled govet
<rogpeppe> voidspace: you'll need to fix that before proposing your branch
<mgz> I try to disable all things containing gove
<rogpeppe> voidspace: it should be logger.Errorf("unexpected output: %s", out)
<voidspace> rogpeppe: I'll fix that and propose then
<mgz> or %q arguably
<rogpeppe> mgz: +1
<rogpeppe> voidspace: ^
<voidspace> rogpeppe: mgz yep
<mgz> voidspace: so, you can just `bzr switch trunk; bzr switch -b fixtim` ...edit...commit...propose...
<voidspace> mgz: too late, I fixed in my branch and started the lbox propose already
<voidspace> mgz: I mean, I could submit *another* branch for trunk with the same fix
<mgz> never mind then :)
<voidspace> there's another bug in clonetemplate.go too
<voidspace> *mockContainer does not implement golxc.Container (wrong type for Clone method)
<voidspace> who merged this...
<mgz> bzr blame!
<mgz> check you have your godeps right
<mgz> several tips are not compatible with juju-core atm
<voidspace> mgz: I haven't changed my deps
<voidspace> mgz: aren't we pinning our deps anyway?
<mgz> voidspace: yes, but that requires some active work when other people change things
<rogpeppe> voidspace: try (from the juju-core root) godeps -u *.tsv
<mgz> voidspace: run `godeps -u dependencies.tsv` from inside the juju-core dir
<voidspace> interesting
<voidspace> godeps: cannot update "/home/michael/canonical/src/launchpad.net/golxc": bzr: ERROR: branch has no revision tim.penhey@canonical.com-20140311005930-b14361bwnocu3krh
<rogpeppe> voidspace: if it fails, you'll have to manually go get -u the packages that it fails for
<mgz> it may well complain that it doesn't... right, that
<rogpeppe> voidspace: yeah
<rogpeppe> voidspace: go get -u launchpad.net/golxc
<rogpeppe> voidspace: then try again
<mgz> `pushd ../golxc; bzr pull; popd` also works
<rogpeppe> voidspace: blame me for not integrating pull support into godeps
<voidspace> rogpeppe: ah... :-)
<voidspace> so we just pull the branch and then we can update to the required revision
<rogpeppe> voidspace: yeah
<mgz> yup
<voidspace> that may well have fixed the other issues
<voidspace> trying
<voidspace> thanks guys
<rogpeppe> mgz: presumably that was a reference to a conservative MP then...
 * rogpeppe is very slow
<mgz> :P
<voidspace> yup, working
<voidspace> rogpeppe: CL updated
<rogpeppe> voidspace: that int type i was suggesting was int32 - use atomic.AddInt32 to increment it
<rogpeppe> voidspace: cool
<voidspace> rogpeppe: ah, thanks
<voidspace> I will make that change after coffee
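rogpeppe's suggestion above, sketched out: an int32 counter bumped with atomic.AddInt32 instead of a plain `n++` (which would be a data race when many goroutines increment it). The function name and the figure of 100 goroutines are illustrative, not the actual test code:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countCalls fires n goroutines that each bump a shared counter.
// A plain calls++ here would be a data race; atomic.AddInt32 makes
// each increment safe, and atomic.LoadInt32 makes the final read safe.
func countCalls(n int) int32 {
	var calls int32
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt32(&calls, 1)
		}()
	}
	wg.Wait()
	return atomic.LoadInt32(&calls)
}

func main() {
	fmt.Println(countCalls(100)) // 100
}
```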
<hatch> Hey is anyone working on an nginx charm yet ?
<rogpeppe> voidspace: reviewed
<voidspace> rogpeppe: I don't think we can use []instance.Instance
<rogpeppe> voidspace: no?
<voidspace> rogpeppe: they need to be created as pointers for the pointer receivers
<voidspace> we already tried it IIRC
<rogpeppe> voidspace: i think it should work fine
<voidspace> rogpeppe: I think it didn't :-)
<voidspace> but I will try again
<rogpeppe> voidspace: you're assigning *testInstance to instance.Instance
<rogpeppe> voidspace: so []instance.Instance{instance1} should work ok
<mgz> rogpeppe: we need to use testInstance as instance.Instance - which requires a pointer unless I'm confused about something
<rogpeppe> voidspace: when instance1 is a *testInstance
<rogpeppe> mgz: instance.Instance is an interface
<rogpeppe> mgz: it can hold a concrete type that's a pointer
<mgz> right, what you said then is fine... there was something in the earlier review that was different
<rogpeppe> mgz: the only reason it might not be a great idea is if you wanted to access the results slice after creating it and avoid the dynamic type conversion
<mgz> (or I misread)
<rogpeppe> mgz: but the code doesn't do that AFAICS
<voidspace> rogpeppe: mgz: so it looks like it's working
<voidspace> changing all the uses...
<rogpeppe> voidspace: thanks
<rogpeppe> voidspace: kinda trivial i know, but worth doing i think
<natefinch> interfaces are already pointer-ish, so you almost never need a pointer to an interface
<voidspace> natefinch: sure, it's just the concrete types you have to worry about
<voidspace> although that may be the same...
<voidspace> what I didn't know was that a pointer to an "instance" (term?) of a concrete type would satisfy an Interface type requirement
<natefinch> voidspace: yeah..  one of the things that took some time for me to wrap my head around was pointer receivers.... what helped me was to think of the pointer to a type as being a totally separate type than the non-pointer  value
<voidspace> natefinch: right
<voidspace> natefinch: that's not the confusion in this case - we *knew* we needed a pointer to the type because of the pointer receiver
<natefinch> voidspace: especially true for pointers to type fulfilling interfaces
<voidspace> what we didn't know was that we could then use []Interface instead of []*ConcreteType
<natefinch> voidspace: ahh yeah
<natefinch> the way I think of it is that an interface is really just a special struct that gets automatically wrapped around a pointer to the concrete type (and in fact, that's really what they are)
<voidspace> natefinch: so something declared as being of type Interface will *always* be a pointer?
<natefinch> voidspace: an interface is a struct that holds the type and either a pointer to the value, or the value itself if the value fits in an int64  (the latter applies to pointers to concrete types)
<voidspace> natefinch: so, for a concrete type, to know if it's a pointer or not you need to know the struct layout in memory?
<natefinch> voidspace: yeah, but it doesn't actually matter if it gets a pointer to the value or not.  Methods are called on the underlying type the same way.
<natefinch> remember that "type" in this case might be "struct" or "pointer to struct"
<voidspace> natefinch: does it *never* make any difference - using an Interface abstracts that away completely?
<voidspace> I initially want to build up a mental model of how to use the language, and I'll *then* work on a mental model of what it's doing under the hood
<natefinch> yes, because the type you pass in has to fulfill the interface.  So calling Foo() on the interface is the same as calling Foo() on whatever you passed in.
<voidspace> although I realise that the two are inextricably entwined
<natefinch> the only reason it matters that an interface might hold a pointer to your type is that you then know that passing around an interface is cheap, since it's just passing a few bytes of data, even if the underlying type is huge
<voidspace> right
<voidspace> the classic difference between value and reference types
<voidspace> so an interface is always a reference type
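A minimal sketch of the point being made: a type with a pointer receiver means only the pointer satisfies the interface, and a slice of the interface type can hold those pointers directly, so `[]Interface` works in place of `[]*ConcreteType`. The names here are simplified stand-ins for instance.Instance and the test types, not the real juju-core code:

```go
package main

import "fmt"

// Instance stands in for an interface like instance.Instance.
type Instance interface {
	Id() string
}

// testInstance has a pointer receiver on Id, so only *testInstance
// satisfies Instance; the value type testInstance does not.
type testInstance struct {
	id string
}

func (t *testInstance) Id() string { return t.id }

// ids shows a []Instance holding *testInstance values directly;
// no []*testInstance is needed.
func ids(instances []Instance) []string {
	out := make([]string, len(instances))
	for i, inst := range instances {
		out[i] = inst.Id()
	}
	return out
}

func main() {
	instances := []Instance{&testInstance{id: "i-0"}, &testInstance{id: "i-1"}}
	fmt.Println(ids(instances)) // [i-0 i-1]
}
```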
<hazmat> fwereade, did you want to talk about permission denied
<hazmat> fwereade, i added some additional info to the bug report
<fwereade> hazmat, possibly, I'm trying to repro now, I'll take another look at the bug report
<natefinch> voidspace: yeah
<natefinch> voidspace: same with maps, slices, and channels
<voidspace> right
<voidspace> interesting
<voidspace> but not arrays
<voidspace> where you can't even take a pointer to it, right?
<fwereade> hazmat, I see nothing new in https://bugs.launchpad.net/juju-core/+bug/1279018 or https://bugs.launchpad.net/juju-core/+bug/1239681
<_mup_> Bug #1279018: relation-departed hook does not surface the departing relation-data to related units <landscape> <relations> <juju-core:Triaged> <https://launchpad.net/bugs/1279018>
<_mup_> Bug #1239681: relation-get failing with 'permission denied' <landscape> <regression> <juju-core:Triaged by fwereade> <postgresql (Juju Charms Collection):Triaged> <https://launchpad.net/bugs/1239681>
<natefinch> arrays are just sequential memory, so when you copy them you copy the whole dang thing.   You can get a pointer to an array, that's how slices work behind the scenes.
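The array-vs-slice copy semantics just described can be seen directly (a sketch; the helper names are made up):

```go
package main

import "fmt"

// copyAndMutate receives the array by value: the whole thing is
// copied, so mutating it leaves the caller's array untouched.
func copyAndMutate(a [3]int) int {
	a[0] = 99
	return a[0]
}

// mutateSlice receives only the small slice header; the backing
// array is shared, so the caller sees the write.
func mutateSlice(s []int) {
	s[0] = 99
}

func main() {
	arr := [3]int{1, 2, 3}
	copyAndMutate(arr)
	fmt.Println(arr[0]) // 1: the array was copied wholesale

	sl := []int{1, 2, 3}
	mutateSlice(sl)
	fmt.Println(sl[0]) // 99: the slice shares its backing array
}
```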
<hazmat> fwereade, the pastebin link from 3/5
<hazmat> fwereade, http://paste.ubuntu.com/7041709/
<fwereade> hazmat, ah, sorry, I saw that earlier
<natefinch> voidspace: the number of times I've had to interact with arrays in over a year of programming in Go is exactly zero, though.
<voidspace> natefinch: hah, cool
<hazmat> fwereade, i was debugging with them via irc in debug-hooks.. all 1.17.5.. could do relation-ids, see all the rel data minus the problematic one.
<natefinch> voidspace: one of the things I like about go is that the memory model is very easy to understand.  It's easy to know how big a struct is, so you can know how much memory you're using, what it costs to pass around, etc.  And yet, it doesn't interfere with actually *using* the language.
<voidspace> natefinch: that's good to hear
<fwereade> hazmat, I will keep trying to repro and come crying to you if I can't -- what I currently see is old relation settings still accessible despite that relation not even being in relation-list (as expected, because departing)
<hazmat> fwereade, i'm trying to get the partner to publish their charms as they've got a very reproducible issue there.. alternatively i guess i can write some charms to help.. landscape team was also running into this issue
<hazmat> fwereade, i think you mean s/relation-list/relation-ids /
<voidspace> mramm: do we have any open positions at the moment? I know a superlative candidate...
 * rogpeppe pores over a 167MB log file, looking for the needle
<fwereade> hazmat, I meant to say "unit not even being in relation-list"
<natefinch> voidspace: pretty sure we still have open spots
<fwereade> hazmat, if the *relation* is not in relation-*ids* that's interesting
<hazmat> fwereade, the relation id is in relation-ids.. the issue is just relation-get against it
<fwereade> hazmat, it's always relation-get against a specific unit as well though
<hazmat> fwereade, note ... this isn't failing accessing remote units.. but self unit
<hazmat> afaics
<hazmat> although the ls bug was specifically against remote units
<fwereade> hazmat, *self* I had not spotted -- that paste didn't seem to be accessing self specifically
<fwereade> hazmat, I don't actually see any self accesses there...
<hazmat> hmm.. i guess not
<rogpeppe> voidspace: when you do feel you want a better handle on what's going under the surface (i find it's very useful for understanding operation costs), these two articles are great: http://research.swtch.com/godata http://research.swtch.com/interfaces
<voidspace> rogpeppe: cool, will bookmark
<hazmat> fwereade, i'll see if i can come up with something self-contained to reproduce tomorrow.
<natefinch> rogpeppe: thanks, I'm sure he explains it better than me :)
<fwereade> hazmat, still failing to repro... have done everything I can think of to trigger it in both peer and pro/req relations... if you can find something that would be great
<wwitzel3> is there any known issues with the machineconfig tests? I'm getting a bunch of timeout debug messages and then a PANIC during TearDownTest
<wwitzel3> http://paste.ubuntu.com/7085765/
<natefinch> wwitzel3: the known issues are that sometimes it does that.  At least, that's one of the known issues.  For some people it works fine, for some people (like me) it fails all the time.
<wwitzel3> natefinch: great
<natefinch> wwitzel3: which is ironic, because I wrote most of those tests :/
<wwitzel3> natefinch: must be a timezone thing
<wwitzel3> :P
<wwitzel3> natefinch: I deleted all of my gocheck folders in /tmp and it passed for me
<voidspace> rogpeppe: can you look at this and see if you think it's acceptable
<voidspace> https://pastebin.canonical.com/106435/
<voidspace> rogpeppe: I left the GreaterThan assert in, otherwise doing it all in one call would pass - we expect a minimum of 11 calls
<natefinch> wwitzel3: interesting
<rogpeppe> voidspace: thinking
<rogpeppe> voidspace: with some really weird scheduling decisions, i could see that it might possibly give less than 11 calls, but i think it's probably safe to assume that won't happen
<voidspace> rogpeppe: you mean if the code blocks inside the call to Instances so that more than ten requests queue up
<rogpeppe> voidspace: yeah
<voidspace> rogpeppe: right, I'd be inclined to leave it in the test and just bump it down if it's flaky
<rogpeppe> voidspace: sgtm
<voidspace> coolio
<mramm> voidspace: we do
<mramm> 6+3 open positions on core and JAAS teams
<mramm> so plenty ;)
<voidspace> mramm: can you send me links to the core ones and I'll send excellent candidates your way
<voidspace> mramm: or are they easy to deduce from taleo?
<voidspace> if they're marked juju-core I'll find them
<mramm> should be easy to deduce
<mramm> but I'll find them quick
<wwitzel3> mgz: https://codereview.appspot.com/72270044/
<mgz> wwitzel3: thanks
<mramm> https://ch.tbe.taleo.net/CH03/ats/careers/requisition.jsp?org=CANONICAL&cws=1&rid=743
<voidspace> mramm: thanks
<mramm> shoot me an e-mail too
<voidspace> mramm: coolio
<voidspace> will promote wildly on twitter just for the hell of it
<voidspace> mramm: I'll only email you about people I actually know though, the rest can just apply normally
<mramm> voidspace: cool, thanks
<mgz> mramm: did you do another round of travel approval approving?
<mramm> mgz: not yet
 * mgz patientlies
<wwitzel3> mgz: you want to hangout while you review that?
<wwitzel3> mgz: so you can point and laugh at me in real time
<mgz> sure
<natefinch> I need a way to filter my contacts on LinkedIn by people who *haven't* started a new job in the last 8 months.... because that would filter out like 90% of my contacts
<bodie_> yeah, the job market is a massive game of musical chairs right now, it seems like
<dstroppa> fwereade: when I bootstrap with --upload-tools I get the same error as in the tests. If I force it to get the tools .tgz file from streams.canonical.com then I get past that point
<dstroppa> fwereade: so it looks like it's something with the uploaded tools
<bodie_> I think I'm doing something wrong here, I ran go test launchpad.net/juju-core and it completed in 0.002s
<bodie_> is there some other way I'm supposed to do this?
<bodie_> I'm pretty new to the ecosystem
<natefinch> bodie_: go test ./...
<bodie_> thanks
<natefinch> bodie_: go test either tests the package in this directory, or takes the path of a package, where ... is a wildcard that matches everything.  Basically that will just run tests in this directory and all subdirectories
<natefinch> bodie_: that's 99% of the use I've had for it, but in theory you can do like .../foo/... and test everything with /foo/ in the path
<wwitzel3> natefinch: https://codereview.appspot.com/72270044/ when you get a chance :)
<bodie_> natefinch, thanks for the info :)
<natefinch> bodie_: welcome.  Works for go build, too
<natefinch> wwitzel3: ok
<dstroppa> fwereade: also updated bug #1285803
<_mup_> Bug #1285803: [Joyent Provider] metadata mismatch when testing again Joyent Public Cloud <joyent-provider> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1285803>
<rogpeppe> voidspace: ready to land with one tiny fix
<bodie_> I guess I need to have a juju stateservice running to run tests, right?
<natefinch> bodie_: go test does everything it needs to run tests, no external actions needed
<bodie_> hm
<natefinch> good lord we have a lot of files called bootstrap.go :/
<jcastro> does anyone know where kvm support is documented?
<jcastro> I'm not finding it in `juju help deploy`
 * Makyo is away: Lunch
<natefinch> jcastro: juju help add-machine has lxc, but interestingly says we don't support other containers
<natefinch> jcastro: juju help constraints mentions the container constraint
<natefinch> (and kvm)
<natefinch> jcastro: seems like we should have a juju help containers
<voidspace> rogpeppe: awesome, thanks
<rogpeppe> voidspace: thanks for bearing with me :-)
<jcastro> ok, and if someone has an existing KVM infrastructure they would just use the manual provider right?
<voidspace> rogpeppe: I've learned a lot
<voidspace> rogpeppe: most of which was even useful ;-)
<rogpeppe> voidspace: ha ha :-)
<voidspace> :-)
<voidspace> rogpeppe: I did wonder about that assignment
<natefinch> jcastro: yeah
<voidspace> rogpeppe: but I saw the function returned something and I followed the append pattern
<rogpeppe> voidspace: ah yes
<voidspace> but I wondered how it could still be threadsafe :-)
<rogpeppe> voidspace: append is definitely not atomic
<voidspace> right
<voidspace> this must be atomic though
<voidspace> it says so :-)
<voidspace> rogpeppe: anyway, fixing and then I'll approve the branch
<rogpeppe> voidspace: the documentation is quite clear BTW (see the start of the sync/atomic package)
<voidspace> rogpeppe: I'm a bloke
<voidspace> I don't read the instructions unless I *really* need to ;-)
<rogpeppe> voidspace: luckily it's blokish documentation
<voidspace> hehe
<voidspace> maybe not ideal for the future diversity of the programming pool though
<natefinch> jcastro: I made a bug for the add-machine documentation error
 * Makyo is back (gone 00:07:43)
<jcastro> natefinch, I filed one for KVM doc in general
<jcastro> and mentioned your recommendation
<natefinch> jcastro: good, someone will get a two-fer when they make the doc fixes then :)
 * rogpeppe has to finish now
<rogpeppe> still not succeeded in finding out what's going on with these mongo servers. currently suspecting a bug in mgo, but we will see.
<rogpeppe> g'night all
<ev> is it possible to tell juju about an apt proxy? Cloud init lets you, but it doesn't seem possible to pass that through.
<ev> hm, probably should've used #juju for that. Soz. Heading there.
<jcastro> yes, you can define one in environments.yaml
<jcastro> let me see if I can find it
<jcastro> "If the machine you are bootstrapping on uses the apt proxy, we
<jcastro> magically sniff those settings and pass them to the environment via
<jcastro> cloud init."
<bodie_> can someone help me break down what's going on here?  http://paste.ubuntu.com/7086357/
<bodie_> (could this be caused by running my own instance of mongo?)
<natefinch> bodie_: you running mongo shouldn't affect it.  The test takes that into account and uses custom directories and ports.  However, it is a pretty flaky test.  some people have had luck cleaning out their /tmp directory and retrying, though I don't know if there's actual cause and effect there
 * bodie_ sets up cargo altar
<bodie_> hmm, looks like that might have an effect actually
<bodie_> removed directory: '/tmp/test-mgo959273350' ...
<natefinch> bodie_: that may be more of a symptom than a cause, but cleaning it up is a good thing
<bodie_> hrm
<bodie_> [LOG] 0:00.026 INFO juju mongod: error command line: unknown option sslOnNormalPorts
<bodie_> normal?
<natefinch> bodie_: ahh, no.  That means you're not using a version of mongo that supports SSL
<bodie_> maybe that means I don't have ssl enabled mongo?  but I thought it uses its own mongo
<bodie_> well that's annoying
<natefinch> bodie_: only if you have the special juju mongo installed
<natefinch> bodie_: I think if you install juju-local from the ppa, it installs the correct mongo
<bodie_> I see
<bodie_> d'you know if there's a way to retrieve that mongo somehow?
<bodie_> I was just going to build from source...
<natefinch> bodie_: building from source also works, that's what I did
<natefinch> bodie_: http://www.mongodb.org/about/tutorial/build-mongodb-on-linux/    this link is mostly right... there were some minor inconsistencies and unclear things, and I swear I wrote them up somewhere, but now I can't find them
<bodie_> seems to be working ok for me :) we'll see if this build chokes
<natefinch> cool
<voidspace> EOD folks
<voidspace> g'night
<bodie_> nite
<bodie_> so let's say I push to my new remote branch and my friend wants to check out my commit
<bodie_> using bazaar .... how would I figure that out
<bodie_> sorry for the n00b questions here
<hazmat> fwereade, so the other possibility is that your working against an assumption that the charms are doing correct things, when perhaps they are not
<natefinch> bodie_: bzr branch <your branch> <local_dir>
<bodie_> right, let's say I push that to the remote
<bodie_> can I have my buddy check out my branch?
<natefinch> bodie_: yeah, anyone can check out any branch, they just won't be able to push back up to your branch
<bodie_> nice!
<bodie_> do you know if there's a way to create a shared branch?
<natefinch> bodie_: the joys of a distributed version control system
<bodie_> heh... yes, that's what I'm trying to get at
<bodie_> oh, so you have access to all the branches by default once they're pushed
<bodie_> all right, I think I get that
<natefinch> bodie_: probably you can make one, but there's rarely a reason to, and I don't know how  (I'm actually pretty new to bazaar myself)
<bodie_> ah, well all extra lift is appreciated :)
<natefinch> man I hate forgetting to use --upload-tools.  I wish there was a way to like permanently turn that on
<wwitzel3> natefinch: function juju-bootstrap() { juju bootstrap "$@" --upload-tools ;}
<thumper> haha
<wwitzel3> natefinch: also ping on that review I'd like to be done with bug #1289316 so we can actually pair tomorrow (I know you're lookin forward to me slowing you down).
<_mup_> Bug #1289316: lxc not installed from ubuntu-cloud.archive on precise <lxc> <maas> <precise> <regression> <juju-core:In Progress by wwitzel3> <https://launchpad.net/bugs/1289316>
<natefinch> wwitzel3: yeah, working on it. I'll see if I can finish it up.  Just struggling with my own demons here.
<wwitzel3> natefinch: I can be a rubber duck if you need
<natefinch> wwitzel3: thanks, but it's really more of a matter of not doing stupid stuff like forgetting --upload-tools :)
<wwitzel3> natefinch: ahh well I already helped with that one ;D
<natefinch> wwitzel3: haha yeah
<natefinch> wwitzel3: sorry, I don't mean to be a sourpuss :)
<wwitzel3> natefinch: np, you're good
<natefinch> EOD for me
<wwitzel3> natefinch: later
<natefinch> wwitzel3: see you later. We'll pair tomorrow. Maybe you'll see what I'm missing ;)
<bodie_> still struggling to hurdle these build errors
<bodie_> it's the azure provider
<bodie_> is there a revision where it's not broken, or something?  or can I lend a hand to get it working?
<bodie_> http://paste.ubuntu.com/7086950/
<thumper> bodie_: possible that you don't have up to date dependencies
<bodie_> i'm on r2416
<bodie_> hm
 * thumper now looks at the actual error
<thumper> bodie_: possible your gwacl is either too old or too new :-)
<thumper> there is a file 'dependencies.tsv' that lists the deps
<thumper> wallyworld_: got a minute?
<thumper> wallyworld_: hangout?
<wallyworld_> ok
<bodie_> I see -- do you know if there are plans to integrate the DO provider into the source tree? (when you're finished there)
<bodie_> (or if anyone else has input)
<wallyworld_> thumper: you got a url for me?
<thumper> bodie_: possible, but not a priority right now
<thumper> wallyworld_: just getting it
 * wallyworld_ taps fingers impatiently
<thumper> wallyworld_: geez, sorry... https://plus.google.com/hangouts/_/76cpid01ju5bni12d56tdh441c?hl=en
 * wallyworld_ was trolling
<waigani> thumper: checkers branch is already merged
<thumper> waigani: but you didn't remove juju-core/testing/checkers
<waigani> ah
<thumper> o/ DarrenS
<waigani> thumper: should all other instances of "launchpad.net/juju-core/testing/testbase" be replaced with "github.com/juju/testing" - instances not using LoggingSuite that is
<thumper> oops
<thumper> tab fail
<thumper> o/ davecheney
<thumper> waigani: no
<thumper> not yet
<thumper> there are some things that need changing
<thumper> it isn't that trivial
<waigani> thumper: hangout?
<thumper> um... if we can keep it quick
<waigani> yep
<thumper> https://plus.google.com/hangouts/_/76cpjj9kf8th4q7frgnv40r7mc?hl=en
<wallyworld> fwereade: thanks for looking at my branch. the embedded interface in httpHandler is because httpHandler itself is an embedded struct and the containing struct provides the interface implementation. and Go doesn't have abstract methods :-( I added a comment
<fwereade> wallyworld, LGTM
<wallyworld> fwereade: awesome, thanks
<fwereade> waigani, I LGTMed https://code.launchpad.net/~waigani/juju-core/change-ConfigValidator-cfg-obj-to-type-string/+merge/208983 a week ago, you might not have noticed?
<waigani> fwereade: yep, I saw that thanks. Since then I've dug a whole for myself with this monster of a branch: https://codereview.appspot.com/70190050/
<waigani> fwereade: I'm learning the hard way why is important to have small branches.
<fwereade> waigani, yeah, it's one of those things that only really becomes clear the hard way
<waigani> heh
<waigani> *hole
<wallyworld> thumper: are you able to look at bug 1291207 (critical for 1.17.5) so i can start on the ppc work?
<_mup_> Bug #1291207: juju-run symlink is broken after upgrade-juju <run> <upgrade-juju> <juju-core:Triaged by wallyworld> <https://launchpad.net/bugs/1291207>
<davecheney> gz: u there ?
<davecheney> mgz_: u there ?
#juju-dev 2014-03-14
 * Makyo is away: EoD
 * thumper has another black eye
<thumper> I must really get better defence
<thumper> hi axw
<thumper> axw: do you know how I can test upgrading?
<thumper> never done it before
<axw> thumper: test upgrading? you mean the upgrade steps?
<axw> (hi btw)
<thumper> axw: no, looking at bug 1291207 and I want to upload new tools with an incremented version
<_mup_> Bug #1291207: juju-run symlink is broken after upgrade-juju <run> <upgrade-juju> <juju-core:Triaged by wallyworld> <https://launchpad.net/bugs/1291207>
<thumper> and test the unpacking process
<thumper> I think I know where the problem is, but want to confirm
<axw> thumper: oh, just "juju upgrade-juju --upload-tools"
<axw> that's all I did to trigger the bug
<axw> then I tried to "juju run" on one of the machines and found it failed
<thumper> kk
<thumper> ta
<axw> looks like another red eye :(
<thumper> axw: fix for  bug 1291207, pretty simple https://codereview.appspot.com/75680043
<thumper> axw: ta
<thumper> wwitzel3: working, browsing, or just can't leave well enough alone?
<_mup_> Bug #1291207: juju-run symlink is broken after upgrade-juju <run> <upgrade-juju> <juju-core:In Progress by thumper> <https://launchpad.net/bugs/1291207>
<wallyworld_> axw: out of interest, have you talked to william or anyone to get buy in for the need to be able to specify the principal unit assigned to a machine. just curious to ensure there's not a chance stuff might need to be reworked if it's not considered the direction we need to go
<wallyworld_> i don't have an opinion myself
<axw> wallyworld_: it's slightly different from what he and I originally discussed, so I will get his okay before landing anyway
<wallyworld_> ok
<axw> wallyworld_: this is the simplest approach and can be backed out relatively easily, but there will be changes needed to support Availability Zones for ec2
<wallyworld_> ok, i'll review it condiitonal on william's +1
<axw> thanks
<sinzui> wallyworld_, axw, thumper: did you ever get access to the ppc machines?
<thumper> sinzui: yes
<wallyworld_> sinzui: i haven't tried yet
<sinzui> fab
<wallyworld_> but i think i'm good to go
<axw> yup we all have batuan and vms
<thumper> sinzui: I have a few branches to land before 1.17.5
<thumper> sinzui: what is your eta?
<sinzui> okay. I heard you don't need batuan. I don't use it. But I was locked out for a few days. Juju CI has its own gateway, and I can offer it if needed to get to the machines
<sinzui> thumper, any bug targeted to 1.17.5 is a blocker, so I have no eta
<thumper> sinzui: hmm...
<thumper> well I'm landing a fix for one of the three outstanding
<sinzui> thumper, I could release tomorrow without rogpeppe 's branch, but that would need to land next week for 1.18.0
<thumper> the other two are in progress by dimiter and rog
<thumper> sinzui: ack
<sinzui> thumper, I think dimiter's bug looks like a requirement. Do you agree?
<thumper> let me look
<thumper> hmm..
<thumper> I'd say dimiter's is more important but not sure if it should block it or not
<sinzui> thank you thumper. I will review the state of things when I wake. I do want to release what we have as 1.17.5
<thumper> I vote yes, but check with william and john
<thumper> we should go for a majority
<thumper> if they both say no, then go with no
<thumper> axw: how long do you think to write 'juju unset-env' ?
<axw> thumper: 1/2 day maybe
<sinzui> Looks like CI is desperate to make the backup-restore test pass. I'll disable it since it has never passed
 * thumper needs coffee
<thumper> sinzui: or move it to another CI run
<thumper> maybe one day it'll pass :-)
<thumper> sinzui: can I get you to add a particular CI test?
<thumper> sinzui: after upgrading an environment, run "juju run --machine 1 hostname"
<thumper> it shouldn't fail :-)
<sinzui> thumper, the test has never passed since I wrote it. in fact I really cannot restore a machine and we have had to. canonistack lost CI a few weeks ago
<sinzui> bug https://bugs.launchpad.net/juju-core/+bug/1291022 details that we have not been able to restore on openstack or ec2
<_mup_> Bug #1291022: Cannot restore a state-server on ec2 and openstack <backup-restore> <ec2-provider> <hp-cloud> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1291022>
<thumper> hmm...
<sinzui> Though I should update the bug with an example of the what we see from start to finish
<wallyworld_> sinzui: thumper: dimiter's bug is a blocker imho because upgrades break
<thumper> wallyworld_: ack
<sinzui> wallyworld_, yeah. my sense is that you could upgrade once to 1.17.5, then never upgrade again
<wallyworld_> yep that' it
<wallyworld_> i partially fixed it
<wallyworld_> but not all attributes that needed setting, only datadir
<wallyworld_> and my fix was not for local, but other providers
<wallyworld_> dimiter wrote the original code so he is running with it
<thumper> wallyworld_: https://codereview.appspot.com/75700043
<thumper> wallyworld_: nm, axw did it
<axw> wallyworld_: thanks for the reviews, I've sent an email to William to get his OK
<axw> and will address those comments now
<wallyworld_> np
<davecheney> sinzui: what is the current ip of the jenkins server ?
<sinzui> davecheney, http://ec2-54-84-137-170.compute-1.amazonaws.com:8080/
<sinzui> davecheney, Sorry, I need to send an email about that and the juju reports, with a request for a friendly host name too
<thumper> davecheney: do you have a doc listing the steps needed to compile juju core on ppc?
<davecheney> sinzui: no worries
<davecheney> thanks
<davecheney> thumper: yes
<davecheney> i shared it with you
<davecheney> thumper: https://docs.google.com/a/canonical.com/document/d/1m9R2n6LPLNLGjdopcNkQYVG8D5V4FTyvc1vvn-9ZifM/edit
<axw> thumper: should Azure be in the availability mode by default?
 * axw thinks the answer is yes
<davecheney> === RUN TestDeepEqual
<davecheney> --- FAIL: TestDeepEqual (0.00 seconds)
<davecheney>  deepequal_test.go:106: deepEqual(map[2:two 1:one], map[2:two 1:one]); unexpected error "mimatch at top level: type mismatch map[uint]string vs map[int]string; obtained map[uint]string{0x2:\"two\", 0x1:\"one\"}; expected map[int]string{2:\"two\", 1:\"one\"}", want "mismatch at top level: type mismatch map\\[uint\\]string vs map\\[int\\]string; obtained map\\[duint\\]string\\{0x1:\"one\", 0x2:\"two\"\\}; expected map\\[int\\]string\\{2
<davecheney>  "github.com/juju/testing/checkers"
<davecheney> probably map ordering expectation
<axw> yup
<axw> 0x1/0x2 are swapped
<davecheney> https://github.com/juju/testing/issues/1
<axw> I thought gc randomised map range
<axw> davecheney: does it not?
<davecheney> axw: it does
<davecheney> but technically that is an implementation detail
<axw> I guess not random enough
<davecheney> right
<axw> yeah I know
<davecheney> but depending on something that is random sounds
<davecheney> 1. hard
<davecheney> 2. wrong
<axw> yeah I just wondered why it wasn't failing before
<davecheney> right now, small maps, less than 64 bytes are not that random
<davecheney> 1.3 will make them more random
<axw> okey dokey
<davecheney> i lost that argument
<davecheney> nobody should depend on map ordering
<davecheney> thumper: some good news
<davecheney> the go http client _does_ obey no_proxy
<davecheney> ubuntu@winton-02:~/src$ go run noproxy.go
<davecheney> &{403 Forbidden 403 HTTP/1.0 1 0 map[Connection:[close] X-Squid-Error:[ERR_ACCESS_DENIED 0] X-Cache-Lookup:[NONE from batuan.canonical.com:3128] X-Cache:[MISS from batuan.canonical.com] Date:[Fri, 14 Mar 2014 03:23:05 GMT] Content-Length:[1173] Content-Type:[text/html] Server:[squid/2.7.STABLE7] Via:[1.0 batuan.canonical.com:3128 (squid/2.7.STABLE7)]] 0xc210427fc0 1173 [] true map[] 0xc2104331a0} <nil>
<davecheney> ubuntu@winton-02:~/src$ env no_proxy="google.com" !!
<davecheney> env no_proxy="google.com" go run noproxy.go
<davecheney> ^Cexit status 2
<davecheney> ubuntu@winton-02:~/src$ go run noproxy.go
<davecheney> &{403 Forbidden 403 HTTP/1.0 1 0 map[Connection:[close] X-Squid-Error:[ERR_ACCESS_DENIED 0] X-Cache-Lookup:[NONE from batuan.canonical.com:3128] X-Cache:[MISS from batuan.canonical.com] Date:[Fri, 14 Mar 2014 03:25:57 GMT] Content-Length:[1169] Content-Type:[text/html] Server:[squid/2.7.STABLE7] Via:[1.0 batuan.canonical.com:3128 (squid/2.7.STABLE7)]] 0xc210427fc0 1169 [] true map[] 0xc2104331a0} <nil>
<davecheney> ubuntu@winton-02:~/src$ env no_proxy="10.0.3.1" go run noproxy.go
<davecheney> <nil> Get http://10.0.3.1:80/: dial tcp 10.0.3.1:80: connection refused
<davecheney> and it looks like it works for ip addrsses
<wallyworld> thumper: guess what? i updated my packages and now unity crashes, i have no mouse, and wireless networking is broken \o/
<davecheney> so i think it might be custom http transports inside jju that are the problem
<davecheney> wallyworld: did you expect something else ?
<wallyworld> well yes :-(
<wallyworld> we are pretty close to release
<wallyworld> i'm not sure what to do except reinstall beta 1
<wallyworld> cause not having a mouse kinda sucks
<davecheney> wallyworld: fuck, are we at beta 1
<davecheney> that is terrible
<wallyworld> and, well, networking is important too
<davecheney> i can still hear you
<davecheney> can't be that bad
<wallyworld> i'm on wired
<wallyworld> so i'm tied to a corner next to my router
<wallyworld> not very comfortable
<axw> wow, I think I might not update for a bit
<axw> seeing as I have no rj45 and no adapter
<wallyworld> it could just be my hardware or something
<davecheney> wallyworld: what wifi ?
<davecheney> intel or atheros ?
<wallyworld> hmmm. can't recall. intel maybe
<davecheney> erk
<thumper> wallyworld: awesome...
<wallyworld> yeah, tell me about it
<thumper> wallyworld: I'd say wait a bit, do another apt-get update/dist-upgrade
<thumper> and reboot
<thumper> that's what fixed it for me
<thumper> just package skew as I was downloading
<wallyworld> tried that, may have to wait a bit more
<thumper> kk
<thumper> davecheney: https://github.com/juju/testing/pull/2
<axw> thumper: OT question, have you flown UA? are they ok?
<thumper> axw: not that I recall
<thumper> may have for internal flights
<thumper> axw: is this a long flight?
<thumper> I have heard they aren't that good for long haul
<axw> thumper: yeah. it's $500 more for QF :/
<davecheney> axw: DO NOT FLY UA
<davecheney> remember how when you were 5 and your parents took you on your first trip
<davecheney> UA ARE STILL FLYING THAT VERY PLANE
<axw> hehe
<thumper> davecheney: how do you run go on ppc
<thumper> I installed gccgo-4.9
<thumper> but that didn't install a go binary
<thumper> davecheney: are you in the github.com/juju team?
<davecheney> https://docs.google.com/a/canonical.com/document/d/1m9R2n6LPLNLGjdopcNkQYVG8D5V4FTyvc1vvn-9ZifM/edit
<davecheney> $ sudo apt-get install gccgo-4.9 gccgo-go
<davecheney> don't forget the second package
<davecheney> thumper: NFI
<thumper> davecheney: can you comment on the pull request above?
<davecheney> yeah, looks like I can
<thumper> oh yeah, "ssh rock<tab>"
<davecheney> WHOOT
<thumper> davecheney: I don't see a comment from you...
<davecheney> got a little bit closer to deploying ubuntu charm on ppc
<davecheney> thumper: i didn't write anything
 * thumper wants a LGTM
<davecheney> oh, lemmie test it
<davecheney> http://paste.ubuntu.com/7088244/
<thumper> davecheney: tested with cgo and gccgo locally (on amd64)
<davecheney> inching closer
<thumper> davecheney: ah, no aufs on ppc?
<thumper> davecheney: export JUJU_TESTING_LXC_FORCE_SLOW=true
<davecheney> 14:58 < davecheney> a send to a nil channel blocks forever
<davecheney> 14:58 < davecheney> a send to a closed channel panics
<davecheney> 14:59 < davecheney> a receive from a nil channel blocks forever
<davecheney> 14:59 < davecheney> a receive from a closed channel returns the zero value immediately
<davecheney> oops
<davecheney> thumper: the trick is i need to export no_proxy=... during bootstrap
<davecheney> so those values are passed to the bootstrap agent
<davecheney> it took a while to figure out who was making the http call
<davecheney> also, there will be no precise image for ppc64
<davecheney> ubuntu@winton-02:~/src/launchpad.net/juju-core$ juju deploy cs:trusty/ubuntu
<davecheney> ERROR charm not found: cs:trusty/ubuntu
<davecheney> fuck me
<thumper> so make one dear liza
<davecheney> the whole trick is I cant/shouldnt use a local charm repo
<thumper> davecheney: flick the aufs lxc clone bug at hallyn
<davecheney> thumper: http://paste.ubuntu.com/7088260/
<davecheney> setting that setting doesn't appear to set anything
<thumper> davecheney: if we need to disable the aufs clone for ppc, we can make it a real setting maybe
<thumper> davecheney: you need it all the time
 * davecheney strokes non existant beard
 * thumper thinks
<thumper> hmm...
<davecheney> lemmie ask hallyn
<thumper> poo
<thumper> davecheney: I'm not propagating that env var into the upstart job
<thumper> davecheney: to really test it, we'll need it stored as a setting
<davecheney> thumper: which package is it in ?
<davecheney> i'll hack the codeze
<thumper> if you want to hack the codeze, just edit your upstart script
<thumper> for jujud
<davecheney> ok
<davecheney> imma going to raise a bug
<davecheney> that we can throw at ahllyn
<thumper> ah...
<thumper> let me think
<thumper> davecheney: what shall we call the local provider setting
<thumper> lxc-no-clone ?
<thumper> or lxc-clone
<thumper> default to true
<thumper> allow false
<thumper> actually, default to true IFF trusty
<thumper> davecheney: are you testing that github.com/juju/testing branch?
<davecheney> thumper: not right now
<thumper> kk
<thumper> davecheney: I'm fetching it...
<thumper> just to confirm on ppc, I know it passes with gccgo locally
<thumper> davecheney: http://paste.ubuntu.com/7088314/
<davecheney> lgtm
<davecheney> i was getting there
<thumper> that's ok
<thumper> merged
 * thumper updates juju-core dep
<davecheney> thumper: confirmed working on ppc
<davecheney> thanks
<thumper> np
<thumper> that's what I'm here for :-)
<davecheney> thumper: https://bugs.launchpad.net/juju-core/+bug/1292346
<_mup_> Bug #1292346: juju deploy fails on ppc64el because aufs is not available <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1292346>
<davecheney> fails on both precise and trusty
<davecheney> deploy 'said images
<thumper> ok
<thumper> did you hack up the upstart script so it didn't try to use clone?
<thumper> to test normal lxc?
<davecheney> let me do some hacking
<davecheney> i was distracted twisting marcoceppi 's arm to get a trusty ubuntu charm in the store
<thumper> axw or davecheney: https://code.launchpad.net/~thumper/juju-core/update-testing/+merge/210960
<axw> thumper: done
<thumper> ta
<marcoceppi> thumper: I'm about to compile juju to test the local provider, anything I should know before going in to it?
<thumper> marcoceppi: depends
<thumper> what filesystem do you have at /var/lib/lxc
<marcoceppi> ext4
<thumper> and I'm assuming you are on trusty
<marcoceppi> saucy
<thumper> ha
<thumper> won't work
<thumper> trusty only baby
<marcoceppi> but I can be on trusty pretty quickly
<thumper> marcoceppi: ok, what it will do is the first time you deploy a charm for a series, it will create a template container 'juju-<series>-template" like juju-precise-template
<thumper> don't touch it
<davecheney> thumper: marcoceppi http://paste.ubuntu.com/7088331/
<thumper> first time it starts it, and waits for cloud init to finish and shut it down
<davecheney> oooh
<davecheney> ohh
<davecheney> maybe
<thumper> then it just clones to create machines
<thumper> using aufs
<davecheney>   "1":
<davecheney>     agent-state: pending
<davecheney>     instance-id: ubuntu-local-machine-1
<davecheney>     series: trusty
<davecheney>     hardware: arch=ppc64
<axw> ooh :)
<thumper> davecheney: ok, it is trying :)
<davecheney> http://paste.ubuntu.com/7088333/
<davecheney> getting very close
<davecheney> thumper: do the agent states move to started in lxc ?
<thumper> yes
<davecheney> ok, so it isn't working then ...
<thumper> when they are running and talking to the api server
<thumper> it will take some time
<davecheney> nope, no agents, http://paste.ubuntu.com/7088341/
<thumper> you can watch progress by looking in /var/lib/juju/containers/ubuntu-local-machine-1
<thumper> and look at the console.log
<davecheney> OH NO
<davecheney> i know what it is
<thumper> davecheney: cloud-init may not have finished
<davecheney> it's finished
<thumper> wat?
<davecheney> no running processes
<davecheney> gccgo binaries need a library installed
<axw> take a look in /var/log/cloud-init-output.log
<axw> ugh
<davecheney> libgo5.so
<thumper> ugh
<thumper> davecheney: hmm...
<axw> environs/cloudinit/cloudinit.go is gonna need another AddPackage
<thumper> davecheney: checking a few tests, juju-core/container passes on amd64 with both cgo and gccgo
<thumper> fails on ppc ggcgo
<davecheney> let me get somet details
<davecheney> is there a simple way to go into an lxc console ?
<davecheney> lxc-enter or something ?
<thumper> yes...
<thumper> ssh
<axw> davecheney: you should be able to ssh into it
<davecheney> oh
<davecheney> too easy
<thumper> :)
<thumper> ssh into it,
<thumper> install package
<thumper> reboot container
<davecheney> $ juju ssh ubuntu/0
<davecheney> ERROR unit "ubuntu/0" has no public address
<davecheney> bzzzzzzzzzzzzt
<axw> try the machine
<thumper> lxc-ls --fancy
<davecheney> yeah, that works
<davecheney> actually no
<davecheney> $ juju ssh 1
<davecheney> ERROR machine "1" has no public address
<thumper> no unit agent to set it
<davecheney> yup
<thumper> lxc-ls --fancy
<davecheney> catch 22
<thumper> get the ip address
<thumper> ssh ubuntu@10.0.3.x
<davecheney> lxc-ls --fancy
<davecheney> NAME  STATE  IPV4  IPV6  AUTOSTART
<davecheney> ----------------------------------
<davecheney> zip
<thumper> ah...
<thumper> wat
<thumper> sounds like lxc-create failed
<davecheney> thumper: what is your ssh key ?
<thumper> on lp, elwood
<axw> try as sudo
<axw> davecheney: ^^
<thumper> axw: yeah...
<thumper> otherwise it is just the user space lxc
<thumper> sorry didn't specify
<davecheney> ahh, that works
<davecheney> FUK<
<davecheney> key errors now
<davecheney> screw this
<davecheney> i'll just chroot
<thumper> key errors?
<thumper> really?
<thumper> that should have worked
<thumper> ah...
<thumper> if you don't have a local ssh there
<thumper> it will be using the one it generated
<thumper> axw: how does he specify?
<thumper> -i
<thumper> ?
<axw> davecheney: use "-i ~/.juju/ssh/juju_id_rsa"
<axw> I think that's what it's called
<davecheney> axw: ahh yeah
<axw> yep
<davecheney> man, all this stuff juju ssh does for you
<davecheney> made me soft
<axw> maybe we should allow "juju ssh" to take an IP for this scenario
<thumper> yeah...
<axw> :q
<axw> oops
<davecheney> $ /var/lib/juju/tools/machine-1/jujud
<davecheney> /var/lib/juju/tools/machine-1/jujud: error while loading shared libraries: libgo.so.5: cannot open shared object file: No such file or directory
 * thumper is waiting...
<davecheney> got it
<davecheney> yup
<davecheney> that was what I thought
<davecheney> axw: maybe
<thumper> fuck yeah!
<davecheney> the local provider is a bit of a special case
<thumper> closer...
<thumper> axw: so... how do we conditionally install the libgo package?
<axw> thumper: probably just test the arch MachineConfig.Tools
<thumper> hmm...
<axw> thumper: otherwise I guess we'd need to encode the toolchain in the version string
<axw> which isn't really practical
<thumper> no...
<thumper> unless...
<thumper> what about making a meta-package
<thumper> called 'juju-deps'
<thumper> then on ppc that includes libgo
<thumper> and on others it doesn't
<axw> I suppose so
<thumper> we could then just install the meta-package
<axw> also arm64
<thumper> and not worry about specifics
<thumper> right
<thumper> possible at least
<thumper> davecheney: what do you think?
<thumper> worth raising with jamespage?
<davecheney> thumper: it's sort of worse than that
<davecheney> there is no libgo package
<davecheney> libgo.so.5 comes from gccgo itself, the compiler package
<davecheney> which pulls in a tonne more
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1292353
<thumper> ouch
<_mup_> Bug #1292353: juju must install the libgo.so.5 library when required <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1292353>
<davecheney> probably need to include more project here
<axw> davecheney: should we just distribute it with jujud?
<thumper> axw: we'd need to update the LD_LIBRARY_PATH too right?
<davecheney> axw: IMO that will make a bad problem worse
<axw> yeah
<davecheney> like thumper says
<davecheney> we'd get into the LD_LIBRARY_PATH business
<axw> or rpath shenanigans, but that's not any better
<davecheney> you can do elfedit fuckery
<thumper> davecheney: we have to work out something...
<thumper> nah don't edit things
<thumper> that's terrible
<davecheney> lemmie raise a bug on gccgo
<davecheney> probably not a big job to get it split out into a runtime pkg
<davecheney> thumper
<thumper> we need libgo.so in a different package
<davecheney> if I do apt-get install X y z
<davecheney> and z doesn't exist
<davecheney> what does apt do ?
<thumper> I think it fails
<thumper> but not entirely sure
<davecheney> thumper: you can see where I'm going with this ?
<thumper> no
<davecheney> axw: the good news is we'll get a libgccgo package
<davecheney> which will be a meta for libgccgo-4.9
<axw> cool
<davecheney> just like gccgo suggests gccgo-4.9
<davecheney> so we don't need to get too worked up on versions
<thumper> aye
<davecheney> thumper: I was thinking of always installing libgccgo [sic]
<davecheney> and ignoring any error
<davecheney> if it's hard to sniff the target series/arch
<axw> well it does mean we've got a new binary compatibility issue to deal with
<davecheney> axw: many, many
<davecheney> axw: i did raise this with many people 6 months ago
<davecheney> we're backing into a massive testing matrix
<axw> davecheney: it's not hard to tell the target arch
<axw> we can get that when we generate the cloud-init script
 * thumper is EOWing
<thumper> later peeps
<davecheney> https://bugs.launchpad.net/ubuntu/+source/gccgo-4.9/+bug/1292355
<_mup_> Bug #1292355: gccgo needs a runtime package <ppc64el> <gccgo-4.9 (Ubuntu):New> <https://launchpad.net/bugs/1292355>
<davecheney> o/ TheMue
<davecheney> thumper
<marcoceppi> welp, compiled core from trunk, but got a weird error
<marcoceppi> .go/src/launchpad.net/juju-core/provider/azure/environ.go:539: cannot use []gwacl.ConfigurationSet literal (type []gwacl.ConfigurationSet) as type *gwacl.OSVirtualHardDisk in function argument
<marcoceppi> so, no testing local provider for me, you guys are on your own
<axw> marcoceppi: you need to update gwacl
 * axw gets the rev
<axw> marcoceppi: bzr update -r 231
<axw> (in launchpad.net/gwacl)
<marcoceppi> OH haha
<marcoceppi> I did that in juju-core
<marcoceppi> it was not happy
<axw> that would be quite old :)
<marcoceppi> haha, yeah
<davecheney> such wrong channel
<davecheney> if this works, i'm going to the pub
<davecheney> if this doesn't work, i'll probably go to the pub
<davecheney> win/win
<davecheney> WORKED!
<axw> winner :)
<davecheney> http://paste.ubuntu.com/7088436/
<davecheney> marcoceppi: http://paste.ubuntu.com/7088436/
<marcoceppi> we have trusty!
<davecheney> marcoceppi: a whole one charm
<davecheney> i told you we didn't need to test it
<davecheney> that charm is a charm
<davecheney> 100% success rate
<marcoceppi> that charm sure is a charm
<marcoceppi> just like all the other charms
<davecheney> it's a perl
<marcoceppi> but it's defintely more charmy
<marcoceppi> yay, it still didn't compile
<marcoceppi> bunch of signal killed errors
<marcoceppi> oh well, I'll just wait for 1.17.5
<davecheney> compilation is for losers
<davecheney> winners write in binary
<marcoceppi> I use magnets and write directly to tapes
<davecheney> i use a hammer and a punch and write them directly onto roms
 * marcoceppi drops the AIX and walks away
 * davecheney bows to marcoceppi 
<rogpeppe> mornin' all
<wallyworld_> axw: i ended up having to reformat / since first install complained packages were corrupt :-( now i need to set up stuff again. sigh
<axw> oh shit :(
<wallyworld_> at least i have a clean install now :-)
<axw> that is true. I did consider doing it myself, but cbf at the time
<wallyworld_> and i backed up stuff first from command line
<wallyworld_> so no data loss, jut time
<axw> bit delayed, but morning rogpeppe
<rogpeppe> axw: hiya
<rogpeppe> axw: i've been meaning to create a bug for this, but i should mention it to you: when bootstrap in the local provider fails, the error is often not very informative these days - it's just "rc: 1"
<axw> rogpeppe: #1279710
<_mup_> Bug #1279710: failed bootstrap discards stderr <bootstrap> <juju-core:Triaged> <https://launchpad.net/bugs/1279710>
<axw> that's because local provider now uses the script stuff, like everything else
<rogpeppe> axw: ah, good - you're aware of it
<axw> I am, just don't have a good solution yet
<axw> I intend to fix it before 2.0 though for sure
<rogpeppe> axw: yeah, i didn't see any solution that jumped out at me, though i didn't look for that long
<rogpeppe> axw: cool - it made things quite hard to debug at times
<axw> rogpeppe: what caused local bootstrap to fail anyway?
<axw> or you don't know? :)
 * rogpeppe tries to remember
<rogpeppe> axw: sorry, can't remember - some bug in the branch that was being tested, ISTR
<axw> rogpeppe: no worries
 * rogpeppe is an idiot
<rogpeppe> vladk: good morning!
<vladk> rogpeppe: good morning!
 * rogpeppe struggles to work out how to submit a git pull request
<vladk> rogpeppe: I had questionable experience with github pull request
<rogpeppe> vladk: :-)
<rogpeppe> vladk: so many concepts
<vladk> you may PR the whole branch, but it will be merged completely even if you add something after the PR
<vladk> if you create a PR between two commits, it may produce an unpredictable merge
<vladk> and the head of the PR should always be on the master branch
<rogpeppe> vladk: i've been trying to do something like this: http://paste.ubuntu.com/7089049/
<vladk> I would not use MERGE button on PR and do manual commits
<rogpeppe> vladk: but when i go to create the pull request on github, it says there's no difference between my branch and master
<vladk> you should push before
<rogpeppe> vladk: before pushing the changes?
<vladk> rogpeppe: you create PR on github, so you should push your commits to github before
<rogpeppe> vladk: before creating the pull request?
<rogpeppe> vladk: i did do that, i think
<vladk> rogpeppe: may I look your repo on github?
<rogpeppe> vladk: (i haven't been able to create a pull request yet)
<rogpeppe> vladk: sure
<rogpeppe> vladk: this is what i'm looking at currently: https://github.com/go-yaml/yaml/compare/master...rename-imports
<rogpeppe> vladk: the  branch i want to merge is rename-imports
<rogpeppe> vladk: and i want to create a pull request into github.com/go-yaml/yaml
<vladk> rogpeppe: I don't see any changes in rename-imports
<rogpeppe> vladk: ah!
<vladk> try 'git push origin' w/o branch name
<rogpeppe> vladk: i think my "git branch foo" created a new branch but didn't switch to it
<rogpeppe> vladk: hmm, but i still committed something, so i'd have expected that to push
<rogpeppe> vladk: i'm worried that if i do that, then i'll push directly to the master
<vladk> rogpeppe: it's quite easy to delete recent commits from github, so try, don't worry
<rogpeppe> vladk: after reading the push docs, i *think* i see what's going on now
<vladk> rogpeppe: you should 'git checkout' after 'git branch'
<vladk> rogpeppe: or 'git checkout -b my-branch-name'
<rogpeppe> vladk: yes, that's the conclusion i've come to
<rogpeppe> vladk: and i should do git push origin :remote-branch-name
<rogpeppe> vladk: (i think)
<rogpeppe> vladk: i was missing the colon
<rogpeppe> vladk: which meant that i was naming a local branch
<vladk> rogpeppe: probably 'git push origin :remote-branch-name' pushes changes to one specific branch, while 'git push origin' pushes commits for all branches, but I always used 'git push origin'
<rogpeppe> vladk: i eventually succeeded with "git push --set-upstream origin rename-imports"
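The thread above condenses into a short, self-contained sketch. Two points worth pinning down from the discussion: `git branch foo` creates a branch without switching to it (which is why the commit landed on master and the compare view showed no changes), and a *leading* colon in a push refspec (`git push origin :branch`) actually deletes the remote branch, so `--set-upstream` was the right ending. The repo paths, file, and branch name below are illustrative:

```shell
# Minimal reproduction of the workflow that eventually worked; a local bare
# repo stands in for the github remote.
set -e
dir=$(mktemp -d)
git init -q --bare "$dir/origin.git"             # stand-in for the github remote
git init -q "$dir/repo" && cd "$dir/repo"
git config user.email you@example.com && git config user.name you
git remote add origin "$dir/origin.git"
git commit -q --allow-empty -m "initial commit"
git checkout -b rename-imports -q                # create AND switch in one step
echo change > file.go && git add file.go
git commit -q -m "rename imports"
git push -q --set-upstream origin rename-imports # publish the branch for the PR
git rev-parse --abbrev-ref HEAD                  # -> rename-imports
```

After this, the branch exists on the remote and a pull request can compare it against master.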
 * fwereade said yesterday that he thought he'd escaped the flu... yeah, not so much
 * fwereade will try to be around off and on but his brain isn't really working very well
<TheMue> fwereade: today is the first day I get better after fighting for three days with the flu too
<axw> fwereade: hey, I have availability sets working in my sandbox :)
<fwereade> axw, awesome news!
<axw> add-machine and placement both disabled
<axw> with a mode to make it like the other providers
<axw> just need to do the SSH tunnelling business
<fwereade> axw, and auto-load-balancing for everything in azure mode, old-style one-service-per-machine in normal mode?
<fwereade> axw, cool
<axw> fwereade: yup
<fwereade> axw, fantastic
<fwereade> axw, how did the environ interfaces end up? because the fact that other providers don't zone-balance is now a bug, and we should get some time scheduled to bring them up to speed :)
<axw> fwereade: I sent a couple of CLs to you to take a look at, to see if the implementation isn't horrible
<axw> heh
<axw> fwereade: so the way I've done it at the moment is I tell StartInstance what the principal units are - that's all Azure needs. For ec2/etc., we'll need an additional callback to get a mapping of units->instance.Id for the services of the principals
<axw> then it can spread units by hand
<fwereade> axw, I don't quite follow, but I'm not sure my input filters are functioning correctly -- I'll look at the CLs
<axw> no worries, it can wait till you're feeling better anyway
<axw> not going to land this until 1.18 is out
<rogpeppe> large but totally trivial review anyone? https://codereview.appspot.com/75970043/
<axw> looking
<axw> rogpeppe: why the gocheck rev bump?
<rogpeppe> axw: because i couldn't be bothered to make a new CL for it
<axw> fair enough :)
<axw> sorry didn't see the comment in the description
<rogpeppe> axw: it fixes a bug in gocheck
<axw> just making sure it was intentional
<rogpeppe> axw: np
<axw> rogpeppe: done
<rogpeppe> axw: ta
<axw> it would be nice if we could review gofix modules, rather than the output
 * axw puts it on the end of a too long list of things to do one day
<rogpeppe> axw: i didn't actually use a gofix module per se
<rogpeppe> axw: i used two tools and did one manual change
<rogpeppe> mgz_: does the bot automatically fetch dependencies now?
<axw> rogpeppe: sure, but you get what I mean - review a small script (or two), rather than a load of files with the same change in each
<rogpeppe> axw: yeah, i do get what you mean, but i'm not convinced that one would actually want to trust that the script works without actually looking at the changes it made
<axw> yeah, some level of trust would be necessary
<rogpeppe> axw: when reviewing a change that's made entirely automatically, i'll hope that the command used is in the CL description, and i'll look at a sample of the output and take the rest on trust
<rogpeppe> axw: that's how i remember similar changes being reviewed in the go source
<rogpeppe> axw: for example when we moved from os.Error to error
<axw> yeah ok, same outcome and level of trust
<axw> and no need for sandboxing and all that
<rogpeppe> axw: i could have included the sam command that i used to make the edit, but noone apart from me understands structural regexps :-)
 * axw is intrigued
<axw> and yet, the dishes won't wash themselves
<rogpeppe> axw: http://plan9.bell-labs.com/magic/man2html/1/sam
<axw> thanks, I'll take a look a bit later
<rogpeppe> axw: you need a dishwasher mate :-)
<adeuring> could somebody please have a look here: https://codereview.appspot.com/75980043 ?
<axw> yeah :(
<axw> I was always told I am the dish washer
<rogpeppe> adeuring: looking
<voidspace> So I got Ubuntu installed on my machine last night. And now I have horrible, horrible problems on both machines. :-)
<rogpeppe> adeuring: LGTM, but i'm surprised that the old code "escaped" a double-underscore to a single one.
<voidspace> My Mac is dying in sympathy
<voidspace> So trying to get *something working
<rogpeppe> voidspace: good luck!
<rogpeppe> adeuring: as it looks as if all underscores were deleted
<rogpeppe> adeuring: which leaves me wondering if there's something we're missing here
<adeuring> rogpeppe: I do not understand this as well... I'll ask thedac (who reported the bug) for more details.
<adeuring> I tried to use the double underscores -- no luck at all
<rogpeppe> adeuring: with the old version?
<rogpeppe> adeuring: maybe that's just spurious
<adeuring> rogpeppe: I tried on saucy with the recent source code, and on precise with the deb package from the stable PPA
<rogpeppe> adeuring: and you didn't see the "escaping" behaviour?
<adeuring> rogpeppe: no -- double underscores were stripped the same way as single underscores
<rogpeppe> adeuring: right, that's what i'd expect
<rogpeppe> adeuring: BTW we're just moving to using gopkg.in/v1/yaml, so you'll need to send a pull request there
<adeuring> rogpeppe: ah, ok...
<rogpeppe> adeuring: the actual repo is at github.com/go-yaml/yaml
<axw> rogpeppe: when you have a moment please, https://codereview.appspot.com/75990043/
<rogpeppe> axw: will do
<natefinch> fwereade: standup
<mgz_> dammit internet
<mgz_> gah!
<voidspace> rogpeppe: I'm going to try and get things working here and I'll ping you when I'm ready to do anything actually useful
<voidspace> rogpeppe: if that's ok
<wwitzel3> brb (hopefully), restart
<rogpeppe> voidspace: sgtm
<voidspace> rogpeppe: and if you could get the ratelimiter installed for the bot so I can land my branch that would be awesome
<rogpeppe> voidspace: just doing that
<voidspace> *great*
<natefinch> wwitzel3: what hours do you normally work?  I tend to get little done between 7 & 9am because I have my younger daughter sleeping either in my arms or (if I'm lucky) next to me.
<wwitzel3> natefinch: I usually try to get started at about 5:45 so I can go through email and bootstrap my memory of what I did yesterday before standup.
<natefinch> wwitzel3: I usually do that 10 minutes before the standup ;)
<wwitzel3> natefinch: and then I "stop" around 3 .. but all that means is I stop feeling bad about walking away .. I never actually get off my computer :P
<natefinch> wwitzel3: haha
<natefinch> that's cool
<wwitzel3> natefinch: or it means I'm hear, but I'll just pretend like I don't see someone pinging me :D
<wwitzel3> here
<wwitzel3> mixed thoughts, was going to say pretend I don't hear them, then switched ping, and then grammatical error .. dumb
<natefinch> wwitzel3: ok.  I'll try to free up by 9 this morning so we can pair.
<natefinch> wwitzel3: haha I've done that
<wwitzel3> natefinch: sure, I have plenty to keep me busy .. there are some code reviews I'm going to peek at and i have some HR stuff I've been neglecting
<natefinch> wwitzel3: cool, if you ever find yourself with nothing to do, take a look at the bugs
<wwitzel3> natefinch: also after your praise of how smooth your upgrade went, I'm running that process now too
<natefinch> wwitzel3: good luck :)
<voidspace> ooh, I might have unity back
<rogpeppe> axw: reviewed
<rogpeppe> mgz_: how do you get your juju client talking to the gobot juju server? presumably you use chinstrap somehow.
<mgz_> rogpeppe: yeah, you just want some ssh config like
<rogpeppe> mgz_: how does that help, given that juju doesn't use ssh to talk to the server?
<mgz_> well, you probably know that bit
<mgz_> then use sshuttle through either the bootstrap node or chinstrap
 * rogpeppe reads sshuttle(8)
<wwitzel3> natefinch: well, it seemed to go well (the upgrade)
<natefinch> wwitzel3: sweet.   lots of good stuff in Trusty
<mgz_> eg: `sshuttle -vr rogpeppe@chinstrap.canonical.com 10.55.61.0/16 >~/sshuttle.log 2>&1 &`
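mgz_'s cut-off "some ssh config like" presumably refers to an `~/.ssh/config` entry that the sshuttle command can then piggyback on. A hypothetical sketch (the alias is illustrative; host, user, and subnet are the ones that appear in the log):

```
# ~/.ssh/config (illustrative)
Host chinstrap
    HostName chinstrap.canonical.com
    User rog

# then tunnel the environment's subnet through it:
#   sshuttle -vr chinstrap 10.55.61.0/16 >~/sshuttle.log 2>&1 &
```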
<wwitzel3> natefinch: it also just seems a bit more performant .. and my fan isn't kicking on as much as I do things
<rogpeppe> mgz_: thanks. i would have taken a while to come up with that
<natefinch> wwitzel3: yeah, I'd heard some people had fan troubles in Saucy. I still get a fan going in hangouts sometimes, but I think that's just hangouts
<rogpeppe> mgz_: ha, that doesn't work because it tries to run sudo
<mgz_> wat
 * rogpeppe doesn't background it
<rogpeppe> mgz_: ah, i'm rog@chinstrap
<mgz_> that'll be it
<rogpeppe> mgz_: success, thanks
<rogpeppe> mgz_: why does it do the bzr binds *after* running godeps -u ?
<mgz_> sound't really matter?
<mgz_> *shouldn't
<mgz_> wait, it does
<rogpeppe> mgz_: don't the binds change the code?
<mgz_> but that's the right way around
 * rogpeppe never used bzr bind
<mgz_> bind makes that branch a checkout of a remote branch
<rogpeppe> mgz_: so it doesn't actually change any files in the branch?
<mgz_> if you did that, then updated *that* bound branch, you'd change the rev the remote branch was on
<mgz_> we don't actually do this
<mgz_> because godeps doesn't change juju-core, which is what we bind
<mgz_> but would be a bit of a bear trap
<rogpeppe> mgz_: what are the binds doing?
<mgz_> the bind is basically so the bot doesn't need to push after committing a successful merge, because that's what tarmac expects
<mgz_> it means actions on the local branch are reflected on the remote branch
<rogpeppe> mgz_: i guess i don't understand why we want to perform actions on, goose, say
<mgz_> the bot also controls goose landings
<mgz_> but yes, this is now very dodgy
<rogpeppe> mgz_: ah
<mgz_> I'll need to split out goose to a separate tree set
<rogpeppe> and gwacl, presumably
<mgz_> gwacl already is
<rogpeppe> so no actual need to bind gwacl?
<mgz_> no bind is fine, but it's a bind under ~tarmac/gwacl-trees
<rogpeppe> ah, it's in a separate tree, yes
<mgz_> which is not a shared location with our juju-core ~tarmac/trees location, unlike goose
<sinzui> does anyone know the state of bug 1291400? Is there a branch in review?
<_mup_> Bug #1291400: migrate 1.16 agent config to 1.18 properly (DataDir, Jobs, LogDir) <regression> <upgrade-juju> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1291400>
<axw> rogpeppe: thanks, will pick it back up on monday
<adeuring> rogpeppe: a github MP: https://github.com/go-yaml/yaml/pull/3
<rogpeppe> adeuring: LGTM
<sinzui> rogpeppe, mgz_ jam, natefinch  : I want to release 1.17.5. There are two bugs not fixed. Is it *bad* if we land those fixes for 1.18.0?
<rogpeppe> sinzui: which are the bugs?
<sinzui> rogpeppe, bug 1271144 and bug 1291400
<_mup_> Bug #1271144: br0 not brought up by cloud-init script with MAAS provider <cloud-installer> <landscape> <local-provider> <lxc> <maas> <regression> <juju-core:In Progress by rogpeppe> <https://launchpad.net/bugs/1271144>
<_mup_> Bug #1291400: migrate 1.16 agent config to 1.18 properly (DataDir, Jobs, LogDir) <regression> <upgrade-juju> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1291400>
<rogpeppe> sinzui: i'm just about to land the fix to the first one
<sinzui> I will wait for that, thanks rogpeppe
<sinzui> I may need to beat the azure tests. the cloud has been misbehaving and failing the revision tests
<adeuring> thedac: while I would claim that I have a fix for bug 1243827, from the bugfix it is absolutely unclear how using double underscores could avoid the bug. can you tell me which versions of juju-core and juju-deployer you are using?
<_mup_> Bug #1243827: juju is stripping underscore from options <canonical-webops> <config> <goyaml:Fix Released by adeuring> <juju-core:Invalid by adeuring> <juju-deployer:Invalid by hazmat> <https://launchpad.net/bugs/1243827>
<rogpeppe> mgz_: another thing i don't quite understand from that tarmac script: where is the actual branch to be tested downloaded?
<rogpeppe> mgz_: does it happen before the config-changed-script is run?
<mgz_> it's merged into the branch in trees
<mgz_> it's independent, in a cron job that uses tarmac, and the tarmac config (a separate config item) manages it
<rogpeppe> mgz_: so we don't run godeps -u on every branch?
<rogpeppe> mgz_: how are we supposed to add a new dependency then?
<mgz_> no, because that's actually insufficient, we'd also need to go get
<rogpeppe> mgz_: (given that go get launchpad.net/juju-core won't get the new dependency)
<rogpeppe> mgz_: yeah
<mgz_> rogpeppe: modify the config to pull that specific dep in advance, trigger the config-changed hook, then the next cron run will have what it needs
<rogpeppe> mgz_: so add an explicit go get to the config?
<mgz_> ie, add a `bzr branch lp:new-dep trees/launchpad.net/new-dep` line or similar
<rogpeppe> mgz_: then remove it after the relevant branch has merged?
<rogpeppe> mgz_: yeah, same difference
<mgz_> yup
 * rogpeppe tries not to be distracted by adding pull support to godeps
<mgz_> sinzui: do you need any more help for the release?
<mgz_> I think we'll have to kick dimiter's bug from 1.17.5 for now
<sinzui> mgz_, I am fine. Azure is not, but since we know azure is not fine, I won't let that damn rogpeppe's branch when it lands
<rogpeppe> sinzui: it's approved and should land in the next 16 years
<rogpeppe> sinzui: ^h^h^h^h^h^hminutes
<sinzui> fab. CI is idle and ready to take it
<mgz_> rogpeppe: text conflict
<rogpeppe> bollocks
<rogpeppe> reapproved
<natefinch> wwitzel3: sorry for the delay, you want to do some pairing?
 * Makyo is back (gone 13:44:06)
<natefinch> jb.go
<natefinch> gah, stupid focus follows mouse
<wwitzel3_> natefinch: hey
<mattyw> not exactly a juju-core question - but is there a way to use the testing/mgo.go stuff to setup a mongo database with a password?
<rogpeppe> mattyw: using juju-core/state?
<rogpeppe> mattyw: or independent of juju?
<mattyw> rogpeppe, totally independent of juju
<rogpeppe> mattyw: have a look at SetAdminMongoPassword in the state package
<rogpeppe> i just grabbed github.com/go-juju BTW, as a placeholder in case we decide to use gopkg.in at some point in the future
<rogpeppe> sinzui: that branch has merged
<sinzui> rogpeppe, CI has already built it, is running unit tests and publishing the test tools for the next round of tests. I think I am 1h from starting the release
<rogpeppe> sinzui: brilliant
<natefinch> rogpeppe, mgz_: any ideas as to why juju would keep telling me it's already bootstrapped amazon when there's definitely no jenv around?
<rogpeppe> natefinch: what's the output of bootstrap --debug ?
<natefinch> 2014-03-14 14:40:52 DEBUG juju.environs.configstore disk.go:64 Making /home/nate/.juju/environments
<natefinch> 2014-03-14 14:40:52 INFO juju.provider.ec2 ec2.go:207 opening environment "amazon"
<natefinch> 2014-03-14 14:40:53 ERROR juju.cmd supercommand.go:296 environment is already bootstrapped
<natefinch> not terribly useful
<rogpeppe> natefinch: what does your amazon environment entry look like in environments.yaml
<rogpeppe> ?
<natefinch>   amazon:
<natefinch>     type: ec2
<natefinch>     admin-secret: 36c59e8a199197b4b3c0950ab844dec4
<natefinch>     # globally unique S3 bucket name
<natefinch>     control-bucket: juju-c2feecf0b1cd9ef0d4ea241dd90e10c3
<natefinch>     # override if your workstation is running a different series to which you are deploying
<natefinch>     # default-series: precise
<natefinch>     bootstrap-timeout: 3600
<natefinch>     # region defaults to us-east-1, override if required
<natefinch>     # region: us-east-1
<natefinch>     # Usually set via the env variable AWS_ACCESS_KEY_ID, but can be specified here
<natefinch>     # access-key: <secret>
<natefinch>     # Usually set via the env variable AWS_SECRET_ACCESS_KEY, but can be specified here
<natefinch>     # secret-key: <secret>
<rogpeppe> natefinch: i bet it works if you delete the control-bucket entry
<natefinch> hmm
<rogpeppe> natefinch: are you sure you have no currently running instances?
<rogpeppe> natefinch: you can delete admin-secret too
<natefinch> rogpeppe: I did manually kill a bootstrap node last night
<rogpeppe> natefinch: well, that'll be the reason
<rogpeppe> natefinch: there's more to an environment than just the jenv file
<natefinch> destroy-environment --force should put us back in a reasonable position, though, and it doesn't
<rogpeppe> natefinch: i don't think so
<rogpeppe> natefinch: in your particular case, perhaps
<rogpeppe> natefinch: but in general we don't know where the environment is if you've deleted the .jenv file
<natefinch> there should never be a time when, after doing destroy-environment --force that juju bootstrap tells me it's still bootstrapped
<natefinch> that's what --force is for
<natefinch> it's the "just fucking do it" flag
<natefinch> anyway....
<rogpeppe> natefinch: it's awkward, because in this case only you'd need to ignore the .jenv file and use info from environments.yaml
<thedac> adeuring: wrt bug#1243827 we were using juju core 1.14.x and juju-deployer 0.2.3+bzr115-0~26~precise1ubuntu1~0 at the time. AFAIK, the problem still remains with 1.16.x and 0.3.0-0ubuntu2. I'll see if I can test that theory today. But the bug has the exact example of the failure.
<_mup_> Bug #1243827: juju is stripping underscore from options <canonical-webops> <config> <goyaml:Fix Released by adeuring> <juju-core:Invalid by adeuring> <juju-deployer:Invalid by hazmat> <https://launchpad.net/bugs/1243827>
<rogpeppe> natefinch: well, i suppose if the .jenv file doesn't exist, we should perhaps create a new Environ and then destroy it
<natefinch> rogpeppe: there's no jenv, that's the thing.  Maybe it's just a matter of the fact that the environments.yaml is an anti-feature
<rogpeppe> natefinch: there *was* a jenv
<natefinch> rogpeppe: yeah
<rogpeppe> natefinch: until you deleted it (probably manually, or perhaps with destroy-environment --force)
<rogpeppe> natefinch: i think it's that control-bucket is an anti-feature
<natefinch> rogpeppe: could be yeah
<rogpeppe> natefinch: without that, you'll get a new env each time
<rogpeppe> natefinch: you can still leak environments though (and you probably would have in this case)
<adeuring> thedac: thanks for the version info. and right, the problem still exists in recent versions -- but I did not test juju 1.14.
<natefinch> rogpeppe: ahh, yeah, good point.
<natefinch> rogpeppe: taking out the control bucket worked
<natefinch> rogpeppe: I'll probably have to go clean up the original bucket
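rogpeppe's diagnosis, condensed: a fixed `control-bucket` ties every bootstrap to the same S3 bucket, so state left behind by a manually killed instance makes the next bootstrap report "already bootstrapped". Per "without that, you'll get a new env each time", a sketch of the entry with those fields omitted (juju then generates fresh values at bootstrap; details of that behaviour are as described in the log above, not verified here):

```yaml
environments:
  amazon:
    type: ec2
    # control-bucket and admin-secret deliberately omitted: juju generates
    # fresh values per bootstrap, so stale bucket contents from a killed
    # instance can't make a new environment appear "already bootstrapped"
```

As noted in the exchange, environments can still be leaked this way; the old bucket may need manual cleanup.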
<sinzui> juju devs, r2422 is blessed. I will release it as 1.17.5
<rogpeppe> small review anyone? https://codereview.appspot.com/76120043/
<rogpeppe> natefinch: ^
<natefinch> rogpeppe: pairing with wayne right now, may get to it in a little bit once we figure out what's wrong with my code
<rogpeppe> natefinch: ok
<lazyPower> anyone have a moment to help a user that's having issues getting the cloudimg deployed with juju? I'm baffled as to why juju isn't fetching and deploying the cloudimg tarball for him.
<lazyPower> this is extending past my knowledge of how juju-core works
<sinzui> jamespage, 1.17.5 tarball is released. I am going to make the source packages for the ppas now. While you could make the packages with the current SPB, I changed the testing branch to install the bash completions file in the juju tree. https://launchpad.net/juju-core/+milestone/1.17.5
<jamespage> * Juju support for juju-mongodb is temporarily removed. The feature
<jamespage>   will be restored when it is more complete.
<sinzui> jamespage, ~juju-packaging/+archive/devel has the source package now
 * jamespage crys a bit
<sinzui> jamespage, yeah. I hoped it would be put back before today
 * bodie_ offers jamespage a slightly disheveled hankie
<natefinch> rogpeppe, mgz_: any idea why mongo would be running, but jujud wouldn't be able to connect to it?  I can connect to it using the mongo command line, but jujud just says connection refused
<rogpeppe> natefinch: what does (ps alxw | grep mongo) tell you?
<rogpeppe> natefinch: is this in a testing context?
<bodie_> I'm having a hard time understanding some of the "tribal knowledge" around how to actually run and test things
<natefinch> rogpeppe: unfortunately the instance got killed, even with a ridiculously long timeout in my environments.yaml
<bodie_> is there an ideal dev workflow here?
<bodie_> like, my tests were breaking yesterday and I had no idea why -- I assume juju is running without full green lights?
<rogpeppe> natefinch: i need more context
<bodie_> I was going on the assumption that I'd get greens, make changes, see what breaks, fix it, etc
<rogpeppe> lazyPower: i don't think i understand your question
<natefinch> rogpeppe: ok, I'll retry it after lunch
<rogpeppe> lazyPower: what's "cloudimg" ?
<natefinch> bodie_: all the tests should pass... there are a few  that are flaky
<lazyPower> rogpeppe: there's a user outside the us (not sure if this is a cause) - that wasn't able to bootstrap a local environment. The cloudimg tarball was failing to download. He managed to fetch the tarball manually but i didn't know where to tell him to put it
<lazyPower> marcoceppi joined in and clued us in that it lives in /var/cache
<rogpeppe> lazyPower: ah, local environment - i'm afraid i've never looked into it in that much detail so i won't be much help, sorry
<jcw4> natefinch do folks usuall run go test juju-core/... ?
<natefinch> rogpeppe: it looks like the bootstrap-timeout in environments.yaml isn't actually working... at least, my instances are still getting killed way before the timeout I set there
<natefinch> jcw4: yep.   And in fact, code can't land unless that passes
<jcw4> natefinch interesting, so if they fail locally something's wrong in *our* environment?
<natefinch> jcw4: however, those tests take a long time, so generally you want to run a subset during iterative development, and only run the full thing when you're pretty sure your code is right
<jcw4> natefinch I'll grep the docs, but is there a CI link for juju-core?
<natefinch> jcw4: there are tests under juju-core/replicaset that are unfortunately flaky, if it's those that are failing, it's not just you... it's an ongoing problem
<jcw4> also providers/azure
<jcw4> well actually compile failures there
<natefinch> jcw4: make sure you update your dependencies, they may have changed (like the azure provider, I think some changes landed recently to that)
<jcw4> natefinch will do, thx
<natefinch> jcw4: if you do go get launchpad.net/godeps  you'll have a godeps executable, that you can then use from the root juju-core folder  doing godeps -u dependencies.tsv and it'll tell you if anything is out of date
<natefinch> jcw4: you'll have to update the out of date stuff manually, but at least you'll know what needs doing
<jcw4> natefinch excellent much better than manually finding outdated stuff
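The godeps workflow natefinch describes, plus the file format it reads. The revision id below is invented for illustration; the tab-separated layout (import path, VCS kind, revision id, revision number) is assumed from godeps' conventions:

```shell
# Not run here (needs network access and the juju-core tree):
#   go get launchpad.net/godeps
#   cd "$GOPATH/src/launchpad.net/juju-core"
#   godeps -u dependencies.tsv    # reports which checkouts are out of date
#
# What a dependencies.tsv line looks like, and how to read one:
printf 'launchpad.net/goose\tbzr\texample@example.com-20140314-abcdef\t121\n' \
    > /tmp/dependencies.tsv
awk -F'\t' '{print $1, "pinned to", $3}' /tmp/dependencies.tsv
# -> launchpad.net/goose pinned to example@example.com-20140314-abcdef
```

As natefinch notes, godeps only reports the out-of-date repositories; updating them is manual.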
<bodie_> ok, I'm getting a bunch of test fails with a freshly checked out copy of everything
<bodie_> not too sure what's up
<bodie_> is that normal?
<bodie_> running go test ./... in $GOPATH/src/juju-core
<natefinch> bodie_: can you paste the test output to pastebin.ubuntu.com - that would help figure things out
<bodie_> sure
<bodie_> it's pretty huge though
<bodie_> I think the initial database connection is failing, so pretty much everything else barfs too
<bodie_> but I recompiled Mongo with SSL enabled and all that, and it works fine
<natefinch> bodie_: yeah, I know the test output is huge...  it's something I've wanted to be able to turn on and off for a while, it just hasn't been at the top of the priority list.
<bodie_> lol, from man watch
<bodie_> "you can watch for your administrator to install the latest kernel with watch uname -r (just kidding)"
<bodie_> oh, the "just kidding" isn't in my version of the manpage
<rogpeppe> natefinch: that branch i pointed you towards earlier makes it easier to make test output more or less verbose
<natefinch> rogpeppe: hey, that's cool.
<rogpeppe> natefinch: i'm still swithering about whether it's better as an environment variable or as a flag
<natefinch> rogpeppe: yeah, that's tricky. both would be nice, so you could set it in the environment the way you like the default, and then override with the flag as necessary
<rogpeppe> natefinch: i'm not a big fan of both. i prefer one way for most things.
<rogpeppe> natefinch: i went with the flag because i think that people might be able to break tests that test log output by setting the level to ERROR
<natefinch> rogpeppe: if I had to pick one or the other, I guess the environment variable, since then it's sticky, and I'm guessing there are people who are going to want the default to be one thing or the other
<natefinch> rogpeppe: I think tests that depend on the log output are broken to begin with
<rogpeppe> natefinch: that's not necessarily true
<rogpeppe> natefinch: some tests are specifically testing that something is logged
<natefinch> rogpeppe: I suppose that's valid, though I'd prefer if they were able to hook into the logging system to see that something would have been logged rather than checking the output to see what is actually output
<rogpeppe> natefinch: they are generally doing that
<rogpeppe> natefinch: but the logging system is influenced by the global logging config
<rogpeppe> natefinch: such tests should probably reconfigure the logging system as desired
<rogpeppe> natefinch: but i didn't want to go through making that change everywhere
<natefinch> rogpeppe: yeah, that was my thought, though then that kind of defeats the purpose of being able to adjust how much is logged
<rogpeppe> natefinch: only for those (very few) tests.
<bodie_> natefinch, here's my test output...
<bodie_> http://paste.ubuntu.com/7091353/
<bodie_> but, I totally built mongo with SSL
<natefinch> rogpeppe: ideally there would be two different loggers, one that respects the log level and does the real logging, and one that ignores it that is used for tests.  But I don't know anything about how loggo works to know if that's feasible
<rogpeppe> natefinch: there are many different log levels
<rogpeppe> natefinch: it's hierarchical actually
<bodie_> oh, now it's over 1300 lines
<bodie_> ok.
<rogpeppe> bodie_: that's small :-)
<rogpeppe> bodie_: i had a 500k line output from a test yesterday
<bodie_> lol
<bodie_> that's silly
<bodie_> ain't nobody got time fo dat
<natefinch> bodie_: that's definitely mongo not having ssl, still: mongod: Error parsing command line: unknown option sslOnNormalPorts
<rogpeppe> bodie_: darn straight
<natefinch> bodie_: as for azure....  what branch are you building?
<bodie_> seriously, though.  there really ought to be a fakey backend or something
<rogpeppe> bodie_: yeah.
<rogpeppe> bodie_: choosing the right level for it is the difficult thing
<bodie_> right...
<rogpeppe> bodie_: i've wondered about faking up an in-memory mongo
<bodie_> env variable to define backend testing level?
<bodie_> FAKE_BACKEND=true?
<rogpeppe> bodie_: yeah, but someone's got to implement it
<bodie_> right, heh
<natefinch> rogpeppe: my point is not about having levels or not, my point is about having different log targets.  So you can log like debug+ to the debug log over there and log info+ to stdout and have a testing output that always logs everything.
<rogpeppe> natefinch: you can do that too
<bodie_> this is just the freshly checked out trunk
<rogpeppe> natefinch: except that the global config chokes log messages at source
<natefinch> rogpeppe: well, see, that's the problem :)
<rogpeppe> natefinch: honestly, it's not actually a problem
<bodie_> grrhh...  I wiped out the old mongo installation though
<bodie_> I double-checked just now that mongo and mongod are the copies I built yesterday
<natefinch> bodie_: if you do mongod --help do you see a section of flags labelled SSL?
<bodie_> ugh
<bodie_> no
<natefinch> bodie_: there's your problem
<natefinch> brb
<bodie_> oh I know why
<rogpeppe> natefinch, mgz_: still looking for a review of https://codereview.appspot.com/76120043 if poss
<natefinch> rogpeppe: looking
<natefinch> wwitzel3: you around?
<natefinch> rogpeppe: lgtm'd
<bodie_> alright
<rogpeppe> natefinch: ta
<bodie_> successfully built mongo with ssl
<bodie_> the instructions are screwy
<natefinch> bodie_: I told you
<natefinch> bodie_:  I have a write up somewhere with actually good instructions, but I can't figure out where they went.
<bodie_> when you do scons install, you have to use the --ssl flag both times
<bodie_> simple
<bodie_> i assumed it was something like ... build locally into a ./build folder with SSL...
<bodie_> then scons install uses that
<bodie_> BUT NO
<natefinch> yeah
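Condensing bodie_'s finding: `--ssl` has to be given on *both* scons invocations, or the installed binaries silently lack SSL, and the quick way to tell is the same help-text check used above. A sketch, safe to run even on a machine where mongod isn't installed:

```shell
# The build recipe as discussed (not executed here):
#   scons --ssl all
#   sudo scons --ssl install      # --ssl needed on BOTH invocations
#
# Check whichever mongod is first on PATH for SSL support:
if mongod --help 2>/dev/null | grep -qi ssl; then
    echo "SSL build"
else
    echo "no SSL support"
fi
```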
<sinzui> natefinch, do you have a few minutes to review the branch that updates trunk to 1.17.6?   test-release-hp:
<sinzui>     # juju --show-log bootstrap -e test-release-hp
<sinzui>     type: openstack
<sinzui>     test-mode: true
<sinzui>     use-floating-ip: false
<sinzui>     admin-secret: 1717-burgers-hotdogs-potato-salad-baked-beans
<sinzui>     control-bucket: juju-pavlova-snags--chiko-roll-meat-pie-pasty-sauce-2013-10-10
<sinzui>     default-series: precise
<sinzui>     auth-url: https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/
<sinzui>     authorized-keys: |
<sinzui>       ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDyEdd8eDyy2WA6Q+W4GV2NqKX+yfkkW5ogSu3/7DjlqjED7dCNkevq2qpdR1AMkbKTouihWdyc8QKl9lzuwn0zoocXJUVgIOPV+KFxSAhj1djGvryHDjYUwdrLYUMu3CUFeIsRao2cn7EgZs0w1Y1quqr9c8cEg7XsAs0ZMN9YksEjG000VupOIZJNtk+5EYJm/6vNFI83IOn7ctWfjXymBuh7XM8d8vszyYDRdeDXY5Q9VLqHOP7/CFteIvcdHnSC1ObQuKzXRWz+m9thgQnRQjvirdwvDUXhjjQk9MNJZj84EukB8HyAVSN863MfuVGoCsNn7iEdtT6W2nKTWyL3 abentley@speedy
<sinzui>     tools-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/60502529753910/juju-dist/testing/tools
<sinzui>     auth-mode: userpass
<sinzui>     username: "sinzui"
<sinzui>     password: "MyPineappleCalls4uInTheNight"
<sinzui>     tenant-name: "juju-scale-test"
<sinzui>     region: az-3.region-a.geo-1
<sinzui> well not
<sinzui> not that crack
<sinzui> natefinch, https://codereview.appspot.com/76170043
<natefinch> sinzui: you have the best passwords and secrets I've ever seen :)
<sinzui> natefinch, yeah :( I am pretty sad to see that go
<natefinch> sinzui: lgtm.... wish we could do the versions some other way than changing sourcecode, though
<mramm> sinzui: our upgrade compatibility policy includes 1.18 clients tested against a 1.16 server right?
<sinzui> Yes mramm
<mramm> ok
<mramm> thought so, thanks sinzui
<bodie_> ... Panic: not authorized on admin to execute command { listDatabases: 1 } (PC=0x414386)
<bodie_> that doesn't seem right
<bodie_> unless tests aren't running as my user?
<natefinch> bodie_: I haven't seen that one
<natefinch> bodie_: they run as you
<bodie_> http://paste.ubuntu.com/7091566/
<bodie_> O_o
<natefinch> bodie_: the azure stuff has to be launchpad.net/gwacl being out of sync somehow
<natefinch> bodie_: not sure about the mongo problem....
<bodie_> http://paste.ubuntu.com/7091591/
<bodie_> :S
<rogpeppe> oh bugger, looks like the last time i claimed aws expenses was exactly a year ago
<bodie_> I thought we confirmed the azure provider wasn't working right now
<natefinch> rogpeppe: haha
<rogpeppe> well, i can only try
<bodie_> so uh... the azure provider is working for other people?
<bodie_> like, if you go test, you don't get the error with azure?
<natefinch> bodie_: lemme try
<bodie_> it's right at the top, if you do get it
<natefinch> bodie_: totally broken, you're right
<bodie_> ah
<bodie_> ok
<bodie_> so just disregard or how do I pass tests in order to submit revisions?
<natefinch> bodie_: you won't be able to get code landed with those tests broken like that...
<natefinch> bodie_: I'll see if I can fix it
<bodie_> nice!  can I help anyhow?
<bodie_> I'm looking for a place to start digging
<natefinch> bodie_: I'm sure it's just a matter of gwacl getting changed without juju getting corresponding changes in the tests
<rogpeppe> total $1759.26
<natefinch> bodie_: it's a problem with dependencies outside juju-core... I'm sure the gwacl tests pass just fine, it's just that gwacl can't know you also need to fix juju-core (and there's still a chicken and egg problem where you have to check one in first, and that breaks either itself or the other one)
<natefinch> rogpeppe: for 12 months that's not terrible.  Seems a bit high, though.  Did you have some unusually big months for some reason?  mine are usually $30-50 a month
<rogpeppe> natefinch: biggest was $323.39
<natefinch> rogpeppe: that's pretty big
<rogpeppe> natefinch: i was probably doing a lot of live tests then
<natefinch> rogpeppe: btw, looks like the azure tests are broken... probably gwacl changes landing without corresponding juju-core tests.  Is that something the guys down under are working on?
<bodie_> natefinch, I really appreciate all the assistance getting up to speed here.  it's pretty daunting
<natefinch> bodie_: it really is pretty daunting.  We should write up a getting started document. It was hard for me getting up to speed too (I started ~9 months ago)
<natefinch> bodie_: yeah, looks like the signature of one of the functions in gwacl changed a couple days ago, and no one noticed it broke the juju-core tests
<bodie_> ah, good stuff
<bodie_> I feel useful.. sorta
<bodie_> is there a specific version of mongo we should be using?
<bodie_> I just built the latest master, which I think is 2.4
<bodie_> I'm getting an error "not authorized for upsert" which apparently others are also seeing
<bodie_> if anyone is running 2.4 and not seeing that error, would like to know :)
<natefinch> bodie_: I'm running 2.4 and not getting that error... I believe we support 2.2 and above.
<bodie_> oh, i'm on 2.7-pre
<bodie_> O_o
<bodie_> I thought I was on the stable...
<rogpeppe> i am done for the day
<rogpeppe> well past it actually
<wwitzel3> rogpeppe: have a good weekend :)
<rogpeppe> wwitzel3: and you!
<bodie_> fyi: https://svn.boost.org/trac/boost/ticket/7242
<bodie_> ugh, maybe this isn't what I need after all
<bodie_> Panic: not authorized on admin to execute command { listDatabases: 1 } (PC=0x414386)
<bodie_> any thoughts?
<bodie_> using Trusty packaged mongo
<bodie_> this from the test suits
<bodie_> suite*
<natefinch> bodie_: the trusty packaged mongo may not have SSL either.  I haven't seen that particular error before, unfortunately
<natefinch> unfortunately, it's end of day for me here
<bodie_> alrighty :)
<bodie_> Gustavo said it has SSL and I'm no longer getting the SSL-related error... not sure what this one's about, though
<natefinch> good luck, and if you don't figure it out, there's always Monday :)
<bodie_> farewell and have an excellent weekend
<natefinch> you too
<niemeyer> bodie_: This looks like plain lack of login
<niemeyer> bodie_: and this is not about SSL
<niemeyer> bodie_: Or at least not in an obvious way
<niemeyer> bodie_: It wouldn't get to the point of issuing a listDatabase command and getting "auth denied" if the SSL communication was corrupted
<niemeyer> bodie_: Are you able to run the test in isolation?:
<bodie_> negative
<bodie_> PANIC: mgo_test.go:42: mgoSuite.TearDownTest
<bodie_>   in resetAdminPasswordAndFetchDBNames
<bodie_> hm, that didn't paste properly
<bodie_> home/bodie/go/src/launchpad.net/juju-core/testing/mgo.go:366
<bodie_>   in resetAdminPasswordAndFetchDBNames
<bodie_> obviously /home
<bodie_> http://paste.ubuntu.com/7092264/
<bodie_> I feel like this one should be obvious
<bodie_> but I'm not quite seeing it
<niemeyer> bodie_: Hmm
<niemeyer> bodie_: I'd check how isUnauthorized is implemented
<niemeyer> bodie_: The error apparently *is* a lack of authorization one
<niemeyer> bodie_: It may be doing a string check against the error, and that string may have changed across MongoDB releases
<bodie_> okie doke
<bodie_> howdy thumper
<thumper> o/
<thumper> just passing through
<hazmat> thumper, do we have no-proxy support in our proxy bits?
<hazmat> oh.. that passthrough was a while ago
#juju-dev 2014-03-15
<thumper> hazmat: yes
<hazmat> thumper, cool, found it thanks.. looks like bootstrap only config via env var.
<bits3rpent> go install launchpad.net/juju-core doesn't seem to be installing the juju binary.
<bits3rpent> Anyone experiencing the same issue?
<bodie_> ok I'm getting frustrated by the same thing
<bodie_> I can't seem to install the juju binary since I can't build the whole repo since gwacl is wonky
<bodie_> I can easily install it using apt-get, but I want to be able to test changes to the code, so...
<bodie_> not quite sure where to begin here
<bodie_> I'm willing to tackle the gwacl thing
<bodie_> just need some guidance on that
<bodie_> go install launchpad.net/juju-core/juju isn't doing the trick
<hazmat> bodie_, go get -u -v launchpad.net/juju-core/...
<hazmat> bodie_, i'd set GOPATH to whatever place you want it to end up first
<hazmat> bits3rpent, ^
<bodie_> that's what I did...
<bodie_> http://paste.ubuntu.com/7095694/
<bodie_> http://paste.ubuntu.com/7095719/
<bodie_> no juju binary
<hazmat> bodie_, hmm checking
<hazmat> bodie_, could be an incompatibility in an underlying library, typically those are transient..  or need updating the library first
<hazmat> there's a known set of good revisions in the juju-core checkout but that uses another tool for builds
<hazmat> bodie_, interesting.. so i get those warnings, but things continue.. after you've done the get.. you should be able to do $ go install launchpad.net/juju-core/...
<hazmat> bodie_, it does look like an incompatibility between trunk of gwacl and juju-core
<bodie_> yeah
<bodie_> we discussed that a bit yesterday
<bodie_> I'm not sure how to proceed
<hazmat> bodie_, after the go get .. you can cd $GOPATH/src/launchpad.net/gwacl
<bodie_> ok
<hazmat> and bzr up -r 231
<bodie_> NICE
<bodie_> thank you!
<bodie_> I'd be totally down to help get things synchronized
<bodie_> I'm just completely new
<hazmat> bodie_, no worries.. so you can't do go get with the -u flag till that incompatibility gets resolved
<bodie_> nuts
<bodie_> maybe I could spoof the remote somehow :P
<hazmat> bodie_, i mean you can but you'll have to go back and update the revision
<bodie_> or just always go get -u and then checkout r231
<bodie_> ah ok
<hazmat> bodie_, ie. if you want to patch on core.. just update core by hand..
<bodie_> but then I'll be able to go build <blah> ?
<hazmat> bodie_, go install  will rebuild the binary
<bodie_> ok
<hazmat> bodie_, the known good versions of each lib are in $GOPATH/src/launchpad.net/juju-core/dependencies.tsv
<hazmat> generally trunk of each works though... but as you've discovered there are times when not.. i imagine it will get resolved monday
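[editor's note] A sketch of pinning every dependency to the revision recorded in dependencies.tsv, generalizing hazmat's one-off `bzr up -r 231` for gwacl. The column layout (import path, vcs, revision id, revno) is an assumption about the file's format, and only bzr-hosted dependencies are handled here.

```shell
#!/bin/sh
# Pin each dependency to the known-good revision from dependencies.tsv.
# Assumed columns: import-path <TAB> vcs <TAB> revision-id <TAB> revno
TSV="$GOPATH/src/launchpad.net/juju-core/dependencies.tsv"

while IFS="$(printf '\t')" read -r pkg vcs revid revno; do
    [ "$vcs" = "bzr" ] || continue   # only bzr deps handled in this sketch
    ( cd "$GOPATH/src/$pkg" && bzr update -r "$revno" )
done < "$TSV"

# Then rebuild against the pinned tree:
go install -v launchpad.net/juju-core/...
```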
<bodie_> hmm
<bodie_> okie doke
<bodie_> i've been struggling since wednesday to get it to build and run tests properly, so I'm happy if I can get something working at all :/
<hazmat> bodie_, fwiw, i just did the  from scratch thing .. go get -u -v launchpad.net/juju-core/... && cd $GOPATH/src/launchpad.net/gwacl && bzr up -r 231 && go install -v launchpad.net/juju-core/...
<hazmat> worked for me
<bodie_> okay, sweet
#juju-dev 2014-03-16
<bodie_> same here
<bodie_> hrng
<bodie_> I keep getting "not authorized" errors in my tests
<bodie_> running v2.7.0-pre- (stock from ubuntu 14.04 repo)
<bodie_> er, mongo
<rick_h_> bodie_: I'd stick a paste up on paste.ubuntu.com. Most folks will be afk this weekend. Not sure when you'll hear back
<bodie_> all righty
<bodie_> http://paste.ubuntu.com/7099247/ there we go
<thumper> o/ fwereade
<fwereade> thumper, heyhey
<thumper> fwereade: I'm in the hangout
<fwereade> Flowing from my brush
<fwereade> Glowing on my screen
<fwereade> Four lines readable in any order
<fwereade> The essence of thread safety
<voidspace> wwitzel3: ping
<voidspace> fwereade: very poetic :-)
<thumper> o/ voidspace
<voidspace> thumper: hi
<davecheney> o/
<davecheney> is the bug where a single invalid constraint will knock out all constraints fixed ?
<wallyworld> davecheney: bug number?
<davecheney> no idea
<davecheney> gz had it
<davecheney> he was talking about it in #juju
<rick_h_> wallyworld: or thumper got a sec to help bodie_ out with an issue in running tests? http://paste.ubuntu.com/7099247/
<wallyworld> looking
<wallyworld> davecheney: ok, will ask gz, i hadn't heard of that issue
<wallyworld> rick_h_: hmmmm. i've not seen that error before
<davecheney> rick_h_: bodie_ wrong mongo version
<rick_h_> davecheney: ah cool. Hopefully he'll see this. He was hitting up the channel this weekend
<rick_h_> thanks for looking davecheney/wal
<rick_h_> wallyworld:
#juju-dev 2015-03-09
<mup> Bug #1429680 was opened: keystone charm do not support conf file injection <juju-core:New> <https://launchpad.net/bugs/1429680>
<mup> Bug #1429680 changed: keystone charm do not support conf file injection <juju-core:New> <https://launchpad.net/bugs/1429680>
<mattyw> morning all
<mattyw> axw, are you still around?
<axw> mattyw: hey, yes
<dooferlad> morning! o/
<dimitern> morning dooferlad
<dimitern> dooferlad, how's your new shiny maas? :)
<dooferlad> dimitern: not too bad. Unfortunately they don't support AMT and MAAS can get quite confused with WOL.
<dooferlad> dimitern: so plenty of pressing power buttons.
<dooferlad> dimitern: though I did get some useful debugging done on Friday with them :-)
<dimitern> dooferlad, ah, right
<voidspace> morning all
<dimitern> hey voidspace! welcome back :)
<dooferlad> voidspace! Welcome back!
<TheMue> morning o/
<voidspace> dimitern: dooferlad: TheMue: morning :-)
<dimitern> dooferlad, I managed to get maas installed on an old i386 P4 box and added my 2 nucs in it
<dooferlad> dimitern: great!
<dimitern> dooferlad, it works - but I have mixed success with amt - it works like 50% of the time
<dimitern> dooferlad, my goal is to have openstack installed on top of that and share it with you guys - maas is already visible at http://kubrik.us.to:8888/MAAS/
<dimitern> dooferlad, but to expose the nodes behind it a bit more networking needs to happen
<dooferlad> dimitern: heh, I know the feeling!
<dimitern> morning, TheMue, I'm omw
<TheMue> dimitern: /me too
<perrito666> morning
<TheMue> perrito666: heya
 * perrito666 needs to update the fw on his samsung evo 840 and notices the first step requires a ... DOS bootable usb
<voidspace> yay, powercut over
<voidspace> dimitern: ping
<dimitern> voidspace, hey, nice!
<dimitern> voidspace, just in time of our 1:1 :)
<voidspace> hah, ok
<voidspace> dimitern: omw
<mup> Bug #1429790 was opened: juju debug-hooks not working when tmux not installed <juju-core:New> <https://launchpad.net/bugs/1429790>
<voidspace> dimitern: oh, I forgot to mention - I ordered a new laptop
<voidspace> dimitern: so I will be running Ubuntu native at the Nuremberg sprint
<dimitern> voidspace, nice!
<dimitern> voidspace, not macbook ?
<voidspace> dimitern: Dell XPS 15 - quad core i7, 16Gb ram
<voidspace> dimitern: no :-)
<dimitern> voidspace, sweeet!
<voidspace> dimitern: although I will need to keep it to sync with my iphone
<voidspace> dimitern: I have an Ubuntu phone on order too
<voidspace> dimitern: if it's good enough to use as a main phone maybe I can ditch the iphone too...
<dimitern> voidspace, I have my eye on that same model - will need to replace my current one in october
<voidspace> dimitern: I got a refurb one for £1000
<dimitern> voidspace, ah, that's a bit tricky - last time I tried to replace my main phone with ubuntu it wasn't quite usable - mostly due to the battery running out in a couple of hours
<voidspace> dimitern: right, we'll see
<voidspace> dimitern: I'll *probably* go to Android as having a good camera is important to me
<voidspace> dimitern: but with a camera, a browser and maps I can do most things
<dimitern> voidspace, I'll be interested to hear about your experience with the ubuntu phone once you had it for a while
<voidspace> dimitern: apparently the google maps *web app* works pretty well with Ubuntu phone
<voidspace> dimitern: and I *presume* they've sorted battery life as they wouldn't be able to do a public release like that
<dimitern> voidspace, I hope so :)
<voidspace> Just discovered why the power cut. Someone posted some photos to facebook.
<voidspace> "The tipper lorry, having dumped his load of topsoil (on the wrong side of the road by the look of it) drove forward with the tipper still up and pulled down a power cable."
<dimitern> wow!
<dimitern> voidspace, priceless *lol*
<voidspace> yeah, nice :-)
 * TheMue steps out for a moment, bbiab
<mup> Bug #1429853 was opened: unit test TestInstallCommandsShutdown...initSystemSuite fails <ci> <gccgo> <ppc64el> <regression> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1429853>
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: 1429853
<TheMue> back
 * dimitern steps out for ~1.5h
<mup> Bug #1429864 was opened: juju should support putting units/services into WIP/frozen mode <juju-core:New> <https://launchpad.net/bugs/1429864>
<sinzui> ericsnow, I think bug 1429853 relates to your commit.
<mup> Bug #1429853: unit test TestInstallCommandsShutdown...initSystemSuite fails <ci> <gccgo> <ppc64el> <regression> <test-failure> <juju-core:In Progress by ericsnowcurrently> <https://launchpad.net/bugs/1429853>
<ericsnow> sinzui: oh, it definitely does :)
<ericsnow> sinzui: I thought I was being careful about map order in tests this time :(
<sinzui> ericsnow, I am making a snapshot of the vivid-slave before trying to switch to systemd. we have the package, but I need to mess with the boot options to make it work
<ericsnow> sinzui: won't the new images coming out today do that for you?
<sinzui> ericsnow, yes, but that requires me to schedule the build of a new slave
<ericsnow> sinzui: ah
<ericsnow> could I get a quick review on http://reviews.vapour.ws/r/1105/
<ericsnow> it fixes the CI blocker
<mgz> ericsnow: lgtm
<ericsnow> mgz: thanks
<mup> Bug #1429680 changed: keystone charm do not support conf file injection <keystone (Juju Charms Collection):New> <https://launchpad.net/bugs/1429680>
<ericsnow> mgz: I'm not sure this worked right: http://juju-ci.vapour.ws:8080/job/github-merge-juju/2426/console
<ericsnow> mgz: it passed without actually running any tests?
<mgz> ericsnow: looking
<mgz> yerp, that's terrible
<mgz> well, at least it was your change and it's a nice tiddly one that's not relevant for the lander
<mgz> jesse's also had the same thing
<ericsnow> mgz: yep :)
<ericsnow> mgz: perhaps it's related to the grub messages?
<ericsnow> mgz: probably not
<mgz> yeah, it's the run-unit-test script not erroring out when it should
<ericsnow> mgz: ah
<mgz> I have a short-term fix and a better longer one
<ericsnow> mgz: cool
<ericsnow> mgz: thanks for jumping on that
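[editor's note] A generic sketch of the failure mode mgz describes (a wrapper script "passing" without its tests actually failing the job); this is not the actual run-unit-tests script, just the usual shell options that prevent it.

```shell
#!/bin/bash
# Without these options a CI wrapper can exit 0 even when the test
# command fails or is never reached, which is how a merge job can
# "pass" without effectively running any tests.
set -eu          # stop on the first failing command or unset variable
set -o pipefail  # a failure anywhere in a pipeline fails the pipeline

go test ./... | tee test.log
# Without pipefail, the script's status here would be tee's exit 0
# even when `go test` failed; with it, the failure propagates.
```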
<ericsnow> wwitzel3: you see that email about HA?
<ericsnow> mgz: why does the publish-revision job take so long?
<mgz> ericsnow: okay, updated the ami, which is the quick fix
<mgz> ericsnow: does quite a bit of work... but does seem to have been taking about an hour for a while
<ericsnow> mgz: no problem, just curious (since it's the blocker for all the other release jobs)
<mgz> ericsnow: it may be worth trying to speed up, I've not looked into where the time is spent
<ericsnow> mgz: if you have time for it :)
<mgz> we have to build on several platforms, may be more parallelisable
<ericsnow> mgz: when trying to fix a failed CI test any improvement to the total time-to-test-completion is super helpful
<wwitzel3> ericsnow: looking now
<ericsnow> wwitzel3: thanks!
<wwitzel3> ericsnow: what list?
<ericsnow> wwitzel3: no list
<wwitzel3> ericsnow: don't see it
<ericsnow> wwitzel3: the "Nate out today" thread
<wwitzel3> ericsnow: I see it, thanks
<alexisb> wwitzel3, btw, thank you for the extra hours on that oil bug
<wwitzel3> alexisb: sorry I couldn't get it figured out, I'm convinced that it is related to the power script not resulting in a clean shutdown.
<ericsnow> sinzui: am I okay marking bug 1429853 as "fix released" since the failing CI test just passed?
<mup> Bug #1429853: unit test TestInstallCommandsShutdown...initSystemSuite fails <ci> <gccgo> <ppc64el> <regression> <test-failure> <juju-core:Fix Committed by ericsnowcurrently> <https://launchpad.net/bugs/1429853>
<sinzui> ericsnow, we really should wait for all the run-unit-tests to pass
<sinzui> I am waiting for run-unit-tests-precise-i386 to complete
<ericsnow> sinzui: okay; I'm always just antsy to unblock CI :)
<sinzui> ericsnow, i understand. I want to avoid the 32-days-and-77-commits case where we fixed blockers but never actually got a passing revision
<ericsnow> sinzui: agreed
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: None
<sinzui> dimitern, do you have a moment to review http://reviews.vapour.ws/r/1018/
<dimitern> sinzui, sure, looking
<dimitern> sinzui, it seems it has been submitted already
<sinzui> dimitern, sorry, wrong review, me looks
<mup> Bug #1429853 changed: unit test TestInstallCommandsShutdown...initSystemSuite fails <ci> <gccgo> <ppc64el> <regression> <test-failure> <juju-core:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1429853>
<sinzui> dimitern, review board might be lagging. https://github.com/juju/juju/pull/1776 is what I am working with
<dimitern> sinzui, LGTM
<sinzui> thank you dimitern
<mattyw> katco, ping?
<katco> mattyw: hello!
<sinzui> dimitern, TheMue: does bug 1428430 need to be fixed before we branch 1.23
<mup> Bug #1428430: AllWatcher does not remove last closed port for a unit, last removed service config <api> <juju-core:Triaged by themue> <https://launchpad.net/bugs/1428430>
<TheMue> sinzui: I'm on it, when exactly will you branch?
<sinzui> TheMue, I was planning to do it when the current revision passes because I thought all the bugs for 1.23-beta1 were fixed. I didn't see the new bug
<sinzui> TheMue, I think the pressure is from core developers wanting to merge 1.24 work. I can branch from any commit
<mup> Bug #1427149 changed: Tests require predictable map ordering <ci> <regression> <test-failure> <juju-core:Fix Released by dooferlad> <https://launchpad.net/bugs/1427149>
<TheMue> sinzui: as I said, I'm on it and I hope I will have it in a few moments. fighting a bit with the test
<dimitern> sinzui, TheMue, it's ok to move that bug for 1.24, it's not critical
<sinzui> TheMue, okay. I am not rushing. I have the 1.22-beta5 release to do
<dimitern> TheMue, if you manage, ok, if not - no problem
<TheMue> dimitern: but I want to *lol*
<mup> Bug #1404397 changed: charm actions.yaml simplification breaks master tests <actions> <charm> <tech-debt> <juju-core:Fix Released by binary132> <https://launchpad.net/bugs/1404397>
<mup> Bug #1415961 changed: juju gives up on bootstrapping with 'bootstrap instance started but did not change to Deployed state' <bootstrap> <maas-provider> <oil> <juju-core:Fix Released by dooferlad> <juju-core 1.22:Fix Released by wallyworld> <https://launchpad.net/bugs/1415961>
<mup> Bug #1416928 changed: juju agent using lxcbr0 address as apiaddress instead of juju-br0 breaks agents <api> <lxc> <network> <juju-core:Fix Released by dooferlad> <juju-core 1.21:Fix Released by dimitern> <juju-core 1.22:Fix Released by dooferlad> <https://launchpad.net/bugs/1416928>
<mup> Bug #1417594 changed: failure to retrieve the template to clone: lxc container with 1.22 beta2 <lxc> <oil> <juju-core:Fix Released by wallyworld> <juju-core 1.22:Fix Released by wallyworld> <https://launchpad.net/bugs/1417594>
<mup> Bug #1420049 changed: ppc64el - jujud: Syntax error: word unexpected (expecting ")") <deploy> <openstack> <regression> <uosci> <juju-core:Fix Released by axwalk> <juju-core 1.22:Fix Released by axwalk> <https://launchpad.net/bugs/1420049>
<mup> Bug #1423036 changed: precise services cannot be deployed (again) <ci> <precise> <regression> <ubuntu-cloud-archive:Fix Released> <juju-core:Fix Released by wallyworld> <juju-core 1.22:Fix Released by wallyworld> <https://launchpad.net/bugs/1423036>
<mup> Bug #1425435 changed: juju-deployer/jujuclient incompatibility with 1.21.3 <api> <cts> <network> <oil> <openstack> <orange-box> <regression> <uosci> <juju-core:Fix Released by themue> <juju-core 1.22:Fix Released by themue> <https://launchpad.net/bugs/1425435>
<marcoceppi> sinzui: can we get this elevated to a critical? lp1424069
<sinzui> marcoceppi, I can raise it in the release meeting in 2 hours
<marcoceppi> sinzui: thanks, it's a regression still present in trunk
<marcoceppi> tested again as of Friday
<marcoceppi> by aisrael
<sinzui> marcoceppi, I added the tag to emphasize the regression
<marcoceppi> sinzui: I was under the impression that critical severity level was to mark regressions, but I don't have a firm understanding of how the triaging works
<sinzui> marcoceppi, it is but Core have negotiated an ambiguous High to NOT fix regressions immediately to ensure they can keep merging
<marcoceppi> sinzui: that sounds lame to me ;)
<sinzui> marcoceppi, critical means stop, fix this now. I think your issue is critical, but I need management to agree
<marcoceppi> sinzui: ack, thanks for the clarification
<marcoceppi> arosales: ^^
<arosales> well it should be community input, but I would say a regression of this nature that could leave a user without a way to fix a issue is a big deal
<arosales> marcoceppi:  sinzui ^
<sinzui> arosales, no one disagrees about fixing it. the issue is in trunk, not a release. Core will fix it for a release
<sinzui> arosales, and as a stakeholder, you have every right to say fix now
<arosales> marcoceppi: aisrael: can you guys sync up with sinzui and folks here on the nature of the problem and when a fix is needed, perhaps the next release going out is sufficient
<sinzui> arosales, marcoceppi aisrael: as you are running master, you get the fix as soon as it is merged.
<sinzui> arosales, marcoceppi aisrael: as Core are focused on releasing 1.22.0 and stabilising 1.23 for a beta release, your bug will be fixed soon, core is looking for an engineer now
<aisrael> sinzui: arosales: marcoceppi: ack, I'll re-test as soon as I see the merge hit.
<marcoceppi> sinzui: I think this exists in 1.23, is there an alpha out?
<sinzui> marcoceppi, no. no releases until we cut 1.22.0
<marcoceppi> sinzui: ack
<arosales> sinzui: thanks for the help
<alexisb> katco, perrito666, cherylj, ericsnow, any of you have spare cycles (its ok to say no, I am just looking for available folks)?
<ericsnow> alexisb: probably not today
<katco> alexisb: i'm in the middle of trying to push something up, but if it's important you can intervene :)
<perrito666> wow, that took my brain a lot to process you were not asking me for a byke
<perrito666> bike
<cherylj> alexisb: I have some time.
<perrito666> alexisb: sorry, nope
<ericsnow> alexisb: or until vivid/systemd settles down :)
<alexisb> cherylj, are you comfortable looking at a regression bug?
<cherylj> alexisb: I can try!
<alexisb> maybe if katco can provide some backup
<alexisb> ok can you please take a look at 1424069
<cherylj> alexisb: looking...
<katco> https://bugs.launchpad.net/juju-core/+bug/1424069
<mup> Bug #1424069: juju resolve doesn't recognize error state <regression> <resolved> <juju-core:Triaged> <https://launchpad.net/bugs/1424069>
<alexisb> perrito666, heh
<alexisb> perrito666, i will make sure to use lots of common American metaphors for you from now on
<perrito666> alexisb: that is how you get a bike in the mail
<perrito666> be warned
<alexisb> thumper, just so you know I volunteered cherylj for a bug that will soon be marked as critical
<alexisb> she can fill you in on the details
<alexisb> o and thumper happy tuesday :)
<perrito666> that was evil
<TheMue> anyone free to review http://reviews.vapour.ws/r/1106/, it's the fix for #1428430
<mup> Bug #1428430: AllWatcher does not remove last closed port for a unit, last removed service config <api> <juju-core:Triaged by themue> <https://launchpad.net/bugs/1428430>
<menn0> katco: ping
<katco> menn0: hey there menn0!
<menn0> katco: hey hye
<menn0> hey even
<menn0> katco: i'm probably going to create a feature test soon and wanted to ask you some questions
<katco> menn0: awesome!
<menn0> katco: looking at the leadership feature test i noticed that the setup does a lot of work that seems to double up on what AgentSuite (embedded by the feature test's suite) can do
<menn0> katco: in terms of setting up the machine agent
<menn0> katco: i'm guessing there's some parts that didn't work the way you needed them to. do you remember what?
<katco> menn0: right; that is a bit of an anachronism
<menn0> katco: i'm hoping to short cut any lesson you learned
<katco> menn0: i believe it was more that eventually we were wanting to obviate the agent tests' full-stack testing
<katco> menn0: iirc, the agent suite was just doing way more than the leadership functionality needed
<menn0> katco: ok so, it was more a matter of starting afresh for the feature tests?
<katco> menn0: yeah
<katco> menn0: i.e. make the feature tests as linear and clean as possible
<katco> menn0: no chain of suites
<menn0> katco: cool. I just wanted to be sure I wasn't missing something
<katco> menn0: nothing that at least i can remember :)
<menn0> katco: I think my feature test (for logging to the db) will require a similar setup to the leadership feature test
<menn0> katco: you're not against sharing test setup within the featuretest package right?
<katco> menn0: personally? i don't like that pattern. it makes the tests very brittle and hard to understand
<katco> menn0: but i think that's probably up to the developer
<katco> menn0: i am a big fan of very dumb, linear, tons of code tests
<menn0> katco: i see where you're coming from but part of me doesn't like copying and pasting all that code to set up the machine and machine agent
<katco> menn0: because when i'm reading a test, it's telling me a story, and i don't want to learn a 2nd way to interact with juju
<katco> menn0: right, that's the inverse argument
<katco> menn0: at the very least, if we wanted to de-dupe code
<menn0> katco: yeah I get that. especially when there's lots of alternate ways of interacting with juju in different test suites
<katco> menn0: i would say pull functionality out into functions, and call the same functions from multiple suites
<katco> menn0: and then you don't have to chain suites
<menn0> menn0: I like that approach. easier to understand. I've been doing that a bit recently in some apiserver tests, although there it's a mix of helper funcs and shared suites.
<menn0> katco: ^^^ (I didn't mean to talk to myself :))
 * menn0 needs more coffee
<katco> menn0: haha :)
<menn0> katco: well i'll see how the db logging feature tests pan out. i'll aim to keep what I do in line with the existing approach.
<katco> menn0: so that's my take on things. you are awesome, and your code will be as such :)
<menn0> katco: thanks but that's not guaranteed :)
<menn0> katco: thanks for the chat. very helpful.
<katco> menn0: happy to help!
<katco> for anyone working with restful services + wadl files in go: https://github.com/katco-/wadl2go
 * katco away for a bit
<alexisb> cherylj, can you please assign yourself to lp 1424069?
<voidspace> right, way past EOD
<voidspace> g'night all
<cherylj> alexisb: sure
<DaveJ__> Hi - can anyone tell me if JuJu supports CentOS/RedHat guests yet?  I looked about a year ago and it didn't appear ready yet, but can't find any updated info
<bogdanteleaga> DaveJ_: not yet, but we're working on it right now
<DaveJ__> bogdantelega: Thanks - is there any kind of ETA for that feature? Or is there even a way I can build the packages and try it out myself?
<DaveJ__> Our product currently runs on CentOS, and I'd really like to try and deploy it as set of charms
<bogdanteleaga> DaveJ_: well currently we're working on the userdata part: the codebase only supports apt for linux packages and yum is needed on centos
<bogdanteleaga> DaveJ_: we want to finish it by the end of the month
<bogdanteleaga> DaveJ_: there might still be some broken functionality at that point but it should be usable to some extent
<DaveJ__> bogdanteleaga: No worries, I'm not afraid to get into the code and work around any issues.
<DaveJ__> Even if I can get a basic prototype working
<DaveJ__> bogdanteleaga:  Anywhere I can track the progress?  End of the month doesn't seem that bad
<bogdanteleaga> DaveJ_: look for centOS related pull requests on github
<bogdanteleaga> DaveJ_: https://github.com/juju/juju/pull/1759
<bogdanteleaga> DaveJ_: this is a start, but there's more to be done
<DaveJ__> bogdanteleaga: Understood. Thanks for the info
<mup> Bug #1430049 was opened: unit "ceph/0" is not assigned to a machine when deploying with juju 1.22-beta5 <juju-core:New> <https://launchpad.net/bugs/1430049>
<davecheney> hr.canonical.com is so stupid
<davecheney> it just let me enter a personal objective "gain the power of flight"
<davecheney> and back date it to the day i was born
<katco> wallyworld_: anastasiamac: axw: i think i keep freezing
<axw> katco: last thing I heard from you was "our dogs"
<wallyworld_> katco: sorry, we hung up on you
<axw> or maybe "out dogs"
<katco> "out dogs" ;)
<wallyworld_> lol
<anastasiamac> :D
<davecheney> what's out dog ?
<katco> davecheney: it's kind of like up dog?
<axw> out, dogs
<anastasiamac> who let the dogs out
<davecheney> ... wait a second !
<katco> ok off to dinner... tc all!
<axw> enjoy
<anastasiamac> menn0: thumper: I was reviewing http://reviews.vapour.ws/r/1107/ but it was stamped and merged under me :D u r fast...
<thumper> :)
<thumper> sorry
<thumper> davecheney: heh
<cherylj> perrito666: are you still around?
<cherylj> Is there anyone else around who worked on the Unit Agent stuff?
<anastasiamac> cherylj: maybe just ask a question - m sure that someone may be able to help :D
<wallyworld_> cherylj: what did you want to know about unit agent?
#juju-dev 2015-03-10
<perrito666> cherylj: back
<menn0> anastasiamac: i'll address those review comments and get those merged into the feature branch separately. thanks.
<anastasiamac> menn0: really? thnx :D
<cherylj> hi perrito666, I'm looking at bug 1424069 where juju resolved fails with the error "ERROR unit "ubuntu/0" is not in an error state"
<mup> Bug #1424069: juju resolve doesn't recognize error state <regression> <resolved> <juju-core:Triaged by cherylj> <https://launchpad.net/bugs/1424069>
<cherylj> and it looks like what is happening, is that when the install hook fails, the Unit Agent status gets set to error
<cherylj> but when we try to run juju resolved, it looks at the state of the Unit, not the UnitAgent
<perrito666> cherylj: I effed that up, then.
<perrito666> cherylj: it should in all cases use unitAgent
<perrito666> unit should not be used yet
<perrito666> cherylj: that particular behavior seems to be missing a test or it would have failed when I made the change
<cherylj> perrito666: did you want to take this bug?  I can help out if you tell me what needs to happen, if you're running low on time :)
<perrito666> cherylj: well it depends, is this already critical?
<cherylj> yes
<perrito666> how fun
<perrito666> cherylj: what time is it for you?
<wallyworld_> we have a couple of days
<wallyworld_> 1.23 is not going to be released until thursday most likely
<cherylj> perrito666:  7PM
<perrito666> cherylj: well its unfair that you are working so late because of me
<perrito666> assign it to me
<cherylj> perrito666: well it sounds like we have a little bit of time
<perrito666> wallyworld_: yes, but this bug is most likely blocking CI
<cherylj> perrito666: I do need to put my daughter to bed now, but I can still help out once she's down.
<wallyworld_> oh, nothing is in the topic
<perrito666> cherylj: could you point me to where you found the general error happening? I'll grow some decent tests and fix it
<perrito666> wallyworld_: it's critical, I assume it is blocking
<cherylj> there's a testcase in the bug
<cherylj> which I was able to reproduce easily on my local system
<wallyworld_> perrito666: yeah, marked as critical regression, but topic not updated
<perrito666> wallyworld_: I believe that is done by hand
<wallyworld_> perrito666: it's late for you too, i can pick this up
<cherylj> perrito666: I added some logging and found that before the regression, upon the install hook failure, the state of the unit was set to StatusError, so juju resolved worked fine
<cherylj> The current behavior is that upon the install hook failure, the UnitAgent status is updated to StatusError (and nothing ever changes with the status of the Unit)
<perrito666> cherylj: that is expected behavior
<cherylj> But the juju resolved command still gets the status of the Unit to determine whether or not the unit is in an error state
<perrito666> the issue seems to be the code checking if unit is broken
<cherylj> perrito666: ok
<perrito666> cherylj: tx a lot
<perrito666> Ill go have dinner and fix it afterwards
<cherylj> perrito666: np, let me know if there's anything else I can help with
<cherylj> I'll bbl too
<wallyworld_> perrito666: i've taken the bug, fixing now
<perrito666> wallyworld_: aren't you a nice person?
<wallyworld_> perrito666: sometimes :-) it's fixed, including a test, but i'm just looking at any other tests that may be needed
<perrito666> wallyworld_: well, that test is going to be very useful on the change that I am practicing now
<perrito666> you know the worse part? I need to change that back to unit
<wallyworld_> sigh
<wallyworld_> axw: a small one to fix blocker http://reviews.vapour.ws/r/1109/
<axw> looking
<axw> wallyworld_: seems odd that we'd be checking the agent status, rather than the unit status... what's the distinction meant to be?
<wallyworld_> axw: agent status is what we check now before any health related work
<wallyworld_> moving forward that will change
<axw> wallyworld_: I see. can you please add a comment to that effect?
<wallyworld_> sure
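[editor's note: the fix being discussed — `juju resolved` must check the unit *agent's* status, where hook failures are recorded, rather than the unit's — can be sketched roughly as below. `unit`, `resolveUnit`, and the `Status` values are simplified stand-ins, not juju's actual state API.]

```go
package main

import "fmt"

// Status loosely mirrors juju's status values; only Error matters here.
type Status string

const (
	StatusError Status = "error"
	StatusIdle  Status = "idle"
)

// unit is a hypothetical stand-in carrying the unit agent's status,
// which is where a failed install hook gets recorded.
type unit struct {
	name        string
	agentStatus Status
}

// resolveUnit only succeeds when the *agent* reports an error state;
// checking the unit's own status was the regression in bug 1424069.
func resolveUnit(u *unit) error {
	if u.agentStatus != StatusError {
		return fmt.Errorf("unit %q is not in an error state", u.name)
	}
	u.agentStatus = StatusIdle
	return nil
}

func main() {
	u := &unit{name: "ubuntu/0", agentStatus: StatusError}
	if err := resolveUnit(u); err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	fmt.Println("resolved", u.name)
}
```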
<ericsnow> could I get another pair of eyes on http://reviews.vapour.ws/r/1102/?
<axw> anastasiamac wallyworld_: this one is ready for review now - http://reviews.vapour.ws/r/993/
<wallyworld_> ok
<axw> wallyworld_: I ended up taking out all of the implementation of the storage watching code in this branch, because it was ~2000 LOC all up :)
<axw> now about 600
<wallyworld_> sure, i may duck out to get lunch and look after
<axw> nps
<wallyworld_> thumper: why do we have the agents starting in debug mode prior to determining what logging level to use?
<thumper> histerical raisins
<thumper> wallyworld_: we didn't want to miss important bits at startup
<wallyworld_> thumper: thought so. but that means we leak credentials :-(
<wallyworld_> we can log apiserver request/response at trace level, but will also likely need to look to suppress cloud init logging etc
<thumper> wallyworld_: here's an idea
<thumper> when we write out the service script, we look at the current log level for the juju module
<thumper> if it is set to DEBUG or below, we start with that
<thumper> otherwise start with INFO
<thumper> what this means though
<thumper> is that some agents may start with info and some with debug if changes are made during deployment
<thumper> the original idea was to have a defined, known starting point that was the same everywhere
<thumper> there is no technical reason why it can't be different
<wallyworld_> i agree with that - i just wish we didn't include credentials
<thumper> the original design went for consistency
<thumper> we are outputting everything
<wallyworld_> the issue is that even if we start with info and then change to debug, we'll still leak
<thumper> yes
<wallyworld_> so messing with logging levels is only a band-aid which is not that effective
<thumper> correct
<wallyworld_> sigh
<wallyworld_> seems like it needs to be done properly
<thumper> so... alternative options
<thumper> don't write out credentials in the log
<davecheney> 15:10 < thumper> don't write out credentials in the log
<davecheney> ^ yes, that
<thumper> however, since we log all api traffic at debug
<thumper> we need to do one of two things:
<wallyworld_> we need to be able to strip credentials but easier said than done
<thumper> be able to identify the values that are creds
<thumper> or not write out all the traffic
<wallyworld_> agreed, latter is flawed because sometimes you need all traffic
<wallyworld_> without leaking
<thumper> for debugging yes, for normal users, no
<thumper> I don't think it is feasible to say "we'll log everything except credentials"
<wallyworld_> well, users need it even if they don't know it
<thumper> it doesn't make sense
<wallyworld_> since users will upload all-machines.log and then expect us to fix their problem
<wallyworld_> or we need to turn on debug logging
<wallyworld_> so we do need to allow for verbose logging minus secrets
<thumper> yep, that's hard
<wallyworld_> for now need a quick fix - can log api server requests at trace level
<wallyworld_> and look at not dumping cloud init script at startup unless also at trace
<wallyworld_> well, can log api server request metadata at debug
<wallyworld_> but not contents
<thumper> seems like a reasonable approach
<wallyworld_> i'll do that
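[editor's note: the quick fix wallyworld settles on — log only request metadata at DEBUG and defer the full body, which may embed secrets, to TRACE — might look roughly like this. The `logRequest` helper and level names are illustrative, not juju's actual loggo API.]

```go
package main

import "fmt"

// Level is a minimal stand-in for juju's log levels.
type Level int

const (
	INFO Level = iota
	DEBUG
	TRACE
)

// logRequest sketches the approach: at DEBUG, emit only metadata (method
// name and payload size, which cannot leak credentials); the raw body is
// only emitted once TRACE is enabled.
func logRequest(level Level, method, body string) []string {
	var lines []string
	if level >= DEBUG {
		lines = append(lines, fmt.Sprintf("DEBUG <- %s (%d bytes)", method, len(body)))
	}
	if level >= TRACE {
		lines = append(lines, fmt.Sprintf("TRACE <- %s body=%s", method, body))
	}
	return lines
}

func main() {
	// At DEBUG, a login request's password never reaches the log.
	for _, l := range logRequest(DEBUG, "Admin.Login", `{"password":"s3cret"}`) {
		fmt.Println(l)
	}
}
```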
<wallyworld_> axw: rb not updating; this one fixes a 1.23 release issue with leaking creds https://github.com/juju/juju/pull/1782
<axw> looking
<axw> wallyworld_: done
<wallyworld_> ty
<wallyworld_> axw: i'm off for a bit to go see Wicked, won't be able to look at your latest PRs till later tonight after I get back, or maybe tomorrow morning
<axw> wallyworld_: no worries. enjoy :)
<axw> no great rush, I've got things to carry on with
<wallyworld_> axw: will do, if you are bored, there's also the storage provisioner one http://reviews.vapour.ws/r/1108/
<axw> wallyworld_: sure, I'll finish off this branch then take a look
<wallyworld_> no hurry of course :-)
<anastasiamac> axw: wallyworld_: i mite be able to look at some of these too :D
<axw> anastasiamac: great, thanks
<wallyworld_> great
<mattyw> morning all
<Muntaner> hello devs
<Muntaner> I'm trying to use the load balancer charm HAProxy for my charm. I set up everything, but when I try to access to the public IP of the load balancer, I get a 503 Service Unavailable. How can I diagnose my situation?
<davecheney> rogpeppe3: oh pooh, you left
<rogpeppe3> davecheney: no i didn't
<mup> Bug #1430205 was opened: lxc template needs refreshing every 24 hours <juju-core:New> <https://launchpad.net/bugs/1430205>
<dooferlad> morning! o/
<dimitern> dooferlad, o/
<dimitern> dooferlad, I've been reading your kvm notes from yesterday
<dimitern> dooferlad, how did those reject rules appear in iptables? from libvirt?
<dooferlad> dimitern: I have no idea! I haven't tracked it down yet.
<dimitern> dooferlad, also, have you checked the nat table? there should be an SNAT rule
<TheMue> morning o/
<dimitern> TheMue, o/
<dooferlad> TheMue: o/
<dooferlad> dimitern: bother, I didn't save those :-( I will get a duplicate environment up and get back to you.
<dimitern> dooferlad, cheers
<mup> Bug #1430220 was opened: Swift container (but not objects) deleted, bootstrap and destroy-environment fail <juju-core:New> <https://launchpad.net/bugs/1430220>
<coreycb> TheMue, do you know the syntax for specifying an action-get key?
<TheMue> coreycb: sorry, not directly. would have to look too
 * TheMue is looking
<coreycb> TheMue, ok np. figured I'd ask you since the other guys are not likely online.
<coreycb> I'd tried a few ways based on https://jujucharms.com/docs/authors-charm-actions#action-get but wasn't getting anything back
<TheMue> coreycb: yes, it's a bit early
<TheMue> coreycb: the example shows a nested variable name. "outfile" and "name" are parts of it. if your "foo" is on the top level it's only "action-get foo".
<coreycb> TheMue, ok thanks.  yeah that example's a bit odd since 'name' is never defined as far as I can see.  as an aside, I'm running with an experimental version of juju that has leadership elections so I'm going to try on 1.22-beta5.
<TheMue> coreycb: indeed, it doesn't match the actions.yaml shown above. "compression.type" would be a better example.
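[editor's note: the nested `action-get compression.type` lookup TheMue describes could be resolved roughly as below — split the key on dots and walk nested maps. `getDotted` is an illustrative helper, not juju's actual implementation of the hook tool.]

```go
package main

import (
	"fmt"
	"strings"
)

// getDotted resolves a dotted key like "compression.type" against nested
// action params, returning false if any path segment is missing or the
// intermediate value is not a map.
func getDotted(params map[string]interface{}, key string) (interface{}, bool) {
	cur := interface{}(params)
	for _, part := range strings.Split(key, ".") {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return nil, false
		}
		if cur, ok = m[part]; !ok {
			return nil, false
		}
	}
	return cur, true
}

func main() {
	params := map[string]interface{}{
		"compression": map[string]interface{}{"type": "gzip"},
	}
	// A top-level key would simply be `action-get compression`.
	if v, ok := getDotted(params, "compression.type"); ok {
		fmt.Println(v)
	}
}
```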
<mup> Bug #1430225 was opened: juju br0/juju-br0 does not observe dhcp mtu settings <juju-core:New> <https://launchpad.net/bugs/1430225>
<dimitern> TheMue, standup?
<TheMue> dimitern: yes, trying to connect. somehow my browser dislikes me today :(
<TheMue> dimitern: aargh, not only the browser, all new connections. will reboot now
<mattyw> anastasiamac, excellent questions on your review, thanks very much
<mup> Bug #1430245 was opened: Transient error with juju-unit-agent on windows hosts <windows> <juju-core:New> <https://launchpad.net/bugs/1430245>
<coreycb> TheMue, ok so I found the issue
<TheMue> coreycb: what has it been?
<jam> dimitern: ping
<dimitern> jam, pong
<jam> I'm running into an intermittent failure, that we've tracked down to maybe a network issue
<jam> specifically in master
<coreycb> I was using 'params:' instead of 'properties:' in actions.yaml.  params doesn't seem to work but properties does.  so maybe the doc just needs an update, not sure.
<coreycb> TheMue, ^
<jam> if you run: cd worker/uniter/filter
<jam> $ time go test -c && for i in `seq 10`; do ./filter.test -gocheck.v -gocheck.f ConfigEvents & sleep 0.01; done
<jam> that runs the one test 10 times in parallel
<jam> it seems we are getting an extra AddressChanged event
<jam> and when you touch the test to remove the line where it sets the address
<jam> it starts passing consistently, when we shouldn't be getting *any* events since the address wouldn't have been set
<dimitern> jam, I'll have a look
<dimitern> jam, which test is failing?
<jam> -gocheck.f ConfigEvent
<jam> TestConfigEvents and TestInitialAddressEventIgnored both suffer from this
<TheMue> coreycb: as I understand it "params" is for the definition parameters (top level), "properties" for their potential additional properties
<dimitern> jam, ok, thanks, will let you know if I can reproduce it locally
<coreycb> TheMue, that's what the doc seems to say too, but it doesn't appear to be the case
<jam> dimitern: you might need to bump up "10" 1-in-10 fails for me here
<TheMue> coreycb: oh, have to talk to bodie and jw4 about it. could you pastebin your actions.yaml?
<dimitern> jam, so far 2-in-20 failed
<dimitern> 3 even
<dimitern> now 4
<coreycb> TheMue, very basic, this works but if you swap params for properties it didn't work for me  --  http://pastebin.ubuntu.com/
<jam> dimitern: right, so often enough that we have a clear problem :)
<coreycb> sorry, http://pastebin.ubuntu.com/10573902/
<dimitern> jam, it seems to fail twice as frequently here - always with github.com/juju/juju/testing/channel.go:63 - unexpected receive
<dimitern> jam, is that the same issue you're seeing?
<jam> dimitern: oh, I'm not saying the count, just that it fails regularly. 10 was enough that every attempt failed for me
<jam> dimitern: but UnexpectedReceive is the failure
<TheMue> coreycb: ok, thanks. will discuss it with bodie and jw4 and come back to you later
<coreycb> thanks
<dimitern> jam, ok, why do you think it's a network issue?
<jam> dimitern: if you comment out the line in the test SetNetworkAddress, the test *passes*, when the test expects nothing to set the address
<jam> since it didn't do it
<jam> dimitern: the test is exercising the logic that just setting address or just setting charm url doesn't generate an event until the other one occurs
<jam> dimitern: now this is a test running in JujuConnSuite, and the SetUpTest is using s.unit.AssignToNewMachine()
<jam> so I don't know what that gets running in the background
<jam> but apparently we're assigning addresses to the machine being tested
<jam> and if it happens at exactly the right moment
<dimitern> jam, well, commenting out SetAddresses before starting the watcher papers over the problem I think
<jam> we end up with 2 changed events, and the test fails
<jam> dimitern: the point is that something in JujuConnSuite is forcing addresses for machines, and the test assumes it isn't, I don't *directly* care about what the fix is, it is just an intermittent test that blocked me landing code because I thought I broke something here
<perrito666> morning
<TheMue> perrito666: o/
<dimitern> jam, right, will dig deeper into it
<jam> certainly some sort of "don't touch me" flag here would be nice
<jam> I can understand that for most tests maybe we want to set an address because we have to have something to work with
<jam> and this particular test was written when nothing was assigned
<dimitern> jam, I guess removing setAddresses from there should do the trick
<jam> dimitern: so just commenting out SetAddress in the test breaks what the test actually cares about testing
<jam> (that we don't fire events until both address and charm url are set)
<jam> dimitern: I guess ideally we'd be more decoupled, but it is written using JujuConnSuite, and I don't know we want to rework all of that
<jam> dimitern: Just know that TestInitialAddressEventIgnored also runs into this problem
<jam> dimitern: https://bugs.launchpad.net/juju-core/+bug/1426394
<mup> Bug #1426394: TestConfigEvents random failure <ci> <intermittent-failure> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1426394>
<jam> so mgz is seeing it in CI
<dimitern> jam, I think the problem is s.machine in that suite is machine 0, which shouldn't be
<anastasiamac> mattyw: :D np - m enjoying reading the code! yvm for ur PR :D
<mattyw> anastasiamac, I just had one question emailed you about it
<jam> dimitern: I don't think the test cares, so if you want to have SetUpSuite create 2 machines, I'm ok with that. Though definitely comment why
<anastasiamac> mattyw: is this the comment about returning a valid data when error occurs?
<mattyw> anastasiamac, that's the one, I wasn't sure if you were just commenting on expecting action
<anastasiamac> mattyw: i liked the 3 part return :D
<anastasiamac> mattyw: but in every similar piece of code i've seen, when an error is returned
<anastasiamac> mattyw: all other parts are either nil or the most basic "empty" default
<anastasiamac> mattyw: whereas you were returning a MeterNotAvailable value...
<anastasiamac> mattyw: all i was saying that it jumped at me :)
<mattyw> anastasiamac, ah I see
<anastasiamac> mattyw: :D
 * TheMue steps out for a moment
<mup> Bug #1430225 changed: juju br0/juju-br0 does not observe dhcp mtu settings <cts> <juju-core:New> <https://launchpad.net/bugs/1430225>
<jam> anyone else upgraded to go 1.4?
<jam> go fmt stopped working if you have a symlink in your PWD
<jam> and it changed how imports are sorted
<jam> :(
<dimitern> jam, I found the issue
<jam> dimitern: yay
<dimitern> jam, as usual in these cases a 1 line change :)
<jam> dimitern: all about finding that line and knowing it won't break other expectations
<mgz> dimitern: oo, go on, what's the change?
<dimitern> jam, can you try it to see if you can still reproduce it? just add seenConfigChange = false after f.outConfig = nil on line 470
<jam> dimitern: that sounds like a very bad change
<jam> it means that future address changes won't be notified until config changes again
<dimitern> jam, I've successfully run both TestConfigEvents and TestInitialAddressEventIgnored multiple times with seq 50
<dimitern> jam, well, the config watcher is getting restarted on setCharm
<jam> dimitern: so the code intends to not send the *first* notification until it sees both Config and Address changes, but future changes to either should always trigger a change.
<dimitern> jam, and that's not accounted for in the maybePrepareConfigEvent
<dimitern> jam, not quite
<dimitern> jam, if the config watcher was restarted, we *always* get the initial empty event
<jam> dimitern: but if we just did setCharm is it actually wrong to say the config changed?
<dimitern> jam, well another disturbing thing is the configChanges chan gets reset to the newly started configw.Changes() after it gets started
<dimitern> jam, (again in setCharm event handling), so there's a possibility of a race reading from the old configChanges while configw is getting restarted
<dimitern> jam, this might be the other missing piece, as running all tests 50 times in parallel I only got 3 failures - now trying with that additional fix - adding configChanges = nil; seenConfigChange = false just after "changing charm to %q"
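[editor's note: the semantics jam spells out — no config event until both a config change and an address change have been seen, with the seen-flag reset when the config watcher is restarted on setCharm (since the new watcher re-sends an initial event) — can be sketched as below. `filter` and its fields are simplified stand-ins for the uniter filter, not its real structure.]

```go
package main

import "fmt"

// filter gates the first outgoing event on having seen both inputs.
type filter struct {
	seenConfig  bool
	seenAddress bool
	out         []string
}

func (f *filter) configChanged()  { f.seenConfig = true; f.maybeSend() }
func (f *filter) addressChanged() { f.seenAddress = true; f.maybeSend() }

func (f *filter) maybeSend() {
	if f.seenConfig && f.seenAddress {
		f.out = append(f.out, "config-changed")
	}
}

// restartConfigWatcher models dimitern's one-line fix: forget the old
// watcher's initial event, because the restarted watcher always sends
// a fresh one that must not double-fire.
func (f *filter) restartConfigWatcher() { f.seenConfig = false }

func main() {
	f := &filter{}
	f.configChanged() // nothing yet: address unseen
	f.addressChanged()
	fmt.Println(len(f.out), "event(s) sent")
}
```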
 * dimitern 's machine is visibly lagging when running 50 instances of the filter tests in parallel
<jam> dimitern: well I believe that is also running 50 mongod instances in parallel
<dimitern> yeah :)
<dimitern> jam, ok, so only 2 failures now - both in TestGetMeterStatus
<dimitern> jam, correction - TestMeterStatusEvents
<dimitern> jam, with seq 20 I can't reproduce it, and since my machine was under quite a load and I can see in the log just before "timeout waiting for receive" that the meter status was indeed sent
<dimitern> jam, I'd attribute these failures to heavy lagging under load
<jam> dimitern: if it is just ShortWait sort of timeout, that would be ok
<jam> depends on the failure
<dimitern> jam, it is a LongWait I think, but I'm running the tests a few more times
<dimitern> jam, ok, now 1 failure out of 50 for all tests - this time TestActionEvents and it *is* a ShortWait
<jam> dimitern: so the change sounds good given the testing we have in place, but I wouldn't be surprised if our testing wasn't actually as complete as we would like. So this is something that we'd want to think through well. Possibly running past fwereade
<dimitern> jam, ok, so far we've established I believe it's not a networking issue but a watcher/race sort of thing; I'll propose a fix with those few lines added and provide a way to test it using your snippet
<dimitern> jam, I'd
<dimitern> jam, (really bad lag!)
<dimitern> jam, I'd ask you to retry reproducing it with the fix
<jam> dimitern: I'm certainly willing to try it. My concern isn't that it doesn't fix what I saw, but if it breaks other expectations
<dimitern> jam, agreed, I'll ask fwereade to have a look as well
<alexisb> all, we have a critical bug blocking 1.22, I need some volunteers:
<alexisb> https://bugs.launchpad.net/juju-core/+bug/1430049
<mup> Bug #1430049: unit "ceph/0" is not assigned to a machine when deploying with juju 1.22-beta5 <oil> <juju-core:Triaged> <https://launchpad.net/bugs/1430049>
<alexisb> wallyworld_, has done initial investigation but it will require some f2f time with jhobbs in US hours
<dimitern> dooferlad, TheMue, voidspace, guys, I'd appreciate if some of you could join the maas+juju interlock in 2 minutes
<mgz> gsamfira: bug 1430340
<mup> Bug #1430340: Failing to create tempdir in tests on windows <test-failure> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1430340>
<gsamfira> mgz: thanks :D
<gsamfira> grabbing a coffee and looking
<dooferlad> dimitern: on it
<dimitern> dooferlad, cheers!
<voidspace> dimitern: yep, just grabbing coffee
<dimitern> voidspace, ta!
<mup> Bug #1430340 was opened: Failing to create tempdir in tests on windows <test-failure> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1430340>
<mgz> gsamfira: for interest, the fix I'd made previously was https://github.com/juju/testing/pull/52 for IsolationSuite usage in juju, making sure we actually passed all the windows variables through to subprocesses regardless of case
<gsamfira> cool! Windows does not really care about case, but I remember doing a case insensitive match when I pushed this. Might be mistaken though
<gsamfira> be roght back
<gsamfira> *right
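[editor's note: the IsolationSuite fix mgz links (passing Windows environment variables through to subprocesses regardless of case) amounts to something like the sketch below; `keepEnv` is an illustrative helper, not the juju/testing API.]

```go
package main

import (
	"fmt"
	"strings"
)

// keepEnv keeps only the listed variables when building a subprocess
// environment, matching names case-insensitively: on Windows, PATH and
// Path are the same variable, so a case-sensitive match would drop it.
func keepEnv(env, keep []string) []string {
	want := make(map[string]bool)
	for _, k := range keep {
		want[strings.ToUpper(k)] = true
	}
	var out []string
	for _, kv := range env {
		name := strings.SplitN(kv, "=", 2)[0]
		if want[strings.ToUpper(name)] {
			out = append(out, kv)
		}
	}
	return out
}

func main() {
	env := []string{"Path=C:\\Windows", "TEMP=C:\\Temp", "SECRET=x"}
	fmt.Println(keepEnv(env, []string{"PATH", "temp"}))
}
```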
<wwitzel3> perrito666: ping
<perrito666> wwitzel3: pong
<perrito666> wasnt me
<perrito666> wwitzel3: ahh dst
<wwitzel3> perrito666: standup
<perrito666> wwitzel3: yup sorry, I keep thinking it's in one hour
<mup> Bug #1430340 changed: Failing to create tempdir in tests on windows <test-failure> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1430340>
<mup> Bug #1430340 was opened: Failing to create tempdir in tests on windows <test-failure> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1430340>
<axw> fwereade: if you have any time today, I'd appreciate a review of the storage hook source: http://reviews.vapour.ws/r/1113/
<gsamfira> mgz: running tests now
<mgz> gsamfira: ace
<gsamfira> mgz: http://paste.ubuntu.com/10574918/ :)
<gsamfira> going to fix tests today. We need that CI up as soon as humanly possible :D
<gsamfira> recloned master just to make sure
<gsamfira> yup, my bad. unclean env on my machine. Apologies
<mgz> yeah, wondered if it was a dep issue
<gsamfira> testing again
<axw> can someone please review https://github.com/juju/juju/pull/1789 - fixes the critical blocker in 1.22
<dimitern> axw, LGTM
<axw> dimitern: thanks
<gsamfira> mgz: I mentioned this before, and I don't know if its possible, but using WinRM to run the tests might avoid environment issues you get with OpenSSH. You can still use OpenSSH to scp files, just don't add it to the environment
<gsamfira> mgz: there is a python script that can be used for this, and it can be run from any Linux box
<gsamfira> mgz: https://github.com/cloudbase/pywinrm/blob/master/wsmancmd.py
<gsamfira> mgz: we use this in the Hyper-V OpenStack CI to run the tempest tests
<mgz> gsamfira: I think the ssh stuff is pretty clean - I was worried we were polluting with cygwin junk but turned out not to be the case
<dimitern> TheMue, please update the status on bug 1428430
<mup> Bug #1428430: AllWatcher does not remove last closed port for a unit, last removed service config <api> <juju-core:Triaged by themue> <https://launchpad.net/bugs/1428430>
<TheMue> dimitern: will do
<mup> Bug #1403955 was opened: DHCP's "Option interface-mtu 9000" is being ignored on bridge interface br0 <cts> <juju-core:New> <isc-dhcp (Ubuntu):Confirmed> <https://launchpad.net/bugs/1403955>
<axw> dimitern: I'm off to bed, would you mind keeping an eye on that merge and restart it if needed?
<dimitern> axw, sure, np
<voidspace> dimitern: so only the State can run queries (like "find all IP addresses for this machine ID"
<voidspace> dimitern: because the collection names are private
<voidspace> dimitern: so a new state method to find them all
<voidspace> dimitern: or I could just have a state method to remove them all and move all that code
<dimitern> voidspace, yeah, that sounds good
<voidspace> dimitern: (it only iterates over the returned collection and calls addr.Remove)
<voidspace> dimitern: a state method to fetch them all, or a state method to remove them all - which do you think?
<voidspace> dimitern: I think maybe just a method to find them all
<dimitern> voidspace, I think the latter is better
<voidspace> hah
<voidspace> dimitern: ok, maybe
<voidspace> it's what we specifically need I guess
<voidspace> fairy 'nuff
<dimitern> voidspace, if we need to find them we'll add another method
<voidspace> let a thousand methods bloom
<dimitern> voidspace, :)
<dimitern> voidspace, however, let me check a thing first
<voidspace> dimitern: this is the core logic
<voidspace> dimitern: http://pastebin.ubuntu.com/10575094/
<voidspace> dimitern: it needs some error handling around the addr.Remove() and it's in the wrong place (needs to be in State)
<voidspace> but that's all it's doing
<dimitern> voidspace, yeah - no existing "find all ips for a machine/subnet"
<voidspace> dimitern: I was pretty sure there wasn't as I've worked with most of the IPAddress code
<voidspace> it's new for the networking stuff
<voidspace> we're only modelling IP addresses allocated for containers so far
<dimitern> voidspace, yeah, so I guess it does sound better to have a machine.AllocatedIPAddresses() method
<dimitern> voidspace, and then call ip.Remove() on each.
<voidspace> dimitern: ok, I'll do that
<voidspace> dimitern: thanks
<voidspace> dimitern: for machine id I want container.InstanceId() right?
<dimitern> voidspace, no, just machine id
<voidspace> dimitern: I have a state.Machine called container
<dimitern> voidspace, we only need instanceId when talking to the provider - i.e. ReleaseAddress
<voidspace> ah
<voidspace> so ID() instead of InstanceId()
<voidspace> or at least Id()
<dimitern> voidspace, yes
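[editor's note: the shape agreed on above — a `machine.AllocatedIPAddresses()`-style query living in state (where the private collection names are), with the caller looping `Remove()` over the result — could look roughly like this. All types and method names are simplified stand-ins for state.Machine/state.IPAddress, not juju's real API.]

```go
package main

import "fmt"

// address stands in for a state.IPAddress document.
type address struct {
	machineID string
	value     string
	removed   bool
}

type state struct{ addrs []*address }

// AllocatedIPAddresses returns every live address allocated to the given
// machine; keeping the query here keeps collection access inside state.
func (st *state) AllocatedIPAddresses(machineID string) []*address {
	var out []*address
	for _, a := range st.addrs {
		if a.machineID == machineID && !a.removed {
			out = append(out, a)
		}
	}
	return out
}

func (a *address) Remove() error { a.removed = true; return nil }

// releaseContainerAddresses is the caller-side loop from the pastebin,
// with the error handling around Remove that voidspace mentions.
func releaseContainerAddresses(st *state, machineID string) error {
	for _, a := range st.AllocatedIPAddresses(machineID) {
		if err := a.Remove(); err != nil {
			return fmt.Errorf("removing address %s: %v", a.value, err)
		}
	}
	return nil
}

func main() {
	st := &state{addrs: []*address{
		{machineID: "0/lxc/1", value: "10.0.0.2"},
		{machineID: "1", value: "10.0.0.3"},
	}}
	if err := releaseContainerAddresses(st, "0/lxc/1"); err != nil {
		fmt.Println("release failed:", err)
		return
	}
	fmt.Println(len(st.AllocatedIPAddresses("0/lxc/1")), "left")
}
```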
<dimitern> fwereade, I'd appreciate if you can have a look at this http://reviews.vapour.ws/r/1118/ which should fix a uniter filter intermittent test failure
<mup> Bug #1307728 changed: ensure-availability command should show actions performed <ha> <ui> <juju-core:Fix Released> <https://launchpad.net/bugs/1307728>
<mup> Bug #1403955 changed: DHCP's "Option interface-mtu 9000" is being ignored on bridge interface br0 <cts> <juju-core:Invalid> <isc-dhcp (Ubuntu):Confirmed> <https://launchpad.net/bugs/1403955>
<dimitern> jam, there's the fix btw ^^ if you can give it a try and see if you still reproduce the issue that'll be great
<sinzui> dimitern, which milestone should bug 1428439 be on
<mup> Bug #1428439: retry-provisioning launches instances for containers; cannot retry containers at all <juju-core:Triaged> <https://launchpad.net/bugs/1428439>
<sinzui> ericsnow, do you think we need a two vivid slaves with systemd and upstart to be certain juju works with both configurations?
<mup> Bug #1403955 was opened: DHCP's "Option interface-mtu 9000" is being ignored on bridge interface br0 <cts> <kvm> <lxc> <network> <juju-core:Triaged> <isc-dhcp (Ubuntu):Confirmed> <https://launchpad.net/bugs/1403955>
<ericsnow> sinzui: not really, considering that vivid operates under upstart only in "old" pre-release images and the official releases will all be systemd
<dimitern> sinzui, 1.23-beta1 is ok I think
<ericsnow> sinzui: then again, it also depends on whether upstart will be officially supported as the booted init system on vivid
<sinzui> ericsnow, what about people upgrading from utopic. they don't switch unless Ubuntu make further packaging changes
<ericsnow> sinzui: I doubt it will be, but if it is then 2 slaves may make sense
<ericsnow> sinzui: good point
<sinzui> ericsnow, I am still unsure. maybe I should just take the time and have two slaves. I can turn one off later
<ericsnow> sinzui: so does that mean upstart will be supported up to the next LTS?
<ericsnow> sinzui: sounds good
<sinzui> ericsnow, It might be required as a legacy.
<ericsnow> sinzui: it should continue to work under upstart regardless
<sinzui> ericsnow, yeah, and the current setup proves that. I want to be sure it still works if Ubuntu does an about-face
<ericsnow> sinzui: but it does mean having 2 slaves for each release starting with vivid, no?
<ericsnow> sinzui: there *is* a contingency plan for dropping systemd in vivid, so good thinking
<sinzui> ericsnow, maybe? it's about Ubuntu making systemd-sysv a requirement
<ericsnow> sinzui: do we use feature flags in CI?
<sinzui> ericsnow, no
<sinzui> ericsnow, are they set in environments.yaml?
<ericsnow> sinzui: I believe it's from environment variables
<sinzui> ericsnow, if it works from shell env vars, I think we can reconfigure jobs as needed. If it comes from the juju env, we need to make config and or code changes to use them
<ericsnow> sinzui: if we run a vivid slave with upstart then it may make sense to use a feature flag to indicate upstart-on-vivid
<sinzui> ericsnow, understood
<ericsnow> sinzui: there's a fallback case for init system discovery that hard-codes vivid (and subsequent releases) to systemd
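[editor's note: the fallback ericsnow describes — vivid and every later series defaulting to systemd, earlier ones to upstart — could be sketched as below. The hard-coded series slice is illustrative; juju's real discovery compares release versions rather than positions in a list.]

```go
package main

import "fmt"

// seriesOrder is an illustrative release ordering, oldest to newest.
var seriesOrder = []string{"precise", "trusty", "utopic", "vivid"}

// discoverInitSystem is the fallback path: anything at or after vivid
// is assumed to boot systemd, anything earlier upstart.
func discoverInitSystem(series string) string {
	for i, s := range seriesOrder {
		if s == series {
			if i >= 3 { // index of "vivid"
				return "systemd"
			}
			return "upstart"
		}
	}
	return "systemd" // unknown series assumed newer than vivid
}

func main() {
	fmt.Println("trusty:", discoverInitSystem("trusty"))
	fmt.Println("vivid:", discoverInitSystem("vivid"))
}
```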
<sinzui> ericsnow, I am now at the point of wondering what will happen when I try to provision a new slave. The charm/package uses upstart
<sinzui> ericsnow, if there is no legacy upstart support, the slave wont run without me writing a systemd script
<ericsnow> sinzui: yeah, we'll see :(
<ericsnow> sinzui: there is some accommodation for that, but I don't know to what degree (they were still discussing it a month+ ago)
<ericsnow> sinzui: I'll try to find out
<ericsnow> sinzui: can you tell if the latest vivid image is booting systemd?
<sinzui> ericsnow, I haven't looked
<ericsnow> sinzui: k
<sinzui> ericsnow, since clouds don't support ubuntu devel images, and the current juju doesn't support systemd, I need to build this by hand
<ericsnow> sinzui: ah, makes sense
<ericsnow> sinzui: it's like that for each new series, huh?
<sinzui> I have done this before. it just takes longer for me to type and upload things than juju and charms do
<sinzui> ericsnow, I usually wait for an official release of juju after a series opens. that gives me a juju and an agent. then I provision the previous series in a cloud, dist-upgrade, then manually provision. that is how we got the trusty, utopic, and current vivid
<gsamfira> any chance I can get a review on this: http://reviews.vapour.ws/r/1119
<gsamfira> so we can start testing on windows? :)
<sinzui> ericsnow, I am having a misadventure on the current vivid-slave. lxc blew up. I am watching a case where we upgrade from stable juju to unstable, which might be upset because 1.21.3 doesn't know about systemd. I will report real details in a bit
<ericsnow> sinzui: k
<alexisb> http://www.opencompute.org/community/events/summit/ocp-us-summit-2015-live-streaming
<alexisb> thanks kwmonroe for the pointer ^^
<alexisb> perrito666, this is our OCP demo
 * gsamfira is going through  second bowl of popcorn
<alexisb> gsamfira, :)
<sinzui> ericsnow, looks like lxc template creation is broken. 1.21.3 cannot create a template with the current vivid lxc images. I am going to clean, upgrade, then retest that unstable juju loves the new image. Then try the upgrade again
<alexisb> shouldn't you be asleep?
<ericsnow> sinzui: right
<gsamfira> alexisb: not yet. Its 20:24 here :D
<gsamfira> and I actually grabbed a few hours last night :)
<ericsnow> sinzui: we send upstart-specific commands to cloud-init and in the LXC clonetemplate code
<sinzui> ericsnow, I believe we will not be counting the upgrade test until 1.23.0 is released
<ericsnow> sinzui: so if vivid is booting off systemd now then you'll see that sort of failure
<sinzui> ericsnow, thank you. that predicts my question
<ericsnow> sinzui: can you check PID 1?
<sinzui> ericsnow, I cannot even get into the container. it is a zombie
<ericsnow> sinzui: yuck
<sinzui> I am killing lxc procs because the controls are useless
<ericsnow> sinzui: not even with lxc-console?
<sinzui> it is stalled
<ericsnow> sinzui: bummer
<ericsnow> sinzui: oh
<ericsnow> sinzui: there's a known issue here
<ericsnow> sinzui: https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1347020
<mup> Bug #1347020: systemd does not boot in a container <systemd-boot> <lxc (Ubuntu):Fix Released by stgraber> <lxc (Ubuntu Trusty):Triaged by stgraber> <https://launchpad.net/bugs/1347020>
<sinzui> ericsnow, that looks similar
<ericsnow> sinzui: basically, vivid with systemd in a container on a non-vivid host won't work
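The PID 1 check ericsnow asks about above can be scripted as well as done by hand; a minimal Go sketch of the idea (reading `/proc/1/comm` is standard Linux; this is illustrative and not juju's actual init-system detection code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// initSystem reports the name of the running init process by reading
// /proc/1/comm, which is "systemd" on a systemd host and "init" under
// upstart or sysvinit. Illustrative only.
func initSystem() (string, error) {
	data, err := os.ReadFile("/proc/1/comm")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := initSystem()
	if err != nil {
		fmt.Println("cannot determine init system:", err)
		return
	}
	fmt.Println("PID 1 is", name)
}
```

Of course, as sinzui notes, this only helps once you can actually get into the container.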
<alexisb> with natefinch out is there someone who can review this for cloudbase (aka gsamfira):
<alexisb> https://github.com/juju/juju/pull/1791
<alexisb> it is fixing our windows tests that will soon have the power to block trunk
<ericsnow> sinzui: well, it works on trusty and utopic if you update a few things from e.g. "trusty-updates"
<ericsnow> sinzui: I expect it won't work on precise
<hazmat> anybody who wants to help a user debug while local provider is broken on vivid should join #juju and talk to lamont
<hazmat> s/while/why
<sinzui> ericsnow, the comedy gets better. The jenkins upstart job got removed. I am now committed to rebuilding a slave that meets your needs. I am going to disable all the vivid jobs until the replacement is ready
<ericsnow> sinzui: okay
<cmars> gsamfira, i'll take a look at it.
<alexisb> cmars, thank you!
<gsamfira> cmars: thank you! :)
<ericsnow> sinzui: I'm going to land the patch that specifies that vivid uses systemd
<voidspace> g'night all
<ericsnow> sinzui: then write a follow up patch that adds a feature flag for the vivid-on-upstart case
<sinzui> ericsnow, I suppose you should as it clearly doesn't want me using upstart any more
<ericsnow> sinzui: also, the ubuntu folks verified that vivid did indeed switch yesterday
<ericsnow> sinzui: I've landed the vivid-is-systemd patch so vivid should be able to work now (LXC host issues aside)
<sinzui> ericsnow, thank you.
<cmars> gsamfira, reviewed
<thumper> morning
<ericsnow> sinzui: upstart feature-flag patch: http://reviews.vapour.ws/r/1121/
<sinzui> fab
<ericsnow> sinzui: that should help in the case we have a bot running upstart
<sinzui> ericsnow, I am writing a systemd script for jenkins. I wonder how many charms will need to change
<alexisb> morning thumper
<ericsnow> sinzui: well, since charms are series-specific the impact is explicitly limited--vivid charms will have to make sure they work under systemd
<ericsnow> sinzui: how many charms will have to adjust?  good question
<sinzui> ericsnow, yeah. We mark our charms because we need them the day the series is born
<thumper> cherylj: if you want to start our 1:1 early, I'm fine with that
<cherylj> thumper: sure, I'll join the hangout again
<perrito666> now, that disconnect button is too big
<thumper> cmars: anything you want to catch up with today?
<mup> Bug #1420049 was opened: ppc64el - jujud: Syntax error: word unexpected (expecting ")") <deploy> <openstack> <regression> <uosci> <juju-core:In Progress by axwalk> <juju-core 1.22:Fix Released by axwalk> <https://launchpad.net/bugs/1420049>
<ericsnow> thumper: thanks for taking a look at that vivid-LXC issue
<menn0> thumper, davecheney: https://github.com/juju/utils/pull/115
 * thumper looks
<ericsnow> menn0: FYI, the github-RB hooks work for the utils repo too
<menn0> ericsnow: yep I saw that. good stuff.
<ericsnow> menn0: I've been meaning to do it for the other repos but need round tuits
<thumper> menn0: review done
 * thumper hands ericsnow a round tuit
 * jw4 learned something new today - round tuits - :)
<mgz> not to be confused with square tuits
<jw4> haha
<perrito666> you both just made me google
<perrito666> uff, finally all tests pass
<jw4> perrito666, I blame it on ericsnow ... I was 50 years behind
<ericsnow> jw4: :)
<perrito666> ok, EOD
<perrito666> cheers
<jw4> bye perrito666
<jjox> hazmat: hi there o/ - fyc -> https://code.launchpad.net/~jjo/juju-deployer/fix-juju-deployer-diff-for-multiple-relations-between-servicepair/+merge/252271
<hazmat> jjox: solid
<hazmat> jjox: did you see my comments on the bug... much of this can be simplified by simply mocking juju
<jjox> hazmat: ah, hm ... hadn't - will peek tomorrow
<wallyworld> axw: just started looking at the local state PR. is there a reason not to reuse stuff from uniter/operation/state.go?
#juju-dev 2015-03-11
<menn0> davecheney: I just added a mutex to control access to Deque and it completely ruins performance. given that I don't need it for my use case i'm going to leave it out. ppl who need goroutine safety can do that at a higher layer.
<davecheney> menn0: ok, thanks, that was my question
<menn0> davecheney: it beats me why, but using *list.List and list.New() is consistently a few % faster than using the list.List zero value.
<davecheney> probably alignment
<menn0> davecheney: it's not enough of a difference to worry me but it's interesting anyway
<davecheney> it won't matter in real life
 * menn0 agrees
<davecheney> your dedication to performance is admirable
<davecheney> but i think it will be lost in the noise
<davecheney> juju talks over the network
<davecheney> that's what takes all the time
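menn0's point about leaving the mutex out and letting callers add locking at a higher layer can be sketched like this (illustrative types only, not the real juju/utils Deque):

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// Deque is a minimal double-ended queue over container/list.
// It is deliberately not goroutine-safe.
type Deque struct {
	l *list.List
}

func NewDeque() *Deque { return &Deque{l: list.New()} }

func (d *Deque) PushBack(v interface{}) { d.l.PushBack(v) }

func (d *Deque) PopFront() (interface{}, bool) {
	e := d.l.Front()
	if e == nil {
		return nil, false
	}
	d.l.Remove(e)
	return e.Value, true
}

// SafeDeque layers locking on top, only for callers that need
// goroutine safety; single-goroutine users pay no mutex cost.
type SafeDeque struct {
	mu sync.Mutex
	d  *Deque
}

func NewSafeDeque() *SafeDeque { return &SafeDeque{d: NewDeque()} }

func (s *SafeDeque) PushBack(v interface{}) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.d.PushBack(v)
}

func (s *SafeDeque) PopFront() (interface{}, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.d.PopFront()
}

func main() {
	d := NewDeque()
	d.PushBack(1)
	d.PushBack(2)
	v, _ := d.PopFront()
	fmt.Println(v) // 1
}
```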
<wallyworld> axw: can we chat when you get back?
<menn0> davecheney: sure. i'm going to use the zero value.
<menn0> davecheney, thumper, waigani_: updated the Deque PR. http://reviews.vapour.ws/r/1122/diff/2-3/
<thumper> menn0: I've given it a shipit, but it would be good manners to wait for davecheney and waigani_ :-)
<thumper> I've got this cache file configstore thing done now (I think)
<thumper> running full test suite
<waigani_> menn0: shipit from me
<thumper> how do we know if the bot is hooked up for merging?
 * thumper is thinking of juju/utils
<menn0> thumper: i'm guessing it isn't b/c merges are being done by real people instead of jujubot
<thumper> hmm
<thumper> I'm having a WTF moment
<thumper> anyone else getting errors running tests in service/upstart?
<thumper> I get five failures
<thumper> all the same place
<thumper> upstart_test.go:240:
<thumper>     c.Assert(err, gc.ErrorMatches, ".*exit status 99.*")
<thumper> ... error string = "exit status 2"
<thumper> ... regex string = ".*exit status 99.*"
<jw4> I wasn't a couple hours ago - I'll pull and test again
<thumper> I get this test failure on master
<jw4> nope, no new changes since I last ran a clean test run
<thumper> both when I run all the tests, and just the tests for that package
<thumper> wtf?
<jw4> I'm on trusty
<thumper> me too
<axw> davecheney: re "state", I sympathise, but I can't think of a better name. any suggestions for an alternative that describes persistent local state?
<wallyworld> axw: got time for a quick hangout?
<thumper> interestingly TestStart does the same sort of thing, but it passes
<axw> wallyworld: sure, see you in tanzanite?
 * thumper tries to work out what is different
<wallyworld> ok
<jw4> thumper, I'm curious what extra logging around line 168 of service/upstart/upstart.go would reveal?
<waigani_> menn0, thumper: http://reviews.vapour.ws/r/1115/ - envusers is going to follow a similar path. So I'd like to get this reviewed first.
<thumper> jw4: line 168?
<thumper> TestRemoveStopped passes
<jw4> in the Running() method?
<thumper> sorry, thought you meant the _test file
<jw4> i.e. it feels like some system level failure is happening (with a return code of 2) before the expected failure with return code of 99
<axw> wallyworld: are you shippitting http://reviews.vapour.ws/r/1112/ ?
<wallyworld> ah bollocks, yes sorry
<wallyworld> done
<thumper> What the actual fuck
<thumper> damn screwed test isolation problem
<thumper> if I run the test by itself, it passes
<thumper> but the whole suite, it fails
<jw4> :(
 * thumper pokes
<jw4> although to be perfectly honest... when it comes to debugging.... words like WT(A)F are actually beautiful.  The programmers inverse of Eureka!
<thumper> hah
<thumper> [LOG] 0:00.074 DEBUG juju.service.upstart Running out: "\x1b[1;33mjuju status -e local\x1b[0m\nerror: cannot determine juju home, required environment variables are not set\n\x1b[1;33mjuju status -e wikienv\x1b[0m\nerror: cannot determine juju home, required environment variables are not set\n\x1b[1;33mjuju status -e kibanaenv\x1b[0m\nerror: cannot determine juju home, required environment variables are not set\n"
<thumper> that occurs before each failure
<jw4> weird... seems like that should at minimum be logged at WARN?
<thumper> that is the logging line I added
<davecheney> what is all that colorisation shit in there ?
<thumper> but the question is 'why' ?
<thumper> davecheney: I believe so
<thumper> it looks like something is calling out to my shell
<thumper> and I must have some weird executable in the path
<davecheney> and it's sourcing your .profile
<davecheney> so who knows what kind of stuff is in the environment
<jw4> blech
<thumper> something is so wrong with this
<thumper> the Running method swallows this error very easily
<thumper> ha
<thumper> haha
<thumper> fucker
 * thumper has /home/tim/bin/status
<thumper> which has colour output
 * thumper fixes
<thumper> the test fakes out the start/stop/status generally
<thumper> but upstart service Start calls Running, which calls the status command
<thumper> which is getting fooled by my command
 * thumper thinks
<thumper> why is it getting my path?
<thumper> I thought our IsolationTestSuite fixed that
<jw4> hahha
<thumper> uh ha
<thumper> testing.BaseSuite doesn't use the IsolationSuite
<thumper> it looks like someone looked at it
<thumper> but didn't make the change because too many tests failed
<thumper> as they relied on things in the path
<davecheney> thumper: rename the suite
<davecheney> SortOfIsolationSuite
<thumper> davecheney: the problem is that we aren't using the IsolationSuite
<davecheney>  // SortOfIsolationSuite tries to isolate your tests, sort of, it works, mostly.
<thumper> and I think we should be
<axw> wallyworld: you're still going to disable the storage provisioners in this branch, right?
<wallyworld> axw: i disabled the workers
<axw> wallyworld: cmd/jujud/agent/machine.go is still starting those workers...
 * thumper tries to think about how best to fix this
<wallyworld> axw: hmmm, ok, let me check - i modified the tests to ensure the workers weren't running
<axw> oh I see, there's a bool
<axw> wallyworld: don't disable *all* of the storage workers - the others work fine
<wallyworld> axw: yeah, i just saw that
<wallyworld> doh
<wallyworld> will fix
<thumper> https://github.com/juju/juju/pull/1794
<thumper> davecheney, menn0: trivial one http://reviews.vapour.ws/r/1123/
<menn0> thumper: will take a look once i've finished looking at Jesse's branch
<menn0> waigani_: just looking at the lowercase user name branch
<menn0> waigani_: is it supposed to be included in 1.23?
<menn0> davecheney: are you happy for http://reviews.vapour.ws/r/1122/ to be merged now?
<davecheney> menn0: one final round of comments
<menn0> waigani_: just finished reviewing your username case PR. I think I see a bug but otherwise good.
<davecheney> menn0: lgtm
<menn0> davecheney: kk. I don't see them yet. Are you still working on them?
<davecheney> was going to argue about type blockT []interface{}
<davecheney> but i couldn't be bothered
<davecheney> it's too hot for pedantry today
<menn0> davecheney: :)
<waigani_> menn0: txn documentation says that an insert will go ahead even if the doc already exists (thats why there is an exists assert). I was going off that - but I'll add a test to make sure.
<thumper> here is another: http://reviews.vapour.ws/r/1124/diff/#
<thumper> no type called State in there :)
<thumper> coffee time
<menn0> waigani_: fair enough. if that works then great (even if it's a little confusing). the test would be good.
<waigani_> menn0: actually, I was wondering about the blockT also, if it also caught davecheney's eye - maybe at least a comment?
<waigani_> menn0: what does the "T" stand for?
<menn0> waigani_: Type
<menn0> waigani_: I wanted to use "block" for variable names
<waigani_> right
<davecheney> if we're going to have the discussion
<davecheney> lets not
<menn0> +1
<davecheney> gofmt -r 'blockT -> []interface{}' -w *.go
<davecheney> done
<menn0> -0
<menn0> :)
<davecheney> menn0: yellow card, only two's compliment allowed in this channel
<menn0> ha
<wallyworld> axw: free for another chat?
<axw> wallyworld: just a minute please, proposing a branch
<wallyworld> sure, np
<beisner>  /o\  fyi 1.22beta6 does not appear to resolve bug 1430049
<mup> Bug #1430049: unit "ceph/0" is not assigned to a machine when deploying with juju 1.22-beta5 <oil> <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1430049>
<axw> -_________________-
<axw> beisner: ah, from status it's still running beta5
<beisner> and...  \o/  fyi 1.22beta6 DOES appear to resolve bug 1420049 !
<mup> Bug #1420049: ppc64el - jujud: Syntax error: word unexpected (expecting ")") <deploy> <openstack> <regression> <uosci> <juju-core:In Progress by axwalk> <juju-core 1.22:Fix Released by axwalk> <https://launchpad.net/bugs/1420049>
<axw> wallyworld: see you in tanzanite again
<axw> beisner: yay, thanks for verifying
<wallyworld> axw: sure
<beisner> axw, ah indeed.   not sure juju-deployer can do equivalent to upload-tools (?).  so I'll have to wait for beta6 tools to hit the streams.
<rick_h_> beisner: not following, deployer runs on bootstrapped env so you just need --upload-tools at bootstrap?
<rick_h_> beisner: juju-quickstart takes --upload-tools flag?
 * rick_h_ rereads bug
<beisner> just reproducing the same way we caught it (bootstrapping via juju-deployer -B)
<rick_h_> beisner: ah, didn't notice that in the bug anywhere
<beisner> rick_h_, it's entirely possible that the problem exists with a bootstrap / then deploy.  but i'm not sure atm.
<rick_h_> beisner: understood
<beisner> rick_h_, easy enough to test though, so i'll kick off a modified loop with beta6 and --upload-tools.
<beisner> rick_h_, will check back in tomorrow on that.
<rick_h_> beisner: np, just hoped to save you some stream waiting
<beisner> rick_h_, yep, appreciate that.
<beisner> and she's off!
<beisner> thanks for the fix(es)!  checking out.
<mup> Bug #1282642 changed: Bootstrap prefers .jenv over environments.yaml <bootstrap> <ci> <config> <destroy-environment> <regression> <juju-core:Won't Fix> <https://launchpad.net/bugs/1282642>
<menn0> thumper: review done. there's lots of comments but they're already pretty minor
<thumper> menn0: thanks
<menn0> waigani_: ship it with some minor comments for http://reviews.vapour.ws/r/1115/
<wallyworld> axw: about to lose power here for 20 mins fyi
<axw> wallyworld: okey dokey
<waigani_> menn0: thanks
<mup> Bug #1430639 was opened: juju can't bootstrap in LXC <juju-core:New> <https://launchpad.net/bugs/1430639>
<urulama> wallyworld_: hey, no worries, see you next week
<wallyworld_> urulama: ty, about to go :-)
<jam> axw: are you around?
<jam> you're the one that implemented getting "config-changed" events when the IP address changes, right?
<axw> jam: hey, yes
<jam> I'm looking at http://reviews.vapour.ws/r/1118/diff/# from dimiter and I'm thinking at least some of it isn't the right fix.
<axw> looking
<jam> axw: I *just* published my comments on it
<jam> I'm testing that it fixes the test I saw failing now
<axw> jam: I'll keep thinking, but I had the same thoughts as you initially: we should only reset seenConfigChanged/configChanges when the charm changes
<jam> axw: do you agree that it might suppress address events ?
<axw> jam: yes
<jam> axw: have you seen the intermittent failures?
<axw> I suppose we're missing a test then
<axw> no, I haven't
<axw> jam: that test could indeed trigger the address watcher twice: once for the initial, another time for the update
<jam> axw: I'm pretty sure that test was written when we didn't have an initial, thoughts on how to clean it up?
<axw> jam: only thing I can think to do is to take out the SetAddresses, or move it to before the filter is started
<axw> same effect either way, but the latter might be easier to understand
<jam> axw: well the whole test is that we don't generate an event until we SetAddress, right?
<jam> "TestIgnoreInitialAddressEvent"
<axw> jam: sorry, I was thinking of TestConfigAndAddressEventsDiscarded
 * axw runs a test
<axw> jam: that test is a bit weird. I would think it should be setting the charm URL, then checking that no config-changed is emitted, then setting the addresses and checking one is emitted
<axw> jam: eh, except that an address change *always* gets generated. its watcher is started immediately, regardless of whether there's a charm URL
<axw> jam: I think the test doesn't make sense anymore. It was written when the logic was different, https://github.com/juju/juju/blob/1.21/worker/uniter/filter.go
<jam> axw: so *something* should be testing the seenConfigChange sort of tests
<jam> if we remove the tests, then we should be removing the "maybeConfigEvent" code
<jam> axw: interestingly in https://github.com/juju/juju/blob/1.21/worker/uniter/filter_test.go it says "we don't set addresses here" for TestInitialAddressEventIgnored code
<axw> jam: the test doesn't make sense anymore. it's not that we *ignore* the initial address event (anymore), but we don't take any action until *both* the initial config AND address are received
<axw> the logic did change, the test should have been dropped
<axw> there are other tests that check that config-changed doesn't get emitted until the charm URL is set (which triggers input config events)
<axw> any test that SetAddresses before setting the charm URL is probably broken
<jam> axw: so potentially one of the tests but TestConfigEvents is also failing with the same race
<axw> jam: yeah, that initial SetAddresses in TestConfigEvents should be removed
<axw> in TestConfigAndAddressEvents, we ought to wait for an event after setting the charm, then trigger a config change and an address change and wait for a coalesced config-changed
<dimitern> :) morning
<axw> in TestConfigAndAddressEventsDiscarded, drop the SetAddresses
<axw> morning dimitern
<axw> or wait.. afternoon, if we're saying our own time :)
<dimitern> :) hah yeah
<dimitern> did my controversial fix for the filter tests spawn a discussion?
<jam> dimitern: morning
<jam> dimitern: well, I tried to go back to the source of this code and get some input
<dimitern> jam, yeah, looking at it again this morning I'm not quite sure why the logic around config events has to be so complicated
<jam> dimitern: my guess is that axw didn't want to disturb existing charms when he updated the events to also trigger when address changes occur
<jam> *probably* the issue was sending a config-changed event when the IP address changed the first time, but there was no config data yet.
<dimitern> jam, axw, but why the address change reporting needs to be delayed?
<jam> dimitern: "no config data yet" is my guess
<axw> dimitern: config-changed mustn't be generated until there's a charm set. I don't recall why we went with config-changed over address-changed
<dimitern> axw, ah! that's the outcome of the address-changed discussion
<dimitern> now it starts to make more sense to me
<axw> dimitern: yeah, I think we went with config-changed because networking wasn't ready yet, and William wanted address-changed to be network-aware
<axw> so firing additional config-changed was seen as better than nothing, which is what we had before
<dimitern> axw, right, and IIRC the idea of having address-changed was abandoned
<axw> I haven't heard any more of it
<axw> we abandoned automatically setting addresses
<axw> I mean, updating
<axw> because proxy charms
<dimitern> I see, yeah
<dimitern> anyway, I'd still like fwereade to have a look at that fix as well
<dimitern> and ideally support it with at least *a test*
<axw> dimitern: btw I agree with jam's analysis on the bug. some of the tests look broken to me, since as jam says, they're firing two address changes (the initial watcher response, and then another change due to SetAddresses)
<dimitern> axw, that looked really odd to me as well - why fire 2 address changes
<dimitern> (the inevitable initial one and then setting another one)
<dimitern> axw, how about moving the setAddress before starting the filter?
<axw> dimitern: which test?
<dimitern> that would make more sense to simulate a precondition
<dimitern> axw, TestConfigEvents I think
<axw> dimitern: I think that'd be fine, or just drop the first SetAddresses and leave a comment.
<axw> a comment might be in order either way, really
<dimitern> axw, yeah, considering 3 of us can't quite get the intent there for a while :)
<TheMue> morning o/
<dimitern> TheMue, morning
<TheMue> hmmpf, forgot to publish my review comments yesterday :/
<TheMue> dimitern: would you mind another look at http://reviews.vapour.ws/r/1106/ ?
<dimitern> TheMue, sure
<dimitern> TheMue, in TestUnsetServices I'm not quite sure what you're testing - I can see foo=bar being set but why isn't it included in the next ServiceInfo change?
<TheMue> dimitern: the first part, the setting, simply has been copied from the according setting test above to see if it works. and then, after destroying  the service, the test failed, because the settings still contained the blog-title
<TheMue> dimitern: then, after implementing removed(), they are cleared as wanted
<TheMue> dimitern: so it's simply to have a scenario
<dimitern> TheMue, ok, that's good, but why is foo=bar that you set missing from the change?
<TheMue> dimitern: here I would myself have to look how the helper setServiceConfigAttr() works. if it should not also clear this setting, then also one of the old tests above works wrong
<dimitern> TheMue, yeah - or perhaps foo is an invalid setting for that charm and it's not getting set in the first place; if so then use one of the valid settings instead of foo (outlook IIRC was a valid setting for the testing wordpress charm)
<TheMue> dimitern: Service.UpdateConfigSettings() seems to be the reason, will take a deeper look
<dimitern> TheMue, if there is an error getting ignored somewhere, that's bad :)
<TheMue> dimitern: oh, that may too be the reason. if it's so I'll change the test above too so that it works correctly
<dimitern> TheMue, cheers
<axw> wallyworld: I've got a branch adding scope to volume and filesystem tags, and Watch{Environ,Machine}Volumes
<axw> will propose soon
<axw> (as I've run tests)
<voidspace> dimitern: dooferlad: TheMue: can we postpone / delay standup by ten minutes? I have to take child number 1 to the childminders. (Just round the corner.)
<dooferlad> voidspace: fine by me
<TheMue> voidspace: +1
<dimitern> voidspace, np
<voidspace> thanks guys, biab
<voidspace> dimitern: TheMue: dooferlad: back
<TheMue> coming
<dimitern> voidspace, dooferlad, TheMue, omw
<dimitern> dooferlad, I found the issue
<dooferlad> dimitern: cool!
<dooferlad> dimitern: what was it?
<dimitern> dooferlad, :) well, it's good I added generation of /etc/network/interfaces for the lxc template as well, but I should've removed it on shutdown
<dooferlad> dimitern: ah.
<dimitern> dooferlad, as the clone starts with the template's /e/n/i
<dimitern> dooferlad, yeah, I wish I thought of that 2 days ago :/
<dooferlad> dimitern: indeed.
<dimitern> :) OTOH I learned more about upstart, init, and cloud-init internals in the past week than I would've ever wanted
<TheMue> dimitern: found it. you can write into your initial settings any stupid stuff you want. as long as the service itself doesn't know them (via Update...) they are removed with the first change event
<TheMue> dimitern: so here it's no wonder that strange foo settings in the tests above are removed then
<dimitern> TheMue, oh, that's nasty :) good catch
<TheMue> dimitern: yeah, will change the service setting tests too
<dimitern> TheMue, thanks!
<dimitern> dooferlad, I found possibly the most resilient set of commands for /e/n/i - http://paste.ubuntu.com/10579764/
<dimitern> dooferlad, "replace" works even when the lxc package pre-populates the routes (which it doesn't do *always*) so it won't fail due to duplicate routes
<dimitern> dooferlad, and "|| true" ensures no errors if ifup failed to bring the NIC up completely or ip link set dev eth0 up was called separately, without the rest of the commands (like the network-interface-container.conf job "helpfully" does)
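The paste links above have since expired, but from dimitern's description (manual method so ifup doesn't race ahead with "ip addr add", addressing done in a pre-up step, "|| true" guards, and "replace" tolerating routes the lxc package may have pre-populated) the fragment being discussed would look roughly like this. The addresses are made up for illustration; this is not the actual paste content:

```
# /etc/network/interfaces fragment (illustrative)
auto eth0
iface eth0 inet manual
    # "manual" stops ifup running "ip addr add" before pre-up scripts
    pre-up ip address add 10.0.3.2/24 dev eth0 || true
    # "replace" won't fail on routes the lxc package pre-populated
    up ip route replace default via 10.0.3.1 || true
    down ip address del 10.0.3.2/24 dev eth0 || true
```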
 * TheMue steps out for a moment
<voidspace> oops, before doing <Ctrl-a><del> on the sprint logistics spreadsheet make sure the focus is in the right place
<voidspace> I momentarily deleted the entire sheet...
<dimitern> voidspace, wow :) good to know
<voidspace> dimitern: :-)
<voidspace> dimitern: luckily undo works...
<dimitern> voidspace, indeed
<dimitern> ericsnow, wwitzel3, hey guys - are you around by any chance?
<dimitern> too early perhaps
<perrito666> dimitern: too early
<perrito666> dimitern: at least for eric I believe it is a little before 6am
<dimitern> perrito666, yep
<dimitern> perrito666, I'm thinking of joining your standup today to discuss some ideas around testing and stubs
<perrito666> dimitern: you are welcome to do so, bring candies though
<dimitern> perrito666, :) will do
<perrito666> well There is ongoing organization for the go meetup in my city, it will be a lovely two person lunch
<perrito666> to all my fellow americans https://www.youtube.com/embed/br0NW9ufUUw
<dimitern> dooferlad, ok, scratch that - even *more* resilient version is http://paste.ubuntu.com/10580025/
<dooferlad> dimitern: where did the address and netmask go?
<dimitern> dooferlad, in a pre-up step
<dooferlad> dimitern: ohh, haven't seen that before
<dimitern> dooferlad, and the type is now manual, not static - ifup is too smart and tries to run ip addr add before running any pre-up scrips
<dimitern> dooferlad, still works with "ifup -a" or "ifup -a --allow auto" though, which is good
<dooferlad> dimitern: clearly need a pre-pre-up :-)
<dimitern> dooferlad, *lol*
<dimitern> dooferlad, I'd appreciate if you can  independently test whether it will also work for kvm containers
<dooferlad> dimitern: soon as I have this test fixed I will try it.
<dimitern> dooferlad, cheers
<mup> Bug #1430639 changed: juju can't bootstrap in LXC <juju-core:Won't Fix> <https://launchpad.net/bugs/1430639>
<mup> Bug #1430791 was opened: Upgrade-juju is broken on most/all substrates <ci> <regression> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1430791>
<dimitern> dooferlad, it works like a charm! very fast startup now - both the template and all the clones and no errors
<dooferlad> dimitern: sweet!
<dimitern> dooferlad, now testing on precise
<dimitern> dooferlad, I had to add a step to pre-render /e/n/i just before the container starts though
<dimitern> sinzui, hey re that upgrade issue
<dimitern> sinzui, I've seen it happen on trunk for some time now - if you immediately try to call any command after upgrade-juju it always fails with "connection is shutdown", but if you retry after a few seconds it works
<dimitern> sinzui, I guess that's something to do with updating the apiserver TLS cert like during bootstrap
<sinzui> dimitern, CI has done that for more than a year
<sinzui> This looks like something new
<dimitern> sinzui, I think if you change the job to retry a few times when calling status after upgrade-juju successfully completes it will all go green (or mostly)
<dimitern> sinzui, in a normal case, e.g. if you're not impatient and wait a few seconds before running anything after upgrade-juju, I can confirm it works OK on trunk
<dimitern> sinzui, but you're also correct that it used to work, now the UX is slightly worse - I guess automated scripts and tools like juju-deployer will be affected unless they do retry commands after upgrade
<sinzui> dimitern, yeah, abentley  and I are pondering the deployer case
<mattyw_> katco, ping?
<katco> mattyw_: hey hey
<mattyw_> katco, morning - I gave you one minute after 9am so you'd have time to get coffee - so I reckon you owe me a favour ;)
<katco> mattyw_: LOL
<natefinch> wwitzel3: standup?
<jam> katco: just the person I wanted to ping :) You did LeadershipSettingsAccessor, right?
<katco> jam: o/ yes i did
<wwitzel3> natefinch: yep
<jam> so I'm trying to sort through the various layers to enable the Uniter to call WatchLeadershipSettings
<katco> jam: ok
<jam> one concern is that both the APIServer internally and the API Client externally use LeadershipSettingsAccessor (I believe) which has
<jam> WatchLeadershipSettings(serviceId)
<jam> but IIRC, we should be passing in a unit.Tag for the api client, shouldn't we?
<jam> katco: do you have any actual end-to-end API client tests? Most of the tests I can see are stubbed out
<jam> so I couldn't tell what you should actually be passing for serviceID
<katco> jam: hm. this seems to be a point of confusion on the team... i think people have told me that tags are otw format only, client-facing are ids
<jam> because the tests are using "foobar"
<jam> which isn't a valid unit ID
<jam> nor a valid unit tag
<katco> jam: https://github.com/juju/juju/blob/master/featuretests/leadership_test.go
<jam> katco: so names.Tag is an object that is generally used in clients, I believe
<jam> vs a string
<katco> jam: we place end-to-end tests in featuretests/ now, hopefully there is something in there that looks promising
<katco> jam: huh. i swear fwereade and wallyworld told me that tags were OtW format/internal use only and shouldn't be exposed to clients
<jam> tag-as-a-string is the wire format, yes
<jam> names.Tag is used all over the place
<katco> they said this was a mistake we were trying to correct
<wallyworld> and it should n't be
<katco> look at that
<wallyworld> users should pass in names / ids
<katco> i summoned wallyworld
<wallyworld> which are converted to tags
<jam> katco: anyway that particular part I'll just adjust to your api, just trying to find out what to pass.
<jam> but I realize I'm passing the Unit.Id() which isn't the service id
<wallyworld> tags just serve to provide additional typing info for an id
<katco> jam: does this help? https://github.com/juju/juju/blob/master/featuretests/leadership_test.go#L256
<jam> katco: those particular tests create a service then a unit, i need to go from a unit to its service id
<fwereade> jam, katco, wallyworld: indeed, it's really a wire-format thing; and I'm not *especially* bothered about using tags in agent code, because sometimes it really does seems to make more sense; but we absolutely mustn't leak them into the UI
<katco> jam: looks like this is one path: unit.Service().Tag().Id()
<fwereade> jam, katco, wallyworld: (I am a *little* bothered by it, because it feels like it's close to being a layering violation, but I'm not really in love with the remote objects we use there either, and they'd be the other obvious source of typing information when we need it)
<katco> jam: not sure if that's the best approach... especially if you're within a charm
<wallyworld> our remoting approach kinda sucks
<fwereade> katco, wallyworld, jam: names.UnitService()?
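The names.UnitService helper fwereade suggests derives the service name from a unit name by dropping the trailing /<number>. A standalone re-implementation of that mapping for illustration (not the real juju/names code, which validates the name more thoroughly):

```go
package main

import (
	"fmt"
	"strings"
)

// unitService mirrors the idea behind names.UnitService: the service
// name is the unit name minus its "/<number>" suffix, e.g.
// "wordpress/0" -> "wordpress". Illustrative re-implementation only.
func unitService(unitName string) (string, error) {
	i := strings.LastIndex(unitName, "/")
	if i <= 0 {
		return "", fmt.Errorf("%q is not a valid unit name", unitName)
	}
	return unitName[:i], nil
}

func main() {
	svc, err := unitService("wordpress/0")
	fmt.Println(svc, err) // wordpress <nil>
}
```

This avoids the round trip through unit.Service().Tag().Id() when all that's needed is the name.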
<wallyworld> should be SOA; attaching remoting layer to domain objects is plain wrong
<jam> fwereade: well the test suite has a self.service object
<katco> fwereade: that sounds more correct from within the context of a unit
<jam> and the uniter also has a self.service
<fwereade> wallyworld, it does indeed, it's just a layer of unnecessary yuck :)
<wallyworld> wish we could change it, but needs to evolve; so much legacy
<fwereade> wallyworld, well, in general, there's no need to double down on the weird remote objects style
<dimitern> dooferlad, success! both trusty and precise lxc containers work fine
<dooferlad> dimitern: :-D
<wallyworld> fwereade: so long as new work is done correctly.... :-)
<fwereade> wallyworld, I'd rather see new code that just talks to an api object directly rather than tack stuff onto the already-bloated remotes
<fwereade> wallyworld, exactly
<wallyworld> yup
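The unit-to-service naming convention being discussed (a unit name like "wordpress/0" embeds its service name) can be sketched as a tiny helper. This is a hypothetical stand-in for what `names.UnitService()` does in juju's names package, not the real implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// unitService extracts the service name from a unit name such as
// "wordpress/0". Sketch of the convention only; the real code lives
// in juju's names package.
func unitService(unitName string) (string, error) {
	i := strings.IndexByte(unitName, '/')
	if i <= 0 {
		return "", fmt.Errorf("%q is not a valid unit name", unitName)
	}
	return unitName[:i], nil
}

func main() {
	svc, err := unitService("wordpress/0")
	fmt.Println(svc, err) // wordpress <nil>
}
```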
<katco> jam: does that help at all? i'd be happy to discuss further to help out with the api
<jam> katco: I found the problem I was having, now on to the next problem :)
<katco> jam: ah the life of a techy ;)
<jam> hm.. unknown watcher id
<katco> wallyworld: hey, exactly which type of robot are you?
<wallyworld> katco: WAL-E :-D
<katco> LOL
<katco> oh my... i was not prepared for that awesome rebuttal
<wallyworld> :-D
<katco> well played, sir.
<wallyworld> for once, yes
<wallyworld> now i do need sleep, catch you tomorrow
<katco> cya
<jam> night wallyworld
 * katco makes more tea
<dimitern> katco, hey there
<dimitern> fwereade, hey, can you spare 10m and have a look at http://reviews.vapour.ws/r/1118/ at some point please?
<katco> dimitern: hey dimiter!
<katco> man my irc notifications don't seem to be working
<dimitern> katco, :) oh
<katco> oh no wait... they were... i was looking at a different screen haha
<katco> dimitern: how are you!
<dimitern> katco, I wanted to ask about goose cinder support and a test double for it
<katco> dimitern: right, sorry i owe you an update on that
<dimitern> katco, good, thanks, how about you?
<katco> dimitern: doing pretty good! busy busy
<dimitern> katco, :) I bet
<katco> dimitern: so, i talked with wallyworld, and we'd like to log a bug for the test-doubles so we can land the patch and unblock the storage provider i'm working on
<dimitern> katco, ok, so no rush on the cinder stuff - just wanted to double check you've seen it
<katco> dimitern: is that ok with you? i don't think it should be that hard to write
<katco> dimitern: yeah, sorry i haven't responded =/ we were discussing all the various strategies
<dimitern> katco, it's ok, but how are you going to write the tests for the storage provider without test doubles?
<katco> dimitern: tbh i haven't gotten that far. definitely will have some unit tests, and then i'd have to take a look at the tests axw and wallyworld have already written to see the strategy
<katco> dimitern: it may be that i'll need to land a test double before i can write those tests. i don't know yet
 * perrito666 jumps to systemd
<ericsnow> dimitern: that testing thread: https://lists.ubuntu.com/archives/juju-dev/2015-February/004164.html
<dimitern> katco, ok, it's fine to land it later, but we'll need proper "local live" tests with a test double like we do for other features
<dimitern> ericsnow, great, thanks for saving me some digging! :)
<ericsnow> dimitern: :)
<katco> dimitern: understood
<katco> dimitern: i will file a bug today to ensure it doesn't fall off our radar
<dimitern> katco, awesome, thanks!
<katco> dimitern: ty for the reviews/thoughts!
<dimitern> katco, np :)
<jam> katco: ok, I do have a question now.
<jam> katco: AFAICT WatchLeadershipSettings isn't registering its watcher with the api server
<jam> because I can see the API response giving a NotifyWatcherId of 9
<jam> and then I see a call to NotifyWatcher Next 9
<jam> and it comes back with no-such-watcher-id
<jam> now, I can see a featuretest for Changes()
<jam> but it seems to only assert that we get *a* change
<jam> and the *first* change for watchers is generated client side
<jam> so it may be that we aren't ever actually trying to watch for a real change
 * katco reading
<katco> gm
<katco> *hm
<katco> let me look at the test again
<katco> jam: i see what you mean... let me check and see if we ignore that 1st event somewhere in the chain
<katco> jam: here's the function that instantiates the new watchers: https://github.com/juju/juju/blob/master/api/uniter/uniter.go#L55-L57
<katco> jam: it looks like it's being passed in a facade, do you know if instantiating a notifywatcher will register it with the api server?
<katco> jam: either way, i think you're right that the test is probably incorrect
<jam> katco: so to go back to the featuretest quickly, if you just comment out the Merge line, does it still pass (I think it should because of initial event stuff)
<jam> it passes when I try to comment out the Merge line
<katco> jam: yeah i think the test is probably lying to us
<jam> I tried a quick "call it twice" but currently trying to make that do what I think it should
<katco> jam: is it better practice to consume the first event within WatchLeadershipSettings, or within the caller of that?
<jam> katco: so code that hangs on a watcher usually needs to do initialization
<jam> so we expect an initial event
<jam> because Watchers always generate an initial event
<jam> we don't bother sending it over the wire
<katco> jam: ah ok. so maybe consume that initial event in WatchLeadershipSettings?
<jam> and client-side code always generates the event, and server-side code suppresses it
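jam's point about initial events being generated client-side, rather than sent over the wire, can be sketched as a minimal watcher. This is an illustrative simplification under assumed types, not juju's actual watcher code:

```go
package main

import "fmt"

// notifyWatcher sketches the client-side pattern jam describes:
// the watcher pre-loads one event so consumers always get an
// initial event for their setup work, while the server never has
// to send that first event over the wire.
type notifyWatcher struct {
	out chan struct{}
}

func newNotifyWatcher() *notifyWatcher {
	w := &notifyWatcher{out: make(chan struct{}, 1)}
	w.out <- struct{}{} // synthesize the initial event locally
	return w
}

func (w *notifyWatcher) Changes() <-chan struct{} { return w.out }

func main() {
	w := newNotifyWatcher()
	<-w.Changes() // initial event arrives with no server traffic
	fmt.Println("got initial event")
}
```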
<jam> katco: so I got the test to fail like it should. I'll paste the diff
<katco> cool
<ericsnow> fwereade: ping
<jam> katco: http://paste.ubuntu.com/10580495/
 * katco looking
<katco> jam: so it appears as though the code is not registering the watcher properly, as you said
<jam> katco: https://github.com/juju/juju/blob/master/apiserver/upgrader/unitupgrader.go#L46
<jam> is how we create a watcher, consume the initial event, and register that watcher with the API server resources
<katco> jam: well.. is there a race in that test? do we check for changes before the merge can complete?
<jam> katco: so the diff that I posted
<jam> there will *always* be an initial change, generate client side
<jam> katco: https://github.com/juju/juju/blob/master/api/watcher/watcher.go#L165
<katco> jam: right, but the second pull, L12... is it possible we hit that before L21 will produce an event?
<jam> katco: it is a channel
<jam> if we get there, it will block
<jam> it is actually a synchronous channel
<jam> so if *either* side gets there they block until the other side gets there too
<jam> (though one side should be a select loop, so it can handle shutdown, etc)
<katco> doh sorry, i mistook the ok for an indication of successfully retrieving from the channel
<jam> technically, we should be doing something more like NotifyAsserter which will timeout rather than hang the test suite
<jam> katco: you get ok = false if the channel is *closed*
<jam> katco: in which case you do still get a return from the channel
<jam> but the ok says "you're getting returned because the channel was closed, not because I had data for you"
<katco> gotcha, thanks for the extanalpion
<katco> wow
<katco> i butchered that word
<katco> explanation
<jam> katco: funny, I read it just fine
<katco> jam: have you ever seen that study that showed people read english words fine as long as the first and last letters are correct?
<jam> yeah, multiple times
<jam> and it gets worse and worse by the end
<katco> wow... i just noticed i perfectly reversed the middle of that word
<katco> weird
<katco> anyway
<katco> so it looks like there's a bit of work to patch this stuff up
<katco> i'm working on storage stuff for tanzanite, but i realize leadership is due and important as well
<jam> katco: so you can use NotifyAsserter when working with channels like this. It does stuff like assert "there is exactly 1 message on the channel right now"
<katco> ah cool
<katco> which package is that in?
<jam> katco: I'm about 2 hours past EOD, I *might* come back and look into this tonight, but if you can get to it by my morning I'll be happy to review a patch
<jam> katco: github.com/juju/juju/testing IIRC
<katco> jam: ok i'll pop it onto my queue and should have a patch by your tomorrow
<katco> pop it onto my stack rather :p
<katco> lifo ;)
<jam> katco: https://github.com/juju/juju/blob/master/worker/uniter/filter/filter_test.go is using NotifyAsserterC
<jam> if you want examples
<katco> perfect ty
<katco> jam: ty for troubleshooting that, and sorry for the issues
<jam> it does make tests a bit slower sometimes, because if you want to assert that there isn't any data pending on a channel, it waits 50ms to ensure that there really isn't anything there.
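The NotifyAsserter behaviour jam describes (time out rather than hang, and wait ~50ms to confirm nothing else is pending) can be sketched as a standalone helper. This is an assumed minimal version, not the API of juju's actual testing package:

```go
package main

import (
	"fmt"
	"time"
)

// assertOneChange returns an error unless exactly one event is
// pending on ch: one event must arrive within the long timeout, and
// nothing more within the short settle window. A sketch of the
// NotifyAsserter idea, not juju's real helper.
func assertOneChange(ch <-chan struct{}) error {
	select {
	case <-ch:
	case <-time.After(500 * time.Millisecond):
		return fmt.Errorf("timed out waiting for change")
	}
	select {
	case <-ch:
		return fmt.Errorf("unexpected extra change")
	case <-time.After(50 * time.Millisecond):
		return nil // nothing else pending: exactly one event
	}
}

func main() {
	ch := make(chan struct{}, 2)
	ch <- struct{}{}
	fmt.Println(assertOneChange(ch)) // <nil>
}
```

The 50ms settle wait is exactly why jam notes such helpers make tests a bit slower.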
<jam> katco: np. I'm glad I could track it down. I don't quite know whether my code is correct or not yet :)
<katco> jam: haha isn't that a fun time in a change? :)
<jam> fwereade: if you're around git@github.com:jameinel/juju leader-settings-tests is the work in progress
<jam> I think it is close to correct, caveat it crashes the Uniter because of the LeadershipSettingsWatcher bug
<mup> Bug #1430839 was opened: juju-run and juju-backup break 'juju help plugins' <juju-core:New> <https://launchpad.net/bugs/1430839>
<jam> ok, EOD, have a good one
<katco> jam: tc!
<dimitern> dooferlad, I did find an issue with lxc-clone: true and lxc-clone-aufs: true - the rootfs is no longer a dir in the clone, but a snapshot (<lxc-dir>/name/delta0)
<mattyw_> ericsnow, are you still the reviewboard go to fella?
<ericsnow> mattyw_: depends on your question :)
<mattyw_> ericsnow, a review not being picked up https://github.com/juju/juju/pull/1802
<mattyw_> ericsnow, and shall I just do it by hand?
<perrito666> this might be nice/useful for those of you who still hate git https://github.com/arialdomartini/oh-my-git
<katco> TheMue: ping
<ericsnow> mattyw_: yeah, do it by hand (rbt post)
<mattyw_> ericsnow, will do thanks
<katco> perrito666: i am the least helpful person on git commands because i use this exclusively: https://www.youtube.com/watch?v=zobx3T7hGNA
<ericsnow> mattyw_: yeah, sorry, it's a consequence of switching host and having to upgrade reviewboard and reviewboard changing APIs between micro versions :(
<ericsnow> mattyw_: actually don't do it by hand
<ericsnow> mattyw_: I forced GH to redeliver the PR event and it worked this time
<mattyw_> ericsnow, awesome, thank you very much
<TheMue> katco: pong
<katco> TheMue: hey i saw you had a topic scheduled for Nuremberg re: timeboxed iterations
<katco> TheMue: would you mind if i piggy-back on that to discuss timeboxed estimation? or should that be a separate topic?
<TheMue> katco: yep, would like to talk about it based on my experiences
<TheMue> katco: no, would appreciate it. estimation is, besides reqeng, one of my favorite topics
<katco> TheMue: awesome! :)
<katco> TheMue: i'll just /katco on the owners then?
<TheMue> katco: yes, and we both then could prepare for it during the next weeks
<mattyw_> katco, TheMue if I'm available for that session I'll be in it
<katco> TheMue: sounds good. tyvm for bringing that up
<TheMue> mattyw_: +1
<TheMue> katco: yw
<mup> Bug #1430898 was opened: run-unit-tests ppc64el timeout <ci> <gccgo> <ppc64el> <regression> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1430898>
<voidspace> TheMue: ping
<mgz> mattyw_: have you got a mo to do second review on reviews.vapour.ws/r/1119 please? :)
<voidspace> dimitern: ok, so the test is failing because the allocated addresses aren't being found
<voidspace> dimitern: so a genuine failure...
<voidspace> :-)
<mattyw_> mgz, will do
<dimitern> voidspace, hmm.. that's odd - are they allocated to the container?
<voidspace> yep
<voidspace> I'm calling AllocateTo
<voidspace> I haven't yet tested the new state code that fetches them though
<voidspace> dimitern: probably time to write a test for that code...
<voidspace> :-)
<dimitern> voidspace, ah, that might be it :)
<voidspace> this is why we tset
<voidspace> *test even
<voidspace> dimitern: and indeed a simple test for the new state method fails to return any addresses
<voidspace> oops
<dimitern> voidspace, that's odd, might be related to the multi-env changes in state (all PKs have "<envUUID>:" prepended)
<voidspace> debugging
<dimitern> voidspace, what's your query?
<voidspace> it's probably me
<voidspace> dimitern:         iter := addresses.Find(bson.D{{"machineid", machineId}}).Iter()
<voidspace> pretty simple
<dimitern> voidspace, right.. let me have a look in state
<voidspace> dimitern: no, I'm being dumb
<voidspace> dimitern: I think I'm fetching the wrong field for the address
<voidspace> should be Value and I'm using Address
<voidspace> dimitern: pretty sure that's it
<voidspace> running the test now
<dimitern> voidspace, ah, ok - there's not even an address there
<voidspace> yeah :-D
<voidspace> state test passes, now trying apiserver/provisioner
<voidspace> dimitern: yay - fails for the right reason now - wrong error message
<dimitern> voidspace, good! :)
<voidspace> awesome
<voidspace> the rest should be plain sailing ;-)
<dimitern> always nice to hear :)
<mattyw_> mgz, done
<bodie_> coreycb, you around?
<mgz> mattyw_: thanks!
<gsamfira1> mattyw_: Thank you :)
<sinzui> natefinch, dimitern katco: do either of you have a few minutes to review this momentous branch http://reviews.vapour.ws/r/1132/
<mattyw_> sinzui, I could do technically, but I'm not graduated so it wouldn't help :)
<dimitern> sinzui, sure, looking
<dimitern> sinzui, done
<sinzui> thank you dimitern
<natefinch> wwitzel3: how goes?
<perrito666> bbl
<mup> Bug #1430943 was opened: Subordinate charms are not displayed in `juju status`. <juju-core:New> <https://launchpad.net/bugs/1430943>
<lazyPower> When are we going to land enablement on cloud providers of windows images? We have the series available, but cannot do anything with it outside of a MAAS host - just curious if this work is scheduled for this cycle or an upcoming one.
<voidspace> biab
<natefinch> lazyPower:  re: windows, we have steps to enable it on a private openstack.  For public clouds, it's tricky, because we need special windows images... I don't know what the status of getting our windows images available on the various clouds is. alexisb might have an idea.
<lazyPower> natefinch: i figured that was the clincher, needing to get the baked in cloudebase-cloudinit service on the core windows image with sysprep
<lazyPower> as we dont support custom AMI's or anything - its kind of a non starter atm right?
<natefinch> lazyPower: custom AMI's is something we've been talking about... we should bring it up in nuremberg, that might fix a lot of our problems (and a lot of people want custom AMI's anyway)
<lazyPower> natefinch: we're going to have fun supporting that one :)
<lazyPower> may the games begin!
<perrito666> can anyone with superpower rubberstamp this? http://reviews.vapour.ws/r/1088/
<natefinch> lazyPower: gah you broke ctrl+mousewheel zooming on your site
<lazyPower> i did?
<lazyPower> hah, i did
<lazyPower> OLE!
<natefinch> perrito666: reviewing
<natefinch> perrito666: oops, or not: There was an error displaying this diff.
<perrito666> natefinch: well, my day has been along those lines so far
<natefinch> lazyPower: some custom scrolling javascript stuff, huh?  I can tell the page scrolls differently than normal, though I couldn't explain how
<perrito666> today I had to drive at the worst hour to the exact center of the village just because two people could not properly pass an envelope from one to the other so I could get it in another moment
<lazyPower> natefinch: source is here :) https://github.com/chuckbutler/pelican-porto
<natefinch> lazyPower: betcha it's the scroll to top thingy.... but my knowledge of javascript doesn't really go much beyond the basics.
<natefinch> lazyPower: which is to say: I'm better at filing bugs than fixing them ;)
<natefinch> perrito666: I think you can just do rbt post -u from the correct branch locally, and it'll update...
<perrito666> natefinch: that is not my proposal, actually
<natefinch> perrito666: oh, you're right.  sorry, wasn't paying attention
<natefinch> man, I hate testing with local provider... I never know if the problem I'm having is because local is a special kind of stupid, or if it's a general problem
<beisner> lol natefinch
<natefinch> I guess that should be "a stupid kind of special"  but both are mostly correct
<perrito666> ericsnow: do you know why https://github.com/juju/juju/pull/1805 is not being reviewboardized?
<ericsnow> perrito666: non-ascii characters
<ericsnow> perrito666: it's a bug in the GH hook
<ericsnow> perrito666: try using rbt post
<perrito666> I think I missed the chars
<perrito666> do you see them?
<ericsnow> perrito666: nope
<ericsnow> perrito666: could be in the metadata GH sends
<perrito666> ah
<perrito666> mmm and now rbt cannot reach reviewboard
<perrito666> ... what a day
<katco> who is familiar with watchers?
<ericsnow> katco: like with who watches them or something
<ericsnow> katco: <wink>
<katco> ericsnow: the comedian.
<katco> ericsnow: https://www.youtube.com/watch?v=YhJMAaix0CA
<thumper> sinzui: re bug 1430898, it may be related to bug 1430791.
<mup> Bug #1430898: run-unit-tests ppc64el timeout <ci> <gccgo> <ppc64el> <regression> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1430898>
<mup> Bug #1430791: Upgrade-juju is broken on most/all substrates <ci> <regression> <upgrade-juju> <juju-core:In Progress by waigani> <https://launchpad.net/bugs/1430791>
<thumper> sinzui: waigani is working on the latter, and once it lands, I may get you to see if it fixes the former
<sinzui> thumper, That would be nice
<waigani> thumper, sinzui: http://reviews.vapour.ws/r/1133/
<thumper> waigani: you have checked in state/main
<thumper> waigani: can you reset on master and remove it plz?
<waigani> thumper: ugh, sorry
<waigani> thumper: gone
<thumper> waigani: +1, make sure you use the "fixes-1430791" bit in the merge instruction
<perrito666> anyone knows how to tell rbt to ignore ssl certs?
<jw4> ericsnow, did you get your watcher questions answered?
<ericsnow> jw4: :)
<jw4> :)
<jw4> Looks like it was katco with the original question too
<katco> jw4: i've got a crack team of wallyworld looking into it :)
<jw4> katco, awesome - :)
<katco> jw4: we are just seeing some odd test behavior. we register a watcher on the apiserver, but then it seems it can't find it.
<jw4> hmm interesting
<perrito666> ericsnow: know how to ask rbt to ignore ssl cert?
<perrito666> mm and now rbt asks me for a user
<ericsnow> perrito666: https://github.com/juju/juju/blob/master/doc/contributions/reviewboard.md#rbt-authentication
<perrito666> ericsnow: I have read the code, it seems there is no way, in my version, to bypass ssl
<perrito666> but anyway I would like this to work as is :p
<ericsnow> perrito666: why do you want to bypass SSL?
<ericsnow> perrito666: (I don't know if there's a way)
<perrito666> ericsnow: I am getting an error in my vivid install but I'll recreate the virtualenv just in case
<ericsnow> perrito666: the phrase *bleeding* edge comes to mind :)
<menn0> thumper or davecheney: jujud test fixes http://reviews.vapour.ws/r/1137/  (noticed while looking at an unrelated panic)
#juju-dev 2015-03-12
<niedbalski> ericsnow, ping
<niedbalski> ericsnow, http://reviews.vapour.ws/r/1088/ , re-submitted, thanks !
<jam> katco: just wondering if you got to the LeaderSettings watcher registration issue or whether I should be picking it up
<katco> jam: hey i'm working on that right now
<jam> katco: k
<katco> jam: it turned out to be a bit trickier to diagnose; the code was registering the worker correctly, but the worker itself didn't adhere to the state.NotifyWatcher interface
<katco> jam: wallyworld figured that one out
<katco> jam: so there's a few issues; one question is why is the type assertion failure not being bubbled up
<katco> jam: but at any rate, i'll have a patch momentarily. had to eat dinner.
<jam> katco: when you EOD can you make sure to push it up so I can pick it up if it isn't done? and a ping to me to look at it would be good
<katco> jam: it's 21:22 here, so i'll get a PR in before i stop :)
<katco> jam: http://reviews.vapour.ws/r/1141/ i will stay on for questions/fixes
<jam> katco: looking now
<katco> jam: wallyworld made an excellent suggestion for making the test better, but i don't have the time tonight to grok it; i can shore that up later: http://pastebin.ubuntu.com/10582820/
<jam> katco: right. I was about to suggest using the NotifyAsserter directly on the Changes rather than a secondary stream
<jam> katco: reviewed
<katco> jam: ty looking
 * katco smacks head
<katco> jam: so, you're absolutely right
<katco> jam: do you think i should just implement a full watcher instead of attempting to piggy-back off of settingsWatcher?
<katco> jam: or do you think a simple for loop wrapping the settings-watcher's change is sufficient?
<jam> katco: if all it needs to do is proxy another one, thats fine (I think). Is there a reason you can't just hand back the settings watcher itself?
<katco> jam: yeah, the state.NotifyWatcher signature demands a signature of type Changes() <-chan struct{}
<jam> katco: if you are embedding *settingsWatcher, doesn't that give you *its* Changes channel?
<katco> jam: and that's the bit preventing it from being registered
<jam> ah, so we have changes but its not a struct{}
<jam> k
<katco> jam: the settings watcher is not of type NotifyWatcher; its signature is <-chan Setting something or other
<jam> katco: from being registered? or from being instantiated ?
<katco> jam: i believe from being registered
<katco> jam: we do some reflection from what wallyworld found
<katco> jam: and anything stuffed into resources which does not conform to NotifyWatcher will not be utilized
<jam> katco: right but that's not *registration* thats *extraction*
<jam> katco: because resources can be whatever
<katco> jam: right sorry; extraction
<jam> but when you go to look them up it checks that the thing that's there matches.
<katco> right
<jam> probably we should have good logging as well
<jam> something that says "I found something, but its not what you're looking for"
<katco> yeah we discussed that; it's a bit odd that this didn't trigger some kind of error/log, *something*
<jam> we generally don't send that as an error to the caller (though I think we could here) because of Permission/Priority concerns.
<katco> we also discussed maybe having separate register methods for different types to force this into a compile-time error
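The failure mode under discussion (registration always succeeds, and the type check only happens at extraction, where a silent assertion failure looks like "unknown watcher id") can be sketched with hypothetical stand-in types; none of these are juju's real definitions:

```go
package main

import "fmt"

// NotifyWatcher mirrors the state.NotifyWatcher shape discussed above.
type NotifyWatcher interface {
	Changes() <-chan struct{}
}

// settingsWatcher also has a Changes method, but with a different
// channel element type, so it does NOT satisfy NotifyWatcher.
type settingsWatcher struct{ out chan map[string]interface{} }

func (w *settingsWatcher) Changes() <-chan map[string]interface{} { return w.out }

// resources stands in for the apiserver's resource registry: it
// stores interface{} values, so registering anything succeeds.
var resources = map[string]interface{}{}

func main() {
	resources["1"] = &settingsWatcher{} // registration: no error

	// Extraction is where the check happens; without logging, the
	// failed assertion is invisible to the caller.
	w, ok := resources["1"].(NotifyWatcher)
	fmt.Println(w, ok) // <nil> false
}
```

Logging at this lookup point ("I found something, but it's not what you're looking for") is the improvement jam suggests above.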
<jam> anyway, I'll be back in 10
<katco> k, i'll implement the for loop
<katco> jam: i'm a bit concerned about what happens when the embedded settingsWatcher is told to stop; what would happen to the LeadershipSettingsWatcher's Changes loop?
<wallyworld> katco: coming late to the party, but in such cases you'd implement Stop() on the outer struct and call the embedded Stop() as well as cleaning up anything else
<wallyworld> axw: i've made a few comments on the gh pr
<katco> wallyworld: yeah the thought crossed my mind; but that means i need to probably implement a tomb of my own
<axw> wallyworld: thanks. I've gotta head out shortly, will respond when I get back
<wallyworld> sure
<wallyworld> katco: yeah maybe, i'd need to check the code in more detail. the reason for not using settings watcher directly is the changes method returns a settings delta, not just an event? and we want notify watcher semantics?
<katco> wallyworld: yeah
<wallyworld> hmmmm
<wallyworld> katco: could we just add a Changes() <-chan struct{} to settings watcher
<wallyworld> to make it also a notify watcher
<katco> i don't think go allows overloading like that
<katco> naming conflict
<katco> it doesn't take into account the entire signature
<wallyworld> ah, yeah
<wallyworld> not i remember why i hate Go often
<wallyworld> now
<katco> lol
<wallyworld> that and no exceptions and no generics
<katco> it forces you to stay simple... which can be an immensely good thing :)
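The naming conflict katco points out is a language rule: Go keys a type's method set by name alone, so a second `Changes` with a different signature cannot coexist. A minimal sketch (hypothetical types):

```go
package main

import "fmt"

type settingsWatcher struct{}

// Changes returns the settings-delta channel (placeholder type).
func (w *settingsWatcher) Changes() <-chan map[string]string {
	return make(chan map[string]string)
}

// A second method of the same name does not compile, regardless of
// return type ("method redeclared"):
//
//   func (w *settingsWatcher) Changes() <-chan struct{} { ... }

func main() {
	fmt.Println("only one Changes method per type")
}
```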
<wallyworld> katco: so we could extract a base settings watcher and embed that in 2 variations - what we have now and a new notify variant
<wallyworld> the 2 new versions of the containing struct would have the different Changes() signatures
<katco> wallyworld: hm. i'm kind of liking that idea. i do worry about unforeseen consequences
<wallyworld> that's what tests are for :-D
<wallyworld> i think we have pretty ok coverage in that area
<katco> maybe earlier today i would have agreed with that
<katco> :)
<wallyworld> lol
<katco> i think i'll give that way a go though
<katco> good idea
<wallyworld> good luck :-)
<wallyworld> i think it will work fine, and solves the issues encountered
<katco> wallyworld: mmm... i think that just makes the issue worse. because i'd then have the same problem in 2 places. the base watcher would still have to notify on a <-chan settings
<wallyworld> couldn't the base watcher use an embedded interface to call when it needs to notify?
<wallyworld> i'd need to look at the code to map it out
<katco> wallyworld: hm. no that may work.
<wallyworld> the one with the notify semantics would just discard the info given and shove struct{} onto a channel
<katco> wallyworld: right, but you're still talking to a channel; you lose the magic of select
<katco> so the way this would work is the i'm in the base type's select, specifically in the case statement that received a change
<katco> i then call the notify function that was passed to me
<katco> that notify function has to send on a channel
<katco> except that now if stop is called, it will block
<katco> as it's implemented now, the select would select the stop even if it was waiting on sending the notification
<katco> because it's in its select
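The blocking concern katco raises (a pending notification send deadlocking against Stop) is conventionally handled by also selecting on the stop channel while sending. A sketch of that adapter shape, with assumed placeholder types rather than juju's real ones:

```go
package main

import "fmt"

// settings stands in for the settings-delta payload type.
type settings map[string]string

// notifyAdapter wraps a settings-change stream and exposes the
// NotifyWatcher-style Changes() <-chan struct{} the registry needs.
type notifyAdapter struct {
	out  chan struct{}
	stop chan struct{}
}

func newNotifyAdapter(in <-chan settings) *notifyAdapter {
	a := &notifyAdapter{out: make(chan struct{}), stop: make(chan struct{})}
	go func() {
		defer close(a.out)
		for {
			select {
			case <-a.stop:
				return
			case _, ok := <-in:
				if !ok {
					return
				}
				// Select on stop while sending, so Stop()
				// can never deadlock a pending notification.
				select {
				case a.out <- struct{}{}:
				case <-a.stop:
					return
				}
			}
		}
	}()
	return a
}

func (a *notifyAdapter) Changes() <-chan struct{} { return a.out }
func (a *notifyAdapter) Stop()                    { close(a.stop) }

func main() {
	in := make(chan settings)
	a := newNotifyAdapter(in)
	go func() { in <- settings{"k": "v"} }()
	<-a.Changes()
	fmt.Println("notified")
	a.Stop()
}
```

A full implementation would use a tomb for error propagation and make Stop idempotent, which is the extra plumbing katco ends up copying.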
<wallyworld> hmmm
<katco> now mind you
<katco> my brain is revolting a bit at this time of night
<thumper> wallyworld: do you have a link to the CI jobs on jenkins?
<katco> this is where generics would be wonderful
<thumper> url hacking is failing me
<katco> thumper: o/
<thumper> hey katco
<wallyworld> thumper: http://juju-ci.vapour.ws:8080/job/github-merge-juju/ ?
<wallyworld> or http://juju-ci.vapour.ws:8080/
<wallyworld> the last one is the dashboard for all the jobs
<thumper> that doesn't have the CI jobs on it
<wallyworld> thumper: you need to be logged in
<thumper> as who?
<thumper> I don't have any creds that AFAIK
<rick_h_> thumper: everything got locked down due to the creds leakage issue raised last week
<thumper> hey rick_h_
<wallyworld> katco: wanna leave it till tomorrow? I'll see if i can hack it a bit today
<katco> wallyworld: nah i'm close
<katco> wallyworld: i don't think there's a way around it. i have to do a copy/paste job. it needs its own tomb setup
<wallyworld> yeah, could be right
<rick_h_> thumper: howdy
<katco> this is when i think go's simplicity works against it
<wallyworld> exactly
<wallyworld> and i run into that *all* the time
<katco> i think they are trying to solve this still
<wallyworld> for me, it's too simple
<katco> but code duplication of this complexity is annoying
<wallyworld> leads to *lots* of cut and paste boilerplate
<wallyworld>  code duplication of any complexity can get annoying - if i never see another cut and paste arraycontainsstring() it will be too soon
<thumper> FYI: both CI blockers have now been cleared
<thumper> verified that the fix that landed earlier fixed the upgrade issue
<thumper> and the power test isn't timing out on cmd/jujud/agent either
<thumper> pretty sure those two things were related
<katco> wallyworld: jam: helper method for registering watcher: RegisterWatcher(...) or RegisterNotifyWatcher(...)?
<wallyworld> Notify imo
<wallyworld> as there are different kinds
<wallyworld> eg StringsWatcher
<mup> Bug #1430791 changed: Upgrade-juju is broken on most/all substrates <ci> <regression> <upgrade-juju> <juju-core:Fix Released by waigani> <https://launchpad.net/bugs/1430791>
<mup> Bug #1430898 changed: run-unit-tests ppc64el timeout <ci> <gccgo> <ppc64el> <regression> <test-failure> <juju-core:Fix Released> <https://launchpad.net/bugs/1430898>
<mup> Bug #1431130 was opened: make kvm containers addressable (esp. on MAAS) <addressability> <kvm> <maas-provider> <network> <juju-core:In Progress by dooferlad> <https://launchpad.net/bugs/1431130>
<katco> wallyworld: gosh... it makes apiserver/common depend on state
<katco> i don't know if it's worth that.
<wallyworld> no
<wallyworld> let's not do that
<katco> pushing up changes with a little piece of my rotted heart.
<wallyworld> lol
<katco> http://reviews.vapour.ws/r/1141/
<katco> i think we could probably do something better by passing in a channel into a base type
<katco> but i'm not sharp enough right now to conceive of that
<wallyworld> katco: i know it's late for you, and i'm happy to do it, but i do think the test needs to use something like the pastebin i offered earlier
<katco> wallyworld: i was going to land that separately... if you want it in i'll have to hand off
<katco> wallyworld: otherwise i won't be fresh for tomorrow. b/c babies don't have a snooze ;)
<wallyworld> katco: ok, so long as you land first thing tomorrow before we branch 1.23 :-)
<katco> wallyworld: land the separate branch? or land changes you will make?
<wallyworld> the followup branch with revised tests
<wallyworld> this branch at least will fix the issue itself
<katco> wallyworld: keep in mind i also have to do the amz fix
<wallyworld> i can do the tests one
<katco> wallyworld: it would be appreciated... so does this one have a ship it then?
<wallyworld> katco: just eyeballing changes - they are a cut and paste right?
<katco> wallyworld: yup
<katco> wallyworld: from settingsWatcher
<wallyworld> katco: ok, tyvm for persevering with this, +1
 * katco teuche, Jenkins. it is i who waits on you.
<katco> touche even
<mup> Bug #1430898 was opened: run-unit-tests ppc64el timeout <ci> <gccgo> <ppc64el> <regression> <test-failure> <juju-core:In Progress> <https://launchpad.net/bugs/1430898>
<mup> Bug #1431134 was opened: fix container addressability issues with cloud-init, precise, when lxc-clone is true <addressability> <cloud-init> <ec2-provider> <lxc> <maas-provider> <network> <precise> <usability> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1431134>
<axw> wallyworld: I just realised there's more to do, I hadn't updated the apiserver/storageprovisioner code to use the new watchers
<axw> doing so now
<katco> gah
<katco> between the time i submitted and the job queued up that bug hit
<katco> and now it rejected the build
<katco> can someone land this when trunk opens up? https://github.com/juju/juju/pull/1813
<katco> i need to go to bed
<katco> wallyworld: axw: anastasiamac ^^
<axw> katco: ok, will keep an eye out
<axw> good night
<katco> axw: ty sir. have a good day
<jam> wallyworld: katco: so I looked at the change, and we need to tweak it a bit for not actually being a settings watcher, also it is completely untested as is. But if you want to hand it off to me, since I actually need it for my patch, I can pick it up.
<jam> wallyworld: would you rather just hand it off to me?
<wallyworld> jam: if you want, i realise the tests need updating, but it was late for katco so i was going to follow up with a branch, but if you want to do it for your patch that wuld be fine with me
<jam> wallyworld: well both that we now have a lot of untested code in LeadershipSettingsWatcher, and the test needs tweaking for featuretests
<wallyworld> jam: agreed, hence the tests were to be updated straight away
<wallyworld> jam: i was looking to use something like statetesting.NewNotifyWatcherC as is used elsewhere
<wallyworld> was that how you were going to redo the tests?
<jam> wallyworld: yeah, I'm grabbing your paste as a reference
<wallyworld> oh, that paste was just an initial brain dum
<wallyworld> p
<jam> wallyworld: I had forgotten that NotifyWatcherC was the original that I used to create the NotifyAsserterC channel stuff
<wallyworld> but should be close
<jam> wallyworld: yeah, I'm tweaking it, but it gives a good reference
<jam> like, we have to import statetesting :)
<wallyworld> of course :-)
<wallyworld> i didn't compile it, just dumped the text into pastebin
<wallyworld> jam: also for later, did you talk with mark about Properties vs Hints?
<jam> no, mark was out of town last week
<wallyworld> ok, np
<jam> wallyworld: do you know why featuretests starts mongo in a different way?
<wallyworld> no
<wallyworld> i either didn't realise or forgot that it did
<wallyworld> jam: but isn't this standard? coretesting.MgoTestPackage(t)
<jam> hmm… maybe it doesn't. I was running into problems with a --journal flag but only for the featuretests directory
<wallyworld> in package_test
<jam> but they seem to be running now
<jam> maybe it was something else
<wallyworld> symlink issue :-P
<jam> wallyworld: from what I've seen could be
<wallyworld> oh, i was joking :-)
<jam> nope, still happening: [LOG] 0:03.670 ERROR juju.worker exited "state": cannot create database index: cannot use 'j' option when a host does not have journaling enabled
<jam> my state worker spins endlessly in those tests
<wallyworld> hmmmm, not seen that before
<axw> wallyworld: can you please check my latest diff in http://reviews.vapour.ws/r/1128/
<axw> wallyworld: I've updated storageprovisioner to match the scope changes
<wallyworld> sure
<wallyworld> dimitern: do you know why the blocking bug fails on ppc64 only?
<dimitern> wallyworld, I don't know that for sure - it might be gccgo-specific
<wallyworld> dimitern: it's hard to tell from the console log - there's so many go routines, most of which are blocked on a select; where did you see any concrete error info?
<dimitern> wallyworld, search for the test I mentioned in the log - it's there, about the newDummyWorker (or whatever, the second thing) - towards the end of the panic dump you can see upgrade_test.go:1779 (or was it machine_test.go)
<wallyworld> dimitern: thank you, my eyes weren't working properly
<dimitern> wallyworld, oh, np - it's hard to see anything useful in most of these panics
<axw> wallyworld: we're going to need to expand the storage provider and provisioner. if a machine reboots, we'll have to recreate tmpfs/loop
<TheMue> morning o/
<dimitern> TheMue, o/
<wallyworld> axw: unless they say they want transient right?
<axw> wallyworld: transient volumes should still be attached after restart, they just may be empty
<axw> wallyworld: at least, in the ec2 definition
<wallyworld> yes, that is true
<axw> wallyworld: I'm thinking of folding the filesystem mounting into the storage provider/provisioner
<axw> wallyworld: and how to deal with remounting after restart. then I realised we need to do the same for tmpfs/loop
 * TheMue just read the State discussion. yeah, there are always repeating names in many projects, e.g. also Context.
<axw> TheMue:  it occurred to me as I used the word "context" a lot in that email ;)
<TheMue> ..ooOO( Data, Dump, Thing, Stuff, ... )
<TheMue> axw: yeah, as "context of a state"
<axw> wallyworld: so, my current thinking is that we require Attach{Volume,Filesystem} to be idempotent
<wallyworld> axw: i think it makes sense for the provisioner worker to have that responsibility of dealing with restart/remounting
<wallyworld> yes
<axw> wallyworld: and then the first time we see an already provisioned storage, we request a re-attach
<wallyworld> like unit hooks
<axw> which may be a no-op
<wallyworld> yep
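The idempotent-attach idea axw describes above can be sketched roughly like this; every name here is hypothetical, standing in for the real storageprovisioner types:

```go
package main

import "fmt"

// attacher is a toy stand-in for a storage source. The real provider
// API is different; this only illustrates the idempotency contract.
type attacher struct {
	attached map[string]bool // volumeId -> already attached?
}

// AttachVolume is idempotent: attaching an already-attached volume is
// a no-op, so after a machine restart the provisioner can simply
// re-request attachment for everything it sees as provisioned.
func (a *attacher) AttachVolume(volumeId string) error {
	if a.attached[volumeId] {
		return nil // already attached: nothing to do
	}
	// ... provider-specific attach work would go here ...
	a.attached[volumeId] = true
	return nil
}

func main() {
	a := &attacher{attached: make(map[string]bool)}
	fmt.Println(a.AttachVolume("vol-0")) // first attach
	fmt.Println(a.AttachVolume("vol-0")) // re-attach after "restart": no-op
}
```

The point of the contract is that the caller never has to distinguish "first attach" from "re-attach after reboot".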
<axw> wallyworld: also, we don't need to think about htis too much now, but I think we're going to want to run Attach on both env + machine for env-scoped attachments
<axw> wallyworld: e.g. for Ceph where we'd want to prepare the resource in the env storage provisioner, then attach/mount in the machine storage provisioner
<wallyworld> yeah, i haven't thought about that
<wallyworld> not enough brain space
<axw> that can come later, I'm pretty confident we can work it in
<axw> wallyworld: anyway, I think I'll work on provisioning filesystems/filesystem attachments in the storage provisioner now
<axw> I won't worry about loop/tmpfs for the moment, will come back to them later
<wallyworld> you mean the restart behaviour for tmpfs/loop?
<axw> wallyworld: we don't watch volume attachments in the storage provisioner yet, so we can't handle them until we do
<axw> wallyworld: if you prefer I can do it for volumes first, I was going to leave it so you could continue on it in parallel
<wallyworld> axw: ok, start on fs, i have some health stuff to do but can then pick up volumes
<axw> though TBH, it's kinda much the same... I might see if I can abstract some stuff to make the core logic common
<wallyworld> ok sounds good
<katco> ugh sleepless night in our household o.0
<katco> going to work on goamz while i'm up
<coreycb> bodie_, here now.  I'm on UK time this week and back to US eastern time next week.
<wallyworld> katco: is that you or a bot?
<katco> beep boop
<wallyworld> R2D2?
<katco> lol
<katco> i'm going to drink an r2d2 sized coffee
<katco> in a few hours
<wallyworld> double shot
<katco> we think she is teething again
<voidspace> dimitern: if you get a chance
<voidspace> dimitern: http://reviews.vapour.ws/r/1138/
<dimitern> voidspace, cheers, will look shortly
<dimitern> voidspace, dooferlad, TheMue, please take a look at this http://reviews.vapour.ws/r/1144/
<TheMue> dimitern: *click*
<voidspace> dimitern: s/live tests on MASS/live tests on MAAS/ ?
<voidspace> dimitern: in PR description
<voidspace> dimitern: ah, dhcp leases... :-)
<voidspace> dimitern: sounds like you've been having fun working all this out
<dimitern> voidspace, :) oh yeah - ofc
<dimitern> voidspace, it was a fun week, yeah :D
<dimitern> TheMue, thanks!
<TheMue> voidspace: MASS is ok, Metal as Super-Service
 * dimitern has bootstrapped a maas environment at least a 100 times in the past 4 days
<TheMue> ouch
<dimitern> voidspace, you've got a review
<voidspace> dimitern: thanks
<voidspace> dimitern: PR updated with extended test
<dimitern> voidspace, great! I'm having another look
<dimitern> voidspace, looks good to land
<voidspace> dimitern: cool, thanks
<katco> axw: ping
<wallyworld> katco: axw is here now, if you ping him again, or at least rub his lamp, he may appear in a puff of smoke
<wallyworld> or he was
<katco> lol
<katco> i think i figured it out, i just wanted to pick his brain
<mup> Bug #1431286 was opened: juju bootstrap fails when http_proxy is set in environments.yaml <juju-core:New> <https://launchpad.net/bugs/1431286>
<katco> wallyworld: got time for a review of goamz?
<katco> wallyworld: dimitern: https://github.com/go-amz/amz/pull/37
<dimitern> katco, will have a look shortly
<katco> dimitern: thx
<TheMue> dimitern: ready with a first pass and like it so far. but would like a 2nd review
<dimitern> TheMue, sure, thanks so far :)
 * TheMue steps out for lunch
 * fwereade has confused his sleeping patterns again and would like to beg for someone to implement a trivial branch for him while he goes and has breakfast and stuff
<wallyworld> katco: sorry, was afk. looking
<axw_> katco: pong
<katco> wallyworld: np at all
 * axw_ emerges in a puff of smoke
<katco> lol
<katco> axw_: hey, was mostly going to pick your brain re: io.Copy moving the stream pointer of a request body to the end
 * fwereade needs "leader-elected", "leader-deposed", and "leader-settings-changed" hooks added to charm/hooks
<wallyworld> ah, axw has already reviewed
<katco> wallyworld: ah cool
<axw> katco: I just commented on the PR
<axw> came on here to check if you were around :)
<fwereade> axw, jam, I have a couple of concerns re http://reviews.vapour.ws/r/959/ -- possibly unfounded? please consider
<axw> fwereade: okey dokey
<katco> axw: so i don't think making a type assertion for ReadSeeker will work, because the body of http.NewRequest wraps the incoming body in a ioutil.NopCloser
<jam> fwereade: I agree with you about axw's comment. I had one written myself, but forgot that publish button
<jam> why is it at the top....!
<fwereade> axw, actually I'm not sure I expressed them very well -- I just have a bit of nervousness about not coalescing
<fwereade> axw, it was an uphill struggle to convince myself it was ok in the client watcher code
<axw> katco: rats
<katco> axw: i know, i tried that at first =/
<jam> fwereade: so the tick-tock stuff was just preserving what we had and going forward with it. I also wasn't entirely happy as it seems very different from our standard behavior.
<katco> axw: i also realize there is an API break; but we *just* cut v3
<katco> axw: not really that sure what our stance is on that
<axw> katco: fair enough
<katco> axw: i suppose i'll let dimitern weigh in on that one
<dooferlad> dimitern: Could you take a look at https://github.com/juju/testing/pull/54/files
<axw> fwereade: I don't really understand about the watcher ownership. if the source Stop()s it cleanly, why wouldn't its Err() be nil?
<fwereade> axw, IMO the source shouldn't stop its watcher until it's exiting its goroutine
<jam> katco: official versions is "what was in a stable release" IMO
<dooferlad> dimitern: I tried running the .bat version and it doesn't work inside "wine cmd" so I am guessing it was broken anyway. I can test in Windows in a moment.
<katco> jam: stable release of juju? or of goamz?
<katco> jam: how did the leader settings work out btw. sorry i had to (try) and go to bed last night
<jam> katco: well goamz is trickier, as its a dependency. I thought it was an API version.
<jam> but if things are just broken in v3 seems like people can't use it
<jam> katco: working on the patch to tests, been very distracted this morning, but I can build on what you landed.
<katco> jam: well, not so much broken. v4 signing just uses more memory than it should.
<fwereade> axw, "just one deferred Stop, guaranteed to happen after the loop exits" feels much cleaner -- and thus indicates that any close of that channel inside the loop implies that *if* there's no error returned *someone else* has interfered with us, and we should complain
<axw> fwereade: I'm sure I'm being slow, but I still don't see how that affects the error state of the watcher. One way or another the source stops the watcher; if it's *asking* to stop it, why would the watcher stop with an error?
<jam> katco: does fixing the memory issue actually cause a change to the API?
<katco> jam: distracted? people on juju core? no! ;)
<jam> axw: if someone stops something you depend on, isnt that an error?
<jam> axw: (if I pull the handbrake while you're driving, I don't think you'd exit cleanly :)
<katco> jam: it does, because it necessitates some signature changes to take in io.ReadSeeker and not io.Reader
<katco> jam: we don't know how awesome axw is at drifting...
<axw> fwereade: so liveSource.Stop() should always error?
<katco> jam: if you're interested, here is one of the API breakers: https://github.com/go-amz/amz/pull/37/files#diff-7db3cb93944d57b2ffc803281c906018R209
 * axw is confused
<fwereade> axw, I think what would happen is
<jam> katco: so if you're really concerned, you could just add a new function PutReadSeeker
<fwereade> axw, liveSource.Stop calls q.tomb.Kill(nil); return q.tomb.Wait()
<jam> not an API break by most people's definition
<fwereade> axw, the watcher lives on happily in the background
<jam> I guess if the underlying code *requires* read seaker
<katco> jam: except the old function could not exist as is
<jam> seeker
<katco> jam: yeah exactly
<jam> katco: well you could always read and buffer
<dimitern> katco, dooferlad, ok, I'm back and responding in order :)
<katco> jam: well, that's what causes the memory concern. that's what we were doing prior
<fwereade> axw, the main loop observes that <-q.tomb.Dying(), and exits with ErrDying, without touching the watcher
<axw> fwereade: ahh, I see. it makes sense when taken with your other comment
<katco> dimitern: ty!
<fwereade> axw, cool :)
<fwereade> axw, and fwiw I don't think you're being slow, these types with internal goroutines are subtle and quick to anger
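fwereade's ownership rule — exactly one deferred Stop, run only after the loop exits, with the source's Stop just signalling and waiting — might look like this stripped-down sketch. Plain channels stand in for tomb.Kill/tomb.Wait, and every name is hypothetical:

```go
package main

import (
	"fmt"
	"sync"
)

// fakeWatcher stands in for a state watcher owned by the loop goroutine.
type fakeWatcher struct {
	mu      sync.Mutex
	stopped bool
}

func (w *fakeWatcher) Stop() error {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.stopped = true
	return nil
}

func (w *fakeWatcher) Stopped() bool {
	w.mu.Lock()
	defer w.mu.Unlock()
	return w.stopped
}

type source struct {
	dying chan struct{} // closed to ask the loop to exit (like tomb.Kill)
	done  chan struct{} // closed when the loop (and its defers) are finished
	once  sync.Once
	w     *fakeWatcher
}

func newSource(w *fakeWatcher) *source {
	s := &source{dying: make(chan struct{}), done: make(chan struct{}), w: w}
	go s.loop()
	return s
}

func (s *source) loop() {
	defer close(s.done) // runs last: signals Stop's wait
	defer s.w.Stop()    // the ONLY place the watcher is stopped
	<-s.dying           // like <-tomb.Dying(): exit cleanly, don't touch the watcher here
}

// Stop mirrors tomb.Kill(nil) followed by tomb.Wait(): signal, then wait.
func (s *source) Stop() error {
	s.once.Do(func() { close(s.dying) })
	<-s.done
	return nil
}

func main() {
	w := &fakeWatcher{}
	s := newSource(w)
	fmt.Println(s.Stop(), w.Stopped())
}
```

Because the watcher is stopped only by the loop's own deferred call, any close of its channel observed *inside* the loop really does mean outside interference.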
<katco> daughter is awake (at the appropriate time this time)... afk for a bit
<dooferlad> dimitern: no rush - was about to get lunch anyway
<dimitern> dooferlad, cheers
<dimitern> katco, reviewed
<dimitern> katco, couldn't help much I'm afraid
<axw> jam: if you come up with something better than the tick/tock, I'm happy to change the storage source. I'll fix the EnsureErr/watcher.Stop bit tomorrow
<jam> axw: well that patch hasn't landed has it?
<axw> jam: which patch?
<axw> jam: I mean, I'll change the storage source to match whatever you do for the relation one
<jam> axw: ah
<jam> katco: so you can add a new memory-efficient API, and leave the old inefficient one around for compatibility
<fwereade> jam, just posted http://reviews.vapour.ws/r/1145/, please let me know if you spot any impedance mismatches with what you're doing with Filter
<jam> fwereade: if it is run-leader-elected, I just merged it, the API fits
<fwereade> jam, will be writing proper tests for it today once I'm ~functional again
<fwereade> jam, cool
<fwereade> jam, pushed an update or two since I last spoke to you
<fwereade> jam, but I don't think I touched leader-settings-changed
<fwereade> jam, would also appreciate a second opinion on http://reviews.vapour.ws/r/1134/
<fwereade> jam, and if you run your merged branch and find that it looks like it roughly works, would you send jamespage a link to the code?
<fwereade> jam, l-s-c is the important bit for him AIUI, l-e is just a bonus, but giving the structure a bit of a kick around would be good all the same
<katco> dimitern: ty for the review
<katco> dimitern: so i can't keep the old version. it will error if we don't seek back to 0 in the stream
<dimitern> katco, right, and it can't be used only for signV2 ?
<katco> dimitern: correct. we deprecated v2 for s3
<dimitern> katco, can you at least add PutReader that has the same signature, but panics when called (e.g. "use PutReadSeeker in v3") ?
<katco> dimitern: absolutely
<dimitern> katco, +1
<katco> dimitern: well actually... typing this out. isn't a compile time error better than a runtime panic?
<dimitern> katco, depends I guess
<dimitern> katco, ok, let's suck it up then
<dimitern> katco, leave PutReader as you did it
<katco> dimitern: or i could change the name, and any references to the old name would be compile-time errors
<dimitern> katco, and add a comment about the change (was there an issue # as well?) - in case somebody wonders
<katco> dimitern: yes there was
<dimitern> katco, it will still be a compile time error if the name is the same but not the interface type
<katco> dimitern: ok, so land it as-is?
<katco> dimitern: or i'm sorry... with a comment
<katco> dimitern: updated: https://github.com/go-amz/amz/pull/37
<dimitern> katco, with a comment, but how about those benchmarks?
<jam> fwereade: well. my tests are currently hanging right in the middle, which I don't understand, as they should at least be timing out, but I'm trying to sort through it.
<jam> I tried SIGQUIT but the scrollback is soo big
<jam> ah, debug has it, the Merge API request is hanging
<jam> katco: !!! :)
<dimitern> dooferlad, reviewed, how about you to review mine ? :)
<katco> jam: merge settings is hanging?
<jam> yep
<jam> I see the request go in, and no response
<jam> debugging now
<jam> katco: http://paste.ubuntu.com/10585240/
<katco> dimitern: i don't know how long this will take, but we want these changes in before the cut of v1.23 today
<katco> dimitern: i need to help jam out right now
<katco> jam: do you have a link handy to show the code that generates this?
<fwereade> jam, ah, I think I know what you need
<fwereade> jam, run a WorkerLoop in the test
<katco> that would do it
<katco> fwereade: i don't think i've actually said hi to you in awhile
<katco> sorry i've been scattered all over the place
<fwereade> katco, not sure I have either, how goes it?
<katco> i am a bit o.0
<katco> up all night with my daughter
<fwereade> katco, no worries, I have been in full basement-troll mode myself
<fwereade> katco, ouch
<katco> lol
<katco> can i be a basement elf instead?
<jw4> Galadriel Cox-Buday
<fwereade> haha
<katco> high praise jw4
<jw4> :)
<dimitern> katco, ok, so land it then
<katco> dimitern: at worst it will not make things worse
<dimitern> katco, you have my lgtm and I get it :)
<fwereade> speaking of trolls, if anyone was reading hpmor and then kinda dropped out because update delays, it's finishing in 2 days
<jam> katco: so is there an easy way to update leadership settings outside the API? I'd probably just work around this, but we'd want to fix it
<fwereade> jam, I think we do want to fix it, but I'm shelving it under the general "we need to fix shared resources" heading in my mind
<katco> jam: sure, let me grab some code
<jam> fwereade: "fix it" being ?
<fwereade> jam, ie an apiserver needs an explicitly-created and managed lease-manager to run, rather than just hoping someone else has started one in the background
<fwereade> jam, katco: the action-at-a-distance thing of WorkerLoop is what makes me edgy
<jam> katco: so I added debugf statements, and I see it get to the point of checking if we're still the leader
<jam> then it hangs
<jam> without getting a failure or success
<fwereade> jam, are you running a WorkerLoop in the test?
<jam> fwereade: TestLeaderSettingsEvents code
<jam> do you need another worker running?
<fwereade> jam, yeah
<fwereade>                 workerLoop := lease.WorkerLoop(st)
<fwereade>                 return worker.NewSimpleWorker(workerLoop), nil
<fwereade> if you've got one of those running for the duration of the test it'll be fine
<jam> fwereade: for the *api* not to hang?
<fwereade> jam, yes
<jam> *ugh*
<fwereade> jam, hence what I was saying above about making the dependency explicit
<katco> jam: the reason being there needs to be a lease manager instance running
<jam> at the very least dying an ugly death of "nobody responding" would be far better
<katco> jam: and the lease manager runs in a worker
<jam> katco: fwereade: wouldn't JujuConnSuite be the place to be starting this sort of thing?
<katco> jam: to answer your other question, you can get leadership settings off of state, and then update that
<katco> jam: currentSettings, err := st.ReadLeadershipSettings(serviceId)
<katco> 		currentSettings.Update(rawSettings)
<katco> 		_, err = currentSettings.Write()
<fwereade> jam, katco: I guess that's probably better but I'd almost rather have the ugliness front-and-centre so we maintain an incentive to notice and fix it
<fwereade> jam, katco: perhaps that is optimistic
<katco> fwereade: you are saying there is an issue running the lease manager in a worker?
<fwereade> jam, katco: but even so, what I hate about JCS is that it just sets up *everything* our code might need
<fwereade> katco, I'm saying that having the api able to hang because a completely separate worker hasn't been started is an issue
<katco> fwereade: i don't think we can do much better than an error there. the two are coupled
<fwereade> katco, we could start it explicitly inside apiserver and that'd be ~fine, modulo all the yuckiness of writing ad-hoc Runners into our workers
<fwereade> katco, or we could [write some more infrastructure] and have it created and passed into the apiserver worker, and that'd also be fine
<fwereade> katco, it's really just the action-at-a-distance thing
<katco> fwereade: what about simply returning bool if it's not setup yet, and then returning an error?
<fwereade> katco, (and, btw, it's not very safe -- I caused myself a bunch of confusion by accidentally starting two workers in a test)
<katco> fwereade: of course that should never happen in an actual environment
<fwereade> katco, doesn't address the "why does package X arbitrarily depend on someone else calling some function in package Y"
<jam> katco: fwereade: now I'm getting "failed to merge leadership settings"
<jam> seems wordpress/0 isn't the leader of wordpress ?
<jam> fwereade: so I need to claim leadership and all that stuff first?
<jam> I think I'll just write to state
<jam> far less setup overhead
<jam> fwereade: ideally this sort of test wouldn't even need a state running
<katco> fwereade: well, i mean that's just properly decoupled code, right? the leadership api has a dependency on the lease manager. best to keep those decoupled than to more tightly couple them for the sake of testing
<katco> jam: yeah you have to be leader
<katco> jam: dems the business rules
<jam> katco: being able to run an API server that will hang responding to API calls seems a bit broken
<katco> jam: i'm claiming that's a separate issue; we can easily return an error there
<katco> jam: also i'm questioning whether that situation will arise anywhere but full-stack testing scenarios
<jam> katco: I'm saying that having the Leadership API exposed means it should be running the stuff it needs underneath
<jam> IMO
<fwereade> katco, they're coupled,  full stop. not very tightly, but they are; and so an explicit dependency, whether external ("you can't create an apiserver without a lease manager") or internal ("when starting an apiserver, start and track a lease manager") is desirable
<fwereade> katco, and IMO an external one is far superior
<fwereade> katco, that way you don't have the mess around monkey-patching global funcs to test the code with the dependency
<jam> fwereade: agreed (I think we're saying the same thing from different POV)
<fwereade> jam, yeah, I haven't perceived us disagreeing
<jam> fwereade: well I disagree with you all the time, but that's just when you're wrong :)
<fwereade> jam, this was one of the major motivations for what I've been saying about JFDIing a worker registry
<fwereade> jam, ;p
<jam> fwereade: on a different note, we have a small naming issue between LeadershipSetting and LeaderSetting
<jam> fwereade: I have the feeling we should try to standardize somewhere
<fwereade> jam, bugger, I have not been consistent, have I
<jam> fwereade: the API you requested from me was LeaderSettingsEvents
<fwereade> jam, I think LeaderSettings is perhaps more correct?
<jam> the uniter method is s.uniter.LeadershipSettings.Merge
<fwereade> jam, they're the settings that are/were set by the leader, not the settings that pertain to the concept of leadership
<jam> fwereade: I feel like LeadershipSettings would be "who is the leader" (settings about Leadership) vs "LeaderSettings" is what did the leader *say*
<fwereade> jam, paraphrase jinx
<jam> fwereade: :)
<jam> fwereade: so I'll stick with LeaderSettings at this level, and we can look to push that down into apiserver/leadership/settings.go
<fwereade> jam, shtm, thanks
<jam> sounds hot to me ?
<fwereade> jam, that was *exactly* what I thought, and decided to let it stand :)
<jam> so hairy thank mary?
<katco> fwereade: sorry back
<fwereade> katco, np
<jam> fwereade: so one thing that surprised me
<jam> fwereade: Discard*Event
<jam> blocks trying to send the discard message
<jam> is that what we want?
<voidspace> dimitern: ping
<fwereade> jam, yes
<jam> I guess so
<fwereade> jam, it guarantees that by the time the func returns, the discard has completed, and we can safely read from the corresponding channel without getting another event until there's a real change
<katco> fwereade: i agree with creating an apiserver and passing in a lease manager. i don't think i knew how to do that given the constraints of the apiserver instantiation at runtime
<jam> fwereade: I really wish we didn't have 50 lines of simplestreams noise when a test fails...
<fwereade> jam, me too
<jam> (though that's a sign of running too much in jujuconnsuite
<fwereade> katco, yeah, exactly, that was where I fucked up
<fwereade> katco, I sorta threw it over the wall at you without making clear that the worker dependency was going to be a problem
<katco> fwereade: but you can at least see the nice dependency tree being built up; it's completely possible if apiservers allowed for it
<dimitern> voidspace, pong
<katco> i.e. leadershipservice takes a leadershipmanager takes a leasemanager
<fwereade> katco, oh absolutely -- I think all our workers need that
<katco> IoC?
<katco> or the n-tier thing
<fwereade> katco, pretty much -- I think the dependency chains are potentially long
<fwereade> katco, in terms of things that can be usefully modelled as persistent shared resources
<katco> that's where DI comes in usually... with go we'd have to use factories
<fwereade> katco, pretty much everything needs (1) an api connection and (2) an environment in which we can guarantee we won't be running upgrade steps
<fwereade> katco, the apiserver needs (2), and a state connection, and a lease manager
<katco> yeah
<fwereade> katco, so I *think* I have some worthwhile ideas here and I hope to get onto them soon
<katco> awesome :)
<jam> fwereade: so I'm trying to write a test that we can Discard the first leadership message
<jam> however, the test gets to DiscardLeaderSettingsEvent before it actually gets the underlying first watcher event
<fwereade> jam, don't start reading from the discard chan until you've got the first watcher event
<jam> fwereade: is that *correct* or just good for testing?
<fwereade> jam, I think it's correct
<voidspace> dimitern: when we call StopInstances on the kvm / lxc brokers we use instance ids not container tags
<voidspace> dimitern: so inside StopInstances we don't have that information
<fwereade> jam, otherwise no client can create, then discard, without races
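The blocking-discard guarantee fwereade describes can be sketched as a single select loop: because the loop goroutine is the only one mutating the pending event, the rendezvous on the discard channel means the discard has taken effect by the time `Discard` returns. All names are illustrative:

```go
package main

import "fmt"

type eventer struct {
	in      chan int      // incoming raw events
	out     chan int      // delivered events
	discard chan struct{} // discard requests
}

func newEventer() *eventer {
	e := &eventer{in: make(chan int), out: make(chan int), discard: make(chan struct{})}
	go e.loop()
	return e
}

func (e *eventer) loop() {
	var pending int
	var outCh chan int // nil until there is something to send
	for {
		select {
		case v := <-e.in:
			pending, outCh = v, e.out
		case <-e.discard:
			outCh = nil // drop the pending event, if any
		case outCh <- pending:
			outCh = nil
		}
	}
}

// Discard blocks until the loop has dropped any pending event, so the
// caller can then read from out without seeing a stale event.
func (e *eventer) Discard() { e.discard <- struct{}{} }

func main() {
	e := newEventer()
	e.in <- 1   // an event arrives...
	e.Discard() // ...and is discarded before anyone reads it
	e.in <- 2   // a real change
	fmt.Println(<-e.out)
}
```

If `Discard` used a non-blocking send instead, it could return before the loop had processed it, reintroducing exactly the race being discussed.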
<voidspace> dimitern: it's not obvious to me how to go from one to the other
<voidspace> dimitern: it's provisionerTask that calls StopInstances and it's called with instance.Instance which doesn't have container id either
<fwereade> voidspace, whatever did the StartInstance ought to have somehow recorded the instance id assigned to the container
<voidspace> fwereade: ok, I'll look - thanks
<jam> fwereade: k, it *is* what discardConfig does, but I felt having the extra local variable was clumsy. will do
<fwereade> voidspace, instance id is the responsibility of the broker layer -- it creates them and returns them and ought to be able to understand them when passed back in
<fwereade> jam, that whole type is clumsy :(
<jam> heh
<voidspace> fwereade: it's definitely "somehow recorded", it just might be that we need to change the provisioner api to take instance id and do the lookup itself
<jam> its only 1000 lines in a single select loop
<jam> surely that can't be bad
<voidspace> fwereade: the new provisioner api takes a container id
<jam> fwereade: I will say, the need to have everything in the select is a bit of a shame
<jam> you can't farm out to other selects
<jam> I guess if they all pipelined into a channel
 * fwereade scratches head at voidspace -- link me code please?
<voidspace> fwereade: I'll look at how the two are associated when the instance is started
<voidspace> fwereade: this just landed this morning, a new ReleaseContainerAddresses api
<voidspace> fwereade: for releasing (with the provider and in state) and IP addresses statically allocated to the container
<voidspace> fwereade: so it needs to be called on StopInstances
<dooferlad> dimitern: Oh, I had already looked at yours but I thought TheMue said he was doing the review. Looked fine, but I didn't dig into great detail in terms of code style, I was mostly interested in the new/changed functionality.
<voidspace> fwereade: https://github.com/juju/juju/pull/1810/files
 * jam still loves that "remove my local changes" in git is "git co file"
<jam> I *guess* checkout the file again sort of makes sense if you cross your eyes and pull your brain out through your nose...
<jam> and it *is* the way you do it in SVN as well
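jam's aside ("git co" being a common alias for checkout) can be demonstrated in a throwaway repo; the file and message names below are arbitrary:

```shell
# Demo: discarding uncommitted local modifications by re-checking-out the file.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo original > notes.txt
git add notes.txt
git commit -qm 'initial commit'
echo scribbles > notes.txt   # a local, uncommitted change
git checkout -- notes.txt    # throw the local change away
cat notes.txt                # back to the committed content
```

The `--` separates the pathspec from any branch name, which is the unambiguous form of the command jam is describing.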
<dimitern> dooferlad, ok, so you can approve it :)
<dimitern> voidspace, let me have a look
<fwereade> voidspace, it looks to me like that code is itself getting instance ids from container ids because those ids are recorded in the state anyway?
<fwereade> voidspace, would you explain the problem again, I think I'm confused
<voidspace> fwereade: that code needs the container id and the instance id
<voidspace> fwereade: the ip addresses are allocated to the container id in state and the provider needs the instance id to do the release on the provider side
<voidspace> fwereade: so we need both
<fwereade> voidspace, right, and given a container id it looks in the state to figure out the instance id assigned to it
<voidspace> fwereade: that api needs to be called when we shut down a container - so at StopInstances time
<voidspace> fwereade: yes, from a container id we can get instance id
<voidspace> fwereade: but at StopInstances time we have instance id and not container id
<voidspace> fwereade: so I either need the broker to go from instance id to container id
<voidspace> fwereade: or change the api to take instance ids
<fwereade> voidspace, I don't *think* so, but let me marshal my thoughts for a couple of minutes
<voidspace> cool
<jam> fwereade: so, WantLeaderSettingsEvents() I'd like to discuss the proper way to enable/disable them. As I think we decided you definitely want an event immediately if you pass true, even if we didn't think there was an event.
<jam> fwereade: do you feel it is better to point-to/not-point to a Changes channel, always read from the channel but not send it on, or create a new watcher each time
<jam> certainly in handling a "true" case, we can just set outLeaderSettingsEventOn = outLeaderSettingsEvents and it will send an event.
<jam> but it *might* send 2 events if there is also a Changes() pending, right?
<fwereade> jam, queued
<fwereade> voidspace, context assumption
<fwereade> voidspace, we're stopping an instance, and we have to unassign all the addresses before we stop the instance, because we can't reference the instance once it's destroyed
<fwereade> voidspace, accurate?
<voidspace> fwereade: not quite, we can do it afterwards - order isn't important
<voidspace> fwereade: we just need to ensure that we release the ip addresses with the provider when we shut down the instance
<mup> Bug #1427342 changed: juju package should have strict version dep with juju-core <packaging> <juju-core:Won't Fix by sinzui> <https://launchpad.net/bugs/1427342>
<fwereade> voidspace, ok, who is it that's assigning these addresses?
<voidspace> fwereade: the broker in StartInstance does
<fwereade> voidspace, dimitern: I can't remember how dynamic we're trying to be at the moment
<voidspace> fwereade: by calling the provisioner PrepareContainerInterfaceInfo which requests an IP address from the provider and associates it with host nic
<fwereade> voidspace, dimitern: ok, so, looking ahead a little
<voidspace> fwereade: the broker then sets up the iptables rules and container config so that IP address routes to the new container
<dimitern> voidspace, fwereade, so from an instanceId for an lxc container you can go back to the machine id
<fwereade> voidspace, dimitern: what's the plan for when we're adding new addresses to containers at runtime?
<dimitern> voidspace, fwereade it's always the same format: prefix+newMachineTag(containerMachineId).String()
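The instance-id format dimitern describes (prefix + machine tag string) can be illustrated with plain strings; the real code uses the juju names package, and the "juju-" prefix here is just the usual default, configurable as he notes:

```go
package main

import (
	"fmt"
	"strings"
)

const prefix = "juju-" // the usual default; the real value comes from ContainerConfig

// machineTag renders a container machine id like "0/lxc/1" as a tag
// string like "machine-0-lxc-1" (what names.NewMachineTag(...).String()
// produces for it).
func machineTag(machineId string) string {
	return "machine-" + strings.ReplaceAll(machineId, "/", "-")
}

func instanceId(machineId string) string {
	return prefix + machineTag(machineId)
}

// machineIdFromInstance reverses the mapping, returning ok=false if the
// instance id doesn't have the expected shape. (Illustrative only: this
// naive "-" -> "/" rewrite assumes container types without hyphens.)
func machineIdFromInstance(instId string) (string, bool) {
	full := prefix + "machine-"
	if !strings.HasPrefix(instId, full) {
		return "", false
	}
	return strings.ReplaceAll(strings.TrimPrefix(instId, full), "-", "/"), true
}

func main() {
	id := instanceId("0/lxc/1")
	back, ok := machineIdFromInstance(id)
	fmt.Println(id, back, ok)
}
```

As fwereade says right after, this round-trip works only because of a container/lxc implementation detail, which is why depending on it outside the broker is suspect.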
<fwereade> voidspace, dimitern: that feels hard to integrate with the provisioner
<dimitern> fwereade, voidspace, we haven't gone that far
<fwereade> dimitern, [that feels to me like a broker implementation detail on which we should not be depending?]
<voidspace> I agree
<dimitern> fwereade, it's a container/lxc implementation detail
<fwereade> dimitern, voidspace: ok, still thinking
<fwereade> dimitern, voidspace: what's the lifecycle handling like for addresses?
<dimitern> voidspace, fwereade, you can get the prefix (usually "juju" but can be other as well) from the broker.manager.ContainerConfig() -> PopValue(container.ConfigName)
<dimitern> fwereade, voidspace, we release addresses when their assigned machine is no longer alive (at least for containers - that's the first step)
<voidspace> fwereade: they don't have a Life (as nothing else references them) - so they're created when allocated (currently only for a new container) and removed when the container is stopped
<voidspace> fwereade: or at least they will be removed...
<sinzui> dimitern, fwereade katco, mgz: do you have a minute to review a branch for the release of 1.22.0 http://reviews.vapour.ws/r/1146/
<dimitern> sinzui, looking
<fwereade> dimitern, voidspace: ok, my main worry here is that the provisioner shouldn't really be responsible for it
<mgz> dimitern wins...
<fwereade> dimitern, voidspace: looking ahead, we're going to want to be assigning and removing addresses dynamically
<fwereade> dimitern, voidspace: I know we're not doing that yet, so having the assignment in the provisioner *for now* is probably ok
<dimitern> sinzui, ship it
<fwereade> dimitern, voidspace: but if we're just implementing the removal now, I think it would probably be cleaner for machine death to queue a cleanup that directs all associated addresses to the attention of some address worker
<dimitern> fwereade, *for now* yeah - that's the key takeaway :)
<fwereade> voidspace, dimitern: yeah, I understand it's a viable shortcut for now
<dimitern> fwereade, the plan going forward is to resurrect and fix the networker, which will take care of the dynamic config at run time
<fwereade> voidspace, dimitern: I'm just suggesting that my spidey sense says this is a networker's job, and that it's better to implement a half-networker for cleanup than to add the cleanup to the provisioner as well
<fwereade> voidspace, dimitern: which is already creaking under the weight
<fwereade> voidspace, dimitern: and will need to have the address logic extracted regardless
<fwereade> voidspace, dimitern: am I making sense here?
<voidspace> fwereade: dimitern: so either put it in the broker for now and move it to the networker when that is resurrected, or create a new worker and move *that* to the networker later
<voidspace> the former seems like less busy work :-)
<dimitern> fwereade, I agree putting that responsibility into the provisioner is not ideal and can only be temporary
<dimitern> fwereade, voidspace, yes - I was going to suggest what voidspace just said
<voidspace> or resurrect the networker now with just this functionality enabled
<dimitern> "the former" I mean
<fwereade> jam, yes, it might send 2 events if a change happens soon enough before we discard
<dimitern> voidspace, unfortunately it's a wee bit beyond repair in its current state
<voidspace> ah, ok
<jam> fwereade: I'm trying to sort through it now, because my current code is sending 2 events when I set it to true
<fwereade> jam, I think for all practical purposes the problematic case is at startup, though, and we get good behaviour by only accepting discards once we've had an initial change
<jam> fwereade: I have that behavior no problem. I'm talking about Want(true) vs Want(false)
<fwereade> jam, is the watcher being read from continuously, or do you switch off reading from the source when the filter's client doesn't want the results?
<jam> fwereade: currently the watcher is read continuously but it doesn't forward on an event when it is set to false
<dimitern> fwereade, so can we agree, for now to leave the cleanup of addresses in the brokers with a TODO + bug filed to move it into the networker ASAP
<jam> fwereade: it seems to be a timing thing
<jam> specifically
<jam> if you start with Want(false) and then watch for changes, you get nothing as expected
<jam> and then making a change, again gets nothing as expected
<jam> but internally that change hasn't actually been seen
<fwereade> dimitern, if we risk messing around with magical conversions from instance id to container id, I'm not sure we can
<fwereade> dimitern, but wait
<fwereade> dimitern, the provisioner knows container id and instance id
<fwereade> dimitern, making StopInstances stop magically-associated addresses feels like a Very Bad Thing -- shouldn't the provisioner itself just loop through a dead machine's addresses, cleaning them up, and then finish off the machine?
<fwereade> we already have all the info we need there
<dimitern> fwereade, we absolutely can pass instanceIds to ReleaseContainerAddresses API
<fwereade> dimitern, but why is the api server talking to the env to release them in the first place?
<fwereade> dimitern, the provisioner has the env right there
<dimitern> fwereade, that way the provisioner having state access can find the container id from the instance id
<fwereade> dimitern, it has the instance ids
<fwereade> dimitern, the provisioner is already working in terms of container ids
<dimitern> fwereade, because addrs are given to juju by the environment, and we keep track of them in state - i.e. we need access to both
<fwereade> dimitern, the provisioner *already has an Environ* -- surely it should not be farming environ calls out to the api server
<fwereade> dimitern, it can say "hey tell me about the addresses for this container id" as well
<fwereade> dimitern, and then tell the Environ to clean up those addresses
<dimitern> fwereade, no it doesn't actually
<dimitern> fwereade, we're talking about the provisioner on the machine, which doesn't have access to the complete environ
<fwereade> dimitern, oh, hell, sorry
 * fwereade recontextualises
<voidspace> heh
<dimitern> fwereade, so that's the obstacle there - and it felt ok to me to do it in the env provisioner, where we have both access to state and the environ
<fwereade> voidspace, dimitern: so, a container has (let's say just) one address
<fwereade> voidspace, dimitern: when its host is finished with the container, it needs to somehow inform the state server that the address is no longer required
<dimitern> fwereade, voidspace, that's part of the story - the other part needs to happen on the host (cleaning up routes, etc.)
<fwereade> voidspace, dimitern: and the state server needs to somehow determine <x information> about that container's address, so that it can [unassign it from the instance?] and destroy the address itself
<voidspace> fwereade: the host, state and the provider all need action
<voidspace> fwereade: yes, currently that means - fetch all ip addresses associated with the container id from state, unassign them from *the host nic* with the provider and delete from state
<fwereade> voidspace, dimitern: I'm mainly focused on the interactions between the model and the environ here; I assume for the purposes of this discussion that host cleanup is fine, and once it's no longer using the address it needs to make sure it's cleaned up somehow
<dimitern> fwereade, voidspace, yeah, and that information for now is just a doc in the ipaddressesC
<jam> fwereade: I need to EOD, I think things are "working" but the double change is causing the test to break. Any chance I can hand off to you?
<fwereade> jam, go for it, just push and remind me the branch
<fwereade> dimitern, voidspace: forgive my slowness, but let me restate again
<dimitern> fwereade, no worries, should we do a g+ in fact?
<fwereade> dimitern, voidspace: we have: [a Dead container]; [a host that's just destroyed the container instance]; [a host machine with an address assigned for the dead container]
<fwereade> dimitern, voidspace: is irc ok? I feel like I'll need to be doing some looking back
<voidspace> irc is fine with me
<dimitern> fwereade, voidspace, fine with me too
<dimitern> fwereade, voidspace, so what do you mean by that last part
<voidspace> fwereade: you are correct, with the note that "host machine with an address assigned for the dead container" means that this address is assigned by the *provider* and needs unassigning there
<dimitern> voidspace, yeah, that was my point as well
<fwereade> voidspace, dimitern: so, this provisioner is *almost* ready to remove the Dead machine, but it doesn't want to do that until it's cleaned up everything that the machine depends on, lest they leak
<fwereade> voidspace, and it doesn't have an env connection itself
<fwereade> dimitern, ^^
<voidspace> fwereade: we don't really care about order - but we do need to ensure they don't leak
<dimitern> fwereade, voidspace, so far, so good yeah
<fwereade> voidspace, that's what Dead means essentially
<fwereade> voidspace, nothing should be seeing or interacting with this entity except its appointed undertaker as it were
<fwereade> voidspace, ie the provisioner
<voidspace> heh to "appointed undertaker"
<fwereade> voidspace, dimitern: so it would seem like the thing to do would be to mark the address dying, and have another worker detect that, clean up the environ-side resource, and set it to dead
<fwereade> voidspace, dimitern: and in general to follow the model defined by machine, where there's a model object with a lifetime that may or may not have an associated environ reference
<fwereade> voidspace, dimitern: and in the long term that'd be how we'd create and assign them as well, I expect, but that doesn't have to happen now
<dimitern> fwereade, voidspace, that *does* make sense to me
<fwereade> voidspace, dimitern: so in that case we should just be able to Destroy() all those model-side addresses and have a worker to clean them all up separately
<dimitern> fwereade, voidspace, and fits well with the original job of the networker - watching for changes to NICs and addresses and reacting
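The Alive -> Dying -> Dead model fwereade proposes for addresses (Destroy marks the entity Dying; a separate worker releases the environ-side resource and only then sets it Dead) might be sketched with toy types like this. juju's real state layer runs such transitions as mongo transactions; the types and method names here are illustrative only:

```go
package main

import "fmt"

// Life mirrors juju's entity lifecycle: Alive -> Dying -> Dead.
type Life int

const (
	Alive Life = iota
	Dying
	Dead
)

// address is a toy stand-in for the model-side IP address document.
type address struct {
	value string
	life  Life
}

// Destroy marks the address Dying. Only the cleanup worker moves it
// to Dead, after the provider-side release has succeeded -- so the
// environ call never has to happen inside the model layer itself.
func (a *address) Destroy() {
	if a.life == Alive {
		a.life = Dying
	}
}

// ensureDead is what the cleanup worker calls once the environ-side
// resource has actually been released.
func (a *address) ensureDead() {
	a.life = Dead
}

func main() {
	a := &address{value: "10.0.0.5"}
	a.Destroy()
	fmt.Println(a.life == Dying) // the worker now sees it and cleans up
	a.ensureDead()
	fmt.Println(a.life == Dead)
}
```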
<voidspace> except for allocation we can't add indirection - we must synchronously request the address and then do associated setup
<voidspace> as we don't know if allocation of any specific address will succeed
<voidspace> and we want the address at container setup to write the configs correctly
<voidspace> for death it's fine
<dimitern> yeah
<fwereade> voidspace, you don't have to do it now, but I don't think it's impossible -- in the easy static case you just gate container setup on address validity, and in the yucky dynamic case you're no more screwed than before ;p
<voidspace> fwereade: dimitern: which is fine for the apiserver implementation, but *still* leaves the same question about what information to pass from the machine worker to the apiserver
<voidspace> fwereade: yep, cool
<voidspace> and instance id is fine as we know the host / provider so the api server can get the container id
<voidspace> and that may be the least effort to provide
<dimitern> fwereade, I'd rather *not* gate container creation on address availability
<fwereade> voidspace, I think it's just a matter of giving the addresses tags (which I'm pretty sure should not include their ip, they should probably just be address-<uuid>)
<dimitern> fwereade, we'll end up in a world of pain
<dimitern> fwereade, even now, if we can't allocate a static IP we still start the container - it will be addressable from its host
<fwereade> voidspace, and then we can (1) ask for the machine's addresses, and get juju entity ids, which we then pass back up in a Destroy call
<fwereade> voidspace, I think?
<fwereade> dimitern, disagree, I think
<fwereade> dimitern, when we create a container, we know what addresses it'll need [in the static case]
<voidspace> fwereade: so, are you suggesting that we watch for model side container death to Destroy the addresses
<dimitern> fwereade, static addresses being available cannot be guaranteed
<voidspace> fwereade: or that the provisioner requests destruction?
<voidspace> because it would be much easier to not have the provisioner have to know all the addresses
<fwereade> voidspace, I think the provisioner requests destruction of those addresses because it knows they're no longer needed
<fwereade> dimitern, right, but consider
<dimitern> fwereade, yes, but we only try to allocate it when we're about to start it initially
<voidspace> so how is the provisioner going to know the addresses to request destruction of?
<fwereade> voidspace, I think it should be able to ask for them?
<voidspace> if it has to make an api call to get addresses associated with the container, and then request destruction of them
<dimitern> voidspace, by their life cycle value I guess
<fwereade> voidspace, although actually that is a pointless round trip
<voidspace> it might as well just make one call requesting the destruction of the addresses for this container
<voidspace> right
<fwereade> voidspace, dimitern: can we not just add a cleanup for machine that destroys the associated addresses and keep that all server-side?
<fwereade> voidspace, dimitern: the provisioner says it's done with the machine
<voidspace> if we're going to have a worker doing the cleanup it might as well just watch for machine lifecycle
<dimitern> fwereade, yes! indeed we can
<fwereade> voidspace, we'll need dynamic addresses soon enough, let's not couple address lifetime to machine lifetime unless we have to
<dimitern> voidspace, I was thinking about it but then I forgot
<voidspace> so machine destruction can request Destroy on all addresses and an address watcher do the cleanup
<voidspace> and that leaves us free to *also* destroy addresses dynamically
<fwereade> dimitern, about the gate-on-address-availability, I think I want to keep pushing a little
<dimitern> fwereade, voidspace, which means we shouldn't try to remove any machine that still has allocated addresses to cleanup, rather than "delete cascade"
<fwereade> dimitern, if we know we need a container with an address on X network, shouldn't we be getting that address before we even start to run the container? and shouldn't the lack of an address cause us to fail fast, ideally before the provisioner even sees the machine?
<dimitern> fwereade, so far CA is a new feature and unobtrusive - it will work or not, but won't make things worse than before CA got introduced
<fwereade> dimitern, hmm, good point
<fwereade> dimitern, have people been using unaddressable containers though?
<dimitern> fwereade, however, I can see your point as well - and makes sense
<voidspace> fwereade: people have been using containers
<dimitern> fwereade, yeah - except on maas they're unaddressable everywhere
<voidspace> fwereade: the fact that they're now meant to be addressable shouldn't cause them to fail to be created
<dimitern> fwereade, voidspace, exactly - at least at this point
<fwereade> dimitern, voidspace: huh? I thought they were always addressable on maas?
<dimitern> should we decide to make "have a static address on subnet X" a requirement (e.g. like having lxc-ls installed) - then, yes
<fwereade> dimitern, voidspace: by means of black magic and layer-breaking, true, but still
<voidspace> heh, right
<fwereade> dimitern, but ok, shouldn't that still be a model-level consideration either way?
<dimitern> fwereade, they were kinda addressable (mostly), before I started fixing stuff
<dimitern> :)
<fwereade> dimitern, so to preserve behaviour, you create machines with 0 addresses, so the provisioner doesn't have to wait for any of them
<fwereade> dimitern, haha
<dimitern> fwereade, it sounds like a constraint of a sort
<fwereade> dimitern, well, yes -- I think the idea has always been that everything should have an address on the juju-private network
<fwereade> dimitern, but yes there is a use case for unaddressable containers
<fwereade> dimitern, and I suppose we shouldn't take away that ability
<fwereade> dimitern, bah
<fwereade> dimitern, I am still not comfortable with giving people unaddressable containers by default though
<dimitern> fwereade, at least not right now I think :)
<fwereade> dimitern, I really think that should be opt-in, and the fact that it's hitherto been the default is the bug
<dimitern> fwereade, now we're not - we're making a best effort to allocate an address every time, if supported and addresses are available
<mup> Bug #1430049 changed: unit "(AnyCharm)" is not assigned to a machine when deploying with juju 1.22-beta5 <oil> <openstack> <uosci> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1430049>
<mup> Bug #1431372 was opened: juju restore fails with "/var/lib/juju/agents: No such file or directory" <backup-restore> <juju-core:Triaged> <https://launchpad.net/bugs/1431372>
<fwereade> dimitern, yeah, and that's cool, but it's unpredictable in practice -- even if it works 90% of the time, that 10% will be baffling
<fwereade> dimitern, it's not very useful if all we can say is "your container might get an address"
<dimitern> fwereade, it's very well logged every step of the way, but it has to be exposed more
<fwereade> dimitern, "your container will get an address, *or* fail, *or* not get an address because you explicitly said you didn't want it" is more palatable
<dimitern> fwereade, we also can't guarantee you won't get a broken instance from the environ
<fwereade> dimitern, yay good logging, but it's not there to support the UX -- that should surface the important things directly
<dimitern> fwereade, but it happens now and then on EC2 with CI
<dimitern> fwereade, yes, I agree - address allocation should be exposed with the status / health checks at some point
<fwereade> dimitern, different class of problem -- that's equivalent to "the provider said it gave you X address but it lied"
<fwereade> dimitern, but my point I think is that a failure to allocate an address does indeed need to feed back into an error on the machine
<dimitern> fwereade, from UX POV it's not "the provider" it's juju's fault :)
<dimitern> fwereade, ok, I'll keep this in mind
<voidspace> so for the immediate problem
<dimitern> fwereade, but unlike the explicit relation endpoints bindings in the net phase 1 model, we don't have a way for the user to express intent
<voidspace> we keep the provisioner api, called by the provisioner/broker
<voidspace> but change the server side to give addresses a life-cycle and a cleanup worker
<voidspace> *or* we keep all the cleanup server side
<dimitern> voidspace, fwereade, for the immediate problem, let's agree to have a cleanup procedure as part of the machine cleanup in the cleanup worker
<voidspace> ok
<dimitern> it will get veeeery clean
<voidspace> :-)
<voidspace> cleaner the better
<voidspace> so we can revert my last merge
<fwereade> voidspace, dimitern: only concern: calling into the environ to do work from within cleanup
<fwereade> voidspace, dimitern: would be much more comfortable with a lifecycle watcher for addresses that just worries about dying->dead
<dimitern> fwereade, why a concern?
<dimitern> fwereade, yeah, +1 - that will be the next step - networker
<natefinch> fwereade, dimitern, anyone else:  wwitzel3 and I are working on upgrading non-state servers into state servers... we have the code to update the data in mongo, is a watcher the correct way to have the intended server notice it's supposed to convert into a state server, or is there a better way?
<fwereade> natefinch, +1 watcher, I think
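The watcher approach being endorsed follows juju's usual consumption pattern: select on the watcher's changes channel and re-check state on every event, since an event only means "something may have changed, go look". A hedged, channel-only sketch (real juju watchers are interfaces with Changes/Stop/Err methods, elided here):

```go
package main

import "fmt"

// watchLoop sketches the generic juju watcher-consumption pattern.
// Every receive on changes means "state may have changed, re-read it";
// done terminates the loop. check stands in for re-reading the entity,
// e.g. "has this machine gained the state-server job?". Names are
// illustrative, not juju's actual watcher types.
func watchLoop(changes <-chan struct{}, done <-chan struct{}, check func() bool) bool {
	for {
		select {
		case <-done:
			return false
		case <-changes:
			if check() {
				return true
			}
		}
	}
}

func main() {
	changes := make(chan struct{}, 2)
	changes <- struct{}{} // initial event: not converted yet
	changes <- struct{}{} // later event: conversion happened
	done := make(chan struct{})
	calls := 0
	converted := watchLoop(changes, done, func() bool {
		calls++
		return calls == 2 // pretend the second re-read sees the new job
	})
	fmt.Println(converted, calls)
}
```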
<dimitern> voidspace, it doesn't have to be reverted now - it can be molded into the cleanup procedure :)
<fwereade> dimitern, voidspace: I am failing to articulate it, but it's really not the right layer
<fwereade> dimitern, voidspace: we need to do dances to use the environs from inside state anyway
<dimitern> natefinch, yeah
<fwereade> dimitern, voidspace: I'd prefer not to encourage it except for really important reasons
<dimitern> fwereade, how about a worker then - akin to the instance poller
<fwereade> dimitern, voidspace: namely preflighting env requests to help give good fast-failing UX in response to nonsensical requests
<fwereade> dimitern, perfect
<fwereade> dimitern, and, yes, it's *another* Environ in the same process, which is bad and wrong, but I think I have a plan for that
<dimitern> fwereade, voidspace, great - then it's decided
 * fwereade cheers
<fwereade> thanks guys
<voidspace> lovely
<voidspace> dimitern: please summarise :-)
<voidspace> a new worker to do address cleanup
<dimitern> fwereade, thank you!
<voidspace> addresses to have a lifecyle
<dimitern> voidspace, just a sec
<voidspace> ok
<dimitern> voidspace, we can do it with a life cycle or otherwise, but essentially a strings/notify watcher worker
<dimitern> voidspace, which listens to changes to ipaddresses and responds by releasing those no longer needed
<dimitern> (hmm it can even eventually also handle the allocation i guess)
<dimitern> voidspace, we need to have a way to signal that worker - having a life is the simplest and obvious choice
<dimitern> voidspace, however, we already have AddressState
<dimitern> voidspace, which is kinda like life - moving forward
<voidspace> dimitern: although that changes more frequently
<voidspace> dimitern: so the watcher would be triggered more often
<voidspace> dimitern: but fine
<fwereade> dimitern, voidspace: I would recommend using Life myself
<voidspace> cool
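The Life-based cleanup worker just agreed on could look something like this per batch of watcher events: release any Dying address with the provider, then mark it Dead. The document type and the release callback (standing in for the environ call) are hypothetical names for illustration:

```go
package main

import "fmt"

// Life is the usual juju lifecycle: Alive -> Dying -> Dead.
type Life int

const (
	Alive Life = iota
	Dying
	Dead
)

// addressDoc is a toy stand-in for an ipaddresses collection document.
type addressDoc struct {
	id   string
	life Life
}

// handleChanges is the body of the worker's event loop for one batch
// of watcher changes: release every Dying address with the provider,
// then mark it Dead. Alive and already-Dead addresses are skipped.
func handleChanges(changed []*addressDoc, release func(string) error) error {
	for _, a := range changed {
		if a.life != Dying {
			continue
		}
		if err := release(a.id); err != nil {
			return err // leave it Dying; retry on the next event
		}
		a.life = Dead
	}
	return nil
}

func main() {
	addrs := []*addressDoc{
		{id: "address-1", life: Dying},
		{id: "address-2", life: Alive},
	}
	var released []string
	_ = handleChanges(addrs, func(id string) error {
		released = append(released, id) // pretend this is the environ call
		return nil
	})
	fmt.Println(released, addrs[0].life == Dead, addrs[1].life == Alive)
}
```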
<fwereade> voidspace, sad to say that wouldn't help with frequent triggering, but for coincidental implementation reasons rather than fundamental design ones iyswim
<voidspace> heh
<fwereade> voidspace, dimitern: *unless* we put the life field in a distinct doc
<voidspace> that all got a lot more meta than I anticipated
<voidspace> but thanks
<dimitern> fwereade, voidspace, ok, for pre-existing docs it won't be there, which would mean "alive" I guess.. hmmm smells like an upgrade step for sure
<voidspace> right
<fwereade> voidspace, dimitern: yeah, think so
<voidspace> we could make alive the empty string
<voidspace> then there's no upgrade needed...
<voidspace> ;-)
<dimitern> :) if only..
<fwereade> voidspace, dimitern: I would quite like life in a distinct doc, because trigger patterns, but the inconsistency bugs me
<voidspace> so a separate collection with an ipaddress reference and a life
<dimitern> voidspace, fwereade, ok, that starts to sound to me like the old idea we had about an "environ-level networker"
<voidspace> I would *really* like to go to lunch
<fwereade> dimitern, voidspace: fwiw Alive *is* iota
<dimitern> voidspace, sure :)
<voidspace> so if there are any more changes, dimitern can communicate them to me - but it sounds like we're there
<dimitern> fwereade, nice!
<dimitern> yes I think so
<fwereade> dimitern, voidspace: but I would rather just write the field properly than depend on unmarshalling, sometimes we write code that looks in the db not just the local docs, I think it's a landmine
<voidspace> yep, agreed
<fwereade> dimitern, voidspace: so upgrade step please
<dimitern> fwereade, +1
<dimitern> voidspace, I'll summarize the steps needed and we can discuss them tomorrow?
<mup> Bug #1431401 was opened: Juju appears hung when using the local provider for the first time <juju-core:New> <https://launchpad.net/bugs/1431401>
<sinzui> xwwt, alexisb dimitern natefinch fwereade : 1.23 has a pass. Do you want to branch? Now, or wait for more features/fixes to merge
<alexisb> sinzui, lets branch
<dimitern> sinzui, I have one more fix, but it will get merged after branching just the same
<katco> /query alexisb
<dimitern> dooferlad, that's yours ^^
<alexisb> dimitern, fixes can still go in 1.23 :)
<alexisb> just no more features, with the exception of the ones the release team and I have agreed to :)
<alexisb> katco, you looking for me?
<dimitern> alexisb, yeah :)
<sinzui> dimitern, master is 1 commit beyond our test. You need to merge dimitern/ca-fixes-clone into the 1.23 branch I create. If you are fine with this, I will create the branch this minute
<dimitern> sinzui, ok, I can do this
<sinzui> dimitern, done. https://github.com/juju/juju/commits/1.23
<dimitern> sinzui, cheers
<dimitern> sinzui, is the merge bot monitoring 1.23 yet?
<sinzui> dimitern, adding it right now in fact
<dimitern> it is
<dimitern> :) great timing sinzui
<xwwt> sinzui: branch it.
<mattyw> jam, do you have a moment to do a review for me? http://reviews.vapour.ws/r/1005/
<mattyw> sinzui, is landing in master blocked?
<mattyw> sinzui, or rather - it appears to be blocked but I didn't think it was
<sinzui> mattyw, no, but try again, the bug is fixed
<natefinch> mattyw: Why does get meter status return a string for the code and not a MeterStatusCode?  Why even have the type if you're not going to use it?
<mattyw> natefinch, where are you looking? it used to return strings but in theory I've made changes to use the types now
<natefinch> (I know that's not your change here, just curious)
<mattyw> natefinch, you looking at 1005?
<natefinch> mattyw: yes
<natefinch> mattyw: https://github.com/juju/juju/pull/1677/files#diff-8319a651ebe0aac63caf50325cef6903R112
<mup> Bug #1430898 changed: run-unit-tests ppc64el timeout <ci> <gccgo> <ppc64el> <regression> <test-failure> <juju-core:Fix Released> <https://launchpad.net/bugs/1430898>
<mup> Bug #1431444 was opened: juju run results contain extraneous newline <juju-core:New> <https://launchpad.net/bugs/1431444>
<natefinch> you know your product is becoming successful when people start complaining about newlines
<natefinch> wow, just stumbled on cmd/jujud/agent/machine.go which has 77 import statements :/
<natefinch> wow, google code is shutting down
<natefinch> http://google-opensource.blogspot.com/2015/03/farewell-to-google-code.html
<bodie_> davecheney, http://reviews.vapour.ws/r/1139/ should be satisfactory now :)
<bodie_> natefinch ... wow
<perrito666> natefinch: I think that is the nature of all google projects
<lazyPower> natefinch: is there way to block on juju run until the service is up?
<lazyPower> or do i need to write some glue code to parse juju status until the state is "started"
<natefinch> lazyPower: hmm
<lazyPower> yeah, i kinda thought i'd have to write some glue code. nbd - just wanted to make sure i wasn't reinventing something that already existed
<natefinch> lazyPower: Juju run might work to keep your refresh of juju status from running more often than you need... but I'm not sure that juju run will block until the service is started.
<lazyPower> yeah it actually returns "No service" so its getting run before the service is even deployed. Either way i'm going to have to block myself.
<lazyPower> and it doesn't make sense for juju run to block when the service doesn't even exist
<natefinch> ahh yeah, that makes sense
<bogdanteleaga> can apt proxy/mirror settings change while an instance is running?
<natefinch> bogdanteleaga: probably
<bogdanteleaga> natefinch: have any idea where's the code that deals with it?
<natefinch> bogdanteleaga: I didn't say we had code to deal with it, I said I wouldn't be surprised if it could happen :)
<natefinch> bogdanteleaga: what would you need to do in response to a change?  Isn't the point of the proxy just that apt will now download stuff from somewhere else?
<bogdanteleaga> natefinch: well I found some stuff that's done wrt it when the instance is booted through cloudinit
<bogdanteleaga> natefinch: not sure about changes after that
<natefinch> thumper: got a little time to talk about converting servers into state servers?  having an auth problem logging into the mongodb, wondering if you might have insight as to how to fix that up
<thumper> um... sure
<natefinch> haha
<natefinch> just hoping you'll think of something I've missed :)
<thumper> natefinch: hangout?
<natefinch> thumper: https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=1
<perrito666> oh man, I was also interested in natefinch and thumper conversation, I have a similar problem
<thumper> perrito666: you missed an awesome conversation. Pure magic.
<perrito666> thumper: Well I dont doubt there was magic :p I was hoping for a bit of engineering though ;)
<thumper> perrito666: nah, not much of that
 * perrito666 shrugs and puts on his gandalf suit
<perrito666> I am going to make a very generic question so please dont hate me and, if you think you have any answer whatsoever that might help please contribute it
<perrito666> I want to change an api server machine tag; what else, besides changing the tag in the agent config, do you think I should do?
<perrito666> thumper: ok, apparently my browser spell checker thinks your name should be Time, sorry
<thumper> ")
<thumper> perrito666: why change the machine tag?
<thumper> perrito666: you will also need to change the symlink for the tools
<perrito666> thumper: I already have code that does that
<thumper> my question though is why?
<perrito666> thumper: because, when you restore you cannot guarantee that the freshly created state server has the same tag as the old one
 * thumper has a theory
<thumper> hmm...
<perrito666> buuut
<perrito666> I do think there should be another way
<thumper> does the upstart script mention the machine id anywhere?
<perrito666> thumper: not that I recall
<thumper> if you change the tag of the machine, it will no longer be able to connect to the api server
<thumper> because the password will not work
<perrito666> I have to make a change in one of two, either the backed up data or the newly created machine
<thumper> a mongo user is created for the machine with a particular password
<thumper> that user is the machine tag IIRC
<perrito666> yes
<thumper> on the physical machine, the places are the agent config file, and the link to the tools
<thumper> I think that is all
<thumper> actually
<thumper> I think the machine tag is mentioned in the service file
<perrito666> what worries me is the values inside the mongo
<thumper> at least the name of the script
<thumper> yes, the password will be wrong in mongo
<thumper> there are quite a few moving parts there
<perrito666> you just made me realize that, since I am changing everything in the newly created machine, I could just change whatever reference is not replaced by the backup data
<perrito666> and that would save me from actually changing the tag
<perrito666> that leaves the question I asked in the email
<perrito666> I need admin privileges to jumpstart the rs
<perrito666> and iirc, some of the changes I do are also admin-ish
<thumper> perrito666: I only just noticed the email, been focused on doing HR stuff
<thumper> I'll think about it and try to have a well thought out response :)
<perrito666> thanks, that way I can feel worse about just copy pasting the email
<bodie_> any chance we can get this bugfix merged for 1.23?  https://bugs.launchpad.net/juju-core/+bug/1431612
<mup> Bug #1431612: Action defaults don't work for nil params <actions> <defaults> <juju-core:New> <https://launchpad.net/bugs/1431612>
<mup> Bug #1431612 was opened: Action defaults don't work for nil params <actions> <defaults> <juju-core:New for binary132> <https://launchpad.net/bugs/1431612>
#juju-dev 2015-03-13
<axw> wallyworld: just making a cup of tea, will be a few minutes late
<wallyworld> sure
<axw> thumper: what's next-release? was that just to not disrupt 1.23?
<thumper> axw: yep
<thumper> a feature branch
<thumper> so we weren't holding many branches open against master
<axw> okey dokey
<axw> oh balls, I didn't realise trunk was unblocked
<thumper> axw: we are trialing a number of feature branches, and will report more in nuremburg
<axw> thumper: cool. trunk blocking kinda screws everyone atm
<thumper> agreed
<tasdomas> morning
<wallyworld_> axw: i found the correct appamor profile to use to allow mounting loop devices http://reviews.vapour.ws/r/1154/
<axw> wallyworld_: https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-apparmor -- still unsafe, and I still don't think we should enable this unless it's requested
<wallyworld_> axw: that web page is out of date - ls /etc/apparmor.d/lxc/ shows the mount profile
<axw> wallyworld_: the comments in the file line up with the commentary on the page - what is out of date?
<wallyworld_> i guess we can only add the extra config if needed for storage. but, we would then not be able to hogsmash a new unit with storage onto a container
<wallyworld_> axw: i didn't see the lxc-container-default-with-mounting profile mentioned
<wallyworld_> only lxc-container-default-with-nesting
<axw> wallyworld_: "Another profile shipped with lxc allows containers to mount block filesystem types like ext4. This can be useful in some cases like maas provisioning, but is deemed generally unsafe since the superblock handlers in the kernel have not been audited for safe handling of untrusted input."
<wallyworld_> axw: hmmm, ok, i wonder when the auditing might occur. i guess then we only enable the extra config if needed for storage
<wallyworld_> but no hogsmash
<axw> wallyworld_: can't do placement yet anyway
<axw> I suppose we could with containers
<axw> non-existing containers that is
<dooferlad> mornin' o/
<dimitern> morning dooferlad
<dimitern> dooferlad, I've seen your PR - am I correct to assume the order of (application of) the iptables rules do not matter?
<dooferlad> dimitern: indeed. They are all inserted at the top of the table, so since they are all above reject rules, order doesn't matter.
<TheMue> morning o/
<dimitern> dooferlad, good :) because using a map was the first red light for me - how about go1.3+ and gccgo map ordering :)
<dimitern> TheMue, morning
<dimitern> dooferlad, clever tests though - never using more than 1 element in the maps during tests
<dooferlad> dimitern: that was indeed on my mind.
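dimitern's "red light" about maps is well founded: the Go spec leaves map iteration order unspecified, the gc runtime deliberately randomizes it, and gccgo can differ again, so tests that range over a multi-element map and expect a fixed order are flaky. A minimal sketch (the rule data here is hypothetical, not juju's) of the usual fix, sorting the keys before iterating:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedRuleNames returns the map's keys in a deterministic order.
// A plain range over a map has unspecified order, so any code or test
// relying on it is buggy; sorting the keys sidesteps the problem.
func sortedRuleNames(rules map[string]string) []string {
	names := make([]string, 0, len(rules))
	for name := range rules {
		names = append(names, name)
	}
	sort.Strings(names)
	return names
}

func main() {
	// Hypothetical firewall rules keyed by name.
	rules := map[string]string{
		"ssh":   "-I INPUT -p tcp --dport 22 -j ACCEPT",
		"https": "-I INPUT -p tcp --dport 443 -j ACCEPT",
		"http":  "-I INPUT -p tcp --dport 80 -j ACCEPT",
	}
	for _, name := range sortedRuleNames(rules) {
		fmt.Println(name, rules[name])
	}
}
```

This also explains dooferlad's test trick of never using more than one element per map: a single-element map has only one possible order.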
<dimitern> davechen1y, are you about?
<davechen1y> what's up ?
<dimitern> davechen1y, re this review http://reviews.vapour.ws/r/1150/
<dimitern> davechen1y, this fixes a regression on maas after the introduction of addressable containers (lxc) for ec2 and maas
<dimitern> davechen1y, windows support was not even on my agenda tbh
<dimitern> davechen1y, were they running before on windows?
<davechen1y> dimitern: no idea
<davechen1y> it is not clear what we're supposed to do about windows
<dimitern> davechen1y, right, but I think we should fix it on ubuntu where it's supposed to work, then we can add it to the list of things to enable on windows
<davechen1y> dimitern: feel free to land it
<dimitern> davechen1y, ok, I'll add a comment, thanks
<davechen1y> ok
<dimitern> dooferlad, you have a review
<dooferlad> dimitern: thanks
<dimitern> TheMue, so you have your maas cluster controller running?
<TheMue> dimitern: yes, only getting a weird apache error message (will now set the server name explicitly) and I'm currently adding a 2nd eth for a private network
<dimitern> TheMue, that's just a warning - you can ignore it
<TheMue> dimitern: yeah, but I dislike it :)
<dimitern> TheMue, yeah, you'll need to setup both DHCP and static ranges for the internal network and leave the other one unmanaged
<dimitern> TheMue, ok :)
<TheMue> dimitern: do you have a good doc for dhcp conf when creating a vmaas?
<dimitern> TheMue, why do you need to change the conf directly?
<TheMue> dimitern: my eth0 has a static address here in my net, so that I can reach it
<TheMue> dimitern: I only interpreted your hint as meaning I'd have to ;)
<dimitern> TheMue, ah, no - you can configure all via the maas web ui - Clusters - edit interfaces
<TheMue> dimitern: I was already wondering, because that's how I understood the maas docs too
<dimitern> TheMue, you'll probably need to enable ip_forwarding on the cluster and add a SNAT iptables rule so the machines behind maas on the internal network can reach outside
<dimitern> TheMue, and on the other side - e.g. your machine you'll need a static route for the internal network pointing to your maas's eth0 address (the one you can reach from your machine)
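dimitern's two hints gathered into one sketch. This is a config fragment under loudly stated assumptions: interface names (eth0 outward, eth1 internal), the internal subnet 10.14.0.0/24, and the controller address 192.168.1.50 are all made up for illustration; substitute your own.

```shell
# On the MAAS cluster controller (assumed: eth0 faces your LAN,
# eth1 carries the internal 10.14.0.0/24 MAAS network):
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward            # enable ip forwarding
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE  # SNAT so nodes can reach outside

# On your workstation: static route for the internal network,
# via the controller's reachable eth0 address (assumed 192.168.1.50):
sudo ip route add 10.14.0.0/24 via 192.168.1.50
```

Note the sysctl and iptables changes above are not persistent; surviving a reboot needs /etc/sysctl.conf and an iptables-save/restore mechanism.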
 * fwereade out at laura's school for a bit
<natefinch> ParamsStateServingInfoToStateStateServingInfo .... really?
<TheMue> natefinch: Java came to Go
<natefinch> It has state in the name THREE TIMES
<TheMue> dimitern: oh how I like this
<davecheney> natefinch: now you know how I feel
<TheMue> dimitern: "virtual metal" as a service! what.is.virtual.metal? :/
<wallyworld_> axw: i'm off to soccer, have a revised lxc config branch almost ready, uses a StorageConfig arg similar to NetworkConfig, can be extended to do what's needed for host loop etc as needed; just need to finish tests, will propose when i get home later
<natefinch> davecheney: yep
<TheMue> natefinch: you know it's created with the ParamsStateServingInfoToStateStateServingInfoFactory and can be simulated by the ParamsStateServingInfoToStateStateServingInfoMock
<dimitern> TheMue, it's stuff from fairy tales :)
<dimitern> adamantium
<TheMue> natefinch: that's state-of-the-art *scnr*
<axw> wallyworld_: cool, thanks. enjoy
<TheMue> dimitern: *lol*
 * TheMue needs another cup of coffee
<dimitern> we could call it AaaS
<dimitern> adamantium as a service
<natefinch> anything you could possibly pronounce as ass is probably not a good acronym
<dimitern> :D
 * dimitern has 335MB of logs from the yesterdays automated tests on MAAS and EC2 with containers
<TheMue> dimitern: and that's only because you compressed them *g*
<dimitern> TheMue, not even :) - full 5h of testing at TRACE level
<dimitern> 16 separate environments
<dimitern> a hefty $4.68 bill in EC2
 * TheMue sees dimitern creating a data warehouse for log analyzing
<TheMue> a DWaaS
<dimitern> TheMue, yeah - running hadoop nodes doing rgrep ERROR
<dimitern> voidspace, hey there
<TheMue> dimitern: you need a filtered logging, only keeping stuff you're interested in and throwing away the rest
<mattyw> TheMue, you available for a review?
<jamespage> dimitern, do you know who's working on the systemd support in juju?
<TheMue> mattyw: sure, it's my job today
<dimitern> voidspace, let's have a chat after standup for the release of addresses worker
<dimitern> jamespage, yes, ericsnow mostly
 * fwereade out for a while at laura's school
<jamespage> dimitern, what tz is he in?
<dimitern> jamespage, -7 I believe
<dimitern> jamespage, what's up?
<jamespage> dimitern, I was wondering what state I could expect vivid support to be in in master and whether any other branches needed testing
<jamespage> dimitern, we're busted on vivid testing right now so have a direct interest in seeing this land asap
<dimitern> jamespage, AFAIK systemd support has landed and for vivid we no longer have an exception to run it with upstart
<dimitern> jamespage, but it's only in 1.23 and master
<jamespage> dimitern, hmm - yeah - still non-functional - tested yesterday
<jamespage> the cloud-config that gets generated for instance creation still does "start jujud-XXX"
<jamespage> which is upstart specific
<dimitern> jamespage, hmm right - is that 1.22.0 ?
<jamespage> no from master branch with locally built copy
<dimitern> jamespage, ok, so it sounds like it's not as complete as I thought
<dimitern> jamespage, I'd suggest writing a mail to ericsnow, cc alexisb, about this
<jamespage> dimitern, I'll retest early next week with a clean master (currently have leader election merged as well)
<jamespage> and report back
<dimitern> jamespage, cheers
<mattyw> davecheney, you still around?
<davecheney> whats up ?
<voidspace> dimitern: yep
<voidspace> dimitern: although I think the high level details are reasonably clear
<dimitern> voidspace, that's great :) i've started adding tasks to this new feature card I assigned to you
<voidspace> dimitern: yeah, I saw :-)
<voidspace> dimitern: thanks
<natefinch> rogpeppe: where's the code that adds the new mongo admin users when we run ensure-availability?
<dimitern> dooferlad, standup?
<rogpeppe> natefinch: i don't think any users are added, are they
<natefinch> rogpeppe: system.users gets a user per state machine
<natefinch> rogpeppe: brb
<natefinch> rogpeppe: system.users has the admin user and then one user per state machine.  no big deal if you don't remember this stuff offhand, I know it's been a year since we worked on it
<rogpeppe> natefinch: i think users are added when machines are added
<natefinch> rogpeppe: ahh, I see what it is, I was looking for EnsureAdminUser, but most places just call SetAdminMongoPassword
<dimitern> voidspace, ok I'm done adding tasks - I think I mentioned everything relevant in the feature card
<voidspace> dimitern: great, thanks
<natefinch> I think I need to alias 'exit' to 'echo "dude, you're on you're on machine already"'
<natefinch> s/on/own
<dimitern> natefinch, I have a custom bash prompt - not for that, but it helps in this case
<natefinch> dimitern: i do too.. but I do exit <enter> exit <enter> ... really fast, and sometimes do one too many
<dimitern> natefinch, :) ctrl+d is too easy
<natefinch> dimitern: yeah, I've done that by accident before too  whose bright idea was it to make a hotkey to close a window that could easily be typoed from ctrl+c ? :/
<dimitern> :)
<natefinch> (and ctrl+s, ctrl+x ctrl+f etc)
<perrito666> I used to have a terminal (I think was konsole) where you could setup different background colors depending on the host
<dimitern> looking at the wear patterns on my keyboard, ctrl+A, S, C and lastly D I use most of the time - my emacs habits haven't caused X to wear off too much yet
<perrito666> dimitern: and ctrl?
<dimitern> perrito666, left one is still barely readable, right one a lot more
<dimitern> but oh boy! cursor keys - all but right are long gone
<perrito666> lol. I think, except for my thinkpad, which is my spare machine, I hardly have a computer long enough to wear out anything other than the space bar
<perrito666> although I do use an external kb
<perrito666> whose wear pattern makes no sense, since I use it in english but it is a spanish kb
<dimitern> :)
 * TheMue steps out for a moment, bbiab
<sinzui> natefinch, mgz: do either of you have a minute for http://reviews.vapour.ws/r/1157/
<mgz_> sinzui: on it
<mgz_> sinzui: lgtm
<sinzui> thank you mgz
<perrito666> meh, I keep writing workflow instead of worload
<jw4> oh, yeah... every time I need to write worload I accidentally type workflow too... what is worload?
<perrito666> :p
<perrito666> Workload
<jw4> hehe
<dimitern> jw4, I have the same issue typing attempty vs. attempt
<natefinch> I have the same problem with serve vs. server
<natefinch> I can't type serve without typing server and deleting the r
<ericsnow> dooferlad: you still have questions about juju systemd support?
<dooferlad> ericsnow: I didn't think I had any to start with
<jw4> it's funny how our brains store patterns
<ericsnow> dooferlad: oh, wrong person :)
<dooferlad> :-)
<bodie_> how do I land a bugfix for 1.23?  I have an open bug.
<mgz_> bodie_: propose a merge against the 1.23 branch? or do you mean more, what's the overall procedure?
<ericsnow> jamespage: you have questions about juju and systemd?
<jamespage> hi ericsnow
<bodie_> mgz_, derp, of course
<jamespage> ericsnow, indeed I do - vivid has now switched to systemd by default including cloud images and I wanted to get our vivid testing restarted asap for openstack
<jamespage> ericsnow, do you have a branch for juju that we can test with?
<ericsnow> jamespage: master :)
<jamespage> ericsnow, ok testing now - but I had probs two days ago :-)
<bodie_> mgz_, but yeah, what more is needed once I get LGTM?  I'd simply $$merge$$ it, right?
<ericsnow> jamespage: we landed the last of the systemd support Tuesday-ish
<bodie_> then mark the bug submitted?
<jamespage> ericsnow, awesome - we may have missed that as we are working on a branch for leadership election right now
<mgz_> bodie_: yup
<jamespage> I did rebase so hopefully we're good
<ericsnow> jamespage: it's totally conceivable there are issues
<ericsnow> jamespage: I tested juju on systemd (vivid) before landing, but I'm sure I missed something
<ericsnow> jamespage: if you run juju (e.g. bootstrap) with --debug you should see DEBUG messages saying which init system juju discovered
<jamespage> ericsnow, ok so it works - I think I must have tested prior to re-basing
<ericsnow> jamespage: yay
<ericsnow> jamespage: thanks for taking it for a spin
<ericsnow> jamespage: the alternative is to run vivid with upstart (not hard) temporarily but that's not ideal
<jamespage> ericsnow, nah and thats backwards looking...
<ericsnow> jamespage: :)
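The init-system discovery ericsnow mentions can be sketched in miniature. This is an illustrative stand-in, not juju's actual implementation: the function name `classifyInit` and the simple mapping below are hypothetical, and juju's real detection logic (in its service packages) is more involved.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// classifyInit maps the name of PID 1 to an init system name.
// The mapping is illustrative only: vivid switched PID 1 to systemd,
// while trusty and earlier ran upstart.
func classifyInit(pid1 string) string {
	switch strings.TrimSpace(pid1) {
	case "systemd":
		return "systemd"
	case "init", "upstart":
		return "upstart"
	default:
		return "unknown"
	}
}

func main() {
	// On Linux, /proc/1/comm holds the command name of PID 1.
	b, err := os.ReadFile("/proc/1/comm")
	if err != nil {
		fmt.Println("cannot read /proc/1/comm:", err)
		return
	}
	fmt.Println("discovered init system:", classifyInit(string(b)))
}
```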
<bodie_> mgz_, I already landed the bugfix in master.  can I just $$merge$$ the PR for 1.23?  or do I need to get LGTM on it?  it's identical to what I already got LGTM'd yesterday
<mgz_> bodie_: no, you'll probably need to actually cherrypick
<mgz_> it's a different branch target
<mgz_> github may let you propose again targeting a different branch, I've not tried
<bodie_> yeah, that's what I just did
<mgz_> but you do need a new mp at least
<axw_> wallyworld_: I'm too tired to review for reals, will take another look on the weekend. feel free to get others' opinion on juju-dev about lxc security
<wallyworld_> axw_: no worries, i'm almost finished adding the loop mount config
<wallyworld_> i'll propose tomorrow
<wallyworld_> axw_: i explicitly allow the default loop devices, i think that will do for now as per my comments on the review
<ericsnow> axw_, wallyworld_: yikes! still up?  have a good weekend :)
<wallyworld_> ericsnow: yeah, about to head off, tired
<sinzui> natefinch, dimitern, I just reported bug 1431888. I need to know if juju has a regression or a requirement change so that we get the functional-restricted-network test
<mup> Bug #1431888: Juju cannot be deployed on a restricted network <ci> <deploy> <network> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1431888>
<dimitern> sinzui, yeah, I was looking at that but couldn't tell offhand what the problem is
<dimitern> sinzui, since the CA has landed we do modify a few iptables rules and routes
<dimitern> sinzui, if the job drops or removes them it won't work with containers
<sinzui> dimitern, we are just testing that we can bootstrap and deploy two services that don't have external requirements
<sinzui> We know from 25 other tests that juju can bootstrap and deploy fine.
<dimitern> sinzui, I'm looking at the prep steps the job does from the logs
<dimitern> sinzui, and trying to understand what's the issue - so that's a local environment
<dimitern> ?
<sinzui> dimitern, this is one of our oldest tests. it is ugly. the interesting bits are at line 100+ http://bazaar.launchpad.net/~juju-qa/juju-ci-tools/trunk/view/head:/test-restricted-network
<dimitern> sinzui, can you perhaps add a few things to the job - "ip link", "ip route", "ip addr", before and after the prep steps - to see how the NICs, routes and addresses
are configured
<mup> Bug #1431888 was opened: Juju cannot be deployed on a restricted network <ci> <deploy> <network> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1431888>
<sinzui> dimitern, I can. you want this just before it calls juju bootstrap?
<dimitern> sinzui, yes - before trying to change any networking stuff (e.g. around line 100) and just before bootstrap
<dimitern> sinzui, while you're at it - in case of an error also dump these before exit 1
<dimitern> sinzui, it will also be really useful for debugging if you can extract the logs for the host and containers - like you do for other local env tests
<sinzui> dimitern, that is challenging
<dimitern> sinzui, why?
<sinzui> dimitern, the test isn't in our slaves
<dimitern> sinzui, you mean you can't get the logs off that machine?
<redelmann> Hi!
<mup> Bug #1431918 was opened: gce minDiskSize incorrect <juju-core:New> <https://launchpad.net/bugs/1431918>
<redelmann> does somebody know what happened to "juju actions"? i'm using "juju 1.22-beta4-trusty-amd64"
<natefinch> bodie_, jw4: ^^
<redelmann> and the "actions" option does not exist
<bodie_> redelmann, try `JUJU_DEV_FEATURE_FLAGS=actions juju action help`
<bodie_> sorry
<bodie_> JUJU_DEV_FEATURE_FLAG
<bodie_> jw4, I thought it was FLAGS?  the Juju Doc site shows it as FLAG
<bodie_> anywho, try that
<jw4> bodie_: I think your quotes are wrong
<jw4> JUJU_DEV_FEATURE_FLAG='action' juju action help
<natefinch> JUJU_DEV_FEATURE_FLAGS=action juju action help
<natefinch> (just tried it)
<jw4> bodie_: oh; my mistake I see what you were doing
<sinzui> dimitern, it is in an ec2 instance, but i have a plan. put termination protection on the instance when I see the instance spin up. Then we can claim logs
<jw4> natefinch: sounds like we need to fix the doc
<natefinch> jw4: yep, FLAG definitely does not work
<dimitern> sinzui, good plan, let me know when you have the logs please
<wwitzel3> port of gce fix to 1.23 http://reviews.vapour.ws/r/1161/ if someone can ptal
<redelmann> bodie_ thank
<bodie_> redelmann, that work for ya?
<redelmann> bodie_, JUJU_DEV_FEATURE_FLAGS=action juju action help is working ;)
<voidspace> dimitern: ping
<voidspace> natefinch: you can probably help :-)
<bodie_> redelmann, good.  :)
<voidspace> natefinch: dimitern: cancel that ping
<voidspace> found what I was looking for
<natefinch> procrastination pays off again!
<voidspace> :-)
<bodie_> I landed my bugfix in 1.23.  Do I now mark the bug "released" rather than committed as usual?
<perrito666> bodie_: committed I would say
<perrito666> released is not released until we release
<bodie_> that's what I thought.  just making sure :)
<mup> Bug #1431612 changed: Action defaults don't work for nil params <actions> <defaults> <juju-core:Fix Released by binary132> <https://launchpad.net/bugs/1431612>
<mup> Bug #1431612 was opened: Action defaults don't work for nil params <actions> <defaults> <juju-core:Fix Released by binary132> <https://launchpad.net/bugs/1431612>
<voidspace> TheMue: when you have time: http://reviews.vapour.ws/r/1160/
<TheMue> voidspace: I'm looking. just booted my first maas node on vmware *yeehaw*
<voidspace> TheMue: awesome
<TheMue> voidspace: only don't know how to log in *lol*
<voidspace> TheMue: upgrade step is written, just looking at a test
<voidspace> TheMue: hah
<voidspace> TheMue: you have to set a secret in the config I believe and use that as password
<voidspace> or something like that...
 * fwereade out for a bit
<voidspace> TheMue: this is the upgrade step, WIP, FWIW, YMMV: https://github.com/voidspace/juju/compare/address-life...voidspace:address-life-upgrade
<TheMue> voidspace: will take a look after review
<voidspace> TheMue: It's a WIP, it can wait until the test is done
<voidspace> TheMue: just wanted to prove it's on the way...
<TheMue> good
<TheMue> voidspace: reviewed
<voidspace> TheMue: thanks, I'll make that change
<TheMue> voidspace: thx
<TheMue> voidspace: the upgrade stuff looks fine so far too
<voidspace> TheMue: I'm wondering how to test
<voidspace> TheMue: I think I have to manually insert some ip address records without a Life field
<voidspace> TheMue: and then check that the upgrade step adds them
<voidspace> TheMue: (plus a test for idempotency for records with an existing Life field)
<TheMue> voidspace: hmm, I once had an upgrade too. have to see how I've done it
<voidspace> TheMue: that means manually constructing bson.M{...} for the address records
<voidspace> which is easy but tedious
<voidspace> or create some records, delete the Life field...
<voidspace> that's probably quicker (and more future proof) but weirder
<voidspace> (future proof against future schema changes that would also have to be made in the manual bson.M records)
<voidspace> although we shouldn't have to future proof an upgrade step
<TheMue> voidspace: I have to admit we only tested that it is called (the upgrade function)
<voidspace> maybe just a manual test...
<voidspace> :-)
<TheMue> voidspace: see http://reviews.vapour.ws/r/253/diff/#
<voidspace> TheMue: there are changes in state/upgrades_test.go
<voidspace> TheMue: and that test runs MigrateJobManageNetworking
<TheMue> voidspace: oh, eh, yeah. too quickly scrolled to the bottom
<voidspace> TheMue: but you're adding to a set rather than adding a new field
<voidspace> and you manually set the jobs of the machines you add before migrating
<voidspace> whereas if I create new IPAddresses in the test they'll *have* the new field
<voidspace> I'll think about it
<voidspace> :-)
<TheMue> voidspace: testing upgrade is *ugly*, indeed
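The plan voidspace sketches (insert docs lacking a Life field, run the step, check the field was added, run it again for idempotency) can be shown without mgo at all. A toy sketch using plain maps as stand-ins for mongo documents; the function name `addLifeFieldStep` and the integer life values are hypothetical, and the real step would issue $set updates against the ipaddresses collection:

```go
package main

import "fmt"

const alive = 0 // stand-in for state.Alive; 2 stands in for Dead below

// addLifeFieldStep is a toy upgrade step: give every address doc that
// lacks a "life" field the default value, leaving existing values alone.
// Running it twice must be a no-op the second time (idempotency).
func addLifeFieldStep(docs []map[string]interface{}) {
	for _, doc := range docs {
		if _, ok := doc["life"]; !ok {
			doc["life"] = alive
		}
	}
}

func main() {
	docs := []map[string]interface{}{
		{"value": "10.0.0.1"},            // legacy doc, no life field
		{"value": "10.0.0.2", "life": 2}, // already migrated (Dead)
	}
	addLifeFieldStep(docs)
	addLifeFieldStep(docs) // second run must change nothing
	fmt.Println(docs[0]["life"], docs[1]["life"])
}
```

The real test would assert the same two properties against state: missing fields get the default, and existing Life values survive a re-run.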
<dimitern> voidspace, since you've already sent the 1835 for merging, I'd ask you to address my review comments in a follow-up
<voidspace> ah
<voidspace> dimitern: ok
<voidspace> dimitern: I can do that in the upgrade setp
<voidspace> *step
<voidspace> dimitern: why do we now need Refresh?
<voidspace> dimitern: is it part of the interface?
<voidspace> I know what it does
<voidspace> but we only need it where we need retries
<voidspace> the other comments are easy enough to address
<voidspace> and if the answer is "we might need it", then can't it wait until we *do* need it? (like Destroy)
<dimitern> voidspace, it's part of the interface
<voidspace> heh, that was the answer I was hoping you wouldn't give
<voidspace> ok
<perrito666> hey, I need to be OoO for a moment, i'll be back in about two hrs
<dimitern> voidspace, :)
<fwereade> is anyone free to be hassled for a round of reviews? I'd like to land a few that all feed into one, so I can get a ~clean diff on that one without forcing the others into a sequence they don't have to be in
<jw4> fwereade: I'll be out for a couple hours but when I get back I'd be happy to
<dimitern> sinzui, I've commented on the restricted network bug
<dimitern> sinzui, can you perhaps give me access to that machine after it has failed?
<sinzui> dimitern, I can
<marcoceppi> dimitern: thanks for the PR
<dimitern> marcoceppi, np :) I like the tool very much
<mup> Bug #1326355 changed: network-bridge doesn't work on trusty Ubuntu installed from scratch <addressability> <local-provider> <lxc> <network> <juju-core:Won't Fix> <https://launchpad.net/bugs/1326355>
<sinzui> dimitern, ha ha, since the test breaks networking, we cannot get into the machine after the test starts
<sinzui> dimitern, I will try to change the test to either revert the network on failure, or just get the logs
<dimitern> sinzui, ok, sounds good - add the iptables-save dump to the logs as well please
<dimitern> as commented on the bug
<voidspace> dimitern: ping
<dimitern> voidspace, pong
<voidspace> dimitern: on isDeadDoc and handling the case where the address was already removed
<voidspace> dimitern: your comment says "handle the case where the unit was removed already"
<voidspace> dimitern: I assume you mean address
<voidspace> dimitern: I've been looking at unit.Remove
<voidspace> dimitern: adding the isDeadDoc assert is trivial obviously
<voidspace> dimitern: but it's not clear from Remove what code that handles it is
<voidspace> dimitern: unless it's the code doing the Refresh - in which case it *looks* to me like it just ignores that error
<voidspace> dimitern: and if this is the case I wonder how that is different from not having the assert
<dimitern> voidspace, yeah - s/unit/address/ there in that comment
<voidspace> dimitern: and if it's not correct I'd like to be corrected
<dimitern> voidspace, so the isDeadDoc assert will cause the remove op to fail if the record is not dead
<voidspace> ah, right - the Refresh handles not found, which is different
<dimitern> voidspace, yeah
<voidspace> we already check Life before performing the remove - so the assert *can't* fail of course
<dimitern> voidspace, that's why refresh() is needed
<voidspace> ah - so "handle the case where the unit was removed already" is a *separate* issue - not related
<voidspace> dimitern: they can't go from Dead to NotDead
<voidspace> dimitern: so I still say it can't fail
<dimitern> voidspace, add asserts, and if they fail - refresh your local copy of the doc (which is obviously stale at this point) and retry - if needed
<ericsnow> cmars: could you take a look at http://reviews.vapour.ws/r/1162/?
<ericsnow> cmars: it should be pretty straight-forward
<voidspace> dimitern: but the only assert [you want me to add] can't fail...
<dimitern> voidspace, my point is you should *not* rely on the local copy of the doc (inside the state.IPAddress) when taking decisions whether to remove a doc or update it
<voidspace> dimitern: we've just fetched it - and if life is Dead it can't go back
<dimitern> voidspace, because it's possible that data is stale or somebody else changed the same doc your local copy was taken from
<voidspace> dimitern: and if it's *not* Dead we'll bail before getting to that assert
<voidspace> dimitern: so in this case I *don't* think that's possible
<dimitern> voidspace, where is that you just fetched it?
<voidspace> dimitern: prior to calling Remove
<voidspace> dimitern: we only Remove in one place and we fetch all addresses for a container and Remove them
<voidspace> dimitern: in the future we'll fetch all dying ones and Remove them
<dimitern> voidspace, that's different - that's up to how the test is setup, I'm talking about the implementation
<voidspace> dimitern: so am I
<voidspace> dimitern: the actual use of Remove
<voidspace> not the testing of it
<voidspace> dimitern: and as I said, if the local copy doesn't have a Life == Dead we bail before the assert
<voidspace> dimitern: and if the Life  of the local copy is Dead then there is no change anyone else can or will make to change that
<dimitern> voidspace, not true
<voidspace> dimitern: under what circumstances do you imagine an address going from Dead to NotDead?
<dimitern> voidspace, imagine this case: i0 := st.IPAddress(x), i1 := st.IPAddress(x); i1.EnsureDead(), i1.Remove() , i0.EnsureDead()
<dimitern> voidspace, what happens?
<cmars> ericsnow, looking
<voidspace> dimitern: the isAliveDoc assert in EnsureDead fails
<dimitern> voidspace, and we're ignoring it and setting the local Life to Dead
<voidspace> dimitern: there's an early return before setting local life to Dead
<dimitern> voidspace, actually I'm not sure if it will even fail with ErrAborted
<dimitern> voidspace, if the doc is gone
<voidspace> dimitern: so you're saying asserts about a field will *succeed* if the doc doesn't exist at all
<voidspace> dimitern: that sounds unlikely and horrible if true
<dimitern> voidspace, I'm not saying that, I have to check if it is
<voidspace> :-)
<dimitern> voidspace, ok, I still think isDeadDoc should be added on Remove
<dimitern> voidspace, but I've totally missed that now we have Life, a few other methods need to change to account for that
<voidspace> dimitern: do you want to add more review comments?
<voidspace> dimitern: I'm going jogging anyway
<dimitern> voidspace, e.g. SetState and AllocateTo
<dimitern> voidspace, I will yeah
<voidspace> dimitern: yeah, ok
<voidspace> dimitern: an assert on a doc that doesn't exist fails
<voidspace> dimitern: and now you can't call Remove twice on an IPAddress - isDeadDoc now fails the second time
<dimitern> voidspace, yeah, that's why it needs to call refresh on ErrAborted, and if it's not found - return nil, as in other cases
<voidspace> dimitern: ok
<dimitern> voidspace, thanks for confirming about the assert
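The pattern dimitern and voidspace converge on - run the transaction with an isDeadDoc assert, and on ErrAborted refresh to tell "not dead yet" apart from "already removed" - can be sketched with an in-memory store. All names here are illustrative stand-ins; the real code builds mgo/txn operations against state:

```go
package main

import (
	"errors"
	"fmt"
)

var errAborted = errors.New("transaction aborted") // stand-in for txn.ErrAborted

type store struct {
	life map[string]string // doc id -> "alive", "dying" or "dead"
}

// removeIfDead mimics a txn op with an isDeadDoc assert: it removes the
// doc only when its life is "dead", and aborts otherwise.
func (s *store) removeIfDead(id string) error {
	if s.life[id] != "dead" {
		return errAborted
	}
	delete(s.life, id)
	return nil
}

// Remove handles the abort by "refreshing": a missing doc means someone
// else already removed it, which callers treat as success; a live doc
// is a genuine error.
func (s *store) Remove(id string) error {
	err := s.removeIfDead(id)
	if !errors.Is(err, errAborted) {
		return err
	}
	if _, ok := s.life[id]; !ok {
		return nil // already removed elsewhere: not an error
	}
	return fmt.Errorf("cannot remove address %q: not dead", id)
}

func main() {
	s := &store{life: map[string]string{"a": "dead", "b": "alive"}}
	fmt.Println(s.Remove("a")) // removes it
	fmt.Println(s.Remove("a")) // already gone: still nil
	fmt.Println(s.Remove("b")) // still alive: error
}
```

This is why the assert matters even though Life can never go from Dead back to alive: it guards against the local copy being stale relative to the database, not against resurrection.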
<redelmann> why when i try debug-hooks "JUJU_CONTEXT_ID" is not set?
<redelmann> am i doing something wrong?
<redelmann> ex: config-get ->>  "JUJU_CONTEXT_ID" is not set
<redelmann> inside tmux: juju debug-hooks rabbitmq-server/0 amqp-relation-changed
<natefinch> redelmann: note that after you do debug-hooks, you then have to do a juju resolved --retry from another terminal, to get the hook to rerun, so the debug-hooks session will catch it
<natefinch> then your debug hook terminal will dump you into the right directory with all the right environment variables set etc
<ericsnow> cmars: I've replied to your review comment; did the code otherwise look okay?
<redelmann> natefinch, but the unit is not in an error state
<redelmann> natefinch, so it says: ERROR unit "rabbitmq-server/0" is not in an error state
<redelmann> natefinch, i just need to do a "get-config" to see what this charm is giving to my charm
<natefinch> redelmann: ahh, debug-hooks only works if the hook is in an error state... this is usually easy to do, you can add a "return 1" or similar to the top of the hook script so that it automatically errors out
<redelmann> natefinch, but i deploy rabbitmq from online charm
<redelmann> natefinch, so i have to download the charm and edit the hook?
<redelmann> natefinch, too complicated just to see what get-config "sends"
<natefinch> redelmann: you can still edit the hook script on the unit
<mup> Bug #1215579 changed: Address changes should be propagated to relations <addressability> <network> <reliability> <juju-core:Fix Released> <https://launchpad.net/bugs/1215579>
<redelmann> natefinch, that's true
<redelmann> natefinch, second day playing with juju
<natefinch> redelmann: so, you can deploy the normal rabbitmq charm, then ssh into the unit (do juju ssh <machine number>), then go to the service's hooks directory on that machine (/var/lib/juju/agents/unit-rabbitmq-0/charm/hooks) and  edit the hooks
<natefinch> redelmann: no problem... it's a bit of a learning curve at first
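The workflow natefinch walks through, gathered into one place. The unit and hook names are taken from this conversation and are illustrative; the agent directory path is the one natefinch quotes above.

```shell
# Force the hook into an error state: ssh to the unit's machine and add
# an early "exit 1" at the top of the hook script under
#   /var/lib/juju/agents/unit-rabbitmq-0/charm/hooks
juju ssh 1

# Terminal 1: attach a debug session for the hook you care about.
juju debug-hooks rabbitmq-server/0 amqp-relation-changed

# Terminal 2: re-run the failed hook so the debug session catches it.
# You then land in the hook's directory with JUJU_CONTEXT_ID and the
# rest of the hook environment set, and tools like config-get work.
juju resolved --retry rabbitmq-server/0
```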
<natefinch> rick_h_: does the gui show all the config data that a charm sets?
<redelmann> natefinch, thank!
 * natefinch should really use the gui more often
<rick_h_> natefinch: it shows all defined in metadata.yaml yes
<rick_h_> err config.yaml
<natefinch> oh yeah, config.yaml
<rick_h_> e.g. https://demo.jujucharms.com/precise/juju-gui-108/#configuration
<natefinch> redelmann: the charm will have a config.yaml you can look at, which should define all the properties it sets... or you can use the gui to go look at them
<natefinch> redelmann: https://api.jujucharms.com/v4/trusty/rabbitmq-server-26/archive/config.yaml
<redelmann> natefinch, but i need relation variables
<redelmann> natefinch, when i create a relation with my charm, rabbitmq is not giving me its address
<rick_h_> redelmann: you've fallen into a little bit of a hole there. Relations are meant to be quite flexible and aren't that well documented. The only relation currently really well documented is the mysql one as a first stab at it. https://jujucharms.com/docs/interface-mysql
<redelmann> natefinch, address/hostname/whatever
<rick_h_> redelmann: it's something we're working on trying to improve because it can be frustrating when trying to relate to a new service
<rick_h_> redelmann: the best thing we suggest at the moment is to look at other services that talk to that service or to check out what the service does in the hooks for joining the relation.
<redelmann> rick_h_, ok, thanks, i'm going to look at other charms
<redelmann> rick_h_, i already tried: hookenv.relation_get("host"), hookenv.relation_get("hostname")
<rick_h_> redelmann: yea, so looking at the charm the amqp relation is in line 110 of https://api.jujucharms.com/v4/trusty/rabbitmq-server-26/archive/hooks/rabbitmq_server_relations.py
<rick_h_> redelmann: which sets up a bunch of stuff in relation_settings. You should be able to dump out everything in that I'd think. /me is trying to see
<redelmann> rick_h_, from what i can see "hostname" should do the trick!
<rick_h_> redelmann: woot!
<redelmann> rick_h_, nothing
<rick_h_> boooo
<redelmann> rick_h_, i was trying to retrieve hostname from the rabbitmq charm, but it was my problem
<redelmann> rick_h_, hookenv.relation_get("hostname") is working as expected
<rick_h_> redelmann: ah ok cool
<redelmann> rick_h_, but my template was wrong
<redelmann> rick_h_, thank for the help!
<rick_h_> redelmann: np, sorry for the trouble. It's definitely a weak point we've got our eye on
<rick_h_> redelmann: thanks for pushing through it
<ericsnow> cmars: thanks for the review
<mup> Bug #1431685 was opened: juju nova-compute charm not enabling live-migration via tcp with auth set to none <juju-core:New> <https://launchpad.net/bugs/1431685>
<fwereade> if anyone's on, I'd love a second opinion on http://reviews.vapour.ws/r/1151/
<jw4> I recently created a bash script to display all your branches (local and remote) sorted by last commit date, and colorized to make finding recent work across remotes easy
<jw4> now I can't get my link to paste! :)   hand typed:  https://gist.github.com/johnweldon/0a9ee3c9406fab2ac93b
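Not jw4's actual gist, but the core of such a tool fits in one git invocation; the colorizing and per-remote grouping are what his script adds on top:

```shell
# List local and remote branches, newest commit first.
git for-each-ref refs/heads refs/remotes \
    --sort=-committerdate \
    --format='%(committerdate:short) %(refname:short)'
```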
<jw4> fwereade: looking... shouldn't you be sleeping?
<fwereade> jw4, ehh, soon
<fwereade> jw4, I want to get a bunch of these landed so I can propose what I have that works and has hooks and proper tests and everything, but depends on too much still in review for a nice diff without hassle
<jw4> fwereade: yeah, makes sense.  I actually wrote my little git tool (^^) so that I could add your repo as a remote and find the branches you've been working on recently :)
<alexisb> fwereade, dude what time is it for you man?
<jw4> fwereade: quarter til 2?
<jw4> oh, quarter to one
<jw4> (DST screws me up even worse now !)
<fwereade> yeah, only quarter to 1
<fwereade> if I can propose that last branch I'll feel like I've done the week
<jw4> fwereade: reading that Raft doc you recommended is helping me review your changes ;)
<alexisb> fwereade, I can put I ship it on it if that helps :)
<jw4> fwereade: non-graduated second opinion :shipit:
#juju-dev 2015-03-15
<mup> Bug #1432421 was opened: vivid local failed to retrieve the template to clone <deploy> <local-provider> <vivid> <juju-core:Triaged> <https://launchpad.net/bugs/1432421>
#juju-dev 2016-03-14
<davecheney>   
<thumper> davecheney: check now
 * thumper did something
<davecheney> thumper: ta
<davecheney> thumper: got it, i'll fix that stupid typo
<axw> wallyworld: just looking at the createmodel command, the new credential code is not entirely correct. there's no requirement that the attr names match between credentials and provider config. also, the provider might need to do auth-type specific things.
<axw> wallyworld: e.g. in gce, read creds from the file pointed to by the "file" attr
<axw> wallyworld: or set auth-mode in openstack
<wallyworld> axw: that is true, there's currently no validation
<axw> wallyworld: the creds-to-config code probably needs to be split out of BootstrapConfig
<axw> wallyworld: s/validation/transformation/
<wallyworld> axw: the other thing is that with --config options to bootstrap, there's also no validation there either
<wallyworld> so people can pass in crap values
<axw> wallyworld: true, but not so much with credentials - they come in a standardish format in a file. we should treat them the same in bootstrap and create-model
<wallyworld> would be good to fix all that before we ship next beta
<wallyworld> yep
<wallyworld> i should have left a todo in the code
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1556630
<mup> Bug #1556630: Timeout github.com/juju/juju/cmd/jujud/reboot <blocker> <ci> <go1.5> <regression> <test-failure> <wily> <juju-core:Fix Committed by dave-cheney> <https://launchpad.net/bugs/1556630>
<davecheney> is fixed committed
<davecheney> can someone kick the CI jobs to get this validated pls
<mup> Bug #1533694 changed: inconsistent juju-gui and juju status <juju-core:Expired> <https://launchpad.net/bugs/1533694>
<bradm> should I be able to do a juju deploy --to lxc:1 with juju 1.25.4 and xenial?  it keeps telling me "ERROR adding new machine to host unit "test/1": cannot add a new machine: machine 1 cannot host lxc containers"
<davecheney> https://github.com/juju/juju/pull/4708
<davecheney> ^ thumper, menn0
<menn0> davecheney: looking
<davecheney> just a small driveby
<menn0> davecheney: done
<davecheney> ta
<axw> wallyworld: finally got all the tests fixed. just gotta update a couple of plugins which are still using configstore, then I can remove it.
<wallyworld> awesome
<davecheney> menn0: here's another little one, https://github.com/juju/juju/pull/4709
<davecheney> anyone: https://github.com/juju/juju/pull/4711 << not a cleanup
<davecheney> an actual improvement
<davecheney> ^ wallyworld
<davecheney> test that is not hooked up
<wallyworld> davecheney: i think i just reviewed that one
<davecheney> ta
<davecheney> thank you
<davecheney> i was scratching my head going "how does this work"
<davecheney> and then, oh, it doesn't...
<axw> wallyworld: https://github.com/juju/juju/pull/4712. if it's too difficult to review, I can try and break it up
<wallyworld> axw: i'll be ok :-)
<axw> wallyworld: there's one more bit that I'm working on, which is validation of bootstrap config in jujuclient. won't change much though
<wallyworld> ok
<wallyworld> axw: lgtm with a few small nitpicks. i'll be pressing to try and get this branch finally CI tested
<axw> wallyworld: thanks. just finishing up the validation changes - more test fallout, should be trivial tho
<wallyworld> ok
<wallyworld> so good to see that legacy stuff gone
<axw> wallyworld: when do you expect we'll be able to remove -b from restore?
<wallyworld> axw: good question. when someone gets to do the work. i'm hoping we can get some bandwidth next week
<axw> wallyworld: shall I just say "before 2.0"?
<wallyworld> why not :-)
<frobware> dimitern: ping
<frobware> dimitern: can we sync on a ppc64 bug
<dimitern> frobware, hey, sure
<frobware> dimitern: let's use standup HO
<dimitern> frobware, omw
<mwhudson> frobware, dimitern: if it's gccgo related please don't spend too much time on it...
<TheMue> morning
<frobware> mwhudson: sure, but it's currently OK on master, broken on our maas-spaces2 branch.
<frobware> mwhudson: still around?
<frobware> dimitern, voidspace: as a heads up, I have another (almost complete) PR for merging latest master just in case you were also thinking the same...
<dimitern> frobware, sounds good +1
<sparkiegeek> what SSL/TLS protocols does the Juju controller speak?
<frobware> mwhudson, dimitern: for reference, the ppc64 issue is fixed with go1.6
<dimitern> frobware, sweet! but until we can use 1.6 that "fix" will solve the problem on maas-spaces2 I guess?
<frobware> jam: so using `delete(m, "yup!")' means we can drop the log as that works too. :)
<jam> frobware: yay... ish/
<mup> Bug #1554417 changed: Juju 2.0:  ERROR cannot deploy bundle: machine "0" is not referred to by a placement directive (and 4 more errors) <oil> <juju-core:Invalid> <https://launchpad.net/bugs/1554417>
<mup> Bug #1556961 opened: i/o timeout from mongodb <kanban-cross-team> <landscape> <juju-core:New> <https://launchpad.net/bugs/1556961>
<mup> Bug #1556630 changed: Timeout github.com/juju/juju/cmd/jujud/reboot <blocker> <ci> <go1.5> <regression> <test-failure> <wily> <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1556630>
<mup> Bug #1557020 opened: Spurious output with kill|destroy-controller <juju-core:New> <https://launchpad.net/bugs/1557020>
<rick_h__> katco: ping
<natefinch> rick_h__: she's out sick this morning
<rick_h__> natefinch: ah ok, thanks for the info
<natefinch> rick_h__: welcome
 * rick_h__ stops waiting in call
<natefinch> doh
 * rick_h__ sends chicken noodle soup katco's way
<perrito666> rick_h__: oh you can wait, she will eventually be back
<TheMue> *lol*
<rick_h__> perrito666: most helpful person around :)
<perrito666> you know me
<sparkiegeek> what SSL/TLS protocols does the Juju controller speak?
<mup> Bug #1557020 changed: Spurious output with kill|destroy-controller <juju-core:New> <https://launchpad.net/bugs/1557020>
<natefinch> sparkiegeek: tls 1.2
<voidspace> dimitern: frobware: I guess networking meeting is not on?
<voidspace> dimitern: frobware: given that no-one else is here...
<dimitern> voidspace, yeah, sorry about that
<voidspace> dimitern: jay didn't come either
<dimitern> right, well too much to do anyway :)
<natefinch> first meeting about kindergarten for our oldest child is today.  Info packet was set in comic sans and poorly written....
<sparkiegeek> natefinch: thanks
<natefinch> sparkiegeek: welcome
<frobware> voidspace: can we sync on MAAS 2.0; standup HO
<natefinch> rogpeppe: you around?
<rogpeppe> natefinch: i am
<natefinch> rogpeppe: I'm adding a --resource flag to charm publish, so you can specify resources to publish with the charm... however, the code doesn't actually exist on the charmstore yet to support that.  The tests for publish currently all run a full-on charmstore... so I'm uncertain how to proceed with testing the added functionality.  Any suggestions?
<rogpeppe> natefinch: in my view, you can't test it.
<rogpeppe> natefinch: you could implement a mock charmstore endpoint, but what confidence would that give you that the code would eventually work when run against a real charmstore?
<natefinch> rogpeppe: it would ensure I am parsing the CLI correctly and passing that data onward and upward.  It can't ensure that the real charmstore actually has that endpoint or deals with the information correctly, that's true.
<natefinch> rogpeppe: I would note that I am required by a very long management chain to deliver this functionality (presumably with tests) in the next week-ish.. and I am sure we cannot also implement the serverside work in that time.
<rogpeppe> natefinch: if that's all you need to do, then i'd implement a fake charmstore handler that implements or fakes the required endpoints
<natefinch> rogpeppe: I think that's the best I can do, given the constraints of my current reality :)
<rogpeppe> natefinch: it probably is
<frobware> dimitern, voidspace: http://reviews.vapour.ws/r/4157/
<dimitern> frankban, looking
<frankban> dimitern: ty
<dimitern> frankban, oops sorry :) tab completion failed
<dimitern> frobware, LGTM
<frobware> dimitern: you can get to the bug from the commit
<frankban> dimitern: I have a quick one as well ;-) http://reviews.vapour.ws/r/4154/
<dimitern> frankban, sure :) looking
<frankban> ty
<mup> Bug #1557052 opened: Can't bootstrap LXD on juju2.0-beta2 (xenial): no registered provider for "lxd" <go1.2> <lxd> <trusty> <juju-core:New> <https://launchpad.net/bugs/1557052>
<dimitern> frankban, feel free to drop the issue, but I think it's good to add easy references to bugs where appropriate
<dimitern> argh!
<dimitern> frobware, ^^
<frobware> dimitern: aimed at me? :)
<dimitern> frobware, :)
<frankban> lol
<frankban> dimitern: cool part is that dimitern comment was appropriate to my branch too
<dimitern> frankban, yours LGTM btw :)
<frankban> dimitern: ty
<frobware> dimitern, voidspace: one more (merge master): http://reviews.vapour.ws/r/4158/
<dimitern> frobware, cheers, will look shortly
<frobware> dimitern: thanks. would be good to try and get this ^^ in today as we'll have a CI run to look at tomorrow
<voidspace> frobware: great
<frobware> voidspace: can we sync; standup HO
<voidspace> frobware: sure
<natefinch> rogpeppe: do you have a link to info on how to set up my machine to run the rest of the tests?  For example, I don't have elastic search running on my machine, which seems like it's necessary
<voidspace> frobware: I'm there
<natefinch> rogpeppe: actually... just looked at the readme... is makesysdeps all I need?
<rogpeppe> natefinch: i think so
<dimitern> frobware, LGTM
<natefinch> rogpeppe: I guess I can try it and see ;)
<rogpeppe> natefinch: you can run the tests with ES disabled too
<natefinch> rogpeppe: oh, that would be nice :)
<natefinch> rogpeppe: especially since make sysdeps just failed :)
<rogpeppe> natefinch: but it's not too bad - if you've installed it, you can just run:  sudo service elasticsearch start
<rogpeppe> natefinch: and the tests will use it
<rogpeppe> natefinch: you can export JUJU_TEST_ELASTICSEARCH=none
<rogpeppe> natefinch: but that does disable quite a few tests
<natefinch> rogpeppe: heh, it disables all the publish commands.  Perhaps that's not the way to go about it.
<rogpeppe> natefinch: i like the charmstore approach to testing against ES more than juju's approach to testing against mongo - it uses the same server for all tests rather than starting a new instance for each tested package.
<ericsnow> natefinch: FYI, someone suggested that I get ES directly from the site and not install via the make file
<ericsnow> natefinch: it worked for me
<natefinch> ericsnow: do we need a specific version?
<ericsnow> natefinch: I'm running 1.7.5 (downloaded, unpacked to dir under $HOME, and run straight from unpacked dir/bin/elasticsearch without -d)
<ericsnow> natefinch: I left it running in a separate terminal so I could deal with it more directly
<natefinch> lol, installing elasticsearch 1.7.4: The package is of bad quality. The installation of a package which violates the quality standards isn't allowed. This could cause serious problems on your computer. Please contact the person or organisation who provided this package file and include the details beneath.
<natefinch> er .5 that is
<perrito666> how exactly can dpkg know that?
<mup> Bug #1557102 opened: Cannot unregister from a controller <docteam> <juju-core:New> <https://launchpad.net/bugs/1557102>
<natefinch-afk> off to battle the school system's bureaucracy, wish me luck
<perrito666> luck
<mup> Bug #1557102 changed: Cannot unregister from a controller <docteam> <juju-core:New> <https://launchpad.net/bugs/1557102>
<mup> Bug #1557102 opened: Cannot unregister from a controller <docteam> <juju-core:New> <https://launchpad.net/bugs/1557102>
<mup> Bug #1557124 opened: Controller is orphaned for remotely registered user <docteam> <juju-core:New> <https://launchpad.net/bugs/1557124>
<mup> Bug #1557143 opened: help text for juju list-users needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1557143>
<mup> Bug #1557146 opened: help text for juju register needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1557146>
<mup> Bug #1557148 opened: help text for juju add-user needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1557148>
<mwhudson> frobware: yay for 1.6 fixing things
<natefinch> ericsnow: are we skipping standup today?
<ericsnow> natefinch: wasn't planning on it
<frobware> mwhudson: this was my fix for gccgo. http://reviews.vapour.ws/r/4157/
<frobware> mwhudson: only on maas-spaces2 atm, but if you have or are aware of a better fix...
<mwhudson> frobware: the better fix is https://bugs.launchpad.net/ubuntu/+source/golang/+bug/1536882
<mup> Bug #1536882: upload golang1.6 package for trusty <docteam> <golang (Ubuntu):In Progress by mwhudson> <https://launchpad.net/bugs/1536882>
<frobware> :)
<mwhudson> which reminds me
<alexisb> voidspace, frobware you guys still around?
<mwhudson> are you guys still using gccgo on ppc64el for xenial?
<mwhudson> because if you are, please stop
<mwhudson> (gccgo is less buggy in xenial, but still)
<frobware> alexisb: I can be
<frobware> mwhudson: not sure about xenial
<alexisb> mwhudson, no
<axw> davecheney: thanks for the unused cleanup. I've been meaning to do that, didn't know which tool to use tho
<perrito666> axw: wallyworld  do you people also have DSTd this weekend?
<wallyworld> not in queensland or western australia
<axw> perrito666: what he said. you?
<axw> davecheney: oh, now I see why I didn't know which tool to use. it was announced yesterday ;p
<wallyworld> axw: what tool are you talking about?
<axw> wallyworld: https://godoc.org/honnef.co/go/unused  -- tool davecheney used for the PR that removed all the dead code
<wallyworld> axw: huh, my ide has done that for over 12 months :-)
<wallyworld> don't need to wait for go to catch up
<axw> wallyworld: across package boundaries?
<wallyworld> yup
<axw> mkay. I'd like to see the code for that
<wallyworld> it's open source. there's a whole inspections framework
<wallyworld> all sorts of inspections are written
<perrito666> wallyworld: but it looks so ugly :p
<wallyworld> pycharm with golang plugin
<wallyworld> perrito666: have you even seen it?
<perrito666> yes I did, I once even did an editor that "borrowed extensively" from it :p
<wallyworld> i use the gtk theme, looks native
<perrito666> wasn't pycharm a closed source thing?
<wallyworld> no, never
<perrito666> oh I see there is a paid version and a free version
#juju-dev 2016-03-15
<ericsnow> wallyworld: does anything need to change in core for the addition of resources to the bundle metadata?
<wallyworld> ericsnow: the native bundle deployment code i would expect
<wallyworld> there will be changes to upstream repos bundlechanges and charm as well
<ericsnow> wallyworld: where is the native bundle deployment code?  are you talking about cmd/juju/service/bundle.go?
<wallyworld> yeah
<ericsnow> wallyworld: k
<davecheney> The program 'i' is currently not installed. You can install it by typing:
<davecheney> sudo apt-get install iprint
<davecheney> who does shit like this ?!? seriously
<menn0> \o/ well that's a milestone. I just triggered a migration and the model was successfully serialised and imported into the target controller.
<menn0> still lots to do though
<natefinch> menn0: nice!
<davecheney> w00t
<menn0> next big milestone is to get the agents to switch over to the target controller
<natefinch> menn0: that should be the easy part, right?
<menn0> natefinch: maybe slightly easier...
<axw> wallyworld: http://reviews.vapour.ws/r/4163/diff/
<wallyworld> looking
<axw> wallyworld: no test, because it involves the filesystem + raciness. could test if we moved setting current controller to jujuclient, but I think that's not a worthwhile use of time right now
<wallyworld> axw: could you add a todo before landing?
<axw> menn0: nice work :)  is model migration doing any per-provider post-migration steps? something I brought up with thumper before, is that the hosted models in azure use resources in the controller resource group
<axw> menn0: so if they're moved to a separate controller, each controller will need to run some code to move things about
<axw> wallyworld: sure
<menn0> axw: yep, that's been thought of (as per your conversation with thumper)
<menn0> axw: I can't remember if it's in place yet (I *think* it is) but it's definitely getting done
<axw> menn0: cool
<axw> menn0: BTW, there's a new "controller-uuid" config attr that applies to all providers now
<axw> fairly new
<mup> Bug #1557102 changed: Cannot unregister from a controller <docteam> <juju-core:New> <https://launchpad.net/bugs/1557102>
<mup> Bug #1557124 changed: Controller is orphaned for remotely registered user <docteam> <juju-core:Invalid> <https://launchpad.net/bugs/1557124>
<menn0> axw: what's that for?
<axw> menn0: it's used by azure only atm. azure needs the uuid to identify the controller resource group
<menn0> axw: ok cool.
<natefinch> man, github.com/juju/cmd has a ton of assumptions about how it'll be used that make it really ugly to unit test
<mup> Bug #1557216 opened: MADE-model-worker stale master <block-ci-testing> <ci> <juju-core:Incomplete> <juju-core made-model-workers:Triaged> <https://launchpad.net/bugs/1557216>
<mup> Bug #1557254 opened: Move current-controller management to jujuclient <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1557254>
<mup> Bug #1557264 opened: feature-resources is too stale to test <block-ci-testing> <ci> <juju-core:Incomplete> <juju-core feature-resources:Triaged> <https://launchpad.net/bugs/1557264>
<wallyworld> axw: do you have a minute to discuss add-credentials? standup hangout?
<axw> wallyworld: ok, just getting a drink, brt
<bradm> can someone tell me I'm doing something wrong here, please?  trying to do a juju deploy <charm> --to lxc:# I get an error
<bradm> ERROR adding new machine to host unit "ubuntu/0": cannot add a new machine: machine 0 cannot host lxc containers
<bradm> this is using xenial with juju 1.25.3 on canonistack-lcy02, but the same thing happens with maas
<axw> wallyworld: actually, USSO probably doesn't fit into accounts.yaml. they're not controller-specific.
<axw> wallyworld: unless we store things not keyed by controller in there too?
<wallyworld> axw: i was thinking we might
<wallyworld> if it is juju related
<wallyworld> easier to manage for the user, less droppings all over their machine
<axw> wallyworld: it's a bit overloaded though. an account currently means "a user in a controller"
<wallyworld> fair point
<wallyworld> axw: i think we need to see what the total surface area of this stuff is going to be and make a call when we know fully what we're dealing with
<cmars> wallyworld, hey, thanks for the reviews on 4131. if you're ok with http://reviews.vapour.ws/r/4131/diff/6-8/, i'll go ahead and land it in model-acls
<cmars> pushed those after your last review
<cmars> wallyworld, also, is there a good shining example of a BaseSuite test that sets up an APICaller for api client tests I could cheat off of?
<wallyworld> cmars: one quick thing that i see - can we code isGreaterAccess() to do the right thing with read/write/admin upfront?
<wallyworld> should be minimal extra effort
<wallyworld> cmars: with the apicaller thing - i thought the tests were all correctly set up and you simply needed s/JujuConnSuite/BaseSuite . if not then leave for another pr
<cmars> wallyworld, i'm hesitant to actually define the ModelWriteAccess until we actually implement it
<cmars> wallyworld, however, i think we're forward compatible with such a change when we're ready to make it
<cmars> wallyworld, ^^ state.ModelWriteAccess, that is
<cmars> wallyworld, i'd like the state to reflect reality, which is, you're either read-only, or full-blown admin
<cmars> wallyworld, that way, when we do add more nuanced access values, they match existing state
<cmars> make sense?
<wallyworld> cmars: sort of. so right now, if we happen to save "AccessWrite" in to state for a user. that isGreaterAccess() will fail if new access is "admin"
<wallyworld> isn't that a trap we can easily avoid?
<cmars> wallyworld, do you mean if we added state.ModelWriteAccess? because right now, that value is not defined in state.
<cmars> wallyworld, the state package doesn't give callers the language to phrase such a condition (unless you went and casted arbitrary strings to state.ModelAccess)
<cmars> wallyworld, i think when we do make the distinction between "write" and "admin", we'll need to then improve the logic in the apiserver/modelaccess
<wallyworld> ok. fair enough. maybe a todo next to where the constants are defined that if a new one is added, go and update isGreaterAccess()? i'll leave to you to do or not
<cmars> wallyworld, +1 for a TODO, i was on the fence about that
<wallyworld> cmars: for the api example, cmd/juju/storage/add_test.go (assuming you can't just drop JujuConnSuite for BaseSuite with no other changes). if it's not that easy change, do in another pr so as not to block this work
<cmars> wallyworld, my command tests use mocks like that, but I need to mock an apiserver to test the api client.
<wallyworld> cmars: ah, right sorry, let me get another example
<wallyworld> cmars: see "apiCaller := basetesting.APICallerFunc(" in juju/api/storage/client_test.go
<cmars> wallyworld, awesome. yep, that's what I needed!
<wallyworld> great :-)
<wallyworld> go and cargo cult  that sucker
<natefinch> wallyworld: got a minute?
<wallyworld> sure
<natefinch> wallyworld: I was looking at the channel code... it still looks like roger & co are ignoring the CharmRepo wrapper, so channels aren't getting added to those interfaces
<natefinch> wallyworld: for example: https://github.com/juju/charmrepo/pull/71/files
<wallyworld> looking
<mup> Bug #1557302 opened: juju and jujud have grown to >100MB  <juju-core:Triaged> <https://launchpad.net/bugs/1557302>
<wallyworld> natefinch: it may be time to pull CharmStore into core and look at what's needed moving forward - maybe we only need the extra header attrs it allows us to add when querying revision information; maybe we use csclient directly for most everything else?
<wallyworld> maybe we extract the caching stuff and use it where needed
<wallyworld> separate to the use of the client itself
<natefinch> wallyworld: I actually like the wrapper, so the command code doesn't have raw calls to client.Put(some_Url) in it... so those are all consolidated in the helper library
<natefinch> wallyworld: I'm fine if csclient is that wrapper... I just don't want to ever see client.Put("/"+id.Path()+"/publish", val) outside one API wrapper library...
<wallyworld> np, agree with not using the low level stuff directly
<wallyworld> so i guess core is the only place using the CharmStore wrapper
<wallyworld> hence it's going to be up to us to maintain it. it would be easier to do that if it were in core
<natefinch> it seems wise to have it outside of core, in case other tools want to use the same cache... however, maybe we should be focusing on putting non-caching commands right on csclient
<wallyworld> natefinch: depends on the semantics i think. do you have a list of the commands we need to add that core needs?
<natefinch> wallyworld: ListResource, GetResource, UploadResource... and I made a Publish method so the CLI code wouldn't have to hardcode the url in it.
<wallyworld> natefinch: so comparing what's there on csclient.Client for charms - we have UploadCharm() and UploadBundle(). I guess we use Meta() for a list type operation? and Get(path) for getting charms?
<wallyworld> actually GetArchive()
<wallyworld> is used to get a charm i think
<wallyworld> either a charm or bundle
<natefinch> yeah
<wallyworld> natefinch: so adding these resource methods to client seems ok doesn't it
<natefinch> wallyworld: yep
<wallyworld> ok, sgtm
<natefinch> wallyworld: ok, cool
<wallyworld> thanks for thinking it through
<natefinch> wallyworld: np
<natefinch> ok, I'm done.  g'night all
<menn0> axw: ping?
<axw> menn0: pong
<menn0> axw: would you mind taking a looking at this one? http://reviews.vapour.ws/r/4169/
<menn0> axw: it's trivial
<axw> sure
<menn0> axw: and should help get a bless for the MADE-state-workers branch
<axw> menn0: LGTM
<menn0> axw: thanks
<frankban> mgz: ping, are those errors spurious? http://reports.vapour.ws/releases/3751
<frankban> mgz: (and morning)
<mup> Bug #1557380 opened: help text for juju add-ssh-key needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1557380>
<frobware> dimitern: ping, 1:1? Can skip if you're busy and sync later. I'm currently merging master again.
<dimitern> frobware, hey, sorry got carried away
<dimitern> frobware, but I'm close to getting multi-nic working
<frobware> dimitern: then I will NOT disturb you. \o/ :)
<dimitern> :) cheers
<TheMue> morning o/
<voidspace> frobware: actually, I have coffee and some motivation - will crack on now and take a break later
<voidspace> frobware: need to make a decision how to have the gomaasapi test server support 1.x *and* 2.x
<voidspace> frobware: obviously the urls have the api version in them and the test server does have a version field
<voidspace> frobware: but in terms of code structure
<voidspace> frobware: branching inside every method isn't ideal but maybe the path of least resistance
<voidspace> frobware: changing the server to an interface and having two implementations would be painful as the server has lots of fields and they would all need accessor methods
<frobware> voidspace: at first blush having a branch seems easier, at least initially. You can always revise a little as you see how things pan out.
<voidspace> frobware: heh, so the first problem is the change to the capabilities output
<voidspace> frobware: but calling "version" from the maas command line is broken (I've filed a bug)
<voidspace> frobware: so having to write gomaasapi code to see the output of the version api call (that gets us the capabilities)
<frobware> voidspace: sweet
<voidspace> frobware: dimitern: so, if we're dropping support for maas < 1.9 there are two choices
<voidspace> frobware: dimitern: detect the unsupported version early and error out
<voidspace> frobware: dimitern: or just assume we're on a supported version
<voidspace> frobware: dimitern: I assume the first option is preferable
<dimitern> voidspace, I think we should report an error
<voidspace> dimitern: yep
<frobware> voidspace: as 1.8 is around, drop out early.
<voidspace> yeah, cool
<voidspace> dimitern: so, supportsDevices, supportsStaticIPs and supportsNetworkDeploymentUbuntu can all go away
<voidspace> and NewEnviron can just do a version check
<voidspace> frobware: it looks like it might be cleaner to do that first
<dimitern> voidspace, yeah - either NewEnviron or even at lower level - getMAASClient
<voidspace> dimitern: that method just returns the client, which is created in SetConfig, which is called from NewEnviron
<voidspace> dimitern: and NewEnviron already does the capabilities checks
<voidspace> dimitern: so replacing the capabilities checks with a version check in NewEnviron seems sensible
<dimitern> voidspace, sounds good, yeah
<voidspace> dimitern: the client does need to change as we need to switch between 1.0 api and 2.0 api depending on maas version
<voidspace> dimitern: hah, however - you need to know the maas api version in order to be able to ask maas which version it is...
<voidspace> dimitern: if you call /api/1.0/version/ on a 2.0 server you get "null" back
<voidspace> dimitern: shall I report a bug for that do you think? they ought to have an api version independent version endpoint
<dimitern> voidspace, yes please, that should be a bug - i.e. maas should report a sensible error for /api/1.0/ requests
<mup> Bug #1557470 opened: juju reads from wrong streams.canonical.com location <simplestreams> <juju-core:Triaged> <juju-core 1.25:New> <https://launchpad.net/bugs/1557470>
<voidspace> dimitern: frobware: old code removed, 50 additions and 653 deletions
<voidspace> dimitern: frobware: it compiles, but tests need checking & fixing
<voidspace> satisfying though
<voidspace> https://github.com/juju/juju/compare/maas-spaces2...voidspace:drop-maas-1.8?expand=1
<frobware> voidspace: nice
<mup> Bug #1557470 changed: juju reads from wrong streams.canonical.com location <simplestreams> <juju-core:Triaged> <juju-core 1.25:New> <https://launchpad.net/bugs/1557470>
<dimitern> voidspace, nice, but that's going into a feature branch first, right?
<voidspace> dimitern: why not land it?
<voidspace> dimitern: long lived feature branches are a pain
<dimitern> voidspace, well, there's a lot to clean up, but doing now will make my life a little hell :/
<dimitern> with all the changes I'm making for multi-nic
<voidspace> dimitern: so you'd rather make my life hell merging yours instead...
<dimitern> voidspace, nope, I'd rather drop the legacy code later and just disable it for now ?
<voidspace> if your changes conflict there's hell for someone...
<dimitern> voidspace, I will happily deal with the conflicts, just if possible I'd like to do that next week
<voidspace> dimitern: fine
<voidspace> I don't mind dealing with them either :-)
<voidspace> someone has to do them
<voidspace> it should just be deleting more code
<voidspace> dimitern: I really loathe long lived feature branches though and it has become our standard way of working
<voidspace> down to 38 test failures :-/
<voidspace> 108 were panicing a minute ago, so that is progress
<mup> Bug #1557470 opened: juju reads from wrong streams.canonical.com location <simplestreams> <juju-core:Triaged> <juju-core 1.25:New> <https://launchpad.net/bugs/1557470>
<frobware> dimitern: how close are you to maas-spaces2 tip? (on your feature branch)
<frobware> dimitern: seem to have lots of unit test failures today
<frobware> dimitern: and the last CI run was way worse than I expected
<dimitern> frobware, not so close anymore
<dimitern> frobware, 5 days behind
<frobware> dimitern: which coincides with when I started merging master. hmm.
<frobware> dimitern, voidspace: I think we'll need to set 'juju_bridge_all_interfaces' to 0 in maas-spaces2. We made this 0 on master to make tests pass, but they will of course continue to fail on maas-spaces2. Can I suggest that if you need multiple bridges you set this explicitly in your feature branch until we have everything in place.
<dimitern> frobware, sounds good
<frobware> dimitern, voidspace: http://reviews.vapour.ws/r/4172/
<frobware> hopefully as easy as it gets
<dimitern> frobware, LGTM
<perrito666> dimitern: would you please merge my goose package? I lack super powers
<dimitern> perrito666, doesn't $$merge$$ work there?
<mup> Bug #1556483 changed: ERROR invalid config: no addresses match <juju-core:Invalid> <https://launchpad.net/bugs/1556483>
<perrito666> dimitern: iirc, last time I tried to merge the bot did not respond to me
<dimitern> perrito666, now it did - but the tests failed
 * perrito666 goes check what happened
<frobware> dimitern, voidspace: master merge, http://reviews.vapour.ws/r/4173/
<dimitern> frobware, looks good
<frobware> dimitern: thanks for the review
<frobware> dimitern: so, devices... next :)
<dimitern> frobware, I have lxc working locally
<dimitern> well a bit unstable and only with lxc-clone: false, but still
<frobware> dimitern: sounds interesting; want to sync before I go grab some lunch?
<dimitern> one more live test and I'm pushing what I have to my PoC branch
<dimitern> frobware, sure
<dimitern> frobware, standup HO
<frobware> omw
<frobware> dimitern: well, that's weird. the master merge was quite flaky running the unit tests locally, but passed first time on the CI machines.
<dimitern> frobware, I guess CI is rock solid then :)
<frobware> dimitern: brings new meaning to "Ship It!"
<dimitern> :D
<mup> Bug #1557540 opened: Missing help for payloads <juju-core:New> <https://launchpad.net/bugs/1557540>
<perrito666> dimitern: would you give me another little push? :) I still lack the credentials to command merge bot (the error was because the patch was against liberty but proposed against v1, I fixed that)
<mup> Bug #1557540 changed: Missing help for payloads <juju-core:New> <https://launchpad.net/bugs/1557540>
<dimitern> perrito666, sure - done
<perrito666> tx
<mup> Bug #1557540 opened: Missing help for payloads <juju-core:New> <https://launchpad.net/bugs/1557540>
<katco> natefinch: ericsnow: hey
<ericsnow> katco: yo
<ericsnow> katco: we should get together and point those cards in our iteration backlog
<katco> ericsnow: yes, unfortunately i cannot talk still
<ericsnow> katco: :(
<katco> ericsnow: natefinch: also, unfortunately mark did not like push-resources. he wants it to be "attach-resources"
<dimitern> frobware, pushed the changes so far - https://github.com/juju/juju/compare/maas-spaces2...dimitern:maas-spaces2-multi-nic-lxc
<dimitern> frobware, lxc now works
<ericsnow> katco: k
<dimitern> frobware, tried with 2 on the same machine, now will try a few more
<frobware> dimitern: will take a look in a bit, just looking at the brokenness of maas-spaces2
<frobware> dimitern: but really cool though! \o/
<dimitern> frobware, yeah :) \o/
<dimitern> frobware, there's a 2m delay at boot due to cloud-init-nonet :/ but otherwise it seems to work once only the primary NIC has a gateway set in both /e/n/i and lxc.conf
<frobware> dimitern: very nice. congrats!
<dimitern> frobware, thanks :) we all deserve a pat on the back
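The single-gateway layout dimitern describes could look roughly like this inside the container's /etc/network/interfaces (addresses borrowed from the log above; the gateway address and stanza details are illustrative assumptions, not taken from the actual branch):

```
# Primary NIC: the only stanza that carries a gateway.
auto eth0
iface eth0 inet static
    address 10.17.20.215/24
    gateway 10.17.20.1    # hypothetical gateway address

# Secondary NICs: static and gateway-less, so routing stays unambiguous.
auto eth1
iface eth1 inet static
    address 192.168.10.102/24
```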
<natefinch> katco: lol.. well, at least command name changes are trivial
<katco> natefinch: yeah
<dimitern> frobware, I'll start splitting off chunks of that branch and propose them against maas-spaces2
<frobware> dimitern: are you considering just the state changes, or more?
<ericsnow> katco: are you tackling merging master into feature-resources? (bug #1557264)
<mup> Bug #1557264: feature-resources is too stale to test <block-ci-testing> <ci> <juju-core:Incomplete> <juju-core feature-resources:Triaged> <https://launchpad.net/bugs/1557264>
<katco> ericsnow: i can definitely take care of that
<dimitern> frobware, state changes definitely, and some stuff in networkingcommon
<ericsnow> katco: suh-weet
<dimitern> frobware, but the other bits need better testing
<katco> rick_h__: so i think the last loose-end on resources is the progress indicator for fetching resources
<rick_h__> katco: ok, did you look at the current output locations and see if we can fit anything in place on the existing tabular output?
<ericsnow> katco, natefinch: FYI, http://reviews.vapour.ws/r/4160/
<katco> rick_h__: we talked about maybe having it under "juju list-resources <service> --details"
<katco> rick_h__: probably not in an existing column, but maybe a new section
<rick_h__> katco: do you have sample output we can peek at please?
<katco> rick_h__: let me cook one up
<rick_h__> katco: ty
<frobware> alexisb: see progress on multi-NIC from dimitern ^^
<katco> rick_h__: possibly this: http://pastebin.ubuntu.com/15392044/
<frobware> voidspace: this looks so much better now "has interface addresses: [local-cloud:10.17.20.215@default(id:0) local-cloud:192.168.10.102@internal(id:1) local-cloud:192.168.12.102@db(id:3) local-cloud:192.168.11.102@public(id:2)" - thanks!
<alexisb> dimitern, very very cool!
<dimitern> alexisb, cheers :)
<rick_h__> katco: looking
<rick_h__> katco: can we also track upload progress on local ones?
<katco> rick_h__: isn't that kind of strange since the CLI will block when uploading?
<katco> rick_h__: you'd have to flip to a new terminal and do a watch
<natefinch> it's a lot easier than tracking progress on unit downloads
<natefinch> katco: I assume he just means have the CLI output progress indications while it blocks
<rick_h__> katco: thoughts on http://paste.ubuntu.com/15392141/
<rick_h__> natefinch: no, I mean if I were in another terminal or another user I'd see that status
<katco> rick_h__: ah i didn't think about the multi-user implications
<katco> rick_h__: what happens when we do "deploy --resources"? does that status show up as well?
<natefinch> rick_h__: personally, I would find it very weird for other people to be notified when you're uploading a file.  systems don't generally do that... it's just either not there or it's there
<frankban> rick_h__: urulama, hatch, perrito666: for instance, this is the current unit info from master's megawatcher: http://pastebin.com/U51reeY4 (just FYI)
<rick_h__> natefinch: but that's what I'm asking, if you do it when getting a file from a remote location, why not a local location?
<rick_h__> natefinch: it's kind of the same thing to a second user "this thing is updating and in progress..."
<frankban> rick_h__: urulama, hatch, perrito666: so I guess we'll need to fetch what we need from AgentStatus and WorkloadStatus
<katco> rick_h__: you would still see the download progress for a local resource... the upload is to the controller, not individual units
<frankban> perrito666: is AgentStatus going to be renamed there as well?
<katco> rick_h__: so you wouldn't see an upload progress, but you would still see a download progress
<rick_h__> katco: ok, so download would be from the controller down, fair enough
<perrito666> frankban: gimme a moment and ill be with you
<rick_h__> katco: ok, what are folk's thoughts on the in-line?
<natefinch> rick_h__: there's two different transfers, from charmstore/local to controller, and from controller to unit.  I had only been thinking of updates on the latter.
<katco> rick_h__: the "REVISION" heading seems kind of strange to me now
<rick_h__> katco: yea, it says it's downloading rev9, we know that ahead of time right?
<katco> rick_h__: as in "what revision? downloading"
<natefinch> ditto... it's already overloaded
<katco> rick_h__: yes, through the same mechanism we'll have to provide progress
<katco> rick_h__: we first know about the metadata: rev, size, fingerprint, and then we download
<rick_h__> katco: right, ok so it's not completely out of place, and why I made sure to keep the revision info in the fetching message
<natefinch> rick_h__: the thing is, then you lose information about what that unit is actually running right now, since it's not running 9 right now, it's running whatever it downloaded previously
<rick_h__> natefinch: true, but it's for some download window of time. Is it significant?
<rick_h__> natefinch: and I'd expect the yaml/json status to be full details
<natefinch> rick_h__: is the progress indicator significant, by that measure? :)
<rick_h__> natefinch: e.g. different fields
<natefinch> rick_h__: also keep in mind it'll be this for downloading uploaded resources: http://paste.ubuntu.com/15392199/
<rick_h__> natefinch: rgr, /me goes into call will slow down replies
<natefinch> rick_h__: personally, I find it really hard to read having all that extra data embedded in that column.  I'd much rather have a separate section like katco proposed
<perrito666> so first of all, frankban hatch urulama https://github.com/juju/juju/blob/juju-1.26-alpha1/state/status_model.go#L272
<perrito666> that was the old translate function
<perrito666> frankban: urulama, hatch  second, the change of agent-status -> juju-status is meant to be only  a presentation thing, just user facing, internally we still hold the names that make sense for the internal juju terminology, so my first intuition would be to say that it will remain agent-status unless someone disagrees
<hatch> perrito666: I don't really understand the 'juju-status' key in UnitStatus but ok :)
<frankban> perrito666: cool, I agree
<hatch> the logic doesn't 'look too complex so that's good :D
<natefinch> rogpeppe1: I see a lot of calls like client.Put("/"+id.Path()+"/meta/perm/read", []string{params.Everyone, id.User}) ... how come you guys aren't wrapping that stuff in helper methods on charmrepo/csclient.Client?
<frankban> perrito666: but FYI in the current master mega-watcher units have AgentStatus and WorkloadStatus, machines have JujuStatus and MachineStatus :-)
<rogpeppe1> natefinch: we could do, yeah
<frankban> perrito666: so is this going  to stay like that?
<perrito666> hatch: so, the differentiation between juju-status  and workload-status is because the former shows information about the juju agent and the later about the charm
<rogpeppe1> natefinch: feel free to add PutId, SetPerm etc
<perrito666> frankban: aghh, then expect it to change to juju-status so it will be consistent there
<frankban> perrito666: for beta3? and what about Workload vs Machine?
<rogpeppe1> natefinch: it's always a trade-off between adding useful methods and adding a helper method for every single endpoint in the store
<rogpeppe1> natefinch: i'm not sure the maintenance burden of the latter is worth it
<perrito666> frankban: workload and machine remain the same, agent changes to juju, hopefully in a couple of hours tops
<natefinch> rogpeppe1: I would call a helper for every single endpoint in the store useful :)  As someone writing a CLI client, I really don't ever want to have to know what HTTP method I should be using.. someone should have wrapped that stuff away in a helper library, IMO.
<rogpeppe1> natefinch: especially given the way that so much of this stuff can be combined in arbitrary ways with the bulk endpoints
<frankban> perrito666: ok so units will have JujuStatus and WorkloadStatus, machines will have JujuStatus and MachineStatus?
<rogpeppe1> natefinch: the way i look at it is the HTTP API really is the API - the client library is a thin (and not opaque) layer above it
<perrito666> correct
<frankban> perrito666: ok, do you also have the conversions functions used for StatusInfo and StatusData?
<rick_h__> natefinch: my issue is that the other section isn't meaningful without trying to match unit/resource name in the first section so I find it hard to process.
<perrito666> frankban: passthrough unless there is error in workload status, there we would show the error message in info iirc
<perrito666> but I dont recall where it was
<natefinch> rogpeppe1: I see what you're saying... but I think the client code is a lot easier to read and understand when I see client.Publish(id, channels, resources)  rather than client.Put("/"+id.Path()+"/publish", params.PublishRequest{Channels: channels, Resources:resources})
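The helper natefinch is arguing for might look something like this. This is a sketch against a stand-in client type; the real charmrepo/csclient.Client API, its Put signature, and any SetPerm helper are assumptions here, not the actual library:

```go
package main

import "fmt"

// Client is a minimal stand-in for charmrepo/csclient.Client; just enough
// to illustrate wrapping a raw endpoint call in a named helper.
type Client struct{}

// Put mimics the raw call quoted above: an endpoint path plus a request body.
func (c *Client) Put(path string, body interface{}) error {
	fmt.Printf("PUT %s %v\n", path, body)
	return nil
}

// permPath builds the meta/perm endpoint path for a charm id path.
func permPath(idPath, perm string) string {
	return "/" + idPath + "/meta/perm/" + perm
}

// SetPerm is the kind of hypothetical helper under discussion: it hides the
// endpoint path and HTTP verb behind a method name the caller can read.
func (c *Client) SetPerm(idPath, perm string, users []string) error {
	return c.Put(permPath(idPath, perm), users)
}

func main() {
	c := &Client{}
	_ = c.SetPerm("~user/trusty/wordpress-5", "read", []string{"everyone"})
}
```

The trade-off rogpeppe raises is real: with 60+ endpoints and bulk combinations, a helper per endpoint is a maintenance cost, but call sites do read better.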
<rogpeppe1> natefinch: as i said, it's a trade-off
<rogpeppe1> natefinch: there are hundreds of endpoints
<rogpeppe1> natefinch: well... lots anyway
<rogpeppe1> natefinch: and thousands of potential combinations
<frankban> perrito666: so StatusInfo was the error message and StatusData is current WorkloadStatus?
<rogpeppe1> natefinch: ok, so 60+ endpoints
<rogpeppe1> natefinch: i think that having a reasonable grasp of the HTTP API saves quite a bit of work overall
<frankban> perrito666: so I think old StatusInfo was WorkloadStatus.Message
<perrito666> frankban: the exact name was StatusInfo ?
<natefinch> rogpeppe1: maybe for you, not so much for me ;)
<frankban> perrito666: and old StatusData was AgentStatus.Data
<perrito666> frankban: so, now I remember
<perrito666> StatusData was AgentStatus.Data
<frankban> perrito666: I think the old logic is in https://github.com/juju/juju/blob/juju-1.26-alpha1/state/allwatcher.go#L294
<perrito666> StatusInfo was AgentStatus.Message
<rogpeppe1> natefinch: come on, that call isn't hugely much more verbose than client.SetPerm(id, channels, resources)
<perrito666> frankban: yes, that was the conversion done, when error we did something different, otherwise we would just use the agentStatus fields
<frankban> perrito666: I see, let me try to formalize that
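A rough formalization of the old-to-new mapping perrito666 and frankban reconstruct above: old StatusInfo came from AgentStatus.Message and old StatusData from AgentStatus.Data, except a workload error took precedence. Field names here approximate juju's, and the sketch is a best-effort reading of the conversation, not the actual allwatcher code:

```go
package main

import "fmt"

// StatusInfo approximates the status payload shape discussed above.
type StatusInfo struct {
	Current string
	Message string
	Data    map[string]interface{}
}

// legacyStatus reconstructs the old single-status view: agent status passes
// through, unless the workload is in error, in which case the workload's
// error message and data are shown instead.
func legacyStatus(agent, workload StatusInfo) (info string, data map[string]interface{}) {
	if workload.Current == "error" {
		return workload.Message, workload.Data
	}
	return agent.Message, agent.Data
}

func main() {
	info, _ := legacyStatus(
		StatusInfo{Current: "idle", Message: "agent is ok"},
		StatusInfo{Current: "error", Message: "hook failed: install"},
	)
	fmt.Println(info) // the workload error takes precedence
}
```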
<rick_h__> natefinch: katco http://paste.ubuntu.com/15392403/ please
<rick_h__> natefinch: katco I think that's the best of both worlds as close as I can think up atm
<mup> Bug # changed: 1553298, 1553299, 1553303, 1553308
<ericsnow> katco, natefinch, rick_h__: perhaps we can go with something in the middle:  http://pastebin.ubuntu.com/15392433/
<rick_h__> ericsnow: the thing is the extra column that comes/goes I think is bad for cli imo. Maybe I'm wrong on that front.
<katco> rick_h__: i agree
<rick_h__> ericsnow: and I do think that the YAML has to have the data spelled out as it's machine data
<ericsnow> rick_h__: I'd agree if we expected folks to be parsing the tabular output
<rick_h__> ericsnow: right, but some folks will do simple "status | grep | xxx"
<rick_h__> ericsnow: vs parsing the yaml/json
<katco> ericsnow: it's some ux principle i can't remember: don't have elements of the ui appear/disappear based on state
<rick_h__> katco: +1, especially if it could be fast moving. "What was that blip? oh, wtf, nothing there now"
<rick_h__> as soon as download completes/etc
<perrito666> frankban: i'm starting with the agent->juju change now, will ping you when it's done
<dimitern> frobware, please ping me before you're about to test the PoC branch btw
<frobware> dimitern: stuck in maas-spaces2 land. I cannot bootstrap on there right now... :(
<dimitern> frobware, I found a few more things to fix and will push some updates as soon as the current live test passes
<dimitern> frobware, oh :/
<dimitern> frobware, xenial-related?
<frobware> dimitern: yeah, going back in history. don't think so.
<frankban> perrito666: ty
<natefinch> rick_h__: to be fair, any progress indicator we make is going to blip away once the download is done :)
<dimitern> frobware, that's before or after the last master merge?
<rick_h__> natefinch: true enough
<frobware> dimitern: gone back to bc2914e5af162baa0f6ec1ffed69859fcc3e7633 (Fri Mar 11 12:00:01 2016 +0000)
<ericsnow> rick_h__: here's an iteration on your last one that I find a little less busy:  http://pastebin.ubuntu.com/15392511/
<rick_h__> ericsnow: my only concern there was "is it safe to assume it's always fetching the expected"?
<voidspace> frobware: cool
<ericsnow> rick_h__: my gut says yes :)
<ericsnow> rick_h__: that would be a pretty tight race to be otherwise
<natefinch> ericsnow, rick_h__: if the problem with a separate table is mapping between the two, why not an unobtrusive indicator in the top table? http://pastebin.ubuntu.com/15392537/
<ericsnow> rick_h__: to be safe, a slight tweak:  (fetching 9: 96%)
<rick_h__> ericsnow: ok, if we have verified that it's a safe assumption including in the world where we can pass arbitrary version numbers then I'm +1
<rick_h__> ericsnow: I think you're right, but I didn't feel comfy assuming it so I went with the 100% case, but yea...I think you're right
<natefinch> I think the separate table will make it a *lot* easier to understand multiple downloads at once
<ericsnow> rick_h__: we should double-check though :)
<rick_h__> ericsnow: ok, if you double check and feel safe then I'm +1 yours and if not, then I'm +1 mine
<katco> ericsnow: well, expected should always list what we want to download. it may not be the latest
<rick_h__> natefinch: the issue there is when you get more than one going on, especially across units/etc
<katco> ericsnow: so i think that's an accurate statement
<natefinch> rick_h__: right, I think this is a lot easier to understand: http://pastebin.ubuntu.com/15392550/
<natefinch> rick_h__: although eric's appending the download info is ok too
<rick_h__> natefinch: understand, but disagree, especially if we can stick it on the end like ericsnow's
<natefinch> rick_h__: I really hate munging multiple pieces of data in the same column, but I think I'm overruled on that :)
<katco> rick_h__: ericsnow: natefinch: so we'll go with http://pastebin.ubuntu.com/15392511/ ?
<ericsnow> katco: with or without the parens around the REVISION value?
<rick_h__> katco: the first one in there, without parens around revision value
<katco> +1
<ericsnow> rick_h__: coolio
<katco> ta rick_h__
<rick_h__> ty ericsnow katco natefinch, appreciate the dialog/discussion. Always better hashed out :)
<ericsnow> rick_h__: no, thank you!
<frobware> dimitern: I created a new node, enlisted, commissioned and bootstrapped without problems.
<dimitern> frobware, sweet! so problem solved?
<frobware> dimitern: nope, I take that back (wrong window). :)
<katco> ericsnow: so you need work right now?
<ericsnow> katco: yep, though right now I'm working on getting those two bug fixes landed
<katco> ericsnow: ok, eta on that?
<mup> Bug #1553320 changed: TestBootstrapGUIErrorInvalidVersion os.ProcessState exit status 2 <centos> <ci> <regression> <test-failure> <juju-core:Invalid> <juju-core embedded-gui:Fix Released by frankban> <https://launchpad.net/bugs/1553320>
<mup> Bug #1553322 changed: TestBootstrapGUIErrorUnexpectedArchive os.ProcessState exit status 2 on centos <centos> <ci> <regression> <test-failure> <juju-core:Invalid> <juju-core embedded-gui:Fix Released by frankban> <https://launchpad.net/bugs/1553322>
<ericsnow> katco: should be merging the first one in the next little while
<katco> ericsnow: cool. next up should be the revisions of resources card... we should get together and point that and the new progress indicator work
<katco> ericsnow: natefinch: let's plan on doing that when ericsnow is done landing the bug cards
<ericsnow> katco: sounds good; I added tasks to that card last night in preparation :)
<katco> ericsnow: tyvm
<natefinch> rogpeppe1: Is publish not returning a value anymore?
<rogpeppe1> natefinch: that's right
<rogpeppe1> natefinch: i *think* this is documented in the API docs
<rogpeppe1> natefinch: if not, they need updating
<natefinch> rogpeppe1: ahh, ok, because all my tests are failing now
<rogpeppe1> natefinch: ah
<rogpeppe1> natefinch: we reckoned it probably wasn't telling you much useful, but if you can think of a decent reason why it should return something, we could change it
<natefinch> rogpeppe1: see, if a wrapper library had updated, my code wouldn't compile right now ;)
<rogpeppe1> natefinch: :)
<rogpeppe1> natefinch: luckily your tests caught it
<rogpeppe1> natefinch: in our later APIs, we auto-generate the client
<natefinch> rogpeppe1: my tests printed out 150 lines of logging and showed me a json error
<natefinch> rogpeppe1: autogenerating a client sounds like a wonderful solution :)
<rogpeppe1> natefinch: you don't get the nice bulk API thing though
<rogpeppe1> natefinch: roll on HTTP2 :)
<natefinch> rogpeppe1: so, why aren't we returning the ID of the published charm anymore? wouldn't that be useful?
<rogpeppe1> natefinch: because it's not a newly created id
<rogpeppe1> natefinch: so it's just like using any id in any request
<rogpeppe1> natefinch: for example, do you think a PUT to meta/extra-info should return the charm id
<rogpeppe1> ?
<natefinch> rogpeppe1: oh, I guess I was still thinking that publish creates a new revision, but it doesn't, does it?  It just moves an existing revision to a different channel
<rogpeppe1> natefinch: yeah. well, copies rather than moves.
<natefinch> rogpeppe1: right, sorry, creates a pointer in that channel to that revision :)
<rogpeppe1> natefinch: exactly
<rogpeppe1> natefinch: call me a pedant :)
<natefinch> rogpeppe1: it's good to be exact, so that there's no confusion
<rogpeppe1> natefinch: ok, don't call me a pedant then. see if i care.
<natefinch> rogpeppe1: lol
<natefinch> rogpeppe1: I assume I can delete params.PublishResponse, then? :)
<rogpeppe1> natefinch: indeed :)
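A toy model of the publish semantics rogpeppe describes: no new revision is created, the channel just gains a pointer to an existing one, which is why there is no newly created id to return. All names here are illustrative, not the charmstore's actual types:

```go
package main

import "fmt"

// store maps each channel name to the revision it currently points at.
type store struct {
	channels map[string]int
}

// publish points channel at rev. The revision itself is untouched:
// "copies rather than moves" - the old channel mappings and the revision
// both continue to exist.
func (s *store) publish(channel string, rev int) {
	s.channels[channel] = rev
}

func main() {
	s := &store{channels: map[string]int{"stable": 8}}
	s.publish("stable", 9)
	fmt.Println(s.channels["stable"])
}
```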
<voidspace> Up to an 1100-line diff and down to 28 test failures.
<voidspace> Feels like hand to hand combat getting these to pass though.
<frobware> dimitern: you still about?
<mup> Bug #1557633 opened: status has duplicate entries for relationships <juju-core:New> <https://launchpad.net/bugs/1557633>
<dimitern> frobware, yeah
<perrito666> could anyone kindly share their /etc/os-release with me?
<cmars> perrito666, https://paste.ubuntu.com/15393493/
<perrito666> cmars: tx
<perrito666> turns out installing kdeneon will break it :p
<perrito666> and that breaks the tests heavily
<frobware> dimitern: have some time to HO on sapphire standup?
<cmars> perrito666, if you have a moment, could i get a review, http://reviews.vapour.ws/r/4177/ ?
<perrito666> cmars: reviewing
<cmars> perrito666, thanks!
<perrito666> actually please wait 5 mins for my eye drops to settle so I stop seeing fuzzy :p
<dimitern> frobware, ok
<ericsnow> katco, natefinch: now's probably a good time to point those cards
<katco> ericsnow: ok moonstone? although i can't speak
<natefinch> ok
<natefinch> "ok, just speak if you disagree katherine... no? ok, great!"
<mup> Bug #1557679 opened: help text for juju switch needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1557679>
<perrito666> someone could make a quick review? http://reviews.vapour.ws/r/4179/
<jam> cherylj: ping. Were you the one that added status information for pending instances?
<perrito666> jam: I helped, can I help you?
<jam> perrito666: so I'm wanting to give progress during Bootstrap, and I was thinking that maybe we should have a progress func passed into StartInstance
<jam> there is no API server to poll during bootstrap
<jam> and providers shouldn't be directly allowed to know about the database
<perrito666> well iirc, we are passing a statuscallback function, so you could use that?
<jam> perrito666: there it is. I just didn't see it, buried in the params struct.
<perrito666> :)
<perrito666> bonus track, you get the same for containers
<jam> perrito666: yeah. what do you think about tying that StatusCallback into Bootstrap's context so you get the progress messages from "juju bootstrap" ?
<perrito666> I think its an awesome idea
<perrito666> as long as I am not the one implementing it :p
<perrito666> no, honestly, seems like a super short and easy way to get more visibility
<jam> perrito666: heh.
<jam> so I'm trying to understand what the data map is for
<jam> It seems the only things that actually ever call StatusCallback are containers.
<jam> lxc and kvm seem to use it
<jam> but that seems to be it
<jam> and they don't do anything with data
<perrito666> nah, I am pretty sure startinstance calls it too, every provider reports that status
<perrito666> jam: data structure is just to comply with StatusInfo
<jam> perrito666: grep StatusCallback doesn't seem to agree
<jam> perrito666: as in, the only things that reference ".StatusCallback" are the lxc and kvm brokers
<jam> other things get it passed in, but do nothing with it.
<jam> it looks like the shape of the callback is just to match Machine.SetInstanceStatus which is ok
<jam> but none of the other providers are giving feedback during StartInstance that I can see.
<perrito666> maas, is (should be)
<perrito666> lemme do a quick dig
<perrito666> jam: you are right, how could I miss that
<perrito666> mm something does not add up here
<frobware> dimitern, voidspace, mgz: fix maas-spaces2 panic: http://reviews.vapour.ws/r/4180/
<perrito666> jam: that should be implemented for maas only, it escapes me a bit why it isn't now
<perrito666> jam: I would like cherylj input here in case I am missing something
<frobware> cherylj: ^^
<mgz> frobware: looks good. did you run the maas provider tests?
<perrito666> jam: in any case, fixing that should be as trivial as adding a few calls to the callback to startinstance
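The callback shape under discussion might be sketched like this. The signature is approximated from the conversation (a status value, a message, and the data map needed "to comply with StatusInfo"); juju's real StatusCallback field and StartInstance flow may differ:

```go
package main

import "fmt"

// StatusCallback approximates the progress hook passed to StartInstance via
// the params struct: a status, a human-readable message, optional extra data.
type StatusCallback func(status, info string, data map[string]interface{}) error

// startInstance is a hypothetical provider step that reports progress
// through the callback - the "few calls" perrito666 suggests adding -
// instead of touching state or the API server directly.
func startInstance(report StatusCallback) error {
	for _, step := range []string{"allocating machine", "starting instance", "waiting for address"} {
		if err := report("allocating", step, nil); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// During bootstrap this callback could be wired to the bootstrap
	// context, surfacing progress on the "juju bootstrap" console.
	_ = startInstance(func(st, info string, data map[string]interface{}) error {
		fmt.Printf("%s: %s\n", st, info)
		return nil
	})
}
```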
<frobware> mgz: yep. the live one escaped me when I did the original merge, hence this panic.
<frobware> mgz: I've run them post that commit
<mgz> and the instance_test.go errors from that got fixed earlier
<frobware> mgz: as in the same fix was applied previously?
<mgz> as in, that borked rev has bad construction of maasInstance in tests but that's already fixed in maas-spaces2
<mgz> frobware: shipit
<frobware> mgz: the diffs between original and maas-spaces2: http://pastebin.ubuntu.com/15394092/
<frobware> mgz: I don't recall seeing a bad construction in the original
<mgz> frobware: yeah, that's fine
<frobware> mgz: ok, thanks. going to merge and let's see if things are better throughout.
 * natefinch is in dependendency hell
<natefinch> with an extra end just in case
<katco> ericsnow: natefinch: fyi they're digging  up the sidewalk again. if i drop out, that's why
<ericsnow> katco: k
<natefinch> katco: k
<kwmonroe> could someone help me configure azure correctly for 2.0-beta2?  i've followed this https://jujucharms.com/docs/devel/config-azure and created what i think are valid creds in my $JUJU_DATA/credentials.yaml (app-id, app-password, sub-id, tenant-id), but bootstrapping gets me this:  http://paste.ubuntu.com/15394344/
<kwmonroe> i don't have anything in clouds.yaml that describes azure at all.  do i need that with some config for the virtual network?
<rick_h__> kwmonroe: sure thing, the release notes leads you down a long path for the username/password version
<rick_h__> kwmonroe: are you using the user/pass or the key file?
<perrito666> kwmonroe: that might be a bug you just found, perhaps that virtual network already exists?
<kwmonroe> rick_h__: i'm auth-type: userpass
<kwmonroe> perrito666: i'll check if i can navigate the azure portal to look for pre-existing virt networks.
<kwmonroe> rick_h__: the release notes said this "To use the new Azure support, you need the following configuration in environments.yaml", but i didn't find an env.yaml to modify, so i wasn't sure if/where those opts should go.
<kwmonroe> clouds.yaml?
<rick_h__> kwmonroe: sorry, there's a section on the new arm provider in there and this needs to go into credentials.yaml
<rick_h__> kwmonroe: clouds.yaml is just the api endpoints/regions/cloud data that you can reuse with many credentials/etc
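For reference, the userpass credential kwmonroe describes would sit in $JUJU_DATA/credentials.yaml roughly like this. The key names are a best guess from the fields mentioned in the conversation, not verified against the beta2 release notes:

```yaml
credentials:
  azure:
    my-azure-creds:
      auth-type: userpass
      application-id: <app-id>
      application-password: <app-password>
      subscription-id: <sub-id>
      tenant-id: <tenant-id>
```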
<kwmonroe> ahhh, roger that
<rick_h__> kwmonroe: sent you a PM with step by step
<rick_h__> kwmonroe: I got those steps from the release notes on the new ARM provider in 2.0
<kwmonroe> gracias rick_h__!
<natefinch> If I set dependencies so charmstore-client tests pass, then charmrepo tests fail (which charmstore-client relies on).
<katco> ericsnow: merge is up: http://reviews.vapour.ws/r/4181/
<ericsnow> katco: thanks
<katco> ericsnow: pay special attention to cmd/juju/charmcmd/store.go
<mup> Bug #1557714 opened: "juju status-history" doesn't work correctly <cli> <juju-core:Triaged> <https://launchpad.net/bugs/1557714>
<voidspace> frobware: ah, oops
<voidspace> frobware: LGTM
<voidspace> I'm EOD
<perrito666> ok, does anyone have experience with autoload-credentials?
 * perrito666 eyes at rick_h__ 
<rick_h__> perrito666: I spec'd it, not really experience with it. What's up?
<perrito666> well I want to add an openstack from my novarc
<perrito666> I run it
<perrito666> it asks me for a cloud, I have no cloud to add those credentials to
<perrito666> I try to add cloud, it asks me for a cloud.yaml
<perrito666> kind of defeats the purpose
<rick_h__> perrito666: so cloud != credential
<rick_h__> perrito666: cloud = api endpoint and metadata
<rick_h__> perrito666: so you have to add the cloud, and then when you autoload-credential it'll be attached to that cloud
<rick_h__> perrito666: imagine you've got multiple openstacks, prodstack, canonistack, serverstack...
<perrito666> oh I would have expected the add-cloud part to be wizardy too
<rick_h__> perrito666: no, there will be some cli prompting in the future
<rick_h__> perrito666: but for now, it's "pass me some yaml looking info I can use to ping that cloud"
<perrito666> or even more intelligent: autoload could just ask me the proper questions to create the cloud
<rick_h__> perrito666: one day
<jam> perrito666: http://reviews.vapour.ws/r/4182/
<jam> if you're interested
<perrito666> jam: I definitely am, reviewing
<perrito666> I also am OCR so not a chance to skip it :p
<jam> perrito666: having actual progress information when bootstrapping lxd is pretty darn nice
<jam> rather than it just hanging for 5 minutes and you're wondering WTF is going on.
<perrito666> it is, I don't know Dubai, but in South America it's really useful :p
<jam> ye old "juju bootstrap local" problem.
<kwmonroe> rick_h__: perrito666:  looks like i missed "azure provider register Microsoft.Network" first time around.. all looks good now.  thanks!!
<perrito666> kwmonroe: great to hear that
<rick_h__> kwmonroe: awesome, if there's suggestions to the docs to clear it up please let us know
<kwmonroe> will do rick_h__
<perrito666> jam: got reviewed
<jam> thx perrito666
<perrito666> jam: definitely in need of testing
<perrito666> but nothing that cant wait to after beta3
<jam> agreed. this was mostly about proof-of-concept, but I wanted to get that pushed up to make sure it was going in a good direction
<jam> thanks for the StatusCallback pointer.
<perrito666> jam: we are even, I saw you fixed a couple of typos I introduced
<katco> ericsnow: natefinch: do we need to do a standup? is wallyworld coming to the earlier (for him) time?
<ericsnow> katco: not sure what his specific plans are
<katco> ericsnow: natefinch: well, we already kind of did a standup earlier, so only reason is to catch him up
<ericsnow> katco: as far as I'm concerned we got everything covered standup-wise
<natefinch> katco, ericsnow: ditto
<katco> ericsnow: natefinch: agree.. ok then i've hit my limit i need to go lay down
<ericsnow> katco: hope you feel better
<katco> ericsnow: ty
<natefinch> ericsnow: I have a small PR I need reviewed before I can land the bigger PR for adding --resources to publish: https://github.com/juju/charmrepo/pull/72
<ericsnow> natefinch: I'll take a look
<natefinch> katco: btw, figured out the problems I was having, just dependency issues
<mup> Bug #1557726 opened: Restore fails on some openstacks <backup-restore> <openstack-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1557726>
<natefinch> ericsnow: I'm reviewing your deploy w/ revisions review
<ericsnow> natefinch: LGTM; you still need one from the UI team, no?
<natefinch> ericsnow: probably
<natefinch> ericsnow: when you get an hour, this is good, and I think you'll like it: https://vimeo.com/80533536
<ericsnow> natefinch: cool; I'll take a look
<ericsnow> katco: FYI, I've finished reviewing your merge-from-master patch
<mup> Bug #1557747 opened: [LXD provider] Trusty container is used by default on Xenial host <docteam> <juju-core:New> <https://launchpad.net/bugs/1557747>
<menn0> davecheney: review please: http://reviews.vapour.ws/r/4183/
<cmars> menn0, quick review please? http://reviews.vapour.ws/r/4184/
<menn0> cmars: will do
<cmars> menn0, thanks
<menn0> davecheney: thanks for that review. here's the next one (small): http://reviews.vapour.ws/r/4185/diff/
<menn0> cmars: ship it
<roryschramm> is it possible to specify a custom apt repository when juju deploys an app to an lxc container? ie the lxc container will use http://myrepo/ubuntu in /etc/apt/sources.list
<mup> Bug #1557769 opened: private-address returns name, not ip, under 1.25.4 <juju-core:New> <https://launchpad.net/bugs/1557769>
<menn0> davecheney: just responded to your review comments for http://reviews.vapour.ws/r/4185/
<menn0> davecheney: I think you're missing what that type assertion is doing
<menn0> davecheney: it's pretty awful
<menn0> davecheney: it's taking a very limited interface and, with the knowledge that the thing underneath is actually a *state.State, converting it
<menn0> davecheney: saying it out loud, I really don't like the way it's done :)
<menn0> davecheney: I'll try a different approach
<davecheney> i don't understand why the interface value passed into the method cannot be used
<davecheney> is it the wrong type ?
<menn0> davecheney: basically yes.
<menn0> davecheney: migration.ExportModel requires a *state.State, not the Backend interface
<menn0> davecheney: I'm trying to avoid requiring the use of *state.State in the tests
<davecheney> why can't you use an interface
<menn0> davecheney: I'm changing ExportModel to take an interface
<davecheney> you just want the behaviour
<menn0> davecheney: easier said than done when you have a monster like state.State
<menn0> davecheney: but for export it's actually doable
<davecheney> how many methods does it expect ?
<menn0> davecheney: in this case 1
<menn0> davecheney: for import 10's if not 100's (the tests use StateSuite there)
<menn0> davecheney: actually I lie, it's not too bad for import either
<menn0> davecheney: let me fix this
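The pattern menn0 is unhappy with above can be sketched in a few lines. This is an illustration only: `Backend`, `State`, and `exportModel` are invented stand-ins for the real juju types, showing why asserting a narrow interface back down to its concrete type is awkward.

```go
package main

import "fmt"

// Backend is a deliberately narrow interface, standing in for the
// limited view the API layer has of state (names are illustrative,
// not juju's actual types).
type Backend interface {
	ModelUUID() string
}

// State is the concrete implementation hiding behind Backend,
// playing the role of *state.State in the discussion above.
type State struct{ uuid string }

func (s *State) ModelUUID() string { return s.uuid }

// exportModel shows the disliked pattern: accept the narrow interface,
// but assert back to the concrete type because the callee demands it.
func exportModel(b Backend) (string, error) {
	st, ok := b.(*State)
	if !ok {
		return "", fmt.Errorf("backend is not a *State")
	}
	return "exported model " + st.uuid, nil
}

func main() {
	out, err := exportModel(&State{uuid: "deadbeef"})
	fmt.Println(out, err) // → exported model deadbeef <nil>
}
```

menn0's fix, as discussed below, is the cleaner alternative: change the callee to accept the interface itself, so no assertion is needed.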
<davecheney> menn0: cherylj bogdanteleaga https://github.com/juju/juju/pull/4749
<wallyworld> axw: let me know when you've got a moment so i can chat about the schema stuff
#juju-dev 2016-03-16
<bradm> is 1.25 and xenial supposed to work?  I filed LP#1557345 because I'm having issues with it deploying to containers
<alexisb> bradm, it works if lxc is installed
<alexisb> but you will not be able to deploy a lxc container on a vanilla xenial image as lxc is not installed by default
<bradm> alexisb: huh, it's installed for me
<bradm> alexisb: and it doesn't work deploying a container to it
<bradm> alexisb: the tl;dr is that I took a working juju environment deploying to canonistack, just changed the default series, did a juju bootstrap and then a juju deploy local:xenial/ubuntu --to lxc:0, and got an error as per my bug
<menn0> davecheney: looking now. was afk for a bit.
<menn0> davecheney: ship it
<davecheney> ta
<menn0> gah! nasty horrible import loop
<anastasiamac_> bradm: could u please add this info to the bug too?
<bradm> anastasiamac_: that they're just bootstrapped?  sure.  I'm testing it out again, will confirm if lxc is installed both before and after I attempt the deploy
<anastasiamac_> bradm: :D that u "... took a working juju environment deploying to canonistack, just changed the default series, did a juju bootstrap and then a juju deploy local:xenial/ubuntu --to lxc:0, and got an error as per my bug"
<anastasiamac_> bradm: tyvm \o/
<bradm> anastasiamac_: most of that's already in the bug, but not as concise.
<axw> wallyworld: give me 15 mins please, just finishing cooking my lunch
<wallyworld> axw: talking to anastasiamac_ , will ping when ready
<bradm> alexisb: I can also confirm a freshly booted environment has lxc installed already when I can log into it, before I try to deploy
<alexisb> bradm, can you deploy without the --lxc?
<alexisb> bradm, I am about to eod, but I can raise visibility on the bug tomorrow
<bradm> alexisb: yes I can, it spins up a new instance.  which is fine, but doesn't work too well with HA openstack
<bradm> alexisb: I think jillr is going to be having a separate conversation about upgrading juju with cherylj tomorrow, but figuring out what I can do to unblock this would be great - I don't particularly care what juju version it is, just that I can deploy to LXCs - this is ultimately to get a deployable xenial with mitaka openstack
<alexisb> bradm, ok, I see you added our convo to the bug
<bradm> alexisb: yup, just to be clear about what's happening.
<alexisb> I will get the right eyes on the bug tomorrow
<bradm> excellent, thanks very much.
<davecheney> menn0: so i fixed the lxd reboot tests, and it turns out they don't work
<davecheney> 	github.com/juju/juju/container/lxd/lxd_go12.go:24: LXD containers not supported in go 1.2
<davecheney> 	github.com/juju/juju/cmd/jujud/reboot/reboot.go:88: failed to get manager for container type lxd
<davecheney> 	github.com/juju/juju/cmd/jujud/reboot/reboot.go:134:
<davecheney> 	github.com/juju/juju/cmd/jujud/reboot/reboot.go:66:
<wallyworld> axw: ping whenever you are free, after lunch
<axw> wallyworld: just cooking, it's only 9:30 :)  I'm free now
<axw> wallyworld: standup?
<wallyworld> sure
<natefinch-afk> davecheney: all the lxd stuff should be hidden behind +build !go1.3
<natefinch> davecheney: which maybe is what you found
<menn0> anastasiamac_: looking at your virttype PR now
<anastasiamac_> menn0: thnx? :D
<natefinch> wallyworld: I have a type that supports gnuflag, for --resource foo=bar --resource baz=bat.  I need to use it in juju/juju and also in the charmstore-client.  I was thinking of putting that type in github.com/juju/cmd ... since it's a pretty useful type to have around in general.  Do you think that's an ok place, and if not, do you have a suggestion for a better place?
<wallyworld> natefinch: what does the type do that's not already covered by our existing key-value flags type?
<natefinch> wallyworld: AFAIK we don't actually have a key-value flags type... there
<wallyworld> let me try and find it
<natefinch> wallyworld: there's the constraints-style keyvalue type, but that is severely restricted as to what keys and values it can support
<wallyworld> we have another general one that frank wrote
<natefinch> wallyworld: there's a storage one
<natefinch> wallyworld: there's something for bindings
<menn0> anastasiamac_: review done
<anastasiamac_> menn0: \o/ thank u - looking
<wallyworld> natefinch: yeah, i can't find anything, i may have misremembered what we had
<wallyworld> juju/cmd seems a good spot
<natefinch> wallyworld: there's a few similar things, but nothing quite so straightforward
<natefinch> wallyworld: cool
<wallyworld> i could have sworn we have a generic key=value one
<wallyworld> we do have one but it also accepts filenames
<wallyworld> not just key values
<wallyworld> the filename if specified contains key values
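The key=value flag type natefinch describes can be sketched against the standard `flag.Value` interface, which `gnuflag`'s `Value` mirrors. The `kvFlag` name and the `--resource` usage are illustrative, not juju's actual implementation:

```go
package main

import (
	"flag"
	"fmt"
	"sort"
	"strings"
)

// kvFlag accumulates repeated "--resource foo=bar" style arguments
// into a map. It satisfies flag.Value; gnuflag's Value interface is
// identical, so the same type works with either package.
type kvFlag map[string]string

// String renders the collected pairs in a stable, sorted order.
func (f kvFlag) String() string {
	pairs := make([]string, 0, len(f))
	for k, v := range f {
		pairs = append(pairs, k+"="+v)
	}
	sort.Strings(pairs)
	return strings.Join(pairs, ",")
}

// Set is called once per occurrence of the flag on the command line.
func (f kvFlag) Set(s string) error {
	k, v, ok := strings.Cut(s, "=")
	if !ok || k == "" {
		return fmt.Errorf("expected key=value, got %q", s)
	}
	f[k] = v
	return nil
}

func main() {
	res := kvFlag{}
	fs := flag.NewFlagSet("deploy", flag.ContinueOnError)
	fs.Var(res, "resource", "resource name=value (repeatable)")
	fs.Parse([]string{"--resource", "foo=bar", "--resource", "baz=bat"})
	fmt.Println(res) // → baz=bat,foo=bar
}
```

The file-reading variant wallyworld mentions would extend `Set` to detect a value that names a file and load the pairs from it instead.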
<wallyworld> anastasiamac_: menn0: providers have a constraints validation interface which they implement - that's where the virt type value needs to be checked
<anastasiamac_> yes
<anastasiamac_> wallyworld: my question here was more along the lines of whether we have a set of virt types that we'd accept (like we do with arches)
<wallyworld> provider dependent, hence the validation done in the provider
<anastasiamac_> wallyworld: menn0: however, m happy to not do any validation and only do it on a provider side :D
<wallyworld> even with arches, validation is done on the provider also
<wallyworld> apart from initial check
<anastasiamac_> wallyworld: agreed... since i saw the initial arch check in constraints/validation, I've added the to-do to confirm that I did not need to do something similar for virttype.
<anastasiamac_> wallyworld: i'll remove todo
<wallyworld> ty :-)
<menn0> wallyworld, anastasiamac_ : sounds good. I think we were all pretty much on the same page :)
<wallyworld> in violent agreement :-)
<anastasiamac_> :P
<axw> anastasiamac_: did you happen to test with a provider that doesn't support virt-type? I think we need to register unsupported constraints for them all (which is kinda dumb; should be a whitelist I think)
<axw> anastasiamac_: sorry, I think I just asked the same thing wallyworld did
<anastasiamac_> axw: sounds good. I've hit merge but will add it now as a separate PR
<anastasiamac_> although if virt-type is not specified, all will be good
<anastasiamac_> if it's specified, and virt-type is not supported on clouds, we'd just say that nothing matches specified constraints...
<natefinch> anyone up for a quick and pretty painless review? http://reviews.vapour.ws/r/4190/
<natefinch> re: ^  note this is a straight up copy of already-reviewed code in juju-core, just moving it somewhere accessible to other projects.
<axw> natefinch: reviewed
<hatch> with juju 2.0 I'm seeing some very weird deltas. somehow a unit went from a config-changed hook error, to maintenance, to error
<axw> anastasiamac_: yep, I think we could just be a bit more helpful and say immediately that virt-type isn't handled by the provider
<axw> rather than filtering all the things out and saying nothing matches
<axw> hatch: hook retries maybe?
<anastasiamac_> axw: sure. if my current PR lands, I'll follow it up (if it fails, i'll amend current)
<hatch> axw: would a hook automatically retry?
<axw> anastasiamac_: thanks
<axw> hatch: yes, support was added not too long ago to automatically retry failed hooks
<natefinch> axw: interesting point about resources for services in bundles..... it's been on our mind, but we're basically out of time to implement it at this point.
<hatch> axw ohh ok then, this is news to me - that would explain why I was seeing such weird results.
<axw> natefinch: fair enough. just keep that code in mind when you do get there
<hatch> axw I'm also seeing that the 'juju status' updates quite a bit sooner than the delta stream...is this also possible?
<natefinch> axw: definitely, thanks for the pointer. That's really probably the most difficult part, is just the annoying contortions on the command line
<axw> hatch: possible, yeah. deltas are based on a polling mechanism
<axw> I forget the period, but it's in the seconds
<axw> ... I think
<hatch> in this case, it was probably 10s
<axw> hatch: sounds about right
<mup> Bug #1555355 changed: MachineSerializationSuite.TestAnnotations unit test failure (Go 1.6) <ci> <go1.6> <test-failure> <unit-tests> <juju-core model-migration:In Progress by menno.smits> <https://launchpad.net/bugs/1555355>
<mup> Bug #1557345 opened: xenial juju 1.25.3 unable to deploy to lxc containers <canonical-bootstack> <juju-core:Triaged> <juju-core (Ubuntu):Invalid> <https://launchpad.net/bugs/1557345>
<hatch> ok thanks for confirming - I'm just qa'ing my changes to support the new agent_state er...JujuStatus and WorkloadStatus
<hatch> thanks axw
<axw> hatch: no worries
<natefinch> lol, landing stuff outside of juju/juju is so much easier.  CI runs in like 30 seconds rather than 30 minutes
<natefinch> axw: I'm a dip: http://reviews.vapour.ws/r/4191/
<axw> natefinch: heh, oops :)
<menn0> davecheney: this is much better: http://reviews.vapour.ws/r/4185/
<menn0> davecheney: PTAL
<mup> Bug #1557345 changed: xenial juju 1.25.3 unable to deploy to lxc containers <canonical-bootstack> <lxc> <xenial> <juju-core:Invalid> <juju-core 1.25:Triaged by anastasia-macmood> <juju-core (Ubuntu):Invalid> <https://launchpad.net/bugs/1557345>
<wallyworld> axw: i've also added support for setting allowed values to accommodate the "algorithm" attribute in joyent config http://reviews.vapour.ws/r/4171/
<axw> wallyworld: cool, looks good. I was wondering whether we can determine the algorithm from the key...
<axw> wallyworld: is it ready for re-review now then?
<wallyworld> axw: yeah, why not
<wallyworld> axw: i realised i no longer need to pass authtpye to finalise, i'll remove that
<axw> wallyworld: thanks, was about to comment
<davecheney> menn0: looking
<wallyworld> and an import fix
<davecheney> menn0: https://github.com/juju/juju/pull/4749
<davecheney> could you check again
<davecheney> i had to skip the test if built with go 1.2
<axw> wallyworld: reviewed
<davecheney> because lxd compiles on go 1.2, but doesn't actually work
<wallyworld> ta
<davecheney> which i'm not sure is helping
<wallyworld> axw: i didn't know about strictfieldmap, i'll use that
<wallyworld> file attr may still be interesting though
<menn0> davecheney: looking
<axw> wallyworld: file attr?
<wallyworld> axw: the schema declares it has an attribute "foo". we then use a map with key "foo-file" which proves the value for "foo". "foo-file" would be declared invalid for a scrict schema map, no?
<wallyworld> s/proves/provides
<axw> wallyworld: there will be two fields in the checker
<axw> wallyworld: foo and foo-file
<axw> wallyworld: both marked non-mandatory
<menn0> davecheney: still ship it
<wallyworld> axw: i'll look at it - tests as written won't pass at the moment i think
<axw> wallyworld: okey dokey
<wallyworld> as they don't construct a schema containing foo-file
<wallyworld> axw: and joyent schema won't pass either - so i'll need to inject any file attributes into the schema
<axw> wallyworld: I'm saying they already are added to the environschema
<axw> wallyworld: look at schemaChecker(), search for "(file)"
<wallyworld> as yes
<wallyworld> ah yes
<wallyworld> so they are
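The "foo"/"foo-file" behaviour being discussed can be sketched as a strict field check: only declared attributes are accepted, but each declared attribute also implicitly allows a companion `-file` key whose file contents supply the value. This is an illustration of the idea, not the real juju/schema or environschema API:

```go
package main

import (
	"fmt"
	"sort"
)

// checkStrict mimics what a strict field map does: every key in the
// input must be declared, and for each declared attribute "foo" a
// companion "foo-file" key is also accepted (the named file's
// contents would supply foo's value).
func checkStrict(declared []string, input map[string]string) error {
	allowed := map[string]bool{}
	for _, d := range declared {
		allowed[d] = true
		allowed[d+"-file"] = true
	}
	var bad []string
	for k := range input {
		if !allowed[k] {
			bad = append(bad, k)
		}
	}
	if len(bad) > 0 {
		sort.Strings(bad)
		return fmt.Errorf("unknown fields: %v", bad)
	}
	return nil
}

func main() {
	decl := []string{"private-key"}
	// The -file variant passes even though only "private-key" was declared.
	fmt.Println(checkStrict(decl, map[string]string{"private-key-file": "~/.ssh/id_rsa"})) // → <nil>
	// A misspelled key is rejected by the strict check.
	fmt.Println(checkStrict(decl, map[string]string{"pirvate-key": "..."}))
}
```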
<menn0> davecheney: the reason for the blank lines was to separate the test setup, from the call being tested , and then the test asserts
<menn0> davecheney: but whatever :)
<davecheney> meh, your call
<natefinch> blank lines matter?
<anastasiamac_> menn0: axw: black-listing virt-type as constraint for all providers http://reviews.vapour.ws/r/4192/
<axw> anastasiamac_: I'm a bit confused about the comment in ec2. are we using the virt-type constraint in ec2?
<axw> doesn't look like it
<anastasiamac_> axw: no we are not
<anastasiamac_> it's not in the code.
<axw> anastasiamac_: ok, then it should be in unsupported constraints
<anastasiamac_> axw: i'll remove the comment but wanted 2nd pair of eyes to confirm that m not imagining things
<anastasiamac_> axw: yep. i'll remove the comment now that u agree.
<axw> anastasiamac_: yeah, I guess we'll want to expose it sooner or later (choose pv/hvm), but we should reject if we're not using it
<axw> anastasiamac_: thanks
<mup> Bug #1557874 opened: juju behaviour in multi-hypervisor ec2 clouds <juju-core:New> <juju-core 2.0:New> <https://launchpad.net/bugs/1557874>
<jam> wallyworld: with all the CLI changes, is there a way to upload tools for multiple versions anymore?
<jam> series
<jam> specifically, I want to test the LXD provider on Trusty hosting an extra unit on Xenial
<jam> but I need to have tools in state for Trusty and Xenail
<jam> Xenial
<wallyworld> jam: i'd have to check - at one point we uploaded tools for the specified series and any lts series automatically, give be a minute to look
<jam> wallyworld: thanks. I found something weird where "juju bootstrap test-lxd --upload-tools --bootstrap-series xenial" didn't work but somehow "--upload-tools --bootstrap-series xenial --config default-series=xenial" did, IIRC
<jam> but I want both Trusty and Xenial, not just one or the other.
<wallyworld> jam: bootstrap-series is new - it could just be that upload-tools doesn't account for it
<wallyworld> in fact, i bet that is the case
<jam> wallyworld: sure, but I still want both :)
<wallyworld> so should be a simple fix
<wallyworld> yes, both need to be accounted for
<jam> wallyworld: right, I'm looking for the old "--upload-series trusty,xenial" that we used to have
<jam> wallyworld: or even just some other command that lets me push a binary as the right tools for the series
<jam> it doesn't have to be all munged in one thing, just a way to have compiled tools for 2 series
<wallyworld> i think so long as it honours bootstrap-series, default-series that will be a start. i can't recall what happened to the "use these series explicitly" bootstrap option, i seem to recall that was deprecated by someone so we removed it for 2.0
<axw> jam: when you upload for one series, the server explodes that into all series for the same OS
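The server-side "explode" behaviour axw describes can be sketched as follows. The series-to-OS table here is a tiny illustrative sample, not juju's actual series catalogue:

```go
package main

import "fmt"

// seriesOS maps a handful of series to their OS, purely for
// illustration; juju derives this from its own series metadata.
var seriesOS = map[string]string{
	"trusty":  "ubuntu",
	"xenial":  "ubuntu",
	"win2012": "windows",
}

// explodeSeries returns every known series sharing the OS of the
// uploaded series: tools uploaded for one series are registered for
// all series of the same OS, since the binaries are identical.
func explodeSeries(uploaded string) []string {
	os, ok := seriesOS[uploaded]
	if !ok {
		return nil
	}
	var out []string
	for s, o := range seriesOS {
		if o == os {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	// Uploading for trusty also makes the tools available for xenial.
	fmt.Println(explodeSeries("trusty"))
}
```

This is why jam's trusty-plus-xenial case should work with a single upload, provided both series are for the same OS.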
<axw> jam, wallyworld: I'm doing the code to create local-login macaroons now. thoughts on a sensible expiry time? it's 1h for external, but that's too frequent for local I think. maybe 24h?
<wallyworld> i think that sounds ok
<jam> axw: what happens when the macaroon expires? It just does an extra login step?
<jam> requires you to enter your password again?
<wallyworld> yes
<wallyworld> will prompt
<axw> jam: yup
<jam> online having to open a web browser every hour sounds very bad
<wallyworld> 24h i ok though right?
<jam> wallyworld: how often do you like to 2-factor auth?
<jam> even 1/day is pretty hard
<axw> heh :)
<axw> agreed
<wallyworld> fair point
<jam> wallyworld: axw: I set up SSH keys so I don't have to enter my passwords for things, and run an agent so I can enter it on login and forget about it for quite a while.
<mup> Bug #1557470 changed: juju reads from wrong streams.canonical.com location <simplestreams> <juju-core:Invalid> <juju-core 1.25:Invalid> <https://launchpad.net/bugs/1557470>
<jam> Ubuntu SSO does some things about remembering your login for a while so it can recognize you
<jam> if we have something like that
<axw> jam: isn't that just the same? a time based token?
<jam> axw: so the SSO thing means that it is shared across users. If we're integrating with that such that we don't have to prompt the user as long as their SSO is still valid then that macaroon can be any timeout
<jam> we are just checking that they really are still valid, the *real* timeout is SSO
<jam> For Local, we'd like something akin to that.
<jam> Where we can issue a challenge+reauth at any time, but the *user* is in control of how often the real reauth happens.
<jam> I may not be clear
<jam> ssh-agent is the thing that says how often I need to login
<jam> not "ssh $MYHOST"
<axw> jam: ok, understand
<jam> I don't know if we have something tasteful here.
<jam> axw: its the sort of thing we may want a knob on the server for
<jam> so my cruddy sites I just set to never expire
<wallyworld> jam: this is for a local controller without sso or an external identity manager
<jam> and the Production servers expire daily
<wallyworld> without sso or an external identity manager, we just use username/password as set on the controller for that user
<wallyworld> and we use a macaroon (time based) to avoid re-authenticating each time
<jam> wallyworld: sure I understand that bit. but how often do I need to auth to it. *Today* we never have to reauth, and its all local, and its pretty nice.
<wallyworld> authentication is done by controller
<axw> jam: that sounds sane. I'll implement without that first, then add config for timeout
<wallyworld> right, but we need a balance between what we have today and being secure
<jam> wallyworld: maybe. passwords that people can remember are rarely actually secure, which means you actually use a password manager
<jam> which means yet again the thing that actually decides how often you "auth" is something else.
<jam> (LastPass, your custom gpg encrypted secrets file, etc)
<wallyworld> indeed, i use one for github and when that times out on the cli, it's trivial to paste in the pw again
<wallyworld> 2 clicks in my browser plugin and paste
<jam> wallyworld: anyway, I highly doubt people will generate solid passwords for a local controller, which means we're just aggressively pissing them off by making them enter it all the time.
<jam> likely it means them using weaker passwords that are easier to enter
<jam> ultimately being weaker security
<wallyworld> sure, we just need to decide what "all the time" is in order to piss people off. 1day, 1 week, 1 month?
<jam> wallyworld: with your browser, if someone can login as you, then they're in. If I can login to my Laptop as Jameinel, I may (may not) need to login to Juju on that machine as well.
<jam> I'm just thinking through the space
<wallyworld> i agree a short time is bad. i just think it should be finite
<jam> wallyworld: flip side, what happens if you forget your password?
<jam> What is our password recovery mechanism
<wallyworld> nothing (yet). that is a current limitation
<jam> I'm still one "sudo foo" away from being root
<jam> I just hesitate to say "you must remember a password you set" and then not give a way to recover.
<jam> but if the recovery mechanism is weaker than the password, we haven't added security.
<jam> maybe 1/day is reasonable.
<wallyworld> recovery is definitely on the todo list
<jam> as it at least makes you think about it.
<jam> wallyworld: I have a strong feeling that local passwords actually don't make sense.
<wallyworld> maybe 1 week or 1 month even. i have no firm view on how long, happy to let others decide
<jam> wallyworld: well 1 month is just saying "forget about this until 1 month later when you won't remember it"
<jam> I'm worried that its the same problem as you may not do "juju foo" for a month
<jam> regardless of the password timeout being shorter
<wallyworld> well, i can't remember my gh password :-)
<jam> wallyworld: yeah, I have several 16 char random passwords I can't remember at all.
<jam> but those integrate with Firefox
<wallyworld> not the gh cli
<wallyworld> but really easy still
<wallyworld> it comes down to, i guess, how do we stop unauthorised people from logging in to your controller
<jam> *today* we have a token on disk
<wallyworld> you mean the ca cert?
<jam> I mean the password in ~/.juju/environments/ENV.jenv
<wallyworld> that' s not there anymore, nor is most of bootstrap config, i'd need to double check what we do now
<axw> wallyworld: the password for admin@local in accounts.yaml is what was admin-secret
<wallyworld> i didn't think client login needed anything more than the ca cert
<wallyworld> ah right
<jam> wallyworld: ~/.local/share/juju/accounts.yaml
<axw> wallyworld: it needs a username and password. ca-cert is just for verifying the server's identity
<wallyworld> i forgot
<wallyworld> brain too full
<jam> heh
<jam> I think the statement from tim was that "If you have the cloud credentials you can get in as ADMIN"
<jam> which is. ok, fine. I can spend the money on the cloud, I can get into the env. Maybe not perfect, but something.
<jam> I have $root$ on my local machine, is that enough?
<axw> jam: ideally, although you could just as easily throw away your SSH keys
<axw> well maybe not *just* as easily :)
<axw> jam: I haven't thought a lot about how to do recovery yet, but was thinking of having a localhost-only interface on the controller machines to do that. if you can ssh into machine-0, then you can fix up your own password
<axw> and any admin can change anyone else's password
<jam> axw: so a unix socket is how LXD does it
<jam> so certainly there is precedent.
<jam> might even be how mongo does as well?
<axw> jam: yep, you have to start mongo in a special way tho IIRC (excluding the first startup, where there's an exception if you have no password set yet)
<jam> axw: and certainly we have to consider that we can't be more secure in Juju than someone who can access our DB
<jam> they can just set the password there.
 * axw nods
<axw> jam: speaking of, we should probably disallow sharing admin models with non-admins :)
<axw> otherwise I'll just "juju deploy backdoor --to 0"
<jam> axw: wallyworld: so if it is "you don't need a password if you're on the machine" and you need to refresh your login 1/day from another machine, we can live with it. I don't think it is quite there overall, but it is probably acceptable.
<axw> jam: ok, thanks
<wallyworld> sgtm
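The localhost-only recovery interface axw floats above could look something like the following: an HTTP handler bound to a unix socket on the controller machine, so only someone who can already ssh into machine-0 (and read the socket) can reset a password, much as the LXD client talks to its daemon. The endpoint name and behaviour are invented for illustration, not real juju API:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
	"path/filepath"
)

// resetMessage is the (invented) response body for a successful reset.
func resetMessage(user string) string {
	return fmt.Sprintf("password reset for %s\n", user)
}

// resetHandler sketches the recovery endpoint. Real code would write
// a new password hash into state here.
func resetHandler(w http.ResponseWriter, r *http.Request) {
	io.WriteString(w, resetMessage(r.URL.Query().Get("user")))
}

func main() {
	sock := filepath.Join(os.TempDir(), "juju-recovery.sock")
	os.Remove(sock)
	l, err := net.Listen("unix", sock)
	if err != nil {
		panic(err)
	}
	defer l.Close()

	mux := http.NewServeMux()
	mux.HandleFunc("/reset-password", resetHandler)
	go http.Serve(l, mux)

	// Client side: dial the unix socket directly, as the lxc CLI does.
	client := &http.Client{Transport: &http.Transport{
		DialContext: func(context.Context, string, string) (net.Conn, error) {
			return net.Dial("unix", sock)
		},
	}}
	resp, err := client.Get("http://unix/reset-password?user=admin")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body)) // → password reset for admin
}
```

Binding to a filesystem socket rather than a TCP port means access control falls out of ordinary file permissions, which matches jam's point that someone with root on the machine is already past any password we could set.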
<jam> cherylj: perrito666: if you see this later. With my branch you now see messages if you do "juju status --format=yaml" but machine-status messages aren't shown by default in "juju status" output.
<jam> so we have some visibility, but not a huge amount.
<jam> wallyworld: on the downside, "juju status-history" is filled with 100 "downloading image 98%" messages.
<wallyworld> oh joy
<jam> wallyworld: so its super nice to see the progress in status
<jam> but it is yet-another status-history message
<jam> why hasn't my machine started yet? Because the image copy is only 70% done. great
<wallyworld> we should make that more usable
<wallyworld> wanna file a bug?
<jam> wallyworld: expose status messages in default "juju status" or be able to have a message that gets updated instead of adding yet-another message?
<wallyworld> both :-)
<jam> wallyworld: what is the map in SetInstanceStatus for? I haven't seen anywhere that it ever gets set.
<jam> Is it set from "status-set" in charm hooks?
<wallyworld> jam: it accounts for the fact that we may want to pass some arbitrary data, like for other statuses
<wallyworld> eg
<wallyworld> we could pass in the download percentage or something
<wallyworld> or time remaining
<jam> wallyworld: we can, but if nothing is touching it, exposing it, does it actually do anything?
<jam> I guess the API would expose it?
<wallyworld> yeah eg for juju status
<wallyworld> the yaml output has omitempty
<wallyworld> that's the way it works for normal status, i assume it's the same here
<jam> so it is shown just not in the default format.
<wallyworld> it's been a while since i saw the code
<wallyworld> yes, not sown in tabular
<wallyworld> shown
<wallyworld> tabular is more of a summary
<jam> wallyworld: bugs filed
<wallyworld> ty
<wallyworld> i'll try and get them done this week
<wallyworld> there's a similar bug about status-history spam
<wallyworld> for update-status calls
<mup> Bug #1557914 opened: "juju status" doesn't show machine-status messages by default <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1557914>
<mup> Bug #1557918 opened: "juju status-history" doesn't include the concept of progress messages <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1557918>
<jam> wallyworld: yeah, it certainly feels similar to the update-status issue
<wallyworld> yup
<jam> wallyworld: I wonder if a flag like "current-only" would be relevant.
<jam> This is a message that should be displayed, but doesn't need to be logged.
<wallyworld> yeah, i was wondering if we needed to do that
<wallyworld> there's an argument though that we should store everything and filter on display
<jam> wallyworld: that's ok, but not showing by default is the important bit.
<jam> so that you can get the interesting bits
<jam> wallyworld: I thought the status-history collection only stored a limited set of history, though.
<jam> does it store everything always?
<wallyworld> it's capped
<jam> right, so that's a reason to elide them
<jam> cause otherwise 100 "I'm almost there" messages end up pushing out the real content.
<wallyworld> depends on the size but yeah
<mup> Bug #1557914 changed: "juju status" doesn't show machine-status messages by default <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1557914>
<mup> Bug #1557918 changed: "juju status-history" doesn't include the concept of progress messages <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1557918>
<wallyworld> jam: i do like the idea of a "don't log this" flag. but william is afaik against throwing away data
<wallyworld> eg who cares if we ran the update status hook 100 times
<jam> wallyworld: so I can see people wanting to know "when was the last time update-status was run" because something was going wrong there.
<jam> I can hypothesize it, at least.
<jam> But *nobody* cares about something that happens more than 10-ish time
<jam> times
<wallyworld> yes
<jam> other than gathering stats about it
<jam> you just can't think about it
<jam> 100 messages saying "copying"
<jam> did you notice it was 99 and 73% wasn't there? :)
<mup> Bug #1557914 opened: "juju status" doesn't show machine-status messages by default <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1557914>
<mup> Bug #1557918 opened: "juju status-history" doesn't include the concept of progress messages <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1557918>
<wallyworld> axw: interactive add credentials done but i need to rework the provider credentials schema because we want the attributes to be ordered, and atm it is a map
<axw> wallyworld: ok
<voidspace> dimitern: frobware: standup?
<perrito666> wallyworld: jam reading your comment last night
<perrito666> why dont we grow a loglevel-sh attr to status?
<perrito666> morning all btw
<mup> Bug #1557993 opened: Can't create lxc containers with 1.25.4 and MAAS 1.9 and xenial <cross-team-kanban> <landscape> <juju-core:New> <https://launchpad.net/bugs/1557993>
<axw> wallyworld: it's all very hacky atm, but I've got a "juju login" which will request a macaroon, write it to accounts.yaml, and then use that for future logins
<perrito666> axw: congrats btw
<jam> perrito666: you mean for status-history ? yeah something like that seems valid.
<perrito666> jam: well, status history is just something that gets created by setstatus, so we should add it to status as a whole; that would not hurt since it can just be ignored where it has no value
<perrito666> default loglevel should be the one that gets stored in history
<jam> perrito666: the caveat here is that we want it shown in the "juju status" content, because it is currently active dataa
<jam> however, once it has expired, it isn't really worth hanging onto it/showing it by default
<jam> which is a bit different interpretation of log level, where log level is not-shown-at-all
<perrito666> jam: you mean you want to set the status and then have it disappear?
<jam> perrito666: I mean that when you do "juju status 0/lxd/0" you want to see "copying image: 25%"
<jam> but when you do "juju status-history --type machine 0/lxd/0" you don't really want to see 100 lines of "copying image: 1-100%"
<perrito666> yeah, I think we are on the same page then :)
<perrito666> currently you call setStatus and that sets the current status and tries to push it to the history bucket too
<TheMue> morning
<perrito666> loglevel (or a better name for it) would determine if it gets pushed to history
<voidspace> dimitern: I think I found it
<voidspace> dimitern: the test server doesn't add interfaces to the node when you call start (a post) only on get
<dimitern> voidspace, ah, there it is then :) good catch
<voidspace> dimitern: well, we'll see...
<wallyworld> axw: sounds awesome
<voidspace> dimitern: yes, seems to be it
<voidspace> dimitern: that brings me down to 23 failures, but looks like many of those have the same cause but need fixing separately
<dimitern> voidspace, nice - what sort of failures remain?
<voidspace> in fact 12 of them
<voidspace> a couple of "access to address maas.testing.invalid not allowed"
<voidspace> because creating a NewEnviron actually hits the api now (to check version) which it didn't used to
<dimitern> ah
<voidspace> so those I can fix by patching out GetCapabilities (done in other places already)
<voidspace> a couple of bad requests which are odd but shouldn't be too hard
<voidspace> and a couple of 404s (also odd)
<voidspace> and a few "failed to allocate address"
<voidspace> so about four different failure cases across 23 tests
<voidspace> ah, some of the 400s are for missing subnets
<voidspace> all to do with test setup I expect
<dimitern> even better then! we'll fix the test server
<voidspace> yep, I'll have a PR for this fix shortly
<dimitern> sweet!
<voidspace> dimitern: https://github.com/juju/gomaasapi/pull/9
<dimitern> voidspace, LGTM
<voidspace> dimitern: thanks
<voidspace> that was quick!
<dimitern> voidspace, I know that code all too well - it was a source of frustration :)
<voidspace> :-)
<frobware> voidspace, dimitern: of course let's not update any dependencies.tsv in maas-spaces2... please... :)
<voidspace> frobware: this is needed only for my branch
<voidspace> frobware: I'll update dependencies there, there may be more fixes first anyway
<voidspace> (this is the drop-maas-1.8 branch)
<frobware> voidspace: yep, just wanted to ensure we don't perturb what we have in m-spaces2. Really would like to see that branch merged today/tomorrow...
<voidspace> well, my branch may be ready to land in that timeframe...
<voidspace> ;-)
<voidspace> we'll do a separate CI run on this branch first though
<voidspace> dimitern: is it correct that MAAS 1.9 supports storage, so we don't need the checks for storage support in the provider?
<dimitern> voidspace, I believe so - axw / wallyworld can confirm?
<voidspace> dimitern: well, the error string is returned if the volumes aren't returned - so we still need to check that
<voidspace> so I think I'll leave the check and the test in place
<wallyworld> maas 1.9 does support storage
<voidspace> wallyworld: thanks
<voidspace> maybe I should change the error message
<dimitern> voidspace, perhaps the test server is overly assumptive there
<frobware> wallyworld: ever tried backup/restore recently on maas?
<wallyworld> no
<voidspace>  dimitern this is juju code
<voidspace> dimitern: it checks the number of returned volumes and if it doesn't match expected it reports that the version of MAAS doesn't support storage
<dimitern> voidspace, ah, is this around select/startNode ?
<voidspace> dimitern: I don't think we should drop that check
<frobware> was trying backup/restore http://pastebin.ubuntu.com/15400937/
<voidspace> dimitern: yeah, in startNode
<frobware> I could be driving this the wrong way though
<dimitern> voidspace, because volumes are only returned in the result of selectNode (or selectNode / acquireNode), not in the result of startNode ?
<voidspace> dimitern: this is existing code, I'm only looking at the text of the error message :-)
<dimitern> voidspace, which test is that?
<voidspace> dimitern: TestStartInstanceUnsupportedStorage
<voidspace> dimitern: I've changed the error text to report that the incorrect number of storage volumes were returned.
<voidspace> dimitern: instead of complaining about MAAS version
<voidspace> dimitern: I think getting back the wrong number of volumes should still be an error
<dimitern> voidspace, yeah it looks like the error message is wrong
<dimitern> voidspace, but resultVolumes is returned by maasInstance.volumes
<voidspace> right
<dimitern> voidspace, and if you look at the comment around line 960 in environ.go...
<voidspace> yeah, I see it
<dimitern> voidspace, it matters which maasObject we're embedding in a maasInstance, and subsequently reading the volumes from that embedded maasObject
<voidspace> I'm not touching any of that
<dimitern> voidspace, so error message could be better indeed
<dimitern> voidspace, it's not about supporting spaces
<dimitern> voidspace, s/spaces/storage/ :)
<voidspace> down to 21 failures
<voidspace> yep, error message changed
<dimitern> storage code was done earlier than the spaces support, maybe even before the capability for storage was in place
<voidspace> I think it was being worked on when I joined
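The error-message fix voidspace describes can be sketched as follows. This is a hypothetical helper, not juju's actual provider code in environ.go: the point is simply to report the volume-count mismatch itself instead of wrongly claiming the MAAS version lacks storage support.

```go
package main

import "fmt"

// checkVolumes reports a count mismatch between the storage volumes
// requested for a MAAS node and the volumes actually returned, rather
// than blaming the MAAS version (which, per the discussion, supports
// storage from 1.9 onward). Names here are illustrative.
func checkVolumes(requested, got int) error {
	if got != requested {
		return fmt.Errorf("requested %d storage volumes, but got %d", requested, got)
	}
	return nil
}

func main() {
	fmt.Println(checkVolumes(2, 1)) // mismatch reported as a count error
	fmt.Println(checkVolumes(2, 2)) // no error
}
```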
<frobware> dimitern: re: backup/restore, there's something similar and already reported. bug #1554807
<mup> Bug #1554807: juju backups restore makes no sense <juju-core:Triaged> <https://launchpad.net/bugs/1554807>
<voidspace> down to 14 failures
<dimitern> a nice title indeed :D
<frobware> dimitern: not sure it helps me with triaging "functional-backup-restore"
<dimitern> frobware, after rebasing and enabling multi-bridge creation my branch still seems to work \o/
<dimitern> frobware, will push and then go on with the mediawiki demo
<dimitern> frobware, or should I wait?
<frobware> dimitern: the mediawiki demo doesn't take too long to run, perhaps try that first.
<dimitern> frobware, +1
<frobware> mgz: you about?
<fwereade> OMFG, I have been hammering on this test for *hours*, and it turns out the reason the agent isn't running the model workers it "should"? I didn't give it JobManageModel >_<
<jam> fwereade: ouch
<jam> why aren't these things starting, its right here...
<fwereade> jam, at least the universe makes sense again :)
<mup> Bug #1557993 changed: Can't create lxc containers with 1.25.4 and MAAS 1.9 and xenial <cross-team-kanban> <landscape> <juju-core:New> <https://launchpad.net/bugs/1557993>
<mup> Bug #1558061 opened: LXD machine-status stays in "allocating". <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1558061>
<mup> Bug #1558061 changed: LXD machine-status stays in "allocating". <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1558061>
<mup> Bug #1557993 opened: Can't create lxc containers with 1.25.4 and MAAS 1.9 and xenial <cross-team-kanban> <landscape> <juju-core:New> <https://launchpad.net/bugs/1557993>
<frobware> sense
<mup> Bug # changed: 1466100, 1498086, 1498094, 1499501, 1506869, 1506881, 1521217, 1528971, 1540447
<mup> Bug #1558078 opened: help text for juju remove ssh key needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1558078>
<mup> Bug # changed: 1459298, 1463904, 1464665, 1482502
<mup> Bug #1558087 opened: TestInvalidFileFormat fails on windows because of / <ci> <regression> <test-failure> <windows> <juju-core:Incomplete> <juju-core model-acls:Triaged> <https://launchpad.net/bugs/1558087>
<natefinch> rogpeppe2: So, charmrepo depends on charmstore (for tests), and charmstore depends on charm repo.  This makes updating dependencies.tsv.... complicated.
<rogpeppe2> natefinch: yes, it is awkward
<rogpeppe2> natefinch: but it is possible
<rogpeppe2> natefinch: i haven't thought of a better approach yet, unfortunately
<natefinch> rogpeppe2: I think in order to update charmrepo to use a new version of charmstore, I'm going to need to update charmstore to use a new version of (at least) charm.v6
<rogpeppe2> natefinch: i've already fixed charmrepo to use a new version of charmstore
<rogpeppe2> natefinch: what's the actual problem you're having?
<natefinch> rogpeppe2: sorry, I missed your comment at the end of the PR saying you already updated the deps.  I had tried updating the deps, but I think I just got them into a bad state, which is why I was having problems.
<natefinch> rogpeppe2: there's actually no problem, using the deps in charmrepo, the tests run fine with changes you suggested
<rogpeppe2> natefinch: try fetch origin and rebase and see how things go
<natefinch> rogpeppe2: yeah, just did
<rogpeppe2> natefinch: the problem was just that the tests assumed the old charmstore semantics
<natefinch> rogpeppe2: right
<rogpeppe2> natefinch: BTW my most recent thinking is that the server side should include tests against the client package, with only some minimal unit tests in the client package itself.
<natefinch> rogpeppe2: I have an interesting video I watched recently, which you'll probably totally disagree with.  It's a talk titled Integrated Tests Are A Scam - https://vimeo.com/80533536
<rogpeppe2> natefinch: i saw your tweet and started watching, but then thought i should probably do it outside work time :)
<rogpeppe2> natefinch: i'm interested to see what he has to say
<natefinch> rogpeppe2: yes, probably wise :)
<mgz> frobware: so, run today on maas-spaces2 looks good
<mgz> has only failures from ci changes to ha test
<mup> Bug #1459300 opened: Failed to query drive template: unexpected EOF <ci> <cloudsigma-provider> <juju-core:Triaged> <juju-core 1.25:Fix Released> <https://launchpad.net/bugs/1459300>
<frobware> cherylj: ^^
<cherylj> mgz: what CI changes have been made?
<mgz> well, I assume it's that, could also be master that maas-spaces2 merged in being bad
<mgz> either way, it's not a maas-spaces problem
<frobware> mgz: could the reports include the commit IDs of any CI repos so that we could determine when things are changed?
<katco> ericsnow: hey
<ericsnow> katco: hi
<katco> ericsnow: i'm working on performance reviews right now, but i read through your comments on my merge
<mgz> frobware: some jobs do include that, but not all I think
<ericsnow> katco: k
<katco> ericsnow: i agree merges shouldn't refactor code. a lot of that came over from main, and i had to change some things to get it to compile
<ericsnow> katco: ah, I was wondering if that was the case
<katco> ericsnow: maybe TAL at master and see if you would have done things differently? specifically for store.go
<ericsnow> katco: probably https://github.com/juju/juju/pull/4623
<katco> ericsnow: yes, exactly that
<ericsnow> katco: I'll take another look
<katco> ericsnow: so anyway, with that context, lmk if that changes your review. it was not my intent to change things. i tried to keep the changes minimal
<ericsnow> katco: you bet
<katco> ericsnow: ta
<natefinch> ericsnow, rick_h__: for charm publish --resource .... are we supporting bundles?  seems like if we are, we need to change how we specify resource names, to something like --resource service:resourcename-rev
<ericsnow> natefinch, rick_h__: I recall that we weren't going to support deploying bundles with --resource args, but I don't know that we discussed publish
<natefinch> ericsnow, rick_h__, katco: yeah, I was worried we'd forgotten about that.  Seems like we kind of need to support it, otherwise bundles can't be published with charms that use resources
<ericsnow> natefinch: the bundle metadata is what defines which resources go with each service; bundles themselves do not have resources
<ericsnow> natefinch: so publishing a bundle with --resource doesn't have the same meaning
<natefinch> ericsnow: oh, hmm... right, so we're actually putting the specific resource revision right in the bundle
<ericsnow> natefinch: correct
<natefinch> ericsnow: ok, that's what I was forgetting.  so, I think we don't need to support --resource on publish for bundles, since by definition, the resource revisions are already defined
<natefinch> ericsnow: good :)
<rick_h__> natefinch: bundles don't have resources, charms do, so don't think we've got anything that does publishing
<ericsnow> natefinch: in effect we're using the bundle to publish the revision set for each service rather than publishing that revision set directly
<natefinch> ericsnow: exactly
<natefinch> rick_h__: yeah, I just confused myself, since I'm writing the CLI to publish with --resource flag, but the same command does bundles and charms
<natefinch> rick_h__: I'd actually already written the "you can't do that" path... but then second guessed myself
<alexisb> perrito666, sorry was running late
<alexisb> on the hangout now
<katco> sinzui: hey, is the curse here from bugs brought in from master? http://reports.vapour.ws/releases/3755
<sinzui> katco: we are discussing the nature of restore failing. We suspect the issue really is in master, in which case, we have a blocker
<katco> sinzui: ah ok =/
<katco> sinzui: we're running out of time to land this branch... is there anything at all i can do to help?
<voidspace> dimitern: ping
<natefinch> OMG, my editor is converting tabs to spaces in a .tsv file :/
<dimitern> voidspace, pong
<voidspace> dimitern: do you know about the maas provider and device hostnames
<voidspace> dimitern: there is a comment in newDevice about working round the testservice requiring a hostname
<voidspace> dimitern: and then it calls NewDeviceParams that *does not* fill in a hostname
<voidspace> dimitern: (and so tests fail)
<voidspace> I can patch out NewDeviceParams to provide a hostname in the tests - but I'm going to delete that comment
<voidspace> or I can fix the test server to not require a hostname and to generate a random one
<dimitern> voidspace, yeah, that workaround was needed because testserver required hostname to be set
<voidspace> I know, however the code that has that comment doesn't do it
<voidspace> it doesn't workaround it
<voidspace> so the comment is "wrong", not because the workaround isn't needed but because that code doesn't do it!
<dimitern> voidspace, newdeviceparams is there only to make testing the device creation a bit less awkward
<voidspace> dimitern: ah, ok
<voidspace> dimitern: I'll make the comment a bit more obvious
<dimitern> voidspace, production code does not need it otherwise
<voidspace> the comment just confused me
<dimitern> voidspace, sorry about that :/
<voidspace> np
<ericsnow> natefinch: "internal" tests have yet again bitten me :(
<natefinch> ericsnow: doh, sorry to hear that.  How so?
<ericsnow> natefinch: fixing a test causes an import cycle
<natefinch> ericsnow: that indicates a problem with package boundaries or the tests, generally...
<natefinch> ericsnow: though I know sometimes with our infrastructure it is unavoidable
<ericsnow> natefinch: exacto
<natefinch> ericsnow: (though it still indicates a problem, it's often a problem that cannot be easily fixed)
<ericsnow> natefinch: right
<mup> Bug #1558158 opened: Restore fails with no instances found <backup-restore> <blocker> <ci> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1558158>
<voidspace> dimitern: sooo, we now allocate addresses using claim_sticky_ip on the device and release through the standard ip address release api
<voidspace> dimitern: which I assume works fine in production - but doesn't work at all on the test server, they're not connected
<voidspace> dimitern: so that requires a test server change to look on devices when releasing addresses
<voidspace> at the moment it 404s
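The test-server change voidspace describes — looking at device-claimed addresses on release instead of 404-ing — can be pictured with this sketch. The maps and status codes are illustrative; the real gomaasapi test server differs.

```go
package main

import "fmt"

// releaseIP handles an IP release request: it checks addresses held by
// nodes first, then addresses claimed via claim_sticky_ip on devices,
// and only answers 404 when the address is unknown to either.
func releaseIP(nodeIPs, deviceIPs map[string]bool, ip string) int {
	switch {
	case nodeIPs[ip]:
		delete(nodeIPs, ip)
		return 204 // released from a node
	case deviceIPs[ip]:
		delete(deviceIPs, ip)
		return 204 // released from a device (previously the 404 case)
	default:
		return 404 // unknown address
	}
}

func main() {
	devices := map[string]bool{"10.0.0.5": true}
	fmt.Println(releaseIP(map[string]bool{}, devices, "10.0.0.5")) // 204
	fmt.Println(releaseIP(map[string]bool{}, devices, "10.0.0.5")) // 404, already released
}
```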
<ericsnow> natefinch, katco: PTAL: https://github.com/juju/charm/pull/200  and  https://github.com/juju/bundlechanges/pull/19
<ericsnow> natefinch, katco: ...and https://github.com/juju/juju/pull/4758
<natefinch> heh, I was just doing PR 200 in the charmstore client repo
<ericsnow> natefinch: solidarity!
<dimitern> voidspace, that's ok though, as we'll be using a different approach to release addresses and the AC code will go away as implemented atm
<natefinch> you know, sometimes I think Google has it right with a monorepo :/
<voidspace> dimitern: yeah, but for the current tests to pass I still need to fix the test server, not difficult though
<voidspace> right, EOD - off to visit my Mum in hospital
<voidspace> see you tomorrow all
<mup> Bug #1558185 opened: juju 2 status shows machine as pending but charm is installing <juju-core:New> <https://launchpad.net/bugs/1558185>
<mup> Bug #1558191 opened: TestConstraintsValidatorUnsupported fails on go 1.5+ <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1558191>
<mup> Bug #1558185 changed: juju 2 status shows machine as pending but charm is installing <juju-core:New> <https://launchpad.net/bugs/1558185>
<mup> Bug #1558191 changed: TestConstraintsValidatorUnsupported fails on go 1.5+ <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1558191>
<perrito666> ashipika: you were looking for me a few days ago?
<frobware> anybody doing `add machine lxd:0' at the moment with xenial?
<frobware> I see: "Creating container: Error adding alias ubuntu-xenial: not found"
<alexisb> jam, tych0 ?? ^^
<alexisb> cherylj,
<frobware> alexisb, jam, tych0, cherylj: status output, http://pastebin.ubuntu.com/15403438/
<frobware> alexisb, jam, tych0, cherylj: current workaround is to 'lxd-images import ubuntu --alias ubuntu-xenial xenial'
<frobware> on the host
<alexisb> frobware, great, can you please capture info in a bug
<katco> natefinch-lunch: still at lunch? :) how's your card going?
<frobware> alexisb: https://bugs.launchpad.net/juju-core/+bug/1558223
<mup> Bug #1558223: add-machine lxd:0 provisioning error: "Error adding alias ubuntu-xenial: not found" <juju-core:New> <https://launchpad.net/bugs/1558223>
<alexisb> thank you
<mup> Bug #1558223 opened: add-machine lxd:0 provisioning error: "Error adding alias ubuntu-xenial: not found" <juju-core:New> <https://launchpad.net/bugs/1558223>
<natefinch> katco: it was a late lunch :)   Was spending time mid-day working on code review comments from roger
<mup> Bug #1558223 changed: add-machine lxd:0 provisioning error: "Error adding alias ubuntu-xenial: not found" <juju-core:New> <https://launchpad.net/bugs/1558223>
<natefinch> katco: the card is going fairly smoothly... although I wanted to talk to people about the user-experience for charm push --resource
<katco> natefinch: we're running out of time for that. i'd say push forward with what's already defined and we can circle back iff we have the time
<katco> natefinch: keep in mind... day after tomorrow is our deadline
<natefinch> katco: that's fine.  mostly just wanted people to be aware that charm push --resource is just going to be syntactic sugar around charm push + charm push-resource (i.e. they're separate calls to the charmstore)
<katco> natefinch: what's your eta for the remainder of the work?
<natefinch> katco: I can get the current review comments finished and charm push --resource proposed today, but there's no one from the UI team to review the latter until tomorrow morning
<katco> natefinch: i think rick_h__ believes in some kind of "code fairy" ;)
<katco> who might merge for us
<ericsnow> katco: didn't happen last time I needed the code fairy :)
<natefinch> katco: as long as the code fairly also keeps roger from strangling us ;)
<katco> ericsnow: you must not really believe
<katco> ericsnow: clap harder!
<natefinch> s/fairly/fairy
<natefinch> heh
<rick_h__> katco: :p can always ask for a merge command if review is ok
<katco> natefinch: yeah, i'd just advise you to pick your battles here this close to a deadline
<natefinch> katco: definitely trying to go with the flow to just get'er'dun
<katco> rick_h__: i think the issue is more that we don't have final sign-off from someone in the ui team
<urulama> also, we're tagging CS with v4.5 today, release tomorrow, so, em, have that in mind :)
<katco> rick_h__: if only someone were around who used to work closely with the ui team and had the authority to sign off. and loved campers and photography.
<natefinch> because they like to "sleep" at "night"
<rick_h__> katco: understand.
<katco> urulama: grats on impending release! :D
<urulama> cs-client is part of that for the moment
<urulama> so, em, all PRs that are not needed for that release will not be landed anyway
<urulama> sorry
<urulama> (as i suspect they don't have the equivalent functionality covered in the uitests)
<katco> urulama: as long as we can get the changes in by friday, we're fine
<urulama> and i apologise, the fairies were kidnapped by an evil witch from the south
<urulama> :)
<katco> haha
<urulama> katco: np, it's unlocked as soon as it gets tested and tagged
<katco> urulama: cool. well gl to your team
<mup> Bug #1558232 opened: ERROR cannot obtain bootstrap information: Get https://10.0.3.1:8443/1.0/profiles: Unable to connect to: 10.0.3.1:8443 <juju-core:New> <https://launchpad.net/bugs/1558232>
<urulama> natefinch, katco: wait ... charm push --resource? what's with charm attach-resource?
<katco> urulama: change in direction from mark
<katco> urulama: he wants push-resource to be attach-resource
<natefinch> urulama: and charm push --resource is just a way to skip a step and do it all at once
<natefinch> urulama: so push up the charm and the resources for it
<urulama> was aware of attach-resource, not about charm push --resource ;)
<urulama> ok, sgtm
<urulama> natefinch: is this your PR? https://github.com/CanonicalLtd/charmstore-client/pull/200
<urulama> natefinch: i mean, the one you want to land?
<natefinch> urulama: yes
<natefinch> urulama: though it needs a few suggestions from code reviews implemented
<urulama> kk
<natefinch> which is what I'm doing now :)
<urulama> ok, if you don't get review tonight, i'll point them to it in the morning
<rick_h__> katco: just attach, no -resource
<natefinch> urulama: thanks
<katco> rick_h__: what happened to the whole 2.0 <verb>-<noun> edict?
<niedbalski> perrito666, http://paste.ubuntu.com/15403845/ , I can't bootstrap on lxd using master/head. Any known issues?
<rick_h__> katco: it went with deploy, bootstrap, and such.
<rick_h__> katco: i'm not 100% on it but i pushed attach-resource and was shot down.
<katco> rick_h__: hm. ok. natefinch ^^^
<natefinch> rick_h__: so, juju attach django website=./site.zip ?
<perrito666> niedbalski: I honestly dont know
<rick_h__> natefinch: yes
<natefinch> rogpeppe: okie dokie
<natefinch> whoops
<natefinch> rick_h__: ok :)
 * urulama thinks you've awakened the dragon
 * natefinch ducks
 * rogpeppe swoops in
 * rogpeppe ignores the puny humans
<natefinch> *grin*
<alexisb> rick_h__, are there other places we would use the word "attach"
<alexisb> I see danger in leaving it just attach with out the -resource
<katco> alexisb: rick_h__: what we've been doing is having <verb>-<noun> with an alias of <verb> if there's no conflict
<alexisb> well <noun>
<natefinch> katco: or <noun> in the case of resources :)
<natefinch> aka list-resources
<alexisb> so for example list-storage is the same as storage
<alexisb> yep
<katco> alexisb: natefinch: ah, yes.
<natefinch> katco: at least we're consistently inconsistent ;)
<katco> =|
<alexisb> but rick_h__ is right that there are cases where we use just the verb
<alexisb> like bootstrap
<katco> alexisb: i think the case there is that they are fundamental juju concepts, not substrates
<alexisb> but bootstrap is bootstrap - one case
<rick_h__> alexisb: right, I tried to argue against but we had some verb patterns and mark was taken with the idea of "email attachments"
<rick_h__> alexisb: the only other thing I could think for 'attach' was attach a storage device, or some sort of device
<rick_h__> alexisb: but we don't use it currently and then if attach is established in this fashion the others will have to be something else
<frobware> tych0: ping
<katco> rick_h__: i think i understand where mark is coming from, but i'm struggling with this: what delineates commands that should be <verb>-<noun> from those that should just be <verb>?
<rick_h__> katco: well there's the ambiguous vs non-ambiguous cases
<rick_h__> katco: any verb that can apply to more than one thing (list-) rightly needs more to it
<alexisb> attach-storage
<alexisb> attach-space
<alexisb> attach-model
<alexisb> we may not need them now but we could later
<katco> rick_h__: does that make it true that we can't have any other attach commands until a 3.0 to not break backwards compatibility?
<rick_h__> attach-storage was the only one I was worried about. I don't think of attach in the network space or in models
<urulama> but those are not for charm management, right
<katco> rick_h__: alexisb: yeah, this ^^^
<katco> urulama: there is a juju attach as well
<rick_h__> katco: no, it just means they'd need to be attach-XX worst case, and it'll be confusing so we'd look for a different work
<rick_h__> word
<katco> rick_h__: i think that's a different way of saying "can't have any other attach commands"
<rick_h__> katco: :)
<katco> rick_h__: when the team heard attach, the first thing we thought of was storage
<ericsnow> rick_h__: aren't we going to be doing unit-shared filesystems or something like that?
<rick_h__> ericsnow: yes, shared filesystems
<rick_h__> ericsnow: but what's to say that's not 'mount' or the like?
<ericsnow> rick_h__: "attach" would make much more sense there
<rick_h__> ericsnow: it's just hard to not use something because you *might* use it later. There's some cases but there's options.
 * rick_h__ is trying to look at the commands/think through
<ericsnow> rick_h__: if I see "juju attach" I am not going to have enough context to know what that is
<rick_h__> ericsnow: it's because resources are new
<rick_h__> ericsnow: as natefinch's command shows, it's more obvious with the full command
<katco> rick_h__: that is chicken and egg though ;)
<ericsnow> rick_h__: as long as "attach-resource" is a valid command
<katco> rick_h__: you would only have the full command if you knew what attach was
<mup> Bug #1558239 opened: [LXD provider] Cannot bootstrap using Xenial container <docteam> <juju-core:New> <https://launchpad.net/bugs/1558239>
<natefinch> can we still have attach-resource as an alias?
<natefinch> or vice versa
<katco> natefinch: that's backwards imo
<natefinch> I want attach-resource in juju help commands
<natefinch> I guess that's attach-resource with attach as an alias
<natefinch> that way if you want to upload a resource, you do juju help commands, and it's obvious
<rick_h__> the issue is the only aliases now tend to be the list-XXXX ones.
<rick_h__> we've killed off most other aliases, there's a couple that need cleanup still. /me tries to find the list from before
<ericsnow> rick_h__: FWIW, the obvious command is "juju upload-resource"
<natefinch> ^^^^ +100
<natefinch> :)
<natefinch> but I'm not Mark :)
<rick_h__> ericsnow: heh, yea that didn't work either.
<rick_h__> ericsnow: natefinch katco so an alias is a no go. It'll be a precedent setter in that sense. The other ones are either due to list- or plurals.
 * rick_h__ has to hop on a call, but I think we have to go with attach for now and we'll suffer the consequences later. attach-resources is too long, and attach hasn't been used in storage/networking to date and I think we'll have other options when/if we get there. 
<rick_h__> and when I'm wrong, everyone gets free sprint beverages and gets to say "I told you!" :)
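The alias pattern being debated — a canonical verb-noun command with an optional short alias, as with list-resources/resources — can be pictured with a tiny registration sketch. This is illustrative only, not juju's actual command registration code.

```go
package main

import "fmt"

// register records a command under its canonical name and maps any
// aliases back to it, mirroring the list-resources/resources pattern.
func register(commands map[string]string, canonical string, aliases ...string) {
	commands[canonical] = canonical
	for _, a := range aliases {
		commands[a] = canonical // alias resolves to the canonical command
	}
}

func main() {
	cmds := map[string]string{}
	register(cmds, "list-resources", "resources")
	register(cmds, "attach") // the outcome above: plain "attach", no alias
	fmt.Println(cmds["resources"]) // resolves to list-resources
}
```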
<natefinch> rick_h__: I'm just sad that juju help commands | grep resource  is not gonna show the very first command you'd want to use with a resource
<natefinch> rick_h__: I guess that's not true... the description will probably use the word resource
<ericsnow> natefinch: +1
<rick_h__> natefinch: sure it will, juju help commands has a text string on it that'll say resource in the string
<rick_h__> natefinch: I thought of that and went and checked it in another terminal here :)
<natefinch> attach     uploads a resource to the controller/charm store  :/
<natefinch> rick_h__: me too ;)
<natefinch> heh... actually juju help commands | grep resource shows some ambiguous usage of the word resources
<natefinch> destroy-controller     terminate all machines and other associated resources for the juju controller
<rick_h__> natefinch: yea, should clean that up
 * natefinch adds a line to an export_test.go and feels shame
<tych0> frobware: pong
<frobware> tych0: I was looking at juju/worker/provisioner/lxd-broker.go and noticed this:
<frobware> func (broker *lxdBroker) StartInstance(args environs.StartInstanceParams) (*environs.StartInstanceResult, error) {
<frobware> 	if args.InstanceConfig.HasNetworks() {
<frobware> 		return nil, errors.New("starting lxd containers with networks is not supported yet")
<frobware> 	}
<frobware> tych0: how significant is that?
<tych0> frobware: i don't know :)
<tych0> frobware: what is a "network" in this context?
<frobware> tych0: I think this is a red-herring as I know you can have multiple networks in a profile - this looks juju-only to me right now
<frobware> tych0: where networks means multiple interfaces
<frobware> tych0: don't worry about this ... since my original ping I don't think is a genuine problem on the lxd side of things.
<tych0> frobware: oh, i see
<tych0> you mean like juju isn't rendering things to LXD?
<frobware> tych0: yes, but a known issue. that's up next. I was just walking through the details and saw that comment.
<tych0> ok, cool
<tych0> if you have any questions about how to do the actual translation, let me know
<mup> Bug #1542206 opened: space discovery still in progress <ci> <juju-core:Triaged> <juju-core maas-spaces2:Fix Released> <https://launchpad.net/bugs/1542206>
<mup> Bug #1542206 changed: space discovery still in progress <ci> <juju-core:Triaged> <juju-core maas-spaces2:Fix Released> <https://launchpad.net/bugs/1542206>
<mup> Bug #1548813 changed: maas-spaces2 bootstrap failure unrecognised signature  <ci> <maas-provider> <juju-core:Invalid> <juju-core maas-spaces2:Fix Released by mfoord> <https://launchpad.net/bugs/1548813>
<mup> Bug #1554584 changed: TestAddLinkLayerDevicesInvalidParentName in maas-spaces2 fails on windows <ci> <test-failure> <windows> <juju-core:Invalid> <juju-core maas-spaces2:Fix Released by dimitern> <https://launchpad.net/bugs/1554584>
<mup> Bug #1556116 changed: TestDeployBundleMachinesUnitsPlacement mismatch <ci> <gccgo> <ppc64el> <regression> <test-failure> <unit-tests> <juju-core:Invalid> <juju-core maas-spaces2:Fix Released by frobware> <https://launchpad.net/bugs/1556116>
<anastasiamac_> cmars: master is blocked waiting for a fix to 1558087. I saw ur comment on the bug saying that u r waiting for master to b unblocked..
<anastasiamac_> cmars: could u plz try Fixes-bug blah merge message to land ur fix?
<menn0> cmars: great that rog has seen the same bakery issue
<anastasiamac_> cmars: actually - never mind \o/
<davecheney> ping, https://github.com/juju/juju/pull/4749 needs a second review
<davecheney> thanks
<perrito666> davecheney: ship it
#juju-dev 2016-03-17
<ericsnow> anyone know if we are going to drop the legacy HTTP endpoints for 2.0?
<ericsnow> see https://github.com/juju/juju/blob/master/apiserver/apiserver.go#L432
<davecheney> perrito666: thanks, fingers crossed this time
<perrito666> axw: quick question about cloud credentials schema, have a sec?
<axw> perrito666: in 1:1, sorry
<perrito666> no worries
<axw> perrito666: had to do school drop off, I'm back now if you're still around
<perrito666> axw: solved it :)
<perrito666> tx anyway
<axw> perrito666: glad to be of service ;)
<perrito666> autoload-credentials was ignoring my ~/.novarc but it works with my env variables, will file a bug later
<wallyworld> perrito666: i did test it with ~/.novarc
<wallyworld> i wonder what went wrong
<perrito666> wallyworld: perhaps my novarc lacks the expected format?
<wallyworld> you sure the contents are correct?
<wallyworld> it looks for the same things as env vars
<perrito666> it's a bash file full of exports
<perrito666> odd
<wallyworld> may be a problem, not sure
<wallyworld> will have to retest
<wallyworld> axw: i'm about to propose the interactive add-credentials with the CredentialSchema ordering fix. but i'm wondering if the password entry should echo * as characters are typed or pasted
<perrito666> wallyworld: are you sure your env was clean when you tested?
<wallyworld> pretty sure
<wallyworld> i'll have a look in a bit
 * perrito666 finally gets a k3 bootstrap
<perrito666> with magic auth_url auto discovery :)
<wallyworld> yay
<axw> wallyworld: no, I don't think so. knowing the length of the password isn't great
<axw> wallyworld: you're using readpass, right?
<wallyworld> axw: yep
<wallyworld> is just a personal preference
<wallyworld> i want feedback that i'm typing something
<axw> wallyworld: it's nice for visual feedback, but it is a security flaw
<wallyworld> a small one
<wallyworld> hard to count * :-)
<axw> wallyworld: not if it's a short password
<wallyworld> and who uses short passwords?
<wallyworld> surely everyone uses a pw manager
<axw> wallyworld: lol :)
<wallyworld> if not they deserve what they get
<wallyworld> (half joking)
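The trade-off wallyworld and axw are weighing — echoing '*' per character gives feedback but reveals the password's length — can be made concrete with a hypothetical helper (not juju's readpass code):

```go
package main

import (
	"fmt"
	"strings"
)

// mask produces the visual feedback under discussion: one '*' per typed
// character. The feedback itself leaks the password length, which is the
// security concern axw raises for short passwords.
func mask(password string) string {
	return strings.Repeat("*", len(password))
}

func main() {
	fmt.Println(mask("hunter2")) // seven stars: the length is visible
}
```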
<mup> Bug #1558333 opened: juju's logging the literal "$cmd" instead of value of $cmd <juju-log> <logging> <juju-core:New> <https://launchpad.net/bugs/1558333>
<wallyworld> perrito666: the issue with .novarc is that juju doesn't strip the export bit. my tests didn't have that
<perrito666> so the export bit is not valid? it sure seems so
<wallyworld> the export bit should be supported by juju
<anastasiamac_> axw: since u r OCR, could u PTAL http://reviews.vapour.ws/r/4193/
<wallyworld> ie juju should strip it out
<anastasiamac_> axw: it's a critical for 1.25
<axw> anastasiamac_: sure, looking
<anastasiamac_> axw: thanks :D
<perrito666> wallyworld: can't you source the file into an env and read the env? sounds like you could save a lot of future pains with that
<axw> anastasiamac_: were there any changes to the changes required?
<wallyworld> perrito666: sourcing the file is a shellism
<axw> anastasiamac_: (i.e. did you just cherry pick without having to modify?)
<anastasiamac_> axw: ? no cherry-pick was not possible
<axw> anastasiamac_: I see, you said re-work
<axw> nps
<anastasiamac_> axw: the commits have other stuff in them like lxd for eg..
<wallyworld> perrito666: i was hoping to avoid execing shell command
<axw> right, of course
<anastasiamac_> axw: however, the related code is exactly as on master
<axw> anastasiamac_: okey dokey
<anastasiamac_> axw: except for renaming model stuff :P
<perrito666> wallyworld: I would expect there to be a way to wrap that into go by running a child shell and doing whatever that env supports as source (but then again, I am guessing a lot)
<wallyworld> i didn't see an obvious way, maybe there is
<wallyworld> it's easy enough to tweak what's there already to work
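The `.novarc` stripping wallyworld describes — accepting both `FOO=bar` and `export FOO=bar` without shelling out to `source` — amounts to a small line parser. A hypothetical sketch of that approach; `parseRCLine` is invented for illustration and is not the code juju actually uses:

```go
package main

import (
	"fmt"
	"strings"
)

// parseRCLine extracts KEY=VALUE from a single .novarc-style line,
// stripping an optional leading "export " and surrounding quotes, so
// no shell needs to be exec'd just to read the file.
func parseRCLine(line string) (key, value string, ok bool) {
	line = strings.TrimSpace(line)
	line = strings.TrimPrefix(line, "export ")
	if line == "" || strings.HasPrefix(line, "#") {
		return "", "", false
	}
	parts := strings.SplitN(line, "=", 2)
	if len(parts) != 2 {
		return "", "", false
	}
	return parts[0], strings.Trim(parts[1], `"'`), true
}

func main() {
	k, v, _ := parseRCLine(`export OS_USERNAME="admin"`)
	fmt.Println(k, v)
}
```

Anything a real rc file does beyond plain assignments (command substitution, variable expansion) is exactly the "shellism" wallyworld mentions and stays out of scope.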
<axw> anastasiamac_: LGTM, thanks
<anastasiamac_> axw: \o/
<perrito666> wallyworld: axw credentials schema fields cannot be made optionals?
<wallyworld> perrito666: they can as of recently
<perrito666> wallyworld: is that on master?
<wallyworld> Optional: true
<wallyworld> yes
<wallyworld> perrito666: i did it just for you
<wallyworld> to support keystone 3
<perrito666> you are wonderful, pint me to plz?
<wallyworld> EPARSEERROR
<wallyworld> if you grab master tip, you can use Optional: true in credential schema
<perrito666> grrrreat
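The `Optional: true` behaviour wallyworld mentions — a credential schema field that validation no longer insists on — can be illustrated with a cut-down validator. The `Attr` type and `validate` function here are invented stand-ins, not juju's actual credential-schema types:

```go
package main

import "fmt"

// Attr is a minimal stand-in for a credential schema attribute.
type Attr struct {
	Name     string
	Optional bool
}

// validate checks that every non-optional attribute is present.
func validate(schema []Attr, values map[string]string) error {
	for _, a := range schema {
		if _, present := values[a.Name]; !present && !a.Optional {
			return fmt.Errorf("missing required attribute %q", a.Name)
		}
	}
	return nil
}

func main() {
	schema := []Attr{
		{Name: "username"},
		{Name: "domain-name", Optional: true}, // e.g. only needed for keystone 3
	}
	// Passes even though domain-name is absent.
	fmt.Println(validate(schema, map[string]string{"username": "bob"}))
}
```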
<wallyworld> axw: fyi, ignore the charm-minvers feature branch pr, that's s merge into master i'm landing now
<axw> wallyworld: ok, thanks
<perrito666> k I have it working and almost fixed the joyous openstack live tests to reflect that, my brain is out of order for the day, see you all tomorrow
<perrito666> wallyworld: axw anastasiamac_ I would love  a review of https://github.com/go-goose/goose/pull/19 during the night
<perrito666> cheers
<wallyworld> our night or yours?
<axw> perrito666: will see how I go, got lots to review today
<wallyworld> i can look
<anastasiamac_> perrito666: nite \o/
<perrito666> wallyworld: yours I need to get all this merged tomorrow
<perrito666> no sorry mine
<perrito666> my brain is dead
<wallyworld> perrito666: i was being facetious :-)
<wallyworld> i knew what you meant :-)
<natefinch> axw: can I get one more quick review on the stringmap thing?  realized I was missing handling an error case: https://github.com/juju/cmd/pull/32
<axw> natefinch: looking
<natefinch> axw: thanks
<axw> natefinch: done
<natefinch> axw: thanks!
<natefinch> and now to figure out how to finagle these full stack tests to succeed when the server doesn't actually support this endpoint...
<davecheney>   /win12
<menn0> axw: this is pretty horrible but necessary: https://github.com/juju/juju/pull/4766 PTAL
 * axw braces
<natefinch> menn0: is there a bug filed for that on go-yaml?
<menn0> natefinch: not yet, but I've just created a card for myself on our board to do that
<anastasiamac_> menn0: is it worthwhile back porting to 1.25? I think filing the bug would be awesome too \o/
<natefinch> anastasiamac_: are we supporting go 1.6 for 1.25?
<mup> Bug # changed: 1499781, 1504578, 1526926, 1529126
<menn0> natefinch: it's an involved bug report to write up so I'll do it next week
<menn0> anastasiamac_: the code that was relying on private embedded types only exists in Juju 2.0
<menn0> anastasiamac_: we'd know about it if we were relying on this elsewhere
<anastasiamac_> menn0: then no need for 1.25 :D
 * menn0 has lost hours on this
<axw> menn0: what's the deal with all the trailing underscores?
<anastasiamac_> menn0: but for reference, I can see CI tests that run against 1.25 with go 1.6
<natefinch> axw: they give you a place to rest your mouse when your arm gets tired
<axw> what?
<menn0> axw: Tim came up with that as a way to avoid collisions between field names and the method names he wanted to use
<anastasiamac_> natefinch: such small mouse tho :D
<axw> menn0: ah, right, ok
<menn0> axw: i've continued with the trend
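The trailing-underscore convention menn0 describes exists because a Go struct field and a method on the same type cannot share a name. A minimal sketch of the pattern (the `doc` type here is invented for illustration):

```go
package main

import "fmt"

// doc shows the trailing-underscore trick: naming the field name_
// leaves the identifier "name" free for the accessor method. Declaring
// both a field and a method called "name" would be a compile error.
type doc struct {
	name_ string
}

func (d *doc) name() string { return d.name_ }

func main() {
	d := &doc{name_: "mydoc"}
	fmt.Println(d.name())
}
```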
<axw> menn0: reviewed
<menn0> axw: thanks
<menn0> axw: I clearly didn't re-run the tests after removing that debugging Dump method :)
<axw> :)
<jam> axw: davecheney: I pushed a small update because of a case I found in Juju (we were calling AddCleanup during SetUpSuite()), can you give it one more quick look?
<axw> jam: LGTM. there's a bug somewhere (I think rogpeppe filed it) suggesting that we should just have AddCleanup, and DTRT whether we're in SetUpSuite or in/after SetUpTest
<axw> jam: probably need to change tests to do that tho
<jam> axw: I could do that here, with just checking if s.testSuite is nil
<jam> axw: does that sound sane?
<axw> jam: yeah, I'm just not sure if anything is relying on current behaviour
<jam> axw: if it is, its broken
<jam> assuming AddCleanup during a Suite level operation will be cleaned up after the first test run...
<jam> we could keep AddSuiteCleanup as an explicit thing
<jam> but calling it in a Test context and assuming it will last the rest of the Suite is also broken behavior
<jam> One test should not set global state for another test to expect
<jam> axw: then again, if the Juju test suite *does* have those bugs, I don't know how many I can fix :)
<axw> jam: indeed :)
<axw> jam: I'd be happy to drop AddSuiteCleanup and have AddCleanup add suite-level things when called in SetUpSuite, and test-level things otherwise
<axw> it's just a sed command to rename AddSuiteCleanup to AddCleanup after that
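The behaviour axw proposes — a single `AddCleanup` that registers a suite-level cleanup when called during `SetUpSuite` and a test-level one otherwise — can be sketched with a small routing type. Hypothetical only; the real juju/testing cleanup suite differs in detail:

```go
package main

import "fmt"

// cleanupSuite routes AddCleanup based on where it was called from:
// during suite setup the cleanup survives until suite teardown,
// otherwise it runs at the end of the current test.
type cleanupSuite struct {
	inSuiteSetup  bool
	suiteCleanups []func()
	testCleanups  []func()
}

func (s *cleanupSuite) AddCleanup(f func()) {
	if s.inSuiteSetup {
		s.suiteCleanups = append(s.suiteCleanups, f)
	} else {
		s.testCleanups = append(s.testCleanups, f)
	}
}

// TearDownTest runs test-level cleanups in reverse registration order,
// leaving suite-level cleanups untouched.
func (s *cleanupSuite) TearDownTest() {
	for i := len(s.testCleanups) - 1; i >= 0; i-- {
		s.testCleanups[i]()
	}
	s.testCleanups = nil
}

func main() {
	s := &cleanupSuite{inSuiteSetup: true}
	s.AddCleanup(func() { fmt.Println("suite cleanup") })
	s.inSuiteSetup = false
	s.AddCleanup(func() { fmt.Println("test cleanup") })
	s.TearDownTest()
	fmt.Println(len(s.suiteCleanups), "suite cleanup(s) still registered")
}
```

With this shape, callers never have to ask "can I call it now?" — which is jam's point below about making the obvious thing correct.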
<axw> jam: if you want to defer removing/renaming, that's fine though. we can do that later
<jam> axw: trying to think how to write the tests for it.
<jam> I think making it safe and easy to use is best
<jam> so users don't have to think "Can I call it now"?
<jam> (hierarchy of good API: "make the obvious thing correct" vs "make the wrong thing hard to do")
<jam> axw: it also leads to helpers like PatchValue where you can't control whether it is Suite level or Test level.
<axw> jam: just read your latest email; the bug I was thinking of was actually about PatchValue. same sort of deal though
<jam> axw: yeah, the fact that BaseSuite itself had this bug
<jam> means we can *easily* have lots of tests that are actually doing Outbound tests
<axw> :\
<jam> and we didn't notice because we thought we were protected by BaseSuite
<axw> wallyworld: we're still using CompatSalt in cloudconfig/instancecfg, and I don't think there's any reason to. The hash is a temporary, derived password - we shouldn't need to use a fixed salt
<wallyworld> axw: yeah, i figured that change was a separate pr
<axw> wallyworld: ok
<jam> axw: can you look at the pull?
<jam> http://github.com/juju/testing/pull/92
<axw> jam: looking
<jam> thx
<jam> or davecheney^^
<jam> axw: interestingly "home_test" was a case where we were setting up another test suite, and calling SetUpTest but not SetUpSuite
<axw> jam: LGTM, thanks. there's a couple of odd SetUpTest calls in home_test, but just style issues really
<axw> and preexisting at that
<jam> home_test is trying to test the test itself by creating another test object.
<jam> but it is calling SetUpTest without ever calling SetUpSuite
<axw> jam: ah, I see.
<jam> and we're now at... a whole lot of failures in Juju proper...
<axw> :/
<jam> FakeHomeSuite is doing something bad.
<axw> jam: because they were swept under the rug?
<jam> lots of tests use it
<axw> lots of outbound connections? :)
<axw> oh home suite
<jam> investigating, because I don't see anything bad yet
<jam> FakeJujuXDGDataHomeSuite calls SetUpTest during TearDownSuite ?
<jam> axw: testing/environ.go line 107
<jam> why is TearDownSuite calling Suite.SetUpTest() ?
<jam> I think that's just a copy&paste typo
<axw> jam: heh, yep, I'd say so
<axw> jam: since 2014 :)
<jam> axw: love it when our base infrastructure has some weird bugs in it.
<jam> axw: SetUpSuite is *also* calling SetUpTest instead of SetUpSuite
<axw> lol
<jam> and JujuOSSuite doesn't implement either SetUpSuite or TearDownSuite, so we don't need to write our own anyway
<jam> axw: or is it better to have one and just a line that says "this other one doesn't need to be called cause it doesn't exist" ?
<axw> jam: I've not found one way or the other to be much better
<axw> leaving it out is probably OK
<axw> jam: I'd love to write a static analyser one of these days to check that they're all wired up correctly
<jam> axw: yeah, the main problem is that once someone does introduce a SetUpSuite then we should call it, but neither way is going to catch that on its own
 * axw nods
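The static check axw would love to write — catching, say, a `TearDownSuite` body that calls `SetUpTest`, the exact copy&paste bug found in testing/environ.go above — is feasible with the stdlib `go/parser`. A toy sketch under the assumption that matching on the selector name is enough; a production version would use the go/analysis framework and type information:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// findMismatchedCalls parses src and reports any TearDownSuite method
// whose body contains a call through a SetUpTest selector.
func findMismatchedCalls(src string) []string {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "suite.go", src, 0)
	if err != nil {
		return nil
	}
	var bad []string
	for _, decl := range f.Decls {
		fn, ok := decl.(*ast.FuncDecl)
		if !ok || fn.Name.Name != "TearDownSuite" || fn.Body == nil {
			continue
		}
		ast.Inspect(fn.Body, func(n ast.Node) bool {
			if sel, ok := n.(*ast.SelectorExpr); ok && sel.Sel.Name == "SetUpTest" {
				bad = append(bad, "TearDownSuite calls SetUpTest")
			}
			return true
		})
	}
	return bad
}

func main() {
	src := `package t
type S struct{ base B }
func (s *S) TearDownSuite() { s.base.SetUpTest() }`
	fmt.Println(findMismatchedCalls(src))
}
```

As jam notes, neither "write an empty SetUpSuite" nor "leave it out" catches a base type later growing one — but a check like this can.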
<jam> so it looks like we do have at least 1 failure in Provisioning suite
<jam> where it is actually reading cloud-images somehow
<jam> though it is failing in the opposite way I was expecting
<jam> maybe FakeXDGHome was accidentally making it harder to get outside access by calling SetUpTest all the time?
<wallyworld> axw: i'm off to soccer soonish, i've reposnded to some of the comments on the add credentials review. i'll finish the interactive prompt for replace when i get back
<axw> wallyworld: thanks, I'm just responding now
<axw> enjoy
<wallyworld> leaving in 20 minutes or so
<wallyworld> will do
<jam> axw: ugh. we have test suites that want to call PatchValue before they call their Base type's SetUpTest because SetUp does work that they want to patch out
<axw> jam: ugh indeed
<axw> jam: example?
<jam> provider/lxd/
<jam> testing_test.go
<jam> there is a BaseSuiteUnpatched and a BaseSuite
<jam> and BaseSuite embeds BaseSuiteUnpatched which embeds IsolationSuite
<jam> so BaseSuite patches an object, then calls BaseSuiteUnpatched.SetUp
<jam> line 277
<jam> axw: ^^
<jam> provider/joyent/joyent_test.go calls envtesting.PatchAttemptStrategies() before calling base .SetUpSuite()
<axw> jam: yup. I think in the lxd case it could be moved to BaseSuite.SetUpSuite
<axw> jam: that one's a bit different, it's not involving the suite
<jam> axw: provider/joyent/local_test.go line 155
<jam> calls PatchValue for 2 things before calling providerSuite.SetUpSuite()
<axw> jam: AFAICT, it doesn't need to
<jam> that also looks like it was honestly broken because the lifetime of that PatchValue should be wrong
 * axw digs
<axw> true
<jam> my "go test ./..." is 5000 lines with this change to AddCleanup...
<axw> jam: yeah, those 2 patches can come after SetUpSuite
<axw> jam: 5000 lines? huh?
<jam> panic tends to create a fair bit of traceback
<jam> and SetUp failures fail on all the associated tests
<jam> so it is bigger than probably it seems
<jam> but its a fair bit to dig through
<jam> axw: provider/joyent/local_test.go adds a *AddSuiteCleanup* that patches the attempt strategies to ShortAttempt
<jam> I don't see it patch them to something longer first.
<jam> my rabbit hole has gotten too deep, I think
<axw> :)
<axw> jam: probably just cruft, but I don't know for sure
<jam> axw: I *did* finally hit tests that are going to cloud-images
<jam> and failing now because they can't
<jam> localLiveSuite.TestStartStop for provider/ec2
<jam> "signature made by unknown entity"
<axw> jam: yep, SetUpSuite calls PatchOfficialDataSources which calls PatchValue
 * axw has to go feed children
<jam> axw: have a good evening
<jam> need to get out a swear jar...
 * frobware wakes up to find maas-spaces2 has merged. \o/
<frobware> dimitern: about to push your branch as upstream/maas-spaces-multi-nic-containers
<dimitern> frobware, great! and I've seen maas-spaces2 get merged! cheers
<frobware> dimitern: if we don't do lxc then CI tests will fail. How we avoid doing this for the moment is not clear (to me at least).
<dimitern> frobware, well lxc-broker is still there, so ..
<frobware> dimitern: we need to bung this branch (@ 07693b816c207bc6e3fa9d7dd3f76784a695908e) to the OS folks for testing...
<frobware> dimitern, voidspace: upstream/maas-spaces-multi-nic-containers is now soliciting commits... :)
<frobware> mgz, sinzui: please can we add upstream/maas-spaces-multi-nic-containers to the CI jobs (assuming something explicitly needs to be setup). thanks!
<voidspace> frobware: cool
<dimitern> frobware, I have some commits in mind to add :)
<dimitern> voidspace, frobware, we should also delete maas-spaces2 from upstream soon
<mgz> frobware: pushing it into the juju namespace is all that's needed
<frobware> mgz: thanks
<dimitern> frobware, about that fancycheck - it was temporary, now it can be dropped
<dimitern> frobware, and we should do that before the CI run right?
<frobware> dimitern: submit a PR, we can now review, commit in the usual way. ;)
<dimitern> frobware, on it
<voidspace> dimitern: frobware: is maas-spaces2 landing on master today then?
<dimitern> voidspace, it already landed yesterday
<voidspace> dimitern: hah, cool
<TheMue> morning sapphires o/ *yawn*
<dimitern> TheMue, hey o/ morning
<voidspace> TheMue: 0/
<dimitern> voidspace, frobware, http://reviews.vapour.ws/r/4210/
<natefinch>  team meeting anyone?
 * TheMue currently not *lol*
<voidspace> dimitern: frobware: I'm down to 11 failures on my branch (from 33 yesterday) - several of them about to be fixed by a change to gomaasapi
<voidspace> (gomaasapi test server)
<perrito666> morning all
<perrito666> natefinch: is there anyone in the tm?
<natefinch> perrito666: a few people
<natefinch> perrito666: me, william, michael, and dimiter
<frobware> dimitern, voidspace: sorry, was otp. Want to do a quick standup?
<voidspace> frobware: in team meeting
<fwereade> jam, I would love to help you with IsolationSuite horrors, but... probably not today
<voidspace> dimitern: want to drop back in for standup?
<dimitern> voidspace, sure
<dimitern> voidspace, frobware, I'm the only one there though
<voidspace> dimitern: we stayed in the team meeting room...
<voidspace> dimitern: as we were there already
<dimitern> ah
<voidspace> hence "drop back in"
<TheMue> Btw, Happy St Patrick's Day
<voidspace> down to 7 failures and deleted more code
<voidspace> some unused production code, some unused test helpers
<jam> thanks fwereade. I'm going to try and get some more of my LXD stuff done, but I did make a bit more progress on the Isolation stuff. It's actually closer than I was worried it would be.
<fwereade> before I go spelunking... can anyone suggest a foolproof way of inducing a StartSync on... whatever *state.State in the background is *actually* driving events for a hosted model running under a jujud test?
<fwereade> jam, dimitern, axw, wallyworld perhaps? ^^
<icey> juju's PPA needs to be updated, currently throwing warnings on Xenial due to https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1558331
<mup> Bug #1558331: After upgrading to apt 1.2.7 in Xenial, PPAs and most other third-party repositories become unusable with "The repository is insufficiently signed by key  (weak digest)" <xenial> <apt (Ubuntu):Confirmed> <https://launchpad.net/bugs/1558331>
<wallyworld> fwereade: isn't there a StartSync() method or something to trigger a sync?
<fwereade> wallyworld, yeah... but you need to call it on the right *State
<wallyworld> that is correct, it can be confusing
<fwereade> wallyworld, we have access to .BackingState
<fwereade> wallyworld, but I have a horrible feeling that the state that's backing the apiserver we talk to is hidden away in a statepool somewhere out of reach
<wallyworld> i haven't looked at that stuff in ages. i do recall it being somewhat funky. i have no good answer for you
<fwereade> wallyworld, (.BackingState is the one for the controller model, but it doesn't cause hosted models to get events)
<fwereade> wallyworld, no worries, looks like I need to go hunting
<wallyworld> yeah, sorry
<alexisb> frobware, ping
<natefinch> I am sad that juju controllers is not an alias for juju show-controllers
<rick_h__> natefinch: list-controllers?
<natefinch> ty
<natefinch> it's show-controllers
<natefinch> oh, I guess it's both
<rick_h__> show-controller is against one controller?
<rick_h__> it gives metadata for one, the other is the listing
<natefinch> oh that is confusing
<rick_h__> show-X is "give me details of one" and list- "give me a table of them"
<frobware> alexisb: pong
<alexisb> heya frobware see private chat
<natefinch> rick_h__: but then why is there a show-controllers (note plural)?
<rick_h__> natefinch: not sure on that one
<alexisb> natefinch, it may be that the alias still exists on that one
<alexisb> I owuld have to go look
<natefinch> rick_h__: looks like you can pass in multiple controller names and it'll show you details for each
<alexisb> at first we were doing both the single and plural case for everything
<natefinch> if you pass in 0 controller names, it shows you details for just the current
<natefinch> show vs. list is very confusing IMO.
<natefinch> but it looks like we have a lot of each
<rick_h__> yes, it's one of the base principles of it. Very common across all nouns can be shown or listed
<dimitern> frobware, voidspace, http://reviews.vapour.ws/r/4212/ fixes an issue found with multi-nic support
<natefinch> I swear I am never ever ever going to get used to bootstrap having the name first
<rick_h__> natefinch: don't get used to it, branch changing that lands soon hopefully
<natefinch> rick_h__: oh thank goodness
<rick_h__> natefinch: oh, sorry you mean the cli
<rick_h__> that's not changing, though you meant the name of the first model
<rick_h__> lol, in which case sorry
<natefinch> aww!
<natefinch> how come when I juju bootstrap mylocal lxd ... it creates a juju controller named local.mylocal?
<rick_h__> the local is for the multi-user situation. You might add controllers from other users that already have that name
<natefinch> rick_h__: but if I don't... then you've just munged my namespace for no reason
<rick_h__> natefinch: you should be able to leave it off and we can visit ways of streamlining
<rick_h__> but had to design for the big case and work backwards
<dimitern> voidspace, frobware, we should get it in to have a chance of a blessed ci run
<dimitern> ^^
<frobware> dimitern: looking now
<dimitern> frobware, cheers
<frobware> dimitern: reviewed
<dimitern> frobware, thanks - updated, pushed, and set to merge
<frobware> dimitern: did you see the ci results of the first one?
 * natefinch needs to make an auto-dependencies.tsv merge tool
<dimitern> frobware, yeah - as expected pretty much - the fancycheck
<dimitern> frobware, and most of the other issues should be resolved by the last fix
<dimitern> frobware, btw I managed to repro the empty mac address at setInstanceInfo for containers, looking into it
<frobware> dimitern: I was in 2 minds about whether to push the branch this morning; I didn't think the CI run would happen that quickly given all the other feature branches that are being tested.
<frobware> dimitern: oooh. And, btw, the reason the attach <pid> failed is I didn't start the remote end as root.
<dimitern> frobware, they made some changes recently - there was a mail about it
<dimitern> frobware, ah! :)
<frobware> dimitern: what was the macaddr issue?
<dimitern> frobware, I have a suspicion why it might be happening
<frobware> dimitern: ok, going to ignore that and look at profiles again
<dimitern> frobware, the MACAddress coming from PrepareContainerInterfaceInfo gets lost when trying to set the container devices in state
<dimitern> frobware, +1
<mup> Bug #1558608 opened: maas-spaces-multi-nic-containers cannot bootstrap <block-ci-testing> <juju-core:Incomplete> <juju-core maas-spaces-multi-nic-containers:Triaged> <https://launchpad.net/bugs/1558608>
<mup> Bug #1558608 changed: maas-spaces-multi-nic-containers cannot bootstrap <block-ci-testing> <juju-core:Incomplete> <juju-core maas-spaces-multi-nic-containers:Triaged> <https://launchpad.net/bugs/1558608>
<mup> Bug #1558608 opened: maas-spaces-multi-nic-containers cannot bootstrap <block-ci-testing> <juju-core:Incomplete> <juju-core maas-spaces-multi-nic-containers:In Progress by dimitern> <https://launchpad.net/bugs/1558608>
<mup> Bug #1558612 opened: creating hosted model config: maas-agent-name is already set; this should not be set by hand <bootstrap> <ci> <maas-provider> <test-failure> <juju-core:Incomplete> <juju-core admin-controller-model:Triaged> <https://launchpad.net/bugs/1558612>
<mup> Bug #1558608 changed: maas-spaces-multi-nic-containers cannot bootstrap <block-ci-testing> <juju-core:Invalid> <juju-core maas-spaces-multi-nic-containers:Fix Released by dimitern> <https://launchpad.net/bugs/1558608>
<mup> Bug #1558087 changed: TestInvalidFileFormat fails on windows because of / <blocker> <ci> <regression> <test-failure> <windows> <juju-core:Fix Released by cmars> <juju-core model-acls:Fix Released by cmars> <https://launchpad.net/bugs/1558087>
<dimitern> voidspace, btw I've found out the main reason why discoverspaces worker seems to make a lot of calls
<dimitern> voidspace, it does call CreateSpaces and AddSubnets once for each space / subnet
<dimitern> voidspace, I'm testing a patch now which takes advantage of the bulk nature of both api methods, so calls each of them once per handleSubnets() call
<dimitern> should make the discovery much quicker hopefully, and resolve some issues where subnets might be missing when trying to use them (e.g. for machine devices' addresses)
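The change dimitern describes — collecting all the spaces discovered in one `handleSubnets()` pass and making a single bulk `CreateSpaces` call instead of one call per space — can be sketched with a call-counting fake. The `spaceAPI` type is invented for the example; the real discoverspaces worker and its API facade differ:

```go
package main

import "fmt"

// spaceAPI counts round trips so the sketch can show the difference
// between per-item and bulk invocation.
type spaceAPI struct{ calls int }

// CreateSpaces is bulk by design: it accepts any number of names.
func (a *spaceAPI) CreateSpaces(names []string) { a.calls++ }

// perItem mirrors the old behaviour: one API round trip per space.
func perItem(api *spaceAPI, names []string) {
	for _, n := range names {
		api.CreateSpaces([]string{n})
	}
}

// bulk mirrors the fix: a single round trip for all spaces.
func bulk(api *spaceAPI, names []string) {
	api.CreateSpaces(names)
}

func main() {
	names := []string{"default", "dmz", "storage"}
	a, b := &spaceAPI{}, &spaceAPI{}
	perItem(a, names)
	bulk(b, names)
	fmt.Println(a.calls, "calls vs", b.calls, "call")
}
```

Because the server-side operation is idempotent, the bulk path stays safe even when discovery has partially run before, as voidspace asks below.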
<voidspace> frobware: that test passes!
<voidspace> dimitern: I didn't know it seemed to make a lot of calls
<voidspace> dimitern: but cool
<voidspace> dimitern: I'm down to 1 failing test on drop-maas-1.8
<dimitern> voidspace, great!
<frobware> dimitern: is that "resolve some issues where subnets might be missing" at all related to the mac address we were looking at earlier? (surprised if so, but...)
<dimitern> frobware, not clear yet
<dimitern> frobware, I did discover an issue with that fix though
<frobware> dimitern: from the "is not alive" error?
<frobware> dimitern: might want to make sure that bulk method is in 2.0 before getting too far
<dimitern> frobware, because the machiner starts before discoverspaces, it will not set the subnetID for yet-undiscovered subnets
<voidspace> dimitern: will the bulk calls take into account that discovery might have already run and some of the spaces / subnets might already exist? (but some might not)
<dimitern> frobware, it's already bulk
<dimitern> voidspace, it does I think - at least it works idempotently
<voidspace> frobware: the bulk api calls are our code
<frobware> voidspace: ok
<voidspace> dimitern: there were some tests for that
<mup> Bug #1558657 opened: many workers still don't use clocks <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1558657>
<dimitern> voidspace, I didn't have to change the tests after changing CreateSpaces and AddSubnets calls to be done once in bulk
<mup> Bug #1558668 opened: api_undertaker_test is not a feature test <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1558668>
<dimitern> they still passed ok
<dimitern> and a quick live test showed the difference: in a few seconds the discovery was done (a lot less log spam around "caching subnets.." and also 2 vs 3 messages at bootstrap saying spaces discovery still in progress)
<dimitern> that wasn't enough apparently - the machiner manages to call SetObservedNetworkConfig a couple of seconds before spaces discovery completes :/
<dimitern> so I'm thinking that we should block MA logins until discovery completes... testing this live now
<fwereade> I'm pretty sure we shouldn't be blocking MA logins for anything, a huge amount of our functionality depends on it
<fwereade> dimitern, ^^
<fwereade> dimitern, I think this is another reason to set space-discovery-completed in state persistently
<fwereade> dimitern, and expose getter/watcher for whatever things need to wait on it?
<dimitern> fwereade, I know it shouldn't, I'm just trying to see if it might help as an experiment
<voidspace> dimitern: hah, the repeated building of the subnet cache is still a bug in the cache code though right?
<dimitern> fwereade, I agree it needs to be recorded in state once done
<fwereade> dimitern, fair enough
<dimitern> voidspace, it is, but the cache was done with the assumption it will be used for bulk-style calls
<voidspace> dimitern: so it's cache once per call by design
<voidspace> dimitern: fwereade: yeah, I can't think of another reliable way other than storing in state when discovery is completed
<voidspace> not an alternative that meets all the use cases we have (HA, multiple models etc)
<dimitern> voidspace, fwereade, and we have another case already - the peergrouper needs to know the spaces discovery is done before deciding the common space all controllers are in
<voidspace> dimitern: right
<dimitern> voidspace, why did we decide not to block until the discovery has started? in case the worker does not start at all?
<voidspace> dimitern: because it blocks access for all models when you start discovery for one model
<mgz> dimitern: should container networking on master work with kvm on maas?
<fwereade> dimitern, voidspace: I think we probably should block, it's just that you can't do it safely without a persistent flag
<dimitern> mgz, not really
<voidspace> fwereade: dimitern: agreed, we need to do it right
<mup> Bug #1558678 opened: manual bootstrap: PrepareForCreateEnvironment not implemented <bootstrap> <ci> <manual-provider> <regression> <juju-core:Incomplete> <juju-core admin-controller-model:Triaged> <https://launchpad.net/bugs/1558678>
<dimitern> mgz, is that the addressable containers job?
<voidspace> fwereade: dimitern: we disabled it because it was broken in several ways
<mgz> dimitern: yeah
<voidspace> dimitern: it also didn't block access *until* discovery started - which meant bootstrap could complete before the block was in place
<fwereade> voidspace, dimitern: see RB4110 for how the discoverspaces worker will shortly be invoked
<dimitern> mgz, I wouldn't care too much about it - addressable containers are dying, just haven't died yet
<dimitern> fwereade, will check that
<dimitern> fwereade, what I'm currently seeing is that the machiner is consistently starting 2 seconds before discoverspaces
<fwereade> dimitern, I would expect the machiner to start beforehand usually, yeah, that's top-level whereas discoverspaces is only triggered once we've got a modelworkermanager looking for models to manage
<dimitern> fwereade, which messes up setting the observed network config - I guess that must be done in a periodic worker instead of the machiner
<fwereade> dimitern, I can well believe it wants to be split out of the machiner, yeah
<mgz> dimitern: well, the problem is we don't have a good working spaces/new networking functional test
<dimitern> as it currently does not retry / update the observed config later
<fwereade> dimitern, yeah, and running a worker with multiple watchers is generally painful, even with catacomb
<dimitern> mgz, how about the bundle-based one?
<fwereade> dimitern, far better to separate the responsibilities completely
<dimitern> fwereade, indeed, I'll look into that
<mgz> dimitern: point me at it!
<dimitern> mgz, let me have a look
<dimitern> mgz, this one for example http://reports.vapour.ws/releases/3771/job/maas-1_9-OS-deployer/attempt/496
<mgz> right, we do exercise a bunch of stuff with that, but not kvm (or fancy subnet stuff)
<perrito666> dimitern: could you $$merge$$ -> https://github.com/go-goose/goose/pull/19 and tell me who else is in the merge team so I dont pick on you all the time?
<dimitern> mgz, kvm won't work with multi-nic, but it should still work with a single nic
<dimitern> perrito666, done
<perrito666> tx
 * dimitern needs to step out for ~1h
<frobware> dimitern: you still there?
<frobware> ah ^^ - nope
<mgz> frobware: are you capturable atm?
<mup> Bug #1558703 opened: PatchValue unsafe for SetUpSuite <testing> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1558703>
<voidspace> frobware: dimitern: https://github.com/juju/gomaasapi/pull/10/files
<voidspace> frobware: dimitern: all tests now pass on my branch
<voidspace> all *maas* tests  - need to do a full test run
<voidspace> frobware: dimitern: I need that gomaasapi branch to land so I can update dependencies.tsv
<frobware> voidspace: looking
<dimitern> voidspace, looks good
<dimitern> frobware, hey, I just got back, but I'm thinking of declaring EOD
<frobware> dimitern: ok, me too. too many distractions since our lxd investigation earlier.
<dimitern> frobware, I've found out that last fix I did was not sufficient
<frobware> voidspace: I'll hang around for your branch and push that upstream so we can get a CI run
<frobware> dimitern: fresh thinking comes in the morning. :)
<dimitern> frobware, i.e. it will likely cause most failing jobs to pass, but it will break network-get
<dimitern> frobware, but I have a fix in mind that will take care of that - but as you said - morning :)
<frobware> dimitern: that's ok isn't it? (apart from unit test failures)
<dimitern> frobware, it's ok for tonight I guess
<frobware> dimitern: push it - we wake up to a CI run
<dimitern> frobware, will do - it'll take a couple of hours to implement and test, so that's my plan for tomorrow first thing
<frobware> dimitern: sounds good
<voidspace> frobware: I have a provisioner unit test failure on wrong number of network.InterfaceInfo returned
<voidspace> frobware: I'll look at it in the morning, should be able to get it done first thing
<voidspace> frobware: oh wait
<voidspace> frobware: that fails due to missing lxcbr0, that may fail on master for me as well
<voidspace> frobware: and indeed it does fail in the same way
<voidspace> frobware: my gomaasapi branch has landed, I'll fix dependencies.tsv and push
<voidspace> frobware: right branch pushed
<voidspace> frobware: ready for a CI run
<arosales> hello juju core folks :-)
<voidspace> frobware: *requires* maas 1.9
<voidspace> and that's me EOD
<arosales> Need some help on a power8 stack trace
<voidspace> g'night all
<arosales> https://bugs.launchpad.net/juju-core/+bug/1558734
<mup> Bug #1558734: POWER8 agent stacktraces and refuses to boot <juju-core:New> <https://launchpad.net/bugs/1558734>
<rick_h__> natefinch: quick check, if I update my charm with a min-version, and I attempt to deploy it on an older Juju, what happens?
<natefinch> rick_h__: it deploys
<rick_h__> natefinch: ok, so Juju doesn't freak about the unknown attribute?
<natefinch> rick_h__: nope... one of the nice things about the way we deserialize the data in there... anything we don't recognize we ignore
<natefinch> rick_h__: in this particular case, it would kind of be nice if we got a "error, can't deploy, unrecognized data in metadata: min-juju-version"  .... but, alas, it'll just ignore it and deploy.
<rick_h__> natefinch: ok
<mup> Bug #1558734 opened: POWER8 agent stacktraces and refuses to boot <juju-core:New> <https://launchpad.net/bugs/1558734>
<frobware> mgz: still about?
<mup> Bug #1558769 opened: unable to create-model in azure <juju-core:New> <https://launchpad.net/bugs/1558769>
<perrito666> I broke restore? :( that  is ironic
<mup> Bug #1558803 opened: Manual deploy on ppc64el wants wrong package and agents <ci> <manual-provider> <ppc64el> <regression> <juju-core:Incomplete> <juju-core admin-controller-model:Triaged> <https://launchpad.net/bugs/1558803>
<arosales> mwhudson: marcoceppi still seeing juju peg the cpu on power8, any suggestions?
<mwhudson> arosales: is this a power8 where c programs like dpkg actually work?
<mwhudson> because debugging firmware issues via juju seems insane
<arosales> they have been working, I am not sure if they have tested lately
<arosales> mwhudson: ack I am not asking you to debug firmware via Juju
<arosales> mwhudson: I am dumb, but not that dumb
<arosales> mwhudson: just asking if you had any suggestion for marcoceppi given juju was pegging the cpu
<mwhudson> arosales: basically, no
<mwhudson> if you can give me access, i can poke around
<mwhudson> but as i said in other mail, testing something built with golang-go rather than gccgo would probably be more useful overall
<menn0> wallyworld: would you be able to look at http://reviews.vapour.ws/r/4218/ pls?
<wallyworld> sure
<arosales> mwhudson: marcoceppi may chime in with access here in a bit, but it is dinner time for him
<arosales> mwhudson: juju 2.0 built any different than 1.25?
<mwhudson> arosales: not entirely my department, but, gosh, i certainly hope so
<mwhudson> arosales: you are testing trusty, i assume?
<arosales> ok so perhaps we try juju 2.0 on that machine
<arosales> mwhudson: yes trusty atm
<wallyworld> arosales: we are moving to use golang 1.6 for building but afaik are not quite there yet
<arosales> ah so testing 2.0 doesn't help there -- gotcha
<arosales> mwhudson: fyi gcc-go is indeed the problem and not recommended; if we build juju 2.0 GA with gcc-go and run into these issues we will be in a world of hurt
<arosales> mwhudson: something to be aware of
<arosales> wallyworld: thanks
<mwhudson> arosales: i don't understand
<arosales> mwhudson: there is no current juju (stable or dev) that is built with golang 1.6
<arosales> if 2.0 ends up being built with gcc-go 4.9 it seems like we would be in a world of hurt for juju 2.0 ga
<mwhudson> arosales: juju for xenial for amd64 is built with go 1.6
<arosales> mwhudson: that is counter to wallyworld's statement
<arosales> mwhudson: perhaps wallyworld was saying for ppc64el
<wallyworld> arosales: mwhudson: the aim is we will be using golang 1.6 to build for all architectures for 2.0 is my understanding
<arosales> mwhudson: which having amd64 built with go 1.6 doesn't help much with the known issues on power8le
<mwhudson> arosales: many axes of variation :/
<arosales> indeed, but in the end power8le seems broken on 1.25 built with gcc-go 4.9
#juju-dev 2016-03-18
<menn0> wallyworld: tyvm for the review. You're right about the logging/error handling. I'll improve it.
<wallyworld> menn0: np, i got distracted and didn't let you know i had done it, sorry
<menn0> wallyworld: np, I was busy with the next thing anyway
<axw> wallyworld: are you working on any of those bugs? just don't want to double up
<wallyworld> axw: i had good intentions but haven't started yet
<wallyworld> are they legit issues?
<axw> wallyworld: nps. yes, I think so.
<wallyworld> ok
<axw> wallyworld: actually, create-model would not work on master for manual either
<axw> wallyworld: but meh, I can fix it on admin-controller-model with one line change
<wallyworld> axw: that was the point i tried to make and they were not wanting to ack that
<wallyworld> ok that would be good
<anastasiamac> axw: there is also a bug for create-model on azure bug 1558769
<mup> Bug #1558769: unable to create-model in azure <juju-core:New> <https://launchpad.net/bugs/1558769>
<anastasiamac> axw: is it related and will b fixed by the one line change above? :D
<axw> anastasiamac: I saw, thank you. I think wallyworld already fixed it, but I'll double check
<anastasiamac> \o/
<axw> anastasiamac: nah, the manual one is specific to manual
<anastasiamac> axw: thnx :D
<axw> wallyworld: sorry, actually, the problem is worse on admin-controller-model. you can't even bootstrap, because we try to create a hosted model automatically
<wallyworld> axw: understood. but root cause comes form master
<wallyworld> the tests would have seen it if they had existed
<axw> wallyworld: you can't create-model on master, you can't even bootstrap on this branch. yes, we would have caught it earlier if we tested create-model in all substrates
<wallyworld> axw: indeed. but i want to ensure the bug is correctly targeted - i am tired of pushback to fix issues in feature branches which blocks work where the root cause comes from master
<mwhudson> oh yeah i found https://tracker.debian.org/pkg/golang-golang-x-tools yesterday i think
<mwhudson> whoops
<axw> wallyworld: just a few more small things on the PR
<wallyworld> ok
<wallyworld> axw: the maas agent-name thing is because we grab the controller model and end up calling PrepareForCreateEnvironment() which doesn't like the agent name being there
<wallyworld> i need to rework it a bit
<axw> wallyworld: okey dokey
<axw> wallyworld: I thought we didn't copy across attrs if they were restricted?
<axw> or ... non restricted
<wallyworld> axw: that could be - but it seems we do now
<wallyworld> likely a bug from me supporting new create model
<axw> wallyworld: possibly also in the code I added to agent/agentbootstrap
<wallyworld> yeah, it won't take much to fix i don't think
<wallyworld> axw: http://reviews.vapour.ws/r/4222/
<axw> wallyworld: looking
<axw> wallyworld: I'm looking at the other bug btw
<wallyworld> ta
<axw> wallyworld: I think your change means the credentials won't be copied across
<axw> wallyworld: and things from --config
<wallyworld> isn't the stuff in --config in HostedModelConfig arg?
<axw> I don't think so, but I'll check
<wallyworld> i may need to do credentials though
<axw> wallyworld: nope, it's not
<wallyworld> damn, what is in that arg?
<axw> wallyworld: model name, uuid
<axw> wallyworld: it's set in cmd/juju/commands/bootstrap.go
<wallyworld> hmmm, ok. may need to add the --config to that arg then maybe
<axw> wallyworld: and credentials...
<wallyworld> yup
<axw> wallyworld: problem is how to separate credentials from things like maas-agent-name
<wallyworld> on bootstrap we don't need to worry right?
<wallyworld> and it's taken care of in create model that is used by create-model command
<wallyworld> maas-agent-name is added in PrepareForCreateEnvironment. on bootstrap, it won't be there in passed in config, or will error if it is. in create-model we create a skeleton config
<axw> wallyworld: how are you going to get the credential attrs?
<wallyworld> need to look into that
<axw> wallyworld: rhetorical question. the answer is (currently) to call EnvironProvider.BootstrapConfig
<axw> wallyworld: that adds the credential attrs to the config given a cloud.Credential
<wallyworld> ok
<axw> wallyworld: ... but it also adds maas-agent-name
<wallyworld> ah damn ok
<axw> wallyworld: I think what we could do is this: call RestrictedConfig, make sure we include those, and delete anything that's not passed in via --config
<wallyworld> except for credentials maybe
<axw> wallyworld: I was thinking they're restricted... but yeah, they're not
<axw> le sigh
<wallyworld> sigh indeed, i found out the same thing
<wallyworld> axw: it's all good. all i need to do is include --config in the hosted model config arg. (c ModelConfigCreator) NewModelConfig does the right thing with credentials
<axw> wallyworld: ah, I see
<axw> wallyworld: uhm, although, it's kinda wrong :/
<axw> wallyworld: those credential attributes are not supposed to map 1:1
<wallyworld> they sort of do atm in common usage
<wallyworld> but they don't have to that's true
<axw> wallyworld: they do atm, but it's an abstraction breakage, and means we're buggered if we change something
<wallyworld> yep
<axw> wallyworld: I think we may need something on EnvironProvider to identify which things to carry across
<wallyworld> was a quick win, we need to fix the tech debt
<wallyworld> solves the common case for admins creating new models
<axw> wallyworld: or we update PrepareForCreateEnvironment to take a Credential too
<wallyworld> that might be better
<axw> wallyworld: or ... maybe separate BootstrapConfig even further to convert Credential to attrs
<wallyworld> or that
<axw> wallyworld: main problem is that some cloud.Credentials only make sense at the client (e.g. ones that refer to files)
<wallyworld> indeed
<axw> tho we should convert them
<wallyworld> we need to think it through
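The approach axw and wallyworld settle toward above - keep only the attributes the user explicitly passed via --config, plus the provider's credential attributes, and drop provider-internal ones like maas-agent-name - can be sketched as a simple filter. This is a minimal illustration of the idea, not juju's actual config code; `filterHostedModelConfig` and its parameters are hypothetical names.

```go
package main

import "fmt"

// filterHostedModelConfig sketches the filtering discussed above: start from
// the controller model's config, keep only user-supplied --config keys and
// credential attributes, and drop provider-internal attributes.
func filterHostedModelConfig(base map[string]string, userKeys, credentialKeys []string) map[string]string {
	keep := map[string]bool{}
	for _, k := range userKeys {
		keep[k] = true
	}
	for _, k := range credentialKeys {
		keep[k] = true
	}
	out := map[string]string{}
	for k, v := range base {
		if keep[k] {
			out[k] = v
		}
	}
	return out
}

func main() {
	base := map[string]string{
		"maas-agent-name": "abc",      // provider-internal, must not leak across models
		"authorized-keys": "ssh-rsa ...", // passed via --config
		"maas-oauth":      "secret",   // credential attribute
	}
	cfg := filterHostedModelConfig(base, []string{"authorized-keys"}, []string{"maas-oauth"})
	fmt.Println(len(cfg)) // prints 2
}
```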
<axw> wallyworld: https://bugs.launchpad.net/juju-core/+bug/1558803/comments/1
<mup> Bug #1558803: Manual deploy on ppc64el wants wrong package and agents <ci> <manual-provider> <ppc64el> <regression> <juju-core:Incomplete> <juju-core admin-controller-model:Triaged> <https://launchpad.net/bugs/1558803>
<wallyworld> the file values are parsed server side, need to do that client side
<wallyworld> did you find an issue?
<axw> wallyworld: a pretty minor one - see comment
<axw> wallyworld: I've retargeted and lowered importance
<wallyworld> axw: sorry, got pinged still to look
<wallyworld> axw: it's not even relevant to admin controller branch is it
<axw> wallyworld: no, I've removed that
<wallyworld> ah cool
<axw> wallyworld: and set it to medium, and removed the regression tag
 * wallyworld hits refresh
<menn0> axw or wallyworld: http://reviews.vapour.ws/r/4223/ please
<wallyworld> menn0: i can look in a bit, just wip atm
<menn0> wallyworld: thanks
<axw> in that case, I'll go get lunch
<jam> wallyworld: quick ping. Do you remember how to tell a test what tools to use? It looks like EC2 tests were relying on side-effects from some other test to set up tools
<wallyworld> jam: that has recently changed, we were relying on cloud (provider storage) which is gone. there are helpers to upload tools to state
<wallyworld> i can look up the methods
<jam> wallyworld: so with my recent change to fix SetUp stuff, provider/ec2 is not finding tools. It looks like the FakeVersionNumber is 1.99.0 but it only sees 2.0-beta3 tools in the faked out location.
<jam> looks like something is faking out tools before we fake out the version number
<jam> trying to sort out where that may be
<wallyworld> jam: i'll take a look, i am not deeply familiar as i didn't see the final code
<jam> environs/jujutest/livetests.go SetUpTest looks like it is doing the right thing (calls FakeVersionNumber before it calls UploadFakeTools)
<jam> wallyworld: k, I have the feeling this is old semi-rotten code
<wallyworld> axw: pushed new changes to that maas fix
<jam> ah, maybe its me
<jam> I moved one of the PatchValue calls
<wallyworld> ah
<jam> because it was happening before SetUp
<wallyworld> joy
<jam> which is unsafe, because we haven't called IsolationSuite.SetUp yet
<wallyworld> a lot of the tests rely on patching version
<mup> Bug #1558901 opened: TestAddLocalCharmSuccess read has been closed <ci> <go1.5> <go1.6> <intermittent-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1558901>
<jam> so we haven't told it what test we're about to run. I think I can just patch for the lifetime of the Suite and we'll be ok
<wallyworld> i think that sounds ok
<jam> nope... :(
<jam> wallyworld: we had a lot of tests that were doing "SetUpSuite() { base.SetUpTest() }"
<jam> those were interesting.
<wallyworld> wot?
<wallyworld> wow
<wallyworld> i hope i didn't write any
<jam> wallyworld: FakeXDGHomeDir was one of them
<jam> apparently broken since 2014 according to blame.
<wallyworld> huh. thats actually old code renamed
<jam> wallyworld: yeah, a bunch of our base infrastructure was actually wrong.
<wallyworld> at yeast it will be right now :-)
<jam> wallyworld: BaseSuite.SetUpSuite() was calling PatchValue(utils.OutsideAccess)
<wallyworld> lol yeast. i can' type
<jam> which would be reset on the first test that ran
<jam> except we leaked it somewhere early
<jam> so it was always false
<wallyworld> oh dear
<anastasiamac> wallyworld: axw: before I can bootstrap master tip on my openstack, I need to add-cloud and i guess add-credential
<jam> so it was Correct, but for the wrong reasons.
<jam> anyway, provider/ec2 is the last bastion of failing tests, so I'm close.
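The lifecycle bug jam describes - patching a value in SetUpSuite when the cleanup machinery restores patches at TearDownTest - can be demonstrated with a tiny model of a cleanup suite. This is a sketch of the failure mode, not juju's actual testing package; `cleanupSuite` here is a hypothetical stand-in.

```go
package main

import "fmt"

// cleanupSuite mimics the behaviour jam is describing: PatchValue registers a
// restore that runs at TearDownTest, so a patch made in SetUpSuite (as the
// broken "SetUpSuite() { base.SetUpTest() }" pattern effectively did) is
// silently undone after the first test finishes.
type cleanupSuite struct{ restores []func() }

func (s *cleanupSuite) PatchValue(dest *string, v string) {
	old := *dest
	*dest = v
	s.restores = append(s.restores, func() { *dest = old })
}

func (s *cleanupSuite) TearDownTest() {
	for i := len(s.restores) - 1; i >= 0; i-- {
		s.restores[i]()
	}
	s.restores = nil
}

func main() {
	value := "real"
	var s cleanupSuite
	s.PatchValue(&value, "patched") // imagine this happening in SetUpSuite
	fmt.Println(value)              // prints patched
	s.TearDownTest()                // end of the FIRST test...
	fmt.Println(value)              // prints real - the patch is gone for every later test
}
```

Patching per-test (in SetUpTest) keeps the restore and the patch in the same lifecycle, which is why moving the PatchValue call matters.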
<anastasiamac> where do I put auth-url? in my-cloud.yaml or my-credential.yaml?
<wallyworld> anastasiamac: that's a credential attribute
<anastasiamac> k
<wallyworld> anastasiamac: if you have a novarc and master, it will auto detect
<jam> wallyworld: auth-url isn't that what URL to hit which is a cloud attribute?
<jam> (sounds like a per-region attribute)
<wallyworld> jam: we have endpoint url which i think is different but i haven't looked specifically at openstack for a bit so could be misremembering
<wallyworld> jam: but i think you may be right
<anastasiamac> wallyworld: I have novarc but do I need to have it somewhere in a special place?
<wallyworld> ~/.novarc
<anastasiamac> :D
<anastasiamac> so in this case, I do not need to add-credentials?
<cherylj> Can I get a review?  https://github.com/juju/juju/pull/4783
<cherylj> (this is for the restore failure on master)
<wallyworld> jam: yes, i looked at the code and confirmed you are right
<wallyworld> cherylj: looking
<cherylj> thanks, wallyworld.  I had to shuffle things around to be able to mock out stuff for testing
<cherylj> turns out that there really aren't tests for restore :(
<jam> wallyworld: ... provider/ec2/ LocalLiveTests embeds LiveTests but doesn't call LiveTests.SetUpTest()
<wallyworld> win
<axw> anastasiamac: if you have ~/.novarc, or if you just source your novarc in your shell, you can do "juju bootstrap <controller> openstack"
<wallyworld> cherylj: only ci tests it seems
<cherylj> yeah
<axw> wallyworld: eating lunch, will look again soon
<wallyworld> np, thanks
<jam> cherylj: why is there a "fakeEnviron" in live code?
<cherylj> jam: for mocking out the environ in the test.  We only set it in tests
<axw> wallyworld: actually I can review that while eating :)
<axw> LGTM
<anastasiamac> axw: i have both ~/.novarc and sourced it in the shell, but when I "juju bootstrap <my controller> openstack" I get "ERROR missing auth-url not valid"
<axw> anastasiamac: that would suggest you're missing OS_AUTH_URL from the file
<axw> anastasiamac: where did you get the novarc file from?
<jam> cherylj: have you done manual testing of an environment that really hasn't ever existed?
<anastasiamac> axw: it would... but I don't
<anastasiamac> it's in the file
<anastasiamac> that was supplied
<axw> oh, that's odd
<axw> anastasiamac: does it have "export OS_AUTH_URL" in it per chance?
<axw> anastasiamac: wallyworld fixed a bug yesterday to do with that
<axw> :q
<axw> oops
<anastasiamac> axw: yes, everything in the file has an export... and m running from master tip as of 1hr ago :D
<wallyworld> cherylj: i've made a request to alter things slightly to better set up for testing
<axw> anastasiamac: ok then, I dunno. sounds like a bug
<wallyworld> axw: anastasiamac: it's a bug that's been there for a while
<axw> anastasiamac: try moving the file away from ~/.novarc and sourcing it, and see if that makes a difference
<wallyworld> DetectRegions() only calls CredentialsFromEnv()
<jam> cherylj: reviewed.
<wallyworld> not the new DetectCredentials() code
<axw> wallyworld: anastasiamac said she sourced it as well tho
<wallyworld> don't believe it :-)
<wallyworld> since that should have worked
<anastasiamac> wallyworld: right....
<wallyworld> unless there's a 3rd problem
<wallyworld> i think openstack has been used with beta2 which is where sourcing the novarc would have been required
<anastasiamac> wallyworld: m so calling u for that!
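The `export OS_AUTH_URL` bug axw mentions comes down to novarc parsing: novarc files usually write `export KEY=value`, so a naive `KEY=value` parser finds nothing. A sketch of the fix, with a hypothetical `parseNovarc` helper (not juju's actual function):

```go
package main

import (
	"fmt"
	"strings"
)

// parseNovarc reads novarc-style content, stripping the optional "export "
// prefix before splitting on "=", so `export OS_AUTH_URL=...` is detected.
func parseNovarc(content string) map[string]string {
	env := map[string]string{}
	for _, line := range strings.Split(content, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "#") {
			continue // skip comments
		}
		line = strings.TrimPrefix(line, "export ")
		if i := strings.Index(line, "="); i > 0 {
			env[line[:i]] = strings.Trim(line[i+1:], `"`)
		}
	}
	return env
}

func main() {
	rc := "export OS_AUTH_URL=https://keystone.example.com:5000/v3\nOS_USERNAME=me"
	env := parseNovarc(rc)
	fmt.Println(env["OS_AUTH_URL"]) // prints https://keystone.example.com:5000/v3
}
```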
<axw> jam: any idea why SetInstanceStatus is so slow? it's all local, so should be super fast :(
<jam> axw: I don't know specifically, but IIRC the spec wanted to rate limit charms calling it
<jam> given how accurate it is to 1.0s
<jam> I think its just a time.Sleep somewhere.
<axw> jam: ok, I'll dig. we should probably use the ratelimit package with a token bucket
<jam> well, it could be that, too. I just get the feeling it is explicitly limiting itself, which interacts poorly with fast downloads and a set number of events
<jam> wallyworld: and now the test suite "passes" but a test is taking 60s, investigation looks like it is accessing my EC2 credentials and launching a real instance...
<jam> (ec2.localLiveSuite) not so local
<wallyworld> jam: localLive tests, from memory, are run both local and live
<wallyworld> that distinction goes waaaaaay back
<wallyworld> i wish we dropped live tests, we don't need them now
<wallyworld> i guess they were added for juju 0.1 before we had CI
<cherylj> wallyworld: Can you take another look?  http://reviews.vapour.ws/r/4224/
<cherylj> wallyworld: Also, I think that the backup / restore commands should be controller commands, not model commands
<cherylj> but that's outside the scope of this PR
<wallyworld> cherylj: looking
<wallyworld> cherylj: thanks for that fix. ideally we'd not have the func as an arg to NewRestoreCommand. There'd be a NewRestoreCommandForTest() in export_text. see detectcredentials.go in juju/cmd/juju/cloud and also the NewDetectCredentialsCommandForTest in export_test.go
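The export_test.go pattern wallyworld recommends keeps test-only injection seams out of the public API: the production constructor hard-wires its dependency, while a `...ForTest` constructor lives in export_test.go (compiled only into the test binary). A self-contained sketch of the shape, with illustrative names rather than juju's actual signatures:

```go
package main

import "fmt"

// Environ stands in for whatever dependency the command needs injected.
type Environ interface{ Name() string }

type realEnviron struct{}

func (realEnviron) Name() string { return "real" }

type restoreCommand struct {
	getEnviron func() (Environ, error)
}

// NewRestoreCommand is the production constructor: the dependency is fixed,
// so callers never see the injection seam.
func NewRestoreCommand() *restoreCommand {
	return &restoreCommand{getEnviron: func() (Environ, error) { return realEnviron{}, nil }}
}

// NewRestoreCommandForTest would live in export_test.go, visible only to the
// package's own tests, which can pass a fake.
func NewRestoreCommandForTest(getEnviron func() (Environ, error)) *restoreCommand {
	return &restoreCommand{getEnviron: getEnviron}
}

type fakeEnviron struct{}

func (fakeEnviron) Name() string { return "fake" }

func main() {
	cmd := NewRestoreCommandForTest(func() (Environ, error) { return fakeEnviron{}, nil })
	env, _ := cmd.getEnviron()
	fmt.Println(env.Name()) // prints fake
}
```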
<cherylj> wallyworld: ok, I see.  Give me a few and I'll update it
<cherylj> wallyworld: okay, maybe 3rd time's the charm
<wallyworld> :-)
<wallyworld> looking
<wallyworld> cherylj: very small issue, lgtm
<cherylj> ha, sorry it's taking so long.  I'm so very tired
<jam> wallyworld: hm, looks like it isn't talking to EC2, just that we have a 60s timeout waiting for a security group that is being used to be released
<jam> maybe a setup was supposed to be changing that timeout
<jam> (just ran in offline mode and had the same result)
<wallyworld> could be
<wallyworld> we have so many tests with clocks to fix
<wallyworld> cherylj: indeed, you should not be at work
<cherylj> and with that...  bedtime
<wallyworld> see ya
<wallyworld> axw: also, i did end up needing to remove all the CompatSalt stuff yesterday, as bootstrap still relied on it and it needed to be killed
<axw> wallyworld: cool
<axw> wallyworld: does it need re-reviewing?
<wallyworld> axw: nah, was 99% cut
<wallyworld> i tested live
<axw> wallyworld: sounds good
<wallyworld> i tested i could log into mongo console from machine 0 as well as a deployment
<wallyworld> axw: awesome, the interactive add credential branch doesn't appear to compile on go 1.2
<axw> wallyworld: ? :/
<wallyworld> compiles and runs locally
<wallyworld> http://juju-ci.vapour.ws:8080/job/github-merge-juju/6912/console
<wallyworld> i'll have to get go 1.2 to repro
<wallyworld> unless i'm being dumb
<axw> wallyworld: nothing obvious to me
<wallyworld> sigh
<jam> wallyworld: k. there is definitely a problem with 60s to destroy, but it exists in Master as well, so my code is better than previous
<wallyworld> jam: i've found and fixed a few bad tests lately too. our code has many :-(
<mup> Bug #1558924 opened: provider/ec2 localLiveSuite.TestGlobalPorts localLiveSuite.TestStartStop takes 60s <tech-debt> <testing> <juju-core:Triaged> <https://launchpad.net/bugs/1558924>
<wallyworld> axw: am merging master into admin-controller-model... soooo many conflicts \o/
<wallyworld> will have to finish after soccer
<axw> wallyworld: thank you
<axw> wallyworld: my current status is refactoring modelcmd/juju code in between bouts of sneezing
<axw> need to refactor so I can pass in alternative credentials
<wallyworld> ok
<wallyworld> i still need to install go 1.2 and resolve that other issue also
<anastasiamac> axw: is this ur juju allergies making u sneeze? :P
<axw> anastasiamac: naw, got a sleep-deprivation induced cold I think
<anastasiamac> axw: :( sounds awful
<voidspace> frobware: ping
<frobware> voidspace: pong
<frobware> voidspace: I pushed your branch
<voidspace> frobware: yeah, I see it
<voidspace> frobware: no CI run yet though
<frobware> voidspace: as upstream/drop-maas-1.8-support-from-juju2
<frobware> voidspace: nope...
<voidspace> frobware: I'm writing up a high-level sketch of the maas 2 work
<frobware> voidspace: but then again our multi-nic branch only started running this morning
<frobware> voidspace: thanks! \o/
<voidspace> frobware: the gomaasapi test server needs not far short of a full rewrite, so that's not a small chunk of work
<voidspace> :-/
<voidspace> frobware: I think a new one, with some shared code, will be cleaner than a single implementation or an interface based approach
<frobware> voidspace: given the time remaining is that at all feasible? Or even desirable?
<voidspace> frobware: I think it's the only reasonable approach
<voidspace> frobware: the TestServer has about thirty attributes, most of which won't be used for 2.0
<voidspace> frobware: so a monster with 60 fields and branches in every method will be impossible to work on
<voidspace> frobware: that's what I think
<voidspace> frobware: a lot of the code can be shared (all the endpoint stuff)
<voidspace> frobware: but maybe there's a middle ground somewhere
<perrito666> wallyworld: https://github.com/juju/juju/pull/4761 but for some reason rb didnt pick it, perhaps the conflict?
<frobware> voidspace: realistically I think we should consider that middle ground, otherwise the potential of adding new bugs is a concern
<voidspace> frobware: I don't think hugely branching code is less likely to introduce bugs
<voidspace> that's my worry
<voidspace> anyway
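The "shared code, separate servers" shape voidspace argues for - common endpoint plumbing in one type, with thin version-specific test servers embedding it, instead of one monster server branching on API version in every method - could look roughly like this. Purely a sketch of the design choice; none of these names are gomaasapi's.

```go
package main

import "fmt"

// endpointCore holds the plumbing both test servers can share.
type endpointCore struct{ routes map[string]string }

func newCore() endpointCore { return endpointCore{routes: map[string]string{}} }

func (c *endpointCore) register(path, handler string) { c.routes[path] = handler }

// Version-specific servers embed the core and add only what their API needs,
// so neither accumulates fields or branches for the other version.
type testServerV1 struct{ endpointCore }
type testServerV2 struct{ endpointCore }

func main() {
	v2 := testServerV2{newCore()}
	v2.register("/MAAS/api/2.0/machines/", "machinesHandler")
	fmt.Println(len(v2.routes)) // prints 1
}
```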
<TheMue> morning
<voidspace> TheMue: o/
<fwereade> voidspace, frobware, dimitern, dooferlad: would one of you please check the AddressAllocation test changes in jujud/agent, in http://reviews.vapour.ws/r/4110/diff/5/?page=2 ?
<fwereade> ashipika, by the way, I am deeply suspicious of what github thinks about that branch -- I am planning to just apply the correct diff to MADE-model-workers in a new branch -- will that inconvenience you horribly?
<ashipika> fwereade: nah, not really.. just give me the new branch and i'll rebase :)
<fwereade> ashipika, cool, just a heads up :)
<ashipika> fwereade: thanks!
<ashipika> fwereade: when do you expect this to land?
<fwereade> ashipika, I have a ship it, so hopefully today
<ashipika> fwereade: \o/
<fwereade> ashipika, although that's just onto MADE-model-workers, which needs to be updated with latest trunk and then make CI happy
<dimitern> fwereade, looking
<voidspace> frobware: stdup?
<frobware> voidspace: oops, sidetracked. omw
<fwereade> dimitern, for reference: I dropped the skipped tests, and added one test that enables the feature flag and checks there's an address-cleaner worker running alongside all the usual ones
<fwereade> dimitern, I called it address-cleaner because that seemed to be its main job, let me know if I misunderstood anything
<dimitern> fwereade, that sounds good to me
<fwereade> dimitern, ok, cool, will be submitting it as RB4225 in a few seconds then :)
<dimitern> fwereade, go for it :)
<dimitern> frobware, voidspace, there's the fix for those panics: http://reviews.vapour.ws/r/4226/
<frobware> dimitern: will stash the lxd changes and try your PR now
<dimitern> frobware, cheers
<frobware> dimitern: do you know why in the container the devices are not ifup'd?
<frobware> dimitern: I now have 3 nics, but without an ifup they have no addrs
<dimitern> frobware, hmm no idea - it works like that for lxc
<dimitern> frobware, maybe something is blocking the networking job to finish bringing them up? ntpdate lock contention?
<frobware> dimitern: will take a look after trying your changes
<dimitern> frobware, ok
<dimitern> frobware, btw I've realized we're lagging behind 2 major feature branch merges into master
<frobware> dimitern: I think we need to decide whether it makes sense to create one singular network profile, or one profile per container
<frobware> dimitern: this is an entropy judgement call
<dimitern> frobware, one per container seems to be the simplest option, considering the hwaddr needs to match
<frobware> dimitern: in my mind we should get the multi-nic branch back to the state of maas-spaces2, CI-wise.
<dimitern> frobware, I'm looking into how bad the conflicts are
<frobware> dimitern: then look at merging master
<dimitern> frobware, that's a good point
<dimitern> frobware, but if we lag too much behind it will only get harder
<frobware> dimitern: I know. but equally adding the unknown is ...
<frobware> dimitern: I think there's benefit in giving the OS folks a known state today
<frobware> dimitern: yes, the hwaddr forces that decision.
<dimitern> frobware, I wouldn't rush to do a merge today exactly for that matter
<dimitern> http://paste.ubuntu.com/15413949/
<frobware> dimitern: so entropy level at lines 1..3624. :)
<dimitern> frobware, that's BS though as I did reset --hard FETCH_HEAD before, redoing it properly now
<dimitern> frobware, it's actually not that bad: http://paste.ubuntu.com/15413972/
<frobware> dimitern: well, that's a lot better
<dimitern> frobware, it even builds, running make check now and a live test with the bundle
<frobware> dimitern: I think we need to decide how to spend the rest of the day; polish the branch and give it to OS folks, or merge and polish
<dimitern> frobware, I think those two are not necessarily mutually exclusive
<dimitern> frobware, OS folks can test the maas-spaces-multi-nic-containers + my proposed fix and the workaround bouncing the MA of machine-0
<frobware> dimitern: ah, you still need to bounce the MA
<dimitern> frobware, yeah
<dimitern> frobware, if you're deploying containers on machine 0
<frobware> dimitern: right
<frobware> dimitern: we can merge master; we can also give the OS folks a specific commit for testing. that would allow us to move on. thoughts?
<dimitern> frobware, we can give them a binary tarball to try, while we sync up with master (and assuming that does not cause a regression for multi-nic stuff)
<frobware> dimitern: thedac seemed happy with checkout a commit id last time; it's also easier if we want them to move to a different commit
<dimitern> frobware, fair enough
<dimitern> frobware, but otherwise I think that sounds like the plan for today?
<frobware> dimitern: great
<frobware> dimitern: ah... one other thing...
<frobware> dimitern: if we can first resolve the missing mac addr that would allow lxd end-to-end
<dimitern> frobware, right!
<frobware> dimitern: can you look at that next?
<dimitern> frobware, I'll look into that when I'm back ~1h
<frobware> dimitern: thanks!
<fwereade> ashipika, MADE-model-workers is up to date with the model-agent-integration work
<dimitern> and I'll leave the live test and make check going in the mean time
<frobware> dimitern: I have to say this is why I don't want to merge master: http://reports.vapour.ws/releases/3776
<frobware> dimitern: multi-nic came from the tip of maas-spaces2 and that had only 2 failures
<frobware> dimitern: we should really get back to that state first
<dimitern> frobware, well, we touched a lot of places in the multi-nic branch
<dimitern> frobware, voidspace, btw a review on http://reviews.vapour.ws/r/4226/ will be appreciated :)
 * dimitern is out for ~1h
<ashipika> fwereade: that's a feature branch i presume?
<frobware> dimitern: no issues with what we've changed; just for our own sanity bringing new stuff may make it harder to differentiate root cause analysis
<fwereade> ashipika, yeah
<frobware> dimitern: getting closer with lxd: http://pastebin.ubuntu.com/15414021/
<frobware> dimitern: cannot login using juju ssh but can ordinarily. error fetching address for machine "0/lxd/0": private no address
<frobware> voidspace: CI run 3777 is the drop-1.8 support.
<voidspace> frobware: ah, cool
<voidspace> frobware: lots blocked and a couple of build-binary failures
<voidspace> frobware: I'm looking at the failures
<voidspace> frobware: problems with lxc-start in the build binary
<frobware> voidspace: log? link?
<voidspace> frobware: http://data.vapour.ws/juju-ci/products/version-3777/build-binary-wily-amd64/build-884/consoleText
<voidspace> frobware: same problem on build-binary-wily-arm64
<voidspace> doesn't *look* like a problem with the code
<voidspace> the logs are "terse" though
<frobware> voidspace: check status of master too
<voidspace> looking
<voidspace> frobware: they succeeded on master
<voidspace> frobware: my guess is that other tests will be gated on the binary build being successful
<voidspace> mgz: ping
<frobware> voidspace: so it may depend on when you branched or did your checkout
<voidspace> frobware: was that a problem on master before?
<frobware> voidspace: you would have to take a look back through the previous builds
<voidspace> frobware: failing to build the binary seems likely (to me) to be an infrastructure problem
<frobware> voidspace: maybe mgz knows more
<tvansteenburgh> hey guys, i have a fresh xenial with a fresh juju 1.25.3. `ps -aef |grep juju` returns nothing. how do i get juju running, or figure out why it won't start?
<tvansteenburgh> never mind, got it going again with the juju-clean plugin, sorry for the noise
<mup> Bug #1559062 opened: ensure-availability not safe from adding 7+ nodes, can't remove stale ones <juju-core:New> <https://launchpad.net/bugs/1559062>
<dimitern> frobware, voidspace, back again
<voidspace> dimitern: welcome
<voidspace> frobware: that test run was useless, all the interesting tests were blocked by the build binary failure
<voidspace> frobware: waiting for comment on that from mgz
<frobware> yep
<dimitern> voidspace, frobware, so after merging master into multi-nic, make check passes, and the live test with the bundle still works
<frobware> dimitern: if it was blessed I'm more inclined to say go with it. but it's not.
<dimitern> frobware, I'll propose it, but we don't have to merge right away..
<beisner> hi all - ? re: availability zones.  if we have maas machines set in different zones, should that zone be observable to juju via JUJU_AVAILABILITY_ZONE?
<frobware> dimitern: do you have time to HO/sync? Can show you lxd stuff, some questions re: mac addr and 'inet manual'
<dimitern> frobware, sure, just give me 10m
<frobware> ok
<frobware> dimitern: let's make it 30 past the hour. I'll grab a very quick lunch.
<dimitern> frobware, sgtm
<perrito666> pseudo morning all
<dimitern> odd .. I couldn't open google calendar for a long time..
<frobware> dimitern: sounds like a productivity win
<dimitern> :D
<dimitern> frobware, I'm in today's standup HO
<frobware> omw
<mup> Bug #1559099 opened: JUJU_AVAILABILITY_ZONE not set in MAAS <juju-core:New> <https://launchpad.net/bugs/1559099>
<perrito666> cherylj: are you around?
<cherylj> perrito666: yeah, what's up?
<perrito666> cherylj: silly question, actually two, 1) is master blocked on the restore bug and 2) where is the release note doc? I need to add a few bits
<cherylj> perrito666: 1 - Yes, but I committed the fix last night.  Don't think that there's been a run on master since then.  Let me think about unblocking...
<cherylj> perrito666: 2 - https://docs.google.com/document/d/1ID-r22-UIjl00UY_URXQo_vJNdRPqmSNv7vP8HI_E5U/edit
<perrito666> cherylj: tx x 2
<cherylj> np :)
<cherylj> perrito666: did the keystone 3 support land?  (I think I saw that it did?)
<perrito666> I didn't, lemme see if Ian landed it
<perrito666> ah yes, it got jfdone
<perrito666> :p
<cherylj> k, will update the blueprint
<perrito666> ill update the release notes for all that landed these days
<cherylj> perrito666: awesome!
<mup> Bug #1559131 opened: unable to add-credential for maas <conjure> <juju-core:New> <https://launchpad.net/bugs/1559131>
<katco> ericsnow: natefinch: good morning... final day!
<katco> natefinch: i see you got your PRs landed... grats! can you follow up with marcoceppi to see when they'll hit the deb?
<marcoceppi> katco natefinch I'm planning on building those today in about 3 hours
<katco> marcoceppi: rock!
<natefinch> awesome
<katco> marcoceppi: note that most of the commands will currently return errors b/c the charmstore endpoints don't actually do anything yet
<marcoceppi> katco: yeah, I'm going to coordinate with uros
<marcoceppi> katco: hopefully the store will be deployed today :\
<katco> marcoceppi: no, i mean even after uros deploys they still do nothing. the code hasn't been written
<marcoceppi> katco: \o/ cool good to know I'll make sure when I announce that there is still backend code being worked on
<katco> marcoceppi: we are wrapping up some bug fixes in core, and then will be swinging onto implementing the charmstore stuff
<marcoceppi> katco: is it just anything with --resources ?
<katco> marcoceppi: yeah i think that's a fair representation
<marcoceppi> katco: awesome, thanks for the heads up
<cherylj> hey katco, I've updated the release table on the feature tracker page to address some of your feedback.  Can you take a look and let me know if it's a bit clearer now?  https://private-fileshare.canonical.com/~cherylj/juju-features-20.html
<katco> marcoceppi: anything here prepended by "charmer" https://github.com/CanonicalLtd/juju-specs/tree/master/resources/features
<katco> cherylj: wow, i didn't even think to ask! ty that's great!
<katco> cherylj: very clear
<mup> Bug #1559131 changed: unable to add-credential for maas <conjure> <juju-core:New> <https://launchpad.net/bugs/1559131>
<cherylj> excellent, thank you katco
<katco> cherylj: no, really, ty
<perrito666> are we actually writing markdown on a google doc?
<katco> perrito666: # katco's answer
<katco> perrito666: * Yes of course we are.
<katco> perrito666: * Why wouldn't we be?
<katco> perrito666: * It's md! Everyone knows markdown!
<katco> perrito666: * bullet point 4
<perrito666> it's a google doc, it supports 20th-century formatting, but besides that it's less than ideal as a medium for md
<katco> perrito666: i know, i'm joking ;) i would think our release notes would be checked into our repo actually
<perrito666> well apparently there are more people who know md than git :p
<mup> Bug #1559131 opened: unable to add-credential for maas <conjure> <juju-core:New> <https://launchpad.net/bugs/1559131>
<TheMue> md is a _nice_ simple format
<natefinch> perrito666: putting the release notes in git as MD would be amazing... PRs to add new sections, PRs for edits, you could actually see who added each piece etc
<TheMue> natefinch: +1
<natefinch> perrito666: I think it's in docs for collaborative editing purposes and low barrier of entry
<natefinch> perrito666: however, one would assume that people at a software company who use git all day would not be averse to using it a tiny bit more.
<natefinch> also... being able to preview your markdown to know it's doing what you expect would be nice.
<perrito666> nothing says low barrier like a nice google doc full of markdown...
<natefinch> lol
<katco> natefinch: hey, don't think i even need to ask, but: you'll have your current card wrapped up today?
<TheMue> google doc containing html as plain text with a pre section of markdown
<natefinch> katco: should be, year.  It's touching more spots than I originally expected... there's a lot of layers of abstraction to add arguments to, but it's still pretty trivial
<natefinch> s/year/yeah
<katco> natefinch: cool... please also coordinate with marcoceppi to let him know eta for next build of charm deb
<katco> natefinch: i'sok... i read it in a pirate voice as "yar!"
<natefinch> katco: haha
<katco> natefinch: but seriously, work closely with marcoceppi on eta/progress, cool?
<natefinch> katco: yes
 * marcoceppi elmira hugs natefinch
<natefinch> katco: I think I misunderstood the card... I thought I was adding channels to juju charm list-resources
<natefinch> katco: there isn't a charm list-resources command AFAIK
<katco> natefinch: gahhh you are correct and i am friday head
<natefinch> katco: oh good, I thought I was doing the wrong thing
<katco> natefinch: the card is incorrect... i'll put a juju out in front
<natefinch> katco: sounds good
<katco> natefinch: so the charm work you did already included channel support?
<natefinch> katco: yes, the UI team did the channel work there
<natefinch> katco: we just tacked resources on top of that
<katco> natefinch: ok
<katco> marcoceppi: really i just like pinging you. false alarm, we are truly done with the charm command now.
<marcoceppi> katco: I like feeling like I'm needed
<katco> :)
<natefinch> marcoceppi: and I like hugs :)
<katco> marcoceppi: also i had to look up elmira... wow there's some neurons that hadn't fired in like 15 years
<marcoceppi> katco: haha, yeah it's been a while but she def left an impression on me when I was younger
<natefinch> marcoceppi: aww, I misread it as elvira hug
 * katco spits out coffee
<marcoceppi> natefinch: haha
<katco> that would be quite different
<natefinch> indeed :)
<marcoceppi> elmyra* is apparently the spelling you should be searching with
<katco> marcoceppi: on a complete tangent: the shirts from the charmer's summit are great. material is very soft and design is awesome
<marcoceppi> katco: thanks, we decided to do them in house. we want people to actually like wearing them ;)
<katco> hehe
<katco> if i could get a zip-up juju hoodie i would be so happy
<natefinch> ditto
<marcoceppi> which means getting nice fabric
<katco> i think wwitzel3 made his own
<marcoceppi> that's good to know, we'll keep that in mind for pasadena
<ericsnow> katco, natefinch: PTAL http://reviews.vapour.ws/r/4219/
<katco> ericsnow: reviewing right now actually!
<ericsnow> :)
<natefinch> ericsnow: looking
<ericsnow> ta
<katco> ericsnow: natefinch: actually, since we're down to the wire here... just this once, can i ask that natefinch focus on getting his code written?
<katco> ericsnow: natefinch: i'd like to be able to merge master in by EOD
<ericsnow> katco: np
<natefinch> ok
<katco> natefinch: ta
<wwitzel3> katco, marcoceppi: yeah, I took the hoodie to an alterations shop and they put a zipper on it for me
<wwitzel3> cost me $11 + the zipper (amazon for $4 iirc)
<natefinch> my wife said she'd do that for me, but I haven't gotten around to actually asking her to do it
<wwitzel3> mine sat in a closet unused until I put a zipper on it
<wwitzel3> what kind of uncivilized person uses a pullover
<natefinch> wwitzel3: lol, right?
<katco> well put wwitzel3
<TheMue> natefinch: your wife can do any sewing job for you. i always see her posts, she's so good.
<natefinch> TheMue: Thank you :)  She's pretty great, yeah :)
<TheMue> natefinch: absolutely
<fwereade> katco, do you know why resources/resourceadapters.WorkerFactory is creating a worker from a *State?
<fwereade> katco, (hi, by the way, sorry abrupt!)
<katco> fwereade: let me take a peek. ericsnow might have a quicker answer
<katco> fwereade: np at all lol, hi! :D
<ericsnow> fwereade: I'll take a look :(
<ericsnow> fwereade: gah, I'll fix that
<katco> fwereade: well there ya go.
<fwereade> katco, ericsnow, before you get too deep into that, I hit this because I'm doing pretty drastic things to the machine agent, and I'm not entirely sure how I should go about integrating that functionality
<katco> fwereade: ericsnow: is the problem that it's taking in a State pointer?
<katco> fwereade: ericsnow: and not an interface?
<fwereade> katco, that's several problems
<fwereade> katco, an interface would be nicer
<fwereade> katco, the upsetting thing is that it's a worker that's not going via the api layer
<ericsnow> fwereade: right, the API part is what I need to fix
 * ericsnow feels dumb for forgetting that mandate
<fwereade> ericsnow, would you take a look at the MADE-model-workers branch please?
<ericsnow> fwereade: will do now-ish
<fwereade> ericsnow, in particular, cmd/jujud/agent.MachineAgent.startModelWorkers; and the  cmd/jujud/agent/model package
<ericsnow> fwereade: and sorry for not getting you that review yesterday (model agent integration)
<ericsnow> fwereade: I was actually just reading through it
<fwereade> ericsnow, because I am reluctant to make the tests for those packages dependent upon what other packages might have been imported, and it feels like the approaches are at an impasse
<fwereade> ericsnow, no worries, I am keen to hear your thoughts on it but you weren't blocking me
<ericsnow> fwereade: k
<katco> fwereade: is this the whole "imports have a side-effect" conversation?
<katco> fwereade: via init
<fwereade> katco, yeah
<katco> fwereade: to my knowledge we are not doing that, if that helps.
<fwereade> katco, well, then it's the "logic dependent upon mutable global state" one
<katco> fwereade: ah, that we are currently doing :)
<katco> fwereade: although i'd characterize it as "package level state"
<fwereade> katco, IMO that makes it marginally more tractable, but little less evil
<katco> fwereade: but i don't understand how the registration pattern makes writing tests harder? or what that has to do with imports?
<ericsnow> fwereade: oh, you better take a look at our feature branch https://github.com/juju/juju/blob/feature-resources/resource/resourceadapters/workers.go
<ericsnow> fwereade: I've since merged that worker (in master) with the charmrevisionupdater worker
<ericsnow> fwereade: (Ian's idea)
<fwereade> katco, well, I have this Manifolds() func, which returns a representation of the workers we need to run per-model in a controller; I would like to be able to test it, and have confidence that it will do the same thing whatever else may or may not have been called or imported
<ericsnow> fwereade: the integration between the two takes place in the API server so no workers are involved
<ericsnow> fwereade: and yeah, I really don't like the registries and would love to talk with you about what I think is the sensible alternative
<fwereade> ericsnow, well... that worker is sitting right there on master afaics? https://github.com/juju/juju/blob/master/resource/resourceadapters/workers.go
<katco> fwereade: sorry for the 2 convos at once. but can't you? your tests have the global view of manifolds in the context of your tests. it should be testing that it does the right thing when you register workers
<ericsnow> fwereade: right, we haven't merged that part of our feature branch back into master yet
<katco> ericsnow: actually i am confused as well... the worker is present in the feature branch. you're saying we merged that, but haven't yet deleted it?
<fwereade> ericsnow, katco: so, parenthetically, it is really not a good  thing that that code ever landed on a feature branch, let alone made it into master :(
<fwereade> ericsnow, katco: but I am much more interested in talking about the tests
<ericsnow> katco: ??? https://github.com/juju/juju/blob/feature-resources/resource/resourceadapters/workers.go
<katco> ericsnow: i am missing something. i'm looking at the same worker which takes in state
<fwereade> katco, if I need to pay attention to ensuring that the context of my tests matches the context at runtime, I will almost certainly screw it up at some point
<ericsnow> katco: there's no worker there; compare with master: https://github.com/juju/juju/blob/master/resource/resourceadapters/workers.go
<fwereade> katco, if I write my SUT such that all its dependencies are supplied explicitly, I can have much greater confidence that the tests will remain useful in the long term
<katco> ericsnow: is this not instantiating a worker? https://github.com/juju/juju/blob/feature-resources/resource/resourceadapters/workers.go#L21
<katco> fwereade: you don't have to make sure it aligns with the context of runtime, just that the path is tested
<ericsnow> katco: correct
<katco> ericsnow: so, in our feature branch, we have a function that takes in state, and returns a worker looking at state?
<ericsnow> katco: it returns a "latest charm handler", not a worker
<katco> fwereade: i.e. register a "foo" worker, does Manifolds() return it? PASS: we successfully know about registered workers
<fwereade> katco, ericsnow: how can you possibly write such a registration func without magical dependencies across all the components in play?
<katco> fwereade: well wait, let's resolve the testing line of questioning
<fwereade> katco, ericsnow: I don't think you can usefully register a manifold without secret knowledge of the names of the workers defined in agent/model
<fwereade> katco, ok
<katco> ericsnow: that is perhaps misleading then. it sits in a "workers" package
<ericsnow> katco: agreed
<ericsnow> katco: it's left over from when it was a worker
<cmars> fwereade, is it possible for a relation hook to fire before both services are installed? even if one of those services is a subordinate?
<fwereade> katco, ericsnow: (either way, the only things that should be using a *State should live in the apiserver)
<katco> ericsnow: ahhh ok this makes sense
<ericsnow> fwereade: agreed
<fwereade> cmars, it should not be
<fwereade> katco, ok, back to the tests
<cmars> fwereade, thought so. might be an old juju, i'll find out
<cmars> thanks
 * katco listens
<fwereade> katco, I am not interested in testing a registration mechanism: I am interested in testing what workers will be started by a model agent
<fwereade> katco, the mutable global state makes that unknowable
<katco> fwereade: ah, i see your plight now
<frobware> dimitern: meeting
<frobware> voidspace: ^^
<katco> fwereade: if we were aligned, this would be a test that would live somewhere in component/... because that's where the list of workers would get passed in
<katco> fwereade: coding the list in the dependency engine is like hard-coding the state i think
<fwereade> katco, I am not sure that testing that behaviour in component/ is notably better than testing it under agent/model/
<fwereade> katco, a model agent has a number of responsibilities, and those responsibilities have non-trivial interactions
<katco> fwereade: well, i think it's because in the manifold, you test that the registration works. then where the registration happens, you test that your list is complete
<fwereade> katco, restate please?
<katco> fwereade: sure, let me try and think of a different way
<katco> fwereade: so, starting from these axioms:
<ericsnow> katco, fwereade: I think that testing it where we are is correct; but we should be testing which manifolds will be run (not which workers)
<katco> fwereade: 1. unit tests should test 1 thing at a time
<katco> fwereade: 2. the combination of all unit tests trends towards correctness
<katco> fwereade: if your goal is to test that the workers we expect to run, are
<fwereade> katco, (1) concur (2) strongly disagree, quantity of tests does not imply quality of tests
<katco> fwereade: that's not the point of 2... another way is to say: the more correctly written unit tests you have, the greater confidence you have the system as a whole works
<katco> fwereade: in other words it's not about the correctness of the test, it's about the combination of correct tests stating something about the system as a whole
<fwereade> katco, and now we tangle on "correctly written", I fear
<katco> fwereade: no, i don't think so. it doesn't matter. for any value of correct.
<katco> fwereade: if it correctly tests the 1 thing the test is supposed to test
<fwereade> katco, still unconvinced, I think some tests have negative value
<katco> fwereade: agreed, in overhead. not in emergent correctness of the system
<fwereade> katco, on the contrary
<fwereade> katco, a little while ago I found tests for the machine agent that checked it called some api method on some facade
<fwereade> katco, which (1) it shouldn't have done in the first place
<voidspace> frobware: oops, sorry
<fwereade> katco, and (2) it actually didn't anyway, because someone had tweaked the test setup to make that call at the right time, so the test merely tested that the test setup had run
<katco> fwereade: ok, so that fails both clauses, right? it did not correctly test the thing it was supposed to test, and it wasn't supposed to test it
<katco> fwereade: strawman
<voidspace> mgz: ping
<mgz> voidspace: yo
<voidspace> mgz: the drop-maas-1.8 CI run failed on build binary, which meant everything failed
<katco> fwereade: if your goal is to test that the workers we expect to run, are. first thing you should do is test that when workers are registered, the manifold knows about them
<katco> fwereade: the second thing is that you register the right workers
<voidspace> mgz: the logs are extremely terse - it just says that lxc-start failed
<mgz> voidspace: that's fixed
<voidspace> mgz: so it was a problem with CI, not with the branch?
<mgz> lxc update broke wily
<katco> fwereade: through emergence, you have now tested that the workers we expect to run, are
<fwereade> katco, you seem to be arguing from a position that assumes that we should depend on mutable global state in the first place
<voidspace> mgz: ah, damn
<mgz> sinzui had to roll back the package
<mgz> but the testing is happening for real
<katco> fwereade: yes, i prepended all of this by saying "if we were aligned"
<voidspace> mgz: so we'll get a run in due course
<voidspace> mgz: thanks
<voidspace> frobware: ^^^
<fwereade> katco, ok, I did not think that "globals are bad mmmkay" was a controversial position
<katco> fwereade: channeling mr. mackey there :)
<fwereade> katco, absolutely :)
<katco> fwereade: but we could just pass the list in... doesn't have to be global
<fwereade> katco, right, and if we did, that would certainly be more testable and less upsetting
<frobware> dimitern: so with your patch, some fiddling with the lxd code and adding an 'ifup -a' as a new run command in cloud-init the lxd containers' interfaces are all up!
<ericsnow> fwereade: note that in our feature branch that worker registry is gone
<katco> fwereade: under the tests i have laid out, it would be no easier to test. but yes, less upsetting
<fwereade> katco, but we're still talking about a set of interdependent components
<frobware> dimitern: first container without `ifup -a', second with. http://pastebin.ubuntu.com/15415766/
<fwereade> katco, the agent/model package is about defining the dependencies and interactions between them
<fwereade> katco, and many parts of that are internal details
<fwereade> katco, the api caller might be called "api-caller", but if you depend on that from half a codebase away Bad Things will happen
<katco> fwereade: and we arrive at the real point of contention: IoC vs. not
<dimitern> frobware, awesomesauce!
<katco> and as if to signal something, my cat just puked at my feet
<katco> brb
<ericsnow> fwereade: ideally I'd like to eliminate all the global registries (and component/all) and accomplish the same thing in the correct places (under cmd/juju, cmd/jujud, etc.)
<frobware> dimitern: we should sync with stefan (and raise some bugs to track) for some of the timing issues and for the cases where the interfaces don't come up.
<fwereade> katco, I think I am in favour of IoC... do I seem not to be?
<fwereade> brb also, talk when you're back
<dimitern> frobware, yeah
<ericsnow> fwereade: the challenge is refactoring code so that we pass the necessary "registries" around
<fwereade> ericsnow, this is absolutely true
<fwereade> ericsnow, and I think it's more or less what I've been working towards with all the dependency.Engine work
<ericsnow> fwereade: exactly
<ericsnow> fwereade: that's what helped the concept click for me :)
<fwereade> ericsnow, cool :D
<cmars> katco, is it possible to select a lxd profile when deploying, by using constraints=... ?
<katco> cmars: i don't believe so. jam and tych0 have been doing the latest work on this though
<ericsnow> cmars: not likely (unless jam or tych0 have added that)
<cmars> katco, ericsnow ok thanks
<tych0> not that i know of :(
<cmars> would such a thing fit into constraints? dang it'd be useful to be able to do that -- adding bind mounts stuff, or making containers privileged
<cmars> i'd consider hacking away on this... i know y'all are busy
<fwereade> cmars, that feels like a placement directive? can be passed through to the provider
<katco> cmars: you can still kind of get this behavior by changing what image ubuntu-<series> points to
<dimitern> frobware,
<dimitern> frobware, sorry wrong window :)
<fwereade> cmars: provider/ec2/environ.go:370 for an example
<cmars> haven't seen the placement stuff yet, yeah, that makes sense
<fwereade> cmars, constraints want to be generic, placement is for provider-specific trapdoors
<fwereade> cmars, generally accessed via --to
<fwereade> katco, so, IoC?
<cmars> fwereade, that'd put all the provisioned machines in the same profile.. which would work, but what I really want is to deploy one service into a privileged container while leaving the others unprivileged
<katco> fwereade: oh, right... sorry
<fwereade> cmars, deploy myservice --to lxd:profile=myfancyprofile?
<fwereade> katco, np
<cmars> oh, works on deploy as well as bootstrap. ok, nice!
<fwereade> katco, I *think* we're both in favour of ioc?
<katco> fwereade: no, you don't seem to be against IoC. but hard-coding things does not seem to be IoC. maybe i've missed the point; is it that you pass around manifolds?
<fwereade> katco, the way I see it, a Manifolds func is explicitly responsible for accepting some sort of config that applies to the responsibility it holds, and returning some set of workers that will meet those responsibilities
<fwereade> katco, if the set of workers were independent, I would agree that a dynamic registry would probably be ok
<katco> fwereade: so the crux of your argument is that there is a central spot to codify that dependency graph, and you can't test that if outsiders can register things
<fwereade> katco, basically, yes
<fwereade> katco, and it makes me sad that code changes many packages away could, e.g. induce a cycle and render the whole lot useless
<katco> fwereade: tbh i would have to read through the code again to comment on that point
<fwereade> katco, I think it's a broadly applicable argument? I want local changes to cause local test failures, and the bigger/more-integrationy the test the more distance-risk I take on in exchange for greater certainty about macro behaviour
<fwereade> ...er, if that makes any sense at all outside my own head
<dimitern> frobware, I solved the case of the missing mac address
<frobware> great
<ericsnow> fwereade: FWIW, I agree that "configuration" of the application, which is how I see this, is ideally all in one place
<ericsnow> fwereade: which is what I think you are arguing for
<dimitern> frobware, it turned out much to my surprise that `x := y` is not the same as `x := make([]T, len(y)); copy(x,y)`
<fwereade> katco, at a high level, yes -- I think the problem I am solving with dependency.Engine is "nobody understands what workers are running at any given time"
<fwereade> katco, I have been working towards making that more explicit
<katco> fwereade: yeah i agree with that
<fwereade> katco, so, from that perspective, you can see why I'm having trouble with components
<katco> fwereade: oh yes, certainly! i think i'm arguing from some kind of ideal future state that doesn't actually exist yet
<katco> fwereade: i don't feel the problem is intractable
<fwereade> katco, would you consider it a slur if I accused the component approach of being reminiscent of aspect-oriented programming?
<katco> fwereade: but i acknowledge and respect that you are much closer to the problem than i am, so might have a more valid opinion :)
<katco> fwereade: no not a slur... i don't understand the comparison though
<fwereade> katco, because it feels to me like it has similar strengths (clear separation of unrelated concerns) and weaknesses (pain when those concerns turn out to intersect in subtle ways)
<fwereade> katco, just a thought, may be nonsense, probably not a useful comparison if it doesn't immediately speak to you
<katco> fwereade: mm... i think it's different in that we're trying to advocate codifying those interactions instead of letting them happen naturally
<katco> fwereade: i.e. if i decorate a method with a bunch of attributes, i may not know how it will act, but if i write a function that says X interact with Y, i can write a test against that
<katco> fwereade: i suppose i can write a test for how attributes are combined as well
<katco> fwereade: hey, i love talking about this stuff (and it's important to continue doing so) but i have stuff i need to take care of for the release
<frobware> dimitern: so it was lost in the pointer copy from before? just impl/patch didn't actually fix it?
<fwereade> katco, likewise -- and I'm approaching EoD -- but I do need to figure out how to weld these two parts together if I want to merge MADE-model-workers
<katco> fwereade: when do you need that merged?
<fwereade> katco, heh, now it's done, as soon as I can, but I know there's no shortage of competition
<dimitern> frobware, I'm not sure what was wrong with the patch, but using make + copy vs := worked
<katco> fwereade: so you're aiming to get it in for 2.0?
<fwereade> katco, last I saw it was on the list
<voidspace> frobware: when will you have confirmation of the induction sprint dates?
<katco> fwereade: well if it's just 1 test left, you could land the branch as is and fix the test before the ~8th
<frobware> voidspace: alexis raised the req for april 18->22 - I haven't seen or heard anything to suggest that it will not be those dates
<voidspace> frobware: ok, cool - thanks
<fwereade> katco, well, it's more than that -- it's that it's a replacement of the per-model workers
<frobware> voidspace: it's mine and dooferlad's birthday that week, so it's a rave!
<katco> fwereade: oh, so functionality is blocked as well?
<fwereade> katco, and, well, there's this *State-dependent thing that's just turned up and is partly there because the magic registration mechanism let the implementer avoid seeing the WTF DO NOT DO THIS comment in startEnvWorkers
<fwereade> katco, yes, my branch will not currently run that worker
<frobware> ericsnow, katco: would love some feedback on this if you have bandwidth: https://github.com/juju/juju/pull/4789/commits/417c8fd67b40eac734a815707104bfc3c8657af5
<frobware> ericsnow, katco: I haven't looked at the lxdclient before, but was trying to make multi-nic containers work with maas/spaces
<voidspace> frobware: hehe, cool
<fwereade> katco, and I would like it to, but adding package-level component registration to agent/model is very much at odds with the purpose of the package
<katco> fwereade: ok, well then we need to discuss syncing up this weekend to meet monday's deadline
<ericsnow> fwereade: don't forget that once feature-resources is merged back in, the whole model-worker-registry thing is gone
<katco> frobware: will try and tal in a bit
<fwereade> ericsnow, so it's just charmrevisionupdater doing both jobs, and using the api and everything? that seems sane
<ericsnow> fwereade: right
<frobware> katco: appreciated
<fwereade> ericsnow, ok, then that is great
<ericsnow> fwereade: it was Ian's idea :)
<katco> fwereade: unblocked?
<fwereade> ericsnow, yeah, they're pretty much the same job
<voidspace> frobware: we could make it literally a rave... http://www.fabriclondon.com/club/listing/1238
<fwereade> katco, not sure
<voidspace> dimitern: http://www.fabriclondon.com/club/listing/1238
<ericsnow> fwereade: yep :)
<fwereade> katco, don't think I can merge without a functioning worker in the short term
<frobware> katco: we may land that on maas-spaces-multi-nic-containers anyway to drive through any integration issues.
<ericsnow> fwereade: it may be worth cherry-picking the particular merge commit from the feature-resources branch?
<katco> frobware: seems reasonable, but appreciate the heads-up. in that same light, heads up tych0 and jam ^^^
<dimitern> voidspace, that's a great idea actually
<voidspace> dimitern: :-)
<fwereade> ericsnow, it very probably would, I will have a look for that
<dimitern> voidspace, haven't been to a party like that in london, so count me in :)
<fwereade> ericsnow, katco: so, yes, unblocked, thank you :).
<fwereade> ericsnow, katco: but I think this is a near-miss and we'll have worse interactions before too long, so let's talk more about this right after the release madness
<ericsnow> fwereade: I believe it is 8758089b35c3120b52b10da6d93514ac0d992f6f (PR #4539)
<katco> fwereade: yes, totally agree!
<ericsnow> fwereade: +1
<katco> fwereade: we will be doing a retro on the component oriented approach at our next sprint
<fwereade> ericsnow, katco: cool
<katco> fwereade: and i'm hoping we can get everyone to close their laptops and participate :)
<fwereade> katco, whole-team sprint? excellent, +1
<katco> fwereade: a juju core sprint
<natefinch> arg... I really really really wish that channel was part of a charm.Url
<ericsnow> natefinch: +1
<natefinch> so... uh... do we store the channel of a charm that a service is using?
<ericsnow> natefinch: I doubt it
<natefinch> ericsnow: so, how do we know what channel to look in for updates?
<ericsnow> natefinch: though I expect we should have it for the charmrevisionupdater worker
<ericsnow> natefinch: yep
<natefinch> ericsnow: that's what I'm looking at right now
<natefinch> ericsnow: since LatestCharmInfo now requires a channel
<natefinch> fwereade, rick_h__: either of you know if juju-core respects channels at all, and if so, how?  Seems like we need to remember the channel from which we deployed a service... but I don't see us actually storing that anywhere
<katco> natefinch: ericsnow: i would have expected that to be part of the work of landing --channel into the deploy command
<ericsnow> katco: +1
<natefinch> katco: there's no mention of channel in deploy.go: https://github.com/juju/juju/blob/master/cmd/juju/service/deploy.go
<katco> natefinch: ericsnow: when we discussed yesterday, it sounded like the patch hadn't been landed yet
<natefinch> katco: I guess I'll just hardcode stable and we can fix it when that other stuff lands
<ericsnow> katco: correct, still not in master
<ericsnow> natefinch: that's consistent
<rick_h__> natefinch: no, it's on the list but it's not been done yet.
<rick_h__> natefinch: agree with the assertion that we need to remember what channel the services is following
<frobware> voidspace: latest drop-1.8 looks better.
<katco> rick_h__: sounds like rogpeppe2 is doing it. can we expect his patch to store that somewhere and update the existing resources code to take advantage of it?
<voidspace> frobware: ah, cool
<rick_h__> katco: I'd hope so yes. However, I've not seen the code/etc to be 100% on it.
<frobware> voidspace: all have existing bugs registered against the failure and one was a timeout
<voidspace> frobware: there's a xenial unit test failure based on ordering
<voidspace> frobware: I bet that's a dictionary iteration order change
<voidspace> ah, existing bug
<voidspace> frobware: yeah, that's much better
<voidspace> frobware: when is the plan to land this?
<natefinch> rick_h__: do you know why we didn't just add a channel field to the charm.Url struct?  it's hugely disruptive to the codebase to change basically everywhere we take a charm.Url to now take a charm.Url + channel
<rick_h__> natefinch: I'm guessing because a charm can have more than one? e.g. a single charm revision can be development, then made stable, etc
<rick_h__> natefinch: but I'm only guessing, I'm not in the implementation calls right now and EU folks would know better than I
<natefinch> rick_h__: ok, no problem.  Just wondered if you were there for the discussion.
<katco> natefinch: there should be a channel flag on upgrade-charm too, correct?
<rick_h__> katco: natefinch yes, but only with --switch to be clear it's changing what it's tracking
<natefinch> ^ that
<katco> k
<natefinch> in general the channel shouldn't change, and therefore only needed at deploy time
<natefinch> (and even then I presume we'll default to stable, so usually people won't ever need to think about channels)
<rick_h__> natefinch: correct
<rick_h__> natefinch: channels are a pro-use only tool, hidden from normal users
<mup> Bug #1559233 opened: juju run gets 'Permission denied (publickey)' in models other than the controller model <juju-core:New> <https://launchpad.net/bugs/1559233>
<frobware> voidspace: whenever we can get a clean CI - death to long lived feature branches
<dimitern> frobware, here it is: https://github.com/dimitern/juju/tree/multi-nic-master-fixes
<frobware> dimitern: looking
<voidspace> frobware: cool
<frobware> dimitern: the bridge script tests fail in multi-nic-containers atm
<dimitern> frobware, really?
<frobware> dimitern: ok, going to push your branch as maas-spaces-multi-nic-containers-with-master. Viva la tab completion.
<frobware> dimitern: was a bit surprised too
<dimitern> frobware, great! thanks
<frobware> dimitern: ah, no. sorry it was: FAIL: container_userdata_test.go:120: UserDataSuite.TestNewCloudInitConfigWithNetworks
<dimitern> frobware, ah, yeah - that's fixed in the branch above
<frobware> dimitern: bleh. too much going on.
<dimitern> :)
<frobware> dimitern: confirming tip of your branch is 65322f6d483ab0ea6e19fc466af9b60096f5174e
<dimitern> frobware, just a sec
<dimitern> frobware, I'm waiting for the final make check to pass first
<dimitern> frobware, and it did find one issue - I'll fix it and push an update
<dimitern> map ordering related, not in our code, but still - http://paste.ubuntu.com/15416689/
<natefinch> anyone recognize this: provider/dummy/environs.go:316: "github.com/juju/testing".MgoServer.Reset() used as value
<frobware> dimitern: I've seen a bug for that
<frobware> dimitern: tip is now 0cd8eb5b02a9430e31430b65acff595e7dfe0f71?
<dimitern> frobware, and I had included it, but perhaps the last cherry-pick reverted it? dunno
<frobware> ?
<dimitern> frobware, yep, I was just about to paste the commit
<frobware> dimitern: what did you revert?
<natefinch> nvm, looks like a dependencies problem... because of course it is
<dimitern> frobware, I mean https://github.com/juju/juju/pull/4787 might have caused the test failure
<dimitern> which I cherry-picked from master
<frobware> right
<frobware> ok, pushing now.
<dimitern> +1 go for it
<frobware> dimitern: see natefinch's comment about deps. didn't you see something there today?
<dimitern> frobware, https://github.com/juju/juju/pull/4759 fixed the map ordering issue in 2 out of 3 providers that have it - i.e. ec2 still have it after that PR, and my last commit fixed ec2 as well
<dimitern> no, I was having issues with dependencies.tsv after I tried to foolishly rebase after I did merge master
<frobware> dimitern:  * [new branch]      maas-spaces-multi-nic-containers-with-master -> maas-spaces-multi-nic-containers-with-master
<natefinch> I just had to update my dependencies.tsv to point to the head of juju/testing... not sure how I got the code change in core and not the deps change... might have been a simple merge problem
<dimitern> merging dependencies.tsv starts to get so painful it needs tooling around it
<dimitern> and I'm not just talking about godeps -N
<dimitern> frobware, awesome!
<frobware> dimitern: calling it a week, my lxd patch/branch is out there if you want to take a look, perhaps merge into one of our branches. :)
<dimitern> frobware, I'll take care of it
<dimitern> frobware, :) enjoy the holiday
<frobware> dimitern: I'll sort out some stuff for Christian tomorrow morning, but I will be incognito thereafter. \o/
<dimitern> ok, beer o'clock
<mgz> dimitern: hm, me too
<dimitern> :) cheers mgz
<fwereade> ericsnow, I'm confused about the various Connect methods around charmcmd, and what if anything should be using a *cmd.Context
<fwereade> ericsnow, is that something that came after that commit?
<natefinch> brb rebooting
<ericsnow> fwereade: in cmd/juju/charmcmd/store.go?
<fwereade> ericsnow, yeah, and extending into resource/ I think?
<katco> fwereade: that was cmars team trying to stop needing to launch a browser to sign in i think
<ericsnow> fwereade: that
<cmars> ericsnow, missed some of the scrollback there, just came back from a reboot. *cmd.Context was needed in a few places for terminal interaction, to log in on the command line
<ericsnow> fwereade: ^^^
<katco> natefinch: https://github.com/CanonicalLtd/juju-specs/blob/master/resources/features/charmer-query-charm-metadata.feature#L32-L41
<fwereade> ericsnow, katco, cmars, thanks, I'll see what needs to go where :)
<katco> natefinch: i think we missed this. list-resources is both on the charm command and the juju charm command
<natefinch> katco: damn
<natefinch> katco: I mean, of course that makes sense
<katco> natefinch: finish up the juju side of things and we'll talk
<natefinch> katco: it's pretty easy to code up, at least
<natefinch> katco: kk
<katco> marcoceppi: i just can't stop pinging you
<katco> marcoceppi: we messed up. we actually do have 1 more thing to land into the charm command
<mgz> to the tune of "I can't stop loving you"?
<natefinch> prtty sure that's a song
<natefinch> lol
<katco> lol
 * natefinch hi-5's mgz
<katco> to the tune of the CSI theme?
<natefinch> katco: https://www.youtube.com/watch?v=mzE8cyu9Vf8&t=1m4s
 * perrito666 is overninetied by the video
<katco> https://www.youtube.com/watch?v=QkM-r4ZdX1o&list=PL0Yz4dINw_VIJ0bkHJe3ZV5ktsFsYxL2K
<natefinch> pea sized hail falling here
<natefinch> katco: good choice for coding brand new crap on the day of the deadline
<katco> natefinch: you know it. rage against that machine baby
<mup> Bug #1559277 opened: cannot set initial hosted model constraints: i/o timeout <bootstrap> <ci> <juju-core:New> <juju-core admin-controller-model:Triaged> <https://launchpad.net/bugs/1559277>
<perrito666> can I force the deletion of an old lxd controller?
<katco> perrito666: with lxc delete <id> --force
<perrito666> katco: since you are in it, what was the new juju destroy --force?
<perrito666> in it == answering my stupid questions
<katco> perrito666: oh thx for clarification :) uh i think destroy-controller although i don't see a --force
<perrito666> I recall sinzui mentioning something else the other day
<sinzui> perrito666: juju kill-controller will call lxd if the controller wont shutdown
<perrito666> tx
<perrito666> I insist, destroy sounds much harder than kill
<perrito666> dunno, destroy sounds like kill then burn the body
<perrito666> ERROR cannot obtain bootstrap information: Get https://10.0.3.1:8443/1.0/profiles: x509: cannot validate certificate for 10.0.3.1 because it doesn't contain any IP SANs
<katco> natefinch: where is the new location for the charm command patches?
<natefinch> katco: github.com/juju/charmstore-client
<perrito666> sinzui: ever had that fun error ^^
<natefinch> perrito666: I have that one one of mine
<perrito666> natefinch: ?
<natefinch> perrito666: I think it's the old lxd not playing well with new lxd
<perrito666> natefinch: any clue on how to solve it?
<natefinch> perrito666: I haven't had time to poke at it yet
<sinzui> perrito666: not with 2.0. I have seen it with 1.18 and 1.20. I recall x509 errors were associated with clock skew.
<perrito666> I clearly have the wrong lxd version
<perrito666> which one should I have
<natefinch> perrito666: well... you may have the right lxd version with an old lxd environment still running from the old lxd
<natefinch> perrito666: I think that's the case for me
<perrito666> natefinch: I cannot create a new  one so I think I have the wrong version
<perrito666> ERROR invalid config: can't connect to the local LXD server: json: cannot unmarshal number into Go value of type string
<natefinch> perrito666: ahh, yeah that is definitely the "you have the wrong lxd" error
<natefinch> perrito666: I think that means you have the one from the ppa... remove the ppa and remove lxd then install the standard one
<perrito666> hence my question, what is the right version :)
<natefinch> apt-get install that is
<mup> Bug #1559280 opened: creating hosted model config: opening model: endpoint: expected string, got nothing <bootstrap> <ci> <juju-core:New> <juju-core admin-controller-model:Triaged> <https://launchpad.net/bugs/1559280>
<mup> Bug #1559285 opened: creating hosted model config: opening model: storage-endpoint: expected string, got nothing <bootstrap> <ci> <juju-core:New> <juju-core admin-controller-model:New> <https://launchpad.net/bugs/1559285>
<fwereade> ericsnow, do you recall why charmcmd.CharmstoreClient became *charmstore.Client?
<ericsnow> fwereade: not precisely but I expect it was due to use elsewhere -> put in a common place
<katco> natefinch: ericsnow: fyi i'm working on the list-resources subcommand of charm
<ericsnow> katco: k
<natefinch> katco: cool
<natefinch> katco: too bad we can't just use the same implementation in both commands, since in theory they should be identical
<katco> natefinch: i was thinking it would be ideal if we could somehow share the formatting
<katco> natefinch: how did channel end up coming along for the ride in the charm command? looking at attach as the example
<natefinch> katco: channel is set on the csclient.Client
<katco> natefinch: but how does it get passed to that through the command?
<katco> natefinch: e.g. charm attach --channel unpublished foo
<katco> natefinch: i think i see... when the csclient.Client gets instantiated it takes in a *cmd.Context
<natefinch> katco: attach doesn't use channels, publish does, though
<katco> natefinch: ah ty
<mup> Bug #1559293 opened: show-controller fails <ci> <show-controller> <juju-core:Incomplete> <juju-core admin-controller-model:Triaged> <https://launchpad.net/bugs/1559293>
<marcoceppi> katco: I can't release anyways, charm store is still not deployed
<katco> marcoceppi: kk we will keep you posted... coding remaining bit up right now, but then has to be reviewed, etc. we can still land on monday, yeah?
<perrito666> marcoceppi: do you happen to know a charm that has status-history implemented?
<marcoceppi> perrito666: isn't status-history a CLI command?
<perrito666> I meant update-status
<marcoceppi> katco: yeah, but I don't like the closeness
<marcoceppi> perrito666: sure, a lot of layers have it, but none are like "traditional" charms
<katco> marcoceppi: me either
 * perrito666 is really having problems with this "dont drink coffee stuff"
<perrito666> marcoceppi: dont care, I need to  check on the excessive verbosity issue and need a charm with it installed
<marcoceppi> perrito666: sure, any of the big-data ones use it. let me get you one
<perrito666> tx
<katco> ericsnow: standup time
<mup> Bug #1559299 opened: cannot obtain provisioning script <bootstrap> <ci> <manual-provider> <juju-core:Incomplete> <juju-core admin-controller-model:Triaged> <https://launchpad.net/bugs/1559299>
<mup> Bug #1559305 opened: Process exited with: 1. Reason was:  () <bootstrap> <ci> <juju-core:Incomplete> <juju-core admin-controller-model:Triaged> <https://launchpad.net/bugs/1559305>
<mup> Bug #1559310 opened: bootstrap fails: "model is not bootstrapped" <bootstrap> <ci> <juju-core:New> <juju-core admin-controller-model:Triaged> <https://launchpad.net/bugs/1559310>
<mup> Bug #1559313 opened: admin-secret should never be written to the state <bootstrap> <ci> <juju-core:Incomplete> <juju-core admin-controller-model:Triaged> <https://launchpad.net/bugs/1559313>
<perrito666> trivial review anyone? http://reviews.vapour.ws/r/4233/
<mup> Bug #1559329 opened: Relation information is duplicated in status tabular format <juju-core:New> <https://launchpad.net/bugs/1559329>
<mup> Bug #1559329 changed: Relation information is duplicated in status tabular format <juju-core:New> <https://launchpad.net/bugs/1559329>
<perrito666> cmars: you are not here still, are you?
<perrito666> ouch I fixed a duplicate bug
<perrito666> cherylj: nice catch
<mup> Bug #1559329 opened: Relation information is duplicated in status tabular format <juju-core:New> <https://launchpad.net/bugs/1559329>
<perrito666> there is some irony there, a bug about duplicate status is a duplicate
<perrito666> heh
<cmars> perrito666, did i open a duplicate bug about duplicates?
<perrito666> you did
<cmars> maybe i should call it a week :\
<perrito666> and I fixed it, so if you want to check the 3 line fix before leaving Ill fix it
<perrito666> merge it*
<cmars> oh, awesome, got a link
<perrito666> http://reviews.vapour.ws/r/4235/
<perrito666> I was literally playing with vim syntax checkers with that file :p
<cmars> perrito666, looks good, but probably worth a test
<cmars> perrito666, for a followup?
<perrito666> definitely, tests in status take a couple of hours to write
<perrito666> status tests are like a coding speedbump
<mup> Bug #1559329 changed: Relation information is duplicated in status tabular format <juju-core:In Progress by hduran-8> <https://launchpad.net/bugs/1559329>
<katco> cherylj: sinzui: either of you still around? easy-peasy question/comment
#juju-dev 2016-03-19
<natefinch> perrito666: ship it
<natefinch> perrito666: re: http://reviews.vapour.ws/r/4233/
<ericsnow> natefinch: https://github.com/juju/charmrepo/pull/78 :)
<natefinch> lol charmstore_going_away.go
<katco> natefinch: what's the key in the map[string][]params.Resource that ListResource returns in charmrepo?
<natefinch> katco: the charmID.String()
<katco> natefinch: really?
<katco> 	var results map[string][]params.Resource
<katco> 	if err := c.Get(path, &results); err != nil {
<natefinch> katco: I keep meaning to update the comments on that method.. took me forever to figure it out too
<natefinch> katco: yeah
<katco> natefinch: i assumed it would be some http thing stuffed in there b/c it's parsing the response
<katco> natefinch: that is super intuitive.
<natefinch> katco: ....yeah
<ericsnow> natefinch, katco: hey, at least you don't have to make your own HTTP GET call...
<ericsnow> <wink>
<natefinch> haha
<katco> natefinch: ericsnow: i'm just renaming the "results" var to "charmID2resources"
<ericsnow> :)
<natefinch> katco: 👍
<natefinch> wow that is a tiny thumbs up
<katco> ericsnow: natefinch: someone give me a high five shipit
<katco> https://github.com/juju/charmrepo/pull/79
<natefinch> katco: can you update the comment to describe it, too?
<katco> natefinch: what would you like the comment to say
<natefinch> katco: like, comment on the function that is
<natefinch> // ListResources retrieves the metadata about resources for the given charms. It returns a map of charm URL to resources.
<katco> natefinch: done
<natefinch> katco: shipit
<perrito666> ok EOW, have a nice night
<katco> perrito666: tc
<natefinch> see ya perrito666
<natefinch> ahhh... charm.Url *has* a channel
<ericsnow> natefinch: I wouldn't count on that sticking around
<natefinch> we should bring it up, though... if it's already there, why not use it?
<ericsnow> +1
 * katco cries quietly
<katco> the full stack tests
<natefinch> right?
<katco> suites... suites upon suites... all the way down
<natefinch> the real kicker is that charmrepo tests import charmstore, and the charmstore imports charmrepo
<natefinch> and both charmrepo and charmstore use dependencies.tsv.... so... good luck
<cherylj> katco: did you still need something?
<katco> cherylj: yeah, sent email
<katco> cherylj: ty for checking in ^.^
<cherylj> katco: is feature-resources the only branch your team is aiming to get merged?
<katco> cherylj: yep, that's it
<cherylj> katco: awesome, thanks.  I'll add it to the list of branches ready for merging (pending CI run)
<katco> cherylj: ty! please keep us updated on the results
<cherylj> sure thing
<mup> Bug #1559381 opened: Container not on the same network as host <ci> <lxc> <lxd> <network> <regression> <juju-core:Incomplete> <juju-core maas-spaces-multi-nic-containers:Triaged> <https://launchpad.net/bugs/1559381>
<mup> Bug #1559382 opened: windows cannot be deployed with  maas-spaces-multi-nic-containers <ci> <maas-provider> <regression> <windows> <juju-core:Incomplete> <juju-core maas-spaces-multi-nic-containers:Triaged> <https://launchpad.net/bugs/1559382>
<mup> Bug #1559400 opened: TestManageModelRunsRegisteredWorkers is flaky <intermittent-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1559400>
<mup> Bug #1559402 opened: cmdControllerSuite.TestCreateModel is flaky <intermittent-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1559402>
#juju-dev 2016-03-20
<mup> Bug #1559701 opened: kill-controller manual provider broken <juju-core:Triaged> <https://launchpad.net/bugs/1559701>
<mup> Bug #1559704 opened: can't load package: package github.com/axw/fancycheck: <ci> <regression> <unit-tests> <juju-core:Incomplete> <juju-core maas-spaces-multi-nic-containers:Triaged> <https://launchpad.net/bugs/1559704>
<mup> Bug #1559706 opened: TestFinalizeCredentialInvalidFilePath fails on windows <blocker> <ci> <regression> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1559706>
<mup> Bug #1558158 changed: Restore fails with no instances found <backup-restore> <ci> <regression> <juju-core:Fix Released by cherylj> <https://launchpad.net/bugs/1558158>
<mup> Bug #1559708 opened: TestMinVersionLocalCharm unexpected non-nil error <ci> <intermittent-failure> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1559708>
<mup> Bug #1559712 opened: Backup-restore: could not start prepare restore mode <backup-restore> <ci> <intermittent-failure> <juju-core:Incomplete> <juju-core made-model-workers:Triaged> <https://launchpad.net/bugs/1559712>
<mup> Bug #1559715 opened: restore-backup is unreliable <backup-restore> <ci> <destroy-controller> <destroy-environment> <regression> <juju-ci-tools:Triaged> <juju-core:Triaged> <https://launchpad.net/bugs/1559715>
<mup> Bug #1559730 opened: TestMongoErrorNoCommonSpace timed out <intermittent-failure> <network> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1559730>
<deanman> The default vagrant box for juju does not work behind proxy. Where can i have a look how is built on the CI server to make appropriate adjustments ?
<_thumper_> :-(
<thumper> menn0: looking at merging master into model-migration
<thumper> one simple conflict
<thumper> but state/toolstorage doesn't exist any more
 * thumper goes to see where it went
<thumper> found it
<thumper> state/binarystorage
<thumper> oh ffs
<thumper> where did version go?
<mwhudson> juju/version!
<mwhudson> nate emailed about that
 * thumper hasn't read emails yet
<thumper> mwhudson: found it btw...
 * thumper headdesks
 * thumper takes a deep breath
<thumper> someone has removed refactorings I added to state for model migrations to work
<menn0> thumper: this is one reason why I don't like feature branches any more
<thumper> yeah...
<thumper> definitely has drawbacks
<menn0> thumper: it's too easy for stuff like that to happen in the mega merges that result
 * thumper nods
<thumper> I think I'm beginning to side with that
<menn0> thumper: I was a big fan of the idea when we started using them but I definitely think it's been a failed experiment
 * thumper sobs quietly in the corner
<thumper> ipaddressesC is gone
<menn0> thumper: that will affect model descriptions right?
<thumper> still digging
<thumper> dealing with changes to read only users right now
<menn0> thumper: which reminds me... instances have status now
<menn0> thumper: model descriptions need to be updated to handle that
<menn0> thumper: I've done the bare minimum to get it compiling again
<thumper> \o/.
<menn0> thumper: but it skips over instance status
<thumper> sounds like a bug fix
<menn0> thumper: there's a card for that
<menn0> thumper: yep :)
<thumper> oh...
<thumper> good
<thumper> I hadn't done ipaddressesC yet
<thumper> menn0: ugh...
<thumper> how we retrieve tools has all changed too
<thumper> the tools migration doesn't currently compile
 * thumper is still investigating
<thumper> got it now
<thumper> running tests
<thumper> menn0: this is painful
<menn0> thumper: need help?
<thumper> nah
<thumper> I think I have it now
<thumper> just lots of paper cuts
 * thumper found some more cuts
<thumper> ok...
<thumper> I have now fixed description, migration, and state to work with changes
<thumper> now running full test suite
<thumper> I think I need another coffee
<thumper> at least I have time...
<thumper> huh...
<thumper> found an interesting error in the cmd/jujud/reboot tests...
<thumper> 	fork/exec /usr/bin/virsh: too many open files
<thumper> 	github.com/juju/juju/cmd/jujud/reboot/reboot.go:96: failed to list containers
<mup> Bug #1559789 opened: lxd container not removed <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1559789>
<thumper> menn0: just submitted branch to merge master into model-migrations
 * thumper goes to look at the failures for MADE-model-workers
<menn0> thumper: great stuff
<mup> Bug #1559789 changed: lxd container not removed <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1559789>
<mup> Bug #1559789 opened: lxd container not removed <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1559789>
<thumper> ...
<thumper> wat?
 * thumper digs
<rogpeppe3> menn0, axw, thumper: fairly simple change in preparation for something which I need to land tomorrow... fancy a review? http://reviews.vapour.ws/r/4249/
<thumper> o/ rogpeppe3
<rogpeppe3> thumper: o/
<rogpeppe> thumper: how's tricks?
<thumper> I want to smash my face into this desk
<thumper> you?
<rogpeppe> thumper: that'll be tomorrow morning
<rogpeppe> thumper: tonight i'm on the sofa, so no desk available
<axw> rogpeppe: LGTM
<rogpeppe> axw: ta!
<rogpeppe> axw: oh well, looks like we can't land anything on master anyway
<axw> aw, crap
<rogpeppe> axw: thanks for the review anyway
<axw> rogpeppe: np
 * rogpeppe hits $$merge$$ anyway, on the off-chance...
<rogpeppe> bah humbug
<thumper> ugh...
<thumper> I have to work out how to start the windows vm again
<rogpeppe> i wish that master was gated on windows tests passing
<rogpeppe> if anyone manages to unblock master, you'd be doing me a great favour by hitting $$merge$$ on https://github.com/juju/juju/pull/4807. but i guess it probably can't get unblocked for about 4 days now until CI runs again... :-\
<axw> wallyworld: you sent me a PGP-encrypted blank email? :)
<wallyworld> ffs, i'll resend
<thumper> :(
<thumper> I have forgotten the password I used for my windows vm
<thumper> is it a standard admin password?
<rogpeppe> thumper: looks like line 609 should be 			"file": filepath.FromSlash("/some/file"),
<rogpeppe> thumper: with an associated change in the test condition
<rogpeppe> thumper: 'cos under windows, filepath.IsAbs("/foo") will return false
<thumper> rogpeppe: is this for a blocker?
<thumper> I've not looked
<rogpeppe> thumper: yeah
<thumper> rogpeppe: ta, I'll get on it
<rogpeppe> thumper: although... the filepath code says i'm wrong. maybe in an older version of Go.
<thumper> it would be great to remember this password...
<rogpeppe> thumper: ah no it doesn't. under windows, /some/file isn't absolute because it doesn't mention a drive.
<rogpeppe> thumper: so my fix is wrong, but in the right general direction.
<rogpeppe> thumper: i guess you could just use a known-absolute path such as the result of os.TempDir(). Or have a windows-specific test. Ugh.
<thumper> wallyworld: https://bugs.launchpad.net/juju-core/+bug/1559706
<mup> Bug #1559706: TestFinalizeCredentialInvalidFilePath fails on windows <blocker> <ci> <regression> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1559706>
<thumper> wallyworld: this is blocking master
<thumper> wallyworld: can you look at it?
<rogpeppe> thumper, wallyworld: i've commented on the bug
<thumper> rogpeppe: cheers
<wallyworld> thumper: already am
<thumper> wallyworld: thanks
<thumper> wallyworld: what's my windows password, I've forgotten
<rogpeppe> wallyworld, thumper: thanks for looking at this. it would make me a lot happier if juju was unblocked within the next 7 hours :)
 * rogpeppe goes to bed
<wallyworld> it will be
<rogpeppe> wallyworld, thumper, axw, menn0: g'night all
<wallyworld> see ya
<axw> night rogpeppe
<thumper> night
<thumper> wallyworld: has someone recently been doing work on ec2 security groups?
<wallyworld> not that i am aware
<wallyworld> but we have always had issues with how we have used them IIRC
<thumper> wallyworld: any ideas? http://reports.vapour.ws/releases/3788/job/functional-ha-backup-restore/attempt/3476
<wallyworld> the security group spam?
<wallyworld> that's started showing up of late
<wallyworld> i think it's a circular dependency thing
<wallyworld> it also happens in master IIAMN
<thumper> menn0: for the good news, it seems that none of the failures on MADE-model-workers are specific to that branch
<thumper> just a bunch of other intermittent test failures
<thumper> however the bad news is that we have a bunch of intermittent test failures
<thumper> menn0: ping me when you have some time to talk about what next
<axw> wallyworld: gotta go downstairs for a while, fix for azure is up for review
<wallyworld> awesome ty
<axw> wallyworld: has been tested live
<wallyworld> even betta
<wallyworld> thumper: trivial fix http://reviews.vapour.ws/r/4251/
 * thumper looks
<thumper> wallyworld: who deletes the temp dir?
<wallyworld> thumper: c.MkDir() is used as the root path
<wallyworld> so it gets cleaned up automatically
<thumper> ok, cool
<thumper> shipit
<wallyworld> ta
<menn0> thumper: was having lunch. back now.
<thumper> menn0: hangout?
<davecheney> thumper: isn't today otago day ?
<thumper> davecheney: yes
<davecheney> so why are you working ?
<thumper> davecheney: but due to deadlines, busy things... I'm working today
<thumper> will swap later
<davecheney> right,
<davecheney> sorry, i should have come to stand up then
<davecheney> but I thought it would just be me
<thumper> np
 * menn0 and mwhudson aren't in otago so davecheney wouldn't have been by himself anyway
<davecheney> otago isn't the whole country
<davecheney> is it only the south island ?
<thumper> only the bottom bit of the south island
<thumper> and not even all of that
<thumper> but has waigani and me
<davecheney> right, dunedin day, got it
<davecheney> did I mention it was ashfield day on tuesday ? :)
#juju-dev 2017-03-13
<junaidali> N
<mup> Bug #1672306 opened: unit stuck executing update-status <juju-core:New> <https://launchpad.net/bugs/1672306>
<mup> Bug #1672306 changed: unit stuck executing update-status <juju-core:New> <https://launchpad.net/bugs/1672306>
<mup> Bug #1672306 opened: unit stuck executing update-status <juju-core:New> <https://launchpad.net/bugs/1672306>
<Hetfield_> any of you can log-in to launchpad?
<Hetfield_> i'm having error 500 since days
<hoenir> hey guys
<hoenir> can I add storage with the juju add-storage in  the controller  for example?
<hoenir> or just in units?
<hoenir> can anyone tell ?
<rick_h> hoenir: just in applications atm. There's a need to add that kind of charm-like support to the controller but it's not there atm
<mup> Bug #1669729 changed: Updates to MAAS machine data propogated to Juju <canonical-bootstack> <juju:Triaged> <https://launchpad.net/bugs/1669729>
<mup> Bug #1670499 changed: maas 2.1.3 nodes do not deploy, Juju reports 'started but not deployed' <cdo-qa-blocker> <juju:Incomplete> <MAAS:Incomplete> <https://launchpad.net/bugs/1670499>
<mup> Bug #1670499 opened: maas 2.1.3 nodes do not deploy, Juju reports 'started but not deployed' <cdo-qa-blocker> <juju:Incomplete> <MAAS:Incomplete> <https://launchpad.net/bugs/1670499>
<mup> Bug #1670499 changed: maas 2.1.3 nodes do not deploy, Juju reports 'started but not deployed' <cdo-qa-blocker> <juju:Incomplete> <MAAS:Incomplete> <https://launchpad.net/bugs/1670499>
<mup> Bug #1666396 changed: Missing txn-revnos in mongodb leads to missing status updates <canonical-is> <juju-core:Won't Fix> <https://launchpad.net/bugs/1666396>
<mup> Bug #1671733 changed: harvest mode setting not honoured by destroy-model <sts> <juju:Invalid> <https://launchpad.net/bugs/1671733>
<axw> wallyworld: hey, are you back today? are we doing the earlier standup (now) yet?
<wallyworld> axw: yeah, 3/4 back. wasn't going to do the earlier one today. heather was ok to try it as is and see how we go i think. we can confirm tomorrow
<axw> wallyworld: ok, ttyl then
<mup> Bug #1672306 changed: unit stuck executing update-status <juju-core:Won't Fix> <https://launchpad.net/bugs/1672306>
#juju-dev 2017-03-14
<axw> menn0: are you around? if so, can you please review https://github.com/juju/description/pull/3?
<menn0> axw: i'm around. will take a look shortly.
<axw> cheers
<menn0> axw: well that was easy :)
<axw> menn0: :)
<axw> thanks
<axw> menn0: do you have merge rights on that repo?
<axw> cos I don't, and bot doesn't know about it
<menn0> axw: I don't either but I wonder if we can give ourselves rights
<menn0> axw: doens't look like it
<axw> guess I'll have to wait till thumper is better, and we should get him to assign merge rights to others
<menn0> axw: looks like thumper has been manually merging them
<axw> yep
<menn0> axw: +1
<axw> wallyworld: looks like you're an owner in the juju org. can you please see if you can give leads merge access to the juju/description repo later?
<wallyworld> axw: standup?
<axw> coming
<hml> wallyworld: the PR made it to review board: http://reviews.vapour.ws/r/6402/
<wallyworld> hml: thank you, will review
<hml> wallyworld: thanks
<axw> wallyworld: do you have merge rights on https://github.com/juju/description/pull/3 ?
<wallyworld> axw: sorry, got distracted, let me look
<wallyworld> axw: have should have write access now
<axw> wallyworld: thanks
<seyeongkim> juju restore-backup with -b option seems not working well. is there any example to run this command correctly? https://bugs.launchpad.net/juju/+bug/1671501
<mup> Bug #1671501: juju restore-backup to new state server fails or hangs <juju:New> <https://launchpad.net/bugs/1671501>
<anastasiamac> wallyworld: axw: tiny change, PTAL :D https://github.com/juju/juju/pull/7095
<axw> looking
<anastasiamac> axw: \o/
<anastasiamac> wallyworld: axw: did u ever have an issue with the pre-push hook? i have not but just came across bug 1669011 and m wondering if i was just lucky...
<mup> Bug #1669011: pre-push git hook is insane, rebuilds in a loop <juju:New> <https://launchpad.net/bugs/1669011>
<wallyworld> i haven't been bit by that
<axw> anastasiamac: erm nope, never seen that
<anastasiamac> axw: actually changed the wording to avoid repeating the same stuff as in contributing...
<anastasiamac> axw: we do have a ref to it, but obviously it comes too late in workflow..
<anastasiamac> i kept the original reference as it serves as troubleshooting :)
<axw> anastasiamac: LGTM
<axw> wallyworld: when you're free, please take a look at https://github.com/juju/juju/pull/7096
<wallyworld> sure, give me 5
<axw> no rush, I've got other stuff to do
<wallyworld> axw: swap you https://github.com/juju/juju/pull/7097
<axw> wallyworld: ok
<wallyworld> axw: how will the buildTxn loop react to validateRemoveOwnerStorageInstanceOps assertions failing? ie owner changes
<axw> wallyworld: if the owner changes, it'll cause the loop to repeat; we re-read the storage instance doc, and try again with the new owner
<wallyworld> righto, sgtm
<axw> wallyworld: can't test that bit of code yet, since storage is not yet detachable
<wallyworld> ok
<wallyworld> axw: and the new ref counting is geared towards future shared storage?
<axw> wallyworld: which bit is that?
<axw> there's existing ref counting, but some minor changes around it
<wallyworld>  +// increfEntityStorageOp returns a txn.Op that increments the reference
<wallyworld>  +// count for a storage instance for a given application or unit. This
<wallyworld>  +// should be called when creating a shared storage instance, or when
<wallyworld> func increfEntityStorageOp(st *State, owner names.Tag, storageName string, n int) (txn.Op, error) { is new
<axw> wallyworld: that's used when you do "juju add-storage". it'll also be used by attach-storage
<axw> wallyworld: the refcount is for a storage name, so we can track the number of storage instances assigned to a unit or application
<axw> wallyworld: that's in turn used to check the min/max storage constraints
<wallyworld> makes sense, thanks
<anastasiamac> wallyworld: axw: another trivial - PTAL: https://github.com/juju/juju/pull/7098
<wallyworld> lgtm
<anastasiamac> *\o/*
<kjackal> Hello, something has changed in the web API offered by juju. This GET request worked yesterday: https://api.jujucharms.com/v5/apache-analytics-pig/meta/stats Was the endpoint moved?
<axw> wallyworld: you may have already answered this, but are we (integrated dev/q teams) taking responsibility for maintaining qa.jujucharms.com now?
<axw> jenkins, etc.
<anastasiamac> axw: short answer is - yes :(
<anastasiamac> axw: it looks like bot is stuck..
<axw> anastasiamac: I think it's fine, we should be responsible for our own stuff. just wanted to check
<axw> anastasiamac: oh, good timing
<axw> I was just asking, so that's a coincidence
<wallyworld> axw: what anastasiamac said, we will take over everything
 * wallyworld runs away to soccer
 * anastasiamac imagines wallyworld taking over the world
<rogpeppe1> menn0-afk: i've responded to your review of https://github.com/juju/juju/pull/7092 BTW
<menn0-afk> rogpeppe1: ok cool. I'm not working right now but will take a look in an hour or so.
<rogpeppe1> menn0-afk: it's kinda priority that we land it soon BTW
<menn0-afk> rogpeppe1: understood
<rogpeppe1> menn0-afk: thanks
<menn0-afk> rogpeppe1: looking again now
<rogpeppe1> menn0-afk: tyvm
<menn0-afk> rogpeppe1: is "juju login bob" really an appropriate example now?
<rogpeppe1> menn0-afk: i think i've addressed your concerns
<rogpeppe1> menn0-afk: ha, not!
<rogpeppe1> menn0-afk: also some other cruft left in there i see
<menn0-afk> rogpeppe1: I'm also suggesting some tweaks to the first paragraph (will do that on the review)
<rogpeppe1> menn0-afk: i've just pushed up some changes that tweak the examples and remove the references to the unsupported env var
<menn0-afk> rogpeppe1: cheers
<menn0-afk> rogpeppe1: review done
<rogpeppe1> menn0-afk: ta!
<menn0-afk> rogpeppe1: I've suggested a change to the first paragraph and there's one bit of debug logging to remove (as per martin)
<menn0-afk> rogpeppe1: all good otherwise
<rogpeppe1> menn0-afk: could you "approve with changes made" now, so i can land it when i've addressed your suggestions?
<menn0-afk> rogpeppe1: I already approved
<rogpeppe1> menn0-afk: ah, github shows some things automatically but not others...
<menn0-afk> rogpeppe1: the list of reviewers in the RH sidebar shows ticks and crosses for reviewers
<menn0-afk> rogpeppe1: the discussion thread can be a bit confusing
<rogpeppe1> menn0-afk: yeah. and it's not always updated automatically.
<rogpeppe1> menn0-afk: landing now...
 * rogpeppe1 hopes the bot isn't still stuck
<hoenir> so I need an application running to add storage to it ?
<hoenir> can anyone recommend me a charm that is lite and supports storage?
<anastasiamac> hoenir: the best person to ask would b axw, and he may be eod atm.. could u maybe ask on juju-dev mailing list?
<tvansteenburgh> anastasiamac: good morning, can I trouble you with my p2.xlarge problem again?
<gsamfira> hoenir: postgresql has this option I think --> https://jujucharms.com/postgresql/129
<jam> morning juju land
<wpk> morning
<wpk> well, afternoon
<jam> hi wpk
<jam> it looks like I'm stuck on the road for a day longer
<jam> My flight was cancelled after waiting on the tarmac for 5hrs
<wpk> -almost- happened to me when the flight from New Orleans to Newark was delayed by almost 1 hour, with 1h28m original transfer time
<wpk> The stewardess was nice enough to ask people not transferring or with longer waits to sit and wait for those who are in a hurry
<jam> wpk: I'm very glad you made it back
<jam> wpk: are you finding enough to do?
<wpk> jam: as for now - yes. I've made my first charm, wrote my first specs, now reading through docs
<jam> wpk: good. can you explain CNAME RR? or point me to a doc on it?
<wpk> jam: It's an alias, like www.canonical.com IN CNAME server1.canonical.com.
<wpk> jam: from client app perspective it's the same, but if you do 'dig' it'll show you the path
<jam> I've heard of CNAME, but not the RR part
<wpk> Resource Record
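(A toy illustration of what wpk describes: a CNAME resource record is an alias pointing at another name, and a resolver follows the chain — the "path" `dig` shows — until it reaches a name with an address record. The zone data and addresses below are made up for the sketch.)

```go
package main

import "fmt"

// cnames holds alias records, e.g. www.canonical.com IN CNAME server1.canonical.com.
var cnames = map[string]string{
	"www.canonical.com.": "server1.canonical.com.",
}

// addresses holds the terminal address records (values invented here).
var addresses = map[string]string{
	"server1.canonical.com.": "192.0.2.10",
}

// resolve follows CNAME records until it finds an address, with a small
// bound to guard against alias loops (real CNAME chains should be short).
func resolve(name string) (string, bool) {
	for i := 0; i < 10; i++ {
		if addr, ok := addresses[name]; ok {
			return addr, true
		}
		target, ok := cnames[name]
		if !ok {
			return "", false
		}
		name = target // follow the alias, as dig would display it
	}
	return "", false
}

func main() {
	addr, _ := resolve("www.canonical.com.")
	fmt.Println(addr)
}
```

From the client application's perspective the alias and the target are interchangeable; only the resolution path differs.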
<beisner> rick_h, last night we had more occurrences of "theblues.errors.EntityNotFound: https://api.jujucharms.com/v4/xenial/percona-cluster/meta/any" - which is weird.  my browser redirects that a-ok to /charmstore/foo, but in practice via the libraries, no love.  ideas?
<beisner> actually we have all amulet tests failing now on that.
<rick_h> beisner: the cowboy was auto unset and webops had to make a more permanent fix
<rick_h> beisner: apologies
<rick_h> beisner: ? it's not working atm?
 * rick_h checks
<beisner> rick_h, redirs ok in my browser now.  we've got some amulet tests in queue, will let you know if it's not back to working.  ta
<beisner> jamespage fyi ^
<rick_h> beisner: ok, yes should be working. something in the apache config/charm reset the cowboy and it was caught/updated again in a more permanent way so I'm expecting it to behave now
 * redir blinks
<beisner> rick_h, to follow up, yep looking good.  thx again!
<rick_h> beisner: good to hear
<thumper> morning
<babbageclunk> hey externalreality, how was the trip back?
<veebers> hey babbageclunk how was your trip? I heard it wasn't straight forward for you?
<babbageclunk> veebers: yeah, dropped my luggage off at the hotel after checkout for a final wander around the city, then when I came to get it and head to the airport it was gone!
<babbageclunk> veebers: so that sucked.
<veebers> babbageclunk: jeez that really does suck! :-( Have you been able to sort it out?
<babbageclunk> veebers: Turned out they'd given it to someone else - luckily I got an email this morning saying they've got it back and will be shipping it on.
<veebers> babbageclunk: *phew* wow, I'm happy for you then. Lets hope that it doesn't take long to get to you
<veebers> I imagine that would be a pricey shipping bill
<babbageclunk> veebers: would have been a bummer trying to replace everything - lots of things that I can't get again
<veebers> babbageclunk: was it both backpack and suitcase (laptop, clothes etc.)?
<babbageclunk> veebers: since our stuff hasn't actually arrived from the UK yet it was probably about 90% of the clothing I have here.
<babbageclunk> veebers: had to rush out to get undies yesterday!
<veebers> yeah it sucks losing stuff, I'm too attached to my stuff
<babbageclunk> veebers: no, just suitcase - backpack would've been an absolute nightmare!
<veebers> babbageclunk: hah ^_^ I imagine you running down the road in a towel trailing soap bubbles to get the days clothes
<veebers> oh shit yeah, I suspect your passport would have been in that, wow I'm glad you had your passport on you!
<babbageclunk> veebers: I mean, wandering around Farmers commando is a weird experience.
<veebers> hah ^_^
<veebers> babbageclunk: you need to get a good washing machine and dryer so you can have JIT undies every day
<babbageclunk> veebers: I like it - so toasty
<babbageclunk> menn0-afk: they found my bag, yay!
<menn0-afk> babbageclunk: awesome!
<menn0-afk> babbageclunk: where?
<jam> babbageclunk: grats on having them get your bag back. Someone obviously was being kind to follow through with that
<babbageclunk> menn0: It was returned on Saturday evening apparently
<jam> babbageclunk: I got stuck in Chicago for an extra day
<babbageclunk> jam: I saw on FB - that sucks!
<jam> Was on the plane, which apparently had engine trouble, and after 5hrs waiting they decided to put us up for the night
<menn0> jam: not fun
<jam> had I not gotten to bed at 4am, I might have been more tempted to go enjoy something in Chicago
<jam> how are things going in NZ Juju land?
<babbageclunk> jam: good, I think!
<thumper> o/ folks
<jam> has anyone heard where hml's patch is at for Openstack?
<jam> hi thumper
<jam> missed you last week
<thumper> oh shucks
<jam> would have been nice to have a beer with you
<thumper> :)
<thumper> next time
<jam> or maybe margaritas?
<thumper> which isn't too far away
<jam> it's not a mojito, but more local :)
<jam> thumper: yeah, have you booked for May already?
<anastasiamac> thumper: sts call?..
<thumper> got approval, but haven't got flights yet
<babbageclunk> thumper: and we were supposed to do karaoke!
<anastasiamac> babbageclunk: \o/ m glad u'd get ur bag :)
<anastasiamac> jam: hml PR was very close yesterday, should have/will land today or near..
<babbageclunk> anastasiamac: thanks! same - although I guess they could lose it again before I get it.
<jam> anastasiamac: good to hear
<anastasiamac> babbageclunk: don't jinx!!
<jam> babbageclunk: it will probably make it to NZ, and then some wanker will notice the nice undies
<babbageclunk> lol
<babbageclunk> gah, was just about to make a note of something to do but remembered that my pencil and notebook are in my bag!
<thumper> babbageclunk: did you find any singing partners?
<babbageclunk> thumper: actually rogpeppe did! When we were wandering about NOLA late at night.
<thumper> :)
<babbageclunk> thumper: Although I was feeling a bit too worse for wear at that point to improvise experimental atonal music with them.
<thumper> hah
<babbageclunk> thumper: Found this in a kids shop though: http://www.michellehirstius.com/
<thumper> awesome
<thumper> I was on camp with a girl called Juju
<anastasiamac> roger also found a voodoo doll called 'Juju Guardian' :)
<veebers> anastasiamac: I was worried about getting my wee juju doll thing through NZ customs (wood and moss etc.) they didn't care, it was the dried kidney beans that I had they were interested in (and had to destroy :-( )
<anastasiamac> veebers: :(
<anastasiamac> veebers: i wonder how roger went with the gun ... ;)
<veebers> hah ^_^
<babbageclunk> veebers, weird, they were fine with my jellybeans!
<babbageclunk> veebers: I'm going to have the best beanstalk tomorrow, then I'm gonna go get that harp.
<veebers> babbageclunk: lol, I doubt they consider them seeds (although, that would be good, I would have a field of jellybeans)
<veebers> babbageclunk: be careful, those jelly beanstalks can be slippery
<jam> sinzui: can you meet with wpk tomorrow during 'afternoon jam' ? I don't know if perrito666 is around, but it would be helpful if you guys can just chat a bit, since I can't be there.
<sinzui> jam: I sure can
<jam> thanks
<sinzui> jam: The meeting was just me giving myself encouragement this week so far
<jam> I wonder if there is a mismatch, cause wpk mentioned it was only him.
<jam> I wonder if he tried to join as his gmail account instead of canonical account
<anastasiamac> jam: sinzui: perrito666 mite be taking swap days, i have not seen him either and have questions :)
<jam> I've done that and you sit and nobody else shows up
<jam> sinzui: so probably reach out on IRC as well, in case he's using the wrong auth
<wallyworld> hml: hey, can you push your latest changes? i see you commented on the PR but i don't think the changes are pushed yet?
<hml> wallyworld: i'm working on some of the test pieces right now, so not completely ready
<wallyworld> hml: no worries, thank you
 * wallyworld goes to get another coffee. it's a 2 coffee morning
<wallyworld> babbageclunk: the discover spaces stuff landed!
<wallyworld> babbageclunk: are you able to start looking at the gce subnet support now that discover spaces worker has landed?
#juju-dev 2017-03-15
<babbageclunk> wallyworld: yup, looking at it now
<wallyworld> \o/
<axw> menn0 wallyworld thumper: tech board? anything to talk about?
<menn0> axw: sorry I missed tech board. things got a little crazy at home.
<axw> menn0: no worries
<menn0> axw: did the meeting end up happening?
<axw> menn0: nah, just had a chat with ian about the sprint last week. john's out, and tim's sick
<rogpeppe1> here's a feature branch merge; anyone wanna give it the nod? https://github.com/juju/juju/pull/7099
<rogpeppe1> jam: any chance you could approve this feature-branch merge, please? https://github.com/juju/juju/pull/7099
<rogpeppe1> jam: i'm going to merge it anyway, as all the individual branches have been approved
<perrito666> good morning
<perrito666> wpk: standup?
<stokachu> rogpeppe1: what python library should we be using if we wanted to look into adding macaroon support?
<stokachu> rogpeppe1: apparently go is the only one with these bindings anywhere...
<stokachu> sorry, bakery support
<rogpeppe1> stokachu: i'm not sure that there is any decent general one yet
<rogpeppe1> stokachu: people have been talking about it for a while
<rogpeppe1> stokachu: it also depends whether you want to do server support too, or just client support
<stokachu> rogpeppe1: we just need a seamless experience from conjure-up's ui to be able to connect to jaas controller
<stokachu> without sending them to the browser
<rogpeppe1> stokachu: personally i think that we shouldn't expect people to type their passwords into arbitrary command-line apps
<rogpeppe1> stokachu: and AFAIK using a browser login is the only reasonable way to avoid that
<stokachu> rogpeppe1: ok, lemme go back to the boss man and see what we want to do
<rogpeppe1> stokachu: juju has supported entering a password on the command line
<rogpeppe1> stokachu: but it's flawed
<rogpeppe1> stokachu: because it doesn't work the first time (it relies on the user having used the browser-based login previously)
<cory_fu> rogpeppe1, stokachu: What about the case where the user is running conjure-up on a machine that isn't running a gui, or if they're ssh'd in?  I thought that's why we were using a terminal UI instead of the GUI in the first place?
<rogpeppe1> cory_fu: you can still use the browser-based login in that case
<cory_fu> rogpeppe1: I should also note that `charm login` prompts for user and password
<rogpeppe1> cory_fu: just as long as you have access to a browser and can copy/paste the URI
<rogpeppe1> cory_fu: yes, the charm command is also problematic in that respect
<cory_fu> rogpeppe1: Why is a browser-based login required before CLI login will work?  That seems very odd to me.
<rogpeppe1> cory_fu: because only the browser-based login provides the idm server with the user details it needs
<rogpeppe1> cory_fu: it can't get those from an oauth token
<rogpeppe1> cory_fu: (which is what the password-based login provides)
<rogpeppe1> cory_fu: in the future, i'd like us to be able to provide private-key-based login (also known as "agent" login) to arbitrary users.
<rogpeppe1> cory_fu: so if you want non-interactive login, you associate a private key with your account and use that to log in.
<stokachu> so tldr; we have to use the browser based login approach?
<stokachu> rogpeppe1: ^
<rogpeppe1> stokachu: well, you *can* use the user-password approach, but i'd suggest leaving that until last, as we don't have a decent answer to that yet.
<stokachu> rogpeppe1: im talking about we need this done in the next week or so
<stokachu> :D
<rogpeppe1> stokachu: and it will always be necessary to support at least the browser-based approach
<stokachu> i see, ok
<rogpeppe1> stokachu: because you can't do password-based login to non-usso domains
<stokachu> ok thank you
<Dmitrii-Sh> Hi, does anybody know a good way to specify series when upgrading a charm to a local version? https://paste.ubuntu.com/24183189/
<Dmitrii-Sh> right now xenial is picked while I need trusty. I don't see any switch to manually select a version
<rogpeppe1> Dmitrii-Sh: you can't change the series of a deployed application
<Dmitrii-Sh> rogpeppe1: I don't need to change it - that's the thing
<Dmitrii-Sh> rogpeppe1: the problem here is that it thinks that I want to deploy a xenial charm
<rogpeppe1> Dmitrii-Sh: oh yes, that does seem odd
<rogpeppe1> Dmitrii-Sh: is this with juju 2?
<Dmitrii-Sh> rogpeppe1: if I remove xenial and yakkety from the yaml it deploys just fine
<Dmitrii-Sh> 2.1.1-xenial-amd64
<Dmitrii-Sh> rogpeppe1: yes
<rogpeppe1> Dmitrii-Sh: looks like a bug to me
<Dmitrii-Sh> rogpeppe1: ok, will file it now
<Dmitrii-Sh> rogpeppe1: been driving me crazy for some time )
<Dmitrii-Sh> rogpeppe1: thx
<rogpeppe1> Dmitrii-Sh: try using --force-series
<Dmitrii-Sh> rogpeppe1: I can try that but that's still a bug, right?
<rogpeppe1> Dmitrii-Sh: but even if that works, it still seems like a bug, yeah
<Dmitrii-Sh> rogpeppe1:
<Dmitrii-Sh> ubuntu@maas:~❫ juju upgrade-charm keystone --force-series --path ~/build/charm-keystone
<Dmitrii-Sh> Added charm "local:xenial/keystone-4" to the model.
<Dmitrii-Sh> ERROR cannot upgrade application "keystone" to charm "local:xenial/keystone-4": cannot change an application's series
<Dmitrii-Sh> rogpeppe1: doesn't look right
<rogpeppe1> agreed
<Dmitrii-Sh> rogpeppe1: https://bugs.launchpad.net/juju/+bug/1673122
<mup> Bug #1673122: Incorrect series used during upgrade to a local charm and no way to specify it manually <juju:New> <https://launchpad.net/bugs/1673122>
<Dmitrii-Sh> rogpeppe1: the order of items in a list matters
<Dmitrii-Sh> rogpeppe1: so yakkety at the bottom didn't trigger an error )
<rogpeppe1> Dmitrii-Sh: yeah, i see
<rogpeppe1> Dmitrii-Sh: nice report, thanks
<Dmitrii-Sh> rogpeppe1: np, thanks for the clarification
<rogpeppe1> Dmitrii-Sh: thanks for the report.
<rogpeppe1> Dmitrii-Sh: and discovering the issue
<rogpeppe1> Dmitrii-Sh: i suspect it might be fairly simple to fix
<perrito666> bbl
<rogpeppe1> i'm looking for reviews of this, please: https://github.com/juju/juju/pull/7074
<thumper> morning
<perrito666> thumper: hi
<babbageclunk> wallyworld: ping?
<wallyworld> hey
<babbageclunk> From this https://cloud.google.com/compute/docs/subnetworks#subnetworks_and_instances, it sounds like subnets in GCE don't have zones.
<babbageclunk> wallyworld: Should I just leave zones empty in the return from the provider, or get all zones and populate it with them?
<wallyworld> not sure offhand. i'm not overly familiar with the underlying juju network model
<wallyworld> i'll look into it and we can discuss
<babbageclunk> ok thanks
<babbageclunk> wallyworld: also, for our purposes we don't need the instance or subnet filtering of the Subnets method. Should I still implement them now? (Probably, right?)
<wallyworld> babbageclunk: would be good to have that method properly implemented
<wallyworld> let's try and avoid tech debt
<babbageclunk> wallyworld: yeah, that's what I was figuring.
<rogpeppe1> wallyworld: hiya
<wallyworld> hey
<rogpeppe1> wallyworld: i've addressed the initial points you made here... fancy taking another look? https://github.com/juju/juju/pull/7074
<wallyworld> sure
<rogpeppe1> wallyworld: thanks!
<wallyworld> rogpeppe1: just curious, why are those other seemingly non related dependencies updated?
<rogpeppe1> wallyworld: mostly because there were some newer dependencies available, i think. i think it's worth keeping up with dependency versions when reasonable.
<wallyworld> i figured as much. it does seem strange though that devel branch (our tip) is not running latest deps
<wallyworld> i guess those deps were updated for other branches
<rogpeppe1> wallyworld: yeah
<rogpeppe1> wallyworld: BTW the odd-looking AddDate calls are to make some of the tests pass under >go1.8 (which has monotonic clock readings in time.Time)
<wallyworld> cool, i'm running 1.8
<wallyworld> we need to fix the url parsing issues also
<rogpeppe1> wallyworld: yeah, we shouldn't provide invalid URLs :)
<wallyworld> +1 to that
<wallyworld> rogpeppe1: i just saw the AddDate() stuff - it would be worth a comment. without your explanation, I had NFI why that was there
<rogpeppe1> wallyworld: yeah, agreed. please add a comment to remind me for the morning
<wallyworld> maybe even a todo to use the clock.Clock
<wallyworld> will do
<wallyworld> rogpeppe1: why remove our clock.Clock from lease manager?
<wallyworld> and replace with state.clock
<wallyworld> that's the opposite of what we are trying to do
<rogpeppe1> wallyworld: file, line?
<wallyworld> 76 of state/workers.go
<wallyworld> the "github.com/juju/utils/clock" import is deleted alsdo
<rogpeppe1> wallyworld: the lease manager still takes a clock as a parameter
<wallyworld> the comment applies to all of those workers
<rogpeppe1> wallyworld: why wouldn't you want them all to use the same clock?
<wallyworld> right, but the state.clock is being passed in, has that been changed to a clock.Clock?
<rogpeppe1> wallyworld: state.clock has always been a clock.Clock
<wallyworld> ok, makes more sense now, thanks
<wallyworld> so the removal of workerFactory means we can get the clock from st
<rogpeppe1> wallyworld: BTW i think that except for low level, "i know everything that's going on" testing, mocking clocks is probably an anti-pattern, as it can make for extremely fragile tests.
<wallyworld> it's been the exact opposite for us
<wallyworld> we've needed to do it to *fix* fragile tests
<wallyworld> any tests based on wall clock are just plain wrong for us
<wallyworld> we need to step through known time increments
<wallyworld> to trigger various events
<wallyworld> that's a pretty standard pattern i would have thought?
<wallyworld> many devs on core have certainly used it elsewhere outside canonical
<wallyworld> eg if something triggers every 30 minutes, you can't use a wall clock for that
<wallyworld> in a unit test
<wallyworld> we've also been victims of tests that wait say 10ms for something, which fails on loaded substrates
<rogpeppe1> wallyworld: yeah, for low level unit tests, it's very useful
<wallyworld> we need control of the clock ticks
<rogpeppe1> wallyworld: but when you've got a bunch of components together, you end up needing to know exactly how many actors there are in the system and when you expect them to wait
<rogpeppe1> wallyworld: and even then you're vulnerable to races
<wallyworld> isn't that the point of detailed unit tests :-)
<rogpeppe1> wallyworld: definitely
<rogpeppe1> wallyworld: but i'm talking about somewhat higher level tests
<wallyworld> ok, so we're in violent agreement :-)
<wallyworld> too many of our juju core tests should be unit tests but are not :-(
<rogpeppe1> wallyworld: so where we're using clock in state with all those workers - that for me is probably too far.
<wallyworld> we've been actively trying to fix that
<wallyworld> how then would you test the interaction between workers?
<wallyworld> if you don't control the clock in tests?
<rogpeppe1> wallyworld: i don't like the distinction between unit tests and integration tests. i prefer the distinction they tend to use in google - quick tests and slow tests.
<rogpeppe1> wallyworld: i'd set the worker timing parameters and poll. And try and have less dependencies in general.
<wallyworld> unit tests are needed to give coverage; integration tests more for "does it really work"
<rogpeppe1> wallyworld: for me, it's all about confidence in the code.
<rogpeppe1> wallyworld: anyway, gotta run! i am called to bed.
<wallyworld> lucky you
<wallyworld> have fun :-)
<rogpeppe1> ttfn
#juju-dev 2017-03-16
<axw> wallyworld: would you please review https://github.com/juju/juju/pull/7100 some time today. the changes for remove-machine won't be far behind
<wallyworld> sure
<axw> wallyworld: I've kept the apiserver changes in a separate commit, to make reviewing a bit easier
<wallyworld> sgtm ty
<rick_h> axw: make sure to check out https://www.youtube.com/watch?v=tjp_JHSZCyA will have a blog post/email hopefully tomorrow as follow up
<axw> rick_h: okey dokey. otp, will watch afterwards
 * axw brews a cup of tea and listens to rick_h's smooth radio voice
<babbageclunk> axw: sick burn when the guy's doing a video! ;)
<axw> babbageclunk: I'm just listening while doing other stuff!
<babbageclunk> axw: oh, right!
<axw> I have a habit of being unintentionally offensive :|
 * rick_h winks at axw
<babbageclunk> axw: same. :(
<axw> whoa, matt's beard
<babbageclunk> wallyworld, axw: worked out my bug - taking the address of a loop variable so all the metadata settings had the same value.
<wallyworld> ah, that old chestnut
<wallyworld> that's bitten me before too
<wallyworld> i reckon it's bloody stupid design by Go
<babbageclunk> wallyworld: is there a better way around it than making a new variable in the loop?
<axw> babbageclunk: depends on what you're doing. that's a common solution
<axw> babbageclunk: point me at code?
<wallyworld> i usually make a new var inside the loop
<babbageclunk> axw: I've seen the to.StringPtr in azure provider, but this is just one case so that's maybe overkill.
<axw> babbageclunk: yeah I think so
<babbageclunk> axw: it's here: https://github.com/juju/juju/blob/develop/provider/gce/google/instance.go#L185
<babbageclunk> axw: ...but in the new api version the value is a *string
<babbageclunk> (I mean, MetadataItems.Value is *string)
<axw> I see
<axw> babbageclunk: yeah, I'd just do (above): valueCopy := value, then &valueCopy
<axw> babbageclunk: if you're using a loop value in a function literal (not the case here obv), I'd give the function args and pass the value in
<axw> rather than just closing over the outer var
<babbageclunk> axw: yeah, that's how I'd handle the analogous problem in Python
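(The bug babbageclunk hit, and axw's `valueCopy` fix, in miniature: taking the address of the loop variable hands every element a pointer to the same variable, so all the metadata values end up identical. Go 1.22 later changed `for` loops to declare a fresh variable per iteration, but the copy shown here is the classic fix and is safe on every version. The function name is invented for the sketch.)

```go
package main

import "fmt"

// valuePointers returns a pointer per element. Without the copy, every
// pointer would refer to the single loop variable (pre-Go 1.22), which
// is how all the metadata settings got the same value.
func valuePointers(values []string) []*string {
	var out []*string
	for _, value := range values {
		valueCopy := value // copy first, then take the address
		out = append(out, &valueCopy)
	}
	return out
}

func main() {
	ptrs := valuePointers([]string{"a", "b", "c"})
	for _, p := range ptrs {
		fmt.Println(*p)
	}
}
```

The same trap exists with closures: a function literal that closes over the loop variable should instead take it as an argument, as axw notes above.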
<axw> rick_h: looks good! having prebaked dashboards will be useful
<rick_h> axw: yea, get an "opinionated" ootb thing going
<axw> rick_h: in terms of useful things, I'm not sure what else to add. maybe API request throughput? but it's probably only useful for debugging, rather than general health check
<rick_h> axw: yea, folks can turn on things when there's an issue but want to just do the basics for all folks ootb
 * axw nods
<wallyworld> axw: i had a few issues, see what you think
<wallyworld> nothing that major
<axw> wallyworld: ok, thanks
<axw> wallyworld: huh, thought I renamed DestroyUnit to DestroyUnits. must've been thinking of the machines branch
<wallyworld> no worries
<axw> wallyworld: the api/machinemanager changes weren't meant to be there, but ok if I leave them in and add tests in the follow up? they're not used yet anyway
<wallyworld> axw: yeah, i figured as much :-)
<babbageclunk> wallyworld: Do you know how I could tell bootstrap to use a different network in GCE?
<wallyworld> no :-(
<babbageclunk> hmm
<wallyworld> --bind perhaps
<wallyworld> i think that may work?
<wallyworld> babbageclunk: actually
<wallyworld> --to
<wallyworld> with a network placement directive
<wallyworld> i remember now
<babbageclunk> wallyworld: so I can say bootstrap --to with-subnets? I'll try that.
<wallyworld> i can't recall the exact syntax
<wallyworld> it was in some release notes for 2.1 i think
<babbageclunk> wallyworld: seems to be doing something anyway
<wallyworld> i'll see if i can find the release notes
<wallyworld> babbageclunk: https://github.com/juju/juju/pull/6907
<wallyworld> it's in 2.2
<wallyworld> not released yet
<babbageclunk> wallyworld: cheers
<babbageclunk> wallyworld: d'oh, that's aws specific - I want it for gce. Not that surprising, I guess, given that we don't support subnets in gce yet.
<wallyworld> babbageclunk: yeah, once subnets are implemented, it should work. i thought that's what you were testing :-)
<axw> wallyworld babbageclunk: if you're adding more stuff to apiserver/application for CMR, can you please add tests to application_unit_test.go, not application_test.go
<wallyworld> ok
<wallyworld> i didn't even notice we had that
<axw> application_test.go should eventually be gutted, split into unit tests and feature tests (using commands, preferably)
<axw> wallyworld: yeah I started it a while ago, but ran out of steam
<axw> there's a lot of existing tests
<axw> wallyworld: PTAL at https://github.com/juju/juju/pull/7100
<wallyworld> axw: looking
<wallyworld> axw: the reworded messages look ok to me, ty
<axw> wallyworld: ta
<axw> wallyworld: here's the remove-machine branch: https://github.com/juju/juju/pull/7108
<wallyworld> ok
<wallyworld> axw: lgtm, thanks
<axw> wallyworld: why do we need to update CI tests?
<axw> wallyworld: to check for those messages? as long as we don't expect them to be unchanging over time, I guess
<axw> (because it's meant for humans, not for scripts)
<wallyworld> axw: that is true, but we want to be sure the correct units/storage etc are reported
<axw> okey dokey. probably should add it to the test plan for storage then
<wallyworld> yeah
<axw> added a card
<wallyworld> ta
<wallyworld> axw: i asked because there were no feature tests as such, so nothing that did full end-end tests
<wallyworld> of the new bits
<axw> wallyworld: might make more sense just to add one there, rather than running a separate CI test on all clouds
<wallyworld> that would work
<mup> Bug #1466951 changed: juju incorrectly mixes http-proxy with apt-http-proxy and doesn't mix no-proxy <docs> <proxy> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1466951>
<mup> Bug #1466951 opened: juju incorrectly mixes http-proxy with apt-http-proxy and doesn't mix no-proxy <docs> <proxy> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1466951>
<mup> Bug #1466951 changed: juju incorrectly mixes http-proxy with apt-http-proxy and doesn't mix no-proxy <docs> <proxy> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1466951>
<axw> the merge bots are a little bit crap today
<rogpeppe1> axw: just a heads up: i'm just merging a PR that removes the state/workers package: https://github.com/juju/juju/pull/7074
<axw> rogpeppe1: sounds good, thanks
<rogpeppe1> axw: i like deleting code :)
<axw> me too, sadly I've been adding more than deleting lately :(
<axw> rogpeppe1: ah, I see you also fixed at least one of the 1.8 test failures.. just started looking at that. thanks.
<rogpeppe1> axw: i fixed a couple of the 1.9 test failures too
<axw> heh, I haven't gone there yet
<rogpeppe1> axw: i usually run on tip
<axw> rogpeppe1: do you see much instability along the way?
<rogpeppe1> axw: the monotonic clocks muck up quite a few places that do DeepEquals on times
 * axw nods
<rogpeppe1> axw: no, not really
<rogpeppe1> axw: it's generally rock solid all the time
<axw> cool
<rogpeppe1> axw: using tip means that i can provide early feedback (e.g. a bug i reported resulted in the semantics of Time.Round and Time.Truncate changing)
<axw> rogpeppe1: yeah, I saw that. I should do the same, just a bit slack
<rogpeppe1> axw: i think the go stability is a lesson in how it's possible to have a stable multiplatform product without gating commits directly on CI.
<Hetfield> good morning! when i use juju to deploy a charm using containers (i.e. openstack)
<Hetfield> i can see containers in maas, but, as i'm using an external dns they don't resolve, causing aodh for instance to fail. so question is: can juju (or better maas as it allocates hostname + ip) trigger a script/hook to invoke dns registration (and de-registration when lxd is released)?
<Hetfield> this is a blocking issue for me
<rogpeppe1> anyone around that can give this a review, please (a simple fix to a flaky test): https://github.com/juju/juju/pull/7113
<rogpeppe> the above PR now fixes three bugs in one, a bargain! review much appreciated: https://github.com/juju/juju/pull/7113
<rogpeppe> perrito666: wanna take a look?
<perrito666> why not? :)
<rogpeppe> perrito666: thanks a lot :)
<perrito666> I won't make a fuss about it in your pr but... WTF " func (st *State) SetClockForTesting(clock clock.Clock) error "
<perrito666> rogpeppe: ship it, nice change, seems to add decent determinism
<rogpeppe> perrito666: thanks a lot; already saw your approval and hit $$merge$$ :)
<rogpeppe> perrito666: i think that much of the clock mocking fad in juju is misguided tbh
<rogpeppe> perrito666: clock mocking is useful when you know everything about the components you're testing, but not for higher levels
<perrito666> rogpeppe: I believe that live clock should never be used but perhaps our design should account for the fact that the clock can be replaced and behave accordingly to avoid out of band knowledge need
<rogpeppe> perrito666: i used to go along with that, but i have come to change my mind
<rogpeppe> perrito666: i think the live clock is the only decent way
<rogpeppe> perrito666: to avoid severely fragile test code
<rogpeppe> perrito666: otherwise you need to know exactly how many agents there are in the system and exactly what they are expected to do at any moment, so you know how many alarms to wait for
<rogpeppe> perrito666: and even then you have race conditions
<rogpeppe> perrito666: if Go provided support for clock mocking (a la Go playground) i think things might be different
<rogpeppe> perrito666: having a clock that runs faster than real-time is an another approach i've seen. that doesn't suffer the same problems.
<perrito666> rogpeppe: interesting, what changed your mind?
<rogpeppe> perrito666: trying to write robust tests using the clock mocking technique
<rogpeppe> perrito666: and finding them flaky even with appropriate values passed to WaitAdvance
<perrito666> rogpeppe: I do agree however that the ability to own the system clock would be a big win
<rogpeppe> perrito666: the problem is that the tests are also governed by the system clock
<perrito666> rogpeppe: true, true
<rogpeppe> perrito666: i like clock mocking when testing low level worker code where I know exactly what's going on
<rogpeppe> perrito666: and i still use it for that
<perrito666> I don't think I have a strong opinion on either approach
<rogpeppe> perrito666: but pretending you can thread a mock clock through the whole system and have meaningful tests that won't break at the least change of unrelated code is... hopeful :)
<mattyw> perrito666, I'm just heading off, but if you're looking for something to do: https://github.com/juju/juju/pull/7110
<mattyw> :)
<rogpeppe> perrito666: but i know it's kinda like a religion in juju-core these days :)
<perrito666> mattyw: I certainly am not, but I'll gladly review you
<mattyw> perrito666, haha, that's the spirit :)
<rogpeppe> perrito666: anyway, good to chat. i'm eod now, but hopefully catch you again before you go.
<perrito666> cheers, I'll be here until the 21st :)
<mattyw> perrito666, less than a week?
<perrito666> mattyw: 3 work days
<mattyw> perrito666, counting down?
<perrito666> mattyw: it matches the day I have a vacation :)
<babbageclunk> rogpeppe: I'm not sure it helps much but I'm on your side of the testing clock debate - except for very narrow tests they hurt more than the problem they solve.
<redir> clock.Advance(1)
<babbageclunk> redir: yeah, I know you're a convert - we went through that pain together.
<babbageclunk> wallyworld, perrito666, thumper, hml, externalreality, menn0: anyone know about networking in GCE? I presume jam does, probably axw - anyone awake now?
<thumper> NFI sorry
<wallyworld> i don't know anything in detail
<perrito666> jam does
<wallyworld> or at all i guess
<hml> babbageclunk: not i
<menn0> babbageclunk: sorry, was on call and I don't know much about the topic
<babbageclunk> menn0: no worries, muddling along for now - will chat with jam later (although I guess he won't be around until Monday).
<babbageclunk> wallyworld: do you know who I should hit up to get my own GCE account? I'm beginning to get a bit nervous about messing around with the networks and firewall rules on the juju-qa account.
<wallyworld> babbageclunk: i can find out
<babbageclunk> wallyworld: cool, thanks
#juju-dev 2017-03-17
<rick_h> wallyworld: or thumper either of you have a few to chat?
<wallyworld> sure
<rick_h> https://hangouts.google.com/hangouts/_/canonical.com/rick?authuser=1wallyworld:
<rick_h> bah
<rick_h> wallyworld: https://hangouts.google.com/hangouts/_/canonical.com/rick?authuser=1
<wallyworld> thumper: are you free to pop into chat above?
<axw> babbageclunk: back from school drop off. what did you want to know about GCE networking specifically?
<axw> babbageclunk: I think I may have poked around the edges before, so *may* be able to answer questions
<babbageclunk> axw: hi! I'm just trying to understand how GCE's networking compares to what we have
<thumper> wallyworld: I'm around now
<wallyworld> thumper: tis ok, we can chat in a bit
<babbageclunk> axw: actually, I'm having trouble formulating specific questions now - I need to get things a bit clearer in my head.
<babbageclunk> axw: ok, one specific question - GCE subnets are tied to a region and span all the AZs for that region. Should the SubnetInfos I return from gceEnviron.Subnets() have all of the AZs for a region? Or should AvailabilityZones be empty?
<axw> babbageclunk: yeah sorry, no idea about that. that's probably a question for jam
<babbageclunk> axw: :) ah well - I'm just going to pick one for now and then talk to him about it on Monday.
<axw> babbageclunk: based on the docs for SubnetInfo, I'd fill them all in. but I don't know for sure
<axw> babbageclunk: I *think* we use that to constrain which AZs to choose from when starting the instance
<babbageclunk> axw: ah, ok.
<babbageclunk> axw: I think that's the kind of information I really need, actually
<axw> okey dokey
<babbageclunk> axw: thanks!
<axw> babbageclunk: np
<rick_h> thumper: still around?
 * rick_h checks in one more time before bed
<rick_h> thumper: nvm, I see you sent me an email. ty
<sinzui> babbageclunk: sign up using your canonical email and register a credit card. The expenses won't be much for the occasional testing
<babbageclunk> Ah, ok
<babbageclunk> thanks sinzui
<sinzui> babbageclunk: also, you can request a raise in limits from the quota page, though you might not need it.
 * babbageclunk should really sort out an NZ credit card anyway.
<sinzui> babbageclunk: Happy to think you are protecting my firewalls. ses deleted them all one day in December because he thought GCE would ignore requests to delete firewalls that were in use.
<babbageclunk> sinzui: oh ouch!
<pranav_> Hi Folks. Facing an issue publishing our charm for review
<pranav_> # charm release cs:~vtas-hyperscale-ci/hyperscale-0 --resource "install/-1" --channel edge ERROR cannot release charm or bundle: bad request: charm published with incorrect resources: charm does not have resource "install/"
<Guest50267> Hey all. Anyone have an idea on charm push-term?
<Guest50267> getting error ERROR unrecognized command: charm push-term
<Guest50267> charm version is 2.2.0-0ubuntu1~ubuntu16.
<psarwate> Hey folks. Need help getting juju terms working
<psarwate> juju doesn't seem to prompt for agreeing terms
<mattyw> perrito666, you still around?
<perrito666> mattyw: hey, yes, sorry I was not watching this channel
<mattyw> perrito666, unforgivable, as punishment here's a review you can do :) https://github.com/juju/juju/pull/7120
<mattyw> :)
<perrito666> mattyw: I am curious, what is the number on your branch names?
<mattyw> perrito666, I number my branches, so I have a concept of the order I've done them in
<perrito666> interesting idea
<mattyw> perrito666, but the number resets at random intervals
<mattyw> perrito666, and isn't always sequential
<mattyw> perrito666, it's fairly ad hoc
<perrito666> 16 files change :p
<mattyw> perrito666, most of it is fairly trivial
#juju-dev 2017-03-19
<babbageclunk> morning!
<menn0> the stats for my current PR:  61 files changed, 2837 insertions(+), 4512 deletions(-)
<menn0> and I'm about to delete a whole lot more
<menn0> and this is just a refactoring - no functional change
<babbageclunk> ugh.
<babbageclunk> menn0: I mean, good work deleting stuff though.
<menn0> babbageclunk: there was a lot of bullshit code :(
<babbageclunk> :(
<menn0> babbageclunk: I believe you've been frustrated in this area as well (resources). This is all/mostly fixed by this PR.
<axw> menn0: that's a ridiculous amount of unnecessary code
<menn0> axw: indeed. lots of unnecessary abstraction, silly infrastructure code and poor tests
<babbageclunk> menn0: awesome
<menn0> axw: i'm done now, just QAing. -1775 net line count
<axw> menn0: very nice
<menn0> I apologise in advance to the reviewer(s)
<axw> menn0: and thank you :)
<axw> heh
#juju-dev 2018-03-12
<jam> anyone around that could review https://github.com/juju/juju/pull/8473 ? it fixes another intermittent failure
<jam> https://github.com/juju/juju/pull/8474 deals with a panic() that happens when a different test fails, which obscures what the original failure was
<zeestrat> Anyone got some tips on how to further troubleshoot (or perhaps even fix) https://bugs.launchpad.net/juju/+bug/1755155?
<mup> Bug #1755155: charm hook failures after controller model upgrade from 2.1.2 to 2.3.4 <juju:New> <https://launchpad.net/bugs/1755155>
#juju-dev 2018-03-13
<jam> anyone around that could review https://github.com/juju/juju/pull/8477 ?
<jam> it refreshes AWS instance types, since now x1e, c5, m5 are supported in more regions.
<jam> manadart: any chance you can look at https://github.com/juju/juju/pull/8474 as well?
<manadart> jam: Am currently :)
<jam> thx
<jam> manadart: thanks for the review on https://github.com/juju/juju/pull/8474. care to look at the renames and see if it makes more sense to you?
<manadart> jam: K.
<manadart> jam: LGTM
<jam> wpk: are you around? I was wondering what happened with patching vmware dependency to support go 1.10?
<manadart> jam: Is https://bugs.launchpad.net/juju/+bug/1726317 effectively sorted now?
<mup> Bug #1726317: multiple addresses in a space confuses peergrouper <4010> <juju:In Progress by manadart> <https://launchpad.net/bugs/1726317>
<wpk> jam: It's fixed in upstream, I want to put it as a part of my 'big' vmware update
<jam> wpk: any chance you could split out that as a small patch to 2.3 so we can build both 2.3 and 2.4 w/ go 1.10?
<wpk> sure, sec
<wpk> one small fix is needed in constraints too
<jam> wpk: thanks
<jam> manadart: one more? https://github.com/juju/juju/pull/8478 it seems my last patch for OpenTimeout created a race condition (the goroutine that writes to a channel is the only one that should close it)
<jam> or wpk ^^ its pretty small, and then I'll merge 2.3 into develop
<wpk> jam: looking, govmomi dep updated
<jam> wpk: approved
<manadart> jam externalreality: Small one - https://github.com/juju/juju/pull/8481/files
<jam> looking
<externalreality_> manadart, what is that for?
<externalreality_> manadart, how did you know that was needed?
<manadart> externalreality_: We spoke with John in Budapest.
<externalreality_> I didn't
<externalreality_> manadart, I am afraid I am often ignored
<externalreality_> manadart, I am beginning to dislike being left out all the time
<mup> Bug #1751153 changed: juju 2.3.4: vsphere units automatically remove jujud across reboot when cloud-init /var/lib/cloud is removed  <juju:New> <juju-core:Won't Fix> <https://launchpad.net/bugs/1751153>
<wallyworld> balloons: sorry, internet died. ffs. tethered to my phone now for a bit
<balloons> wallyworld, ahh.. I figured something like that happened
<wallyworld> i have another meeting now but i think we had more or less finished
<balloons> wallyworld, yea, I think it's fine. Thanks for catching up
#juju-dev 2018-03-14
<thumper> morning peeps
<thumper> https://github.com/juju/juju/pull/8490
<thumper> veebers: ping
<veebers> thumper: OTP at the mo
<thumper> ack
<veebers> thumper: pong
<thumper> otp,
<thumper> back with you shortly
<veebers> lol ping tag :-)
<thumper> veebers: I'm in the release call
<veebers> thumper: ack, you want me to join?
<thumper> wallyworld: https://github.com/juju/juju/pull/8490
#juju-dev 2018-03-15
<wallyworld> thumper: small PR? https://github.com/juju/juju/pull/8492
 * thumper looks
<thumper> wallyworld: one small comment
<wallyworld> ty
<thumper> https://github.com/juju/juju/pull/8493
<thumper> wallyworld: can you look at this one? http://10.125.0.203:8080/view/unit%20tests/job/RunUnittests-ppc64el/286/testReport/junit/github/com_juju_juju_worker_firewaller/TestAll/
<thumper> I've been staring at it for about 10-15 minutes, and also haven't been able to reproduce the failure locally
<thumper> ...
<thumper> hmm...
<thumper> I do wonder though...
<thumper> this is most likely a watcher bug
<thumper> and we do know that mongo has a bug on these architectures where we miss things...
<thumper> holy crap I want to get off mongo 3.2
<thumper> wallyworld: I do think now that this bug is likely an observation of the same mgo.OpLog test failure we see
<wallyworld> thumper: looking
<wallyworld> hmmm, i think i agree, it's a plausible explanation
<wallyworld> anastasiamac: small review? https://github.com/juju/juju/pull/8494
 * anastasiamac looking
<jam> I have a patch to bring 2.3 up to go-1.10, turns out we have a few tests that fail because of things like changes in error strings.
<jam> Anyone want to confirm that my fixes make sense?
<jam> https://github.com/juju/juju/pull/8480
<manadart> jam: LGTM.
<jam> thanks manadart
<jam> manadart: I just approved your expenses, but wasn't there another meal in there, like Sun lunch ?
<manadart> jam: Thanks. Doesn't matter. It's OK as is.
<wpk> jam: do you have a sec?
<jam> wpk: sure
<wpk> jam: I msged you about the problem I have with vsphere. I'm stuck with it since yesterday
<jam> wpk: happy to talk it over with you, not that I know that much about vsphere. Let me grab a coffee, and I'll hit you up for a hangout
<wpk> jam: ok
<wpk> I might grab a coffee as well
<wpk> witold-john
<manadart> jam: Scenario: No HA space, 3 controllers in HA (one cloud-local address each). Cloud-local address is added to one of the machines. Peergrouper errors out thereafter.
<manadart> Until one goes back to one cloud-local each, or sets a correct HA space config.
<manadart> jam: Initial thought is maybe to hold the err value of the up-front address check, and only return it if the members are different. Report no change if they are the same and still have the prior addresses...
<jam> manadart: peergrouper starts erroring, but presumably the replicaset itself is untouched?
<wpk> jam: linking jujud killed everything
<manadart> jam: Yes.
<jam> manadart: I'm about 3 conversations deep right now. I'm not positive what to do here. it could be fine to preserve the addresses, rather than going into failure mode, but we'd probably at least want some sort of message to users, since it is unclear
<jam> maybe its an info or even just DEBUG message...?
<manadart> jam: Yeah; I see you're all over. I will implement something now, review later.
<mup> Bug #1752662 changed: ssh-proxy does not work as expected on AWS <juju:New> <https://launchpad.net/bugs/1752662>
<wpk> jam: balloons: damn, I found the issue. after importing the disk has -two- files (file.vmdk and file-flat.vmdk), the first one is a ~500b text file with the link to the latter. Since I was creating the temporary VM for conversion in the same directory as the 'final' one it worked for the first time (because the 'large' file was there too), but for all later launches the directory was different and the VM
<wpk> was left with an unusable short text file
<balloons> wpk, wow..
<balloons> wpk, glad you figured it out.. Foiled by a symlink then?
<wpk> balloons: kind of
<wpk> balloons: I'll switch to creating a template VM altogether, not an image. That should work and it should be 'cleaner'
<jam> wpk: balloons: i think it is because a VM disk in VSphere can be a bunch of chained files that point to each other. there are comments while searching about "my thing is now 100 objects, how do I get it back to 1"
<wpk> jam: yep, the file contains an 'extents' section with block -> file mappings
<wpk> jam: so, I'll be doing it 'the template way'.
<jam> wpk: it honestly does seem cleaner. hopefully its not a lot more effort
<balloons> jam, ahh, interesting.
<wpk> jam: hopefully create a VM, mark it as a template
<balloons> wpk, will the template vm be reused as well, or will you make a new "template" each time?
<wpk> reused
<wpk> I'd create VM, clone it (to convert disk), then mark the clone as template.
<balloons> wpk, so this is the same as before -- subsequent bootstraps would use the same template
<wpk> balloons: not really, previously we had no template at all
<jam> night all
<jam> wpk: I thought it was more exporting a vm to a template
<jam> anyway, still good
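For reference, the ~500-byte text file wpk found is the VMDK descriptor; an illustrative (not verbatim) example, with made-up sizes and filenames, shows how its extent line points at the -flat data file that got left behind:

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description: RW <sectors> VMFS <data file>
# Copying only this descriptor without the -flat file yields the
# unusable short text file described above.
RW 41943040 VMFS "ubuntu-xenial-flat.vmdk"
```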
<agprado> Team, does anybody know how to use testing.Isolation suite? And has a few minutes, for some questions?
<balloons> agprado, I imagine APAC will be awake shortly :-)
<agprado> balloons: ty, I know what I want, but I'm not sure how to make it run.
<agprado> balloons: are you available?
#juju-dev 2018-03-16
<anastasiamac> wallyworld: is this the pr u need a review on? https://github.com/juju/juju/pull/8486
<wallyworld> anastasiamac: my network dropped just before, not sure if you saw my msg. sorry i missed your question, was interviewing
<anastasiamac> wallyworld: what msg?
<wallyworld> otp in another interview, sorry give me a sec
<anastasiamac> nws -saw ur responses on PR anyway - lgtm still stands :D
<wallyworld> anastasiamac: nfi how that got merged, am sending email to nic to ask
<manadart> Anyone up for a review?
<manadart> https://github.com/juju/juju/pull/8501
<balloons> manadart, passing tests too? :-)
<manadart> balloons: Crazy talk :) Yes, sorted my little issue. It was indeed to do with wrestling out test scenarios with the fakes.
#juju-dev 2018-03-18
<thumper> morning team
<babbageclunk> thumper: oops hi
<thumper> veebers: how many issues are there with the charmstore.v5 and go 1.10?
<thumper> quick enough for you to submit a PR?
<thumper> or are there many?
<veebers> thumper: Unknown, I would have to dig in and see if I can figure out what the actual issue is
 * thumper nods
<thumper> ok
<veebers> thumper: it *seems* like it might be a single issue, but that's a guess at this point
<veebers> oh great, and I just lost all my terminals :-\
<wallyworld> thumper: did you have 5 mins?
<thumper> sure
<thumper> wallyworld: what's up?
<wallyworld> quick chat 1:1?
<thumper> ack
