#juju-dev 2013-08-05
<bigjools_> it's so much quieter here without thumper around
<davecheney> too quiet ...
<bigjools> davecheney: I am putting some logger.Infof() calls to debug stuff but they are not appearing, do I need to initialise anything first?
<davecheney> bigjools: shouldn't
<bigjools> oh ffs
<davecheney> but from memory, unless you pass -v you get nothing
<bigjools> I just saw that I need a -v
<davecheney> axw will know
<bigjools> works with -v
<davecheney> yeah, this is fucked
<davecheney> he's fixing it
<bigjools> good man
<davecheney> in our defence, we inherited this madness from pyjuju
<davecheney> i don't know if that is the kind of defence that will stand up in court
<axw> yeah just waiting for thumper to ready his changes
<axw> it's a bit messy
<davecheney> 'yes your honor, I saw someone else brutally murdered, so I figured it was ok '
<bigjools> one day, juju will evolve the perfect provider interface
 * bigjools juju bootstraps an azure environment and deploys juju-gui on it
<axw> bigjools: I'm looking into adding instance-specific state to juju status. Do you know if MAAS provides a way of getting the current state of a machine?
<bigjools> axw: one sec
<bigjools> axw: the nearest thing is that the api lets you query all allocated nodes, which returns quite detailed info for each node.
<bigjools> so depends on what state you want?
<axw> I don't know :)  I guess I'd like to see what's available and work from there
<axw> can you point me to some docs/code?
<axw> bigjools: I'm after something comparable to gwacl.HostedServiceDescriptor.Status for Azure
<bigjools> jtv1: can you remember if the maas api will return json for a single node if we GET its URL?
<jtv1> bigjools: IIRC it does.
<bigjools> axw: ok so there's a status on the json that gets returned
<bigjools> if you look in the maas source tree at the file src/maasserver/enum.py check out the NODE_STATUS enum
<axw> bigjools: cool, thanks
<bigjools> axw: if juju knows about it, it'll never see anything other than "reserved" though
<axw> ah
<bigjools> errr allocated I mean
<bigjools> maas does not track the status of anything after the node gets given to a user
<axw> bigjools: okey dokey. sounds like it mightn't be useful to add at all then
<bigjools> so there's no way of knowing whether it's installing, powering up etc
<bigjools> yeah
<rogpeppe> mornin' all
<axw> rogpeppe: morning
<rogpeppe> axw: hiya
 * axw wonders how much lxc testing his laptop can take before carking it
<dimitern> rogpeppe: ping
<rogpeppe> dimitern: pong
<dimitern> rogpeppe: hey, morning!
<dimitern> rogpeppe: did you merge the agent entity stuff already?
<rogpeppe> dimitern: yeah
<dimitern> rogpeppe: cool, i'll take a look then
<dimitern> rogpeppe: so AgentEntity will now need StatusSetter and Watcher(?)
<rogpeppe> dimitern: seems reasonable
<dimitern> rogpeppe: what should I call the interface that has Entity.Watch() ?
<dimitern> rogpeppe: and I also need EnsureDeader
<rogpeppe> dimitern: ha, you can't have EntityWatcher.
<dimitern> rogpeppe: exactly
<dimitern> rogpeppe: Watchable?
 * rogpeppe thinks
<dimitern> rogpeppe: WatchEntiter ?:)
<rogpeppe> dimitern: i've already mentally rejected that :-)
<dimitern> rogpeppe: yeah
<rogpeppe> dimitern: EWatcher?
<rogpeppe> dimitern: EntityWatcherInterface?
<dimitern> rogpeppe: AgentEntityWatcher ?
<rogpeppe> dimitern: thing is, it's not specific to agent entities
<dimitern> rogpeppe: why? i think it is
<rogpeppe> dimitern: it applies to anything that has Watch() *NotifyWatcher
<rogpeppe> dimitern: e.g. Service
<rogpeppe> dimitern: but... maybe that's ok
<dimitern> rogpeppe: but a service is also managed by some agent after all
<rogpeppe> dimitern: really? i don't think so.
<dimitern> rogpeppe: well the uniter for example manages services inside the unit agent
<rogpeppe> dimitern: it manages units *of* a service
<rogpeppe> dimitern: well, one unit of a service
<rogpeppe> dimitern: the service itself has no agent
<dimitern> rogpeppe: yeah, true
<dimitern> rogpeppe: but I don't think we'll need an interface for that
<dimitern> rogpeppe: it's about reusing common code for agents
<rogpeppe> dimitern: but still, the reason we're defining the interface is for AgentEntity, so i'm ok with calling it AgentEntityWatcher
<dimitern> rogpeppe: there'll be a WatchService API call in the uniter
<dimitern> rogpeppe: ok, AgentEntityWatcher then
<rogpeppe> dimitern: +1
<sidnei> here's my attempt at fixing https://bugs.launchpad.net/juju-core/+bug/1203816 https://codereview.appspot.com/12143043
<_mup_> Bug #1203816: local provider should support use of local proxy <juju-core:Triaged> <https://launchpad.net/bugs/1203816>
<dimitern> rogpeppe: you haven't changed getAuthFuncs to use method expressions yet?
<rogpeppe> dimitern: no, i still need another LGTM on that CL
<dimitern> rogpeppe: ah, ok
<rogpeppe> dimitern: i'm also looking for reviews of https://codereview.appspot.com/12361043/ (mgz?) and https://codereview.appspot.com/12352044/
<dimitern> rogpeppe: will take a look in a bit
 * thumper waves from the IOM
<jamespage> davecheney, which release of juju-core should I be uploading to saucy?
<dimitern> thumper: hey, how is it going there?
<thumper> good so far, going through everyone's demos
<thumper> also, raining
<dimitern> :) cool
<mgz> jamespage: 1.12 I believe
<jamespage> mgz, ack - that's what I thought
<davecheney> jamespage: correct
<jamespage> davecheney, great - thanks for confirming - working that now
<davecheney> jamespage: thanks mate
<davecheney> this is the version we talked about in Oakland
<davecheney> ie, the one we're hoping to feed into backports
<dimitern> rogpeppe: one review done, i'm on your second CL now
<dimitern> rogpeppe: can you take a look at https://codereview.appspot.com/12443044/ in the mean time?
<rogpeppe> davecheney: i'd be interested in your input on  https://codereview.appspot.com/12361043/ too, please
<rogpeppe> dimitern: looking
<dimitern> rogpeppe: second review done as well
<rogpeppe> dimitern: i don't know if it's a nice catch until we get a lack of intermittent test failures for that test :-)
<rogpeppe> dimitern: do you know if it had a bug report, BTW? i couldn't find one
<dimitern> rogpeppe: which test failed?
<davecheney> rogpeppe: i saw that
<rogpeppe> dimitern: MachineSuite.TestWatchMachine
<davecheney> where is the change to the README to explain how to use it ?
<dimitern> rogpeppe: no, I don't think there's a bug for it
<dimitern> rogpeppe: just checked
<rogpeppe> davecheney: currently we're just putting it into the tree. mgz is going to change the bot to use it; then i plan to change godeps so it can use it
<davecheney> rogpeppe: where is godeps ?
<rogpeppe> davecheney: go get launchpad.net/godeps
<dimitern> sidnei: you've got a review
<sidnei> dimitern: lovely. i might be doing a few more changes, during testing i noticed the proxy is set up too late, talking with smoser about a proper fix
<jamespage> davecheney, do you happen to know what's happening re upgrade from pyjuju -> juju-core?
<jamespage> first upgrade break:
<jamespage> https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1200878
<_mup_> Bug #1200878: Upgrade breaks existing pyjuju deployment <apport-collected> <regression-release> <saucy> <juju-core (Ubuntu):Triaged> <https://launchpad.net/bugs/1200878>
<dimitern> sidnei: I see, ok
<dimitern> oh boy.. EnsureDeaderer
<dimitern> or maybe DeathEnsurerer :)
<rogpeppe> dimitern: DeadEnsurer?
<dimitern> rogpeppe: that's for state?
<rogpeppe> dimitern: is there a problem with changing the Machines field to Entities in the Machine.SetStatus call, BTW? i'm concerned about backward incompatibility.
<rogpeppe> dimitern: yeah
<dimitern> rogpeppe: that's why I added DEPRECATE(v1.14) to it
<dimitern> rogpeppe: to the older struct
<rogpeppe> dimitern: ah, ok
<rogpeppe> dimitern: cool
<dimitern> rogpeppe: ok i'll go with DeadEnsurer
<sidnei> dimitern: indeed, the AddScripts was supposed to be a pre-req, wasn't merged yet. i'll submit it.
<jam> just saying a quick hello to everyone. In a meeting, but back at work after vacation.
<dimitern> sidnei: ok; you might run into conflicts with this CL after the prereq lands
<dimitern> jam: heyhey
<sidnei> dimitern: in codereview? because bzr should handle it fine.
<rogpeppe> dimitern: i'm not sure how your changes preserve API backward compatibility
<dimitern> sidnei: ah, ok then; no, codereview doesn't matter - it's a bzr matter anyway
<dimitern> rogpeppe: do you propose to have handling for both MachineSetStatus and SetStatus for compatibility?
<rogpeppe> dimitern: how would that help?
<dimitern> rogpeppe: it'll take both types of args
<dimitern> rogpeppe: (machiner.SetStatus)
 * dimitern fwiw i think even considering backwards compatibility with the API at this point is ridiculous
<rogpeppe> dimitern: it's something that we need to consider
<rogpeppe> dimitern: if something updates the API server but not one of the clients, then this change will break the installation
<jam> dimitern: my personal feeling, you don't have to preserve compat with 1.13, but we should preserve with things in 1.12. (At least, we need to start getting comfortable with how we are going to do these things going forward.)
<rogpeppe> jam: hiya
<rogpeppe> jam: i think i agree with that
<dimitern> rogpeppe: so how about my proposal?
<rogpeppe> dimitern: and i *think* 1.12 used the API machiner, didn't it?
<dimitern> rogpeppe: handle both the old and new args types in SetStatus?
<rogpeppe> dimitern: that seems reasonable. just add a Machines field in params.SetStatus, marked deprecated.
<dimitern> rogpeppe: and read both?
<rogpeppe> dimitern: if len(p.Entities) == 0 { p.Entities = p.Machines }
<dimitern> rogpeppe: ok
<rogpeppe> dimitern: i'm particularly +1 on jam's remark "we need to start getting comfortable with how we are going to do these things going forward"
<rogpeppe> dimitern: it's important that we get experience maintaining API compatibility, see what works, what doesn't.
<dimitern> rogpeppe: ok, I can agree with that
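The compatibility shim rogpeppe sketches above could look something like this. The struct and field names are assumptions loosely modelled on the discussion, not the actual juju-core params types; the point is only the fold-deprecated-field-into-new-field pattern.

```go
package main

import "fmt"

// EntityStatus is a stand-in for a per-entity status argument.
type EntityStatus struct {
	Tag    string
	Status string
}

// SetStatus carries the call arguments. Machines is the pre-1.13
// field name, kept so a 1.12 client can still talk to a newer API
// server; it is deprecated and read only when Entities is empty.
type SetStatus struct {
	Entities []EntityStatus
	Machines []EntityStatus // DEPRECATE(v1.14)
}

// normalize folds the deprecated Machines field into Entities,
// exactly as suggested in the discussion.
func normalize(p *SetStatus) {
	if len(p.Entities) == 0 {
		p.Entities = p.Machines
	}
}

func main() {
	old := SetStatus{Machines: []EntityStatus{{Tag: "machine-0", Status: "started"}}}
	normalize(&old)
	// A 1.12-style request now looks like a new-style one to the server.
	fmt.Println(len(old.Entities), old.Entities[0].Tag)
}
```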
<dimitern> rogpeppe: how can I get 1.12 tools to test the upgrade?
<rogpeppe> dimitern: i should hope they'd be in the tools bucket
<rogpeppe> dimitern: if not, you'll have to fetch the tgz and compile them yourself
<dimitern> rogpeppe: I might need to have a quick chat with you about that, after the standup
<rogpeppe> dimitern: ok
<dimitern> mgz: standup?
<dimitern> jam, thumper: I suppose you won't be joining either
<mgz> ta dimitern
<jam> we will not
<jam> lunch time for us
<wallyworld> rogpeppe: https://codereview.appspot.com/12308044/ :-)
<rogpeppe> wallyworld: ta!
<sidnei> dimitern: cl updated
<natefinch> rogpeppe: https://codereview.appspot.com/12347044/    It still needs tests, but I wanted a check to make sure the approach was ok, since it's a bit above and beyond what the bug calls for.   It's a very small number of lines changed, however.
<dimitern> sidnei: thanks, still LGTM
<rogpeppe> natefinch: ah yes, i started looking at that and wasn't entirely sure, but got distracted while trying to think of a better alternative...
<rogpeppe> natefinch: one question: why does ReadEnvirons return a no-env error only if path=="" ?
<natefinch> rogpeppe: I went back and forth on that.  The idea was that it only gives the error message if you're using the default environment... because otherwise it means you explicitly gave it a file you were expecting to exist, and the error message talks about using init, which won't help make your existing configuration work
<rogpeppe> natefinch: hmm, that sounds like something that should be triggered at a different level (for example at the command level, where you know if you've been passed an environment or not)
<rogpeppe> natefinch: i have an idea - i'll just check to see if it's viable, one mo
<natefinch> rogpeppe: cool, thanks for the help.
<rogpeppe> natefinch: i wonder if a better place to return the NoEnv error would be from juju.NewConnFromName
<dimitern> natefinch: reviewed
<rogpeppe> natefinch: and i'm not sure if we should pollute cmd/cmd.go with juju-command specific stuff (it's also used by jujud)
<dimitern> rogpeppe: that's where the supercommand lives, no?
<rogpeppe> dimitern: yes, but i don't think the supercommand wants to know about stuff that cmd/juju is doing. it seems like mixing responsibilities
<rogpeppe> natefinch: so i'd be tempted to do something like this, in each juju command that calls NewConnFromName:
<rogpeppe> 	conn, err := juju.NewConnFromName(c.EnvName)
<rogpeppe> 	if err != nil {
<rogpeppe> 		return envOpenFailure(err)
<rogpeppe> 	}
<dimitern> rogpeppe: if it's already reading the env.yaml file there, the responsibility is there
<rogpeppe> dimitern: it's not, is it?
<natefinch> rogpeppe: yeah, I was trying to avoid changing every single command, but if cmd.go isn't juju specific, then yeah, we don't want to pollute that
<rogpeppe> natefinch: and define envOpenFailure as: func envOpenFailure(err error) error {if err == juju.ErrNoEnvironmentFile {print message; return cmd.ErrSilent}; return err}
<dimitern> rogpeppe: i'll not restrict EnsureDead to non-alive entities then
<natefinch> rogpeppe: Yeah, I guess I was ok letting the error go back up the stack,  mostly so it still gets printed out, but I guess we can just print it out there, too.
<natefinch> rogpeppe: (the file not found error)
<rogpeppe> natefinch: i think there's no need to embed the original file-not-found error - we already know it's os.IsNotExist
<natefinch> rogpeppe: I think it's helpful to know what file the system was trying to open and couldn't find.  It helps if you typo the path of the file somewhere
<rogpeppe> natefinch: fair enough
<rogpeppe> natefinch: oh yes, you'll probably want envOpenFailure(err error, envName string) so that you can print the "Please create a configuration" message only if it's empty.
<rogpeppe> natefinch: or you could even implement it as a method on EnvCommandBase
<natefinch> rogpeppe: yep, sounds good
<dimitern> mgz: https://codereview.appspot.com/12443044/ this is the first CL
<dimitern> mgz: and this is the follow-up https://codereview.appspot.com/12461043
<dimitern> rogpeppe: ^^
<dimitern> rogpeppe: enter the DeadEnsurer :)
 * rogpeppe feels uncomfortable about the spelling, but DeathEnsurer seems wrong too
<dimitern> rogpeppe: I'm open to suggestions
<rogpeppe> dimitern: i have none.
<natefinch> dimitern: If we have the opportunity to put a DeathEnsurer in the product, I think we should take it :)
<natefinch> dimitern: Reaper?  With a Reap() function?
<dimitern> natefinch: :) alas we need to adhere to the predefined state interface
<dimitern> natefinch: I proposed DeathEnsurer, but it's equally bad
<natefinch> dimitern: Ahh, too bad.
<dimitern> rogpeppe: 1.12 is really 1.11.something, right?
<rogpeppe> dimitern: yeah
<dimitern> rogpeppe: do you know which one exactly?
<rogpeppe> dimitern: no. why do you want to know?
<dimitern> rogpeppe: so I can wget it and test the upgrade
<rogpeppe> dimitern: https://launchpad.net/juju-core/1.12/1.12.0/+download/juju-core_1.12.0-1.tar.gz
<dimitern> rogpeppe: ah, ok, 10x
 * rogpeppe goes for lunch
<dimitern> mgz: ping
<mgz> hey dimitern
<dimitern> mgz: hey, any chance to look at my CLs?
<mgz> 088 and 089?
<dimitern> mgz: yep
<mgz> doing so now
<dimitern> cheers!
<mgz> I find "NewStatusSetter returns a new StatusSetter" a little funny as the first sentence of a doc comment
<mgz> but can't really suggest an improvement....
<dimitern> that's what it does :)
<mgz> dimitern: one down, will just grab some food then look at the other
<dimitern> ok, thanks
<dimitern> mgz: about the deprecation stuff - aiui we need to make sure the api params/calls are backwards compatible between versions, rather than removing them straight away - it'll make certain version-to-version upgrades problematic
<dimitern> hence the deprecation - we guarantee at least the upgrade is possible from the previous release, but deprecated stuff will go away in the next
<dimitern> and now.. AgentEntityWatcherer..
<dimitern> we defined a whole dictionary
<natefinch> dimitern: I usually take the "add -er to an interface name" as a guideline, not a hard and fast rule, for exactly these problems
<dimitern> natefinch: tell that to rogpeppe :)
<rogpeppe> dimitern: i agree with natefinch
<rogpeppe> dimitern: i hate the *erer names
<dimitern> rogpeppe: you're the one who usually insists on using -er everywhere
<dimitern> rogpeppe: I can remember at least 3 cases where I chose something else :)
<rogpeppe> dimitern: only when reasonable. i would never have suggested Removerer (the first such name)
<dimitern> rogpeppe: let me find the CL
<natefinch> at some point it just looks ridiculous (pretty much any -erer)
<rogpeppe> dimitern: i don't know who did Removerer - i laughed out loud when i saw it
<rogpeppe> dimitern: looks like you and jam were responsible... https://codereview.appspot.com/11125043/diff/1/state/apiserver/common/remove.go#newcode17state/apiserver/common/remove.go:17
<dimitern> rogpeppe: ok, sorry it wasn't you - it was jam, sorry
<rogpeppe> dimitern: you will be :-) :-)
<dimitern> well, strictly speaking English is not my native language, so none of these names look especially funny or wrong to me :)
<dimitern> it's just convention
<fwereade> rogpeppe, dimitern: so, SetStatus change
<fwereade> rogpeppe, dimitern: what's the deal?
<dimitern> fwereade: it landed
<dimitern> fwereade: oh? what
<fwereade> dimitern, Machines vs Agents/Entities/whatever
<rogpeppe> fwereade: i suggested that it was worth maintaining 1.12 compat
<dimitern> fwereade: yes, and I did that
<dimitern> fwereade: it's deprecated
<fwereade> rogpeppe, for values of "worth" equal to "necessary"
<rogpeppe> fwereade: which means that SetStatus needs to recognise the Machines argument, until we can deprecate it
<fwereade> rogpeppe, dimitern: perfect
<fwereade> rogpeppe, dimitern: just checking, there was uncertainty here
<dimitern> fwereade: I even added a test for 1.12 compat and live tested an upgrade 1.12 -> tip
<fwereade> dimitern, rogpeppe: lovely
<rogpeppe> fwereade: in practice, i don't *think* it would be a problem if we didn't recognise the Machines argument, because environments tend to upgrade all at the same time
<fwereade> rogpeppe, disagree
<fwereade> rogpeppe, dimitern: please confirm which workers are currently using the api
<rogpeppe> fwereade: in 1.12, i think it's the machiner
<fwereade> rogpeppe, in trunk
<dimitern> fwereade: upgrader, deployer and machiner
<dimitern> rogpeppe: upgrader as well, right?
<rogpeppe> dimitern: yup
<rogpeppe> fwereade: i *think* this particular incompatibility is only a problem if there's an agent that's persistently using an earlier version.
<fwereade> rogpeppe, I would prefer us not to half-ass compatibility even if we can get away with it today
<rogpeppe> fwereade: i agree entirely, which was why i suggested the change
<rogpeppe> fwereade: but it's interesting to consider what the actual ramifications of the incompatibility might be in *practice*
<fwereade> rogpeppe, gtg, but I note in APIWorker
<fwereade>     runner := worker.NewRunner(allFatal, moreImportant)
<rogpeppe> fwereade: yes, that will change very soon
<fwereade> rogpeppe, cool, so long as we're aligned on it being important
<rogpeppe> fwereade: indeed
<dimitern> mgz: ping
<mgz> dimitern: second branch? :)
<dimitern> mgz: yep :)
<dimitern> rogpeppe: can you take a look as well? https://codereview.appspot.com/12461043/
<mgz> DeadEnsurer really is funny
<rogpeppe> dimitern: looking
<dimitern> it might as well be GrimReaper
<rogpeppe> dimitern: i wonder if we might want to consider combining LifeGetter, StatusGetter, DeadEnsurer etc into a single type
<rogpeppe> dimitern: i dunno actually, just a passing thought
<dimitern> rogpeppe: I'll actually do something like this in the final CL
<dimitern> rogpeppe: but they are good on their own, at least for now
<dimitern> sweet.. now with the last CL the machiner API is reduced to a single constructor
<dimitern> rogpeppe: thanks!
<dimitern> rogpeppe, mgz: last one (for now) https://codereview.appspot.com/12464043
<mgz> dimitern: ta
<sidnei> can i get one more review on https://codereview.appspot.com/12143043/ ?
<dimitern> rogpeppe: now that we have a separate apiserver/agent, we'll get rid of apiserver/machine/agent.go and rename the path to apiserver/machiner, right (well, pending deprecation in 1.14). For the uniter, I'll create it in apiserver/uniter directly
<rogpeppe> dimitern: i don't quite understand. rename which path? create what in apiserver/uniter?
<dimitern> rogpeppe: rename apiserver/machine to apiserver/machiner, remove agent.go from apiserver/machine; create apiserver/uniter/uniter.go
<rogpeppe> dimitern: sgtm
<dimitern> rogpeppe: but the renaming and removing agent.go only in 1.14, right?
<rogpeppe> dimitern: yeah
<natefinch> dimitern, rogpeppe, whoever else: what's the best way to get bazaar to show diffs in an external diff tool? I'm a big fan of Beyond Compare, but a quick google gives me like 6 different ways to hook up external diff/conflict resolution... and none of them look particularly good.
<rogpeppe> natefinch: i use qbzr
<rogpeppe> natefinch: so there is some way of doing it, but i don't know more than that
<natefinch> rogpeppe: thanks, that's a good place to start
<dimitern> natefinch: +10 to qbzr
<rogpeppe> natefinch: mgz is good on bzr
<rogpeppe> dimitern: do you use any other qbzr feature than qdiff?
<mgz> you can also just alias something with the --using flag
<dimitern> rogpeppe: I use a few - qlog, qannotate, qdiff, q
<dimitern> not just q :)
<mgz> the reason the docs aren't completely clear on this is there are two scenarios where you might want to use beyond compare
<mgz> just looking at the current state of the tree, a la `bzr diff`, and resolving merges
<mgz> and there's no unified config for the two
<natefinch> mgz: pretty much any time I *can* use Beyond Compare, I do :)
<dimitern> sidnei: ping rogpeppe and mgz directly :)
<rogpeppe> dimitern: good point.
<rogpeppe> sidnei: looking :-)
<dimitern> rogpeppe: and then maybe look my third? :)
<mgz> natefinch: so, for starters you just want something like `bzr alias bc="diff --using=/path/to/bc"`
<natefinch> mgz: thanks
<dimitern> mgz: I feel bad to bug you again..
<mgz> dimitern: it's okay, I keep wombling off onto other code
<mgz> poking is justified
<dimitern> :) ok, noted
<mgz> dimitern: why have you moved away from table-ish tests to a similar style but with five or so separate conditions in one test case?
<dimitern> mgz: which one are you refering to?
<mgz> having the inputs x0..x5 at the top, then the expected results at the bottom of a long-ish test not obviously connected seems more confusing to me
<mgz> TestEnsureDead and TestSetStatus in the previous mp
<dimitern> mgz: what do you suggest?
<rogpeppe> sidnei: reviewed
<mgz> I'm just curious, those tests aren't so huge, but I thought it was the kind of thing you favoured the table driven style for
<dimitern> mgz: well, imo table-driven tests are good for a lot of similar cases, which only slightly differ in their setup; this is a single test with multiple arguments
<sidnei> rogpeppe: thanks!
<sidnei> rogpeppe: http://paste.ubuntu.com/5951789/
<sidnei> or dimitern ^
<rogpeppe> sidnei: sorry, use DeepEquals
<dimitern> sidnei: yeah
<sidnei> that works, thanks!
<sidnei> rogpeppe: next: ./apt.go:24: invalid method expression exec.Cmd.Output (needs pointer receiver: (*exec.Cmd).Output)
<sidnei> oh, just did as the error message, works.
<dimitern> niemeyer: ping
<niemeyer> dimitern: Yo
<dimitern> niemeyer: hey
<dimitern> niemeyer: so no rpc/jsoncodec lines at all in all-machines.log?
<niemeyer> dimitern: Yeah
<dimitern> niemeyer: any DEBUG stuff at all?
<niemeyer> dimitern: Yeah
<dimitern> niemeyer: that's really strange.. can you check machine-1.log or some other machine log?
<niemeyer> dimitern: The file is also very messy
<niemeyer> dimitern: It's missing line breaks
<niemeyer> dimitern: It has about 3 messages about rpc, all of them of the style:
<niemeyer> INFO JUJU:jujud:machine rpc: discarding obtainer method reflect.Method{Name:"Kill"
<dimitern> niemeyer: anything with "juju.worker.machiner" in there? (not "juju/worker/machiner")
<dimitern> niemeyer: ah, should be printed once per method in the beginning - it's messy, yes
<niemeyer> dimitern: Nothing at all
<dimitern> niemeyer: can you paste the log somehow? it shouldn't be that huge
<niemeyer> dimitern: Definitely.. just a sec
<niemeyer> dimitern: Btw, I wouldn't be surprised if I have an out-of-date dependency.. but it's not clear which. I'd expect the agent stuff to be all within juju-core itself
<dimitern> niemeyer: it is, yes
<sidnei> rogpeppe: did you mean to suggest CombinedOutput? the comment is a bit unclear
<rogpeppe> sidnei: ah, sorry missed your earlier remark
<rogpeppe> sidnei: (*exec.Cmd).Output
<niemeyer> % juju ssh 0 cat /var/log/juju/all-machines.log | pastebinit
<niemeyer> http://paste.ubuntu.com/5951849/
<sidnei> rogpeppe: yes, past that now, onto the next comment about capturing stderr
<niemeyer> dimitern: ^^^
<niemeyer> Hmm
<niemeyer> Didn't quite work
<rogpeppe> sidnei: it depends where the apt- command prints its error message to
<rogpeppe> sidnei: perhaps it's safer to use CombinedOutput, unless it's cluttered with stdout stuff
<dimitern> niemeyer: maybe it's a dependency actually
<niemeyer> Weird.. piping isn't working
<sidnei> rogpeppe: seems to be stderr: https://pastebin.canonical.com/95449/
<dimitern> niemeyer: you can try comparing the godeps-generated dependencies.tsv file from here https://codereview.appspot.com/12361043/ to a run of godeps -t `{go list ./...}` in juju-core/ (get it from launchpad.net/godeps)
 * rogpeppe goes to look for his keyring for 2-factor auth
<sidnei> rogpeppe: sorry, pasting to public
<niemeyer> dimitern: Holy crap.. it's one gigantic line!
<dimitern> niemeyer: the log?
<sidnei> rogpeppe: http://paste.ubuntu.com/5951862/
<rogpeppe> sidnei: thanks
<rogpeppe> sidnei: it looks as if the error lines have a leading "E:"
<niemeyer> dimitern: Yeah
<rogpeppe> sidnei: if that's standard, perhaps we should use those for the error message. i wonder if we can have several.
<sidnei> indeed. conversely it doesn't print any error on missing config, it just produces no output.
<niemeyer> dimitern: It's breaking the editor as well
<rogpeppe> sidnei: i guess we could split them and join them with semicolons or something
<dimitern> niemeyer: hmm.. maybe take a look for anything suspicious in cloud-init-output.log ?
<sidnei> rogpeppe: http://paste.ubuntu.com/5951872/
<niemeyer> dimitern: Please note that the system is actually working
<dimitern> niemeyer: hmm.. perhaps try zipping the logs and passing it over U1 or something?
<rogpeppe> sidnei: i imagine there are other possible failure modes
<dimitern> mgz, rogpeppe: poke https://codereview.appspot.com/12464043/
<sidnei> rogpeppe: i'll make it like the cmd() in git.go then.
<rogpeppe> sidnei: yeah, that seems reasonable
<rogpeppe> sidnei: thanks
<natefinch> mgz: bzr diff --using seems to only show diffs serially, which is kind of annoying. Is there a way to just have it show them all in parallel?  I don't see anything obvious in the man pages
<mgz> not for diff I think, the equivalent for merge can be smarter
<mgz> ...though that probably wants you to resolve everything sequentially
<mgz> ah, maybe qbzr has better handling for this?
<natefinch> qbzr looks like it should work... evidently some kinks to work out
<natefinch> it's not opening the external diff I specified in the config by default, and if I switch to bcompare from the qdiff window, the command line seems not to expand environment variables
<mgz> have you tried just running `bzr qdiff` with args?
<mgz> modifying your alias to be qdiff rather than diff may be all you need
<natefinch> qdiff doesn't seem to have using
<mgz> hm, no, the flags are different
<mgz> so, is through config only
<natefinch> yeah, I can get qdiff to pop up the default diff, and then choose beyond compare from the list of external diffs... but it doesn't seem to want to pop up beyond compare by default
<dimitern> rogpeppe: are you serious about the tests?
<rogpeppe> dimitern: only if we're serious about the tests for any of the other of those methods. as they are all going away very soon (assuming my proposal is accepted), i have difficulty in suggesting seriously that you write the tests
<dimitern> rogpeppe: yep
<mgz> natefinch: as you may have gathered, I don't use external diff toos so am not clear on all the details... possibly qbzr doesn't support going straight to a subprocess for this
<dimitern> rogpeppe: I'll shovel up most of these ifaces soon
<mgz> looks like what we really want is the better qbzr code in bzr core anyway
<rogpeppe> dimitern: that's what i'm doing now
<natefinch> mgz: no problem, thanks for the help anyway
<dimitern> rogpeppe: ah, even better then :)
<rogpeppe> dimitern: i'm also making some changes to the names package that i hope you'll like
<dimitern> rogpeppe: nice, can't wait to see them
<dimitern> mgz: well, as we discovered with rogpeppe, Setterer is actually a word for some bizarre profession, but Watcherer is just a proper name in some parts :)
<natefinch> dimitern: have you gotten qdiff to work with external diff by default?
<dimitern> natefinch: since I couldn't use my favorite meld, I didn't bother trying to setup external diff with bzr; qdiff seems fine for my uses
<natefinch> dimitern: dang
 * dimitern signs off for today
<dimitern> g'night all!
<rogpeppe> dimitern: g'night
<sidnei> rogpeppe: fixed 'em all
<rogpeppe> sidnei: brilliant, thanks
<sidnei> rogpeppe: should i poke someone else re: apt-config, or you're comfortable with approving?
<rogpeppe> sidnei: it would be really good to get at least one person who knows about that stuff to have a sanity check
<rogpeppe> sidnei: i don't really feel comfortable approving something that i don't know anything about
<sidnei> there's not really much to know about it other than 'man apt-config' afaict
<sidnei> rogpeppe: i've validated the approach with smoser fwiw
<rogpeppe> sidnei: it would be nice if he could have a look at the code, but if it's not possible, then i guess it'll be ok
<sidnei> rogpeppe: +1'd in private
<rogpeppe> sidnei: cool
<sidnei> rogpeppe: there will be a new feature in cloud-init itself so that apt_https_proxy is a valid config option, then we can remove the 99proxy-extra hack when that's out there.
<rogpeppe> sidnei: cool
<rogpeppe> sidnei: just doing a last once-over
<mattyw> I seem to get permission denied when sshing to a unit and cd'ing to /var/lib/juju/agents/<unit-name>/charm, anyone else seen this? < - juju 1.13.0
<rogpeppe> sidnei: i hope that apt doesn't sometimes print warnings to stderr :-)
<sidnei> i could look at the source, but meh. it's generally not very verbose.
<rogpeppe> sidnei: reviewed
<sidnei> rogpeppe: when putting }) in the newline, should i end the last line with ',' as well?
<rogpeppe> sidnei: yes
<rogpeppe> sidnei: (you need to)
<sidnei> do i need to lbox propose again or just rv-submit?
<sidnei> oh, apt-config output is case sensitive
 * sidnei looks for case-insensitive flags on go regex
<sidnei> cool fixed.
<sidnei> rogpeppe: sent for landing
 * rogpeppe smells the sweet smell of frying onions and garlic from downstairs
<rogpeppe> time to stop for the day :-)
<mgz> how very british :)
<rogpeppe> mgz: pea risotto. lots of peas from the garden, mmm.
<mgz> sounds great
<rogpeppe> mgz: smells great :-)
<rogpeppe> g'night all
<mgz> later rog
<davecheney> jamespage: no, sorry, i do not know about pyjuju -> go juju upgrades
<davecheney> at one level, we always said there is no upgrade path for pyjuju-> go juju environments
<davecheney> i am not sure if that was your question
#juju-dev 2013-08-06
<jtv> wallyworld: thanks for your reply to my simplestreams question... I have another fun one though.
<wallyworld> ok
<jtv> Is the Endpoint field in an ImageConstraint ever actually used?
<wallyworld> yes
<jtv> Where?
<wallyworld> to determine which cloud to use - endpoint + region uniquely define the cloud
<jtv> Okay, but where?  Not in simplestreams.go.
<wallyworld> the index file contains region/endpoint pairs
<jtv> Nor in validation.go.
<wallyworld> let me look
<jtv> Yes, but I'm talking about the Endpoint field in the image constraint.
<jtv> This parameter is awkward to obtain; the data suggests that the simplestreams metadata should be the source for that item in the first place.
<wallyworld> jtv: in getImageIdsPath() in simplestreams.go - the CloudSpec structs are compared
<jtv> Bugger.
<wallyworld> no, simplestreams shouldn't be the source
<wallyworld> a cloud deployment is defined by region + endpoint
<wallyworld> that is a separate concept to simplestreams
<jtv> I see.
<wallyworld> i thought all the azure endpoints were well known?
<wallyworld> like for ec2?
<wallyworld> in the ec2 case, they are hard coded
<jtv> Yes, they are -- I got them from the simplestreams data.  :)
<wallyworld> lol
<wallyworld> to use simplestreams, one must first know one's cloud
<jtv> We know our cloud.  It's "Azure."
<wallyworld> in the internals
<jtv> Never heard of these endpoints until I saw them in the simplestreams data...
<wallyworld> that's not simplestreams' fault :-P
<jtv> Perhaps not.  But as far as I'm concerned, all endpoints could have been the same for Azure.
<wallyworld> i think they are mostly?
<jtv> Mostly.
<jtv> Not as good as all.
<wallyworld> i'd just take the shortcut the ec2 uses and enumerate them in code
<jtv> "All" has cyclomatic complexity of 1.  "Mostly" has 2 or more.
<bigjools> is endpoint just an api address?
<jtv> Enumerating them in the code has the added advantage that a change in regions may require an immediate code update.
<wallyworld> yes
<wallyworld> typically
<wallyworld> for openstack, its the url to use to query the identity service
<wallyworld> for ec2, i *think* we actually don't use it, not sure
<bigjools> jtv: china has a different endpoint iirc
<jtv> bigjools: that's the problem I'm talking about, yes.
<bigjools> but gwacl is currently hard-coding just the one anyway
<jtv> Well I'm not sure that's the same thing in the first place.
<jtv> Because we're hardcoding the URLs that the documentation gives as fixed.
<bigjools> the problem is that there's two endpoints in Azure
<bigjools> and I think I only saw one in simplestreams
<jtv> Two.
<bigjools> it lists the storage endpoint too?
<bigjools> I didn't see it do that
<jtv> It does, but you may have to skip over a lot of other data in the json to get to it.
<bigjools> jtv: where? https://cloud-images.ubuntu.com/releases/streams/v1/com.ubuntu.cloud:released:azure.json
<jtv> bigjools: search that page for "endpoint" and you'll find it.
<bigjools> jtv: there's no storage endpoint there they are only management endpoints
<jtv> What's a storage endpoint?
<bigjools> there's a totally different URL for storage access
<jtv> Oh, I missed your question.  But how do you get storage endpoints?
<bigjools> exactly!
<bigjools> they are hard-coded right now
<bigjools> but I expect china has a different one like the management side does
<jtv> Maybe.  But the documentation always says "here's the URL for this request" and never "here's the URL for this request outside of mainland China."
<bigjools> indeed
<jtv> So it's not a given that these endpoints matter to us at all.
<jtv> There is a documentation page about this.
<jtv> It's consistently giving me "no data received."
<jtv> Can anyone else see what's on http://windowsazure.cn/zh-cn/ ?
<bigjools> the first time I saw the china endpoint was in the simplestreams data so I wonder where that was seen to know to put it in there?
<jtv> bigjools: that's exactly what I'm trying to figure out as well.  Can you access that page I just linked to?
<bigjools> yes, it's in chinese
<jtv> Bugger.
<jtv> I guess they don't serve that page in Asia or something.  :/
<bigjools> chromium helpfully offers a translation
<jtv> Ah yes, let Google, Microsoft, and the Chinese government slug it out.  Any mention of URLs?
<bigjools> nope
<bigjools> can't see any api doc links either
<bigjools> I think we should ignore the endpoints for now
<jtv> I agree --- except for the purposes of simplestreams, where they're required.  :(
<bigjools> they are?
<jtv> Yes... in simplestreams, the endpoint is the only thing that identifies Azure.
<jtv> Ah, this blog post explains it: http://geekswithblogs.net/shaunxu/archive/2013/06/10/tips-an-tricks-of-developing-on-windows-azure-china.aspx
<jtv> So it looks as if what we have now simply isn't going to work for mainland China anyway.
<jtv> (Hong Kong seems to be on the international network, not the Chinese intranet)
<bigjools> jtv: sorry, you need to know an endpoint in advance to make use of simplestreams data?
<bigjools> and yes that blog shows that storage endpoint is different for china too
 * bigjools curses bitey things and goes to get anti-itch cream
<jtv> bigjools: yes, this is a pair of parameters we must provide to the imagemetadata code in order for it to find us an OS image: "region" and "endpoint."  The endpoint is the only thing that tells it we're looking for an Azure image, as opposed to (say) an image in EC2.
<bigjools> ok
<jtv> No worries, I'm implementing.
<jtv> Can't have nothing at all working for all of mainland China...
<sidnei> hey folks, trivial fix: https://codereview.appspot.com/12510043
<thumper> sidnei: done
<axw> heh, at the same time :)
<sidnei> thanks :)
<tasdomas> hi
<tasdomas> does juju-core support peer relations?
<thumper> tasdomas: yes
<tasdomas> thumper: and these relations can be established between units running different services (as long as the relation name is the same)?
<rogpeppe> mornin' all
<thumper> no, peer relations are between units of one service
 * rogpeppe is slightly scraped and bruised
<tasdomas> thumper: is there any way to do service discovery in a juju environment?
 * thumper hands tasdomas to rogpeppe
<tasdomas> morning, rogpeppe
<rogpeppe> tasdomas: hiya
 * rogpeppe needs some context
<rogpeppe> tasdomas: what do you mean by "service discovery" there?
<tasdomas> rogpeppe, let's say I have two charms: a server charm and a client charm
<tasdomas> they both expose a relation
<tasdomas> is it possible for the client charm, when it is deployed, to check if a server charm is already running in that environment
<tasdomas> and initiate a relation automatically?
<rogpeppe> tasdomas: no, relations must be added explicitly by the user
<tasdomas> rogpeppe, I see, thanks!
<rogpeppe> tasdomas: np
<jtv> Any reviewers available for https://codereview.appspot.com/12512043 ?  It matches a gwacl API change, so I'd like to get it in quickly and minimize the window for inconvenience.
<rogpeppe> jtv: looking
<noodles775> dimitern: Thanks for the review yesterday. Do I just need to find a second LGTM? (https://codereview.appspot.com/12469044/ )
<dimitern> noodles775: yep; only really trivial changes can land with a single LGTM
<dimitern> jtv: looking as well
<noodles775> Any reviewers able to do a second check on https://codereview.appspot.com/12469044/ (just updates juju init to create the environments.yaml by default, as per https://bugs.launchpad.net/bugs/1208491 )
<_mup_> Bug #1208491: Juju init should write an environments.yaml <papercut> <juju-core:In Progress by michael.nelson> <https://launchpad.net/bugs/1208491>
<rogpeppe> noodles775: i'll have a look after i've reviewed jtv's branch
<noodles775> Thanks rogpeppe
<dimitern> jtv: reviewed
<rogpeppe> jtv: reviewed
<rogpeppe> noodles775: reviewed
<dimitern> rogpeppe: so
<dimitern> rogpeppe: you said you're working on agententity + unifying the common bits?
<rogpeppe> dimitern: yup
<dimitern> rogpeppe: how's that going?
<rogpeppe> dimitern: i've got two CLs in the works; i'm about to propose one
<dimitern> rogpeppe: ok
<dimitern> rogpeppe: i was looking at the uniter code and came up with a list of client-side api stuff we need: http://paste.ubuntu.com/5954390/
<rogpeppe> dimitern: quite a lot of stuff there doesn't need to be in separate API calls, but i'm sure you know that
<dimitern> rogpeppe: now, from there I'll reverse it to a bunch of server-side calls
<dimitern> rogpeppe: yeah, I know
<rogpeppe> dimitern: https://codereview.appspot.com/12473043/
<dimitern> rogpeppe: so your idea is to expose Entity and get rid of the functions that use it, leave only the interfaces?
<rogpeppe> dimitern: yes
<rogpeppe> dimitern: i don't see that those functions add anything over just doing a type conversion inline
<dimitern> rogpeppe: maybe complement that with a bunch of errors in the errors package, describing X does not do Y
<dimitern> rogpeppe: like "%q does not support lifecycles"
<rogpeppe> dimitern: perhaps
<rogpeppe> dimitern: if so, probably one error type would be sufficient
<dimitern> rogpeppe: reviewed
<rogpeppe> dimitern: thanks
<noodles775> rogpeppe: do you think that we should support both flags together (ie. `juju init -f --show`), or just give one precedence?
<rogpeppe> noodles775: i think it's probably ok to just ignore -f if --show is given
<rogpeppe> noodles775: nothing to force
<noodles775> Cool
<jtv> Thanks dimitern & rogpeppe for your reviews!  Could either of you update gwacl on the build machine so I can land?  I'll also send out an email to juju-dev about updating dev systems.
<dimitern> jtv: i'm on it
<jtv> Thanks.
<rogpeppe> dimitern: responded. i'm interested in your thoughts on the "tagName" issue
<dimitern> jtv: done
<jtv> Thanks!
<dimitern> rogpeppe: it is indeed from a tag, otherwise it's from an id or unitName, etc.
<rogpeppe> dimitern: it's *from* a tag, yes, but it isn't a tag itself
<rogpeppe> dimitern: because a tag has a "kind-" prefix, which this does not
<dimitern> rogpeppe: and please use "test %d:" it makes more sense imho and as william said (and I agree) my eyes are trained to look for that in log outputs
<dimitern> rogpeppe: ah, good point
<dimitern> rogpeppe: can we call it fromTagSuffix then?
<rogpeppe> dimitern: sgtm
<dimitern> rogpeppe: replied
<rogpeppe> dimitern: PTAL
<dimitern> rogpeppe: looking now
<dimitern> rogpeppe: only one comment - the argument to *TagSuffixToId should be called "suffix" I think
<rogpeppe> dimitern: ah, good point
<dimitern> rogpeppe: otherwise, LGTM  still
<rogpeppe> anyone fancy a quick review? https://codereview.appspot.com/12473043
<noodles775> rogpeppe: updated - thanks. https://codereview.appspot.com/12469044/
<rogpeppe> noodles775: ah, that's not quite what i had in mind
<rogpeppe> noodles775: i think that --show should print the generated configuration data without trying to write it to a file
<rogpeppe> noodles775: otherwise it's just a shortcut for cat $HOME/.juju/environments.yaml
<noodles775> rogpeppe: Right - I was wondering what the benefit was :) OK, hangon.
<rogpeppe> noodles775: responded with a couple of suggestions following on from the above
<noodles775> rogpeppe: done - sorry for the hand-holding.
<rogpeppe> noodles775: not at all, thanks for going along with me
<rogpeppe> noodles775: reviewed
<wallyworld_> rogpeppe: i think you forgot my reviews from yesterday? (plugin args and env.yaml simplification)
<rogpeppe> wallyworld_: oh bugger, sorry
<wallyworld_> np :-)
<rogpeppe> wallyworld_: looking now
<wallyworld_> ty
<noodles775> rogpeppe: s/ShowFile/Show/ pushed.
<rogpeppe> noodles775: cool
<rogpeppe> noodles775: thanks
<dimitern> fwereade, rogpeppe, mgz, axw: standup
<noodles775> rogpeppe: hrm, I tried `lbox submit` but apparently the perms have changed recently ( http://paste.ubuntu.com/5954761/ ). I'll check with John Meinel who apparently changed the perms recently.
<rogpeppe> noodles775: the submit procedure has changed
<rogpeppe> noodles775: we now go through a bot
<rogpeppe> noodles775: the way to submit is this:
<rogpeppe> noodles775: - set the commit message on the merge proposal (by copying and pasting from the mp description, making sure to include the codereview link)
<rogpeppe> noodles775: - mark the merge proposal as Approved
<rogpeppe> noodles775: the branch will then be merged some time later, when the bot has tested it (on the order of 15 minutes, depending on the size of the queue)
<dimitern> noodles775: or, you can use rv-submit
<dimitern> noodles775:  $ mkdir -p ~/.bazaar/plugins
<dimitern> $ bzr branch lp:rvsubmit ~/.bazaar/plugins/rvsubmit
<dimitern> $ bzr rv-submit (to land the branch)
<axw> dimitern: sorry, was idling before (I guess you figured that out tho)
<dimitern> axw: no worries, I remembered your tz and that you're not usually joining us for the standup, because it's too late for you
<axw> yeah it is a bit - I can come back for a standup in the future if needed
<axw> I'm off now, have a nice day
<dimitern> axw: if you think it's not too late, be welcome - we're trying to keep it within 30m tops
<dimitern> axw: you too!
<axw> ok
<ahasenack> hi guys, is it expected that the charm directory has tight permissions?
<ahasenack> drwx------ 4 root root 4.0K Aug  6 00:11 /var/lib/juju/agents/unit-ubuntu-0/charm/
<ahasenack> just wondering because the postgresql charm broke because of that, apparently that wasn't the case in pyjuju
<ahasenack> https://bugs.launchpad.net/juju-core/+bug/1205286
<_mup_> Bug #1205286: charm directory permissions now more restrictive <juju-core:New> <postgresql (Juju Charms Collection):Triaged> <postgresql-psql (Juju Charms Collection):Triaged> <https://launchpad.net/bugs/1205286>
<noodles775> Thanks rogpeppe, dimitern. FWIW, it looks like you need to be a branch reviewer to use rv-submit: http://paste.ubuntu.com/5954888/
 * noodles775 tries adding an approve vote to see if that helps.
<dimitern> noodles775: well, if it doesn't, I can mark it as approved for you
<noodles775> dimitern: it doesn't. Please do, thanks.
<dimitern> noodles775: done, let's see - the bot should pick it up soon
 * rogpeppe goes for lunch
<rogpeppe> dimitern: ping
<dimitern> rogpeppe: pong
<rogpeppe> dimitern: i'm wanting to talk some things over to get my head straight
<rogpeppe> dimitern: fancy a chat?
<dimitern> rogpeppe: sure, just a sec
<noodles775> Another easy (but maybe controversial) if anyone has time: https://codereview.appspot.com/12535043
<rogpeppe> dimitern: standup hangout?
<dimitern> rogpeppe: https://plus.google.com/hangouts/_/9557d0344a6e30b1320adbbf7a4406d3d7f27b09?hl=en
<rogpeppe> wallyworld_: are you still around, by any chance?
<natefinch> cd ..
<natefinch> haha
<natefinch> anyone know why I can commit and push, but bzr pull says no pull location known or specified?
<sidnei> natefinch: try bzr pull --remember, and it will work from there on
<natefinch> sidnei: thanks...  still getting used to bazaar
<rogpeppe> a couple of branches i'd much appreciate reviews of, please: https://codereview.appspot.com/12473043/ and https://codereview.appspot.com/12551043
<rogpeppe> time for me to go
<rogpeppe> g'night all
<natefinch> g'night rogpeppe
#juju-dev 2013-08-07
<bigjools> davecheney: does the core team have any plans to set up CI for juju?
<davecheney> bigjools: yes, we want to set that up
<davecheney> but we're still stuck on the dependency management problem
<davecheney> at the moment tarmac and lb build recipies are the best we have
<bigjools> davecheney: right - nobody replied to my email suggesting a way of handling that.
<bigjools> davecheney: also at the sprint here I want to move gwacl over to the same bot that lands the core
<davecheney> bigjools: ok, cool
<davecheney> sorry, still on da phone
<bigjools> np
 * bigjools picked a hotel for everyone. It's not too late to move it to a grotty backpackers hostel.
<davecheney> axw: did you want to try for a 1.13.1 release tomorrow or friday to pick up debug-hooks ?
<davecheney> when i say 'you', i mean 'me', except when it comes to writing the release notes, in which case i mean 'you' :)
<axw> davecheney: I'm happy to write the release notes. I understand people are waiting for it, so I guess now's as good a time as any
<axw> I'll write something up after finishing reviews
<davecheney> axw: it can be as simple as 'debug-hooks' now works
<davecheney> if there are any operational differences just call them out
<davecheney> but it sort of works as expected right ?
<axw> yeah there shouldn't be any differences
<axw> I'll read over the docs again to make sure
<davecheney> axw: https://docs.google.com/a/canonical.com/document/d/1XPzmRAM3W7oYYmCGVYB5nR1DZyB16t8qPF68xxmWZho/edit#
<axw> davecheney: ta
<axw> davecheney: yeah, there are no operational differences between pyjuju and juju-core implementations
<axw> so I guess your one liner is good enough
<davecheney> axw: marco and I are going to use it release notes or not
<davecheney> :)
<axw> use what?
<davecheney> we're doing some training next week, north of australia
<davecheney> so debug hooks would be useful
<axw> ah
<axw> cool :)
<davecheney> it is one of the last things keeping people on pyjuju
<davecheney> that and the argument about provider specific constraint names
<axw> davecheney: has anyone written anything substantial about what's expected of the null/manual provider?
<axw> e.g. what is it meant to do when a request for a new instance is made?
<davecheney> axw: willimam would have to be the one to answer that
<davecheney> it sounds more like a pipe dream
<axw> davecheney: ok, ta
<axw> you don't think it's feasible?
<davecheney> but i guess it operates like a provider where someone has called 'add-machine' to prime the environment
<davecheney> axw: i haven't seen a spec, just a lot of hyperventilation
<davecheney> so i'll reserve judgement
<axw> right
<jtv> Hi wallyworld_ -- could I quiz you a bit more about simplestreams?
<wallyworld_> jtv: sure, give me a couple of minutes
 * jtv gives wallyworld a couple of minutes
<wallyworld_> jtv: z'up
<jtv> Hi wallyworld_.  I was wondering about the daily vs. release simplestreams.
<jtv> How should we arrange configuration for this?
<wallyworld_> what were you wondering?
<jtv> Here's what:
<wallyworld_> i just use the release ones
<jtv> Default would get the base URLs for both the release and the daily simplestreams.
<wallyworld_> i ignore the dailies
<jtv> The Stream would be configurable,
<jtv> and if desired, we could allow configuration to override the base URLs -- but hopefully nobody would really use that.
<wallyworld_> for ec2 etc, the base url is the same for both
<wallyworld_> as far as i remember
<wallyworld_> the difference is in the product ids inside the index
<jtv> It all depends on how far down that tree we navigate, doesn't it?  Lemme check
<jtv> https://cloud-images.ubuntu.com/releases/streams/v1/
<wallyworld_> my memory might be wrong
<jtv> https://cloud-images.ubuntu.com/daily/streams/v1/
<jtv> I think those need to be separate base URLs.
<wallyworld_> ah, that looks like it may have changed since i last looked at it
<wallyworld_> yes, in that case, the base urls need to be different
<jtv> But I was thinking: if we get to select the stream, we might as well include both URLs by default.
<wallyworld_> i haven't put the base url into config yet
<jtv> Because we want the config to be flexible, but also, simple.
<jtv> That explains why searching the providers for "daily" didn't get me anything.  :)
<wallyworld_> i think including both urls will work
<wallyworld_> but i think we may also need a user override for base url
<jtv> It should -- everything is designed for multiple base URLs.  I'd be happy to leave them hard-coded for now, and make the stream configurable.
<wallyworld_> sure
<jtv> IIRC we get to pick 1 stream in our image constraint.  I wonder if we should expose it in the config that way, or let the user specify multiple streams in order of preference and loop over calls to imagemetadata.
<wallyworld_> i think using one stream only is easier to understand
<jtv> True... and having an order of preference would be something that belonged in imagemetadata anyway.
<jtv> So I'll go with a single config item then: "stream."  It'll default to the releases stream.
<wallyworld_> sounds good
<jtv> (I think that stream is called "released" but its index is called "releases" or something like that)
<wallyworld_> i'm about to propose a pretty big (in terms of diff) change to simplestreams stuff - to have a generic part and a type specific part
<jtv> Ah yes, the stream is called "released" but the base URL says "releases".
<jtv> Thanks!
<wallyworld_> so we can have image metadata and easily add tools metadata
<wallyworld_> most of the code is framework, the image and tools bits just define some type specific artifacts
<jtv> wallyworld_: actually it looks as if maybe imagemetadata.DefaultBaseURL should be an array: [released, daily].
<jtv> But absent this same streams configuration in the other providers, that would suddenly make them use daily images.  :/
<wallyworld_> the daily one could come second? and isn't daily used in the product id? so no matches would occur?
<jtv> wallyworld_: I got called away...  Yes, you're right, I think the way imagemetadata works, listing the releases URL first would mean that providers would still get the released images by default.
<jtv> So that might be worth changing globally.  Then all providers would know of both the released and the daily streams out of the box, with released being the default, and it'd be easy to add configuration options for "daily."
<jtv> In fact I don't see why any of this should be provider-specific.
<wallyworld_> jtv: it is provider specific because different providers have their own way of giving out metadata. only CPCs use the default base urls
<wallyworld_> eg canonistack defines a keystone endpoint
<wallyworld_> and we want the default urls to be used last so users can override them
<jtv> I see.
<jtv> Still, selecting a stream should be pretty standard, no?
<wallyworld_> yes
<noodles775> Thanks for the review axw. Anyone around who can do a second review for a simple branch - adding environment name to the status output: https://codereview.appspot.com/12535043
<axw> nps
<sidnei> thumper: here's a fun one if you want to take a look: http://paste.ubuntu.com/5957994/ was midway through bringing a unit up when my battery died, now it's stuck in a loop trying to recover from what seems to be an unrecoverable state
<sidnei> ok, blowing away state/bundles and state/deployer made it get unstuck
<rogpeppe> mornin' all
<rogpeppe> anyone up for giving me another review of this? https://codereview.appspot.com/12473043/
<rogpeppe> and i'm after any reviews of this: https://codereview.appspot.com/12551043/
<wallyworld_> rogpeppe: hi, i'm about to go play soccer. i re-proposed some changes to the branch. also, don't forget the plugin params branch :-) i'll be back in a few hours
<rogpeppe> wallyworld_: will do. have fun!
<wallyworld_> thanks!
<rogpeppe> mgz: could you have a quick glance at https://codereview.appspot.com/12361043/ please, just to go over my additions to the CONTRIBUTIONS file?
<rogpeppe> dimitern: you've been waiting for this a while: https://codereview.appspot.com/12551043/
<dimitern> rogpeppe: cheers!
<dimitern> rogpeppe: ian sent me an email that he's going to miss today and tomorrow's standup due to soccer
<rogpeppe> dimitern: ok
<dimitern> rogpeppe: did you manage to chat with him yesterday?
<rogpeppe> dimitern: no
<dimitern> rogpeppe: well, it's friday then
<rogpeppe> dimitern: i might be able to catch him today after soccer
<dimitern> rogpeppe: ah, ok
<axw> rvba: ping?
<rvba> Hi axw
<axw> rvba: hi. regarding the NetworkConfigurationSet thing...
<axw> NewNetworkConfigurationSet takes a slice, which you're passing in as nil
<axw> it then takes the address
<axw> so it's never going to be a nil pointer
<axw> rvba: not sure if this is really a problem or not, I'm just going by what the gwacl comment says
<rvba> axw: right, but that's not a problem because NewNetworkConfigurationSet creates a configuration of type "NetworkConfiguration" so we don't need the "subnetNames" element to be omitted.
<rvba> axw: NewLinuxProvisioningConfigurationSet creates the same type of XML element but with a type "LinuxProvisioningConfiguration".  In this case, the "subnetNames" element *must* be omitted and we use the nil pointer trick.
<axw> gotcha
<axw> sorry!
<axw> I will LGTM now
<rvba> Sorry *I* wasn't very clear before :)
<rvba> Ta.
<dimitern> rogpeppe: I particularly like the "oneLife" method
<dimitern> :)
<dimitern> rogpeppe: I don't think you merged my trunk changes into your branch though
<rogpeppe> dimitern: no, good point - i'll do that
<dimitern> rogpeppe: reviewed
<rogpeppe> dimitern: ta!
<rogpeppe> dimitern: i removed the test cases because they're a little more awkward to implement with the shared fakeState and i didn't think they added anything significant to the test coverage, but actually i think you're right; i'll reinstate them
<dimitern> rogpeppe: thanks!
<dimitern> rogpeppe: I suspected it was a bit awkward to implement them, but they still test a valid case, which shows we handle state errors correctly
<noodles775> dimitern: thanks for the second review. I've done rv-submit (ie. just adds reviewers to description for me), could you please manually mark the MP approved? https://code.launchpad.net/~michael.nelson/juju-core/1208505-status-env/+merge/178766
<sidnei> https://bugs.launchpad.net/juju-core/+bug/1209112
<_mup_> Bug #1209112: deployer stuck in unrecoverable loop after power loss <juju-core:New> <https://launchpad.net/bugs/1209112>
<dimitern> sidnei: sorry, I don't really understand what was the issue there
<sidnei> dimitern: the zip file in the state dir was corrupted
<sidnei> dimitern: or at least that's my understanding
<dimitern> sidnei: how did that happen?
<sidnei> dimitern: local deployer, ran out of battery in my laptop
<dimitern> sidnei: but it seems the issue was with the uniter not downloading the complete zip file, not the deployer
<sidnei> dimitern: i probably got them confused, yes
<dimitern> sidnei: I edited the description, but otherwise it seems like a genuine uniter issue
<dimitern> sidnei: thanks for the report
<sidnei> np!
<dimitern> rogpeppe: unit agents are now deployed with APIInfo, right?
<rogpeppe> dimitern: yes
<rogpeppe> dimitern: they even connect to the API now
<dimitern> rogpeppe: great, so we have half of the blueprint implemented now :)
<noodles775> axw: Thanks for the approve. The mp failed with a test failure... all tests pass for me locally, so I tried re-merging devel and running that test, it still passes: http://paste.ubuntu.com/5958302/ . Any ideas?
<axw> noodles775: not sure what happened there, it looks unrelated tho.  Perhaps a race condition in another test. I'll reapprove
<dimitern> rogpeppe: I thought you fixed that http://paste.ubuntu.com/5958349/
<axw_> noodles775: it worked this time
<rogpeppe> dimitern: i'm still waiting for a second review on the branch
<dimitern> rogpeppe: no, I meant TestMachineWatch
<dimitern> axw_: it's intermittent
<rogpeppe> dimitern: that's what i'm talking about
<rogpeppe> dimitern: https://codereview.appspot.com/12352044/
<dimitern> rogpeppe: ah, I see
<dimitern> axw_: can you review that please? ^^
<axw_> yup, may be beyond me tho
<rogpeppe> axw_: a good opportunity to delve into the details of the state/watcher code :-)
<axw_> :)
<dimitern> axw_: take a look at State.Sync and State.StartSync - that's the crux of the problem
<axw_> dimitern: thanks
<noodles775> axw_: Thanks - merged that time.
<axw_> noodles775: nps
<rogpeppe> axw_: and note the way that the watcher code uses time.After(0) when it gets a sync request. that's part of the source of this problem, i think.
<dimitern> noodles775: btw you might want to get rid of cobzr and use bzr instead - cobzr was created when bzr didn't support some of the features, like lightweight checkouts
<noodles775> dimitern: really? hrm - we should update the CONTRIBUTING doc. bzr has supported lightweight checkouts for a long time (I mean, I wrote this tutorial using them in 2011... https://micknelson.wordpress.com/2011/05/19/sharing-your-development-environment-across-branches/ )
<dimitern> noodles775: yeah, that part in CONTRIBUTING needs to be updated
<noodles775> dimitern: Cool. I was wondering whether it'd be worth adding a make target that does most of what CONTRIBUTING says (or should say). If you guys can let me know what the proper way is atm, I can update it and/or add the target.
<noodles775> For eg, currently I'm installing go from godeb (1.1.1), and downloading the mongodb executables and putting them in $GOPATH/bin, but is the PPA preferred now?
<dimitern> noodles775: ppa's version of mongodb is not yet usable I think - at least I couldn't make it work couple of weeks ago when I tried last
<axw_> rogpeppe: it looks sound, but do you mind if I go over it more closely in the morning when I'm more awake?
<axw_> haven't had much sleep :)
<noodles775> dimitern: what about go - is godeb install 1.1.1 preferred for a dev env, or a PPA?
<dimitern> noodles775: we all should be using the golang version from ppa:james-page/golang-backports
<dimitern> noodles775: it's 1.1.1 + some post-release fixes, that will be like that until we backport it to the archive
<noodles775> k
<jamespage> dimitern, actually that reminds me - mgz - I pushed an up-to-date as of yesterday morning copy of golang 1.1.1 into https://launchpad.net/~james-page/+archive/golang-backports
<jamespage> do you want to sync that to your official ~juju golang PPA?
<jamespage> noodles775, dimitern: this PPA would be better - https://launchpad.net/~juju/+archive/golang
<dimitern> jamespage: ah, ok, good to know
 * dimitern bbiab
<jamespage> thats what we are using to do all of the juju-core backport builds
<noodles775> Great - thanks jamespage
<jamespage> davecheney, still awake?
<mgz> jamespage: yup, we should
<mgz> copied across
<mgz> removed the saucy one as that doesn't need backporting
<dimitern> rogpeppe, mgz, natefinch: standup
<rogpeppe> mgz: you still there?
<mgz> gsh!
<mgz> how long was I talking for when g+ decided to log me out
<rogpeppe> mgz: not v long
<mgz> >_<
<natefinch> rogpeppe, dimitern, mgz: Do we avoid concrete error types on purpose? I've noticed a lot of places in the code where we do string comparisons on error messages, and it makes me antsy, because I always consider error messages to be something you shouldn't count on staying the same.
<rogpeppe> natefinch: i think we only do that in tests
<natefinch> rogpeppe: ok, that's fair
<rogpeppe> dimitern: reproposed, with all your requested fixes made, i think. i also added the same error-returning test to the other places that didn't do it. https://codereview.appspot.com/12551043/
<dimitern> rogpeppe: great, thanks! will take a look in a bit
<rogpeppe> i wondered why the network was down. turned out that Carmen had accidentally unplugged the ethernet switch when cleaning
<dimitern> :)
<dimitern> natefinch, rogpeppe: reviewed
<natefinch> dimitern:  thanks!
<rogpeppe> dimitern: thanks!
<rogpeppe> dimitern: the TODO now lives in the FindEntity method
<dimitern> rogpeppe: ah, ok I missed that then
 * rogpeppe goes for lunch
<frankban> could anyone please take a look at https://codereview.appspot.com/12522043? thanks!
<dimitern> frankban: i will
<frankban> dimitern: thank you
<dimitern> frankban: reviewed
<frankban> dimitern: thanks!
<rogpeppe> frankban: reviewed
<frankban> rogpeppe: thanks!
<frankban> dimitern, rogpeppe: I'll make the requested changes, and then I 'll try to land it. This branch unblocks the GUI and we can re-discuss API implication/changes later. How does it sound?
<rogpeppe> frankban: assuming the GUI will be able to change to use a different API if we decide to revert to the original direction, +1
<frankban> rogpeppe: cool
<dimitern> frankban: what implications?
<dimitern> rogpeppe, frankban: I'm ok with depracating that call later than 1.14
<dimitern> but at least there should be a // DEPRECATE(vTBD) or something there
<rogpeppe> dimitern: +1
<frankban> dimitern: yes, Roger suggested it in his review
<dimitern> frankban: ok then; my point is - it'll be useful later to search for these DEPRECATE tags
<dimitern> rvba: hey
<rvba> Hi dimitern.
<dimitern> rvba: if you search for all errors in the codebase, you'll see the lowercase-first format; moreover as you said errors can be chained, so they're not a sentence by themselves
<rvba> dimitern: right but in this case, it's not an error, it's a unit-test failure message. AFAIK it won't be chained.
<rvba> That's why I'm not sure the rule applies here :).
<dimitern> rvba: even these are lowercase-first - take a look at c.Logf, c.Fatalf, c.Errorf invocations in tests
<dimitern> rvba: i'm aiming for consistency here
<rvba> dimitern: ah ok, you're rightâ¦ it's a bit weird but okay, consistency is good :)
<rvba> Thanks for the heads up.
<dimitern> rvba: thanks for understanding :)
<dimitern> rogpeppe: so you merged https://codereview.appspot.com/12551043/ with 1 LGTM only? (not that i'm complaining much..)
<rogpeppe> dimitern: oh
<rogpeppe> dimitern: for some reason i was sure i'd had another review of it
<dimitern> mgz: can you LGTM that post-factum ? https://codereview.appspot.com/12551043/
<rogpeppe> dimitern: could someone else take a look at it, for form's sake?
<rogpeppe> mgz, natefinch: ^
<mgz> dimitern: sure :)
<mgz> oo, it's actually quite a complex one
<natefinch> rogpeppe:  I'll give it a quick once over, but I only have about 20 minutes before I have to jet, so it might not be enough time.  Sounds like Martin has it though.
<dimitern> mgz: not really, it's mostly restructuring and removals
<rogpeppe> natefinch: any remarks about what you see in that time would be appreciated regardless, if you want to have a look
<mgz> yeah, looks mostly renaming and shuffling, just a nice big diff :0
<natefinch> rogpeppe: GetAnnotationsResults is a struct, not a function?  Oddly named...
<rogpeppe> natefinch: the convention in the params package is to declare structs that are parameters to API calls
<rogpeppe> natefinch: the argument parameter struct is named after the call (so we might have params.GetAnnotations, for example
<rogpeppe> natefinch: the results struct gets a "Results" suffix
<rogpeppe> natefinch: i agree that in isolation it looks odd
<natefinch> rogpeppe: why use structs for params and returns?
<rogpeppe> natefinch: so that we can add extra parameters to API calls without breaking backwards compatibility
<natefinch> rogpeppe: Ahh, yeah, good point
<rogpeppe> natefinch: it means that all parameters to API calls are named in the json
<natefinch> rogpeppe: also a good point :)
<rogpeppe> natefinch-afk: you have another revier
<rogpeppe> review
<rogpeppe> dimitern: i've just thought of a potential problem with my Bootstrapper idea
<dimitern> rogpeppe: oh?
<rogpeppe> dimitern: it assumes that the only reason for calling EnvironProvider.PrepareToBootstrap is if you're about to bootstrap
<dimitern> rogpeppe: what else?
<rogpeppe> dimitern: but that's not actually true - sync-tools is an example of that
<dimitern> rogpeppe: hmm.. it needs the bucket only, but does not bootstrap
<rogpeppe> dimitern: because you want to call sync-tools to push tools out to a current env (including verifying and possibly creating the bucket), but then you don't bootstrap
<dimitern> rogpeppe: yeah, good point
<dimitern> rogpeppe: but this does not invalidate my Initialize proposal, right?
<rogpeppe> dimitern: not sure. when would you call Initialize?
<dimitern> rogpeppe: when you need to have a bucket or anything else pre-bootstrap
<rogpeppe> dimitern: where would the config argument to Initialize come from?
<dimitern> rogpeppe: you don't have to call bootstrap afterwards
<dimitern> rogpeppe: envs.yaml?
<rogpeppe> dimitern: in that case it has the same problem
<rogpeppe> dimitern: assuming you always call Initialize before calling Bootstrap
<dimitern> rogpeppe: what config do you need to pass to Initialize anyway?
<rogpeppe> dimitern: if i call sync-tools, then i call bootstrap, the bootstrap command should see the control-bucket that's been created by calling sync-tools
<rogpeppe> dimitern: if Initialize only looks at environments.yaml, and it's called before Bootstrap, then Bootstrap will see a new control bucket
<rogpeppe> dimitern: ... not the one that was created in the course of calling sync-tools
<rogpeppe> dimitern: i *think* i've got a solution, which applies equally to Initialize or Bootstrapper
<dimitern> rogpeppe: i don't think it's sync-tools' job to create a bucket
<rogpeppe> dimitern: in which case, how can you use sync-tools?
<dimitern> rogpeppe: it has to rely on the provider to do that, just like bootstrap does
<rogpeppe> dimitern: ok, assume a totally new juju environment. i've created environments.yaml with my provider credentials but no control bucket
<rogpeppe> dimitern: should i be able to call sync-tools now?
<dimitern> rogpeppe: yes, and it should check there's no bucket and call initialize
<rogpeppe> dimitern: ok, so it's created a new name for the bucket, right?
<dimitern> rogpeppe: in fact any such command has to ensure there's a bucket if it needs it
<dimitern> rogpeppe: yes, which is then stored in the local attrs
<rogpeppe> dimitern: ok, so then I call Bootstrap
<rogpeppe> dimitern: Bootstrap calls Initialize
<dimitern> rogpeppe: yeah, it sees the bucket from the local attrs and uses that
<rogpeppe> dimitern: ah, i thought we were creating new settings each time an environment was bootstrapped
<dimitern> rogpeppe: you call bootstrap the command or bootstrap from environs?
<rogpeppe> dimitern: the command
<dimitern> rogpeppe: let's revisit that a bit
<dimitern> rogpeppe: bootstrap creates new local config, if it doesn't exist yet
<rogpeppe> dimitern: what if exists but the old bucket name has now been taken?
<dimitern> rogpeppe: what of it? is it any different from how is it now?
<dimitern> rogpeppe: wait, what? "taken" ? by whom
<rogpeppe> dimitern: someone random on the internet
<dimitern> rogpeppe: it was created by initialize when you called sync-tools, and it's not removed, no?
<dimitern> rogpeppe: the bootstrap reuses that, given it's already there in the local conf
<dimitern> rogpeppe: istm the only way for the bucket to be taken is if it was removed before bootstrap
<rogpeppe> dimitern: hmm, i think that might work. we always read local attrs, even when calling Bootstrap. we destroy local attrs when we call destroy-environment
<dimitern> rogpeppe: +1
<dimitern> rogpeppe: we should also do it when bootstrap fails perhaps
<rogpeppe> dimitern: hmm, not sure
<dimitern> rogpeppe: i'm not big on that one either
<dimitern> rogpeppe: it's just one command away to remove it
<rogpeppe> dimitern: yeah
<rogpeppe> dimitern: it does mean that you can meaningfully call destroy-environment even when you haven't bootstrapped
<dimitern> rogpeppe: I think you can do that even now
<rogpeppe> dimitern: yeah, probably
<dimitern> rogpeppe: yep, just tried - no error
<rogpeppe> dimitern: i'm trying to work out the implications of this in a distributed environment
<rogpeppe> dimitern: i think this whole thing makes it more crucial that we generate the env's UUID the first time we write out the local attrs
<dimitern> rogpeppe: well, having local conf and envs.yaml means you need to provide both of these to a third party wishing to access you env
<dimitern> your
<rogpeppe> dimitern: i'm thinking of debugging this, when you've got a stale local attrs file because someone else (probably you) has destroyed & re-bootstrapped an environment
<dimitern> rogpeppe: worst thing?
<dimitern> rogpeppe: juju won't see your env and will ask you to bootstrap it again
<dimitern> rogpeppe: but the env itself won't be affected by the stale conf
<rogpeppe> dimitern: machine 1: bootstrap; copy local env file to machine 2; destroy; bootstrap;   machine 2: bootstrap
<rogpeppe> dimitern: -> two environments
<rogpeppe> dimitern: perhaps that's fine though
<dimitern> rogpeppe: yep
<dimitern> rogpeppe: you know the saying about making things foolproof
<rogpeppe> dimitern: or, worse because silent: call sync-tools on machine 2 - it'll upload the tools to a new environment, and say it succeeded
<rogpeppe> dimitern: even though you *think* you uploaded to the original env
<dimitern> rogpeppe: shouldn't sync-tools fail if you already have the latest?
<rogpeppe> dimitern: the sync-tools on machine-2 will re-use the old control bucket
<rogpeppe> dimitern: which will have been destroyed earlier
<dimitern> rogpeppe: that way, assuming you did sync-tools originally on m1, doing it on m2, after destroying and re-bootstrapping on m1 will succeed
<dimitern> rogpeppe: yep
<rogpeppe> dimitern: no, because the re-bootstrapping on m1 will now be using a new control bucket
<dimitern> rogpeppe: we might have to think of juju export or migrate command to share envs like that
<rogpeppe> dimitern: well, that's where J**S comes in
<dimitern> rogpeppe: or at least store the intent of sharing it somewhere, so we can warn users about destroying shared envs
<dimitern> rogpeppe: before that
<rogpeppe> dimitern: nah, i think there's nothing useful we can do there
<dimitern> rogpeppe: how about a flag "shared" that, if set, will make some commands issue warnings
<rogpeppe> dimitern: honestly, i think the user should just be aware
<rogpeppe> dimitern: although....
<rogpeppe> dimitern: perhaps we could print a message when generating a new environment
<dimitern> rogpeppe: juju should be aware as well i think, if doesn't hurt and helps us give more context to an absent-minded users
<sidnei> rogpeppe, dimitern: are you guys happy with https://codereview.appspot.com/12235043/ ? im dying to use it
<dimitern> sidnei: rogpeppe can say about his review, but I think william had some issues with it
<dimitern> sidnei: I'll take a closer look
<sidnei> just realized nothing prevents me from merging locally, so. ha.
<sidnei> so much nicer.
<sidnei> seems like axw addressed william's concerns, unless there were further ones.
<dimitern> fwereade: hey
<fwereade> dimitern, heyhey
<dimitern> fwereade: https://codereview.appspot.com/12235043/ - wanna have a look?
<dimitern> fwereade: you already reviewed that earlier
<fwereade> dimitern, I started yesterday but distractions happened
<fwereade> dimitern, I'll see if I can finish it now
<dimitern> rogpeppe: you perhaps as well?
<rogpeppe> dimitern: i'm just on it
<dimitern> cheers!
<rogpeppe> sidnei: i've just re-reviewed. looks very close to me, but fwereade's MMV
<rogpeppe> fwereade: hiya BTW,
<rogpeppe> fwereade: hope all's going ok
<fwereade> rogpeppe, heyhey
<fwereade> rogpeppe, yeah, not too bad
<fwereade> rogpeppe, I *might* manage more than one review today but don;t count on it :(
<sidnei> thanks!
<rogpeppe> natefinch: i made a couple of suggestions in https://codereview.appspot.com/12546043/
<natefinch> rogpeppe: yep, was just looking at them
<rogpeppe> natefinch: i haven't verified that the Command-wrapping stuff works, but i don't see why it shoudn't and it seems to me a reasonably clear win
<dimitern> sidnei: that CL is 2xLGTMed now, so expect it to land soon :)
<natefinch> rogpeppe: I liked Andrew's suggestion of generalizing the error handling, but I was going to do it via a defer in each Run command... like this:
<natefinch> func (c *AddMachineCommand) Run(ctx *cmd.Context) (err error) {
<natefinch> 	defer func() { err = c.HandleError(err, ctx.Stderr) }()
<natefinch> rogpeppe: wrapping keeps it all in one spot, which is nice, though
<rogpeppe> natefinch: yeah, that's the idea
<rogpeppe> natefinch: so if we add new environment commands, they'll just get the benefit automatically
<rogpeppe> natefinch: and it doesn't need any new mechanisms in cmd, which is kinda nice
<natefinch> rogpeppe: yeah, I didn't want to add more to the Command interface
<rogpeppe> natefinch: yeah
<rogpeppe> natefinch: it's unnecessary
<natefinch> rogpeppe: ok, sounds good. I'll work on that now.
<rogpeppe> natefinch: cool, thanks
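natefinch's deferred-handler pattern above — a `defer` that rewrites the named return value on every exit path — can be sketched generically; `handleError` here is a stand-in for the `c.HandleError` being discussed, not the real juju `cmd` API:

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"os"
)

// handleError is a hypothetical stand-in for c.HandleError: it
// reports the error to the given writer and annotates it.
func handleError(err error, w io.Writer) error {
	if err == nil {
		return nil
	}
	fmt.Fprintf(w, "ERROR %v\n", err)
	return fmt.Errorf("command failed: %v", err)
}

// run mimics a Command.Run method: the deferred closure rewrites the
// named return value err on the way out, so every return statement
// (and any panic-free early exit) gets the same handling.
func run() (err error) {
	defer func() { err = handleError(err, os.Stderr) }()
	return errors.New("machine quota exceeded")
}

func main() {
	fmt.Println(run())
}
```

Wrapping the whole `Command` (as rogpeppe suggests) keeps this in one place instead of repeating the `defer` in every `Run`.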
 * rogpeppe has reached eod
<rogpeppe> g'night all
#juju-dev 2013-08-08
<marcoceppi> is -r not implemented for relation-set yet with juju-core?
<bigjools> would people please stop adding ~juju to more freaking teams
 * bigjools adds another procmail entry line 
<jtv> wallyworld_: I'm trying to add some debug logging to imagemetadata, to help track down failures to find matches.
<wallyworld_> you poor bastard
<jtv> In the outer getImageIdsPath() loop, there's a condition: can't load package: package launchpad.net/juju-core: no Go source files in /home/jtv/go/src/launchpad.net/juju-core
<jtv> Ahem, not that one.
<jtv> Ignore that one; old buffer contents.  It's a Go error that means "you put an option on the command line in the wrong place."
<jtv> The code I mean is:
<jtv> if metadata.DataType != imageIds {
<wallyworld_> imageIds is "image-ids" i think
<jtv> Is that a search (i.e. true for at most, and ideally exactly, one iteration)?  Or a filter (i.e. true for any number of iterations)?
<jtv> Yes, it is.
<jtv> ISTM it's meant as a search.
<wallyworld_> the simplestreams contains data belonging to different data types
<wallyworld_> it is meant to iterate over them and only look at image-ids data types
<jtv> FWIW it's usually a good idea to extract these search loops, so that you get explicit "not found" conditions instead of just arriving at the end of the big loop without success.
<jtv> So it's a filter, not a search?
<wallyworld_> yes, i guess you could say that
<jtv> Thanks.  I'll apply loop fission.
<wallyworld_> it's a filter built into the loop
<jtv> The code will be easier to follow, but also have explicitly recognizable, reportable "nothing found" conditions.
<wallyworld_> the not found condition hasn't been important
<jtv> I find these things usually improve a lot with loop fission.
<wallyworld_> maybe it is now
<jtv> It's bloody well important to me!
<jtv> For debugging.
<jtv> Just about the only error condition is "sorry, no luck."  A nightmare to debug.
<wallyworld_> the idea is that there can be many metadata files and if one doesn't suit, it just looks at the next one
<wallyworld_> no need to bother the user with such trivialities
<wallyworld_> but for debugging, i see your point
<jtv> And I yours, w.r.t. the user.
<jtv> Well, except for cases maybe where "not found" errors are quietly skipped.  :)
<wallyworld_> that's desired behaviour sometimes :-)
<jtv> I can see that -- but I think even non-dev users need to be able to get at debug output sometimes.
<wallyworld_> agreed
<wallyworld_> i just never added any cause it sorta just worked
<wallyworld_> so i didn't have an itch to scratch
 * jtv scratches
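jtv's point — extracting the filter so "nothing found" becomes an explicit, reportable condition instead of silently falling off the end of the big loop — can be sketched like this (the `metadata` type and field names are invented; the real simplestreams code differs):

```go
package main

import (
	"errors"
	"fmt"
)

type metadata struct {
	DataType string
	Path     string
}

// filterImageIds is the extracted filter: it returns only the
// "image-ids" entries, plus an explicit error when nothing matched,
// which is exactly the condition that was hard to debug before.
func filterImageIds(items []metadata) ([]metadata, error) {
	var matched []metadata
	for _, m := range items {
		if m.DataType != "image-ids" {
			continue
		}
		matched = append(matched, m)
	}
	if len(matched) == 0 {
		return nil, errors.New("no image-ids metadata found")
	}
	return matched, nil
}

func main() {
	items := []metadata{
		{DataType: "tools", Path: "a"},
		{DataType: "image-ids", Path: "b"},
	}
	got, err := filterImageIds(items)
	fmt.Println(len(got), err)
	_, err = filterImageIds(nil)
	fmt.Println(err)
}
```

The caller can still choose to skip the "not found" error quietly, but now it can also log it for debugging.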
<rogpeppe> mornin' all
<axw> rogpeppe: morning
<rogpeppe> axw: hiya
<rogpeppe> axw: simplifying the patterns is a great idea, BTW. that had also occurred to me as a possibility
<axw> rogpeppe: cool
<rogpeppe> axw: we can easily expand the syntax later if people have the need
<axw> okey dokey
<axw> rogpeppe: LGTM stands then? I wanted to give you a chance to respond before I went ahead and merged
<rogpeppe> axw: i'm just having a last look through it
<axw> ok
<axw> fwereade: thanks for the review; no additional feedback needed, if you're happy with the latest changes
<fwereade> axw, it looked great yesterday and you're more than competent to make the changes I suggested ;)
<axw> heh thanks :)
<rogpeppe> axw: shouldn't validatePattern allow a slash ?
<axw> validatePattern is called on either side of the slash
<axw> rogpeppe: see the strings.Split in newUnitMatcher
<rogpeppe> axw: ah i'd missed that, yes.
<rogpeppe> axw: reviewed
<axw> rogpeppe: ta
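The pattern handling axw describes — splitting the unit pattern on "/" and validating each side separately — can be sketched like this (the function names mirror the conversation, but the bodies are invented, not the code that landed):

```go
package main

import (
	"fmt"
	"strings"
)

// validatePattern checks one side of a unit pattern; a slash is not
// allowed here because the pattern has already been split on "/".
func validatePattern(p string) bool {
	return p != "" && !strings.Contains(p, "/")
}

// newUnitMatcher splits "service/unit" patterns and runs
// validatePattern on either side of the slash, as described above.
func newUnitMatcher(pattern string) error {
	parts := strings.Split(pattern, "/")
	if len(parts) > 2 {
		return fmt.Errorf("pattern %q contains too many '/' characters", pattern)
	}
	for _, p := range parts {
		if !validatePattern(p) {
			return fmt.Errorf("pattern %q is invalid", pattern)
		}
	}
	return nil
}

func main() {
	fmt.Println(newUnitMatcher("wordpress/0"))
	fmt.Println(newUnitMatcher("wordpress/0/1"))
}
```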
<thumper> axw: are you around?
<axw> thumper: I am
<thumper> I'm thinking about what to get you to look at
<thumper> I'll have a chat with jam and fwereade and get back to you :)
<axw> thumper: okey dokey, thanks
<thumper> I am tempted to leave you going through all the high bugs thinking that at this rate, we'll be all done real soon now :)
<thumper> but that is a bit mean
<davecheney> thumper: et al, is there a bug for the juju-plugins fuckup ?
<thumper> davecheney: I don't think so, jcastro was pinged by people with private clouds going "huh? fubared"
<thumper> davecheney: feel free to file one
<davecheney> kk
<davecheney> https://docs.google.com/a/canonical.com/document/d/1XPzmRAM3W7oYYmCGVYB5nR1DZyB16t8qPF68xxmWZho/edit#
<davecheney> release notes for 1.13.1
<davecheney> jcastro: did anyone log an issue for the juju-metadata snafu ?
<davecheney> if not, no worries
 * davecheney waves to anonymous frog
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: dimitern | Bugs: 4 Critical, 82 High - https://bugs.launchpad.net/juju-core/
<mthaddon> hi folks, we're having a problem with CPU constraints and juju-core 1.13.0-1~1594. --constraints "cpu-cores=1 mem=4G arch=amd64" gets 4 core machine
<mthaddon> is this a known issue? couldn't see anything in the bugs list on LP
<axw> oops
<axw> fwereade: your surprise was well warranted ;)
<axw> I missed a test
<axw> (wordpress endpoints)
<dimitern> mthaddon: the constraints are the minimum values you want, not the maximum
<mthaddon> dimitern: is there some way of specifying maximum values as well?
<dimitern> mthaddon: perhaps there are no 1 core x64 machines with 4G ram
<dimitern> mthaddon: not that I know of
<dimitern> mthaddon: even py-juju I think didn't support that
<mthaddon> in pyjuju you could specify an instance-type which would achieve what we're trying to do here
<dimitern> mthaddon: in juju-core, the algorithm is to satisfy your given constraints to the best possible, even exceeding them, if not possible to meet them all
<mthaddon> and in this case there very much is an instance type defined that matches the constraints we want exactly
<dimitern> mthaddon: yeah, that's being discussed and should be coming soon
<mthaddon> dimitern: any idea of what "soon" means here?
<dimitern> mthaddon: not with certainty, but most likely a couple of weeks or so after the IoM sprint
<dimitern> which ends this week
<mthaddon> dimitern: if we can show that there is an instance satisfying the constraints (cpu-cores=1 mem=4G arch=amd64) and nothing more that's available, would we be more likely to get that bug fixed or better to wait for the ability to specify an instance-type? this is a critical issue for us as to whether we can deploy on juju-core or need to stick with pyjuju
<dimitern> mthaddon: try specifying them incrementally, don't overspecify - first try ram, then cores or cpu-power, at last arch
<rogpeppe> mthaddon: what provider are you using?
<mthaddon> dimitern: if I can boot an instance with the exact constraints outside of juju, but then juju-core can't surely that'd be a good enough test of a bug?
<mthaddon> rogpeppe: openstack
<rogpeppe> mthaddon: hmm, it's possible that the simplestreams data is erroneous
<dimitern> mthaddon: since juju starts filtering the instance by ram, and 4G defined as 1024*1024*1024, you might want to try specifying something a bit lower
<dimitern> mthaddon: but by all means, if you want, file a bug as well, describing what you expect as a constraints selection behavior
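The semantics dimitern describes — constraints are minimums a flavor must meet or exceed, not exact requests — can be sketched as a matcher (the `flavor` data and field names are invented; this is not juju-core's actual selection code):

```go
package main

import "fmt"

type flavor struct {
	Name     string
	CpuCores int
	MemMB    int
}

type constraints struct {
	CpuCores int // minimum cores
	MemMB    int // minimum RAM in MB
}

// satisfies reports whether f meets or exceeds every constraint;
// because constraints are minimums, a 4-core flavor satisfies
// cpu-cores=1 even when a 1-core flavor also exists — which is why
// selection order among the satisfying flavors matters.
func satisfies(f flavor, c constraints) bool {
	return f.CpuCores >= c.CpuCores && f.MemMB >= c.MemMB
}

func main() {
	c := constraints{CpuCores: 1, MemMB: 4096}
	for _, f := range []flavor{
		{"m1.medium", 1, 4096},
		{"m1.xlarge", 4, 8192},
		{"m1.tiny", 1, 512},
	} {
		fmt.Println(f.Name, satisfies(f, c))
	}
}
```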
<mthaddon> dimitern: we have plenty of flavors that should match - http://paste.ubuntu.com/5961886/
<mthaddon> rogpeppe: is that something we'd need to fix? how would we verify that? I'm not familiar enough with simplestreams yet to know how this could affect this issue
<dimitern> mthaddon: let me check the exact matching
<rogpeppe> mthaddon: i'm looking into the code at the moment
<mthaddon> cool, thx guys
<rogpeppe> mthaddon: hmm, it would be interesting to have some debugging prints in instances.getMatchingInstanceTypes
<dimitern> rogpeppe: or Tracef
<rogpeppe> dimitern: that's what i mean
<frankban> rogpeppe: hey, re your suggestions to use params.ServiceUpdate as the only argument for Client.ServiceUpdate. I have no problem in doing that, but that would be the only client call like that. E.g. ServiceDeploy, which takes almost the same number of arguments, does not use params in its signature.
<rogpeppe> frankban: yeah, ServiceDeploy is pushing the boundaries
<rogpeppe> frankban: but i think ServiceUpdate is slightly different, as all the parameters are optional
<frankban> rogpeppe: good point, I'll make the change
<rogpeppe> frankban: thanks
<frankban> rogpeppe: should I use *params.ServiceUpdate?
<rogpeppe> frankban: that's a good question. on balance, i think no. others' opinions may vary.
<frankban> rogpeppe: I'll use the value then, thank you
<rogpeppe> frankban: i might use a pointer if it was useful to be able to omit all parameters
<frankban> rogpeppe: it is not, the service name is not optional
<rogpeppe> frankban: yeah
<rogpeppe> frankban: and it wouldn't be that useful anyway
<frankban> yeah
<mthaddon> is this juju-core's way of telling me there's no environment booted? http://paste.ubuntu.com/5961915/
<rogpeppe> mthaddon: lol
<rogpeppe> mthaddon: it's entirely possible
 * mthaddon files a bug about that
<rogpeppe> mthaddon: there may already be one
<mthaddon> k, will check
<mthaddon> https://bugs.launchpad.net/juju-core/+bug/1028053
<_mup_> Bug #1028053: running juju status on a non bootstrapped environment gives confusing results <juju-core:Triaged by danilo> <https://launchpad.net/bugs/1028053>
<dimitern> mthaddon: this one is being worked on I think
 * mthaddon nods - obviously not very high priority, but good to have logged as a bug
<axw> INFO juju.environs.sync sync.go:166 download 2882kB, uploading download 2882kB, uploading
<axw> love it
<frankban> dimitern, rogpeppe: re-proposed the branch with the requested changes. I am going to approve it unless you want to take another look
<rogpeppe> frankban: i'll just take a very quick look
<frankban> rogpeppe: cool thanks
<rogpeppe> frankban: done
<frankban> rogpeppe: great, proceeding
<frankban> rogpeppe: merged, and filed bug 1210076
<_mup_> Bug #1210076: Expose minUnits in the CLI <juju-core:New> <https://launchpad.net/bugs/1210076>
<jtv> wallyworld_: you wouldn't be available to review my logging branch by any chance?  https://codereview.appspot.com/12655043
<mthaddon> https://bugs.launchpad.net/juju-core/+bug/1210086
<_mup_> Bug #1210086: CPU instance constraints not working as expected <canonical-webops> <juju-core:New> <https://launchpad.net/bugs/1210086>
<dimitern> rogpeppe, natefinch: https://codereview.appspot.com/12658043 UniterAPI
<dimitern> sorry, left out a piece of debugging code, reproposing without it now
<rogpeppe> dimitern: looking
<rogpeppe> dimitern: reviewed
<dimitern> rogpeppe: cheers
<dimitern> rogpeppe: strictly speaking, these tests are not the same and even if some logic is the same (setup/teardown/asserts), the thing that's being tested is different
<dimitern> natefinch: ping
<dimitern> thumper: hey, I need a review on a small branch if you have 5m? https://codereview.appspot.com/12658043/
 * thumper looks
<jtv> And anyone free to review this one? https://codereview.appspot.com/12655043
<rogpeppe> jtv: looking
<dimitern> jtv: me too
<rogpeppe> jtv: i was thinking earlier that something like this might be a good idea
<jtv> Thanks guys
<jtv> Well, Roger, here you go.  :)  I'm sure you'll be able to make some good suggestions then.
<dimitern> thumper: thanks!
<sidnei> noodles775: feel like taking a look at mthaddon's bug above ^?
 * noodles775 looks
<noodles775> Sure - sounds interesting :)
<dimitern> jtv: reviewed
<jtv> Thanks dimitern
<rogpeppe> jtv: reviewed
<sidnei> mthaddon: can you paste the full list of instance types?
<sidnei> mthaddon: also the one that got selected
<mthaddon> sidnei: I don't have the actual one that got selected, would need to confirm with thedac if he has that in history - the full list of instance types coming up (it's massive)
<sidnei> mthaddon: ok. i think you said it selected the 4cores one instead of a 1core one
<mthaddon> yep
<dimitern> rogpeppe: standup?
<sidnei> rogpeppe: https://bugs.launchpad.net/juju-core/+bug/1210086/comments/1 makes sense?
<_mup_> Bug #1210086: CPU instance constraints not working as expected <canonical-webops> <juju-core:New> <https://launchpad.net/bugs/1210086>
<rogpeppe> sidnei: yeah, that looks plausible
<rogpeppe> sidnei: it should probably try to minimise all constraint attributes in the absence of a monetary cost
<rogpeppe> rvba: you have a review: https://codereview.appspot.com/12251044/
<sidnei> rogpeppe: what do you suggest, changing byCost to take the other attributes into account or changing the openstack environ to use a combination of ram and cpu cores?
<rogpeppe> sidnei: just having a look
<rogpeppe> sidnei: i think it would probably be better to change byCost
<rogpeppe> sidnei: that way providers don't have to worry about fitting a multidimensional value into a scalar
<sidnei> rogpeppe: and change openstack to have nil as default cost instead of ram, makes sense?
<rogpeppe> sidnei: yes, i think so
<sidnei> ack, on it.
<rogpeppe> sidnei: we will need to decide on precedence
<rogpeppe> sidnei: i *think* we should probably weigh ram higher than cores
<rogpeppe> sidnei: so, in order of precedence (each only considered if the previous values are equal): cost, ram, cores
<sidnei> sounds good to me
<rogpeppe> sidnei: cool
<sidnei> rogpeppe: to double check, lowest cost but highest ram and highest cores?
<rogpeppe> sidnei: no, i think lowest of all of them
<rogpeppe> sidnei: we want to pick the slimmest machine that fits the constraints, i think
<sidnei> rogpeppe: i'd think if two machines have the same cost and one has more ram i'd pick the one with more ram, but i could do that by tweaking the constraints anyway, so fine.
<rogpeppe> sidnei: yeah
<rogpeppe> sidnei: when monetary cost is implemented, it will tend to pick least ram/cores, so this is just extending that heuristic
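The precedence agreed above — cost first, then RAM, then cores, each considered only on a tie, picking the slimmest machine that fits — is a lexicographic sort. A sketch with invented fields, using `sort.Slice` (Go 1.8+; the 2013 codebase would implement `sort.Interface` instead):

```go
package main

import (
	"fmt"
	"sort"
)

type instanceType struct {
	Name  string
	Cost  int // monetary cost, when known
	RamMB int
	Cores int
}

// byPreference orders candidates cheapest-first, breaking ties by
// least RAM, then fewest cores, so the slimmest machine that fits
// the constraints sorts to the front.
func byPreference(its []instanceType) {
	sort.Slice(its, func(i, j int) bool {
		a, b := its[i], its[j]
		if a.Cost != b.Cost {
			return a.Cost < b.Cost
		}
		if a.RamMB != b.RamMB {
			return a.RamMB < b.RamMB
		}
		return a.Cores < b.Cores
	})
}

func main() {
	its := []instanceType{
		{"large", 10, 8192, 4},
		{"medium", 5, 4096, 4},
		{"small", 5, 4096, 1},
	}
	byPreference(its)
	fmt.Println(its[0].Name)
}
```

All candidates are assumed to already satisfy the minimum constraints; the sort only decides which satisfying flavor wins.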
<dimitern> rogpeppe, mgz: next CL https://codereview.appspot.com/12661043/ - uniter client-side API
<rogpeppe> dimitern: LGTM
<dimitern> rogpeppe: thanks
<sidnei> rogpeppe: would cpu-power come before or after cpu-cost?
<rogpeppe> sidnei: cpu-cost?
<sidnei> rogpeppe: sorry, cpu-cores
<rogpeppe> sidnei: your guess is as good as mine. perhaps before, as fast cpus are more expensive in general?
<sidnei> k
<wallyworld_> rogpeppe: hi there. have you had a chance to look at my latest changes for the control bucket stuff?
<rogpeppe> wallyworld_: i have; i really think this should probably go on hold until we can have a proper discussion about it along with fwereade
<wallyworld_> :-(
<rogpeppe> wallyworld_: i know, but i think this taps into something a bit deeper that it's worth sorting out earlier rather than later
<wallyworld_> ok then
<rogpeppe> wallyworld_: i'm really sorry - i've been thinking a lot about the issue over the last couple of days
<wallyworld_> i'm following the design document
<rogpeppe> wallyworld_: can you point me to the design doc again please - i always lose Drive links
 * wallyworld_ looks it up
<wallyworld_> https://docs.google.com/a/canonical.com/document/d/1ncsNzDHauV_9Fwsm59GjX0-O-_g48UYnObefwtBStG0/edit
<wallyworld_> william was also ok with the approach i started with - get rid of control bucket, admin secret,and also the keys (in the followup branches)
 * rogpeppe wishes google drive remembered what documents i'd viewed
<wallyworld_> it can do
<wallyworld_> goto docs.google.com
<wallyworld_> and your recently viewed ones are there
<wallyworld_> plus you can put them in folders etc
<rogpeppe> wallyworld_: i've viewed that doc recently and it isn't there
 * wallyworld_ shrugs
<dimitern> mgz: ping
<rogpeppe> hmm, it is now
<rogpeppe> wallyworld_: anyway
<rogpeppe> wallyworld_: the design doc says this:
<rogpeppe> $ juju bootstrap myenv1 --template experimental
<rogpeppe> # generate admin-secret, CA cert/key, and control-bucket
<rogpeppe> wallyworld_: the problem i have is that your proposal generates these values even when you're not bootstrapping
<rogpeppe> wallyworld_: for instance, if you haven't bootstrapped an environment, and you call juju status, it'll generate these values, which i think is wrong
<rogpeppe> wallyworld_: i think your proposal is nice and non-disruptive to most juju code, but i think something more is needed
<mgz> dimitern: hey
<wallyworld_> rogpeppe: well, it will generate the attrs whenever NewFromName is called ie whenever a new env is required to be constructed
<rogpeppe> wallyworld_: exactly
<wallyworld_> so whats the issue there?
<wallyworld_> a new env requires certains attrs
<wallyworld_> some come from yaml, others are generated
<rogpeppe> wallyworld_: i think the provider should have the chance to verify that the new bucket can actually be created
<wallyworld_> then next time an env is used, the attrs are there
<dimitern> mgz: https://codereview.appspot.com/12661043/ .
<dimitern> ?
<rogpeppe> wallyworld_: and there might be other attributes that we want to put in there that are not random, but obtained by querying the provider
<wallyworld_> rogpeppe: status won't work without being bootstrapped. the next bootstrap can verify the control bucket etc. but if we want the new attrs done on bootstrap, that will likely mean major code changes
<rogpeppe> wallyworld_: i don't think the changes are too bad - i've already got an experimental branch that makes ec2 work in that kind of way
<rogpeppe> wallyworld_: BTW, i've just realised your changes will break sync-tools
<wallyworld_> i guess it depends on what is considered the lifecycle of an env and what is allowed to create one as opposed to just using one
<rogpeppe> wallyworld_: yeah, absolutely
<rogpeppe> wallyworld_: that's the crux of the issue
<wallyworld_> sync-tools breakage is bad
<rogpeppe> wallyworld_: here's my current thoughts on how we might change the EnvironProvider interface: http://paste.ubuntu.com/5962612/
<mgz> dimitern: lgtm
<rogpeppe> wallyworld_: both the bootstrap and sync-tools commands would call Prepare rather than Open, and save any extra attributes in .environments
<dimitern> mgz: thanks!
<wallyworld_> prepare vs open is good - separates out the semantics
<wallyworld_> open would fail if prepare weren't called first
<rogpeppe> wallyworld_: yeah
<rogpeppe> wallyworld_: when calling Open we'd pass it all the config attributes including local ones
<rogpeppe> wallyworld_: actually that needs to be true for Prepare too
<rogpeppe> wallyworld_: and we're delete the local attributes when destroy-environment is called
<wallyworld_> or have open merge them and complain if they aren't there
<rogpeppe> wallyworld_: i think it can be done outside the provider
<wallyworld_> NewFromName and the associated Read functions would need looking at
<rogpeppe> wallyworld_: yes
<wallyworld_> a bit of work there
<rogpeppe> wallyworld_: not too much i think actually
<rogpeppe> wallyworld_: we don't have to change the semantics of NewFromName
<rogpeppe> wallyworld_: we'd add something like PrepareFromName (bad name!)
<rogpeppe> wallyworld_: very little code is actually directly involved in bootstrapping
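The pastebin links above have not survived, but the Prepare/Open split being discussed can be sketched as an interface. The method names and config type below are assumptions reconstructed from the conversation, not the `EnvironProvider` API that eventually landed:

```go
package main

import (
	"errors"
	"fmt"
)

// Config stands in for an environment's config attributes.
type Config map[string]string

// EnvironProvider sketches the split discussed above: Prepare is
// called once (by bootstrap or sync-tools) to generate attributes
// such as the control bucket; Open only consumes a config that has
// already been prepared.
type EnvironProvider interface {
	Prepare(cfg Config) (Config, error)
	Open(cfg Config) error
}

type fakeProvider struct{}

func (fakeProvider) Prepare(cfg Config) (Config, error) {
	// Generated attributes would be saved to local state here, so a
	// later bootstrap reuses the same bucket sync-tools created.
	cfg["control-bucket"] = "juju-generated-bucket"
	return cfg, nil
}

func (fakeProvider) Open(cfg Config) error {
	// Open fails if Prepare was never called, per the discussion.
	if cfg["control-bucket"] == "" {
		return errors.New("environment is not prepared")
	}
	return nil
}

func main() {
	var p EnvironProvider = fakeProvider{}
	cfg := Config{}
	fmt.Println(p.Open(cfg))
	cfg, _ = p.Prepare(cfg)
	fmt.Println(p.Open(cfg))
}
```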
<wallyworld_> i'll think about it tomorrow. quite late here now. i actually took a sick day today. i got a broken cheek bone from being king hit from behind. had to get xrays and go to the police etc etc
<wallyworld_> it happened during a soccer game of all things
<rogpeppe> wallyworld_: woah
<rogpeppe> wallyworld_: you poor bugger
<wallyworld_> it's not too bad. i can't chew without a fair bit of pain. makes eating kind hard
<rogpeppe> wallyworld_: BTW here are the changes to environs/ec2 required for the above interface. http://paste.ubuntu.com/5962643/
<rogpeppe> wallyworld_: soup!
<wallyworld_> thanks, i'll look in detail and see what i can add to my branch
 * wallyworld_ hopes the ocr can look at his other simplestreams branch
 * wallyworld_ goes to take some more painkillers and heads to bed
<rogpeppe> wallyworld_: mend soon!
<rogpeppe> wallyworld_: g'night
 * rogpeppe goes for some lunch
<wallyworld_> see ya, thanks for discussion
<rogpeppe>  wallyworld_: np, sorry for pushing back in the review
<rogpeppe> i'm planning to take the day off tomorrow, BTW
<rogpeppe> dimitern, nate-finch, wallyworld_: ^
<dimitern> rogpeppe: ok
<nate-finch> rogpeppe: cool, have fun with the long weekend
<sidnei> rogpeppe: https://codereview.appspot.com/12569044
<sidnei> dimitern: ^
<dimitern> sidnei: looking
<dimitern> sidnei: reviewed
<sidnei> can i get a second eye on https://codereview.appspot.com/12569044/ ? im getting someone to copy over the flavors from prodstack to canonistack so i can do some manual testing soonish.
<dimitern> rogpeppe: ^^ ?
<rogpeppe> dimitern: just back from lunch
<rogpeppe> sidnei: lookin
<rogpeppe> g
<sidnei> dimitern: i need to upload-tools and bootstrap with the newest to test right?
<dimitern> sidnei: yeah, but before that, make sure you do "go install ." inside cmd/juju and cmd/jujud
<sidnei> dimitern: and what's the syntax for sync-tools?
<dimitern> sidnei: juju sync-tools --help ? :)
<dimitern> sidnei: you don't need sync-tools if you're doing --upload-tools
<sidnei> dimitern: ah, bootstrap --upload-tools, i was looking for an upload-tools command
<dimitern> sidnei: yeah - when in doubt, use juju help commands and --help on a command - it's really good
<rogpeppe> sidnei: reviewed
<mthaddon> sidnei: do you need the flavors on canonistack to be able to test?
<mthaddon> (if so I can try and get that pushed through)
<sidnei> mthaddon: i think jjo is on it, let me check again
<sidnei> i see a couple dozen are already added, i guess the script is running
<dimitern> sidnei: if, while testing, you see the issue mthaddon reported earlier is resolved, can you please link your MP to the bug as well?
<sidnei> already did
<dimitern> sidnei: great, thanks!
<dimitern> rogpeppe: you have a few branches ready for landing
<sidnei> rogpeppe: if cpu power is only set in one of the instances then it fallsback to cpu cores, right?
<dimitern> sidnei: cpu power and cpu cores are not linked in any way, and only ec2 has cpu power
<rogpeppe> sidnei: i think that if cpu power is only set in one of the instances, the one with no cpu power set will win
<rogpeppe> sidnei: because 0 is less than a positive cpu power
<rogpeppe> sidnei: or...
<rogpeppe> sidnei: are you asking about desired behaviour?
<rogpeppe> sidnei: if so, yes
<sidnei> rogpeppe: per your suggestion, if either side is nil we skip the cpu power check
<rogpeppe> sidnei: yeah
<sidnei> rogpeppe: ok, just double checking
<sidnei> rogpeppe: done && pushed, doing manual verification.
<sidnei> manual verification worked as expected, so landing it.
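[editor's note: a minimal sketch of the constraint-matching rule agreed above — if either side leaves cpu power unset, skip that check. `instanceType`, `satisfies` and the field names are hypothetical, not the actual juju-core constraints code.]

```go
package main

import "fmt"

// instanceType is a stand-in for a provider's instance description.
// CpuPower is a pointer so "unset" (nil) is distinguishable from zero,
// which matters because e.g. openstack flavors report no cpu power.
type instanceType struct {
	Name     string
	CpuCores uint64
	CpuPower *uint64
}

func u64(v uint64) *uint64 { return &v }

// satisfies reports whether itype meets the requested constraints.
// The cpu power comparison only happens when BOTH sides set it;
// the cores check always applies.
func satisfies(itype instanceType, wantPower *uint64, wantCores uint64) bool {
	if wantPower != nil && itype.CpuPower != nil && *itype.CpuPower < *wantPower {
		return false
	}
	return itype.CpuCores >= wantCores
}

func main() {
	small := instanceType{"m1.small", 1, u64(100)}
	flavor := instanceType{"osFlavor", 2, nil} // no cpu power reported
	fmt.Println(satisfies(small, u64(200), 1))  // false: power too low
	fmt.Println(satisfies(flavor, u64(200), 1)) // true: power check skipped
}
```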
<noodles775> dimitern: thanks for the trivial... I assume that means it doesn't need a second LGTM, so I did the s/'/" and submitted - you should be able to just approve the MP.
<noodles775> dimitern: btw, there were 4 bugs, so I chose one and marked the other 3 dups.
<dimitern> noodles775: great, will do now
<dimitern> noodles775: approved
<sidnei> dimitern: what's the process for creating a tarball? i thought it was in the makefile but apparently not
<dimitern> sidnei: mgz might answer that better
<sidnei> dimitern: alternatively, i guess i just need to tar up $GOPATH/bin?
<dimitern> sidnei: there's a script that generates the tarball + all dependencies, but I don't know where it is and the process is not documented clearly
<dimitern> sidnei: no, you'll need the sources as well - wget a tarball and see what's inside
<teknico> guys, where do I find the procedure to upgrade to go 1.1?
<dimitern> sidnei: why do you need a tarball anyway? isn't --upload-tools enough?
<dimitern> teknico: sudo add-apt-repository ppa:james-page/golang-backports && sudo apt-get update && sudo apt-get install golang
<sidnei> dimitern: i need to get a trunk client into a canonistack instance to run tests in jenkins.
<teknico> dimitern: thanks :-)
<dimitern> teknico: then remove all binaries in $GOPATH/linux_*/ and rebuild & retest everything
<dimitern> teknico: oops - I meant $GOPATH/pkg/linux_*/ :)
<dimitern> sidnei: I see
<teknico> dimitern: yep, and bin/ too for good measure
<dimitern> teknico: yeah
<dimitern> teknico: won't hurt
<mgz> sidnei: what are you trying to do exactly?
<sidnei> mgz: getting a binary built and shipped to a remote instance to run deployment tests from the cloud
<mgz> just make a tarball? running the thingy under scripts/ does that
<sidnei> mgz: although i could just as easily build the binary over there
<mgz> you can probably just use --upload-tools?
<mgz> or, if it's not for testing trunk, just use sync-tools
<mgz> then you can pull the files out of your control-bucket
<mgz> the tarball is nice because you can be exactly sure about what has been built and reproduce it
<mgz> if you don't need that though (eg, for a one off test), then the existing dev methods are probably fine
<sidnei> mgz: yeah, i'll go for building from tip directly.
<mthaddon> hi guys, who controls when PPAs get built for https://launchpad.net/~juju/+archive/devel/+packages ?
<mthaddon> er, when packages get built for that PPA, I mean
<sidnei> mthaddon: seems to be https://code.launchpad.net/~dave-cheney/+recipe/juju-core
<sidnei> seems they are built on https://code.launchpad.net/~juju/+archive/experimental by default
<mthaddon> yeah, looks like it
<thedac> sidnei: great, which revno do we need to wait for?
<mthaddon> so can we just request a build using the button there?
<mthaddon> thedac: we need 1621
<mthaddon> (or later)
<sidnei> i can trigger a build it seems
<mthaddon> woohoo
<thedac> cool
<thedac> I'll port that to precise-cat as soon as it builds
<sidnei> thedac: https://code.launchpad.net/~juju/+archive/experimental/+builds?build_text=&build_state=all it's where it's at
<thedac> thanks
<sidnei> thedac: source package is built, since i guess that's all you need
<thedac> sidnei: thanks
<nate-finch> rogpeppe: for the record, the import cycle I was having mysteriously went away after I finished some rework from code review suggestions.  Weird, but I'll take it
<rogpeppe> nate-finch: it may have been to do with state object files, i suppose
<rogpeppe> s/state/stale/!
<nate-finch> rogpeppe: yeah, I was thinking that too
<sidnei> fwereade: got a minute?
<nate-finch> dimitern, rogpeppe: https://codereview.appspot.com/12546043/   mostly minor code review changes
<dimitern> natefinch: looking
<dimitern> nate-finch: reviewed
<nate-finch> dimitern: why are we renaming gocheck to gc?  I tend to try to use the original name unless there's a conflict
<dimitern> nate-finch: convention
<dimitern> nate-finch: because gocheck and juju-core/checkers are used a lot, we decided to alias them to gc and jc respectively
<dimitern> nate-finch: it helps to keep the code consistent
<nate-finch> dimitern: fair enough
<dimitern> nate-finch: originally, gocheck was imported as . but that's bad, just like from x import * in python, so we decided to avoid that and be explicit
<rogpeppe> nate-finch: because "gc" is better than nothing at all, but still reasonably terse
<dimitern> nate-finch: not all of the codebase is converted to the new style: 1) separating stdlib, third-party and juju-core imports by blanks; 2) using gc and jc for gocheck and checkers; 3) prefer longer names if possible, rather than 1-2 letter vars
<dimitern> nate-finch: exceptions to 3) are (most of the time) method receivers, like func (s *someSuite), which are usually ok
<nate-finch> dimitern: yeah import  as . is really bad. Glad we're moving away from it.
<dimitern> but it's better to have machine, err := state.Machine(id) than m, e := state.Machine(id)
<nate-finch> dimitern:  yep
<dimitern> nate-finch: most of these are not really "set in stone", but like highly recommended "best practices"
 * rogpeppe finds "m" just fine for a local variable referring to a machine
<rogpeppe> nate-finch: reviewed
<dimitern> rogpeppe: but "machine" is even better :)
<rogpeppe> dimitern: personally, i don't think so
<rogpeppe> dimitern: i think extra verbosity can make code harder to read
<dimitern> rogpeppe: aha.. ok :)
<dimitern> rogpeppe: if we're talking about takeThisVeryLongAndMeaninglessIdentifierHere, I agree
<dimitern> rogpeppe: otherwise, for short and meaningful idents, like machine, or wordpressUnit, or assertStopsCleanly := func() .. I disagree
<rogpeppe> dimitern: russ cox said it best: the length of an identifier should be proportional to its information content
<dimitern> rogpeppe: yeah, and unless it's a loop counter, it'd better be a word at least
<nate-finch> rogpeppe: I agree with you and russ. I use single letters most of the time... sometimes up to 3 letters if it's a good abbreviation of the word (like ctx for context)
<rogpeppe> dimitern: if i see m := x.Machine(), it's a fair bet that the Machine method is returning a *Machine :-)
<dimitern> nate-finch: you should try to convince thumper then :)
<rogpeppe> nate-finch: this is not idiomatic Go territory :-)
<dimitern> rogpeppe: I'm glad it isn't :)
<nate-finch> rogpeppe: that's too bad. I like almost everything in idiomatic Go
 * rogpeppe finds the standard Go library very easy to read
<rogpeppe> nate-finch: yeah, me too; it's one of the things that makes me like the language as a whole.
<nate-finch> rogpeppe: yep. Simplicity and explicitness and readability
<dimitern> rogpeppe, nate-finch: well, don't forget there are other people with different views on what's considered "readable" :)
<nate-finch> dimitern: of course.
<rogpeppe> dimitern: i understand that, and i write my juju code accordingly these days
<nate-finch> dimitern: they're just wrong ;)
<dimitern> nate-finch: hehe, wrong or not - it's consensus that matters
<nate-finch> dimitern: yes, definitely. And that's actually one of the things I like about go format... I wouldn't have made all the same choices, but I like that there's no more debate about it.
<dimitern> a bit of compromise here and there, for the benefit even of people not that familiar with go itself
<nate-finch> dimitern: and I'm fine with whatever conventions are established in Juju
<dimitern> let's try to keep in mind that the project is open source and aims to be popular and attract external contributors
<dimitern> if a few extra chars make things easier to read, even for a novice go dev, it's a good thing
<dimitern> i kinda hate OSS projects where you have to read *pages* of rules on how to contribute, etc. because the project is so important
<dimitern> rogpeppe: btw so you'll be off tomorrow, and wallyworld as well
<dimitern> nate-finch, mgz: you'll be here tomorrow, right?
<nate-finch> dimitern: yep.
<rogpeppe> dimitern: i will be, yes
<nate-finch> dimitern: I haven't earned much vacation time in the last 8 work days :)
<dimitern> man, we're running low on workforce this week :)
<rogpeppe> dimitern: leaving very soon actuall
<rogpeppe> y
<dimitern> it's good the iom guys will be back next week
<nate-finch> rogpeppe, dimitern: you guys are on London  time, right?
<rogpeppe> nate-finch: i am
<dimitern> rogpeppe: yes, but I'm on CET
<rogpeppe> dimitern: dimitern is one hour earlier
<dimitern> later
<rogpeppe> dimitern: yeah. well, depends how you look at it
<rogpeppe> dimitern: you get up an hour earlier than me :-)
<dimitern> rogpeppe: i doubt it :)
<dimitern> rogpeppe: it's 7.42pm here now
<nate-finch> that's what really matters :)
<rogpeppe> dimitern: lol
<nate-finch> dimitern: makes for a late working day
 * rogpeppe is off now
<nate-finch> rogpeppe: Have a good weekend
<dimitern> rogpeppe: have a nice, long weekend then ;)
<rogpeppe> dimitern, nate-finch, mgz: g'night
<rogpeppe> dimitern: i will. going to the Lake District and up some cliffs, then down some caves to learn vertical rope technique.
<dimitern> man.. one more huge review to do and i'll be off as well
<nate-finch> rogpeppe: wow, awesome
<dimitern> rogpeppe: wow - abseiling?
<rogpeppe> dimitern: SRT
<rogpeppe> dimitern: going up and down the rope
<rogpeppe> right, gotta go!
<dimitern> rogpeppe: take care then, don't overdo it
<rogpeppe> :-)
<jrwren> has anyone attempted to write a vmware environment?
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1210328
<_mup_> Bug #1210328: cloudinit: switch apt-add-repository to use ppa:juju/stable <juju-core:New> <https://launchpad.net/bugs/1210328>
<davecheney> jamespage: thanks for getting this sorted for me/us/everyone
<davecheney> ok, going to hunt some breakfast then will cut 1.13.1
#juju-dev 2013-08-09
<davecheney> axw: ping
<axw> davecheney: ahoy
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1210328
<_mup_> Bug #1210328: cloudinit: switch apt-add-repository to use ppa:juju/stable <juju-core:New> <https://launchpad.net/bugs/1210328>
<davecheney> ^ once I cut 1.13.1
<davecheney> do you want to have a look at this ?
<davecheney> it's a one-line change in environs/cloudinit/something
<axw> sure
<davecheney> and a fixup for the test which is expecting the old string
<davecheney> then we can update all the docs to say 'get juju from ppa:juju/stable'
<davecheney> which also has the golang dependency and the mongodb dependency for their local provider
<davecheney> actually, let me try to tag 1.13.1 now
<axw> okey dokey
<bigjools> looks like Wally needs surgery ...
<davecheney> bloody hell
<axw> what's happened?
<davecheney> axw: i'll wait for this to land, The proposal to merge lp:~axwalk/juju-core/lp1027876-uniter-use-loggo into lp:juju-core has been updated.
<axw> davecheney: sounds good
<axw> thanks
<davecheney> axw: anything else you want to land ?
<davecheney> or I can review so you can ?
<axw> umm
<davecheney> there is no serious hurry on cutting 1.13.1
<axw> about to land another one: https://bugs.launchpad.net/juju-core/+bug/1167441
<axw> just addressing comments
<davecheney> kk
<bigjools> axw: not seen him in person yet but he got headbutted when playing football
<axw> bigjools: ooh :(
<axw> I used to play indoor soccer, it got pretty rough sometimes
<bigjools> and king hit (I had to look that up)
<axw> eh.. shit, not just jostling then
<bigjools> yeah I am trying to find out what he did to get that kind of whack
<axw> davecheney: do you just tag trunk, or is there a special branch that you pull changes into for releases?
<davecheney> axw: at the moment we're just tagging trunk
<davecheney> 1.12 is its own branch
<davecheney> i don't expect that will receive any updates
<axw> ok
<axw> there's another change here, https://bugs.launchpad.net/juju-core/+bug/1121914, I'll add to the release notes
<_mup_> Bug #1121914: destroy-environment does not request confirmation <cmdline> <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1121914>
<davecheney> axw: nice
 * davecheney goes to check leankit
 * axw didn't do anything in leankit
<axw> sorry
<davecheney> axw: s'ok
 * davecheney is not axw's boss
<axw> heh :)  no, but I want to do what's helpful. you use leankit to see what's in the release?
<axw> I thought you'd see which bugs are committed but not released
<davecheney> axw: i try to scrape up whatever I can
<davecheney> we used to use lp for planning what we were going to do
<davecheney> but it didn't give mramm the reportability he needed
<axw> ok
 * axw saw a task about bridging them
<axw> that would be handy
<axw> anyway, what's the process with leankit? when I start working on a bug, enter it in there?
<davecheney> axw: ideally there should be a card for what you're working on
<davecheney> if it's a bug, you can link it to LP
<davecheney> there is an option on the right-click menu
<davecheney> which makes them red and puts it on the top of the card
<axw> ok, ta
<jtv> Hi wallyworld!
<wallyworld> hi there
<jtv> Question: imagemetadata documentation says that the Stream on a constraint can be "", but not what that actually *means*.
<jtv> What does it mean?
<wallyworld> i think stream is release vs daily etc. it forms part of the product id that is generated
<wallyworld> so if stream is "", then it is not inserted into the product id
<wallyworld> well that's what i figured from reverse engineering the format
<jtv> Then how do we select the "released" stream?
<wallyworld> pass in ""
<jtv> Ah, so back to lack of documentation as the root problem.  :(
<wallyworld> as far as i recall, for ec2 etc, released product ids have ""
<wallyworld> there may be doco on the lp:simplestreams project, not sure
<jtv> Hah.
<wallyworld> i sorta just figured out what i needed to make stuff work for ec2 and openstack
<jtv> That was where I started.  :)
<wallyworld> :-)
<wallyworld> you should just need the glue code, and let the generic functionality take care of the rest
<jtv> Okay, I've been trying to get this to work with a "released" stream but it didn't work.
<jtv> I don't have any excuse for not finding out, because I can't pass on the job of guessing the meaning to the end-user.
<wallyworld> from memory, i think right now, the stream is just used to create the ids
<jtv> Product IDs, yes.
<wallyworld> so you need to ensure the ids that are being generated are in the json
<jtv> What confused me was that the released stream had "released" in a few places.
<wallyworld> which is why :azure broke stuff
<wallyworld> ah
<jtv> Yes...  I added some debug logging to help with these things, so as of today, breaking tests are a lot more helpful.
<wallyworld> hopefully doco will catch up to code :-)
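[editor's note: a sketch of how the stream appears to be folded into the simplestreams product id, per wallyworld's reverse-engineering above — empty stream means "released" and is simply omitted from the id. `productID` and the exact id layout are illustrative assumptions; the real logic lives in juju-core's environs/imagemetadata and lp:simplestreams.]

```go
package main

import "fmt"

// productID builds a simplestreams-style product id. A non-empty stream
// (e.g. "daily") is appended to the base namespace; the released stream
// is identified by the empty string and adds nothing.
func productID(stream, version, arch string) string {
	base := "com.ubuntu.cloud"
	if stream != "" {
		base += "." + stream // e.g. com.ubuntu.cloud.daily
	}
	return fmt.Sprintf("%s:server:%s:%s", base, version, arch)
}

func main() {
	fmt.Println(productID("", "12.04", "amd64"))      // released stream
	fmt.Println(productID("daily", "12.04", "amd64")) // daily stream
}
```

The generated ids must match what is in the json metadata exactly, which is why an unexpected stream value breaks lookup.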
<davecheney> axw: it's the 'external card id' on the advanced tab
<davecheney> just put the lp # there
<davecheney> without adornment
<axw> ta
<axw> ah crap
<axw> goose needs updating on the bot for my test to pass
<axw> might just leave this one till later
<axw> davecheney: do you know if the bot is using rog's godeps stuff?
<axw> actually, never mind, I'm thinking of goamz...
<jtv> wallyworld: ahhh, the documentation in imagemetadata actually says Stream may be "", or "released", "daily" etc.  I guess it shouldn't actually be "released" -- can I replace that with "Use empty string for released stream"?
<wallyworld> sure. it seems to be the right thing to do at the moment doesn't it
<axw> davecheney: the bot needs updating for #1167441. It's not important, so tag at will and I'll land it later
<_mup_> Bug #1167441: environs/providers must report instance state, like py-juju <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1167441>
<axw> jtv: what do you do when you need to get gwacl updated on the bot?
<jtv> axw: we ask Dimiter or John or one of the others.  :)
<axw> jtv: rats :) thanks
<jtv> Don't call them rats.  They might refuse to update the bot.  :-P
<axw> hehe
<davecheney> axw: understood
<davecheney> re the bot
<davecheney> i think we'll figure out how to fix the deps problem in BNE
<axw> davecheney: cool
<axw> davecheney: heard any updates on that btw?
<davecheney> axw: only that you should book flights
<axw> orly? I didn't get that message
<axw> ok
<davecheney> axw: ffs
<davecheney> https://docs.google.com/a/canonical.com/spreadsheet/ccc?key=0Aia4W3c4fbL-dEp1NHFsX2JWdzNoMFFUd2U5eXRnV2c
<axw> yeah I got the email from Antonio, but he said to wait for another email before booking
<bigjools> wallyworld: hey Rocky
<wallyworld> hi bigjools
<bigjools> wallyworld: afternoon
<davecheney> ok, goose and juju are tagged for 1.13.1
<davecheney> feel free to wreck the build
<axw> davecheney: so ppa:juju/stable has mongodb-server now?
<axw> I'm adding prereq verification into the local provider, and I need to tell people where to get mongodb from
<davecheney> axw: that is correct
<davecheney> axw: pls hold
<davecheney> https://launchpad.net/~juju/+archive/stable/+packages
<davecheney> expand the mongo ones
<davecheney> P and Q are listed
<davecheney> R and S ship with the right version of mongo
<axw> righto
<axw> ta
<axw> ugh, I suppose I'll have to detect the series then
<davecheney> axw: I think we already do that
<davecheney> from memory
<axw> davecheney: yeah it just complicates the code. no drama
<davecheney> although lots of environs/* has been changed since 13.04
 * davecheney goes to look
<davecheney> axw: worst case, you can always add the ppa
<davecheney> but there will be nothing in it that matches for R and above
<davecheney> so it will be a no op
<axw> yeah ok, I'll just add it in and say that it's only for P/Q
<axw> this is for an error message
<davecheney> axw: there is already a helper
<davecheney> in environs/cloudinit/cloudinit.go
<davecheney> NeedMongoPPA
<axw> cool
<davecheney> all you need to do is change the ppa name, and its fingerprint
<axw> thanks
<davecheney> oh, and fix the tests which are expecting the old string
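[editor's note: a sketch of the idea behind the NeedMongoPPA helper mentioned above — precise (P) and quantal (Q) need the PPA for a new-enough mongodb, while raring (R) and later ship the right version. `needMongoPPA` here is a hypothetical reimplementation; the real helper in environs/cloudinit may have a different signature.]

```go
package main

import "fmt"

// needMongoPPA reports whether the given Ubuntu series needs
// ppa:juju/stable added to get a mongodb juju can use.
func needMongoPPA(series string) bool {
	switch series {
	case "precise", "quantal":
		return true
	}
	// raring and later ship a suitable mongodb; adding the PPA
	// anyway is a no-op since it has no packages for them.
	return false
}

func main() {
	for _, s := range []string{"precise", "quantal", "raring", "saucy"} {
		fmt.Printf("%s: %v\n", s, needMongoPPA(s))
	}
}
```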
<jtv> Is the buildbot in trouble?
<davecheney> jtv: you mean it got the police chief's daughter pregnant ?
<davecheney> that sort of trouble ?
<jtv> Could be, could be.
<jtv> But I'm mostly thinking of weird spurious test failures such as problems creating temporary directories.
<jtv> File exists... don't tell me we just added randomized directories, but with an unseeded randomizer?  :)
<jtv> I'll paste some examples of failures:
<jtv> https://code.launchpad.net/~jtv/juju-core/maas-unexport-retry/+merge/179322/comments/404888
<jtv> https://code.launchpad.net/~jtv/juju-core/maas-unexport-retry/+merge/179322/comments/404891/+download
<jtv> Actually that's all I have.  :)
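[editor's note: a small illustration of jtv's "unseeded randomizer" hypothesis, not the actual gocheck code. In Go of that era, an unseeded math/rand source behaved as if seeded with 1, so every test process drew the same "random" sequence — hence two runs picking the same temp dir and hitting "file exists".]

```go
package main

import (
	"fmt"
	"math/rand"
)

// tempName draws a "random" directory name from a generator with a
// fixed seed, mimicking what an unseeded source effectively does.
func tempName(seed int64) string {
	r := rand.New(rand.NewSource(seed))
	return fmt.Sprintf("gocheck-%d", r.Intn(1<<20))
}

func main() {
	// Two processes with identically (un)seeded generators pick the
	// same name, so the second MkdirAll-style call collides.
	fmt.Println(tempName(1) == tempName(1)) // true: guaranteed collision
}
```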
<davecheney> sidnei: ping
<davecheney> sidnei: never mind, sorted it
<davecheney> is tom haddon in the house ?
<davecheney> juju destroy-environment is confused
<davecheney> http://paste.ubuntu.com/5965256/
<axw> :\
<axw> will have a look in a sec
<axw> ah, not my stuff being confused, just destroy-environment being generally confused
<davecheney> ffs
<davecheney>      ââmongodâââ11*[{mongod}]
<davecheney>      ââmongodâââ3*[{mongod}]
<davecheney> ^ what is wrong with this picture
<davecheney> this is a regression
<davecheney> this bug _was_ fixed
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1210407
<_mup_> Bug #1210407: environs/cloudinit: bootstrap machine has two mongodb proceses <juju-core:New> <https://launchpad.net/bugs/1210407>
<axw> davecheney: I'm just curious, why does it matter?
<davecheney> axw: mongo is a pig
<davecheney> 200 interrutps per second
<axw> even if it's not serving anyone?
<davecheney> yup
<axw> :|
<axw> ok
<davecheney> if you want to know why your laptop is burning your nads
<davecheney> check if that upstart job is active
<axw> heh, it has been getting quite warm
<davecheney> http://paste.ubuntu.com/5965291/
<davecheney> status is getting verbose these days
<davecheney> not to mention confusing
<axw> davecheney: https://codereview.appspot.com/12583044
<davecheney> axw: thanks
<axw> davecheney: the signing key hasn't changed AFAICT
<davecheney> cool
<davecheney> axw: protip: lbox propose -bug 1210328
<_mup_> Bug #1210328: cloudinit: switch apt-add-repository to use ppa:juju/stable <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1210328>
<axw> apart from linking the bug to the MP (which I normally do manually), does that do anything else?
<axw> apart from setting the milestone to the wrong thing ;)
<davecheney> axw: it does both of those things
<axw> yeah I got tired of the spam. maybe I'll just hack my lbox to not set the milestone
<davecheney> +1
<axw> it would seem that the bot is rooted
<axw> I'm getting the same errors as you jtv
 * davecheney bursts into tears
<davecheney> https://codereview.appspot.com/12518044/
<axw> davecheney: reviewed
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: - | Bugs: 4 Critical, 81 High - https://bugs.launchpad.net/juju-core/
<axw> dimitern: hey, do you have access to the machine the bot is on?
<axw> its /tmp is full of gocheck temp dirs, so now gocheck is making the bot unhappy
<pavel> is here anybody who can explain me how could I use juju set from external script inside of deployed instance?
<pavel> for example I do smth like juju deploy my_service, and there on service I have a daemon which could run juju set
<dimitern> axw: yes I have, let me check
<axw> dimitern: thanks. also, could you please update goamz?
<dimitern> axw: done and done
<axw> dimitern: thanks!
<jtv> Thanks from me as well, dimitern
<dimitern> jtv: you're welcome :) i wasn't too late, right?
<jtv> No, it's not weekend for me yet.  :)
<jtv> Happily landing branches.
<jtv> dimitern: oh, we're also importing the checkers as jc?
<dimitern> jtv: yes, and definitely now as .
<jamespage> davecheney, np
<dimitern> not*
<jtv> How a single-character typo can lead to disaster.  :)
<dimitern> :)
 * jtv should have been fast and said "OK I changed that throughout the project"
<dimitern> hehe
<jtv> Any other reviewers available for https://codereview.appspot.com/12682045/ ?
<dimitern> jtv: looking
<jtv> Thanks once more
<dimitern> jtv: you've got a review
<utlemming> gz,jam, wallyworld: I just pushed simple streams for HPCloud
<utlemming> gz, jam, wallyworld: any chance I could get one of you to smoke test it?
<jtv> Thanks again dimitern
<hazmat> does an uncaught panic take down the goroutine or the whole process?
<hazmat> trying to refactor status into the api server and noticing it tosses quite a few panics on bad inputs
<thumper> hazmat: everything I think
<nate-finch> hazmat: everything
<thumper> hi nate-finch
<nate-finch> thumper: morning :)
<thumper> nate-finch: how are you settleing in?
<nate-finch> thumper: Pretty well. The european guys are a great help.
<thumper> coolio
<dimitern> b
<nate-finch> thumper: how's it going at IOM?
<thumper> well, this week has gone pretty well
<thumper> no real surprises
<thumper> and no big changes of direction
<nate-finch> always a good thing
<thumper> so, all in all, pretty good
<dimitern> nate-finch: that's us, europeans, nice guys :)
<thumper> a few minor things that we've agreed to track
<nate-finch> dimitern: ha, yep :)
<dimitern> thumper: how about api progress?
<thumper> but on the whole, I'm pretty happy with the meetings and things gone on here
<nate-finch> nice nice
<thumper> dimitern: less screaming than I expected
<thumper> :)
<dimitern> thumper: oh, cool!
<dimitern> thumper: well, considering it's only me and roger on it for the past 2-3 weeks
<thumper> dimitern: don't stress, no names were brought up at all
<thumper> just, it has taken longer than we estimated
<dimitern> thumper: good to know :)
<thumper> we screwed up
<thumper> things are coming along
<thumper> that's pretty much how it was communicated
<thumper> and accepted
<dimitern> oh yes, they are - we're a week or so away from finishing the uniter api
<thumper> \o/
<dimitern> then the cli is the next biggest chunk
<thumper> we are trying to get some help for that too
<thumper> we do need to do the provisioner though
<thumper> as there are provisioner tasks on every machine
<dimitern> yeah
<dimitern> but the priority still is to finish non-state-machine workers' api first, right?
<thumper> yes, but what I'm saying is that the provisioner IS a non-state-machine worker
<thumper> lxc-provisioner
<dimitern> yeah - and that's already done and using state
<thumper> ah... wat?
<thumper> what do you mean done AND using state?
<dimitern> i mean the refactoring of the provisioner into environ and lxc ones
<thumper> sure, I did that
<dimitern> ok
<thumper> what I'm trying to make sure we understand is that to have a non-state machine agent only using the API, we need to fix the provisioner
<dimitern> sure
<nate-finch> dimitern: by the way, was there a team meeting yesterday morning? I realized as I was going to bed last night that I totally forgot about it.
<dimitern> nate-finch: it got cancelled
<nate-finch> dimitern: cool. I hoped as much with everyone at IOM.
<dimitern> nate-finch: there was a thread on juju-dev about it
<nate-finch> dimitern: a mailing list? maybe I'm not on that...
<dimitern> nate-finch: oh, you should subscribe to juju@lists.ubuntu.com (general and user-focused discussions) and juju-dev@lists.ubuntu.com (dev-focused, announcements, etc.)
<mgz> there are a bunch of others as well
<nate-finch> dimitern: I just joined those two.
<mgz> nate-finch: see MailingLists on the internal wiki if you've not done that already
<nate-finch> mgz: thanks, I'll take a look
<mgz> ...I'll add a juju section there
<nate-finch> mgz: yeah, I'd been there and subscribed to the main few that are suggested.
<mgz> so there's one more juju list, which I've now linked from that page, which you should subscribe to
<nate-finch> mgz: thanks. I was on canonical juju, just not the other two.
<sidnei> noodles775: https://bugs.launchpad.net/juju-core/+bug/1210484
<_mup_> Bug #1210484: upgrade-charm --repository doesn't expand ~ to $HOME <juju-core:New> <https://launchpad.net/bugs/1210484>
 * noodles775 looks
<nate-finch> dimitern, mgz: standup?
<noodles775> sidnei: oh - it's not fallout from one of the changes I landed, just something easy to go on with? (just finished bug 1183159, so can grab it)
<_mup_> Bug #1183159: juju ssh no longer passes parameters after unit/machine to ssh <bitesize> <cmdline> <juju-core:In Progress by michael.nelson> <https://launchpad.net/bugs/1183159>
<sidnei> noodles775: yup, indeed
<dimitern> mgz: ?
<mgz> nearly there...
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: mgz | Bugs: 4 Critical, 81 High - https://bugs.launchpad.net/juju-core/
<dimitern> nate-finch, mgz: https://codereview.appspot.com/12696043
<mgz> looking
<nate-finch> ditto
<dimitern> mgz, nate-finch: any progress?
<mgz> dimitern: will return to it...
<nate-finch> dimitern: almost done. Had to help my wife with toddler for a bit :)
<dimitern> no worries :)
<dimitern> I have 2 more incoming
<nate-finch> there ya go
<nate-finch> dimitern: ^
<dimitern> nate-finch: cheers
<dimitern> nate-finch, mgz: responded to https://codereview.appspot.com/12696043/
<mgz> dimitern: ta
<dimitern> nate-finch, mgz: and this is the next one: https://codereview.appspot.com/12698043 (reproposing, due to diff issue)
 * dimitern gtg for that contract, bbiab
<mgz> dimitern: as you hopefully gathered, my comments were misc thoughts rather than must-fix-this things
 * dimitern is back
<nate-finch> dimitern: I just posted a response to your response... maybe I'm misunderstanding something, but it seems like my code is doing what your code was doing.
<dimitern> nate-finch: I did replace my code with yours and the results were different
<dimitern> nate-finch: (after tweaking it a bit to compile)
<nate-finch> dimitern: hmm... weird.
<dimitern> nate-finch: the main issue is for unit-foo-42 it returned err=nil
<dimitern> nate-finch: you can try it out, pull my branch locally
<nate-finch> dimitern: yeah, was about to.  It's not like it's a big deal either way, I'm just curious where my code went wrong
<dimitern> nate-finch: there's one more thing - the code, as written is common across a lot of the other apiserver methods
<dimitern> nate-finch: well, for starters, r := result.Results[i] gives you a copy of the result, so it won't set anything back, unless you do result.Result[i] = r at the end of the loop; also it has to be r.Result, r.Ok = unit.PrivateAddress() (no := )
<nate-finch> dimitern: right right...  coming from a reference-based language always screws me up
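[editor's note: dimitern's point above in runnable form — indexing a slice of structs into a local variable yields a copy, so writes to it never reach the slice. `Result` is an illustrative type, not the actual apiserver params struct.]

```go
package main

import "fmt"

type Result struct {
	Value string
	Ok    bool
}

func main() {
	results := make([]Result, 1)

	// r := results[0] copies the struct: mutating r leaves the
	// slice element untouched.
	r := results[0]
	r.Value, r.Ok = "private.addr", true
	fmt.Printf("%q\n", results[0].Value) // "" — only the copy changed

	// Assign through the index (or write r back, or take a pointer)
	// for the update to stick.
	results[0].Value, results[0].Ok = "private.addr", true
	fmt.Printf("%q\n", results[0].Value) // "private.addr"
}
```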
<dimitern> mgz, nate-finch: updated https://codereview.appspot.com/12698043/ after william's "not a review" :)
<dimitern> actually, i missed some stuff that should be gone, reproposing now
<nate-finch> dimitern: ok
<nate-finch> dimitern: sounds like william had the same problem with the error handling that I did - it's not hiding errors from getUnit, it just passes them on to result.Results[i].Error = common.ServerError(err)
<henninge> Hi!
<henninge> I am trying to run the test suite but I get panics.
<henninge> I think it is because it can't find the MongoDb server.
<henninge> Do I need to configure/start/... MongoDb to be able to run the test suite?
<dimitern> henninge: yeah, you'll need mongodb + ssl, the ones mentioned in the docs
<dimitern> just a sec
<henninge> Oh yeah, a doc pointer would be great ;-)
<dimitern> http://juju-dist.s3.amazonaws.com/tools/mongo-2.2.0-precise-amd64.tgz
<dimitern> henninge: ^^ it's in the README
<henninge> Oh, I have it installed
<henninge> I am working with it on my other project, too.
<henninge> it's 2.4, though
<henninge> 2.4.5
<dimitern> henninge: that won't work
<dimitern> henninge: there are known issues with 2.4
<henninge> I see
<dimitern> henninge: what series are you running?
<henninge> This is my workstation, so it's raring.
<dimitern> henninge: ok, so replace 'precise' with 'raring' in the link above
<henninge> But the instances in the cloud run precise
<henninge> thanks
<dimitern> henninge: that doesn't matter, for tests run locally, your machine series matters only
<dimitern> henninge: also, since you have another mongo installed, make sure the one from the tarball gets picked up
<henninge> Yeah, I was just wondering about that. You mean PATH-wise?
<dimitern> henninge: just a sec, i think not..
<henninge> I can stop the other server but I cannot uninstall it.
<dimitern> henninge: it searches for it in /usr/bin/mongod
<henninge> Hm
<dimitern> henninge: so either try linking the tarball version there and link the other one somewhere else, or use a vm
<dimitern> henninge: or, alternatively, first try PATH hacking
<henninge> I'll look into those, thanks ;-)
<dimitern> henninge: export PATH=/path/to/tarball/mongo/bin:$PATH
<henninge> yes
<dimitern> henninge: hope it helps
<henninge> dimitern: Changing the PATH is totally enough. Cool ;)
<dimitern> henninge: cool! nice to know :)
<henninge> Also, it uses non-standard ports, so the other instance can keep running
<dimitern> henninge: oh yeah, and it starts/stops it as needed
<dimitern> nate-finch, mgz: I decided to update the last CL with 2 more methods: Destroy and SubordinateNames(), because I removed GetServiceEntity and SetResolved per william's request
<dimitern> so i'll be reproposing with the changes and update description very soon
<henninge> dimitern: Can you give me an example of how to run a single test suite?
<henninge> I guess I don't understand the line in CONTRIBUTING
<henninge> go test -gocheck.f '$REGEX'
<henninge> Or rather, a single test or whatever
<dimitern> henninge: cd inside the dir, then run "go test -gocheck.v -gocheck.f MySuite" or -gocheck.f TestSomething
<henninge> Oh, inside the dir
<henninge> cool
<dimitern> henninge: the -gocheck.f takes any subset of a suite.method match
<dimitern> mgz, natefinch: finally updated https://codereview.appspot.com/12698043/ - please take a look, won't change it anymore :)
<natefinch> dimitern: ok, looking
<dimitern> mgz: ping
<natefinch> dimitern: I don't know the code well enough, but is this correct?  canModify, err := u.getCanRead()  canRead qualifies you for modification as well?
<dimitern> natefinch: it should be called getCanModifyOrRead, but it checks the same thing
<natefinch> dimitern: ok, just checking
<dimitern> natefinch: i.e. only the owning unit agent for that particular unit is allowed to read and/or modify its data
<dimitern> natefinch: and that's determined at API login time by the authenticating entity's tag
<natefinch> dimitern: fair enough.  Just wanted to make sure there wasn't another canModify method that should have been called.
<dimitern> natefinch: nope
<natefinch> dimitern: man, that is some boilerplate, huh?  Did you mention that this stuff was going to go away, or was that something else?
<dimitern> natefinch: take a look at the other apiserver modules, it's pretty much all like that, and it's unavoidable for now
<dimitern> natefinch: the whole point is to support bulk ops at the API level, and some time in the near future we'll gut out the insides and write them properly with some concurrent mongo bulk ops or something, keeping the interface intact
<natefinch> dimitern: I think I figured out why your tests are passing, but the code can still return errors that aren't masked by ErrPerm - your unit-foo-42 test returns false from canRead because it's an invalid tag.... but if it was a valid tag, and u.getUnit() returned an error, then that error would propagate up.  I'm not sure how to put that in the tests, though.
<dimitern> natefinch: just by adding an extra line?
<dimitern> natefinch: well, unit-mysql-0 is both valid and exists, but still returns errPerm
<dimitern> natefinch: that's because you never get to getUnit - canRead filters that out
<natefinch> dimitern: so should it be impossible that getUnit returns an error when canRead returns true?
<dimitern> natefinch: basically yes, unless you consider things like when the state connection drops or your unit just got deleted by someone else
<natefinch> dimitern: basically != exactly :)  Those errors could get propagated up from this method. It might be rare, but it's possible right now.  If that's ok, then that's ok.
<dimitern> natefinch: there are several tiers of security/authorization - login level (valid user), facade level (agents only), method level (owners only)
<dimitern> natefinch: if we get errors after we checked for permissions, it's ok to propagate them i think
<dimitern> natefinch: at least by then we know the user is who he claims to be
<natefinch> dimitern: Yep.  That's cool.  Just wanted to make sure
<dimitern> natefinch: sure, security ramifications are always tricky and worth a discussion
<dimitern> anyone else willing to review https://codereview.appspot.com/12698043/ ?
<natefinch> mgz: ^ ?  (my turn to ping ;)
<dimitern> natefinch: yep, I gave up :)
<dimitern> you can't be productive like that.. :/
<dimitern> I need a bunch of reviewers to throw stuff at them :)
<dimitern> let's hope next week'll be better
<natefinch> yep. Should be a lot more populous next week.  Is Frank back, or does he have next week off too?  I forget
<dimitern> frank's off next week I think
<dimitern> I have this cute sidebar calendar in thunderbird
<dimitern> natefinch: you know I'm really itching to get rid of all that duplicated code, and especially boilerplate code in tests
<natefinch> dimitern: yuup
<dimitern> natefinch: I'll think of a way to optimize it, but all attempts so far resulted in less readable code :/
<natefinch> dimitern: yeah, it's tricky
<natefinch> lunch time for me
<dimitern> sidnei: hey
<dimitern> last call before I head off - anyone willing to review a simple branch? https://codereview.appspot.com/12698043/
<mgz> sorry guys, had to run out last minute
<mgz> will catch up with reviews now
<dimitern> mgz: yay!
<dimitern> :)
#juju-dev 2013-08-11
 * davecheney waves
#juju-dev 2014-08-04
<waigani> axw: I've added tests, could you take a quick look please: https://github.com/juju/juju/pull/289/files
<axw> looking
<axw> waigani: reviewed
<waigani> axw: thanks - I'll mock out ssh.Client
<wallyworld> axw: hiya,  just wanted a quick update on the azure/manual bootstrap issues
<axw> wallyworld: hey! well, I was OCR this morning, so I only got back to it a little while ago
<axw> so far, not really had any insights
<axw> wallyworld: I had an idea about how to speed up bootstrap on azure (start out on ephemeral disk, then migrate over to OS disk at end of bootstrap), but that will probably only paper over the issue
<wallyworld> axw: azure has slower i/o in general so perhaps that might be implicated somehow
<wallyworld> but there seems to be a mongo/juju startup issue doesn't there
<axw> yeah, I'm pretty sure it's related to slow disk, but I don't know how to work around it at the moment...
<axw> seems so, yes
<wallyworld> axw: i can't recall the code - but we added direct=true, simplified things also; I wonder why it's started happening again
<axw> wallyworld: not sure if you saw my comment on the bug from late friday. I found some options relating to background indexing that we might want to use, but it doesn't help in this case
<wallyworld> didn't see that yet
<axw> wallyworld: yeah, that didn't seem to help
<wallyworld> if i recall correctly, we seem to dial mongo ok, but then fail when we try to use it
<wallyworld> maybe the dial is returning ok but the db is still initialising?
<axw> wallyworld: yeah, I think the read is just blocked on the backend
<wallyworld> axw: so, maybe we enhance dial to somehow poll for db readiness, whatever that involves
<axw> wallyworld: yep, that is what I am investigating now.
<wallyworld> axw: anyways, need to go grab breakfast, good luck, thank you, and keep me in the loop :-)
<axw> wallyworld: have not looked at manual bootstrap again yet, deferring till this is fixed as it only affects master
<wallyworld> ok
<axw> sure, will do
<axw> enjoy
<wallyworld> will do
<dimitern> jam, alexisb, hi
<alexisb> hi dimitern
<alexisb> did you see your invite?
<dimitern> alexisb, I did - not sure if I responded, but I'll be there
<alexisb> dimitern, cool, I will ping you when we start ahangount
<alexisb> dimitern, other qs for me or jame?
<dimitern> alexisb, thanks
<dimitern> jam, I don't have the g+ link - where to join?
<dimitern> jam1, g+ link?
<jam1> dimitern: working on it
<dimitern> jam1, thanks :)
<voidspace> axw: ping
<axw> voidspace: pong
<voidspace> axw: hey, hi
<voidspace> axw: about closing port 37017 in an upgrade step
<voidspace> axw: for the azure provider I discussed with you the masking rules changing - causing the firewaller to close the port
<voidspace> axw: did that only apply to azure, or would it be true of the other providers too?
<axw> if any provider's Ports method returns 37017, the firewaller will close it
<axw> any port that wasn't opened via OpenPorts
<dimitern> axw, hey
<jam1> dimitern: thanks for coming. We'll likely change how we do the meeting. We'll switch to using the conference call for audio
<axw> dimitern: heya
<jam1> so that you can hear with the good microphone
<jam1> and leave a G+ to do video
<jam1> but just mute it
<jam1> dimitern: you can call the conference line ok, right?
<dimitern> axw, catching up with emails, I've seen your question about SetAPIHostPorts and will respond soon, sorry I haven't seen it earlier
<axw> dimitern: no rush, thanks
<dimitern> jam1, oh really? I had issues with these audio conf calls before
<dimitern> jam1, I used them before (joined), but the audio quality was very poor
<jam1> dimitern: you were having trouble hearing people in the room, so we were hoping to fix those, maybe we could set up a trial run in 30 minutes when we have a break?
<dimitern> jam1, sgtm
<axw> voidspace: (sorry, forgot to prefix; here's me doing it in case you're waiting for your IRC client to poke you)
<voidspace> axw: ah, I didn't see - thanks
<dimitern> jam1, and thanks (if it was you) for scheduling this morning's call for 10:30 when we have our usual 1:1 so I couldn't miss it :)
<jam1> dimitern: that was just serendipity, but I'm glad it worked
<voidspace> axw: I got distracted by a blog post that jml linked to about the difference between concrete and abstract dependencies
<voidspace> or something like that
<axw> okey dokey :)
<voidspace> axw: cool - so I need to check the Ports methods on the other providers and check they're not masking StatePort
<dimitern> jam1, so will there be other calls before the standup?
<voidspace> thanks
<axw> voidspace: right
<voidspace> great
<axw> hrm
<axw> voidspace: that was my understanding though, but now I'm confused - openstack doesn't mask the port at all
<jam1> dimitern: you mean 1:1's? not today, because I'm sprinting.
<jam1> we'll try to test the conference call in 20 mins, though
<dimitern> jam1, no, I mean other meetings like the last one
<dimitern> jam1, alright
<jam1> dimitern: your next meeting with Mark S is tomorrow morning
<jam1> we might ask you to chat during the day
<dimitern> jam1, no worries, just let me know
<jam1> dimitern: will do
<voidspace> axw: ah
<voidspace> axw: so we probably need an explicit upgrade close
<voidspace> axw: or would you prefer further investigation?
<axw> voidspace: I'd prefer the latter. It'd be nice if we can just make use of the existing logic to reconcile ports
<voidspace> right
<jam1> dimitern: so I think we're skipping the conference call, the room just got really noisy during the break
<jam1> but likely tomorrow we will just call your phone directly
<jam1> rather than doing a conference hangout.
<jam1> If we need the hangout, I did get the information
<dimitern> jam1, my phone? can't we use hangouts somehow, so I can use my headphones?
<dimitern> well, I suppose I can connect headphones + mic to my phone as well
<jam1> axw: voidspace: so I didn't follow the whole conversation yet, but my thoughts were certainly "let's stop creating new things that are broken" rather than worrying about "upgrade" just yet.
<voidspace> jam1: I've done that bit
<jam1> if its confusing, I'd much rather have those split out and land the first part
<voidspace> jam1: Will gave me an LGTM on that PR, on the condition that I also look at upgrade
<voidspace> jam1: can't land anything due to critical bugs however
<jam1> voidspace: sure
<voidspace> jam1: but yes, upgrade/port close as a separate PR
<jam1> dimitern: so the /etc/network/interfaces stuff needs to be a critical bug for our team
<jam1> because it means if you do "juju bootstrap local" and restart your machine, networking doesn't come up
<jam1> voidspace: sgtm
<voidspace> cool
<jam1> dimitern: I don't think thumper actually created a bug, but I'm not sure.
<dimitern> jam1, I'll file a bug and work on it today then
<jam1> dimitern: you're welcome to delegate to voidspace or TheMue
<dimitern> jam1, even better :)
<jam1> dimitern: TheMue is gone today, so it is between you and voidspace
<dimitern> jam1, righto
<dimitern> voidspace, jam1, I couldn't find a bug for that so far, so I'll file a new critical bug for it soon
<axw> dimitern: there is one
<axw> I'll see if I can dig it up
<axw> dimitern: well, the title and diagnosis don't match, but I'm pretty sure it's the cause of https://bugs.launchpad.net/juju-core/+bug/1349635
<dimitern> axw, awesome, thanks!
<axw> nps
 * rogpeppe wonders what's the likelihood of getting something landed in juju-core this week
<axw> wallyworld: azure's apt mirrors were busted before; I told IS, and they fixed it. I've just run CI for azure again and it worked, but there's been no changes to support that
<axw> I guess it's still a bit touchy
<dimitern> voidspace, are you available to work on bug 1349635 today?
<axw> wallyworld: I've not come up with any useful things to improve, so at this stage I think we should just increase the socket timeout
<wallyworld> axw: remind me, what is the timeout now?
<wallyworld> 20 secs?
<voidspace> dimitern: ok, after lunch :-)
<dimitern> voidspace, sure :)
<axw> wallyworld: 21s
<wallyworld> which is 2 x the heartbeat interval
<dimitern> mup, bug 1349635
<mup> dimitern: Bug #1349635: Networker shouldn't touch /etc/network/interfaces in a local environment <local-provider> <networker> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1349635>
<axw> wallyworld: I don't think the heartbeats require a lock though
<axw> wallyworld: db reads will not return if something is holding the lock
<wallyworld> axw: so perhaps at startup of mongo, there's no heartbeat yet so the socket can die
<axw> wallyworld: I'm pretty sure it's just that the response is taking longer than 21s, due to disk being super slow
<dimitern> voidspace, I've assigned you to the bug and added a kanban card for it as well in the moonstone lane
<voidspace> dimitern: thanks
<voidspace> dimitern: I don't like tickets with repro instructions that end "You now have a broken network configuration."...
<wallyworld> axw: yes, that's sort of what I was implying. but once mongo spins up, the heartbeat should keep the socket alive
<wallyworld> axw: so is it worth increasing the timeout just for bootstrap
<dimitern> voidspace, :) it's actually easier to reproduce, without breaking the network config (at least on my machine it didn't). I'll add a comment to the bug
<voidspace> heh
<wallyworld> as we don't want it too long in practice since we want to know early when mongo has died
<axw> wallyworld: yeah, that's kinda what I was thinking too
<axw> wallyworld: I think if we increase it to 60s for bootstrap we should be safe
<wallyworld> yep, cause that's why it was reduced from 60s or whatever. or was it 10mins
<axw> I think dial timeout was 10 mins
<axw> and socket timeout = dial timeout if you don't specify otherwise
<wallyworld> yeah, and 10 mins is waaaay toooo looong
<wallyworld> so we're only guessing here that's what's causing the bootstrap failures, but it seems plausible
<wallyworld> axw: i'm still unclear why dial seemed to be returning ok though
<axw> wallyworld: dial doesn't need a lock
<wallyworld> and then the subsequent operation failed
<wallyworld> so there was nothing holding the lock? but rather it failed because the lock wasn't created yet?
<axw> wallyworld: I mean, dial would not be affected by something holding a lock
<axw> wallyworld: each db in mongo has a single write lock
<wallyworld> sure, what was holding the lock to prevent the op from running and hence cause the socket to timeout?
<axw> adding an admin user will require that lock
<axw> I don't know. I guess something to do with replica set initiation
<axw> it's pretty difficult to get the information given the time window for failure
<wallyworld> hmmm, so given replicaset can take, what, 2 minutes to start, the timeout needs to be at least that long
<axw> wallyworld: that's how long it takes to start, which is what we wait for. I don't know what it's doing afterwards
<wallyworld> i really wish we could block bootstrap until mongo startup was finished
<axw> well I can set the socket timeout to 1 hour, and that'd be like waiting forever ;)
<wallyworld> axw: i wish we could poll every X seconds and each time log "waiting for mongo to start"
<dimitern> any idea why I can't go get gopkg.in/natefinch/npipe.v2 ? http://paste.ubuntu.com/7950527/
<axw> dimitern: it's Windows-only
<dimitern> axw, ah, right - ok, godeps seems happy now
<axw> wallyworld: given that I'm EOD soon, and off for the next three days, I'd have to defer that to someone else
<axw> I can increase the timeout now though
<wallyworld> axw: that would be great, thanks
<wallyworld> enjoy your "holiday"
<axw> wallyworld: thanks :)  still sore from pulling up carpet all yesterday... filthy stuff
<wallyworld> can imagine
<perrito666> morning
<mgz> mornin'
<axw> mgz: when you have a moment, can you please review https://github.com/juju/juju/pull/463
<mgz> axw: looks good, do we want a dumb test that checks the timeout is set to the longer value?
<dimitern> voidspace, the hangout acts funny again, but I think we managed to talk about what was needed, right?
<dimitern> voidspac_, (in case you missed that) the hangout acts funny again, but I think we managed to talk about what was needed, right?
<mattyw> dimitern, do you have a couple of minutes for a quick question?
<voidspac_> dimitern: I think so, yes
<dimitern> mattyw, sure, go ahead
<voidspac_> dimitern: my internet connection is a bit rubbish I'm afraid
<mattyw> dimitern, would a hangout be ok?
<voidspac_> dimitern: generally ok, but I have to keep restarting the router
<dimitern> voidspac_, oh, bugger
<dimitern> mattyw, certainly - just send me a link
<voidspac_> dimitern: I think we covered everything though, when I need you I will hassle you :-)
<mattyw> dimitern, https://plus.google.com/hangouts/_/gywto2yelf2av2i4h7epsfy32ma?pqs=1&authuser=0&hl=en
<dimitern> voidspac_, no worries at all :)
<mattyw> dimitern, thanks
<axw> mgz: not sure how I'd test it. you can't query an mgo.Session for its timeouts
<axw> open to suggestions
<mgz> axw: I was thinking of something dumb like just mocking out all the bits that do anything and asserting that the timeout set function is given the bigger value
<mgz> or mock DialWithInfo and assert on the opts
<axw> DialWithInfo doesn't use that option, it's not set until after we get a session
<axw> hmm
<axw> I'll work something out
<mgz> your DialWithInfo
<axw> ah
<axw> yeah, I could do that
<mgz> it's less to validate the current change, and more so something breaks if we change the logic
<axw> mgz: I've added a test that checks the options passed to agent.InitializeState
<mgz> axw: landit
<axw> orrite, bbl
<mattyw> dimitern, do you have a moment for another question? - should be quick
<dimitern> mattyw, sure, what do you need?
<bodie_> morning all
<mattyw> dimitern, it appears that the unit struct returned from st.AddUnit() doesn't fill in the charmURL properly, I was wondering if this was expected? I added this http://paste.ubuntu.com/7951605/ to state/unit_test.go SetUpTest
<dimitern> mattyw, I'm not 100% sure, but I think this is the expected behavior, as the uniter is supposed to set the charm url after deployment
<bac> ping mgz
<mattyw> dimitern, is that the best place to get the charm url from? the metrics need to know which unit they came from so I'm just using unit for that, but they also need to know the charmurl so I thought they should probably get it from there
<mgz> bac: hey
<dimitern> mattyw, the best place to get the charm url is from the unit's service document, as it has to be set once the service gets created (or updated when upgraded)
<mattyw> dimitern, ok thanks
<dimitern> nps
<bac> mgz: are you using jenkins-github-lander in your ci?  if so, you should update to the newest version.  i fixed a bug last week that you'd want.
<mgz> bac: thanks, I will
<axw> mgz: increasing this timeout certainly seems to have made the tests unhappy
<axw> :/
<mgz> bac: done
<mgz> axw: :(
<mgz> can we test-override back small?
<mgz> or do we think making it longer also made real-world scenarios bad?
<axw> mgz: I dunno, I'd rather not paper over it. I found a bunch of tests that don't close mgo sessions and such, going to backport it to 1.20 now
<axw> not sure if it'll help any, but it won't hurt
<mgz> urg
<mgz> you also have a branch up for that kind of thing, stacked on our other needed fixes?
<axw> mgz: yeah that's for master, need to backport it
<mgz> axw: sounds good
<rogpeppe> anyone with good http-fu know what's going on at line 1503 here? http://golang.org/src/pkg/net/http/server.go#L1503
<dimitern> rogpeppe, according to http://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html, "The asterisk "*" means that the request does not apply to a particular resource, but to the server itself, and is only allowed when the method used does not necessarily apply to a resource."
<rogpeppe> dimitern: ta!
<rogpeppe> dimitern: i should have tried "asterisk" as a search term...
<dimitern> rogpeppe, yep, I had the same issue initially
<jam1> dimitern: voidspace:
<jam1> just checking in with you guys
<jam1> to see how its going
<jam1> mgz: did you get my email about cloudsigma?
<voidspace> jam1: hi
<voidspace> jam1: doing ok
<jam1> voidspace: did you end up picking up the /etc/network/interfaces bug?
<voidspace> jam1: working on critical bug 1349635 (delegated by dimitern !)
<voidspace> jam1: yep
<jam1> bug #1349635
<natefinch> voidspace: I cancelled our 1:1 today since you're not really working for me right now, unless you'd like to meet
<jam1> bah, mup, where are you… :0
<voidspace> jam1: think we're just going to have to special case local provider
<dimitern> jam1, I managed to almost catch up with emails (few more to reply to), decided to do the changes only in the high level networking spec, not the low level one (yet)
<jam1> mup: bug #1349635
<voidspace> natefinch: I saw
<mup> jam1: Bug #1349635: Networker shouldn't touch /etc/network/interfaces in a local environment <local-provider> <networker> <regression> <juju-core:Triaged by mfoord> <https://launchpad.net/bugs/1349635>
<voidspace> natefinch: fine
<dimitern> natefinch, :)
<jam1> dimitern: i think high level first works well, as I think that is what we're focused on discussing in the short term
<mgz> jam1: yup, will reply, can do some of the reviewing
<dimitern> jam1, yeah, as the basic design is likely to change in the coming days, let's first have it discussed and approved, then consolidate the changes in the low level doc
<axw> mgz: landed the timeout increase. having trouble getting 1.20 tests to run on my machine in general, giving up for tonight. see you in a few days
<axw> wallyworld: ^^
<mgz> axw: thanks, I'll try to keep an eye out on th reliability
<wallyworld> axw: thank you for all the work today
<mgz> enjoy the break
<axw> cheers
 * axw logs off
<voidspace> axw: o/ have fun
<bodie_> updated https://github.com/juju/juju/pull/448 w/ davecheney's comments
<voidspace> dimitern: ping
<dimitern> voidspace, hey
<ericsnow> wwitzel3: what was that tool we were using at the sprint to manage containers?
<jrwren> how can I send ctrl-a beginning-of-line to bash in a debug-hooks tmux session?
<katco> ericsnow: qemu?
<wwitzel3> ericsnow: virt-manager
<ericsnow> wwitzel3: thanks
<perrito666> sinzui: ping/morning
<sinzui> hi perrito666
<perrito666> sinzui: can I borrow a few minutes of your time?
<sinzui> yes
<perrito666> sinzui: I am trying to fix the issue with ha timing out
<perrito666> now I see the invocation for the test is: test_recovery.py --ha --charm-prefix=local:precise/ /mnt/jenkinshome/jobs/functional-ha-recovery/workspace/extracted-bin/usr/lib/juju-1.21-alpha1/bin test-function-hp-2
<perrito666> can I omit the --charm one?
<perrito666> I do not have a local charm repo
<perrito666> or in any case what should I have on said repo?
<sinzui> you need the dummy charms...
<sinzui> perrito666, bzr branch lp:juju-ci-tools/repository
<sinzui> export JUJU_REPOSITORY=<path>/repository
<sinzui> perrito666, The test needs to deploy charms and it is specifically looking for the dummy source and sink charms
<perrito666> excellent, branching, tx
<katco> does anyone know what an .lbox file does? contents: "propose -for=lp:goamz -cr"
<whit> katco: lbox is a go cli for reviewboard + lp
<whit> katco: lets you create merge proposal from the command line
<whit> *s
<katco> whit: ty... so a file checked into a repository root would, what, let reviewboard know how to do merges?
 * katco is having difficulty finding any docs on lbox as well.
<whit> katco: uh… I'm not super familiar with the actual mechanics.
<whit> katco: let me fetch a link to the code
<katco> whit: that would be great ty
<whit> katco: some narrative background: http://blog.labix.org/2011/11/17/launchpad-rietveld-happycodereviews
<katco> whit: thank you so much!
<whit> katco: code: https://code.launchpad.net/~niemeyer/lbox/trunk
<whit> katco: I'm sort of an lbox n00b. I'm sure someone here could give you a better intro
<whit> such as the author, niemeyer
<sinzui> natefinch, axw last commit may have fixed bug 1350911. I am going to rerun the test to verify it is repeatable.
<bodie_> can I get a LGTM / LBTM on this?  https://github.com/juju/juju/pull/448 -- it's for a "high" bugfix
<natefinch> sinzui: sounds like I may have picked the right bug to assign to myself then ;)
<sinzui> perrito666, ericsnow bug 1351030 and bug 1351019 might be fixable with axw's last merge into 1.20.
<perrito666> that would be a blast :)
<perrito666> axw last commit is?
<ericsnow> sinzui: what's the hash?
<ericsnow> :)
<sinzui> ericsnow, 2c4db889f1260c8965a88171aeba14f8b21468c2
 * perrito666 pulls
 * perrito666 tries
<mgz> sinzui: could we try to confirm if bug 1350983 is resolved by the bootstrap timeout change?
<sinzui> mgz, I think we should make a patch and pr with fixes-1350983 in the message
<sinzui> CI may love it
<mgz> sinzui: I will try that. it's a little complicated by mongo test flakiness, which axw also put some fixes for on trunk, but those haven't gone through yet (as they weren't actually resolving a blocking bug)
<sinzui> :(
<mgz> so I may put that in first with a jfdi for the test stability first
<perrito666> sinzui: running tests with axw's fix, in the mean time, lunch
<jrwren> i don't really understand the remapping but ctrl-a, ctrl-b sends ctrl-a to bash
<sinzui> natefinch, the bug may still be present. my manual run panicked at UniterSuite.TearDownTest. CI saw the failure and has started retesting
<natefinch> sinzui: ack.  I'll run the tests locally and see what's up
<perrito666> sinzui: no luck with current master so I guess axw's patch did not fix lp:1351030
<sinzui> perrito666, :(
<sinzui> natefinch, I am marking the bug you selected from 1.20 as fixed. It is passing more than 50% of the time, which was the old rate
<natefinch> woo hoo
<alexisb> natefinch, ping
<alexisb> can you and perrito666 cover the cloudbase interlock tomorrow?
<alexisb> as I will be busy with other stuff
<alexisb> natefinch, ^^
<perrito666> alexisb: I presume I do if you throw me a bit more context :p
<alexisb> perrito666, let me add you to the invite
<alexisb> really it is just a time for gabriel to ask questions and for both sides to ensure there is noone waiting on anything
<perrito666> ah certainly
<perrito666> book me in
<alexisb> alrighty I added you and I will leave it on the calendar
<alexisb> thanks perrito666 !
<natefinch> perrito666: thanks
<natefinch> perrito666: it's no big deal, I'll be there too.
<perrito666> natefinch: I usually can answer questions, the occasions when I cannot I just forward people to you or ask myself in turn
<perrito666> meh, apparently lenovo decided that thinkpad users no longer need 16G of ram :( man Ill never find a decent computer to replace mine
<natefinch> perrito666: the XPS15 is a nice machine...
<perrito666> weight?
<katco> perrito666: fwiw http://blog.kate.cox2.name/2014/06/i-was-recently-in-market-for-new-laptop.html
<perrito666> katco: sweet thank you :)
<natefinch> perrito666: a hair under 2 KG (just weighed it)  That's with the smaller battery
<perrito666> natefinch: tx for the metric weight I know that hurt a bit inside
<katco> god i wish we used the metric system and 24h format time
<katco> it just makes so much more sense
<natefinch> yep
<perrito666> katco: the RAM column is factory default or max possible?
<natefinch> perrito666: our scale was actually already set to grams... it's what I use for cooking most of the time because doing things in 1/8ths of ounces is dumb
<katco> perrito666: factory default
<perrito666> natefinch: uh, you do precise cooking?
<perrito666> I go "by the eye"
<natefinch> perrito666: I don't even like measuring things by volume, except for liquids and stuff like sugar that is non-compressible and yet fills the available volume.
<perrito666> natefinch: well I usually do baking by the volume, most recipes are expressed in cups and stuff like that which can be found on measuring jars
 * natefinch has a scale that goes down to a tenth of a gram
<perrito666> natefinch: nothing that needs to be cooked with that level of precision is legal here
<natefinch> yep, and a cup of flour can vary by about 30% depending on how compact it is
 * natefinch has watched way too much Good Eats
<katco> natefinch: no wonder you took us to the most complicated restaurant ever! ;)
<jrwren> katco: mac book air shouldn't be on your list. The webcan doesn't work at all :(
<katco> jrwren: lol i did not know that.
<perrito666> apparently I want an X230 with x240 microprocessor
<katco> jrwren: i suppose it would have gotten reviewed had i landed on that
<jrwren> katco: Now you know. Other than the lack of webcam, it makes a nice Ubuntu laptop.
<katco> jrwren: well hopefully i won't be in the market anytime soon :)
<perrito666> honestly I could do with some of the providers saying "this laptop idling consumes this much" bc the battery load itself says nothing
<perrito666> for all I know the machine could take more energy than iron man
<katco> perrito666: doesn't that say more about the software than the battery?
<perrito666> katco: true, but some level of benchmark would be useful
<perrito666> system76 makes machines which run like a charm with ubuntu, so they could say how much an idling ubuntu uses on those things
<perrito666> this looks wonderful https://system76.com/laptops/model/galu1
<katco> perrito666: the only problem i have with that is that i would never feel like i was comparing apples to apples across brands
<perrito666> katco: well of course, it would be apples vs lenovos vs s76s  :p
<katco> perrito666: rofl
<katco> (golf clap)
<perrito666> I think on my next trip I am getting a galago ultra pro, thanks katco
<ericsnow> at what point does /usr/lib/juju/bin/mongod actually get installed?
<perrito666> comes with the tools
<ericsnow> we have a hard-coded path to /usr/lib/juju/bin/mongod in mongo/mongo.go
<ericsnow> when I run tests it does not exist
<ericsnow> so what in the test suite is supposed to install these tools?
<ericsnow> (and uninstall them when done?)
<katco> perrito666: if i'm there, you can handle my gazelle pro a bit to see if you're satisfied with the build quality
<perrito666> katco: my wife says that I could also stop treating my computers as if they were anvils
<katco> perrito666: haha
<perrito666> my most common issue is dropping the laptop on the bag while the bag is on the floor
<perrito666> therefore hitting the floor
<katco> perrito666: i do that sometimes >.<
<perrito666> mm sinzui seems to have fled
<bodie_> https://github.com/juju/juju/pull/415 is ready for review... again >_> *pokes fwereade*
<perrito666> is anyone else playing with ha?
<menn0> pop culture quiz: do you know the common expression as "spanner in the works" or "monkey wrench in the works"
<menn0> (the reason for this will become clear soon enough)
<perrito666> menn0: http://dictionary.cambridge.org/es/diccionario/britanico/put-throw-a-spanner-in-the-works
<menn0> yeah I know
<perrito666> I was asking, I forgot the ?
<menn0> I'm just wondering what people here prefer
<menn0> oh ok. yes that's the expression I'm referring to.
<waigani> I know the spanner one
<perrito666> I prefer spanner or wrench, without the monkey
<menn0> ok cool
 * menn0 suspects he should have asked at a different time of day to get a wider cross section of the team
<perrito666> menn0: will you tell us what is the expression for?
<menn0> perrito666: to have something "throw a spanner in the works" means that something disrupted plans or the workings of some thing.
<perrito666> menn0: I meant, why the question
<menn0> naming for a new top level Juju package :)
<menn0> I'll propose soon
<perrito666> menn0: oh thanks, but I am already married
<menn0> ha ha
<wwitzel3> lol
<perrito666> natefinch: do you have the setup to run the test suite from qa?
<wwitzel3> perrito666: so I never did figure out how to get admin access :(
<wwitzel3> perrito666: I've been trying to figure out a way locally to delete the admin. files from local/db and reissue the addUser
<wwitzel3> perrito666: but so far, I haven't been able to do that without completely borking the system
<perrito666> you could run mongod with --noauth
<perrito666> but I am not sure how nice that is
<davecheney> wwitzel3: the admin password is your admin-secret from .juju/environments.yaml
#juju-dev 2014-08-05
<wwitzel3> davecheney: this is for local
<wwitzel3> davecheney: they can connect as machine-0 using the password from .juju agents.conf, but nothing is working for admin.
<wwitzel3> davecheney: I will have them try that when they are back online tomorrow though .. using that admin-secret from local.jenv
<wwitzel3> thanks :)
<davecheney> why is the local provider so weird ?!?
<davecheney> it has too many edge cases
<wwitzel3> yeah
<davecheney> no axw today ?
<davecheney> I might have accidentally use the word "shittle" when doing my expenses
<davecheney> oh well, i'm sure they know what I mean
<dimitern> morning all
<thumper> menn0: hey, you still around?
<TheMue> morning
<dimitern> hey TheMue
<dimitern> jam, hey, are we about to start?
<thumper> dimitern: very soon
<dimitern> thumper, i've dialed into the conference from skype, as there's a toll free us number, and i'm also in the hangout for video
<thumper> dimitern: niemeyer has just finished up with the docker demo
<thumper> people are moving to rooms now
<dimitern> right, cool
<jam1> hey dimitern
<jam1> so I like the idea of "juju deploy --net??? foo,bar,endpoint=baz" or something along those lines.
<jam1> Did we change anything for "netzone" vs "network" in the doc already?
<jam1> I didn't get a chance to read it today
<jam1> I also should be going to the review of openstack charm changes, but I think James was in with mark so that one might be getting moved
<dimitern> jam1, just a few sentences, as I'm not sure yet how netzones and subnets will be given at deployment time
<mjs0> thumper: I'm here now
<menn0> thumper: I'll be around for a while working on a presentation so ping me if you like
<j-a-meinel> weird, it won't let me be "jam1" but it will let me be j-a-meinel. I wonder if my machine has been connecting + disconnecting too much and it started blocking me.
<jam> hello
<rogpeppe1> jam: hiya :-)
<jam> hey rogpeppe1, just trying to figure out what usernames I was able to connect as
<rogpeppe1> jam: i realised. just thought i'd reply anyway.
<dimitern> voidspace, hey hey
<dimitern> voidspace, any troubles with the networker bug so far?
<perrito666> morning
<natefinch> mornig
<natefinch> morning
<mattyw> folks, I'm on call reviewer today so if there's anything you want me to take a look at give me a shout
<voidspace> dimitern: hey, yes sorry
<voidspace> dimitern: missed your message - but yes I have an issue
<tasdomas> hi,
<dimitern> voidspace, we can talk about it at the standup in 5m then :)
<voidspace> dimitern: ok
<voidspace> :-)
<tasdomas> has anyone tried deploying services on ec2 with the current trunk of juju? Deploying mongo fails (it tries to connect to itself using a public address and fails to do that)
<dimitern> voidspace, standup?
<voidspace> dimitern: on my way
<voidspace> dimitern: hmmm... trying again
<voidspace> dimitern: TheMue: my internet is awful
<voidspace> dimitern: TheMue: I may be in and out I'm afraid
<voidspace> here for now though
<TheMue> voidspace: ok
<voidspace> dimitern: TheMue: http://pastebin.ubuntu.com/7960111/
<voidspace> dimitern: a.st.EnvironConfig()
<voidspace> entity.Jobs()
<voidspace> dimitern: "booting system without network configuration"
<voidspace> dimitern: possibly my network is now screwed in my vm :-)
<voidspace> dimitern: I did take a snapshot
<dimitern> voidspace, a.CurrentConfig().Value(agent.ProviderType) can give you the provider type
<voidspace> dimitern: with this change, straight away I get "agent-state: started"
<voidspace> dimitern: which I didn't before
<voidspace> unblocked!
<voidspace> dimitern: TheMue: thanks guys
<dimitern> voidspace, \o/
<voidspace> dimitern: need to confirm it works and that it fixes the bug
<voidspace> but progress
<dimitern> voidspace, :) yep, as usual
<TheMue> voidspace: great
 * dimitern needs to step out for ~45m
<perrito666> sinzui: ping
<sinzui> hi perrito666
<perrito666> good morning sir
<perrito666> sinzui: I was toying with patches yesterday, trying to run ha test and fix the issue
<perrito666> I noticed two things:
<perrito666> one, I often get an error which looks a lot like a race condition where the test fails because it tries to do something while still upgrading
<perrito666> did this happen to you?
<perrito666> the other one is, while checking past runs on jenkins I noticed that there is a not very consistent set of blue and red dots, has this test ever been consistently passing?
<sinzui> perrito666, yes, and yes
<perrito666> sinzui: sorry If I am not being very clear, I woke up a bit dizzy
<sinzui> The test doesn't pass on the first try every time; CI will accept a pass if it passes within 3 tries
<sinzui> perrito666, It usually fails because of the upgrade error, but you may see cases in the ha-backup-restore tests where HA could not be achieved to start the backup :(
<perrito666> sinzui: I got those too
<sinzui> perrito666, and if juju and clouds were less brittle, we would require the tests to pass in the first try
<perrito666> actually when running locally I get that error all the time :(
<perrito666> so, last test run actually passed, could we trigger a couple more just in case?
<perrito666> isnt there a way to determine if juju is upgrading so we can poll until its ready for ha?
<mattyw> dimitern, ping?
<mattyw> dimitern, cancel that
<perrito666> sinzui: ?
<sinzui> perrito666, Yes, I can rerun the tests for the last build-revision id
<sinzui> perrito666, the tests are scheduled to run in about an hour when the current build published
<perrito666> oh, ok
<perrito666> "Nokia N900 modem driver included in mainline " <- the kernel certainly is falling a bit behing on device support
<sinzui> perrito666, oh, sorry, the publish job has already removed the last binaries. We need to wait
<sinzui> perrito666, once CI goes idle, we can retest as often as we like
<perrito666> tx sinzui
<katco> discovered https://github.com/rakyll/coop (Go concurrency examples) this morning. might be an interesting read.
<voidspace> tasdomas: ping
<tasdomas> voidspace, pong
<voidspace> tasdomas: on July 24th you had an error with local provider
<voidspace> FATAL: Could not load /lib/modules/3.13.0-32-generic/modules.dep: No such file or directory
<voidspace> tasdomas: did you discover what it was?
<tasdomas> voidspace, no - I did not
<voidspace> your mention of it in the juju-dev logs is the only reference to that particular error that google can find
<voidspace> http://irclogs.ubuntu.com/2014/07/24/%23juju-dev.txt
<voidspace> tasdomas: hah, ok
<tasdomas> voidspace, was busy and, tbh, forgot about it
<voidspace> now google will have two mentions of it I guess
<tasdomas> voidspace ;-]
<voidspace> tasdomas: but it went away for you?
<tasdomas> voidspace, haven't tried since then
<voidspace> tasdomas: ah, ok
<tasdomas> voidspace, I can take a look later today and ping you with the result
<voidspace> tasdomas: that would be cool
<voidspace> tasdomas: networking on my vm is a bit screwed up anyway, so it might just be due to that
<voidspace> I'm going to revert to a snapshot
<voidspace> hmmm, no I took my snapshot too late
<voidspace> network configuration is screwed there too
<voidspace> time to do some digging
<voidspace> helping the wife out, biab
<jam> dimitern: we're doing the ipv6 in juju, maas, and openstack. Do you want us to call you on the phone, or just a google hangout?
<rogpeppe1> any chance that someone might be able to review this straightforward PR? https://github.com/juju/charm/pull/35
<rogpeppe1> dimitern, mgz, voidspace, wwitzel3: ^
<ericsnow> natefinch: to make matters worse, the person that introduced the bug is out all week
<natefinch> rogpeppe1: must be nice to work on a repo that isn't blocked by the bot ;)
<rogpeppe1> natefinch: lol
<rogpeppe1> natefinch: our bot is down though
<dimitern> rogpeppe1, in a call, sorry
<rogpeppe1> natefinch: and i also have a couple of branches blocked by the juju-core bot
<rogpeppe1> natefinch: but yes, in theory :-)
<rogpeppe1> dimitern: np
<rogpeppe1> natefinch: so, given that you're blocked on the bot, you obviously have a few moments to spend on that review :-)
<natefinch> rogpeppe1: on the contrary, I'm trying to unblock the bot :)
<rogpeppe1> natefinch: fair enough
<dimitern> jam, is the next call happening in the same room?
<mgz> it's not the bot that's blocked...
<mgz> it's that master doesn't pass our own validation
<mgz> if you want to land things, you either say on the list that we need to land on a broken master, or we unregress
<perrito666> mm I am seeing this pop a lot 2014-08-05 13:59:02 ERROR juju.cmd supercommand.go:323 connection is shut down
<mattyw> gsamfira, thanks for making the changes - Hi, I'm Matt by the way
<gsamfira> Hi Matt :)
<gsamfira> thanks for the review
<gsamfira> much appreciated
<jcw4> rick_h__: https://github.com/juju/juju/pull/468 is a WIP initial draft of the Actions API
<jcw4> rick_h__: if you get a chance can you let me know if this is close to what you were looking for last week?
<jcw4> rick_h__: s/Actions API/& documentation/
<bodie_> if I can get a brief review on https://github.com/juju/juju/pull/448 it would be much appreciated, I can close the bug if I can land it
<bodie_> it's very simple
<mgz> bodie_: is there a reason not to use SameContents checker?
<mgz> rather than the obtainedIds expectedIds block
<bodie_> mgz, ignorance?  :/
<bodie_> didn't know I didn't have to roll my own
<mgz> 's in github.com/juju/testings/checkers/checkers.go
<mgz> -s
<bodie_> okay, great
<bodie_> mgz, I'm not sure that will work since SameContents does not regard duplicates as a problem
<bodie_> in this case, I'm counting appearances of each id
<mgz> fair enough
<perrito666> really amazon? io timeout?
<wwitzel3> that's how they do
<jcw4> perrito666: I'm wondering if it's because of the journaling changes possibly overloading mongo?
<perrito666> wwitzel3: ec2 is being especially mean with me
<jcw4> perrito666: strike that I was thinking of azure not amazon
<wwitzel3> trying to help hackedbellini get his up and running again and we are seeing this error .. http://pastebin.ubuntu.com/7962740/ .. for some reason it cannot connect to the web socket api? (local provider)
<hackedbellini> it's not the machine-0 problem anymore, this error is from the other machines (the lxc containers)
<perrito666> wwitzel3: perhaps api server is not up, check dmesg and logs for machine 0 you might get jujud-machine-0 respawning a lot trying to get up
<natefinch> 29% done on memtest. so slow
<natefinch> so far no errors
<perrito666> natefinch: welcome back
<perrito666> natefinch: 1st pass?
<natefinch> yeah, doing both sticks at once to see if anything pops up.
<natefinch> my tablet wasn't charged which is why I couldn't get on earlier... so I got the oil changed in my car while it was charging.
<perrito666> natefinch: that is slow
<perrito666> not you, I mean the memtest
<natefinch> it only started a little while ago, I got your email while I was out.
<perrito666> natefinch: anyway, if your mem is factory dual channel, the stick not broken will break soon
 * perrito666 tried to purchase 2x8sodimm today and the nearest one is 800km away and will be hard to get :p
<natefinch> 44% no errors...
<natefinch> I think it takes longer because it only tests two gigs at a time... so it has to do 8 passes total
<perrito666> you should have removed one of the banks, your result would be less certain but the test would take half the time
<perrito666> do we have concrete info on why our replica takes so much?
<natefinch> takes so much.... what?
<perrito666> natefinch: sorry I typed half of what I was thinking
<perrito666> do we have concrete information on why each replicaSet takes so long to be ready?
<natefinch> mongo.... other than that, no.
<natefinch> we should really talk to the mongo guys about it
<natefinch> 65% done. still no errors
<perrito666> natefinch: does your bug happen on heavy loads?
<natefinch> only on boot, seems to be.
<natefinch> after I reinstalled it was fine until I rebooted.... I had just installed some updates, but I don't know if that was the cause or not.
<natefinch> 85% done
<perrito666> well if the ram is broken you should see fs corruption
<perrito666> if you dont see fs corruption you might be stumbling into some odd update breakage although I guess axw and ericsnow should have also
<perrito666> you might want to check the SMART data for your hd
<perrito666> or ssd
<natefinch> smart was fine last time
<natefinch> I have RAM errors in dmesg which is why I thought it was ram....
<natefinch> tests done. all tests pass
<natefinch> well fuck
<perrito666> natefinch: what is a ram error in dmesg? I am curious
 * perrito666 remembers natefinch recommending that laptop yesterday
<natefinch> hahahahahaha
<natefinch> I'll get the message when it finishes booting
<natefinch> under a section called total RAM covered: 16334 MB
<natefinch> *BAD*gran_size: 64k  chunk_size:32M  num_reg: 10   lose cover RAM: -2M
<natefinch> a bunch of those with different sizes, but the bad ones all have negative numbers at the end. ones that don't say bad all have 0 or a positive number
<natefinch> no friggin' clue what it actually means besides "bad" and "RAM"
<perrito666> ahh I see
<perrito666> natefinch: http://velenux.wordpress.com/2014/01/
<perrito666> linux
<perrito666> natefinch: you might find a better explanation at http://my-fuzzy-logic.de/blog/index.php?/archives/41-Solving-linux-MTRR-problems.html
<gsamfira> hmm.. if you are running memtest, you should run multiple passes. We used to do 20h memtest on production servers, just to make sure the memory was fine :). Sometimes we would see errors after a few passes
<natefinch> perrito666: interesting
<natefinch> I wonder if that is my problem
<perrito666> yet, that does not seem to be the cause of your other problem
<perrito666> although it might
<perrito666> try that and then see if the rest of the machine works properly
<perrito666> and in that case you might want to return the ram :p
<natefinch> perrito666: now if I can figure out what exactly it is that site is trying to tell me to do
<wwitzel3> rogpeppe1: ping?
<natefinch> it says to look at /proc/config.gz ... but that doesn't exist
<natefinch> cat /proc/mtrr says almost the whole 16 gigs is in reg00 .... that seems bad.
<perrito666> natefinch: you most likely need to compile a kernel with such option on
<natefinch> OK, there is no way the answer to this problem is compile a friggin kernel
<perrito666> natefinch: try the one shorter
<perrito666> natefinch: dude this is linux, most problems, if poked enough, will end up having "recompile a kernel" as the answer
<perrito666> natefinch: http://askubuntu.com/questions/244473/how-and-why-should-i-specify-mtrr-gran-size-mtrr-chunk-size
<perrito666> that one seems to be more ubuntu oriented
<perrito666> the answer seems to be pretty explanatory
<natefinch> OK, how do I add that to my grub config?
<perrito666> natefinch: /etc/default/grub
<perrito666> append to GRUB_CMDLINE_LINUX_DEFAULT
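A sketch of the grub edit being described, applied to a throwaway sample file so it is safe to run anywhere; the mtrr values are placeholders only, pick a gran_size/chunk_size combination that dmesg does not flag as *BAD*:

```shell
#!/bin/sh
# Demonstrate appending mtrr sanitizer parameters to the kernel command
# line, against a sample copy of /etc/default/grub (values illustrative).
f=$(mktemp)
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > "$f"
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 mtrr_gran_size=64M mtrr_chunk_size=128M"/' "$f"
cat "$f"
rm -f "$f"
# On the real /etc/default/grub, follow with: sudo update-grub && sudo reboot
```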
<natefinch> also, how do I know what to specify?  no one seems to answer that part
<perrito666> "it appears that the error you see is reported when mtrr sanitizer is unable to choose from several options of memory layout. It should print a list of possible options next to the error message. To make the message to go away, you need to specify something lik"
<perrito666> pretty trivial problem....
<natefinch> not when there's 100 different choices and the effect is not well understood.  I presume I should pick something that has a small lose cover RAM?
 * perrito666 was being sarcastic
<natefinch> oh
<perrito666> apparently the error should come with a list of possible values
<natefinch> and if I can pick anything that doesn't say bad, why can't the computer do that?
<perrito666> I guess it is a trial and error approach
<perrito666> natefinch: well I guess you would not want your computer restarting randomly to try new ram mappings
<natefinch> there's a bunch of gran size lines mixed in with the bad ones.... I guess those are the options? it's not at all obvious
<perrito666> if you wanted an OS that did that you would use another one :p
<perrito666> natefinch: pastebin?
<natefinch> if it says num_reg 10 and I only have 7, I guess that's not a valid option?
<perrito666> natefinch: well apparently if you eff it up you will get dmesg to tell you what would be better choices
<natefinch> I don't have network on the stupid machine... I can boot into a live disk...
<natefinch> brb
<realnatefinch> perrito666, http://pastebin.ubuntu.com/7963815/
<realnatefinch> btw I have dmesg logs from the 29th where these things show up and i was able to reboot ok then
 * perrito666 looks
<realnatefinch> (running on the live USB disk I made last time... one nice thing about this having happened so recently)
<perrito666> do you have an nvidia video card?
<realnatefinch> yep
<perrito666> you might have better results if you use the closed driver
<perrito666> but that is just a side note
<realnatefinch> oh, yeah, I used to before I reinstalled
<realnatefinch> just forgot to this time
<perrito666> natefinch: could you pastebin your /etc/default/grub ?
<realnatefinch> sure
<realnatefinch> http://pastebin.ubuntu.com/7963935/
<perrito666> realnatefinch: you are using a live linux on that machine right?
<perrito666> one last thing, paste me cat /proc/mtrr
 * perrito666 doesn't like what he sees
<perrito666> if not you can blindly try with gran_size: 128M 	chunk_size: 128M 	num_reg: 7  	lose cover RAM: 206M
<perrito666> and check if there is a bios upgrade for your machine
<realnatefinch> ok
<realnatefinch> can you link me to that article again?  I'm on my machine, so don't have it in irc history here
<realnatefinch> perrito666,  ^
<realnatefinch> nevermind, google found it
<perrito666> http://my-fuzzy-logic.de/blog/index.php?/archives/41-Solving-linux-MTRR-problems.html
<perrito666> and the ubuntu summary http://askubuntu.com/questions/244473/how-and-why-should-i-specify-mtrr-gran-size-mtrr-chunk-size
<realnatefinch> it says to append that to the "boot parameters" ...... where is that?
<perrito666> realnatefinch: /etc/default/grub -> GRUB_CMDLINE_LINUX_DEFAULT
<realnatefinch> ok rebooting
<perrito666> gluck
<realnatefinch> ks
<realnatefinch> thanks
<natefinch> bah.  well, let's see what dmesg says
<perrito666> meaning?
<natefinch> meaning same problem on startup.  dmesg says same thing - please specify chunk size etc
<perrito666> natefinch: did you run update-grub?
<perrito666> i dont recall the exact command
<natefinch> it's definitely in my grub config
<natefinch> oh uh no?
<perrito666> good thing is you dont need network for that
<natefinch> there's update-grub and update-grub2.
 * natefinch reboots
<natefinch> bah
<perrito666> bah means no good luck?
<natefinch> correct
<perrito666> is that factory default ram?
<natefinch> yep
<natefinch> so the mtrr error is gone
<natefinch> but my Ethernet and Wi-Fi still terminate with status 1
<perrito666> where is that? I did not see it in the dmesg you passed
<natefinch> near the bottom, search for eth0
<perrito666> mm, I see
<perrito666> ifconfig -a shows something?
<natefinch> shows wifi and Ethernet as 0 packets and not make as UP or RUNNING
<natefinch> s/make/marked/
<natefinch> it does have a couple virtual Ethernet adapters that I don't recognize
<natefinch> at least I assume veth<garbage> means a virtual Ethernet adapter
<perrito666> natefinch: well, what you seem to have is  not a problem with the interfaces but with network manager
<natefinch> k
<perrito666> which is most likely something you installed
<perrito666> the virtual ones are most likely juju local :)
<perrito666> or virtualbox
<perrito666> v tells me that is virtualbox, bc I think juju local is lxbrt
<natefinch> I do have both those things installed
<perrito666> so we are close
<perrito666> tell me what is the content of /etc/networking/interfaces
<perrito666> sorry
<perrito666>  /etc/network/interfaces
<natefinch> I have /etc/network/interfaces.d/ which is empty but no interfaces file
<perrito666> natefinch: what version of ubuntu are you using?
<natefinch> 14.04.1
<perrito666> mm, as it comes ootb you should have /etc/network/interfaces which contains
<perrito666> # interfaces(5) file used by ifup(8) and ifdown(8)
<perrito666> auto lo
<perrito666> iface lo inet loopback
<perrito666> bc, basically your problem here, if I understand correctly is that your networking service is not starting properly
<perrito666> and network manager seems to be breaking along
<perrito666> and therefore your interfaces remain down
<perrito666> you could ifup them by hand but sadly negotiating wpa by hand is harder than negotiating an extension with a loan shark
<voidspace> perrito666: you sound like you speak from experience
<perrito666> have you tried to buy bonds from my country lately? they are cheaper than candy
<natefinch> sudo ifup eth0 says unknown interface eth0=eth0
<perrito666> natefinch: try adding what I told you to interfaces and sudo initctl restart networking
<voidspace> by sheer coincidence my /etc/network/interfaces file has vanished
<voidspace> and I'm attempting to repair networking too
 * perrito666 smells a bug
<voidspace> perrito666: there is an open bug that the local provider kills networking
<voidspace> and I'm working on it
<perrito666> lol
<voidspace> which is probably why my network is broken
<perrito666> you seem to have found the problem
<voidspace> perrito666: can you pastebin yours
<voidspace> perrito666: that's the result not the cause
<voidspace> well, it's the current cause of my broken network yes
<perrito666> voidspace: mine?
<voidspace> perrito666: /etc/network/interfaces
<voidspace> I have none whatsoever
<perrito666> voidspace: look 20 lines up
<perrito666> I copied its contents here
<voidspace> was that the sum total?
<perrito666> its 3 lines long
<voidspace> heh
<voidspace> fair enough, thanks
<perrito666> natefinch: ^ you might be on the same page there
<voidspace> now to reboot the vm and see if I get a network configuration this time
<perrito666> voidspace: you dont need to reboot the machine for that
<voidspace> and I do, yay
<voidspace> perrito666: no, I want to
<voidspace> I can manually start the network service anyway
<perrito666> voidspace: lol
<voidspace> I want to make sure that when I reboot I have it
<perrito666> natefinch: well, apparently if you create your interfaces by hand you get network back
<perrito666> mm, this is getting crowded with natefinchs
<natefinch__> hey, replacing my /etc/network/interfaces with a valid one and doing sudo ifup eth0 worked
<natefinch__> haha
<perrito666> natefinch: voidspace just explained that that is actually a bug of juju
<natefinch__> OMFG
<voidspace> and to be fair, one is more than enough...
<perrito666> we seem to be deleting interfaces from computers
<natefinch__> well shit
<voidspace> natefinch__: https://bugs.launchpad.net/juju-core/+bug/1349635
<voidspace> local provider kills networking on the host....
<perrito666> mmm, seems to be a bug in lx
<perrito666> lxc
<voidspace> I don't think so
<voidspace> see dimiter's comment further down
<perrito666> ah, no dimiter commented something
<voidspace> we screw with /etc/network/interfaces
<voidspace> and normally that's fine, because we control the machine
<perrito666> fuuuu, this happens bc local provider runs with sudo
<perrito666> ffff
<voidspace> but not for the host on local provider
<natefinch__> voidspace: that seems like a colossally stupid idea on local
<natefinch__> voidspace: it sort of seems like a terrible idea for manual provider, too
<voidspace> natefinch__: so my fix, which I'm struggling to test due to screwed networking, is precisely "don't do that on local"
<voidspace> natefinch__: we need to do some of it for a maas fix for legacy reasons
<natefinch__> can someone post their /etc/network/interfaces for me?
<voidspace> that maybe have already gone away
<perrito666> we should at least run local provider machine-0 on a chroot
<natefinch__> so I can update mine to what it's supposed to look like
<perrito666> if we cannot in a lxc
<voidspace> perrito666: it has to be a chroot that can create lxc containers
<voidspace> not sure if that's possible
<perrito666> voidspace: mm, it should with the right config
<natefinch__> perrito666: that certainly seems better than letting it muck with the user's configuration
<perrito666> # interfaces(5) file used by ifup(8) and ifdown(8)
<perrito666> auto lo
<perrito666> iface lo inet loopback
<perrito666> natefinch__: ^
<perrito666> those 3 lines
<perrito666> you can skip the first one
<natefinch__> perrito666: I had to add stuff about eth0 to get ifup to let me turn on eth0
<perrito666> natefinch__: if you add that and then initctl restart networking (or start) and then the same with network manager you will most likely get it running or you will need to reboot
<voidspace> natefinch__: just that one worked for me
<natefinch__> perrito666: ok
<perrito666> natefinch__: for future reference: when the error in dmesg says exited with 1 it's a service; when it's the actual device it throws an ugly memory dump
<natefinch__> ok
<natefinch__> rebooting
<natefinch> ahh better
<natefinch> well, there goes that workday
<voidspace> heh
<voidspace> heh
<voidspace> oops
<natefinch> you would think something somewhere would just say "hey, idiot, you're missing /etc/network/interfaces!"
<voidspace> that last up arrow and enter was meant for the terminal
<perrito666> I think that the chroot solution deserves further research
<voidspace> natefinch: yeah, that would have been really nice
<natefinch> like, seriously
<perrito666> natefinch: it does
<natefinch> where?
<perrito666> it says the networking service, whose only conf file is /etc/network/interfaces, exited with error 1
<perrito666> :p
<natefinch> :P
<natefinch> I'm filing a bug against ubuntuy
<natefinch> ubuntu
<perrito666> which usually means that interfaces has syntax errors
<natefinch> but it doesn't say that
<perrito666> natefinch: I said, It means
<perrito666> :p
<natefinch> it just says exited with 1
<natefinch> not "config is missing" or "config has errors"
<natefinch> that's effectively like "something went wrong"
<natefinch> gotta run
<menn0> perrito666: ping
<perrito666> menn0: pong
<menn0> perrito666: I wanted to ask you about the "update in progress" messages you were seeing while investigating bug 1351030
<menn0> perrito666: quick hangout?
<perrito666> menn0: sure, let me fetch my headphones
<perrito666> btw, isnt it like the middle of the night for you?
<menn0> perrito666: nope it's just after 9am
<voidspace> menn0: morning
<perrito666> menn0: odd, I usually use your presence as the mark that I have been here too long :p
<menn0> voidspace: howdy voidspace
<voidspace> o/
<menn0> perrito666: maybe you have :)
<perrito666> menn0: https://plus.google.com/hangouts/_/canonical.com/moonstone
<voidspace> midnight here, but I've got insomnia so working late and sleeping late
<perrito666> well I start around 7 and it's 18, ok, perhaps
<voidspace> machine-1: 2014-08-05 21:27:06 ERROR juju.worker runner.go:219 exited "networker": command "lsmod | grep -q 8021q || modprobe 8021q" failed (code: 1, stdout: , stderr: FATAL: Could not load /lib/modules/3.13.0-32-generic/modules.dep: No such file or directory
<perrito666> sinzui: ping
<perrito666> and since I am here menn0 could you answer my email to juju-dev with the bit that says how to know if juju is upgrading?
<menn0> perrito666: will do
<perrito666> thank you
<perrito666> with that I can ask sinzui to add the wait in the tests :p
<voidspace> I give up
<voidspace> for now
<voidspace> goodnight all
<perrito666> voidspace: dont forget to comment on the bug and un-assign it
<perrito666> menn0_: all yours. Added a helper script and decent instructions on how to reproduce, and since we are at it, a few of my not so useless conclusions
<perrito666> cheers
<perrito666> https://bugs.launchpad.net/juju-core/+bug/1351030
<menn0_> perrito666: thanks very much. that's very helpful.
#juju-dev 2014-08-06
<wwitzel3> perrito666: you still around?
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1353239
<davecheney> menn0_: i'm going to try again
<davecheney> but this time run my mongo in the foreground
<menn0_> davecheney: good idea
<menn0_> I'm getting closer to finding the culprit rev
<davecheney> menn0_: i'm not sure it's our fault
<davecheney> mongo appears to be shitting itself
<davecheney> we're still using 2.4.x on precise
<menn0_> I think it's triggered by the way we're setting up mongo though
<davecheney> sure
<davecheney> no argument there
<menn0_> whether what we're doing is reasonable or not is another matter
<davecheney> but having to work around fragile software does not lead to robust systems
<menn0_> sure but let's narrow this down first before jumping to conclusions :)
<davecheney> reasons to hate upstart number 1<<72
<davecheney> ubuntu@ip-10-251-35-4:~$ service juju-db
<davecheney> juju-db: unrecognized service
<davecheney> ubuntu@ip-10-251-35-4:~$ service juju-db status
<davecheney> juju-db start/running, process 10783
<menn0> davecheney: "service" isn't upstart is it? You want: "status <service-name>"
<davecheney> why does the 2 arg form fail but the 3 arg version work?
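[editor's note: a toy model of menn0's answer. This stub is an assumption-laden sketch, NOT the real /usr/sbin/service wrapper: with no action, the wrapper falls back to /etc/init.d/<job>, which an upstart-only job like juju-db doesn't ship; with a known action it can forward to upstart's native "status juju-db".]

```shell
# Toy model of the wrapper's two code paths (details are an assumption).
service_stub() {
  job=$1; action=$2
  if [ -z "$action" ]; then
    # no /etc/init.d/$job exists to fall back to
    echo "$job: unrecognized service"
    return 1
  fi
  # with an action, forward to upstart, i.e. "status juju-db"
  echo "$job start/running, process 10783"
}

service_stub juju-db || true    # prints: juju-db: unrecognized service
service_stub juju-db status     # prints: juju-db start/running, process 10783
```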
<davecheney> menn0: i'm seeing the same mongo connection reauthenticating itself over and over again
<menn0> davecheney: yep I think I see the same
<davecheney> that sounds wrong
 * davecheney logs bug
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1353275
<davecheney> hmmm, Wed Aug  6 05:45:38.641 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
<davecheney> Wed Aug  6 05:45:38.641 [rsMgr] replSet info electSelf 1
<davecheney> Wed Aug  6 05:45:39.574 [conn2] end connection 127.0.0.1:32995 (1 connection now open)
<davecheney> menn0: even when mongo doesn't shit itself
<davecheney> the environment is still unusable
<davecheney> the api server is offline
<menn0> yes, I've seen that
<menn0> in fact I have an env doing exactly that now
<menn0> I'm beginning to think this is culprit:
<menn0> commit 62e172632c3e9d8496805ed5223f9f4acc28986a
<menn0> Merge: 8275fa4 ce0840e
<menn0> Author: Juju bot <jujubot@users.noreply.github.com>
<menn0> Date:   Thu Jul 31 12:53:28 2014 +0100
<menn0>     Merge pull request #416 from axw/mongo-journalenabled
<menn0>     
<menn0>     Only set Safe.J (with mgo.SetSafe) if journaling is enabled
<menn0>     
<menn0>     Mongo 2.6 errors if you attempt to set Safe.J when journaling is disabled. We introduce a new fun
<davecheney> menn0: if you want to push the revert button
<davecheney> i'll LGTM that
<davecheney> i think it's time for some drastic action
<menn0> I'm going to try a local build first without that rev and see what happens
<davecheney> oh look, https://bugs.launchpad.net/mgo/+bug/1340275
<menn0_> davecheney: interesting...
<davecheney> menn0_: more information
<davecheney> machines 1, and 2
<davecheney> the new replicas
<davecheney> never start
<davecheney> sorry
<davecheney> the machine gets through cloud init
<davecheney> but they can never connect to the original api server
<davecheney> so they can never get their configuration and find they are running a state server job
 * menn0_ nods
<menn0_> I'm not sure it always happens that way but I have seen something like that
<menn0_> I'm trying now with the Safe.J change removed
<menn0_> davecheney: how much longer are you going to be around
<menn0_> ?
<menn0_> I need to go eat with the family
<davecheney> menn0_: i'm out pretty much now
<davecheney> need to get into the city for that meetup
<menn0_> ok
<menn0_> davecheney: that ensure-avail just finished and 3 state servers are up and "started"
<menn0_> things aren't happy in the logs but removing that revision seems to have helped the immediate issue
<menn0_> i'll send an email a bit later on
<davecheney> menn0_: what do you think ?
<davecheney> do you want to propose it ?
<menn0_> I think I'll email people and wait for some feedback
<davecheney> ok
<menn0_> davecheney: update... so things have now settled down post "juju ensure-availability" and the env looks healthy
<menn0_> I think it's probably that rev
<davecheney> menn0_: my vote is to roll that rev back
<davecheney> you have my LGTM if you want to land that change
<voidspace> morning all
<menn0_> voidspace: mornin'
 * menn0_ is about to upgrade his router firmware to see if it helps his flaky link
<voidspace> menn0_: morning :-)
<dimitern> morning
<voidspace> dimitern: morning
<voidspace> m1k43l
<voidspace> dimitern: ping
<voidspace> dimitern: if a machine is "pending" (instance-id and agent-state for the unit) how do I tell what state it's actually in
<voidspace> i.e. what it's waiting for / where it has got to
<voidspace> dimitern: this is local provider, and I suspect it's downloading the lxc image
<voidspace> but I'd like to *know*
<dimitern> in a call, will get back to you sorry
<voidspace> there's nothing of use in the logs
<voidspace> dimitern: sure
<voidspace> np
<voidspace> will go make coffee
<voidspace> maybe it just needs time
<dimitern> 10x :)
<dimitern> voidspace, i'm back; so to answer your question - "pending" means the machine agent hasn't yet started (or at least hasn't logged into the api to start the agent-alive presence pinger)
<voidspace> dimitern: right, and I suspect that the vm (in this case an lxc container) is not up
<voidspace> dimitern: how do I *tell*
<voidspace> dimitern: it's still pending
<dimitern> voidspace, ah, debugging why an lxc container is not starting is fun :)
<voidspace> is it even possible...
<dimitern> voidspace, i'd try a couple of things: 1) set logging-config in env.yaml for that environment to <root>=DEBUG
<voidspace> ok, cool
<dimitern> voidspace, oops sorry, <root>=TRACE
<voidspace> good start
<voidspace> right
<dimitern> voidspace, trace logs for all lxc ops should be in the MA log
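[editor's note: dimitern's suggestion, gathered into a hypothetical environments.yaml stanza for the local provider; only the relevant key is shown and everything unrelated is elided.]

```yaml
# Sketch only: bump all juju loggers to TRACE for this environment.
environments:
  local:
    type: local
    logging-config: <root>=TRACE
```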
<jam1> dimitern: I rejoined juju-networking
<dimitern> voidspace, then, you can check $ sudo lxc-ls --fancy on the machine to see if lxc is running
<voidspace> so machine-1.log
<jam1> voidspace: I think if you find things like "downloading lxc images" aren't in the logs, but are there under TRACE, we need to bump those up to DEBUG
<jam1> at least, if not INFO
<voidspace> ah, I was using lxc-ls but without sudo
<dimitern> voidspace, and in /var/lib/lxc and /var/log/lxc (as root) you can find useful logs sometimes
<jam1> something that might take 10 min should be at INFO level, certainly
<jam1> though I think LXC itself downloads images outside of our control
<voidspace> heh, lots of fun
<voidspace> I *think* that's where it's stuck - but hard to tell
<voidspace> ah, cool
<voidspace> so lxc-ls --fancy
<voidspace> tells me that the precise-template exists (which is how I was able to start a precise vm)
<voidspace> but there's no trusty template
<voidspace> so I assume (hope) that's still downloading
<voidspace> precious little network activity though
<voidspace> I will restart with trace logging
<TheMue> morning
<voidspace> TheMue: morning
<TheMue> voidspace: just seen you're playing with lxc
<voidspace> TheMue: heh
<voidspace> TheMue: unavoidable, I'm working with local provider :-/
<TheMue> voidspace: hehe
<TheMue> voidspace: I'm in contact with Serge and Stéphane about IPv6 and LXC
<voidspace> ah, cool
<jam1> morning TheMue, welcome back
<jam1> voidspace:
<jam1> if you're restarting in the middle
<jam1> then likely there are filesystem locks that are stale now
<TheMue> jam1: morning to Nuremberg
 * TheMue has to fix his client to do a better signaling
<voidspace> jam1: so how would I check / resolve?
<voidspace> jam1: a reboot in between should be sufficient to release locks, right?
<dimitern> voidspace, what are you seeing?
<jam1> voidspace: no
<voidspace> dimitern: machine never starts, no trusty template downloaded
<jam1> voidspace: there is a directory that gets created as a lock, that persists
<dimitern> voidspace, ah, bugger
<jam1> so destroy-environment while it is held doesn't clean it up
<voidspace> so that's probably the problem
<jam1> voidspace: IIRC, there is a plugin "juju-clean" that nukes everything that needs nuking
<dimitern> voidspace, I have a nifty little snippet to obliterate a local env + all lxc artifacts
<voidspace> is this an lxc lock or a juju lock?
<jam1> I've fixed it in the past by debugging, finding the file it was waiting for, and selectively deleting stuff
<voidspace> dimitern: cool
<jam1> voidspace: creation of the "precise-template" or trusty is a juju lock
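[editor's note: the lock jam1 describes is a directory on disk, taken with an atomic mkdir, so a killed run leaves it held and destroy-environment doesn't clean it up. A safe, self-contained simulation of that failure mode; the real lock lives under juju's data dir (exact path is an assumption), so this uses a temp dir instead.]

```shell
# Simulate juju's mkdir-based template lock and the stale-lock recovery.
lockroot=$(mktemp -d)
lock="$lockroot/precise-template.lock"

mkdir "$lock"                      # a crashed bootstrap left this behind
if mkdir "$lock" 2>/dev/null; then
  result="acquired"
else
  rmdir "$lock"                    # roughly what juju-clean does for us
  mkdir "$lock"
  result="stale lock removed, then acquired"
fi
echo "$result"                     # prints: stale lock removed, then acquired
rm -rf "$lockroot"
```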
<dimitern> voidspace, http://blog.naydenov.net/2014/03/remove-juju-local-environment-cleanly/ - give it a try to see if it'll help, or just run some of the steps
<jam1> voidspace: https://github.com/juju/plugins
<jam1> dimitern: ^^
<jam1> juju-clean is in there, IIRC
<voidspace> jam1: thanks, awesome
<jam1> its a pretty big hammer
<jam1> and I'd like us to never need it, so file bugs when you do need it
<jam1> I wish it would report what it actually had to clean up, but anyway
<dimitern> :) nice
<dimitern> thanks jam1
<voidspace> lunch
<voidspace> jam1: the juju clean plugin solved my problem and unblocked me.
<voidspace> I don't think I can usefully file a bug though as I don't know what it solved :-)
<voidspace> I suspect that it was stale filesystem locks from a failed / stalled template download due to my crappy network
<voidspace> but I don't know
<voidspace> and now I have new and weirder issues, but at least the lxc container starts ok and I have both precise and trusty templates
<dimitern> voidspace, it might be the templates weren't downloaded properly
<voidspace> dimitern: right
<voidspace> now I have working containers, but install hooks seem to fail consistently - but succeed when retried
<voidspace> digging in
<dimitern> voidspace, can you paste some logs?
<dimitern> from the failing unit
<voidspace> dimitern: in a bit, I just blew everything away to try again :-)
<dimitern> :) alright
<voidspace> the failure mode is consistent
<dimitern> voidspace, which charm are you trying?
<voidspace> command "lsmod | grep -q 8021q || modprobe 8021q" failed
<dimitern> voidspace, right!
<voidspace> machine-1: 2014-08-06 09:19:28 ERROR juju.worker runner.go:219 exited "networker": command "lsmod | grep -q 8021q || modprobe 8021q" failed (code: 1, stdout: , stderr: modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/3.13.0-32-generic/modules.dep.bin'
<dimitern> voidspace, somebody reported that recently
<voidspace> dimitern: that's the mysql one
<voidspace> dimitern: I reported it yesterday
<dimitern> voidspace, :) ah
<dimitern> voidspace, the problem with this could be solved if you modprobe 8021q on the host before starting the container, hopefully
<voidspace> ah
<voidspace> maybe
<voidspace> modprobe is looking in the wrong place inside the container
<voidspace> it's using the host path
<voidspace> so maybe that would solve it
<voidspace> dimitern: juju resolved --retry
<voidspace> dimitern: seems to consistently fix it
<dimitern> voidspace, from inside the container, you can't modprobe stuff, but if it was done on the host, lsmod will list it - that's the intention of the script
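[editor's note: the guard dimitern describes is the one-liner from the error above, "lsmod | grep -q 8021q || modprobe 8021q": only try to load the VLAN module if it isn't already listed. A stubbed sketch; the fake_* functions are illustrations standing in for the real lsmod/modprobe, which need root and a host kernel.]

```shell
# ensure_module MOD LSMOD_CMD MODPROBE_CMD - the || guard, factored so
# the commands can be stubbed for a safe demonstration.
ensure_module() {
  mod=$1; lsmod_cmd=$2; modprobe_cmd=$3
  $lsmod_cmd | grep -q "^$mod" || $modprobe_cmd "$mod"
}

fake_lsmod_loaded() { echo "8021q 24384 0"; }   # host already modprobed it
fake_lsmod_empty()  { echo "dm_crypt 1 0"; }    # fresh container's view
fake_modprobe()     { echo "modprobe $1"; }     # would fail in a container

ensure_module 8021q fake_lsmod_loaded fake_modprobe   # prints nothing
ensure_module 8021q fake_lsmod_empty  fake_modprobe   # prints: modprobe 8021q
```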
<voidspace> dimitern: ok
<voidspace> so maybe that's fixed
<voidspace> just waiting for wordpress to come up so I can test the relation
<dimitern> voidspace, i think I see the problem
<voidspace> dimitern: do you want me to file a bug for this?
<voidspace> in the meantime
<voidspace> moar coffeez
<dimitern> voidspace, the hook is not really failing; maybe the uniter (or the whole unit agent) gets killed and restarted, but because the networker already ran the lsmod script, retry "solves" it
<voidspace> heh
<voidspace> sounds plausible
<dimitern> voidspace, yes, please, and attach the unit + machine logs
<dimitern> voidspace, standup?
<voidspace> dimitern: on my way
<voidspace> dimitern: it's finally letting me in
<voidspace> my crappy network
<dimitern> voidspace, rejoin?
<perrito666> so there are bad luck days and then there are days where you come to work on a borrowed office and forget the power brick for the computer....
<rogpeppe1> any chance of a review of a charm package PR, please? https://github.com/juju/charm/pull/36
<rogpeppe1> perrito666: :-(
<perrito666> https://github.com/juju/utils/pull/18
<perrito666> mm, menno and dave left already
<TheMue> perrito666: +1
<perrito666> TheMue: thanks, but for what exactly?
<TheMue> perrito666: your PR
<TheMue> ;)
<perrito666> ah lol sorry
<perrito666> thank you
<TheMue> perrito666: yw
<TheMue> dimitern: btw, will ping you a bit later. a friend just called and asked if he can grab a cup of coffee :)
<dimitern> TheMue, no worries :)
<perrito666> nooo, I just discovered the coolest thing from github
<TheMue> dimitern: I'll also invite you to the current doc/collection of notes on Google Docs
<perrito666> you can convert a pr into a patch or diff
<perrito666> by just adding .diff or .patch to the pr
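[editor's note: the trick perrito666 found works on any GitHub pull request URL.]

```shell
# GitHub serves every pull request as a plain patch or diff: just
# append .patch or .diff to the PR's URL.
pr_url="https://github.com/juju/utils/pull/18"
patch_url="${pr_url}.patch"
echo "$patch_url"    # prints: https://github.com/juju/utils/pull/18.patch
# Fetching and applying needs the network, so it stays commented here:
# curl -sL "$patch_url" | git am
```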
<dimitern> TheMue, sure, can you send me a link?
<TheMue> dimitern: you should have received the google mail
<dimitern> TheMue, ah, I've just seen it, thanks!
 * TheMue is afk
<voidspace>  dimitern: I filed that bug by the way https://bugs.launchpad.net/juju-core/+bug/1353443
<voidspace> tasdomas: https://bugs.launchpad.net/juju-core/+bug/1353443
<dimitern> voidspace, cheers!
<voidspace> dimitern: and I can confirm, with this branch I can use local provider and it doesn't kill my networking
<voidspace> https://github.com/voidspace/juju/tree/network-interfaces
<voidspace> dimitern: I need to work on tests, and maybe prettify it - but confirmation that this works for MAAS would be good
<dimitern> voidspace, great! I'll get to it soon to test on MAAS
<voidspace> dimitern: cool, let me know please
<dimitern> voidspace, will do
<natefinch> voidspace: huzzah for not killing networking
<voidspace> natefinch: heh, well hopefully this fix works for MAAS
<voidspace> natefinch: but yeah, the fix is basically "don't screw with networking on the local provider host"
<perrito666> natefinch: lol
 * perrito666 tries menn0's and davecheney's patch without success
<perrito666> sinzui: any clue when will that land on jenkins?
<sinzui> perrito666, I see something being tested now http://juju-ci.vapour.ws:8080/
<sinzui> perrito666, wallyworld's  maas-hostname-address is being tested now
<voidspace> natefinch: ping
<natefinch> voidspace: what's up?
<perrito666> I did not understand if menn0s patch actually fixes the bug :|
<wwitzel3> rogpeppe1: ping
<rogpeppe1> wwitzel3: pong
<wwitzel3> rogpeppe1: re your mailing list reply .. what collection in mongo do I need to put the machine-0 peer group member?
 * rogpeppe1 looks
<wwitzel3> rogpeppe1: well, I guess I could look at what collection the code is reading from ..
<wwitzel3> rogpeppe1: no point in making you do my footwork :P
<rogpeppe1> wwitzel3: :-)
<wwitzel3> rogpeppe1: was just checking if you knew without looking :)
<rogpeppe1> wwitzel3: i can never remember state collection names...
<rogpeppe1> wwitzel3: you'll need to know the field names too of course
<jam1> dimitern: hey, you brought up that we actually get 2 IPv6 addresses as the common case, can you clarify what those are?
<jam1> I *think* they are (1) Link Local (which isn't really routable from the outside) and (2) the actually routable one
<jam1> is that true?
<jam1> in which case, we can really just ignore (1), and we are down to just one IPv6 address
<dimitern> jam1, true
<jam1> dimitern: and LinkLocal is a known IPv6 prefix, right? So it is trivial to filter out in the instance poller
<dimitern> jam1, the link-local is required by the IPv6 spec to exist; obviously ::1 will exist as well, and a private (or Unique Local Address in IPv6 terms) might be there or not (in the latter case we can't usefully do anything ofc)
<dimitern> jam1, it is, and we're actually doing that already
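[editor's note: link-local IPv6 addresses occupy fe80::/10 (first group fe80 through febf), so a textual prefix check is enough for a rough filter like the instance poller's. A sketch only; real code would parse the address properly, e.g. Go's net.IP.IsLinkLocalUnicast.]

```shell
# Rough link-local check on the textual form of an IPv6 address.
is_link_local() {
  case "$1" in
    fe8?:*|fe9?:*|fea?:*|feb?:*) return 0 ;;   # fe80::/10
    *) return 1 ;;
  esac
}

is_link_local "fe80::d6be:d9ff:fe13:1" && echo "link-local: skip it"
is_link_local "2001:db8::1"            || echo "routable: keep it"
```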
<jam1> dimitern: can we usefully detect that it is a "private" address that we can't make use of ?
<natefinch> sinzui: do we have a tag for "blocks CI"?  if not, can we make one?
<dimitern> jam1, so my comment was more about "we need to be aware of this, if we're aiming to model and display all addresses"
<dimitern> jam1, if you refer to a ULA - no, we can't unless we try
<sinzui> natefinch, "ci regression" are the tags in combination
<dimitern> jam1, otherwise, any valid IPv6 can be usable locally in the cloud of course
<natefinch> sinzui: how do I search for two tags at the same time?
<natefinch> nevermind, I see the tags field in advanced search
<natefinch> sinzui: there's a bug marked medium that is tagged ci regression.... I presume that's not blocking CI
<natefinch> http://goo.gl/URSQcV
<sinzui> natefinch, advanced search has an option for both, but right now only triaged and in-progress critical ci regressions block
<sinzui> natefinch, This is the smallest query https://bugs.launchpad.net/juju-core/+bugs?field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.importance%3Alist=CRITICAL&field.tag=ci+regression+&field.tags_combinator=ALL
<abentley> sinzui: I'm catching up on email now.  It seems a fair few things happened when I was gone.  Should we have a chat?
<sinzui> abentley, I cannot now
<sinzui> abentley, I would like to chat, maybe I can steal away from a meeting
<abentley> sinzui: Sure.  I'm around all day.
<natefinch> sinzui: thanks for the link
<sinzui> natefinch, I will change reports.vapour.ws to show that information. Lp requires separate queries for trunk and stable
<perrito666> sinzui: natefinch arosales I believe one of you is the person who can give me azure credentials
<sinzui> perrito666, arosales can link your azure account to the canonical account. I can help you run tests as CI
 * perrito666 thinks he does not have an azure account at all
<natefinch> perrito666: I used to have an azure account, but then my last company decided to stop paying for my MSDN subscription 10 months after I left.  I guess they don't love me any more.
<sinzui> perrito666, you have mail
<perrito666> sinzui: yes I do :p
<sinzui> natefinch, you can read environments.yaml in cloud-city to run as CI
<natefinch> sinzui: thanks
<perrito666> sinzui: how likely am I to break things with the data you just passed to me?
<natefinch> wwitzel3:  let's do our 1:1 after the standup, if that's ok?
<wwitzel3> natefinch: that's fine
<natefinch> mup: help
<mup> natefinch: Run "help <cmdname>" for details on: bug, echo, help, infer, poke, run, sendraw, sms
<natefinch> mup: help sms
<mup> natefinch: sms <nick> <message ...> — Sends an SMS message.
<mup> natefinch: The configured LDAP directory is queried for a person with the provided IRC nick ("mozillaNickname") and a phone ("mobile") in international format (+NN...). The message sender must also be registered in the LDAP directory with the IRC nick in use.
<natefinch> mup help poke
<natefinch> mup: help poke
<mup> natefinch: poke <query ...> — Searches people in the LDAP directory.
<mup> natefinch: The provided query will be searched for as an exact IRC nick ("mozillaNickname") or part of a name ("cn").
<natefinch> mup: poke Finch
<mup> natefinch: Plugin "ldap" is not enabled here.
<natefinch> doh
<natefinch> mup: help infer
<mup> natefinch: infer [-all] <query ...> — Queries the WolframAlpha engine.
<mup> natefinch: If -all is provided, all known information about the query is displayed rather than just the primary results.
<natefinch> mup: infer time
<mup> natefinch: 10:03:44 am EDT Wednesday, August 6, 2014.
<natefinch> mup: sms natefinch boo!
<mup> natefinch: Plugin "aql" is not enabled here.
<ericsnow> natefinch: getting cozy with mup, eh?
<ericsnow> :)
<natefinch> ericsnow: just read gustavo's email, so figured I'd try it out.  Evidently it's good I did.
<ericsnow> natefinch: plugins are overrated <wink>
<dpb1> Hi -- when juju clones an LXC template I notice the cloud-init gets run again.  How does that happen?
<ericsnow> here's a fix for (at least most of, if not all of) issue #1351019: https://github.com/juju/juju/pull/476
<wwitzel3> sinzui: I noticed that manual for precise and trusty had clean runs this morning, is bug #1347715 still an issue?
<sinzui> wwitzel3, Those were 1.20
<sinzui> wwitzel3, http://juju-ci.vapour.ws:8080/job/manual-deploy-precise-amd64/ shows that the blues are 1.20 and the reds are master
 * sinzui adds task to make reports/jenkins be clear about what is under test
<wwitzel3> sinzui: ahh ok, I'm having trouble replicating the error locally with master. I manually started up an EC2 instance on AWS, configured the manual provider with its host, then I've done juju bootstrap .. and I'm not getting any of the SSH errors.
<sinzui> wwitzel3, Have you set authorized-keys or authorized-key-path. CI is always stating which key to use
 * sinzui thinks about the test
<dimitern> voidspace, so far my testing with the local provider and your branch looks fine (had some trouble convincing it to run at all at first)
<wwitzel3> sinzui: no, I will set that in the config and try again .. does it set both?
<dimitern> voidspace, unfortunately, I won't be able to do the MAAS test today, as I'll need to go out soon, but I'll do it tomorrow morning, if that's ok
<dimitern> voidspace, it seems the install hook errors you're getting are happening to me as well, and in 2 cases: 1) when trying parallel deployments (i.e. deploy wordpress, then deploy mysql without waiting for the first to install and start), 2) with each lxc container after the first one
<dimitern> both of these are related to the lxc filesystem cloning or whatever, due to "Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?" (i.e. apt-get fails on one container, because the lock is held by another container running apt-get or the host itself)
<mattyw> hi folks, is someone able to answer some questions about MESS for me?
<jrwren> dimitern: each container should have its own /var/lib/dpkg, shouldn't it?
<dimitern> jrwren, I've guessed so, but something apparently changed recently
<dimitern> ..and it looks troubling
<sinzui> wwitzel3, CI ran something like this to create the instance
<sinzui> euca-run-instances -k id_rsa -t m1.large -g my-manual-test ami-36aa4d5e
<mattyw> rogpeppe1, ping?
<sinzui> wwitzel3, ^ change id_rsa to the key you have in ec2. That might help reproduce the ec2 setup
<bodie_> pinging fwereade re his concerns on pr 415 -- addressed or commented
<wwitzel3> sinzui: ok, thanks
<voidspace> dimitern: how odd
<voidspace> dimitern: are you seeing that on trunk?
<voidspace> dimitern: or with my branch
<voidspace> it seems unlikely that my branch causes that
<natefinch> wwitzel3: ericsnow: perrito666:  exceedingly late standup?
<wwitzel3> natefinch: oops, sorry
<ericsnow> natefinch: hangouts is flaking out on me
<wwitzel3> omw
<perrito666> natefinch: sorry going there
<jam1> mgz: poke
<voidspace> dimitern: I have to go out soon as well (~20minutes), but I'll be working again later tonight
<dimitern> voidspace, I guess trunk is the same as your branch in this case (no local-provider-specific lxc changes); maybe it's just my env config
<voidspace> dimitern: maybe - I don't think that's the *same* error I was seeing
<rogpeppe1> mattyw: pong
<mattyw> rogpeppe1, do you know much about the potential multi environment state server stuff? I have a specific question about how it might or might not affect the agent on the state server
<rogpeppe1> mattyw: some, but i'm not directly involved
<mattyw> rogpeppe1, do you know who is?
<rogpeppe1> mattyw: in the implementation of the multi-tenant state server
<rogpeppe1> mattyw: no, sorry
<voidspace> mattyw: I believe that would be thumper's team
<voidspace> mattyw: natefinch was involved in the planning of that feature - but has now passed it on I believe
<mattyw> voidspace, ok great thanks
<mattyw> rogpeppe1, multi tenant is where the state server might be split across machines?
<rogpeppe1> mattyw: no, where a single state server can serve several environments
<rogpeppe1> mattyw: the state server itself can always be split over multiple machines (HA)
<rogpeppe1> pwd
<mattyw> rogpeppe1, that's what I was going to ask - I thought it was the same as ha
<voidspace> rogpeppe1: C:\Documents and Settings
<rogpeppe1> mattyw: nope. entirely orthogonal.
<rogpeppe1> rofl
<alexisb> natefinch, ping
<mattyw> voidspace, nice
<natefinch> alexisb: what is up, yo?
<natefinch> ericsnow: reboot, it always fixes things (or deletes your networking config.... one of those two)
<wwitzel3> ericsnow: did you try turning it off and back on again? ;)
<ericsnow> natefinch: wwitzel3: gee, thanks ;)
<fwereade> natefinch, ping
<fwereade> natefinch, do you have a rough developer-weeks estimate for remaining time on backup/restore
<perrito666> fwereade: he is afk
<natefinch> back
<natefinch> wwitzel3: should I reassign the manual provider bug to you?
<jcw4> I have a couple of minor pull requests waiting to land that are just test changes that will isolate a handful of tests from the users .bashrc better
<jcw4> Any objections to me JFDI'ing those PR's since they're approved and they would only contribute to more stable tests, and don't change non-test code?
<natefinch> jcw4: show me the PRs?
<jcw4> natefinch: https://github.com/juju/juju/pull/450
<jcw4> natefinch: https://github.com/juju/juju/pull/454
<jcw4> natefinch: re-reading 450 it does change non-test code
<jcw4> so I'm fine with that one waiting...
<perrito666> aghh lost sinzui again
<mgz> jcw4: I'm sending pr454 through now
<jcw4> mgz: tx!
<mgz> 450 also seems fine
<wwitzel3> natefinch: I will pick it up, I'm in there now anyway
<jcw4> yeah trivial change too.  Thanks mgz and natefinch
<wwitzel3> though I can't replicate it even using the euca command sinzui gave me.
<perrito666> bbl ppl
<mattyw> calling it a day folks, night!
<natefinch> phew... electrician and bee inspector showed up at the same time
<mgz> electric bees!
<natefinch> I do have an electric fence around the bees, but that's just coincidental :)
<perrito666> natefinch: what is a bee inspector_
 * perrito666 has the strangest bug: no browser will start
<wwitzel3> perrito666: I've had that before
<wwitzel3> perrito666: I never figured it out, just ended up logout/login
<perrito666> lol if I tell you what it is you will laugh at me
<perrito666> browsers were starting up on the external screen and the monitor was off
<wwitzel3> lol
<wwitzel3> you are right
<natefinch> haha
<natefinch> I've had that happen
<natefinch> What's worse is when they open up completely off the screen top or bottom because you've changed resolutions or something
<perrito666> natefinch: I use all external screen apps full screen to avoid that
<perrito666> (as in f11)
<natefinch> perrito666: a bee inspector is someone paid by the state to go around to all the known beehives in his county and check them for disease, mites, etc.
<perrito666> wow you guys really have the state in your lives, our state only takes money and leaves us alone
<natefinch> perrito666: not all states in the US have inspectors, but some do.  Bees are pretty important for agriculture, so we try to make sure some idiot doesn't spread disease and screw everything up
<perrito666> natefinch: that makes sense, I dont think we have control for bees here, at least not for small producers
<perrito666> sinzui: are you around?
<natefinch> mup: infer time in Nuremburg
<mup> natefinch: Cannot infer much out of this. :-(
<wwitzel3> lol
<natefinch> hrmph
<natefinch> mup: infer time
<mup> natefinch: 4:05:16 pm EDT Wednesday, August 6, 2014.
<natefinch> mup: infer time in nuremburg, germany
<mup> natefinch: Cannot infer much out of this. :-(
<natefinch> dang
<jcw4> mup: infer time UTC+0200
<mup> jcw4: 10:05:56 pm GMT+2 Wednesday, August 6, 2014.
<natefinch> it worked for gustavo in his email :/
<jcw4> teh suk
<natefinch> mup: infer timezone in nuremburg
<mup> natefinch: Cannot infer much out of this. :-(
<niemeyer> mup: infer time in nurenbERG
<mup> niemeyer: 10:06:37 pm CEST Wednesday, August 6, 2014.
<natefinch> agg frig
<jcw4> s/m/n s/u/e/
<natefinch> niemeyer: spelling gets me every time
<natefinch> google says it's nuremberg
<natefinch> anyway
<niemeyer> Not burg
<natefinch> yep
<jcw4> niemeyer: so that's mountain not city right?
<niemeyer> jcw4: What is?
<jcw4> berg vs. burg
<niemeyer> Good question, I don't know, but I'm curious now
<jcw4> niemeyer: at least that's the difference in afrikaans
<niemeyer> The real city name is actually Nürnberg
<natefinch> mup: infer time in Nürnberg
<mup> natefinch: Cannot infer much out of this. :-(
<natefinch> boo
<jcw4> heh
<niemeyer> natefinch: Ask in German
<jcw4> what does mup use? google?
<natefinch> rofl
<niemeyer> mup: help infer
<mup> niemeyer: infer [-all] <query ...> — Queries the WolframAlpha engine.
<mup> niemeyer: If -all is provided, all known information about the query is displayed rather than just the primary results.
<jcw4> mup: infer -all berg vs. burg
<mup> jcw4: Distances: distance flight time, Berg, Jamtland, Sweden to Burg 729 miles 1 hour 20 minutes, (assuming direct flight path at 550 mph) — Demographics: Berg, Jamtland, Sweden Burg, Saxony-Anhalt, Germany, population 8175 people 24364 people.
<mup> jcw4:  — Geographic properties: Berg, Jamtland, Sweden Burg, area 2384 mi^2 (square miles), average elevation 124.7 feet.
<natefinch> niemeyer: my german is umm.... somewhat rusty
<jcw4> lol
<perrito666> mup: folgern Zeit in Nürnberg
<mup> perrito666: In-com-pre-hen-si-ble-ness.
<natefinch> rofl
<perrito666> mup: infer Zeit in Nürnberg
<mup> perrito666: Cannot infer much out of this. :-(
<jcw4> mup: was ist die ziet in Nürnberg
<mup> jcw4: Roses are red, violets are blue, and I don't understand what you just said.
<natefinch> mup: infer -all Nuremberg
<mup> natefinch: Population: city population 505664 people (country rank: 14th) (2010 estimate).
<katco> niemeyer: hey, while you're here. where will the goamz lib live on github?
<katco> niemeyer: also, hello :)
<niemeyer> katco: Hey :)
<niemeyer> katco: I don't really know yet
<natefinch> oh hey, while people are paying attention, how do you specify an empty map in yaml?  Google is failing me.
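[editor's note: natefinch's question goes unanswered in channel. For the record, YAML's flow syntax spells an empty map as {} and an empty sequence as []:]

```yaml
# An empty map in YAML is {} (flow style); an empty sequence is [].
options: {}
tags: []
```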
<katco> niemeyer: ok no worries. we're just about wrapping up changes, so it would be nice to have a home for the little guy somewhat soon, but we can work around it if not :)
<jcw4> mup: infer -all all your base
<mup> jcw4: ...are belong to us., (according to the video game Zero Wing).
<katco> mup: infer \b
<mup> katco: Cannot infer much out of this. :-(
<katco> good job, mup :)
<jcw4> hehe
<jcw4> mup: infer how to represent an empty map in yaml?
<katco> haha
<mup> jcw4: Cannot infer much out of this. :-(
<katco> there we go
<katco> mup: help
<mup> katco: Run "help <cmdname>" for details on: bug, echo, help, infer, poke, run, sendraw, sms
<jcw4> oooh sms
<katco> mup: help poke
<mup> katco: poke <query ...> â Searches people in the LDAP directory.
<mup> katco: The provided query will be searched for as an exact IRC nick ("mozillaNickname") or part of a name ("cn").
<katco> mup: poke katco-
<mup> katco: Plugin "ldap" is not enabled here.
<natefinch> poke and sms don't work on freenode
<katco> ah
 * natefinch made that mistake this morning :)
<katco> natefinch: hey new site looks nice
<katco> mup: help sendraw
<mup> katco: sendraw [-account=<string>] <text ...> — Sends the provided text as a raw IRC protocol message.
<mup> katco: If an account name is not provided, it defaults to the current one.
<perrito666> sinzui: ping me when you are around
<natefinch> katco: thanks... Hugo is pretty cool, and since it's in Go, I can actually understand the code ;)
<katco> natefinch: lol
<katco> i have been thinking about switching to a static based site
<natefinch> katco: it's mostly the jekyll theme, ported to Hugo's format, and tweaked slightly by me.  I have near zero web dev / design ability... but I am really good at changing margins and colors
<katco> just using blogger right now
<katco> lol
<natefinch> er... the "hyde" jekyll theme
<katco> http://stchuxderbychix.appspot.com/ <-- old website i did in Go for my derby team. not using best practices though.
<natefinch> katco: highly recommend Hugo.  It's pretty easy to use, since it's all Go templates and pretty well thought out.
<natefinch> katco: I actually asked Steven Francia to be a committer on the repo so I could help fix bugs and land PRs and stuff
<natefinch> also, free hosting on github is pretty sweet
<katco> yeah haha
<katco> google app engine is probably better free hosting ;)
<katco> geographical redundancy for my dinky blog!
<natefinch> katco: that site looks 100 times better than anything I could produce.  Nice thing about a blog is that it's mostly text :)
<katco> natefinch: i was pretty proud of it. it aggregates data from different web services
<natefinch> Oh yeah, I remember you mentioning that. Pretty cool.
<katco> it's funny that it's a dead site, but it still has some current info
<katco> e.g. next bout looks like
<natefinch> katco: can GAE easily host a static site?  I like that I don't have to do anything but git push to github to update my site.
<katco> natefinch: yeah. git push is way easier, but they have a python script that pushes things up for you
<katco> but yeah, does static content quite nicely actually
<katco> and since anything under some huge amount of traffic is free, it's essentially free hosting :p
<katco> b/c it was designed for actual applications
<katco> but you can also run some live go code on there if you want a bit of dynamic ability. python, java, i think php now
<natefinch> yeah, that's cool
<katco> of course i don't know why anyone would want to use python /duck
 * natefinch notes katco doesn't even need to mention java or php because.... *shudder*
<katco> haha
<katco> java... ok i can do that. php... just... no. never again.
<katco> i wrote a CMS in php back in the early 00's. before it had support for classes.
<natefinch> I've never done PHP.   Java I've done.  It's ok.
<natefinch> gotta go in  a sec
<katco> java is ok albeit verbose. php to me is design by accident
<perrito666> there is this cool tool called nikola which generates your whole website which includes comments support, paging and some other goodies from rest
<natefinch> perrito666: hugo does that from markdown and Go templates
<natefinch> http://hugo.spf13.com/
<natefinch> ok, gotta go
<ericsnow> could anyone spare a few minutes for a review (the patch is small)? https://github.com/juju/juju/pull/476
<menn0> waigani: morning
<perrito666> menn0: morning
<perrito666> good findings
<menn0> perrito666: howdy
<waigani> menn0: morning
<menn0> yeah
<menn0> it took all day :)
<menn0> but I learned a lot along the way
<waigani> perrito666: morning :)
<waigani> menn0: what took all day?
<menn0> figuring out the HA problem
<perrito666> menn0: you make me feel better, i was all effing day setting those things and failing
<waigani> menn0: you solved it?
<menn0> well I found the commit that caused the problem and reverted it
<menn0> although, I've seen that the merge failed
<menn0> more to look at ...
<waigani> ah yeah, just saw your PR - nice work man!
<menn0> I need to take care of something here (screaming kids)
<menn0> back in a bit
<perrito666> menn0: your revert should break the build somewhere else though
<menn0> perrito666: sorry.
<menn0> perrito666: yeah I feared that it might
<perrito666> menn0: I was waiting on it to merge
<menn0> perrito666: I will try to understand the Safe.J change some more
<perrito666> but reading that commit's message it seems to be fixing a bug in newer versions of juju
<menn0> new versions of mongo right?
<perrito666> well apparently setting safe.j was ok even though you had no journal set
<perrito666> menn0: yes
<perrito666> but since 2....6? i think that is no longer true
<menn0> well before doing anything else I'm trying the merge again because the way it failed is one of the errors we see regularly in test runs
<perrito666> menn0: your changes got merged
<perrito666> please have an eye on jenkins
<menn0> perrito666: \o/
<menn0> perrito666: what do you mean: "please have an eye on jenkins"
<perrito666> menn0: there are a couple of CI jobs linked on the ticket
<perrito666> make sure they pass when they test your rev
<menn0> perrito666: right
<menn0> perrito666: I'm also running the CI test you were testing with (test_recovery.py --ha) manually now
<menn0> perrito666: so far it's already getting further than it did before
<menn0> perrito666: thanks for those instructions btw. they help a lot.
<perrito666> menn0: I am glad
<menn0> I didn't end up doing the custom stream creation though
<menn0> I hacked up the test helpers a bit ...
<perrito666> heh
<menn0> perrito666: hooray! test_recovery.py --ha passes
<ericsnow> menn0: \o/
<perrito666> sweet
<perrito666> now we need to bug curtis to see if it passes a few times
<menn0> perrito666: Curtis knows
<waigani> menn0: I'm happy to do some reviewing if that helps
<menn0> waigani: that would be awesome
<waigani> menn0: okay - you owe me a beer ;)
<menn0> waigani: done!
<menn0> waigani: we should coordinate to make sure we don't end up overlapping too much
<waigani> menn0: on that point I think you can actually assign a reviewer to a PR
<menn0> waigani: ok cool. I didn't know that.
<menn0> let's do that.
#juju-dev 2014-08-07
<menn0_> grrr functional-ha-recovery still fails
<davecheney> aaaaaaaaarh
 * menn0_ rolls up his sleeves
<menn0_> at least mongo didn't go down in a ball of flames
<menn0_> it looks like the test perhaps didn't wait quite long enough
<davecheney> \o/ add more timeouts
<menn0_> hmmm.... it's already waiting up to 20mins
<menn0_> 20mins to bring up 2 state servers
<menn0_> should be enough
<davecheney> yup, that is far too long
<menn0_> davecheney: something definitely went wrong
<menn0_> there are signs that the extra state servers were on their way up
<menn0_> but they didn't get there
<menn0_> and the logs just stop well before the test gave up
<menn0_> in fact, the logs are a bit sparse throughout
<menn0_> I don't see the EnsureAvailability API call for example
<menn0_> but I do see messages from the provisioner that the extra machines came up as a result
<davecheney> menn0_: what I was seeing last night, even when mongo wasn't fucked is the _second_ we issue ensure-availability
<davecheney> the primary api server goes offline
<davecheney> not stops
<davecheney> just stops accepting new connections
<davecheney> so the two new state servers get through cloud init
<davecheney> we start their jujud upstart jobs
<menn0_> I saw that yesterday too but this is different
<davecheney> and they just sit there waiting for the api server
<menn0_> there are no errors or warnings in the logs
<davecheney> they don't even get to the point of establishing a replica set
<menn0_> everything looks positively happy
<davecheney> they don't even have mongo installed
<davecheney> it's like turning on journalling buggers the primary state server
<davecheney> and thus the api server connected to it
<menn0_> the other thing is, I've manually run this CI test against EC2 and it worked
<menn0_> so it could be that it'll work sometimes
<menn0_> Curtis sent me the details for logging in to our CI infrastructure
<menn0_> I might run the test in Jenkins again and get on the hosts that get created
<menn0_> 20 mins is hopefully enough time to nose around
<davecheney> menn0_: so you got some working ec2 creds ?
<menn0_> yep
<menn0_> John set me up
<menn0_> there's one canonical funded account
<davecheney> given how many new starters we have, it might be valuable to email the internal list and let them know how to get the details
<davecheney> rather than everyone figuring it out on their own
<menn0_> and he can create user accounts within that
<jcw4> waigani: thanks for the review comments
<waigani> menn0_ what's the latest?
<menn0_> waigani: well, the problem doesn't happen very often
<menn0_> I've only seen the functional-ha-recovery jenkins job fail once out of 4 attempts
<waigani> ah a heisenbug
<menn0_> and the one time it failed the logs don't indicate anything
<menn0_> the one suspicious thing is that the logs stop dead several minutes before the test gave up
<menn0_> but no errors or warnings before then
<menn0_> I'm working on a little script which slurps all the useful logs from all the machines as the test is running
<waigani> any idea what command / test is run just before blackout?
<menn0_> I have a theory that something went wrong with rsyslog (which feeds all-machines.log, which is the one we have for the failed run)
<menn0_> but that the machine-N.log may have continued on
<waigani> have you checked the machine-n.log?
<menn0_> yes sure, but only for runs where the test succeeds :)
<menn0_> for the failed one where I wasn't looking as closely
<menn0_> Jenkins only archived all-machines.log
<menn0_> from machine-0
<waigani> ah right
<menn0_> and that's the one that seems truncated
<menn0_> actually
<menn0_> I might make a change to the test so that it archives all the logs from all the machines
<menn0_> that's probably more sensible than what I'm doing now
<waigani> it's not already doing that? what is it doing now?
<menn0_> it just archives all-machines.log and cloud-init.log from machine-0
<menn0_> which is perfectly sensible if logging to all-machines.log is working correctly
<waigani> right, i see
<menn0_> but I have a feeling it isn't (or at least wasn't for the failed run)
<menn0_> I think having all the logs for all machines when tests fail is probably a good thing to have at this stage
<menn0_> ha
<menn0_> the test just ran successfully again...
<menn0_> which is great
<menn0_> and certainly better than what we had before
<menn0_> but it bugs me that we still might have a lurking issue
<menn0_> emailing curtis with an update now
<dimitern> morning all
<voidspace> morning all
<dimitern> hey voidspace
<voidspace> dimitern: hwy
<voidspace> *hey even
<dimitern> voidspace, i'll be running tests on MAAS as soon as the talk starting now is done (~1h from now)
<voidspace> dimitern: cool
<voidspace> thanks
<dimitern> jam, jam1, hey, I'm in juju-networking
<jam1> dimitern: hi, we're not done with the lightning talks, but we'll be switching to the boardroom soon
<dimitern> jam1, ah, ok
 * voidspace lurches
<TheMue> voidspace: morning, just opened your latest PR
<jam1> TheMue: voidspace:
<jam1> hey guys
<jam1> how's IPv6 stuff looking?
<TheMue> jam1: I'm in contact with Serge and Stéphane regarding LXC. Got a mail back with lots of data in it. ;) There seems to be an issue with the bridge for the containers, but I have to go deeper into the pasted command output in the mail.
<jam1> TheMue: as in 'br0' doesn't actually support ipv6?
<TheMue> jam1: Will do it after finishing voidspace's review.
<TheMue> jam1: That could be the worst answer, yes. Don't hope so.
<jam1> TheMue: certainly I expect that the containers can't just come up on lxcbr0 as they default to
<jam1> because that isn't actually a bridge onto the outer network
<lifeless> jam1: allocating routable ipv6 will need a daemon offering address space to the bridge (and you can then route it onto the exterior net
<jam1> lifeless: in this case, I believe we are just setting the addresses on the individual containers, the bridge itself needs to know the range?
<lifeless> jam1: same as dnsmasq is used to offer IP addresses to containers for ipv4, you need something offering it for ipv6
<lifeless> jam1: autoconfig will get you local address space only of course
<jam1> lifeless: sure, but you can set them with ip manually
<jam1> right?
<TheMue> jam1, lifeless: yes, currently all addresses on host and in the containers are set manually
<jam1> anyway, I have another meeting now, but I'd like to hear your thoughts lifeless, perhaps in about an hour?
<lifeless> jam1: I'm at a python user group meetup, but I'll try to catch you later perhaps :)
<TheMue> and as said, host A can reach host B, host A can reach container A1 and A2, containers A1 and A2 can reach each other, only A1 cannot reach B1
<jam1> lifeless: sounds like a good time, have fun
<lifeless> TheMue: is there an etherpad or a bug with details?
<TheMue> lifeless: not yet, so far we haven't seen it as a bug, only as a result of my lack of knowledge about ipv6 :D
<lifeless> TheMue: ok, so I'm happy to cross-check, if you have a diagram (or full prose) description of the topology (and ip -6 route output etc from the hosts and containers)
<TheMue> lifeless: thanks, will add this to my "research doc" on google and send you the link
<dimitern> voidspace, still in a call; upgrading the maas nucs, it takes some time - when done i'm running tests
<lifeless> TheMue: cool
<waigani> I'm going to be late to hangout - my plugin has crashed, trying to sort it
<natefinch> anyone getting on the juju-core team meeting?
<TheMue> natefinch: oh, goooood hint *iirks*
<voidspace> dimitern: cool, I have some review comments from TheMue anyway
<voidspace> TheMue: thanks for the review, useful
<TheMue> voidspace: yw and just for info, it's core meeting time
<voidspace> TheMue: ah! I always forget core meeting
<voidspace> grabbing coffee and will join
<voidspace> thanks
<TheMue> voidspace: hehe, me this time almost too
<natefinch> sinzui: are you around?  CI might be unblocked
<natefinch> morning alexisb
<mattyw> davecheney, http://i.imgur.com/TkWPd9o.jpg
<voidspace> TheMue: dimitern: I assume we're not doing standup as well?
<dimitern> voidspace, TheMue, let's skip it yeah, unless you need to talk about something specifically?
<voidspace> dimitern: no, you both know where I'm at
<voidspace> dimitern: I'd like confirmation from you at some point that my branch doesn't screw MAAS
<dimitern> voidspace, certainly, I'm trying my best to get my local maas in a usable state - almost there i hope
<voidspace> dimitern: haha, ok
<voidspace> dimitern: I have minor cleanups to do on that branch anyway
<voidspace> dimitern: so I'm not blocked
<dimitern> voidspace, sweet!
<waigani> git config --global rerere.enabled true
<waigani> that's all I need to do?
<waigani> if so, we should add that to the contributing doc - just before the instructions to rebase
<dimitern> waigani, check that blog post "Rerere your boat"
<waigani> dimitern: okay, I'll keep reading
<c7z> man, that's a terrible blog title pun
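For anyone following along, waigani's one-liner really is all rerere needs. A self-contained sketch of what it buys you (throwaway repo, all names made up for the demo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git config rerere.enabled true              # the setting under discussion

echo base > file.txt
git add file.txt && git commit -qm base
git checkout -qb feature
echo feature > file.txt && git commit -qam feature
git checkout -q -                           # back to the initial branch
echo trunk > file.txt && git commit -qam trunk

git merge feature >/dev/null 2>&1 || true   # conflicts; rerere records the preimage
echo resolved > file.txt
git add file.txt && git commit -qm merged   # rerere records the resolution

# The resolution is now cached under .git/rr-cache; the next time the same
# conflict appears (e.g. during a rebase), rerere replays it automatically.
test -d "$repo/.git/rr-cache" && echo "resolution recorded"
```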
<waigani> mattyw: thanks for the git/D3 link, great idea
<mattyw> waigani, I use 2 resources for understanding git. that's the first one - the other one is tasdomas
<waigani> mattyw: hehe, I'll have to bookmark that second one
<waigani> night all.
<wwitzel3> natefinch: you said CI might be unblocked?
<sinzui> sorry natefinch  https://bugs.launchpad.net/juju-core/+bug/1350983 is still open. While we got a pass, it failed most of the time we tried
<c7z> sinzui: well, I have another plan on that one too
<c7z> sinzui: can you confirm it's just bug 1350983 and bug 1347715 left blocking?
<sinzui> c7z, yes, just them
<sinzui> c7z, there is another critical regression reported by voidspace, We may need to add "ci" to it to make it block checkout bug 1353443
<c7z> I'm pretty certain the azure issue is slow disk, but not sure best how to mitigate, or the direct cause (as it's not that)
<c7z> sinzui: devs seem blocked on the manual provider issue from not being able to reproduce it... but ci does hit it completely reliably
<sinzui> c7z, yep. Since 1.20 always passed, and it passed before the problem revisions, I cannot fault the test.
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs 1350983, 1347715, 1353443
<katco> mgz: standup?
<voidspace> TheMue: which do you prefer, the first or the second?
<voidspace> TheMue: http://pastebin.ubuntu.com/7979232/
<perrito666> sinzui: hey, can you give me a hand with azure?
<voidspace> TheMue: I like the first as you then have a type for the function parameter
<voidspace> I have to reboot
<voidspace> screwy driver kills the mouse from time to time - so I currently have no mouse pointer
<voidspace> brb
<sinzui> perrito666, I have a few minutes between meetings
<perrito666> ok Ill be fast
<perrito666> I copied one of the setups for azure from cloud city
<perrito666> I set up the env variables from azuretoolssrc
<c7z> katco: sorry, sec
<c7z> timezone is wrong here somehow
<perrito666> and I tried a few combinations and I always end with:
<perrito666> 2014-08-06 22:12:26 ERROR juju.provider.common bootstrap.go:120 bootstrap failed: waited for 10m0s without being able to connect: Permission denied (publickey).
<voidspace> aaand back. But I have to help the wife for a few minutes. Back for realsies shortly.
<wwitzel3> realsies .. heh
<voidspace> wwitzel3: morning
<wwitzel3> voidspace: hola, how are you?
<voidspace> wwitzel3: I'm good, but I have to go
<voidspace> back in a few
<TheMue> voidspace: back from lunch, just seen your question. I prefer the first approach too. but is it do and don't or can and cannot?
<jam2> voidspace: I'm pretty sure that while you have a type for the parameter, you can still pass "myFunc(true)"
<jam2> voidspace: http://play.golang.org/p/q4ttwlIC4_
<natefinch> jam2: yeah, that's right.  "true" and "false" are constants and thus get converted to whatever special type you want that is derived from bool
<perrito666> sinzui: let me know if you can shed some light on my issue
<jam1> natefinch: which is also true for enumerated strings, and lets you create an "instance" of your enumerated type that has any arbitrary string
<natefinch> jam1: yep, it's kind of a problem at times... it's not too bad for enums based on ints, because randomly passing 8 or whatever into a function looks weird and should get caught by a review
<natefinch> jam1, voidspace:  I'm actually -1 on making constants that just mean true or false.  If you want to make the code clearer, don't use a boolean parameter at all
<natefinch> jam1, voidspace: just make two functions:  Foo() and FooNoConfig()    and
<natefinch> s/and//
<natefinch> another option is to use an integer instead of true/false  just because it only has two values doesn't mean it has to be a boolean... no one's going to pass 0 or 1 into your function and make it past a code review (hopefully)
<natefinch> but really, the best answer is just two functions that in their implementation pass true/false to a single implementation function
<natefinch> ok gotta go.  Bringing my older daughter to a doctor's appointment
<TheMue> natefinch: sounds good, and you can later simply do stuff like "iDontCare" or "whoKnows" ;)
<TheMue> natefinch: but I would always start with 1, so that a passed uninitialized variable (zero value) would fail internally, e.g. inside a switch.
<jam1> TheMue: voidspace: I'd also *highly* recommend affirmative statements, rather than something like "if !DontFoo"
<TheMue> jam1: yep, if statements should be "if isWanted" or "isPossible" while the arguments are "want", "dontWant", "possible" or "impossible"
<TheMue> jam1: but only for boolean variables, fields, arguments
<TheMue> jam1: otherwise the prefixes "is", "can" etc don't match
 * TheMue is reminded of the always good readability of smalltalk sources
<dimitern> voidspace, hey
<dimitern> voidspace, sorry for the long delay
<dimitern> voidspace, i had a ~4h fight with my hardware maas, eventually giving up and testing your branch on my kvm-based virtual maas
<dimitern> voidspace, network setup looks fine, containers on all nodes, bootstrap included, are addressable on the same subnet (the bridge works)|
<katco> wwitzel3: did a dumb script: #!/bin/bash\nPASSWORD=$(sudo cat $HOME/.juju/local/agents/machine-0/agent.conf |grep oldpassword |awk '{ print $2 }')\nmongo --ssl -u admin -p $PASSWORD localhost:37017/admin
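Expanded for readability (paths and the 37017 port come from katco's one-liner above; the sample agent.conf in the demo is fabricated):

```shell
# extract_password pulls the value of the "oldpassword" key out of an
# agent.conf (lines look like "oldpassword: <value>").
extract_password() {
  grep oldpassword "$1" | awk '{ print $2 }'
}

# Demo against a fabricated agent.conf; the real file lives at
# $HOME/.juju/local/agents/machine-0/agent.conf and usually needs sudo.
conf=$(mktemp)
printf 'apiport: 17070\noldpassword: s3cret\n' > "$conf"
extract_password "$conf"

# With the real file you would then connect:
#   mongo --ssl -u admin -p "$PASSWORD" localhost:37017/admin
```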
<voidspace> dimitern: awesome, that's great news
<wwitzel3> katco: nice
<dimitern> voidspace, i've just reviewed your PR, LGTM
<voidspace> dimitern: cool, thanks
<katco> wwitzel3: seems like everyone asks how to do that
<voidspace> TheMue: nate's advice doesn't really make sense in this context
<voidspace> TheMue: we'd actually need two networkers and two configstate types to do as he suggests
<dimitern> katco, ha, awk fan, huh :)
<voidspace> TheMue: sometimes a boolean actually makes sense, I disagree with him on that point I think
<katco> dimitern: lol i am by no means an awk master. i just struggle through it when it makes sense
<voidspace> TheMue: if go had named parameters it would be easier...
<TheMue> voidspace: no, not two networkers. two constructors to avoid the parameter
<TheMue> voidspace: yeah, named parameters are nice
<voidspace> TheMue: but the networker has to pass this paramter down into another function
<voidspace> TheMue: so two constructors isn't enough...
<voidspace> we still need to store the boolean
<dimitern> voidspace, it has a struct-literal syntax for kinda the same thing
<voidspace> dimitern: heh, right - could use that I guess
<voidspace> dimitern: that's even further down the rabbit hole
<dimitern> voidspace, we use this quite a lot with 3-4+ args functions
<voidspace> dimitern: right, and there it makes sense
<TheMue> voidspace: the default constructor would set a field to the one value, while the second one calls the first one but then changes the field. ;)
<voidspace> dimitern: because you can add or remove parameters at will too
<voidspace> TheMue: I don't think that's any clearer than just a boolean parameter with named constants
<voidspace> that's plenty readable
<dimitern> voidspace, I'd just define a couple of int bit flags and | them when calling NewNetworker - everyone's happy
<voidspace> dimitern: bit flags!
<dimitern> :) why not? networker.DontWriteConfig
<dimitern> voidspace, ah, sorry - i've just noticed that's the only bool arg
<dimitern> if there were 2, i'd use bit|flags
<wwitzel3> where is this code in question?
<voidspace> dimitern: you mean bit flags instead of multiple bools
<voidspace> yeah, that would work...
<jam1> katco: wwitzel3: fwiw, I think the "admin" user is going to be going away as direct DB access eventually, you should be trying to connect as machine-0 user, IIRC
<wwitzel3> jam1: will the machine-0 user have the rights?
<dimitern> voidspace, In this case, the easiest thing really is another ctor NewSafeNetworker? { newNetworker(..,false) }, and NewNetworker also calls the implementation, but with true
<katco> jam1: thanks for the heads up.
<voidspace> I give in
<jam1> wwitzel3: a machine agent that has JobManageEnviron will have admin access on the DB
<jam1> in HA mode, eventually machine-0 might not, but the chances of that are quite low
<lifeless> jam1: so hi
<jam1> I'll bring it up if we *actually* want to kill the admin user, but already the "oldpassword" stuff means it isn't quite what it used to be
<jam1> hey lifeless
 * dimitern wonders how many *degrees* of bikeshedding are there :D
<wwitzel3> jam1: ok, just wondering, I know that the issue i just fixed I wouldn't have been able to without access to the admin database.
<dimitern> voidspace, interestingly, i came across some problems with deploying in lxc containers - slow startup (not using btrfs?), apt-get install failing the same way (can't get the dpkg lock) - resolved --retry fixes it, mysql start hook fails with random mysql startup errors
<voidspace> dimitern: that all sounds horrible
<voidspace> it never used to be so bad
<dimitern> so maybe the lxc package had some regressive changes lately
<voidspace> right, maybe
<dimitern> voidspace, tell me about it :) combined with a couple of hours trying to get the master nuc on my maas to work (at one point no usb ports worked, i.e. no kbd, no wifi or ethernet)
<voidspace> ouch!
<TheMue> lifeless: in https://docs.google.com/a/canonical.com/document/d/1wfdGL_vyemT2-ncAB7KIySkKI9HbT8efKeC3Sd8ID0I/edit# the current test setup and status are described
<jam1> TheMue: so that doc seems to say that they are both just on "lxcbr0" which is only the local host bridge
<TheMue> jam1: yes
<jam1> TheMue: you need a bridge that is on eth0 if you want containers on VM1 to be able to see anything on VM2
<dimitern> TheMue, that's right, and it needs to be an IPv6 bridge I think
<TheMue> jam1: I've tried that too, as I found a doc. but after shredding my net this way in the first approach even the second one didn't work
<wwitzel3> perrito666, ericsnow: you guys want to push standup back and wait for nate? Or do it at 10?
<TheMue> jam1: so how to add an eth0 bridge for ipv6?
<jam1> TheMue: my initial understanding is that installing "bridge-utils" creates a br0 bridge on eth0
<jam1> TheMue: though possibly you need: http://xmodulo.com/2013/04/how-to-configure-linux-bridge-interface.html
<perrito666> wwitzel3: I guess we can wait
<TheMue> jam1: thanks for the link, will take a look
<dimitern> TheMue, this one is specifically for lxc+ipv6 - might help: http://blog.toxa.de/archives/606
<TheMue> jam1: so am I right that eth0 and the virtual interfaces of the containers have to be added to the bridge
<TheMue> dimitern: see my link list
<TheMue> dimitern: in the document
<dimitern> TheMue, :)
<dimitern> haven't checked all of them
<TheMue> dimitern: doing it exactly this way made me restore my VM to my snapshot, can't reach it anymore *lol*
 * TheMue will see how the bridge utils work now
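For reference: on Ubuntu, installing bridge-utils does not create br0 by itself; the bridge is typically declared in /etc/network/interfaces. A sketch with illustrative addresses (the 2001:db8:: prefix is the documentation range, not a real allocation):

```
# /etc/network/interfaces (sketch): enslave eth0 to br0 and give the
# bridge a static IPv6 address. Addresses and prefix are illustrative.
auto br0
iface br0 inet6 static
    address 2001:db8::10
    netmask 64
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```

Containers' virtual interfaces then get attached to br0 by lxc (lxc.network.link = br0), which answers TheMue's question about what has to join the bridge.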
<lifeless> TheMue: have requested access
<TheMue> lifeless: granted
<lifeless> TheMue: comments left
<lifeless> TheMue: ip -6 neigh show
<lifeless> TheMue: is another useful command
<c7z> what horseplay is that...
<lifeless> TheMue: in short, I think your subnetting is broken - you're putting /64 prefixed addresses on a virtual bridge with a /96 route, we don't expect neighbour discovery outside that /96 to work
<lifeless> hth
<lifeless> gnight!
<TheMue> lifeless: ah, ok, will take a look there, thanks
<TheMue> currently my vms are rejecting any networking after enslaving eth0 to br0 :(
<perrito666> whyyyyy cant I break this/
<perrito666> ?
<sinzui> perrito666, azure is not seeing the io timeout?
<perrito666> sinzui: nope, I am bootstrapping succesfully
<perrito666> sinzui: I did see the io error in aws a couple of days ago
<perrito666> that intrigued me
<sinzui> perrito666, we are testing 1.20 for a ppc fix now.
<sinzui> perrito666, jam speculates that this error is older, really just a replicate set issue and that the error we now see is just a mutation
<perrito666> you mean the io timeout?
<sinzui> perrito666, yes
<perrito666> do you have any context on that speculation/
<jcw4> rick_h__: ping
<jam1> bac: https://gist.github.com/jameinel/d0763eb6d8d38cfd64e1
<perrito666> sinzui: ok, out of 6 i only got one failure and it was dns related :|
<sinzui> perrito666, I am going to force a rebuild of master when 1.20 finishes the test
<sinzui> perrito666, did you publish your own streams? did you use --upload-tools?
<perrito666> I used upload tools I have my streams published if you want me to try with it
<sinzui> perrito666, no need. I am deploying too while I wait for CI to come free
<perrito666> too late, it was at the tip of my fingers
<perrito666> sinzui: with my own stream also works
<sinzui> perrito666, That last built binary worked for me too?
<perrito666> sinzui: that is a very hard question for me to answer man :p
<sinzui> perrito666, let's just stop testing for now. master will rebuild in about an hour
<sinzui> perrito666, s/?/./ It did work for me
<rick_h__> jcw4: pong?
<jcw4> Hi rick_h__ ; sorry - I don't think you saw my note in IRC a couple days ago?
<jcw4> about actions api docs
<rick_h__> jcw4: sorry, saw something go by but been at a sprint out of the country and not peeked at it
<jcw4> rick_h__: dropped the ball on that, but here is a WIP pr for documentation... https://github.com/juju/juju/pull/468
<rick_h__> jcw4: ty much
<jcw4> rick_h__: figured you really wanted that last week, not this week :-/
<jcw4> rick_h__: hope you're having a great sprint though :)
<rick_h__> wheeeee! :)
<jcw4> haha
<voidspace> evening all
<jcw4> voidspace: 'ello :)
<voidspace> o/
<wwitzel3> natefinch: ping
<natefinch> wwitzel3: pong
<wwitzel3> natefinch: so I wanted to further pick your brain about using another syslog library? .. is that something I should be looking at for dealing with this all-machines.log issue? Or do we want to worry about that as a separate concern?
<wwitzel3> natefinch: I guess that only solves the aggregation on windows, it doesn't solve the rotation problem.
<natefinch> wwitzel3: yeah
<wwitzel3> natefinch: should I just use logrotate (simplest path) for now?
<natefinch> wwitzel3: yep
<wwitzel3> natefinch: well that makes it easy :)
<wwitzel3> well .. I don't have to make any choices .. it probably will be a pain in the ass because it is rsyslog and logrotate
<natefinch>  yeah sorry
<wwitzel3> but at least this part was easy :)
<wwitzel3> also how in the world should I test this?
<wwitzel3> ... wait 1 day and make sure the log rotates .. our tests already take long enough :P
<natefinch> lol
<natefinch> that wouldn't extend them all that much ;)
<wwitzel3> hah
<wwitzel3> *cry*
<wwitzel3> natefinch: also do I need to request logrotate get added to some ppa/apt repo? cloud something or other?
<natefinch> for actual unit tests.... honestly, screw it, this is an external application.  test that we set the config right
<natefinch> wwitzel3: it may already be installed
<natefinch> wwitzel3: and if not, I expect it'll be in whatever thing is already available
<natefinch> i.e. sudo apt-get install should work
<wwitzel3> natefinch: sounds good, thanks. starting on it now, I'll add a card for it
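A minimal logrotate sketch for the file in question (the path matches the default mentioned in this channel; the daily frequency and 7-file retention are illustrative, and the postrotate stanza mirrors what Ubuntu's stock rsyslog logrotate config does to make rsyslog reopen its files):

```
/var/log/juju/all-machines.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}
```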
<wwitzel3> is there a lp bug for it too?
<natefinch> yes....... somewhere
<wwitzel3> I searched but lp never gives me any results when I search
<wwitzel3> lol
<natefinch> wwitzel3: https://bugs.launchpad.net/juju-core/+bug/1078213
<wwitzel3> thank you .. need to start a new site .. lngtfy.com (let nate google that for you)
<natefinch> haha... search didn't find it for me, so I opened up the high bugs.... and it happened to be at the top
<natefinch> heh reported 21 months ago
<jcw4> Fix for a bug I saw cropping up on the CI builds: https://github.com/juju/juju/pull/480
<jcw4> tests assuming ordered results
<perrito666> sinzui: any news?
#juju-dev 2014-08-08
<menn0> axw: how's things?
<axw> menn0: heya. not bad, yourself?
<menn0> axw: yeah not too bad. super tired after 2 terrible nights in a row with the youngest.
<menn0> axw: did you catch up with what happened while you were on leave?
<menn0> I reverted that Safe.J change as it made ensure-availability very unhappy
<menn0> I have no idea why but it was that change that did it
<axw> menn0: I did see, yes, thanks for fixing
<axw> menn0: ok. it's weird, can't see why it'd break anything
<axw> but I think it shouldn't be too hard to work around
<menn0> axw: ok cool
<menn0> axw: also, regarding bug 1350983. is this something you can confirm as being fixed?
<menn0> needs testing against azure to be sure.
<menn0> axw: actually I just saw that Curtis says the problem is still happening even after the Safe.J change was reverted.
<menn0> axw: so it might be something else
<axw> yeah, I don't think that bug is related to Safe
<axw> anyone around to do a review? https://github.com/juju/juju/pull/483
<TheMue> morning
<dimitern> waigani, hey, are you still around?
<dimitern> TheMue, morning
<waigani> dimitern: hello, yes :)
<dimitern> waigani, i'm sending you a quick write-up how I use git rebase workflow with my scripts/aliases
<dimitern> waigani, and I'd appreciate feedback on how it works for you :)
<waigani> dimitern: thank you!
<dimitern> waigani, np, i'm glad somebody else will try it out!
 * TheMue is setting up a test environment with nested lxc containers
<voidspac_> axw: ping
<voidspac_> are you still awake?
<axw> voidspac_: awake? it's 4pm :)
<voidspac_> axw: oh!
<voidspac_> I always assume that it's the middle of the night for you when it's day for me :-)
<voidspac_> axw: anyway, so on my PR to fix a local provider bug you complain that it doesn't fix the manual provider too...
<voidspac_> axw: :-p
 * axw complains a lot
<axw> I do indeed, because that's the way it screwed up my laptop
<voidspac_> axw: heh, so report your own damn bugs...
<voidspac_> axw: seriously though - do we want the same fix for the manual provider
<axw> voidspac_: pretty sure the original one was related to manual, but yeah, I should have
<voidspac_> do we not want jujud to change networking for manually provisioned machines
<axw> I think so
<axw> umm
<voidspac_> https://bugs.launchpad.net/juju-core/+bug/1349635
<axw> well, what are the networks for a manual environment?
<voidspac_> "local environment"
<voidspac_> although, hmmm
<axw> the summary was changed
<axw> the title I mean
<axw> anyway
<voidspac_> sure
<voidspac_> so in the fix I have we don't touch networks at all for local provider
<voidspac_> "apply" just short-circuits and bails
<voidspac_> you suggest having the networker "know" if the provider supports networking or not
<voidspac_> axw: what specifically are you suggesting?
<axw> voidspac_: right. and this is the sort of thing fwereade prefers; that we operate in terms of environment/provider capabilities, rather than coding to specific providers
 * axw thinks
<voidspac_> axw: ok, but local provider machine 0 *is* a special snowflake
<voidspac_> unfortunately
<voidspac_> so it's not *just* provider, it's the bootstrap machine on the local provider
<voidspac_> not touching networking config at all for manual provider also makes sense - so long as that's what we want
<voidspac_> (and is easy enough to add a check for)
<axw> voidspac_: so what *I* would prefer is that the networker do nothing for any environment whose SupportsNetworks method returns false
<voidspac_> axw: when I grep for SupportsNetworks I get no results
<axw> lemme check spelling
<axw> voidspac_: sorry, no first "s": SupportNetworks
<voidspac_> axw: that would be easy enough if it fixes the issue
<voidspac_> axw: I'll look into that (I notice you did mention it in your comment - sorry)
<voidspac_> axw: thanks
<axw> voidspac_: atm, only the MAAS provider returns true (depending on MAAS version)
<voidspac_> axw: we *need* the networking stuff for local provider machine != 0
<voidspac_> axw: it sets up something we need
<axw> oh, we do? what's that?
<voidspac_> not sure, but local provider breaks without it
<axw> it's relatively new, and used just for MAAS I thought
<voidspac_> that was my first approach
<voidspac_> dimitern: ^^^^
<axw> buh
<voidspac_> axw: let me try again
<voidspac_> (once a test run finishes)
<axw> okey dokey
<axw> dimitern: is it safe/sensible to disable the networker worker if the environment's SupportNetworks method returns false at machine agent startup?
<voidspac_> if you have openstack environment variables set there's a juju-metadata test that always fails
<voidspac_> it doesn't get the error it expects
<axw> wonderful
<axw> that means it's not embedding the base suite I guess
<TheMue> $ remove => not found / $ delete => not found / $ destroy => destroyed *aaaaaaah*
<dimitern> axw, unfortunately, no
<dimitern> voidspac_, axw, supportnetworks capability is supposed to be used by providers to tell juju if *any* networking is supported, but for the local/manual case this is not enough
<dimitern> voidspac_, axw, it seems to me we need another capability or something for this
<axw> dimitern: I don't quite understand. local and manual both currently return false for SupportNetworks, so why isn't that enough to go by?
<axw> if that means "no networking is supported"
<dimitern> it's not necessarily an environment-wide setting (for local it only needs to apply to the bootstrap node; for manual too I guess, but how about manually provisioned machines?)
<dimitern> axw, because I haven't got that far yet - they will support *some* sort of networking
<axw> ok, I see
<axw> so it'd work today, but not for long
<dimitern> yeah
<axw> well then, whatever we do is going to be a stop-gap
<axw> voidspac_: ^^ maybe don't invest too much time if it's all going to change soon
<dimitern> voidspac_, axw, how about having a capability called ManageBootstrapNetworking bool ?
<dimitern> it can be true for all but the local and manual provider
<voidspac_> dimitern: that is then only used by local provider?
<axw> dimitern: I don't know that that's right either. Do we really want to be managing networking on manually provisioned machines? They won't ever know about networks, right?
<voidspac_> ManageNetworking that is always false on manual provider and false for local bootstrap machine
<dimitern> axw, they will at some point, at least units deployed on such machines
<dimitern> voidspac_, alas it can't be per-machine, it's an environ capability.. although I guess we can make it take a machine id and check..
<voidspac_> dimitern: well, the environ is running on a specific machine
<voidspac_> but if that's inappropriate for an environ capability then maybe that's not where this belongs
<dimitern> voidspac_, you know, I'd rather have this critical bug fixed now, and have a TODO + tech debt bug about improving the way we decide the value of the writeNetworkConfig flag for the networker
<voidspac_> dimitern: well, the PR is ready to merge
<voidspac_> dimitern: *but*
<voidspac_> dimitern: axw says the original report was about manually provisioned machines, not just the local provider
<voidspac_> and my PR does not address the manual provider at all
<voidspac_> I can add a TODO for sure
<axw> can we just make it != "maas" for now, and TODO FIXME
<voidspac_> axw: I will check if that works
<voidspac_> axw: I have test failures though :-/
<voidspac_> which I suspect are sporadic
<voidspac_> but checking
<axw> :(
<voidspac_> I need more coffee for this
<dimitern> axw, type != "maas" is a really dirty hack :) but I can live with it for a short while, until we find a better way
<axw> I agree, I didn't really want to go there
<axw> seems better to fix now and do properly when networking evolves tho
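The stop-gap agreed above (only write network config when the provider type is "maas") versus the cleaner capability-based gate could be sketched roughly as below. Note that `environConfig`, its fields, and `shouldWriteNetworkConfig` are invented names for illustration only, not juju-core's actual API:

```go
package main

import "fmt"

// environConfig is a stand-in for juju's environment config; the real
// type and the SupportNetworks capability live in juju-core.
type environConfig struct {
	providerType     string
	supportsNetworks bool
}

// shouldWriteNetworkConfig implements the stop-gap from the discussion:
// only MAAS gets network config written. Once local/manual grow some
// networking support, gating on a capability (cfg.supportsNetworks, or
// a finer-grained per-machine check) would replace this provider check.
func shouldWriteNetworkConfig(cfg environConfig) bool {
	return cfg.providerType == "maas"
}

func main() {
	for _, cfg := range []environConfig{
		{"maas", true},
		{"local", false},
		{"manual", false},
	} {
		fmt.Printf("%s: write network config = %v\n",
			cfg.providerType, shouldWriteNetworkConfig(cfg))
	}
}
```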
<axw> TheMue: it would appear you're the lucky OCR. can you please take a look at https://github.com/juju/juju/pull/483 and https://github.com/juju/juju/pull/484? these fix 2/3 critical bugs on 1.21
<TheMue> Btw, reading the notes of Stéphane and Serge *before* installing nested containers helps .. *blush*
<voidspac_> if you do a bootstrap (local provider), immediately followed by a deploy
<voidspac_> you get:
<voidspac_> ERROR upgrade in progress - Juju functionality is limited
<TheMue> axw: Iâm OCR, oh!
<axw> oops
<axw> TheMue: no you're not
<axw> sorry
<axw> spreadsheet hadn't updated yet
<TheMue> No, Nate and Wayne. *phew*
<TheMue> But still can take a look.
<axw> thanks :)
<TheMue> axw: 483 LGTM
<axw> TheMue: thanks
<voidspac_> axw: dimitern: latest revision uses the dirty hack
<voidspac_> https://github.com/juju/juju/pull/477
<voidspac_> axw: dimitern: only writes network config changes for maas
<TheMue> axw: 484 LGTM too with one note
<voidspac_> works for local provider (tested "manually")
<axw> TheMue: thanks
<TheMue> axw: yw
<axw> voidspac_: I'd prefer if we just disabled it altogether if != maas, but I'll defer to dimitern since I don't know enough about the networker
<voidspac_> it would simplify this code a great deal to just only start the networker for maas
<dimitern> axw, voidspac_, please, let's not disable the networker just for that
<dimitern> voidspac_, LGTM
<voidspac_> dimitern: might be late to standup by the way
<voidspac_> only by ten minutes or so
<voidspac_> dimitern: lp bug created
<voidspac_> by the way
<voidspac_> and linked in comment
<dimitern> voidspac_, no worries, cheers
<voidspac_> even though this is a critical bug it doesn't seem to be one of the blocking bugs, so I'm not sure if this can be merged yet
<voidspac_> we'll see
<rogpeppe1> dimitern: are merges to juju-core still blocked on critical bugs?
<dimitern> rogpeppe1, I'm not aware of that, if it's the case
<rogpeppe1> dimitern: ah, ok. i still see two critical bugs open, and i thought that all merges into juju-core were blocked if that was the case, unless they directly addressed those issues
<rogpeppe1> dimitern: is that not the case?
<rogpeppe1> dimitern: (i've had a branch sitting for weeks waiting for things to be unblocked)
<dimitern> rogpeppe1, one of the critical bugs has an approved fix pending merging, i need to see about the other one
<rogpeppe1> dimitern: i guess i'll try to merge again then
<dimitern> rogpeppe1, sgtm
<TheMue> aaaaah, after two failed approaches my nested containers are now running (ok, only for now, but knowing how to do it was the important part)
<TheMue> hehe, running the mac with a test trusty in a vm running a trusty lxc container running two trusty containers
<rogpeppe1> dimitern: hmm, looks like merges *are* still blocked: http://juju-ci.vapour.ws:8080/job/github-merge-juju/236/console
<dimitern> rogpeppe1, that's michael's fix - we'll sort it out on the standup shortly
<rogpeppe1> dimitern: cool. i'll keep my fingers crossed. could you ping me when merges are finally unblocked, please?
<dimitern> rogpeppe1, sure thing
<rogpeppe1> dimitern: ta!
<perrito666> axw: hey thanks for the patch
<perrito666> axw: the other bug seems to have been fixed after your other patch was reverted
<perrito666> but I was awaiting confirmation from sin
<perrito666> zui
<dimitern> voidspace, are you back?
<dimitern> mgz, ping
<voidspace> dimitern: yes, gimme a second
<voidspace> dimitern: TheMue: I guess I missed standup, sorry
<natefinch> people keep inviting me to meetings in Nuremberf and not actually giving me a way to participate :/
<natefinch> s/Nuremberf/Nuremberg/
<dimitern> voidspace, np, we can have a call or chat here if you like
<voidspace> dimitern: chat here is fine
<voidspace> dimitern: I guess I need something new, other than "generally explore ipv6"
<voidspace> dimitern: my PR is blocked, even though it fixes a critical bug
<voidspace> I guess it doesn't fix a critical bug on the right branch :-)
<voidspace> dimitern: I still have an outstanding task to close the database port as an upgrade step, so I can revert to that
<dimitern> voidspace, btw I had a couple of suggestions to your PR - did you see them?
<voidspace> dimitern: yes, I addressed them
<voidspace> dimitern: create LP bug and link in TODO, get rid of loop
<dimitern> voidspace, right, cheers
<voidspace> dimitern: and fix a comment
<dimitern> voidspace, it seems the bot blocked it because it has conflicting "fixes-###" comments
<dimitern> voidspace, I've never seen this before, might be a new thing
<voidspace> dimitern: that's the standard blocked message isn't it?
<voidspace>  Does not match ['fixes-1350983', 'fixes-1353443']
<voidspace> so, only a fix for issue 1350983 or 1353443 will be allowed
<voidspace> and mine fixes 1349635
<voidspace> dimitern: see the same message here https://github.com/juju/juju/pull/449
<voidspace> well, not the same message precisely but the same format
<dimitern> right
<dimitern> voidspace, we need some guys with strong bot-fu :) mgz perhaps, others?
<dimitern> voidspace, we need this fix to land, as rogpeppe1 is blocked because of that
<voidspace> dimitern: but trunk is blocked on those other issues
<dimitern> voidspace, you mean bug 1350983 ?
<dimitern> mup, bug 1350983
<voidspace> dimitern: yep and 1353443
<mup> dimitern: Bug #1350983: Cannot bootstrap on azure: cannot create log collection <azure-provider> <bootstrap> <ci> <regression> <juju-core:In Progress by hduran-8> <https://launchpad.net/bugs/1350983>
<voidspace> according to the message
<voidspace> mup bug 1353443
<dimitern> perrito666, hey, any update on ^^ ?
<voidspace> mup, bug 1353443
<mup> voidspace: Bug #1353443: Install hook error due to modprobe 8021q failure <ci> <local-provider> <networker> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1353443>
<perrito666> dimitern: sinzui was going to make a few runs on CI but as of yesterday it seemed to be fixed, let me mark fix committed
<perrito666> I tested it a LOT of times without a failure
<dimitern> voidspace, I commented on that ^ bug earlier; the fix is not the same as for the networker bug screwing the local provider networking
<dimitern> perrito666, but has it landed on trunk?
<perrito666> dimitern: yes
<voidspace> dimitern: disabling the networker except for maas would fix it though....
<perrito666> ow fff is failing again
<voidspace> dimitern: does the networker do anything useful right now if you're not using maas?
<voidspace> perrito666: fff?
<perrito666> voidspace: I try not to say profanities on work channels
<dimitern> voidspace, :) well, not really
<dimitern> voidspace, we'll still have issues with lxc containers in maas
<perrito666> voidspace: if you were here you could have heard that pronounced out loud
<voidspace> dimitern: right
<voidspace> perrito666: oh :-D
<voidspace> perrito666: I thought fff was the thing that was failing and wondered what it was
<dimitern> voidspace, that's why I think the fix should involve making the networker aware whether it's running in an LXC container or not, and not try to modprobe anything
<dimitern> voidspace, it does enable/disable NICs (or it should at least)
<voidspace> dimitern: is that done in Execute ?
<voidspace> dimitern: because I disable that too
<perrito666> dimitern: oh it is failing with timeout io which is fixed by the next merge to land in CI
<dimitern> voidspace, ah
<dimitern> voidspace, i'll have to pull your branch again and do some more testing on my maas
<voidspace> dimitern: it's not disabled for maas
<voidspace> dimitern: it is for the other providers
<dimitern> voidspace, yes, but frankly I haven't tested the networker on maas myself so far :/
<voidspace> right
<dimitern> (i.e. it's main job about managing interfaces)
<dimitern> its
<dimitern> ok, brb for a while
 * dimitern lunches
<menn0> voidspace: you know how to make your PR merge right?
<menn0> voidspace: you need a comment of "__fixes-1349635__" and then also "$$whatever$$" in the same or another comment
<menn0> $$fixes-1349635$$ won't cut it
<menn0> I suspect "$$__fixes-1349635__$$" might work as a 2-in-1 though :)
<menn0> #1349635
<menn0> (that was me testing mup)
<menn0> ANYWAY... bed time!
<voidspace> mgz: are you tinkering?
<mgz> voidspace: not tinkering, just needed to apply a hammer to get your change landing
<wwitzel3> natefinch: so if we rotate purely on size, rsyslog can do it. But it has no concept of a daily. Do you think that is acceptable? Or does it need to be a daily rotation?
<voidspace> mgz: thanks
<katco> wwitzel3: dropping in on this convo: size of log = what, daily = how. i always prefer the what if possible.
<katco> e.g.: a log could grow too large w/in a day, or not grow enough in a month
<mgz> voidspace: do you know if there's any hope of bug 1353443 being got to today?
<mgz> it's currently unassigned, but blocks master and you and dimitern seem to know what's up with it
<wwitzel3> katco: I agree, the open bug is about the size, but I think we did log rotation for units as a daily rotation, so having a different rotation scheme for the all-machines might be weird
<katco> wwitzel3: yeah that would be confusing as a user
<wwitzel3> katco: though if it was clearly documented, that might be ok
<katco> wwitzel3: docs? who reads the friendly docs? :)
<natefinch> wwitzel3, katco: size.   Daily rotation is useless.   You run the risk of rotating an empty log, or filling up the disk when you get 10 gigs of logs in a really busy couple hours.
<natefinch> wwitzel3, katco: I went through that with lumberjack, at first I supported rotation by time, but then I realized that was just a bad idea
<natefinch> wwitzel3, katco: the code I wrote for rotating the unit and machine logs just rotates based on size
<katco> natefinch: hey i saw a js logging package earlier this morning: sherlog. :p
<natefinch> katco: nice
<voidspace> mgz: I don't know
<voidspace> mgz: not by me I'm afraid as I have to finish a bit early, off up a mountain later
<voidspace> mgz: and I'm not sure what the best fix is - "not modprobe inside an lxc container" is most of the answer, but the right way for the networker to know whether or not it is contained is trickier
<voidspace> and dimitern should probably answer that
<dimitern> voidspace, we can tell whether we're in a lxc container or not easily from the agent config - machine tag will be something like machine-0-lxc-0 instead of machine-0
<dimitern> voidspace, we shouldn't disable modprobe for kvm containers though, only for lxc
<mgz> dimitern: I think this is the only remaining issue blocking landing
<voidspace> dimitern: so, "if strings.Contains(agent.Tag(), "lxc")...
<voidspace> it's pretty horrible
<dimitern> voidspace, no, there's a names.ParseTag
<voidspace> dimitern: do you think that is the right fix, protect modprobe with a check for lxc by parsing the tag?
<dimitern> voidspace, I think it's the right fix to disable modprobe altogether for machine agents running in lxc containers
<dimitern> voidspace, whether I think parsing the tag to get the container type is good enough - not really :)
<voidspace> right
<voidspace> that seems yucky
<dimitern> voidspace, we should have a way to tell what's the machine
<dimitern> voidspace, but nevertheless having a common way to tell if a machine is a container and what type is useful, and should be in state and the api
<voidspace> in the api?
<voidspace> a machine agent shouldn't have to make an external call to find out whether it's running in a container or not
<voidspace> or maybe it should, if that lives in the state
<voidspace> seems like the agent should know though
<perrito666> natefinch: lemme know if we do the 1:1 so I actually get onto the call
<perrito666> aghhh so close
<natefinch> prein a minute
<natefinch> perrito666:  in a minute
<dimitern> voidspace, by the api i meant "extend the agent api as needed so that info is available", no need to make a separate call (like with Life())
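The tag-parsing idea voidspace and dimitern are weighing (and which dimitern notes should really come from names.ParseTag or state/API rather than string hacking) can be sketched like this; `containerTypeFromTag` is a hypothetical helper for illustration only:

```go
package main

import (
	"fmt"
	"strings"
)

// containerTypeFromTag roughly derives the container type from a
// machine tag such as "machine-0-lxc-0" (machine 0's first LXC
// container); "machine-0" is a host machine and yields "". juju-core
// has proper parsing (names.ParseTag); this string splitting is only
// a sketch of the idea. Per the discussion, a networker could then
// skip modprobe when the type is "lxc" but still run it for "kvm".
func containerTypeFromTag(tag string) string {
	parts := strings.Split(tag, "-")
	// machine-<id>[-<container-type>-<id>]: the innermost container
	// type is the second-to-last element.
	if len(parts) >= 4 && parts[0] == "machine" {
		return parts[len(parts)-2]
	}
	return "" // not a container
}

func main() {
	fmt.Println(containerTypeFromTag("machine-0-lxc-0")) // lxc
	fmt.Println(containerTypeFromTag("machine-0-kvm-1")) // kvm
	fmt.Println(containerTypeFromTag("machine-0") == "") // true
}
```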
<natefinch> dimitern, voidspace:  do you guys think you can get #1353443 fixed today?  I'm anxious to get master open to commits again
<natefinch> mup is failing me  - https://bugs.launchpad.net/juju-core/+bug/1353443
<voidspace> natefinch: dimitern: I'm leaving in 15 minutes I'm afraid...
<dimitern> natefinch, voidspace, I think it just got merged, no?
<voidspace> dimitern: no, the other one
<dimitern> oh, bugger
<voidspace> This one just got merged
<voidspace> https://bugs.launchpad.net/juju-core/+bug/1349635
<dimitern> natefinch, voidspace, I'll take it and try to do a quick fix
<dimitern>  #1353443 I mean
<voidspace> dimitern: thanks
<natefinch> be back in a couple hours.... taking my wife to her midwife appointment.
<perrito666> anyone alive here/
<perrito666> ?
#juju-dev 2015-08-03
<menn0> waigani: would you mind having a look at http://reviews.vapour.ws/r/2290/ ?
<menn0> waigani: there's no functional change
<menn0> waigani: just a tedious rearranging of tests
<menn0> waigani: no rush either
<waigani> menn0: looking
<waigani> menn0: fancy, how'd you get those lines in the description?
<menn0> waigani: markdown renders "---" as a horizontal rule
<waigani> cool
<waigani> menn0: what's the plan? will we have allWatcher and allEnvWatcher or will the latter replace the former?
<menn0> waigani: allwatcher stays as is, allenvwatcher gets added
<waigani> okay
<mwhudson> davecheney: argl
<mwhudson> davecheney: cross compiling from ppc64el to amd64 is broken
<mwhudson> http://paste.ubuntu.com/11990676/
<davecheney> how, how just that one combination ?!?
<mwhudson> davecheney: i guess some kind of grotty compiler bug
<mwhudson> but i'm surprised that 386 works, that's mostly the same backend code
<mwhudson> davecheney: do you know anything about the linker darwin/arm code?
<davecheney> not a lot
<rogpeppe1> menn0: ping
<jose> hello! anyone around?
<dimitern> jam, hey, are you here today or swapping?
<dimitern> voidspace, hey :) welcome back
<voidspace> dimitern: hey, hi
<voidspace> dimitern: I'm now online, have a little office setup here
<voidspace> dimitern: next step coffee, then machine updates and checking work emails
<dimitern> voidspace, are you in Romania now?
<voidspace> dimitern: yep
<voidspace> dimitern: same timezone as you now
<voidspace> dimitern: got in last night
<dimitern> voidspace, cool :)
<dimitern> voidspace, TheMue, dooferlad, as none of us were working or feeling particularly well on Friday, I'd suggest we combine today's 1:1 with some retro/planning discussion
<TheMue> +1
<voidspace> ok
<voidspace> TheMue: o/
<TheMue> voidspace: followed you travel notes on the net ;)
<TheMue> your
<voidspace> heh
<voidspace> yeah, it's been fun
<voidspace> not amazingly *relaxing*, but fun
<dimitern> voidspace, I bet - after driving 3200km in 10 days
<dooferlad> dimitern: +1
<voidspace> dimitern: 5000km
<voidspace> dimitern: 3200 miles...
<voidspace> dooferlad: o/
<dimitern> oh wow
<dooferlad> voidspace: hello! welcome back!
<voidspace> dimitern: but that total is over three weeks, not ten days
<voidspace> although *most* of it was in ten days
<dimitern> voidspace, still, *quite* a lot of driving
<voidspace> dimitern: heh, so maybe about 3000km in ten days
<voidspace> yeah
<voidspace> dooferlad: thanks
<voidspace> good to be back at work for a rest
<axw> fwereade: you timed out on irc.c.c, I responded with: "yeah, I'm being a bit pedantic I suppose. I have seen rare errors due to this I'm sure, but you're right in that we do this in lots of other places"
<axw> fwereade: and I've landed it now
<fwereade> axw, ah, sorry
<fwereade> axw, I guess a Sync would be a bit less racy but it's maybe still fundamentally problematic
<fwereade> axw, ie just because you know an event has been delivered to a handler does not imply that the handler has finished handling it
<fwereade> axw, blasted concurrency :)
<axw> fwereade: indeed. I think we'd need to have deeper control over how the syncs are doing. seems to be infrequent enough atm that it's not pressing
<axw> are doing = are being done
<axw> fwereade: speaking of unusual language, maltese looks exotic to me
<fwereade> axw, yeah, it's got a lot of arabic but in a roman alphabet
<fwereade> axw, `sqaq tax-xama` is pronounced more-or-less "sa'a t'sharma"
<axw> fwereade: I never would have guessed that
<fwereade> axw, there's a g->j, j->y, x->sh mapping that covers *most* of the weirdness
<axw> good to know, thanks ;)
<fwereade> axw, in particular, we'll be sprinting in st julians, or "san giljan", pronounced more--or-less "jilyan"
<fwereade> axw, I was living there for a couple of weeks before I figured out that they were the same place
<axw> fwereade: hah :)
<voidspace> dimitern: ping
<dimitern> voidspace, pong
<dimitern> ahh.
<dimitern> voidspace, you've disconnected before getting my pong I guess
<voidspace> dimitern: ah yeah
<voidspace> dimitern: I was seeing if I was still connected
<voidspace> dimitern: internet problems, but now I have a wired connection, so better
<dimitern> voidspace, I see
<voidspace> dimitern: I assumed the lack of a pong meant I was disconnected :-)
<voidspace> should be good now
<dimitern> voidspace, I thought 1GBps internet connections are common in RO (I'm told)
<dimitern> :)
<voidspace> dimitern: not in this house...
<voidspace> dimitern: the problem was the wifi not really going this far anyway
<voidspace> dimitern: not sure how fast the wired connection is (will check) - but "fast enough" I think
<dimitern> voidspace, speedtest.net
<voidspace> dimitern: indeed
<voidspace> dimitern: will wait until dropbox stops syncing
<voidspace> and my mail client finishes dealing with email
<voidspace> thunderbird has been frozen for about twenty minutes processing rules for three weeks worth of email!
<dimitern> :D
<voidspace> dimitern: just grabbing coffee!
<voidspace> will be at standup in a minute or two
<dimitern> voidspace, grab one for me :)
<voidspace> dimitern: heh, have done
<voidspace> omw
<voidspace> dimitern: ah, need to install hangout plugin :-/
<voidspace> looks like I've not done a hangout from this laptop
<voidspace> or maybe it's just that firefox updated
<dimitern> voidspace, ok, we'll start slowly I guess until you're ready
<menn0> rogpeppe1: pong
<rogpeppe1> menn0: hiya
<rogpeppe1> menn0: just wondering what the status is on the gigawatcher
<menn0> rog: the state work is *mostly* done. I was about to put up a WIP PR b/c I want some feedback from fwereade about one aspect where resource usage could be an issue
<menn0> rogpeppe1: ^^
<menn0> (forgot to hit tab)
<rogpeppe1> menn0: cool
<rogpeppe1> menn0: we'd like to start using it at some point :)
<menn0> rogpeppe1: I know. I was asked to pick it up again last week at the sprint
<rogpeppe1> menn0: cool
<menn0> (not that I was at the sprint... just that thumper and rick_h_ asked me to pick it up)
<menn0> rogpeppe1: a number of PRs prepping for the change have already landed
<rogpeppe1> menn0: awesome
<menn0> rogpeppe1: i'm reusing much of the existing allwatcher and multiwatcher stuff so it's coming along fairly nicely
<menn0> rogpeppe1: the plan is to sneak it in before the 1.25 feature freeze
<rogpeppe1> menn0: nice
<rogpeppe1> menn0: looking forward to it
<menn0> rogpeppe1: do you need it for the same thing that rick_h_ wants it for?
<rogpeppe1> menn0: yes
<menn0> rogpeppe1: cool cool
<rogpeppe1> menn0: it's me that wants it, really :)
<rick_h_> rogpeppe1: menn0 one thing we have to be careful of. It came up that we sometimes have allwatcher issues because core itself doesn't use it.
<rick_h_> rogpeppe1: menn0 so when we get this we need to make sure we work together on a testing plan that takes the uses we put it to into account.
<rick_h_> rogpeppe1: menn0 and think about ways that changes to core, that might sneak past/not into this new watcher can be caught.
<menn0> rick_h_: you're worried about stability of the API right?
<rick_h_> menn0: you might also reach out to sparkiegeek as they're another potential user, though they weren't aware of it so it was new to them at the sprint.
<rick_h_> menn0: yea, or things like the bug in the machine agent stuff: new things are added, or bugs fixed, but they don't make it through to the higher-level watchers because they're not directly used/hit in core
<rick_h_> menn0: we talked and I'm supposed to set up a chat with folks from our end, landscape, and core to think of ways of getting our uses of the allwatcher into some sort of test system core can rely on. It's not figured out yet.
<rick_h_> menn0: and this fits the same bill I expect.
<menn0> rick_h_: the same concerns certainly apply
<menn0> rick_h_: well let me know when that call happens
<rick_h_> menn0: will do, in the meantime as you and rogpeppe1 chat about things something to noodle on in the back of your brains. I think this is going to be a hard problem to figure out and having you two on board <3
<menn0> rick_h_: i'll have a think about what we can do
<rogpeppe1> menn0: one thing that occurred to me is: will this watcher provide a message when the main JES is being destroyed?
<menn0> rogpeppe1: I *think* it will, as long as the API server is up for long enough to deliver the message
<menn0> rogpeppe1: when the main env goes to Dying an environment update should be emitted
<rogpeppe1> menn0: cool
 * dimitern steps out for ~ 1/2h
<dimitern> fwereade, back
<perrito666> goo morning all
<mgz> lots of goo to you too perrigoo
<voidspace> dooferlad: ping
<jose> anyone around who may be able to give me a hand?
<perrito666> fwereade: you are not around here by any chance, are you?
<fwereade> perrito666, yeah
<fwereade> perrito666, what can I do for you?
<perrito666> fwereade: I could go for a coffee with croissants, but since that is out of reach i am fixing https://bugs.launchpad.net/juju-core/+bug/1479289
<mup> Bug #1479289: statushistory uses sequence, fails in multi-env state servers <blocker> <jes> <status> <juju-core:In Progress by hduran-8> <https://launchpad.net/bugs/1479289>
<perrito666> I see that probablyUpdateStatusHistory has a comment from you regarding the explicit setting of EnvironUUID
<perrito666> I was wondering if you talk about the particular way we set envuuid all over juju or something else
<sinzui> mgz: I disabled build-revision because I don't want a build to collide with the proposal of 1.24.4
<fwereade> perrito666, it's specific to the Insert
<voidspace> dimitern: ping
<fwereade> perrito666, envStateCollection really ought to do its job properly and automatically
<dimitern> voidspace, pong
<perrito666> fwereade: agreed
 * perrito666 kills 2 birds with one shot
<dimitern> katco, rogpeppe1, axw, davecheney, a trivial goamz review anyone? https://github.com/go-amz/amz/pull/59 - much appreciated
<dimitern> mgz, ^^
<natefinch> +2,992 −2,721
<natefinch> I'd hate to see a non-trivial change
<natefinch> ;)
<dimitern> natefinch, :) well, that's due to the package header comments
<perrito666> natefinch: those start by changing a couple of universal constants
<dimitern> natefinch, 99.9% of it is moving code around
<mgz> dimitern: I can have a look
<dimitern> mgz, awesome! ta!
<natefinch> dimitern: all the package comments will get concatenated together... just put it in one file, and put the "this file contains" separate from the package file.  Also you need the copyright headers
<perrito666> yay my new eye-glasses are ready... just need a moment to drive to the other side of the city :p
<rogpeppe1> dimitern: LGTM
<natefinch> s/package file/package comment/
<dimitern> rogpeppe1, thanks!
<rogpeppe1> dimitern: and +1 to natefinch's comment
<dimitern> natefinch, good point about the headers, will do
<rogpeppe1> dimitern: you should only have one package-level doc
<dimitern> not that server.go got them in the first place
<dimitern> rogpeppe1, will do, thanks
<mgz> natefinch: explain that to me? I think I'm missing something about go comments
<mgz> (or point me at doc page on it)
<natefinch> mgz: doc comments on the "package foo" declaration in multiple files will all get concatenated together, so you really only need it in one file  (and if you put the same comment in multiple files, it'll be duplicated in the godoc)
<natefinch> brb
<mgz> natefinch: everything at the top of each file? or is there some extra checking of bits before 'package' like there are for functions?
<mbruzek> Which developer worked on the KVM provider?
<dimitern> mbruzek, a lot of people, myself included
<natefinch> mgz: just the doc "attached" to the package foo line (i.e. without a space before the package foo declaration)
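natefinch's point about attachment and concatenation can be shown with a minimal file (the `packageDoc` constant is only there so the demo has something to print):

```go
// Package comments like this one must sit directly above the package
// clause, with no blank line in between, for godoc to attach them.
// godoc concatenates the package comments from every file in a
// package, so write the overview in a single file (conventionally
// doc.go) and leave the other files' package clauses bare.
package main

import "fmt"

// packageDoc mirrors the comment above so the demo has output.
const packageDoc = "package comment attached to the package clause"

func main() {
	fmt.Println(packageDoc)
}
```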
<mbruzek> dimitern: First of all thank you!  It works wonderfully.
<dimitern> mbruzek, thanks! (that wasn't what I expected frankly) :D
<natefinch> mgz: My godoc documentation (written in godoc) https://godoc.org/github.com/natefinch/godocgo
<mbruzek> dimitern: I had a few questions about it.  Can I bother you with them?
<dimitern> mbruzek, sure - shoot, and I'll try to help
<ericsnow> natefinch: you coming?
<mbruzek> dimitern: I ran  out of disk on one machine. I don't see how I can add more disk in the set-constraints document.  Is it possible to add more disk to the images?
<mgz> natefinch: thanks!
<mbruzek> dimitern: I have done speed testing between kvm, while it is slower than lxc it is not much slower.  I am very happy because I can do more isolation with KVM.
<dimitern> mbruzek, let me check - IIRC constraints are supported, but most likely ultimately ignored by the provisioner
<mbruzek> dimitern: Wait I found the devel document on kvm.  https://jujucharms.com/docs/devel/config-KVM  I actually see the answer in the docs.  RTFM -> mbruzek!
<mbruzek> The developer docs are not shown by default, so I had to do some searching for them.
<dimitern> mbruzek, I'm wrong apparently - root-disk=20G as a constraint should work
<mbruzek> dimitern: it looks like root-disk is what I am looking for.  Do you know what the default root-disk value is?
<dimitern> mbruzek, there are no defaults in the code, so I guess whatever uvt-kvm does by default (the tool we use to start kvm instances)
<mbruzek> dimitern: I tried get-constraints but I get empty if I bootstrap a kvm environment.  I would have hoped to see what the default constraints would be.
<mbruzek> dimitern: OK fair enough.  Thank you
<dimitern> mbruzek, well *bootstrapping* should in general have a 8G root-disk constraint unless given explicitly
<dimitern> mbruzek, but as the local provider is a bit special, YMMV
<mbruzek> dimitern: My mileage has been great!  I really needed this KVM provider for doing more container work.  So thanks again!
<dimitern> mbruzek, glad to help and thanks!
<fwereade> dimitern, I don't suppose you have a recent overview of the details of agent upgrade?
<dimitern> fwereade, no, not really
<mup> Bug #1480942 opened: Need a `juju add-constraints' command <juju-core:New> <https://launchpad.net/bugs/1480942>
<dooferlad> dimitern: I am about to go into a meeting, but http://reviews.vapour.ws/r/2274/ should be worth a look.
<dooferlad> dimitern: I hope!
<dimitern> dooferlad, sure, will have a look a bit later
<dimitern> dooferlad, reviewed!
<dimitern> jam, do you know if alexisb is around today?
<dimitern> I guess not
<mgz> I'd expect her to be swapped from travelling still
<dooferlad> dimitern: ping! hangout!
<perrito666> bbl, going to get my new glasses
<moqq> https://bugs.launchpad.net/juju-core/+bug/1477281 does anyone have any idea of a workaround for this issue? it's completely hosing every machine we have running juju
<mup> Bug #1477281: machine#0 jujud using ~100% cpu, slow to update units state <canonical-bootstack> <canonical-is> <performance> <juju-core:Triaged> <https://launchpad.net/bugs/1477281>
<alexisb> moqq, what version of juju are you using?
<moqq> 1.24.4-trusty-amd64
<alexisb> and you are still seeing the issue?
<alexisb> moqq, ^^
<moqq> yes
<alexisb> moqq, we believed that issue had been fixed and needed someone to verify
<alexisb> moqq, can you please add a note to the bug
<alexisb> w/ logs if you have them
<moqq> ok
<alexisb> we will get eyes on it asap
<alexisb> thanks moqq
<moqq> HEY! i lied
<moqq> alexisb: false alarm
<moqq> i upgraded this morning and was still seeing the issue but manually restarted all the services and then i guess it just took awhile
<moqq> but cpu usage eventually dropped back down. thank you thank you juju team!!
<alexisb> moqq, heh ok good, you had me scared
 * perrito666 finishes reading backlog and breathes again
<alexisb> moqq, can you still please add a note to the bug, so that we can work on getting it closed
<moqq> i would add note
<moqq> except when i try to log in
<moqq> it says, "Bad bot, go away! Request aborted."
<perrito666> :|
<perrito666> bbl bike
<moqq> >:( after a brief stint of relative quietness, juju is back to pegging the cpu
<natefinch> alexisb: ^
<alexisb> moqq, do you have either logs or ability to give the dev team access to the env?
<moqq> alexisb: doubtful on direct access but i can get some logs together shortly
<alexisb> moqq, that would be most helpful
<bdx> hows it going everyone??
<bdx> core, devs, charmers: Is there a method by which juju can be forced to not overwrite changes to config files on node reboot?
#juju-dev 2015-08-04
<davechen1y> bdx: no
<davechen1y> bdx: why do you ask ?
<mwhudson> davechen1y: rebuild testing suggests that making 'real go' the default  on ppc64 would lead to unhappiness
<mwhudson> mostly because of the limited cgo
<moqq> alexisb: FYI after wrestling around with a failed upgrade-juju its working and the cpu is behaving. will hit up that bug report if anything goes sideways again
<davechen1y> mwhudson: why do we need cgo ?
<mwhudson> davechen1y: "we" as in juju don't, but lots of things in the archive do
<davechen1y> when you say lots
<davechen1y> can you quantify ?
<davechen1y> 10,000 packages ?
<davechen1y> 10 packages ?
<davechen1y> i find it hard to believe that gccgo's cgo support is better ?
<mwhudson> in between those
<mwhudson> cgo on ppc64el is internal linking only
<mwhudson> so it's pretty easy to step outside the bounds of what that can handle
<mwhudson> i'll be able to quantify better tomorrow, still lots of builds going
<mwhudson> some of these indeed don't build on ppc64el with gccgo either
<davechen1y> http://paste.ubuntu.com/11997181/
<davechen1y> oh come on
<davechen1y> mwhudson: i'm annoyed by this
<davechen1y> this is a problem canonical has done to itself
<mwhudson> davechen1y: i am also annoyed
<davechen1y> and if that means we don't get go 1.5 everywhere
<davechen1y> i will be more than just annoyed
<mwhudson> a recent go 1.5 change breaks docker and my go tools packages are insane for reasons i don't understand
<davechen1y> mwhudson: more practically
<davechen1y> is now the time to suggest that go versions be versioned in ubuntu in the same way gcc is ?
<davechen1y> ie, go-1.4, go-1.5
<davechen1y> like we have gcc-4.8
<mwhudson> i think there is certainly merit in that idea
<davechen1y> how do we make that a reality ?
<davechen1y> it needs to happen immediately
<davechen1y> ie, before the 20th
<davechen1y> hmm that means they also need to be co installable
<mwhudson> yeah
<davechen1y> and it's going to widen the set of alternatives for who owns /usr/bin/go
<mwhudson> it wouldn't be trivial
<davechen1y> co installable is doable
<davechen1y> who owns /usr/bin/go will be more complicated
<mwhudson> yeah not super hard, just a bunch of pain
<mwhudson> well that's managed by alternatives now
<mwhudson> (which is not how gcc does /usr/bin/gcc)
<davechen1y> yeah
<mwhudson> mm
<mwhudson> maybe i should retract my opening statement
<mwhudson> so long as go 1.5 builds juju and lxd on ppc64el
<mwhudson> i don't know how much of this other stuff we care about
<mwhudson> oh
<mwhudson> lxd fails to build :(
<mwhudson> https://launchpadlibrarian.net/213514232/buildlog_ubuntu-wily-ppc64el.lxd_0.14-0ubuntu3_BUILDING.txt.gz
<mwhudson> still if it's just that, we can build it with gccgo
<mwhudson> (which does work)
<davechen1y> it doesn't build because go-sqlite3 does not support ppc
<davechen1y> oh
<davechen1y> crap
<davechen1y> fucking cgo
<davechen1y> why does that encode a database ?
<davechen1y> why does every fucking project have to have a database
<davechen1y> how is that helping us, as a species ?
<davechen1y> https://bugs.launchpad.net/juju-core/+bug/1481133
<mup> Bug #1481133: version: tests fail on arm64 14.04 <juju-core:New> <https://launchpad.net/bugs/1481133>
<davechen1y> if this is a test isolation problem I'm going to throw a shoe
<mwhudson> davechen1y: i guess there was never a precise for arm64?
<mwhudson> if that is the cause, i think shoe-throwing would still be appropriate
<davechen1y> i'm going to throw both
<davechen1y> https://bugs.launchpad.net/juju-core/+bug/1481133/comments/1
<davechen1y> FML
<mup> Bug #1481133: version: tests fail on arm64 14.04 <juju-core:New> <https://launchpad.net/bugs/1481133>
<mup> Bug #1481133 opened: version: tests fail on arm64 14.04 <juju-core:New> <https://launchpad.net/bugs/1481133>
<mup> Bug #1481133 changed: version: tests fail on arm64 14.04 <juju-core:New> <https://launchpad.net/bugs/1481133>
<mwhudson> davechen1y: is the proto-plan to try juju on ppc64 next?
<davechen1y> mwhudson: i guess
<davechen1y> actually, yes
<davechen1y> now I have the script
<mup> Bug #1481133 opened: version: tests fail on arm64 14.04 <juju-core:New> <https://launchpad.net/bugs/1481133>
<axw> wallyworld: I'm wanting to add status to volumes and filesystems. what's the appropriate way to marshal that over the API? existing stuff is using api.AgentStatus, which doesn't seem quite right for a non-agent entity... but it is being used for workload and service status too
<wallyworld> axw: from memory, that struct grew to accommodate the extra workload status info, somewhat avoiding api churn. it's messy because there's stuff from 1.18, different stuff added in 1.19 which is supposed to be removed, the legacy info for backwards compat, and the new stuff
<wallyworld> maybe it's time to bite the bullet and add a new api
<axw> wallyworld: okey dokey. I'll look into that, thanks
<axw> kinda what I figured, was hoping I'd overlooked something
<wallyworld> nah :-(
<wallyworld> all a bit messy
<dimitern> we're using ErrorResults.Combine() now instead of OneError() ?!
<dimitern> ok, only in a few cases it seems, otherwise it would've been *insane*
<dimitern> fwereade just texted me they have a power cut and he'll come online when he can
<voidspace> dimitern: is it correct that in EC2 you can have multiple NICs, but only one subnet per machine (even if you have multiple NICs)?
<dimitern> voidspace, subnets in ec2 map 1:1 with AZs, so you definitely *can't* run an instance with multiple NICs attached to subnets in different AZs
<dimitern> voidspace, it's unclear though if you can run an instance with multiple NICs bound to separate subnets in the same VPC and AZ
<dimitern> voidspace, probably not, but it can be verified easily
<voidspace> dimitern: ok - the network model document states that machines and subnets map 1:1 and you *can't* have multiple NICs bound to different subnets
<dimitern> voidspace, we don't intend to allow this initially, yes
<dimitern> voidspace, to simplify the implementation, but effectively a machine can end up in multiple spaces and subnets
<dimitern> voidspace, so which subnet a machine is in is not the right question to ask, as NICs are bound to subnets, not machines directly
<jam> dimitern: there was a recent bug that made me think https://bugs.launchpad.net/juju-core/+bug/1478660
<mup> Bug #1478660: juju uses proxy to access bootstrap node <juju-core:Fix Committed by cherylj> <juju-core 1.24:In Progress by cherylj> <https://launchpad.net/bugs/1478660>
<jam> we should probably be adding the space to the no-proxy list
<jam> dimitern: ^ what do you think?
<voidspace> dimitern: right, it's fine to say we don't support it - the doc implies it's an EC2 restriction and that machines map to subnets 1:1
<voidspace> dimitern: I'll add a comment
<dimitern> jam, is that what we want to do always?
<dimitern> voidspace, cheers
<jam> dimitern: I don't think you want to proxy for things that you consider on your local LAN do you ?
<dimitern> jam, is this for MAAS or in general?
<jam> dimitern: in general I'm thinking
<dimitern> jam, trying to think of scenarios - AWS is different from MAAS for example
<jam> dimitern: they mentioned working around wget, but ISTM that just fixing our wget is shortsighted
<jam> dimitern: still, if you're in a space, it seems to define that you have direct access to other things in that space
<jam> voidspace: I think the idea right now is that service endpoints map 1:1 to a space, not machines
<dimitern> jam, that's right, yes it seems we should do what you suggest
<dimitern> jam, I got confused by client-side proxies for a moment
<jam> dimitern: so in general, I think we should avoid proxies for things that want to talk to the state server (consider debug-log, or any future file downloads, etc). It may be that we explicitly ignore proxy settings in our code
<dimitern> jam, and that relates to voidspace's comment yesterday - *thou shalt always be able to connect to any state servers*
<voidspace> jam: the network model docs state that for EC2 machines map to subnets 1:1 - so a service can only be on one space
<voidspace> *in one space
<voidspace> jam: as a machine can only be on one subnet (even with multiple NICs)
<jam> voidspace: I do see that text, I haven't experimented personally to know if it is true, though it might complicate (or maybe simplify) things.
<dimitern> voidspace, you mean a machine, not service, right?
<dimitern> voidspace, a service spans multiple units (machines in different subnets of the space)
<voidspace> dimitern: well, you deploy services not machines (well you do both)
<voidspace> dimitern: so the deploy command can only take one positive space for EC2
<jam> dimitern: page 9 of Network Model Phase 1.1 says machines map to subnets 1:1 verbatim
<voidspace> dimitern: a service can only be in one space *because* machines can only be on one subnet (i.e. one space)
<jam> voidspace: dimitern: it should probably take as long as this discussion to bring up a machine, create 2 subnets in the same AZ, and see if you can attach 2 network interfaces from each subnet to it
<voidspace> :-)
<voidspace> probably longer for me, but it would be time well spent I think
 * dimitern gets increasingly confused
<jam> voidspace: I do think that if that constraint is true, we need to think carefully about the ramifications to the network model
<dimitern> we need to chat about this at standup
<dimitern> I'm trying to review a bunch of things and am distracted right now
<jam> specifically, if you ever want to separate the ADMIN endpoint for MYSQL onto a separate space from the DB endpoint, then we need to know if that is actually possible.
<voidspace> jam: the space model seems to only be *really* useful when you can have some of your services on multiple spaces
<voidspace> jam: and you have firewalling between the spaces
<jam> voidspace: well, it can be useful between services (postgres is only in the DB space, and WebApp is in a different space) with custom routing/firewalling
<voidspace> right
<voidspace> we need to do the custom routing
<voidspace> fair enough
<voidspace> and we also need to do custom routing for the state server
<jam> so I thought for P1 we still had the flat space for Juju traffic.
<voidspace> jam: you mean no firewalling?
<jam> voidspace: we aren't directly controlling the firewall yet (relations != connections in all cases)
<jam> voidspace: but I mean that being in *a* space means you are actually in 2 (the Juju meta space and the explicit one for the service)
<voidspace> jam: "the Juju meta space" isn't something mentioned in the network doc
<voidspace> jam: can all machines in every space route to each other through the meta space?
<voidspace> as spaces are described as being "a security subdivision of a network" it seems that they only have any reason to exist if traffic between them is restricted / controlled (or am I wrong)?
<voidspace> is the "meta space" just a consequence of having no routing restrictions - or is it (or will it be) something else (a specific set of routing rules)?
<jam> voidspace: so I think it was a pragmatic decision in first phase to make sure everything can talk to the state server without requiring the server to have an address in every space.
<jam> but it may have been in conversations and not captured in the doc
<voidspace> right
<jam> it certainly does get us away from strong security if we don't have binding
<jam> (if your service always binds to all available interfaces, then having an interface on a globally routable space means you lost the security aspect)
<voidspace> which is the raison d'etre for spaces to exist at all :-)
<voidspace> jam: when I login to the AWS console and then navigate to EC2 I get a "Your service sign-up is almost complete!" message
<voidspace> I've tried us-east-1 and us-west-2
<voidspace> jam: could you verify it works for you? (sorry to be a pain)
<voidspace> I can try from go-amz instead
<jam> voidspace: trying to go in
<voidspace> seems like I get the same response from all the AZs
<jam> voidspace: I see a juju-env machine running in us-east-1
<jam> m1.small
<voidspace> ok, must be a problem with my account
<jam> though that is via the master account, if you are using an IAM role
<jam> voidspace: is it a personal account?
<jam> or shared?
<voidspace> I logged in with the wrong account first (and then signed out) - it maybe a cookie issue
<voidspace> jam: it's my canonical account
<jam> voidspace: interesting, your mfoord user in the shared account has password last used of 2014-11
<voidspace> jam: odd, I get the same result from another browser too
<voidspace> jam: gah, user error
<voidspace> jam: I have a personal account with my canonical email address
<voidspace> jam: I wasn't using the shared account
<voidspace> jam: when launching an instance I can create multiple nics bound to different subnets (in the same AZ)
<voidspace> dimitern: ^^^
<dimitern> voidspace, I can't say off hand, but should be easy to verify by using the AWS CLI commands
<dimitern> voidspace, or even the web UI
<dimitern> TheMue, I've responded on your review, but I'm afraid I found even more issues missed earlier, ping me if anything is unclear
<TheMue> dimitern: ah, great, thank you. just wanted to ping you that I added some questions/remarks. but the I found your mail about the sprint and currently answering it.
<dimitern> TheMue, cheers
<dimitern> axw, are you still around?
<dimitern> wallyworld, or you?
<axw> dimitern: I'm here, what's up?
<dimitern> axw, I'm reviewing your RB2295 and found out we're now using things like storage.XXXResults similarly to apiserver/params.XXXResults with Error fields
<dimitern> axw, but that's in the providers; wasn't clear at first, but I figured it out, sorry for the noise :)
<axw> dimitern: well, this is the first place it's done AFAIK. I'm trying to enable bulk provider API calls, even though the current ones mostly don't support it
<axw> dimitern: some do (DestroyVolumes for some), some don't (all CreateVolumes)
<axw> dimitern: cool, no worries
<dimitern> axw, we need this for sure - we can use it for ReleaseAddresses as well for example
<axw> dimitern: you might be interested in the next branch I put up, which does retries for failed operations. the storage provisioner will include a scheduler for operations, with failed ones being put back on the schedule to retry at a later date
<dimitern> axw, think about a few common cases there: retrying transient errors (how to make that easier to convey) as well as a few helpers on the results types that have errors to verify e.g. the number of results == number of args
<dimitern> axw, nice! I'll definitely be interested in looking at that
<axw> hm yes I suppose it should check those things
<axw> I'm currently relying on the fact that it's all in the same process, so we *should* be able to rely on them doing the right thing
<axw> but I could make it more pedantic
<dimitern> might be worth it in the face of future extensions to the behavior/implementation
<dimitern> axw, it seems a bit unwieldy how ([]error, error) returns are handled - a some common Combine() helper might be useful
<axw> dimitern: that's a stop-gap. in a follow-up, the individual errors will be used to decide whether or not to reschedule, and they won't surface. a further follow-up will propagate the errors to status
<axw> so you can see the provisioning error from the CLI
<axw> I had it as one big branch, but it was >1200 LOC :)
<dimitern> wow :)
<dimitern> axw, ok, I'll follow the follow-ups then
<axw> dimitern: thanks for the review
<dimitern> axw, thanks for starting to solve the retrying in general for providers :)
<voidspace> dimitern: I have verified it!
<voidspace> dimitern: I was saying I *can* create an instance with multiple nics on different subnets
<dimitern> voidspace, ah, I see
<dimitern> voidspace, well that's AWS, in other clouds we'll have different constraints like this, OpenStack being an example of almost "all is allowed" sort of cloud
<voidspace> TheMue: dooferlad: stdup?
<TheMue> voidspace: omw, fetch a coffee
<TheMue> fetched
<perrito666> morning all
<TheMue> heya perrito666
<dimitern> perrito666, hey, my fellow OCR :)
 * perrito666 feels something coming his way
<dimitern> perrito666, I've left you the best review of all - william's final leadership one
 * dimitern ducks
<dimitern> :)
<perrito666> dimitern: aren t you a sweetie
<dimitern> perrito666, I'll buy you a beer on the next sprint hehe
<perrito666> ill make some coffee before this
<wwitzel3> ericsnow: ping
<dimitern> dooferlad, I think the build errors you're getting might be due to the "type SpaceDoc spaceDoc" definition in export_test.go (or something else like this when only happens while running tests)
<perrito666> dimitern: well it was the second time around most of these changes
<perrito666> which made them a bit easier
<dimitern> perrito666, I've been on some of the previous ones as well, but there are new things there as well
<perrito666> dimitern: yep, but still much better than the first time
<perrito666> :)
 * dimitern found out this background music makes it a *lot* easier to go through reviews :) https://www.youtube.com/watch?v=exWNztpgfIw
<perrito666> dimitern: sounds like it would make you shoot at the review at some points
<dimitern> perrito666, it occasionally does, yes :)
<dooferlad> dimitern: the code builds fine here. Very strange
<dooferlad> dimitern: in fact I have 100% tests passing, and I have rebased on upstream so I should have identical code.
<dooferlad> dimitern: maybe a go version thing? I am on 1.4.2
<dimitern> dooferlad, I'm on 1.2.1 which is the same CI uses IIRC, I'll pull your branch and try it here
<dooferlad> dimitern: thanks. I thought we had moved to 1.4.2. Guess not. Are we waiting for 1.5?
<dimitern> dooferlad, we are
<dimitern> dooferlad, but it's useful to have a mix of versions until then, just for such issues
<voidspace> dooferlad: ping
<dooferlad> voidspace: hi
<voidspace> dooferlad: hey, you available to hangout?
<dooferlad> voidspace: yea, I can come back to this.
<voidspace> dooferlad: cool, stdup hangout?
<dooferlad> sure
<dimitern> mgz, ping
<dimitern> dooferlad, btw while looking at the changes your branch introduced - https://github.com/juju/juju/compare/net-cli...dooferlad:net-cli-state-add-space I noticed the last change - removing Assert: isAliveDoc from Subnet.EnsureDead() - why is that?
<dooferlad> dimitern: odd.
<voidspace> dimitern: must spaces list be a bulk call?
<dimitern> voidspace, I don't think so, but it should take filters
<voidspace> dimitern: what do you mean by filters?
<voidspace> dimitern: filtering output by name or something?
<voidspace> dimitern: the CLI doesn't have filters (like that)
<dimitern> voidspace, let me double-check the model
 * voidspace too
<dimitern> I might be thinking of subnet list
<voidspace> dooferlad: the juju command is actually "space" rather than "spaces"
<voidspace> dooferlad: so "space" seems more correct
<voidspace> subnet list hasn't been done yet
<dimitern> voidspace, no filters for space list
<voidspace> so I can't just copy it
<voidspace> dimitern: yeah
<dimitern> voidspace, the --short argument is purely a CLI thing - we still get all the details from the API
<voidspace> dimitern: ok
<voidspace> dimitern: and what about format?
<voidspace> dimitern: do we do that on the server?
<voidspace> I'll find another list cli command and look at that
<dimitern> voidspace, how about this? http://reviews.vapour.ws/r/1390/
<dimitern> voidspace, the format is also a CLI thing
<voidspace> dimitern: yeah, I meant the apiserver bit
<voidspace> dimitern: thanks though, worth looking at
<dimitern> voidspace, ah, yeah - no apiserver support for listing spaces, but it should just call Backing.AllSpaces
<voidspace> dimitern: so if format, output and short are all handled on the client - ListSpaces takes no arguments
<dimitern> dooferlad, please reintroduce that assert
<dooferlad> dimitern: will do. Did you have any problems with go 1.2?
<dimitern> voidspace, yeah
<dimitern> dooferlad, tests are still running - so far a few failures which look like flakyness
<mgz> dimitern: hey
<dimitern> dooferlad, passed the cmd package where the build issue happened on the CI machine - no issues
<dooferlad> dimitern: I thought the problem was in state.
<dimitern> mgz, hey, I'm trying to analyze why this merge fails with such an odd error: http://juju-ci.vapour.ws:8080/job/github-merge-juju/4188/consoleFull
<dooferlad> dimitern: you could "cd state; go test -c" to check for a compile error
<dimitern> mgz, it's happening a few times now and both I and dooferlad get no errors when running tests with the same branch locally, I'm on 1.2.1, he's on 1.4.2
<mgz> dimitern: that's not one I've seen before
<dimitern> dooferlad, I know, but just in case it's some weird issue which *only happens* under load or when running the full suite..
<dimitern> mgz, can you force the instance to stay after the tests fail, so we can get in and inspect it?
<mgz> hm, I should just attatch the tarball as an artifact
<mgz> ugh, that branch adds more ConnSuite derived tests?
<mgz> dimitern: I bet the problem is the export_test
<dimitern> mgz, that's what I suspected, but there's no redefinition of that type there
<mgz> I am surprised it works for you on 1.2.1 but not on the clean instance for the bot
<dimitern> mgz, and since #testmain is involved, I strongly suspect it's actually due to the way we're running CLI tests (another possibility is a dirty GOPATH on the CI instance)
<mgz> dimitern: try running `make check` and see if you get it?
<dimitern> mgz, I did, it's running for a while now
<dimitern> mgz, and there it is! I've reproduced it (in a different package, but that's due to suites running in parallel I guess)
<dimitern> dooferlad, I've got a repro
<dooferlad> dimitern: sweet!
<alexisb> katco, are you in today?
<katco> alexisb: i am :)
<dimitern> dooferlad, now running go test ./... in cmd/juju to see if that triggers it
<dimitern> incredibly annoying :(
<dimitern> it's not that then trying cmd/jujud/
<dimitern> dooferlad, I managed with go test -c in state/
<katco> dimitern: perrito666: more input needed on this: http://reviews.vapour.ws/r/2199/
<dimitern> katco, ack, will look shortly
<dimitern> dooferlad, found a workaround
<perrito666> katco: will look
<dimitern> dooferlad, if you change "func (s *Space) Doc() spaceDoc { return s.doc }" to "func SpaceDoc(s *Space) spaceDoc { return s.doc }" (and the tests from space.Doc().X to state.SpaceDoc(space).X) it builds ok here
<dimitern> dooferlad, amazingly it doesn't complain about spaceDoc not being exported, which I expected
<dimitern> oh boy PowerShell
<dimitern> gsamfira, ping :)
<dimitern> perrito666, I'm with you on that one re PS
<perrito666> dimitern: ?
<dimitern> perrito666, about http://reviews.vapour.ws/r/2199/
<dimitern> katco, replied, if somewhat unhelpfully I guess
<dooferlad> dimitern: thanks. Just setting up a container to let me test in a clean environment before I try and merge again, (trusty + default go)
<dimitern> dooferlad, +1
<mup> Bug #1481366 opened: leader-get may fail in -departed hooks <juju-core:New> <https://launchpad.net/bugs/1481366>
<katco> alexisb: hey i'm out tomorrow
<alexisb> katco, ok np
<alexisb> I need to catch pat before she goes on vacation, so just decline the call
<katco> alexisb: k will do
<katco> alexisb: just lemme know what you need from me
<perrito666> katco: that patch is still a huge powershell blob, I have no idea what is going on inside
<mup> Bug #1481368 opened: Deploy hangs: jujud-machine-0 upstart process running but juju status shows it down <canonical-bootstack> <deploy> <state-server> <juju-core:Triaged> <https://launchpad.net/bugs/1481368>
<alexisb> perrito666, ping
<perrito666> alexisb: pong
<perrito666> sorry was having lunch
<alexisb> perrito666, no worries
<alexisb> I saw https://launchpad.net/bugs/1481368 come in on the proposed 1.24.4
<mup> Bug #1481368: Deploy hangs: jujud-machine-0 upstart process running but juju status shows it down <canonical-bootstack> <deploy> <state-server> <juju-core:Triaged> <https://launchpad.net/bugs/1481368>
<alexisb> looks to potentially be status related
 * perrito666 checks
<alexisb> was curious on your perspective
<perrito666> alexisb: mm, I think it might be something else being masked by status
<alexisb> perrito666, katco, cherylj, wwitzel3, ericsnow, natefinch if someone has free cycles https://launchpad.net/bugs/1481368 needs to be worked
<mup> Bug #1481368: Deploy hangs: jujud-machine-0 upstart process running but juju status shows it down <canonical-bootstack> <deploy> <state-server> <juju-core:Triaged> <https://launchpad.net/bugs/1481368>
<katco> alexisb: might be possible later. today is moonstone's catchup+planning day
<katco> alexisb: since i was out on fri.
<alexisb> katco, ack
<cherylj> alexisb: I can take a look in a few
<alexisb> cherylj, thank you as always
<sinzui> cherylj: I am trying to force CI to retest your commit to master.
<cherylj> sinzui: for bug 1480298?
<mup> Bug #1480298: unknown object type "Charms" <blocker> <ci> <compatibility> <regression> <juju-core:Fix Committed by cherylj> <https://launchpad.net/bugs/1480298>
<sinzui> cherylj: yes. the branch failed because of a logging test that may have been intermittent
<cherylj> ah, ok
<mbruzek> Hello core devs.  Someone is setting up a network for a conference and wants to know all the ports to open to get Juju to work.  I know about 22 and 37017 and 38017, are there any other ones that I am forgetting?
<mbruzek> when I do a google search for juju ports I only get the open ports for charms.
<katco> wwitzel3: ericsnow: we'll delay the remainder of planning until thurs. do you feel like you have a good idea of what to work on tomorrow?
<ericsnow> katco: yep
<ericsnow> katco: I also did my best to flesh out the cards we have
<katco> ericsnow: awesome ty
<mbruzek> The default port numbers for api-port, state-port, and storage-port are 17071, 37017, and 8040 respectively.
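For the conference-firewall question above, a hypothetical environments.yaml fragment pinning those ports might look like this (key names are real Juju 1.x options; the port values simply echo the defaults quoted above, so verify them against your release's documentation):

```yaml
environments:
  conference:
    type: maas          # illustrative provider
    api-port: 17071     # Juju API traffic (plus 22 for SSH)
    state-port: 37017   # state server (mongo)
    storage-port: 8040  # storage, local provider only
```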
<wwitzel3> katco: yep, sounds good
<wwitzel3> ericsnow: thanks :)
<perrito666> menn0: ping
<menn0> perrito666: sorry, in standup
<perrito666> menn0: np, just ping me when you have a moment plz
<menn0> perrito666: ok i'm done. what's up/
<perrito666> you dont seem happy to see me :p
<perrito666> ill go priv so I have proper logs to read after
<mup> Bug #1481500 opened: automatic devel streams selection for beta releases <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1481500>
<perrito666> aghh my fix depends on fwereade patch
<alexisb> perrito666, this one:   https://github.com/juju/juju/pull/2909
<perrito666> yeah, I am trying to make a patch for it
<perrito666> alexisb: mm, it seems he did not fwport all of it, ill propose a patch tailored to master
#juju-dev 2015-08-05
<mup> Bug #1480298 changed: unknown object type "Charms" <blocker> <ci> <compatibility> <regression> <juju-core:Fix Released by cherylj> <https://launchpad.net/bugs/1480298>
<perrito666> menn0: it appears memoserv is not as useful as it used to be
<menn0> perrito666: why not? I've never used it before and was just looking at its help.
<menn0> perrito666: review is all done btw
<perrito666> menn0: well I found you before memoserv :)
<perrito666> it only works when people identify
<menn0> perrito666: I did see the message when I logged in but I didn't know what it was about
<menn0> perrito666: I only just checked now
<perrito666> oh I see, it used to be waaay more crude
<perrito666> iirc it was a message thrown at you upon identification
<perrito666> menn0: are you sure about makeEnvironment in 1.24?
<menn0> perrito666: no, I was looking at master so there's a chance it's not there in 1.24.
<perrito666> it isnt, just checked
<menn0> cool
<menn0> perrito666: you could copy it back if you wanted
<perrito666> well, Ill address all other points and if by then its still today Ill do it
<menn0> perrito666: no problems
 * perrito666 is running dangerously low on today reserves
<perrito666> menn0: It is too late for me, I pushed as much as I could, please take a look. re master: my patch will depend on will's one, so you might need to take a second look at his pending patch (the one with 70 files)
<perrito666> cheers
<menn0> ok will look at both before I EOD
<menn0> go to bed man!
<mup> Bug #1461888 changed: Units stuck in agent-state: down state <juju-core:Expired> <https://launchpad.net/bugs/1461888>
<menn0> perrito666: I see you've responded to most of the issues I raised but I don't see the pushed changes
<voidspace> dimitern: ping
<dimitern> voidspace, pong
<voidspace> dimitern: do you have a minute to talk about spaces and subnets?
<dimitern> voidspace, sure, hangout?
<voidspace> ok
<voidspace> stdup hangout?
<dimitern> ok, omw
<wallyworld> jam: i see you're looking at the doc :-) still WIP,  i need to break for dinner but will come back to it after
<jam> wallyworld: sgtm
<dooferlad> dimitern: hangout?
<dooferlad> jam: ^^
<jam> dooferlad: oops, omw
<voidspace> dimitern: I can see you editing my work :-)
<dimitern> voidspace, I hope you don't mind :)
<perrito666> hey  I have http://reviews.vapour.ws/r/2297/ and its master version http://reviews.vapour.ws/r/2299/ both fixing  https://bugs.launchpad.net/juju-core/+bug/1479289 in case anyone feels like reviewing
<mup> Bug #1479289: statushistory uses sequence, fails in multi-env state servers <blocker> <jes> <status> <juju-core:In Progress by hduran-8> <https://launchpad.net/bugs/1479289>
<perrito666> dimitern: those are the patches you are looking for; if you think they are mergeable feel free to, now I really need some sleep
<dimitern> perrito666, ack, thanks
<voidspace> dimitern: ping
<dimitern> voidspace, pong
<voidspace> dimitern: apiserver/testing/stub_network.go
<voidspace> dimitern: if initialised with spaces sets up a few spaces, including a duplicate
<voidspace> dimitern: the duplicate has the comment  "duplicates are ignored when caching spaces."
<voidspace> dimitern: however, of course, AllSpaces is returning this duplicate
<voidspace> dimitern: do you know why the duplicate is there and what the comment means?
<voidspace> also FakeSpace only stores subnet IDs and is unable to return real subnet information - so I guess I have to expand it
<voidspace> dimitern: if you don't know I can ask dooferlad
<dimitern> voidspace, that whole complicated logic around caching spaces is overengineered I'm afraid
<voidspace> dimitern: where is that done?
<dimitern> voidspace, needs to be simplified and get rid of the cache, but that's not affecting you I guess?
<voidspace> I can see a reference to it in apiserver/subnets/subnets.go
<voidspace> well, the cache isn't directly affecting me I don't think
<dimitern> voidspace, the duplicates are there to verify they are ignored
<voidspace> dimitern: but I think I'll need some logic now in the fake AllSpaces method to ignore them there too
<dimitern> voidspace, that sounds good to me
<voidspace> ok
<dimitern> voidspace, I can have a look at some point if you want
<voidspace> and the FakeSpace will need to be more sophisticated to construct real Subnets to return from AllSpaces
<voidspace> the FakeSpace doesn't have references to the actual subnets, just their ids
<voidspace> but I can work my way through that
<TheMue> aaaaaaaaaaaaaargh!
 * TheMue slams his head on his desk
<TheMue> spent the whole time hunting a so-far non-existent failure in a test case, but introduced it himself by changing some of the test data
<TheMue> gnah!
<dimitern> TheMue, bad luck :/
<TheMue> dimitern: yeah, so dumb
<mup> Bug # changed: 1452422, 1459785, 1464335, 1466969, 1468581, 1474606, 1475271, 1477358
<mup> Bug # opened: 1452422, 1459785, 1464335, 1466969, 1468581, 1474606, 1475271, 1477358
<voidspace> dimitern: where does the caching stuff exist?
<voidspace> dimitern: FakeSpace.SubnetIds seems to be unused
<voidspace> dimitern: I'd like to replace SubnetIds with Subnets
<voidspace> if I get rid of the SubnetIds from the test stub SetUp nothing seems to fail
<voidspace> (nothing in apiserver anyway)
<voidspace> so it looks like it's safe to modify
<voidspace> no references to cache in apiserver/spaces
<voidspace> apiserver/subnets/subnets.go has a reference to a spaces cache
<dimitern> voidspace, yes, I only needed the ids then, so feel free to extend it as you need
<voidspace> dimitern: but getting rid of the ids doesn't seem to cause any tests to fail
<voidspace> I can't see where they're used...
<voidspace> dimitern: ok, I'll proceed carefully :-)
<dimitern> voidspace, the caching stuff is in apiserver/subnets on create
<voidspace> dimitern: ok, thanks
<dimitern> voidspace, TheMue, dooferlad, guys, the induction sprint will be 1-4 Sept in London; now it's being requested and we should sort the travel details as soon as we have the req num
<voidspace> dimitern: ok
<dooferlad> dimitern: sounds good
<TheMue> dimitern: fine, already marked a flight at my favourite site for flight information retrieval
<dimitern> cheers!
<dimitern> voidspace, was August, 31 the UK bank holiday?
<dimitern> dooferlad, ^^
<dooferlad> dimitern: yes
<dimitern> dooferlad, ta! added to the calendar
<wwitzel3> ericsnow: ping
<perrito666> well the day I decide to go cut my hair, the barber shop is closed, such is my luck
<dimitern> jam, TheMue, (as you'll be the only ones beside me booking flights) the sprint request is approved, please sort your flights
<TheMue> dimitern: yep, having the req num and I can book. already looked for a flight
<jam> dimitern: TheMue: alexisb: the Sprint Request includes Monday, but I'm pretty sure we decided just Tues-Fri
<alexisb> jam, that is my fault
<jam> alexisb: k. just making sure I didn't miss a conversation
<jam> I replied to your Sprint email
<alexisb> thanks
<dimitern> jam, I assumed the same
<alexisb> that is what happens when you are trying to do 5 things at once
<jam> alexisb: no harm. just getting confirmation and thanks a ton for pushing so fast on it.
<dimitern> alexisb, yes, thanks a lot for organizing it so quickly!
<TheMue> +1
<voidspace> alexisb: yep, thanks
<TheMue> alexisb: ping, hangout
<alexisb> TheMue, heya, sorry my current meeting is running over
<TheMue> alexisb: ok, leaving and wait until you ping me then ;)
<dooferlad> voidspace: we don't do anything about the hotel booking, right?
<alexisb> ok TheMue , I am jumping on
<TheMue> alexisb: /me too ;)
<voidspace> dooferlad: that's the usual situation
<voidspace> bbiab
<xwwt> mgz:  Can you join the windows call?
<mgz> omw
<perrito666> bbl, construction site checking :D
<ericsnow> wwitzel3: could you give me a couple quick reviews?  http://reviews.vapour.ws/r/2301/ and http://reviews.vapour.ws/r/2300/
<wwitzel3> ericsnow: sure, once I cut myself out of these weeds
<ericsnow> wwitzel3: :)
<wwitzel3> ericsnow: I have one test giving me a hard time
<ericsnow> wwitzel3: np
<ericsnow> wwitzel3: can I help?
<wwitzel3> ericsnow: mind taking a look? been stuck on it a while
<wwitzel3> ericsnow: in moonstone
<menn0> perrito666: ping
<perrito666> menn0: pong
<perrito666> menn0: no, I actually found no one willing to review those
<menn0> perrito666: for some reason your latest changes to the statushistory PR aren't showing on RB
 * perrito666 headbutts the kb
<menn0> perrito666: but i've looked on Github and things look good there
<menn0> perrito666: the only problem I can see is that the tests in state/upgrades_test.go aren't checking the error return from the upgrade funcs
<menn0> perrito666: other than that ship it!
<perrito666> menn0: oops, I missed that?
<menn0> perrito666: yep
<menn0> perrito666: i'll update the review anyway and then you can start merging
<menn0> perrito666: note that I also updated the ticket so that both master and 1.24 series are shown
<perrito666> menn0: I seem to have missed that in only one
<perrito666> menn0: I did see the email pass
<perrito666> well for a bug fix that was a big patch
<menn0> perrito666: review updated
<menn0> perrito666: that happens :)
<perrito666> menn0: $$merged$$ now lets wait
<perrito666> sometimes I really miss subversion where I could do tests/commits while working on something else
<perrito666> menn0: how do you feel about repeating the hit with http://reviews.vapour.ws/r/2299/
<perrito666> you will have to look at ghub too
<perrito666> while you were not here fwereade's large patch was merged
<menn0> perrito666: rather than having to review the whole thing again, can you point me at the parts where you had to make significant changes?
<perrito666> menn0: in the end I did not make any changes; the only significant difference is megawatcher became allwatcher (but the file contents did not change) and makeEnvironment was already there
<menn0> perrito666: well in that case, just merge away
<perrito666> seems the bot is a bit lazy today
<axw> alexisb: will be a couple of mins
<alexisb> axw, no worries
<alexisb> wait, I should say no wuckers
<alexisb> thumper, menn0, cherylj can you all please make sure that bug https://bugs.launchpad.net/juju-core/+bug/1481368 gets updated with the latest status and your eod
<alexisb> it is being closely tracked
<thumper> ack
<perrito666> Both patches for https://bugs.launchpad.net/juju-core/+bug/1479289 have landed (and include, now working, the previously commented-out test that caused the bug report in the first place)
<mup> Bug #1479289: statushistory uses sequence, fails in multi-env state servers <blocker> <jes> <status> <juju-core:Fix Committed by hduran-8> <juju-core 1.24:Fix Committed by hduran-8> <https://launchpad.net/bugs/1479289>
<perrito666> bbl dinner
<thumper> hmm...
<thumper> is archive.ubuntu.com down for anyone else?
<thumper> hmm...
<thumper> apt-get update inside my container is having issues
<anastasiamac> thumper: i can reach archive.ubuntu.com
#juju-dev 2015-08-06
<davecheney> works for me
<thumper> yeah...
<thumper> I think my cache was fubared
<davecheney> thumper: are you using 8.8.8.8 by any chance ?
<thumper> nope
<thumper> squid-deb-proxy in the way
<davecheney> but does that use 8.8.8.8 ?
<davecheney> as your name server
<mup> Bug #1482015 opened: featuretest: panic on ppc64le <juju-core:New> <https://launchpad.net/bugs/1482015>
<mup> Bug #1482074 opened: worker:  environSuite.TestInvalidConfig fails on ppc64 <juju-core:New> <https://launchpad.net/bugs/1482074>
<wallyworld> jam: hi, i'm off to soccer soon, but i made some changes to the resources doc. i also posed a question about specifying series. i've also put down a brain dump on mongo migration which i'll share. it's a start
<jam> wallyworld: k
<jam> enjoy soccer
<wallyworld> will try to
<wallyworld> link sent
<dooferlad> dimitern: hangout?
<dimitern> dooferlad, omw
<voidspace> dooferlad: dimitern: ready for review http://reviews.vapour.ws/r/2303/
<dooferlad> voidspace: *click*
<dimitern> voidspace, looking
<voidspace> relatively straightforward in the end
<voidspace> the estimation for spaces list state is 2
<voidspace> I think that's quite an underestimation given how much subnets need to change
<voidspace> :-/
<dooferlad> voidspace: and that is why we review and re-estimate :-|
<voidspace> yeah
<voidspace> "Thinking Fast and Slow" (a book on experimental psychology) has a good section on project estimation
<voidspace> and why humans will always tend to underestimate the time a project takes
<voidspace> I think that agile velocity measuring is actually a pretty good counterbalance against that, but he (Daniel Kahneman I believe) does suggest some other mitigation strategies
<mup> Bug #1482155 opened: lxc restriction on multiple state servers <juju-core:New> <https://launchpad.net/bugs/1482155>
<voidspace> the "optimism bias" is one reason and "What You See Is All There Is" is another reason
<voidspace> the main mitigation strategy is to not estimate based on how long you think a task will take
<voidspace> but first take *how long other similar tasks actually took* as your baseline, and then think of reasons to move from the baseline
<dooferlad> voidspace: I have used two point estimation (worst case, best case) and three point (worst, gut feel for how it should be, best case) and none seem to do any better than what we are doing
<voidspace> right, but starting from a baseline that is not related to thinking about the current task is slightly different
<voidspace> it's effectively a statistical approach
<voidspace> "statistically how long is this likely to take based on similar tasks"
<voidspace> experiments demonstrate that even *knowing* the baseline doesn't affect people's estimation (they still underestimate wildly) unless they *specifically* start from the baseline
<dooferlad> sounds about right!
<voidspace> very interesting book
<voidspace> I recommend it for all sorts of reasons
<voidspace> it just happens that section is relevant
<dooferlad> I would add it to my to-read list, but it is way too long anyway.
<voidspace> heh
<voidspace> I know the feeling
<dooferlad> I just need a solid 10 years in a cave to make a decent dent.
<dooferlad> a cave without net access.
<voidspace> :-)
<voidspace> and then you really will have an "emerging from the cave" experience (a-la Plato)
<dooferlad> and some magic anti-aging time machine
<TheMue> talking about PERT?
<voidspace> PERT?
<TheMue> E = (P + 4L + O) / 6 with U = ((P - O) / 6)^2
<voidspace> that didn't help...
<voidspace> :-p
<TheMue> voidspace: https://en.wikipedia.org/wiki/Program_evaluation_and_review_technique
<voidspace> TheMue: thanks :-)
<TheMue> a very simplified model for software cost estimation, as opposed to FPA or even COCOMO II
<TheMue> in the first line P is pessimistic, L is likely and O is optimistic; E is the effort and U the uncertainty
 * TheMue once needs to translate his 70 slides about the different ways for SCE from German to English ...
<TheMue> contains reasons about wrong estimations too, and all the factors that are part of a full process
<voidspace> TheMue: it's not clear to me that PERT uses a baseline for individual task estimation
<TheMue> okay, different topic, back to work
<voidspace> so I'm not sure it's directly the sort of estimation I was talking about
<TheMue> voidspace: only seen that sentence with three points, so jumped on the wagon too quickly ;)
<voidspace> heh
<voidspace> Kahneman talks about a statistical approach to individual task estimation - not just for a whole project
<voidspace> it seems that PERT will still be subject to the optimist bias and availability bias even in the pessimistic estimation case
<voidspace> and Kahneman has experimental data to back up his assertions about the way we estimate
<TheMue> I once managed a 7 people/3 year project very well with a mix of FPA and COCOMO II, but we had very clear requirements, so it worked
<voidspace> tracking a velocity is a good counterbalance, but it won't tend to improve estimation - merely correct for the inherent errors
<voidspace> right, lunch
<TheMue> voidspace: yep, matches my experience
<voidspace> cool
<TheMue> dimitern: do we have a way to inject a value into a running watcher? would like it to return the tag of an illegal IP address to provoke an error
 * TheMue steps out for lunch, reads answer later
<dimitern> TheMue, not outside the state package, as this will require inserting an invalid document in the collection the watcher is watching
<dimitern> TheMue, alternatively, you can use a mock watcher of course
<TheMue> dimitern: hmm, mocking would be an alternative. seems like the only way to kill my worker in a bad way :)
<TheMue> dimitern: it's a funny test: after hardening the worker it shows that it can still die after an internal error
<voidspace> dooferlad: the nice thing about a magic space name ("diediedie") is that space name is already passed down from BackingInstance to Space to Subnets
<voidspace> dooferlad: to add another way to configure Subnets to fail (as I only have access to the backing instance) requires adding infrastructure to pass that knowledge down from BackingInstance to Spaces to Subnets
<voidspace> dooferlad: all for that one test
<voidspace> dooferlad: in terms of obfuscation, I thought "diediedie" was pretty clear...
<voidspace> :-)
<voidspace> dimitern: I replied to two of your review comments
<TheMue> mock-di-di mock-di-da *sing*
<voidspace> dimitern: also, you said it was "mostly done, apart from the apiserver method signature"
<voidspace> dimitern: what do you mean by the apiserver method signature, and what isn't done about it?
<voidspace> dimitern: you mean I missed something?
<dimitern> voidspace, did you get my other 2 comments?
<dimitern> voidspace, I've replied
<voidspace> dimitern: yeah, about to push the other two changes
<dooferlad> voidspace: it isn't just the diediedie thing, it is the looking at subnet IDs and using that to decide how to construct the fake subnet.
<voidspace> dooferlad: that's not what the comment says
<voidspace> dooferlad: it's just constructing precanned data
<voidspace> dooferlad: deterministically constructing it from input
<voidspace> dooferlad: I don't think it's unclear
<voidspace> dooferlad: we only have SubnetIds on the Space
<voidspace> dooferlad: and really it needs to stay like that because AddSpace takes a subnetId not a subnet
<voidspace> dooferlad: so unless we want our backing stub to start growing a full implementation, we need a way of going from subnet id to full subnet details
<voidspace> dooferlad: I could turn it into a function that does it, store the generated subnets and have AddSpace use it to generate a subnet
<voidspace> dooferlad: but it would be exactly the same thing, just extra work
<dimitern> voidspace, that's why i asked did you get my other comments - I tried to explain there about the signature
<voidspace> dimitern: ah!
<voidspace> dimitern: you mean lose the Error on the result and return an error instead
<voidspace> dimitern: hmmm... isn't it normal for result params to have an Error field?
<voidspace> dooferlad: I've added my explanation (including why I don't like the alternatives - especially for the CIDR magic) as a comment to the review
<voidspace> dooferlad: getting rid of "diediedie" would be easier, not too much work - just a method on BackingInstance and a corresponding flag on FakeSpace I think
<voidspace> dooferlad: so maybe it's worth it
<voidspace> (I agree in general that magic names aren't ideal.)
<voidspace> dooferlad: and subnet.get_params()! You've been doing too much Python :-)
<voidspace> (method name casing)
<dooferlad> voidspace: You could put the code to generate those subnets in StubBacking.SetUp so at least all the set up is in one place.
<voidspace> dooferlad: no, see the comment about AddSpace
<voidspace> dooferlad: we need to do it on the fly too
<voidspace> dooferlad: so in terms of "one place", Subnets is the best place
<voidspace> dooferlad: we could do it in SetUp *and* AddSpace
<voidspace> but that seems worse
<voidspace> to me anyway
<voidspace> and we'd need to add an extra field to FakeSpace to store them
<voidspace> there will also be a method to add a subnet to an existing space as well
<voidspace> that will then be a third place needing to do the generation
<voidspace> dooferlad: I could add a comment to the SetUp, so anyone investigating where the data is generated will be able to find it easily
<dooferlad> I guess the question really is should the stub emulate state well enough to accomplish this and will it be useful for other tests. That I don't have an answer for without giving it some thought, but leaving it as it is now and adding comments seems good.
<voidspace> dooferlad: currently it's just for this test
<dooferlad> that way the giving it some thought gets done as part of writing other bits
<voidspace> dooferlad: and I don't like writing code for imaginary future use cases
<dooferlad> but, yes, comments++
<voidspace> if we have a need for BackingInstance to better emulate state then we should add that at the point we need it
<dooferlad> indeed
<TheMue> dimitern: you've got mail about an idea. let me know what you think.
<voidspace> so I'll get rid of diediedie and add a comment about subnet data generation to SetUp
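The flag-based failure injection voidspace settles on (an explicit switch on the stub instead of the magic "diediedie" space name) could look roughly like this; the type and method names are illustrative, loosely echoing the `SetSpaceSubnetsFail` mentioned later, not the real juju stub:

```go
package main

import (
	"errors"
	"fmt"
)

// StubBacking sketches a test double with explicit failure injection:
// the test arms a flag rather than smuggling a magic value through
// the data under test.
type StubBacking struct {
	subnetsFail bool
}

// SetSpaceSubnetsFail arms the stub so the next Subnets call errors.
func (b *StubBacking) SetSpaceSubnetsFail() { b.subnetsFail = true }

// Subnets returns canned data unless the failure flag is set.
func (b *StubBacking) Subnets() ([]string, error) {
	if b.subnetsFail {
		return nil, errors.New("boom")
	}
	return []string{"10.0.0.0/24"}, nil
}

func main() {
	b := &StubBacking{}
	if _, err := b.Subnets(); err == nil {
		fmt.Println("ok")
	}
	b.SetSpaceSubnetsFail()
	if _, err := b.Subnets(); err != nil {
		fmt.Println("injected failure:", err)
	}
}
```

The advantage over a magic name is that the failure path is visible at the call site of the test, rather than hidden inside the fixture data.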
<dooferlad> I see TheMue is now using IRC as a push email notification. Next, telnet to ping people about IRC messages :-D
<voidspace> dooferlad: you have a params.SpaceResult - presumably the result of calling CreateSpace
<voidspace> dooferlad: it just holds a Tag and an Error
<voidspace> I'm about to create params.Space (a rename of params.SpaceListResult)
<voidspace> dooferlad: they have no fields in common... but still I wonder if they should be unified?
<TheMue> dooferlad: yep, working on a bot to automate it
<dimitern> voidspace, result structs have Error fields only when part of a bulk call of some sort (entities -> someresults, error)
<voidspace> dimitern: so just returning the error is sufficient? ok. I'm already doing that anyway, I'll just kill the Error field
<dooferlad> TheMue: but where will I telnet to? Perhaps you could call me?
<dimitern> voidspace, in this case we have the same results (assuming no filtering)
<dimitern> voidspace, well, ListSpaces is not a bulk call anyway and can either return all results or error out (no partial results possible)
<TheMue> dooferlad: ok, will add a pluggable channel adaptor, so you can add your telnet module
<dooferlad> TheMue: sweet. I will post you a letter with my phone number.
<TheMue> dooferlad: hmmm, missing the fax here
<voidspace> dimitern: yep, cool
<dimitern> TheMue, I like your proposal actually, but how do you estimate the work?
<dooferlad> TheMue: oh dear, I don't know your address and I won't be able to see smoke signals at this distance. Oh well.
<dimitern> TheMue, not sure what you mean by "the ip address watcher would be part of networking api"
<TheMue> dimitern: ah, great. it should be pretty simple given the experience gained; I'm only angry that I didn't write it down earlier
<dimitern> TheMue, as a professor of mine liked to say "once you're done with a task, you already know how you should've done it" :)
<TheMue> dimitern: IMHO as it is a business domain I would put all networking related functionality into one API, not like e.g. now the extra addresser, which only makes sense for the addresser worker
<TheMue> dimitern: a good professor, indeed
<TheMue> dooferlad: how about a carrier pigeon?
<dooferlad> TheMue: thought about that but with neither of us having a pigeon trained to fly to the others address, I think we are out of luck.
<TheMue> dooferlad: will use the London sprint to work on a solution. maybe installing a direct wire for telegraphing
<dimitern> TheMue, I'm not sure about a networking api in general - let's keep that in mind for the time when we'll have pluggable brokers/providers
<TheMue> dimitern: ok
 * TheMue continues mocking for an API error
<TheMue> dimitern: will make a more concrete outline to see, if a later change of the approach would make sense
<voidspace> dimitern: dooferlad: review updated
<voidspace> dimitern: ah damn, forget JSON tags
<voidspace> dimitern: will do that now
<voidspace> dooferlad: see the SetSpaceSubnetsFail monstrosity you're responsible for
<dooferlad> voidspace: you have answered all my queries. Have a +1
<voidspace> dooferlad: thanks
<dimitern> TheMue, ok
<dimitern> voidspace, please don't forget json tags
<voidspace> dimitern: done
<voidspace> dimitern: as I have a +1 from dooferlad I'm going to merge
<tasdomas> who could I talk to about actions in juju-core ?
<dimitern> voidspace, ahh.. now I get why you prefer the verbose slice-item multi-line formatting over the shorter []X{{\nF: 42,\n}, {\nF: 2,\n}}
<dimitern> voidspace, it looks more pythonic :)
<dimitern> voidspace, and you have a +1 from me as well
<voidspace> dimitern: I'm not sure it's more pythonic, I think it's more readable
<voidspace> dimitern: it adds one line per slice member, and when you have nested slices (as in this example) it's very clear what type you're looking at
<TheMue> dimitern: take a look at http://paste.ubuntu.com/12013753/, that would be the API function. api.releaseIPAddress() is adapted from the original code; I only changed the parameter to be an address.
<TheMue> dimitern: it's pretty small ;)
<voidspace> dimitern: if it wasn't a nested slice I wouldn't do it
<voidspace> but nested "untyped" slice members is a bit hard to read (you need to keep a state stack to remember which type you're reading)
<voidspace> dimitern: anyway, thanks
<voidspace> dimitern: should I go to space list (api) or space list (state)
<voidspace> dimitern: I'm not sure which is more logical as a next step, I've been looking at state but it occurs to me that (api) might be more logical
<voidspace> or should that be done last...
<voidspace> dimitern: the list card says that it assumes that "create" has already been done
<voidspace> dimitern: but I think that assumption is for the estimate only, not actually a requirement of implementation order
<dimitern> voidspace, so far I found it's best to do cli -> apiserver -> api -> state
<voidspace> dimitern: so api next then
<voidspace> cool
<xwwt> mgz: I may be a cpl min late
<mgz> xwwt: no probs, poke me when ready
<dimitern> TheMue, CleanupIPAddresses should not return the first error - it's better to keep going and return a typed error like "some addresses not released (will retry)"
<dimitern> TheMue, and I'll suggest adding some debug logging around the steps
<mgz> I was just wondering what the allen v watcher was and why we named a watcher after someone...
<TheMue> dimitern: yeah, thought about that too
<TheMue> dimitern: but this is only an outline to show how small it would be
<dimitern> TheMue, +1
<TheMue> dimitern: say ok and I quickly create a new branch, copy the reusable code of my current branch and go with it *lol*
 * perrito666 considers moving to uk just to be able to see tgbbo sooner
<mgz> ehehe, baking nut
<perrito666> I have to wait a whole day to see it on the internet
<perrito666> and months to get it on bbc
<perrito666> our version of bbc is quite subpar
<perrito666> mgz:  I am now into a whole new level of torture, watching cooking shows while on a diet :p
<natefinch> perrito666: so don't do that :)
<perrito666> natefinch: I assume you are talking about the diet
<natefinch> lol
<natefinch> no
<perrito666> natefinch: one doesn't miss tgbbo, it is not an option, even living in the spanish colonies as I do :p
<natefinch> perrito666: never heard of it
 * perrito666 slaps natefinch with a cookbook by mary berry
<natefinch> be back in ~25 minutes
<perrito666> aghh why on earth does nobody use /away instead of doing that?
<katco> ericsnow: wwitzel3: natefinch-afk: ok planning time
<wwitzel3> katco: rgr, refilling my water
<perrito666> anyone seen fwereade lately?
<cherylj> does anyone have information on using the garage MAAS?  I need an environment where I can deploy KVM containers.
 * dooferlad wonders if he is the only person who gets annoyed by KVM being called a container when it isn't.
<perrito666> dooferlad: you most likely are :p since we all call it that
<TheMue> dimitern: some changes at http://paste.ubuntu.com/12014105/, also contains the important part of the worker. the only missing part now is the correct signal that it's not a hard error but a try-again
<TheMue> dimitern: so yes, missing some QA aspects, but shows the idea how few code it is
<cherylj> terminology aside, my question stands :)
<TheMue> dimitern: simply by moving logic to the server side
<dooferlad> sorry cherylj, no. I work in a converted garage and it has a MAAS in it, but that isn't what you were asking about :-|
<perrito666> dooferlad: nice, and where did you put the car?
<dooferlad> perrito666: it never went in the garage anyway. Left part at the front for bikes.
 * perrito666 went for the laundry room
<lazyPower> wwitzel3: natefinch-afk - so I've done quite a bit of hacking on the containerizer I wanted to land for tracking branches, but the sizes are quite large and I don't really recommend a 2.8gb container image from the registry. So if you want/need any of these branch bins in an isolated space just lmk. I'm abandoning the effort to get these published in an automated fashion.
<lazyPower> it turned into quite the rabbit hole
<katco> natefinch-afk: ping?
<wwitzel3> lazyPower: ahh ok
<wwitzel3> lazyPower: thanks, yeah, I'd love to look at what you have
<lazyPower> wwitzel3: https://github.com/chuckbutler/jujubox/tree/nightly
<lazyPower> repo is a little messy, i still have cleanup to do in there
<katco> natefinch-afk: ping?
<katco> natefinch-afk: ping?
 * TheMue hears an echo on the channel, ...annel, ...nel, ...l
<jw4> tasdomas: did you get your actions questions answered?
<perrito666> bbl
<katco> perrito666: natefinch: team meeting
<fwereade> evening all
<natefinch> evening fwereade
<alexisb> heya fwereade !
<perrito666> hey katco sorry got a doctor appt and wasnt out of there in time
<alexisb> did you get power back?
<perrito666> fwereade: hey, welcome back
<fwereade> so far I have had four public officials and two electricians hemming and hawing and replacing one bit after another
<fwereade> there was going to be a third electrician today but he turned out to be fictional
<fwereade> I am more than somewhat vexed by the situation
<fwereade> am currently in a bar up the road for the wifi
<perrito666> fwereade: well, now you know how everyone else sees software engineers debugging
<fwereade> and kinda sorta for the alcohol too now I think of it
<fwereade> interestingly, the electricity meter (replaced this morning) dated from 1966
<fwereade> didn't *quite* make it 50 years
<fwereade> anyway I'm mainly on to fetch and rebase and maybe push a couple of branches if they're clean
<perrito666> fwereade: one would think that, of all possible problems a house can have, electricity supply is one of the easiest to debug by simple binary search
<perrito666> fwereade: your extremely big pr to master was merged
<fwereade> but if I can do anything for anyone while I'm here, well, I'm here
<fwereade> perrito666, I saw, thanks :)
<alexisb> fwereade, I sent you mail on a critical bug
<alexisb> just fyi
<katco> fwereade: beer makes you code better, or at least think you are.
<alexisb> beer has that affect for many popular activities
<perrito666> katco: beer makes you code like this https://www.youtube.com/watch?v=KEkrWRHCDQU
<fwereade> alexisb, ok, menn0's explanation sounds like it's on the money
<fwereade> alexisb, I will look into it
<alexisb> thanks
<natefinch> try ssh whoami.filippo.io
<natefinch> (it's harmless but interesting)
<perrito666> wallyworld: leme know when you are here
<wallyworld> perrito666: hey, just about to drive my wife to work, hopefully back soonish
<perrito666> wallyworld: np, just ping me whenever you feel like
<perrito666> :)
<perrito666> I have literally nothing better to do
<arosales> wallyworld, congrats on the spotlight award :-)
 * perrito666 will ask an autograph on the next meeting
<katco> wallyworld: dude! the spotlight award! congratulations!!! :D
<katco> wallyworld: definitely well deserved :D
<perrito666> katco: we at tanzanite are thinking on changing the name to Ian's people
<perrito666> and also wearing Ian's face on t-shirts
<perrito666> and of course, tattoos
 * perrito666 np: Men at work - down under
<perrito666> wow, I actually know 2 songs from these people
<katco> perrito666: how does "mountain man" (axw) feel about that?
<perrito666> katco: about what of all that?
<katco> perrito666: about renaming the team to "ian's people"
<perrito666> katco: (also we dont need to ask him, he is still 2 hs away from morning) :p
<katco> perrito666: haha, i always forget "mountain man" axw lives in a different time zone than wallyworld
<perrito666> it is 6 AM for him still
<mwhudson> the fact that perth is connected to the rest of australia by land is an interesting academic fact that has no effect on reality
<anastasiamac> perrito666: like some of ur ideas but not the ones above :D
<axw> katco: mountain man? nani?
<anastasiamac> axw: we should have a sprint in perth to make abslutely sure that everyone apprecaites paradise u r in :D
<axw> anastasiamac: I think I have missed something :)
<anastasiamac> axw: I'd be surprised if u have :D
<axw> anastasiamac: according to flock.teleport.org, if tanzanite has a sprint we should all go to sydney
<anastasiamac> axw: since everyone is curious/ talking about perth, I think we should sprint there :D
<axw> anastasiamac: are you sure you want to be in the State of Excitement?
<anastasiamac> axw: m always in the state of excitement
<mup> Bug #1482226 opened: juju status with 'prefer-ipv6' shows address, not DNS name. <amd64> <apport-bug> <trusty> <uec-images> <juju-core:New> <https://launchpad.net/bugs/1482226>
<axw> anastasiamac: in case that's lost on you... http://theworstofperth.com/2007/11/03/state-of-excitement/
<anastasiamac> axw: but Perth also has great food :D
 * anastasiamac needs to run - ttyl
<wallyworld> perrito666: you back yet?
#juju-dev 2015-08-07
<mup> Bug #1482502 opened: API open error: EOF during bootstrap <ci> <intermittent-failure> <maas-provider> <test-failure> <juju-core:Triaged> <juju-core windows-public-clouds:Triaged> <https://launchpad.net/bugs/1482502>
<mup> Bug #1430205 changed: lxc template needs refreshing every 24 hours <lxc> <sts> <juju-core:Fix Released> <https://launchpad.net/bugs/1430205>
<mup> Bug #1482513 opened: checklist failed with "at least three machines that have more than one disk" <juju-core:New> <https://launchpad.net/bugs/1482513>
<voidspace> dooferlad: are you around?
<dooferlad> voidspace: here now
<dooferlad> voidspace: had rubbish night, so later than wanted start
<voidspace> dooferlad: thanks
<voidspace> dooferlad: sorry about your night
<voidspace> dooferlad: and thanks for the review
<voidspace> fwereade_: you have internet back yet, or you just found somewhere with wifi to work?
<dooferlad> voidspace: no problem - wasn't a big hunk of code!
<voidspace> dooferlad: hehe, not exactly...
<voidspace> dooferlad: did your state subnets branch land?
<voidspace> dooferlad: if so you could move the card from Landing to Merged
<dooferlad> voidspace: done. Thanks for the reminder.
<voidspace> dimitern: I may need help with adding reference counting to subnets
<voidspace> dimitern: I'll look at how it's done in other places first
<voidspace> dimitern: maybe you could suggest somewhere else to look, otherwise I'll go spelunking through code
<voidspace> dimitern: dooferlad: FYI my api branch landed
<dimitern> voidspace, certainly
<dimitern> voidspace, a nice, isolated example would be service settings ref counts
<voidspace> dimitern: thanks
<dimitern> voidspace, look at and around settingsRefsDoc
<voidspace> dimitern: so it requires a separate collection for the refs
<TheMue> dimitern: can you also provide edit access to the spreadsheet?
<dimitern> voidspace, essentially you need explicit txn-revno to assert the collection hasn't changed
<dimitern> voidspace, it doesn't require a separate collection, but fwereade I know prefers this approach
<voidspace> dimitern: yeah, it's more the general shape of the infrastructure (that we use a separate doc for example) and where the refs are updated
<dimitern> TheMue, which one ? for the sprint?
<voidspace> dimitern: if fwereade prefers it that sounds like a good reason to do it :-)
<TheMue> dimitern: exactly, otherwise I do it the standard way via alexisb
<dimitern> voidspace, yeah, and to make life easier add helpers inc and dec ops and assert unchanged ops
<voidspace> ok
<dimitern> TheMue, let me see if I can do it
<TheMue> dimitern: thx
<dimitern> TheMue, try now
<TheMue> dimitern: it works, great, thx again
<dimitern> TheMue, np
<dimitern> voidspace, dooferlad, both of you should have edit access as well
<voidspace> dimitern: settingsIncRefOp and settingsDecRefOps (inconsistent naming!) both seem straightforward
<voidspace> although there, decrementing the ref count to zero deletes the settings
<voidspace> we don't want that behaviour
<dimitern> voidspace, the naming is inconsistent because one is returning txn.Op and the other []txn.Op, but fair point
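The settings ref-count pattern discussed above can be sketched roughly like this. This is a self-contained Go sketch, not juju's actual code: the `Op` and `M` types are minimal stand-ins mirroring the shape of gopkg.in/mgo.v2/txn's `txn.Op`, and the `incRefOp`/`decRefOps` names and the "subnetrefs" collection are hypothetical. Per voidspace's point, the dec variant here deliberately never removes the doc at zero:

```go
package main

import "fmt"

// M and Op are stand-ins for bson.M and txn.Op so the sketch compiles
// on its own; real code would use gopkg.in/mgo.v2/txn.
type M map[string]interface{}

type Op struct {
	C      string // collection name
	Id     string // document id
	Assert M      // precondition asserted by the transaction
	Update M      // update applied if the assert holds
	Remove bool
}

// incRefOp returns a single op incrementing the ref count, asserting
// the doc still exists with a positive count (mirroring the shape of
// settingsIncRefOp, which returns a single txn.Op).
func incRefOp(collection, key string) Op {
	return Op{
		C:      collection,
		Id:     key,
		Assert: M{"refcount": M{"$gt": 0}},
		Update: M{"$inc": M{"refcount": 1}},
	}
}

// decRefOps returns a slice of ops decrementing the ref count
// (mirroring settingsDecRefOps returning []txn.Op). Unlike the
// settings variant, reaching zero does NOT delete the doc here.
func decRefOps(collection, key string) []Op {
	return []Op{{
		C:      collection,
		Id:     key,
		Assert: M{"refcount": M{"$gt": 0}},
		Update: M{"$inc": M{"refcount": -1}},
	}}
}

func main() {
	fmt.Println(incRefOp("subnetrefs", "subnet-0").C) // prints: subnetrefs
}
```

The asserts are what make this safe under mgo/txn: if another transaction changes the count between building and running the ops, the assert fails and the caller retries.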
<rogpeppe> anyone know if juju CI is currently blocked on critical bugs?
<rogpeppe> i remember there was a website you could check, but i can't remember the name
<rogpeppe> dimitern: ^
<dimitern> rogpeppe, http://juju.fail
<dimitern> rogpeppe, and it seems it's blocked
<rogpeppe> dimitern: ok, i won't bother trying to propose my one line change to provider/dummy :)
<rogpeppe> dimitern: thanks
<rogpeppe> dimitern: i'll bookmark that
<dimitern> rogpeppe, it's great :)
<rogpeppe> dimitern: it should have a favicon :)
<dimitern> rogpeppe, marcoceppi accepts contributions for it - I've done it once :)
<rogpeppe> dimitern: then i'd have to make one :)
<perrito666> I fix committed that bugfix why is it still blocked?
<perrito666> There is no ci failure to support it and the relevant test is passing I say just change the severity
<perrito666> Dimitern rogpeppe ^
<rogpeppe> perrito666: i've no idea
<dimitern> perrito666, no idea - ping mgz, sinzui ?
 * perrito666 goes back to breakfast, irc on the phone is very annoying 
<perrito666> Mgz?
<dimitern> TheMue, I'm looking at your branch
<TheMue> dimitern: the WIP one or the not yet PR branch with the cleaner approach?
<dimitern> TheMue, about the tests we're talking about
<dimitern> TheMue, the one I can see (ending with "-simplified-approach" or something)
<TheMue> dimitern: ah, fine, just adding tests here
<dimitern> TheMue, let's finish all the tests with mocking, but at the end add a separate suite embedding JujuConnSuite (with comment that it's an integration test) that tests the full stack end-to-end
<TheMue> dimitern: yep, will do, sounds logical to me
<tasdomas> dimitern, ping?
<dimitern> TheMue, i.e. add some IPs, make some of them dead (not all), call Cleanup...() and ensure they are released and removed
<TheMue> dimitern: so far integration has been on worker level only. a bit late.
<dimitern> tasdomas, pong
<tasdomas> dimitern, a quick testing question
<dimitern> tasdomas, sure
<TheMue> dimitern: yeah
<tasdomas> dimitern, in https://github.com/juju/juju/pull/2927/files#diff-34326fed18c50f27a1b95570dc428e0aR573 I'm adding a call to state from runner.Context
<tasdomas> dimitern, what would be the best way to patch that call to state, making it return an error?
<tasdomas> uniter.State is not an interface, so I can't just mock it out
<dimitern> tasdomas, let me have a look
<voidspace> ericsnow: what's a "WMP feature test"?
<mgz> perrito666: I marked bug as released to unblock branches - as I understand it we don't have coverage at present without jes testing, also master failed for other reasons when running the rev.
<voidspace> ericsnow: WPM even
<perrito666> mgz: oh, why did it fail?
<mup> Bug #1479289 changed: statushistory uses sequence, fails in multi-env state servers <blocker> <jes> <status> <juju-core:Fix Released by hduran-8> <juju-core 1.24:Fix Released by hduran-8> <https://launchpad.net/bugs/1479289>
<mgz> perrito666: juju not coping with errors from ec2 when deploying a bunch of stuff with quickstart, and joyent networking not being sorted yet
<mgz> so, known robustness issues
<perrito666> meh
<TheMue> dimitern: ping
<dimitern> TheMue, pong
<mup> Bug #1482634 opened: leadership manager lifetime tied to state <tech-debt> <juju-core:Triaged by fwereade> <https://launchpad.net/bugs/1482634>
<TheMue> dimitern: do you have a good example for working with a mock in one suite and the dummy in another suite in the same package?
<mup> Bug #1482634 changed: leadership manager lifetime tied to state <tech-debt> <juju-core:Triaged by fwereade> <https://launchpad.net/bugs/1482634>
<TheMue> dimitern: the test itself is done, I only needed the coretesting.MgoTestPackage(t)
<TheMue> dimitern: and then, because they are two suites, one panics
<TheMue> dimitern: one simple solution could be to not mock and use dummy for all tests of the API, but hmm, we tried to have less mongo starts
<TheMue> dimitern: running the one with our Mgo... and the other one with gc.TestingT(t) would be nice. *sigh*
<dimitern> TheMue, you only need to call MgoTestPackage once, regardless of how many suites you have
<dimitern> TheMue, I can't think of a good example off hand for a package where there are both mocked unit tests and dummy provider based integration tests
<TheMue> dimitern: yes, but doing so in TestPackage leads to a "Panic: cannot share a state between two dummy environs; old "only"; new "dummyenv""
<dimitern> TheMue, can I have a look at the code that causes the panic?
<TheMue> dimitern: sure, I'll quickly commit and push the branch
<TheMue> dimitern: here it is https://github.com/TheMue/juju/blob/addresser-simplified-approach/apiserver/addresser/addresser_test.go
<mup> Bug #1482634 opened: leadership manager lifetime tied to state <tech-debt> <juju-core:Triaged by fwereade> <https://launchpad.net/bugs/1482634>
<TheMue> dimitern: and package_test.go contains the former code commented plus the call of MgoTestPackage
<marcoceppi> dimitern glad you still use it!
<TheMue> dimitern: the DummyAddresserSuite is now the new 2nd one using the dummy provider while the AddresserSuite uses the mock
<TheMue> dimitern: the test at the bottom still misses the final check that the address has been removed. wanted to test that far and then had the trouble with Mgo...
<dimitern> marcoceppi, absolutely - that's my go-to source to check for blockers :)
<TheMue> marcoceppi: dimitern: juju.fail? yeah, a very good and helpful tool
<perrito666> I love it, just learned a couple of days ago about it
<dimitern> TheMue, so you're saying calling MgoTestPackage in a package which has some suites using JujuConnSuite and some not panics?
<perrito666> fwereade: hehey, internet is back?
<TheMue> dimitern: yep
<dimitern> TheMue, if that's the case too bad :( we can move that integration test to the worker then
<TheMue> dimitern: *sniff*
<TheMue> dimitern: liked the idea of having pure functional tests as well as integration in a so compact way
<TheMue> hmmm, "in such a compact way" sounds better and more correct
<fwereade> perrito666, yeah
<perrito666> fwereade: what was the culprit?
<fwereade> perrito666, we're pretty sure we tracked it down to an a/c unit that hadn't actually been unplugged from everything
<fwereade> perrito666, so we've got someone coming over to take a look at that
<perrito666> fwereade: :) dont forget to ask for the neutral line check
<fwereade> perrito666, on the a/c? or everywhere?
<TheMue> dimitern: that's btw one reason why I like the standard go tests where I create the environment in helper functions as a first statement in TestFoo(t *testing.T)
<perrito666> fwereade: everywhere, it is not normal to have the protection break on all 3 circuits if only 1 is faulty
<natefinch> TheMue: +1 for standard go tests w/ a help if setup is needed.
<fwereade> perrito666, yep. will do
<TheMue> dimitern: so my tideland test typically load the assertion type/helper as a first statement, then create their environment(s) and then do the tests
<TheMue> natefinch: thx
 * perrito666 +1s natefinch 
<natefinch> TheMue: I think it's just a lot more clear w/o magical functions getting called that you don't even see.   Plus, I think mgo's test timing doesn't include setup or teardown, which means you miss a lot of important information about how long a test really takes
<TheMue> natefinch: yep
<natefinch> TheMue: er s/mgo/gocheck/
<TheMue> natefinch: here at https://github.com/tideland/golib/blob/master/cells/cells_test.go#L168 is an example how I'm doing it, always trying to be as simple as possible
<fwereade> I know I'm OCR, but cann I hit someone up for a review of http://reviews.vapour.ws/r/2315/ please? somewhat urgent
<TheMue> natefinch: and if a larger setup is needed it's simply a private function in the same test (extra or same file)
<natefinch> TheMue: yep.
<TheMue> natefinch: the task of gocheck for simpler tests is done by https://godoc.org/github.com/tideland/golib/audit#Assertion
<natefinch> TheMue: the nice thing about using the standard test's framework is better integration with third party tools, like goconvey's web UI.
<TheMue> natefinch: yes, another big advantage
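The style TheMue and natefinch describe can be sketched like this: load an assertion helper first, build the environment explicitly as the test's first statement, no suite SetUpTest magic. The `assertion` type here is a tiny hypothetical stand-in for the tideland `audit.Assertion` helper linked above, and `newEnv` is an illustrative setup helper, not a real juju function:

```go
package main

import "fmt"

// assertion is a minimal stand-in for audit.Assertion: it records
// mismatches instead of calling t.Fatalf, so the sketch runs as a
// plain program. The real helper does much more.
type assertion struct{ failures int }

func (a *assertion) Equal(got, want interface{}, msg string) {
	if got != want {
		a.failures++
		fmt.Printf("FAIL %s: got %v, want %v\n", msg, got, want)
	}
}

// newEnv is a hypothetical setup helper. Because each test calls it
// explicitly, setup is visible in the test body and its cost is
// attributed to the test's own timing.
func newEnv() map[string]string {
	return map[string]string{"agent-version": "1.24.4"}
}

func testAgentVersion(a *assertion) {
	env := newEnv() // environment created as the first statement
	a.Equal(env["agent-version"], "1.24.4", "agent version")
}

func main() {
	a := &assertion{}
	testAgentVersion(a)
	fmt.Println("failures:", a.failures) // prints: failures: 0
}
```

With the standard `testing` package the same shape applies: the helper takes a `*testing.T`, calls `t.Helper()`, and the setup line stays visible in every test.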
<perrito666> fwereade: ship it
<perrito666> I am satisfied by the amount of explanations around the dirty hack :)
<fwereade> perrito666, tyvm
<fwereade> ericsnow, ty for your review; and by the way, RB seems not to have picked up https://github.com/juju/juju/pull/2932
<perrito666> rb is missing a lot of things lately
<natefinch> oh for fsck's sake... the tests are wiping environment variables... now I understand why the tests keep telling me they can't find $HOME
<perrito666> lol
<katco> natefinch: standup
<voidspace> perrito666: a few questions on your version checker PR
<voidspace> perrito666: other than my questions looks good
<perrito666> voidspace: hey thanks :)
<ericsnow> fwereade: re: that review request, it barfed because of unicode (in Python 2)
<mup> Bug #1482513 changed: checklist failed with "at least three machines that have more than one disk" <Landscape Server:Incomplete> <https://launchpad.net/bugs/1482513>
<fwereade> ericsnow, eww, what did I do to it?
<ericsnow> fwereade: not your fault :)
<perrito666> heh, thank god I didn't write my name properly on github nor in my git account
<perrito666> that usually blows up djangos with unicode errors
<fwereade> ericsnow, what should I do to goose it into action when that happens?
<ericsnow> fwereade: I thought I had fixed it but I guess not
<ericsnow> fwereade: create a new pull request? (discard the old one)
<perrito666> ericsnow: you never fix all unicode problems
<fwereade> ericsnow, ok, cheers
<ericsnow> perrito666: :)
 * perrito666 used to be the "reminder there is a 'rest of the world'" to former co workers doing python web apps
<perrito666> I wont say what was their nationality :p
<dooferlad> voidspace / dimitern: I am being told by Juju that it is out of IP addresses when trying to perform a static allocation. Do you know how to reset the pool? This is with MAAS.
<fwereade> ericsnow, does PR2939 have the same problem?
<dooferlad> I have already reset the leases files it is using
<voidspace> dooferlad: "reset the pool"? there is no way...
<voidspace> dooferlad: juju attempts to allocate addresses from the static range
<voidspace> dooferlad: if it fails (MAAS returns an error) it marks the address as unavailable
<perrito666> fwereade: how do you even cause a unicode error? do you have special chars in your kb?
<voidspace> dooferlad: if it succeeds it uses the address
<fwereade> perrito666, I didn't think I was doing anything special -- no :hankey:s in the comments or anything
<dooferlad> voidspace: so what do I do when MAAS thinks it has run out?
<voidspace> dooferlad: well if *MAAS* thinks you've run out it usually means you have
<voidspace> dooferlad: increase the static range
<voidspace> *statically allocated range*
<voidspace> that should work
<dooferlad> voidspace: I have two physical machines, one is on, there are three containers on it. I have 100 addresses in the static range.
<voidspace> dooferlad: something has gone wrong then :-)
<dooferlad> voidspace: yea.
<voidspace> some error was causing address allocation to fail and addresses be marked as unavailable
<voidspace> you could poke mongo directly and clear all addresses marked as unavailable
<voidspace> we should have an issue (we've certainly talked about) retrying unavailable addresses
<voidspace> *for retrying
<perrito666> voidspace: why on earth are we not using more periodicworker
 * perrito666 facepalms
<voidspace> perrito666: I don't know... :-)
 * perrito666 changes it a bit to use a custom clock
<dooferlad> voidspace: https://gist.github.com/dooferlad/082996faeb2168a76ab4 does it
<dooferlad> voidspace: provided your maas is called "maas"
<voidspace> dooferlad: my maas usually is called maas
<voidspace> dooferlad: ah, so they were allocated with maas
<voidspace> I wonder how they got allocated
<voidspace> we must have allocated them, then something *else* went wrong so we marked the address as unavailable and tried another one
<voidspace> still worrying
<voidspace> dooferlad: hmmm... maybe not
<voidspace> I'm slightly confused but never mind, if it worked then great
<dooferlad> voidspace: juju destroy-environment -y --force maas
<dooferlad> voidspace: I bet repeatedly using that doesn't help
<voidspace> dooferlad: hah, yes
<voidspace> dooferlad: they just remained allocated from previous runs
<voidspace> that makes sense
<voidspace> if you use maas 1.8 and trunk it should no longer be an issue
<voidspace> they'll now be freed correctly
<dooferlad> I am
<voidspace> using devices
<voidspace> in theory...
<dooferlad> though I ran out when using net-cli
<voidspace> the devices are correctly cleared and we're assuming maas also releases their IP addresses
<voidspace> that may not have the new devices stuff in it
<voidspace> you can tell - when you create a container check if a corresponding device is created (visible in the MAAS UI)
<dooferlad> Oh, that's new (to me):  ERROR juju.cmd supercommand.go:429 upgrade in progress - Juju functionality is limited
<dooferlad> voidspace: ohh, now there are devices! Shiny!
<voidspace> right EOW
<voidspace> happy weekend everyone
<dimitern> dooferlad, hey
<dooferlad> Hi
<dimitern> dooferlad, did you manage to fix your ip addresses issues?
<dimitern> dooferlad, I have a tool for this btw
<dooferlad> Yep. Pasted above
<dooferlad> Well, mine is pasted above
<dimitern> dooferlad, ah :) good - but here's mine as well - expectedIP
<dimitern> ?!
<dimitern> https://github.com/dimitern/go-tools
<dooferlad> dimitern: cool, will try to remember that is there.
<dooferlad> I think we should bubble up a warning to the CLI if we can't allocate a static address and we wanted to. It is the sort of thing that just sucks to have to hunt through logs for.
<dooferlad> but for the moment I am out of energy, so calling it a day.
<dimitern> dooferlad, We will do that soon
<dimitern> dooferlad, have a nice weekend :)
<perrito666> bbl
<bogdanteleaga> do we support instance type constraints on azure?
<dimitern> bogdanteleaga, looking at provider/azure - selectInstanceTypeAndImage it seems we do
<bogdanteleaga> dimitern: yup, I managed to get it working in the end
<bogdanteleaga> sadly, I don't think it matters since I wanted an SSD for the main drive and D# instances provide it only for a secondary drive
<rogpeppe> anyone know of an easy way of getting just the public address of a service from a juju status?
<rogpeppe> i'm wondering if there's a command buried somewhere along the same kind of lines as juju api-endpoints
<mgz> rogpeppe: a service doesn't have a public address
<mgz> rogpeppe: units of the service do
<rogpeppe> mgz: i mean a unit, sorry :)
<bogdanteleaga> the closest I get is juju status --format=oneline
<rogpeppe> bogdanteleaga: that + awk or sed will do the trick, thanks
<mgz> I've just used --format=json and a jsonpath tool thing in the past
<bogdanteleaga> juju status --format=oneline | cut -d ' ' -f 3
<rogpeppe> bogdanteleaga: yeah.
<bogdanteleaga> cut is a bit easier, at least for me :)
<rogpeppe> bogdanteleaga: i haven't used cut in years :)
<rogpeppe> bogdanteleaga: i can never remember whether field numbers start at 0 or 1
<bogdanteleaga> heh, I always keep trying until I find it
<bogdanteleaga> awk always seemed hard to use
<rogpeppe> bogdanteleaga: i'll probably do: addresses=$(juju status --format oneline | awk '/^- myservicename/ {print $3}')
<bogdanteleaga> but then, it does way more
<rogpeppe> bogdanteleaga: awk is kinda awkward :)
<bogdanteleaga> I didn't wanna use that one :D
<rogpeppe> bodie_: i actually only realised the pun after i pressed return
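The awk/cut one-liners above can also be done in Go, for anyone scripting against juju from code rather than the shell. This is a sketch: the sample `--format=oneline` output and field positions follow what's quoted in the discussion (dash, unit name with trailing colon, then the public address), not a captured real run:

```go
package main

import (
	"fmt"
	"strings"
)

// unitAddress extracts the public address of the named unit from
// `juju status --format=oneline` output, whose lines look like:
//   - myservice/0: 10.0.3.4 (agent:idle)
// Field 0 is "-", field 1 the unit name plus colon, field 2 the
// address -- the same positions the cut/awk one-liners rely on.
func unitAddress(oneline, unit string) string {
	for _, line := range strings.Split(oneline, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 3 && fields[1] == unit+":" {
			return fields[2]
		}
	}
	return ""
}

func main() {
	status := "- myservice/0: 10.0.3.4 (agent:idle)"
	fmt.Println(unitAddress(status, "myservice/0")) // prints: 10.0.3.4
}
```

As mgz notes, `--format=json` plus a JSON path tool is the more robust option, since the oneline format isn't a stable interface.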
<alexisb> folks, do I have developers still around in US hours?
<katco> alexisb: yep
<alexisb> we have a hot issue I need someone to take
<katco> alexisb: you should have natefinch, ericsnow, perrito666, and cherylj at your disposal
<katco> cherylj: hey you around?
<natefinch> alexisb: still need help?
<alexisb> natefinch, nope
<natefinch> alexisb: excellent
<perrito666> ok, both branches pushed :) I am done
#juju-dev 2015-08-08
<wallyworld> fwereade: hey, you get your electricity issues sorted?
<mup> Bug #1482876 opened: Deployment times out but all agents are started <oil> <juju-core:New> <https://launchpad.net/bugs/1482876>
<fwereade> wallyworld, if you're on -- yes, I think so
<fwereade> wallyworld, there is a suspicious a/c unit that is now disconnected and everything else has been happy for a goood 24h or so now
#juju-dev 2015-08-09
<fwereade> in case nobody has anything better to do on their sat night or sunday morning: http://reviews.vapour.ws/r/2323/
<mup> Bug #1482939 opened: during juju-reboot --now, agent-status says current=failed which is confusing <juju-core:New> <https://launchpad.net/bugs/1482939>
<davecheney> thumper: can you please share your shitlist document
<davecheney> thanks
#juju-dev 2016-08-08
<menn0> thumper, wallyworld: for the record, using a abstract domain socket path like "@juju-mutex-" doesn't work
<mup> Bug #1610239 changed: Race in src/gopkg.in/mgo.v2 <ci> <intermittent-failure> <race-condition> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1610239>
<wallyworld> sad trombone
<menn0> wallyworld: in fact, I can't even get it to work if I patch the path prefix to "/var/snap/juju-menno/current/mutux-"
<wallyworld> hmmmm, well that is unfortunate
<wallyworld> looks like we do need an interface
<menn0> which makes sense... I don't think the path is an issue. it might be that juju isn't being allowed to use abstract domain sockets at all
<thumper> menn0: damn
<menn0> wallyworld: thumper: installing with --devmode works though
<wallyworld> yep
<menn0> that will do for my immediate use
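The mechanism menn0 is testing can be sketched as follows: a machine-wide mutex taken by listening on an abstract unix domain socket, where the leading "@" convention maps to a NUL byte so no filesystem entry is created and the lock disappears when the process dies. The socket name below is illustrative; his finding is that under snap confinement (without --devmode) the Listen call itself is denied:

```go
package main

import (
	"fmt"
	"net"
)

// tryLock attempts to acquire a machine-wide mutex by binding an
// abstract unix domain socket (Linux-only). Only one process can hold
// a given abstract name at a time; a second Listen on the same name
// fails with "address already in use" until the holder exits or
// closes the listener.
func tryLock(name string) (net.Listener, error) {
	return net.Listen("unix", "@"+name)
}

func main() {
	l, err := tryLock("juju-mutex-demo")
	if err != nil {
		// This is what menn0 hit under snap confinement.
		fmt.Println("lock denied:", err)
		return
	}
	defer l.Close()
	if _, err := tryLock("juju-mutex-demo"); err != nil {
		fmt.Println("second lock attempt failed as expected")
	}
}
```

The appeal over a lock file is automatic cleanup on process death; the downside, as this thread shows, is that sandboxing policies may forbid abstract sockets entirely, which is why an interface with a pluggable implementation came up.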
<mup> Bug #1555969 changed: API server stops responding after juju add-unit <canonical-is> <juju-core:Won't Fix> <https://launchpad.net/bugs/1555969>
<mup> Bug #1449044 opened: juju add-unit resets AWS security groups <add-unit> <ec2-provider> <security> <juju-core:New> <https://launchpad.net/bugs/1449044>
<menn0> wallyworld: looks like the snappy developer namespace is used after all
<wallyworld> oh, right, ok
<menn0> wallyworld: i've just uploaded my snap and the name in the store is now "juju-menno.menno" (where .menno is the developer namespace)
<menn0> wallyworld: so including your own suffix in the name isn't really necessary
<wallyworld> menn0: ah, did try that (juju.wallyworld) but still got a clash
<wallyworld> i think it's all a bit in flux
<menn0> wallyworld: ok fair enough... maybe there's still a global check
<veebers> Am I doing something wrong or does the cs:ubuntu application have no content in status or anything? I want to know when it's gone/cleaned up
<menn0> wallyworld: actually... I have no idea what's going on - I get "snap not found" when I try to install juju-menno or juju-menno.menno
<menn0> wallyworld: the store website says it's there and published though
<wallyworld> menn0: you need to use --devmode
<menn0> wallyworld: I am
<wallyworld> and --edge
<menn0> sudo snap install juju-menno --edge --devmode
<wallyworld> get rid of the dev namespace, that screwed me up as well
<menn0> wallyworld: i'll ask on #snappy in case there's something i'm doing wrong
<wallyworld> or u1-internal
<menn0> ok
<menn0> wallyworld: why u1-internal?
<mwhudson> wallyworld: hey juju-mongodb2.6 is just there for upgrades, right?
<mwhudson> wallyworld: can we lose it from yakkety?
<wallyworld> mwhudson: yeah, and now that we will not be upgrading from 1.25 to 2.0 directly, but via model migration, we don't even need it at all
<mwhudson> wallyworld: this is the best kind of bugix
<mwhudson> *bugfix
<wallyworld> yep :-)
<mwhudson> fixing mongo 3.2 seems not too bad
<mwhudson> wallyworld: can you comment to this effect on https://bugs.launchpad.net/ubuntu/+source/juju-mongodb2.6/+bug/1610778
<mup> Bug #1610778: please remove this package from yakkety <juju-mongodb2.6 (Ubuntu):New> <https://launchpad.net/bugs/1610778>
<wallyworld> ok
<mwhudson> wallyworld: do we need juju-mongodb (i.e. 2.4) in yakkety?
<wallyworld> only if we need to support juju 1.25 in yakkety
<wallyworld> which i think we do as we have committed to 12 (or 18?) months of support
<wallyworld> but that should be confirmed
<axw> veebers: does QA have a vsphere set up that I can test some changes with?
<mwhudson> wallyworld: ok
<veebers> axw: I'm not sure off the top of my head, let me have a look
<veebers> axw: is vsphere known as anything else?
<axw> veebers: vmware vsphere?
<axw> veebers: not that I know of otherwise...
<axw> I'm not very familiar with vmware tools tho
<veebers> axw: sorry the only thing close to a  mention I can see is "VMware vCenter Server Appliance"
<axw> veebers: that might be it
<axw> wallyworld: do you know about vsphere?
<wallyworld> no :-(
<wallyworld> that was eric et al
<axw> veebers: I don't see anything about it in cloud-city, so I guess it's not part of CI at the moment...
<veebers> axw I saw that mention in consoles.txt but that's all, I don't know if or how it's used.
<axw> veebers: no worries, thanks for checking
<veebers> axw: sorry I couldn't be immediately helpful. I'll follow up tonight/tomorrow if we don't come across anything else
<veebers> axw: would you have any insight to my previous query re: status and cs:ubuntu application?
<axw> veebers: sorry, which query was that?
<veebers> axw: I'm unable to see any status re: the cs:ubuntu charm/application. i.e. I want to see if it's been removed, but I can't seem to see it when It's been deployed
<axw> veebers: don't know sorry. you didn't rename the app (juju deploy cs:ubuntu <something-else>)? which user is the test logged in as when running status?
<veebers> axw: (this is for the test) I do rename the application to a timestamp, but don't see it mentioned in any status. The user is the one that just successfully deployed it (not sure at the moment which it is)
<veebers> I just know that if it happens quickly it errors due to "the application 'ubuntu' is already deployed"
<axw> veebers: you're definitely deploying to and running status on the same model?
<axw> veebers: if you push your branch I might spot something, otherwise I'm stabbing in the dark
<veebers> axw ack, let me dig around a bit so I have answers for you.
<thumper> http://reviews.vapour.ws/r/5388/
<thumper> menn0: if you feel like a break ^^^
<mup> Bug #1595155 changed: new systemd and dbus dependencies are broken <juju-core:Won't Fix> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1595155>
<menn0> thumper: i'll get there soon... want to finish battling with this snap first
<veebers> axw: btw I figured out what I needed to do :-)
<axw> veebers: cool
<anastasiamac> axw: talking about vsphere :)
<axw> anastasiamac: ?
<anastasiamac> axw: bug 1588390 has a question about how to specify host and datacentre..
<mup> Bug #1588390: 2.0 beta7: can't bootstrap with vsphere cloud provider - ERROR invalid config: host: expected string, got nothing <oil> <oil-2.0> <vsphere> <juju-core:Triaged> <https://launchpad.net/bugs/1588390>
<anastasiamac> axw: could u comment?
<axw> anastasiamac: sure
<anastasiamac> axw: \o/
<thumper> menn0: ack
<mup> Bug #1575797 changed: AddressableContainerSetupSuite.TestContainerInitialised lxc-net: no such file or directory <centos> <ci> <regression> <test-failure> <unit-tests> <juju-core:Invalid> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1575797>
<mup> Bug #1592366 changed: Juju machines failed to die, with added storage <juju-core:Invalid> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1592366>
<menn0> wallyworld: I've just emailed you with those instructions, can you try running through them please?
<wallyworld> ok
<mup> Bug #1592366 opened: Juju machines failed to die, with added storage <juju-core:Invalid> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1592366>
<menn0> thumper: ship it
<veebers> wallyworld, axw: Re: model config tree. Does this start of a test plan make sense? Does it cover concerns? https://pastebin.canonical.com/162579/
<wallyworld> veebers: yes, but you really need to set up an environment with controller defaults also, using clouds.yaml
<mup> Bug #1592366 changed: Juju machines failed to die, with added storage <juju-core:Invalid> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1592366>
<axw> veebers: as a start, I think so. we'll also want to cover controller- and region-inherited config
<wallyworld> and you also need to check the dynamic nature of the source values
<wallyworld> ie a value may come from "model" but if the controller default is changed to match, it will say "controller"
<veebers> wallyworld, axw; good feedback. Is there any docs re: the region stuff?
<wallyworld> only the spec
<veebers> ack
<wallyworld> https://docs.google.com/document/d/1PWUwx9kITQajQQgvHnweUuqTGmBo9p35bVwd-uLKAi4/edit?ts=57626636
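Wallyworld's point about the dynamic source has a subtle edge case worth pinning down: a value that was set on the model but happens to equal the controller default is reported as coming from "controller", not "model". A rough self-contained sketch of that resolution rule — the function name and signature are illustrative, not juju's actual API, and empty string stands in for "not set":

```go
package main

import "fmt"

// source resolves an effective config value and reports where it came
// from: "model" only when the model explicitly sets a value that
// differs from the controller default; a model value equal to the
// controller default is attributed to "controller", matching the
// behaviour described above. "" means the value is unset at that level.
func source(modelValue, controllerDefault, builtinDefault string) (string, string) {
	switch {
	case modelValue != "" && modelValue != controllerDefault:
		return modelValue, "model"
	case controllerDefault != "":
		if modelValue != "" {
			// Set on the model, but matches the controller default.
			return modelValue, "controller"
		}
		return controllerDefault, "controller"
	default:
		return builtinDefault, "default"
	}
}

func main() {
	v, src := source("mirror.example.com", "mirror.example.com", "archive.ubuntu.com")
	fmt.Println(v, src) // prints: mirror.example.com controller
}
```

This is why a test plan has to cover changing the controller default *after* setting a model value: the reported source can flip without the model config being touched.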
<mup> Bug #1524572 changed: Azure provider: bootstrap results in error "PUT request failed: BadRequest - XML Schema validation error in network configuration at line 24,18." <azure-provider> <bootstrap> <juju-core:Invalid> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1524572>
<wallyworld> menn0: something's wrong, after the controllers got set up (looking at watch and running the setup script), there's no juju home created under ~/snap/juju-menno and the controllers can't be found
<axw> veebers: do you know which, if any, of the QA MAASes I can use to test a bootstrap change?
<veebers> axw: I should but I don't sorry :-\ that's 0-2 for today
<axw> veebers: no worries :)
<veebers> axw: I see "munna-maas-slave (A machine capable of running kvm, and maas 1.9)" in the agent configs, but that's as much as I can tell from there
<axw> veebers: yeah just don't want to step on anyone's toes
<veebers> axw: I've given myself 2 action points to sort out for tomorrow (vsphere and maas). I'll add that to the information that I intend to add to the wiki etc.
<axw> veebers: thanks :)
<menn0> wallyworld: hmmmm... works for me... are you running juju-menno.juju instead of plain old juju?
<wallyworld> menn0: i tried juju as well
<wallyworld> but how could plain juju work?
<wallyworld> the snap doesn't write to ~/.local/share/juju
<menn0> wallyworld: indeed. I was making sure you *weren't*  running juju
<wallyworld> menn0: right
<menn0> wallyworld: I'm just running through the instructions again.
<wallyworld> the email says "juju", but i did use /snap/bin/juju-menno.juju
<menn0> wallyworld: so the boostrap starts?
<wallyworld> yeah, and the watch output is fine. then when I ^C once everything is started, there's nothing there
<wallyworld> well, there's a ssh file but no controller.yaml
<wallyworld> ssh directory
<wallyworld> i'll try again
<menn0> I get all the usual stuff in ./juju-menno/2/snap/juju-menno/2/.local/share/juju
<wallyworld> yeah, all i got was an ssh directory
<menn0> wtf
<wallyworld> menn0: there's an extra level of directories
<wallyworld> ls ~/snap/juju-menno/2/snap/juju-menno/2/.local/share/juju/
<wallyworld> that has all the stuff in it
<menn0> wallyworld: yep, that's where everything is
<wallyworld> when i run juju from snap, everything is in ~/snap/juju-wallyworld/2/.local/share/juju
<wallyworld> yours has an extra bit of the path duplicated
<wallyworld> and in ~/snap/juju-menno/2/.local/share/juju i have just an ssh dir
<wallyworld> maybe it's a race or something starting 2 controllers at the same time?
<menn0> I wonder if it's because the setup script is in the snap and is running another thing in the snap
<menn0> I might just distribute the script separately if that's the case
<anastasiamac> mwhudson: the bug 1570650 u commented on deals specifically with 2.6 for 1.25 on xenial... could u please clarify why you think that yakkety approach is inconsistent with this bug?
<mup> Bug #1570650: Use juju-mongodb2.6 for 1.25 on xenial <local-provider> <packaging> <juju-core:Fix Released> <juju-core 1.25:Triaged> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1570650>
<menn0> wallyworld: I've got to stop for now but I'll play around with it some more later on
<menn0> wallyworld: thanks for trying it out
<wallyworld> menn0: no worries, i'll try again later if i can, try starting one controller at a time
<wallyworld> anastasiamac: 1.25 is only getting critical fixes. i doubt upgrading mongo is a critical fix, but i could be wrong
<wallyworld> and i think 1.25 support is xenial or trusty only, but again, i could be wrong
<anastasiamac> wallyworld: i completely agree. did u read the bug?
<wallyworld> i did, may have misunderstood it. it seems to be asking for changes to 1.25 to support mongo 2.6
<anastasiamac> wallyworld: so whether this fix goes into 1.25 is yet to b decided by leads. my question was with respect to michael's comment about yakkety :)
<anastasiamac> wallyworld: m not sure how this relates to whether we r going to support mongo2.6 on 1.25 on xenial and wanted to clarify in case I missed something: "Ian just told me that it would be safe to remove juju-mongodb2.6 from
<anastasiamac> yakkety, which seems inconsistent with this."
<wallyworld> ok. i very much doubt we'll be expending effort on mongo 2.6 in 1.25,unless people have changed their minds or we get a bunch of new resources
<anastasiamac> wallyworld: this decision is not related to my question
<wallyworld> yeah, i'm a bit confused by the "inconsistent" bit
<anastasiamac> wallyworld: the bug is about xenial. why are you mentioning yakkety here?
<wallyworld> don't know
<anastasiamac> wallyworld: k. i'll ask Michael tomorrow ;) NZ must be coming offline \o/
<veebers> wallyworld: There is support for --upload-tools within juju-ci-tools, the abrupt removal of it would trip up some tests.
<wallyworld> veebers: yeah, i grepped the source just before, i'll add an alias
<wallyworld> thanks for looking, doesn't look like there's too much to change, but it will take some time
<veebers> wallyworld: ah sorry had meant to get back to you earlier. You intend to fully remove it at some point?
<veebers> no, not much to change, but might need some mechanism to make it backwards compat.
<wallyworld> yeah, once everyone agrees, right now it's a spike
<wallyworld> we have a mechanism for 1.25 vs 2.0 tests i think
<veebers> wallyworld: Ah ok. If you could ping the qa list closer to the time to warn of intentions (i.e. we intend to remove/rename/revamp upload-tools arg within x time)
<wallyworld> veebers: on my list - an email will be going to a wider audience than just qa
<veebers> yeah there is (also, every beta revision release too)
<veebers> wallyworld: awesome, thanks.
<veebers> axw, wallyworld: what requirements (config, cloud etc.) are required to get the regional config tree options? i.e. the apt-mirror example in the spec:
<veebers> Attribute               Default                  Controller
<veebers> apt-mirror:             archive.ubuntu.com        -
<veebers>   aws/us-east-1:        us-east-1.aws.ubuntu.com  -
<axw> veebers: I don't think it's implemented yet -- right wallyworld?
<axw> veebers: yeah, the initial work is still pending: http://reviews.vapour.ws/r/5339/
<wallyworld> veebers: yep, still wip
<wallyworld> only the controller inherited config is done
<veebers> ah ok, I'll ask that question at a later date then :-)
<wallyworld> veebers: what's done is documented in the release notes, as is what is still todo
<veebers> Ok, I'm just trying to figure out the clouds.yaml and how it can be used in a test
<wallyworld> release notes contain that info too
<veebers> are latest release notes always found here: https://jujucharms.com/docs/devel/temp-release-notes
<veebers> hmm, nvm. I see that it is
<veebers> wallyworld: Cool, the release notes pretty much outline a test where having a controller default, setting it to a model value then unsetting should revert back to the controller value (not the default)
<wallyworld> veebers: yep, but there's then additional nuances. eg change a controller value to be the same/different to a model value and see the source get updated each time model-config is run; create new models with different variations of config etc
<mup> Bug # changed: 1402696, 1568181, 1568185, 1568190
<veebers> wallyworld: how does one change a controller value after a bootstrap? Or is it always read from clouds.yaml?
<wallyworld> juju set-model-default
<veebers> that sets a controller default? or am I getting confused
<wallyworld> also juju unset-model-default
<veebers> there is base default config, model config and controller config right?
<wallyworld> it sets the default value for a model attribute, which just happens to be a controller wide default
<wallyworld> it will grow the ability to set a region default also when that bit is done
<wallyworld> juju set-model-default --region=us-east-1 http-proxy=foo
<wallyworld> or something like that
<veebers> ah ok.
<veebers> I'll update that testplan I shared earlier and bother you again about it tomorrow :-)
<wallyworld> there are currently 3 sources of model config: default, controller, model
<wallyworld> it will soon be 4
<wallyworld> the order goes: default, controller, region, model
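The layering wallyworld describes can be sketched as a straight overlay merge, highest-precedence source applied last. This is a hedged sketch with plain maps, not juju's actual config code:

```go
package main

import "fmt"

// mergeConfig overlays config sources in increasing precedence:
// default, then controller, then region, then model.
func mergeConfig(layers ...map[string]string) map[string]string {
	merged := map[string]string{}
	for _, layer := range layers {
		for k, v := range layer {
			merged[k] = v
		}
	}
	return merged
}

func main() {
	defaults := map[string]string{"apt-mirror": "archive.ubuntu.com"}
	controller := map[string]string{}
	region := map[string]string{"apt-mirror": "us-east-1.aws.ubuntu.com"}
	model := map[string]string{}

	cfg := mergeConfig(defaults, controller, region, model)
	fmt.Println(cfg["apt-mirror"]) // the region value wins over the default
}
```

This also explains the unset behaviour discussed above: removing a model-level value simply re-exposes whichever lower layer still defines the key.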
<mup> Bug #1605770 changed: firewallerSuite.TearDownTest inst.Dial() failed <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Invalid> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1605770>
<mup> Bug #1610260 changed: AWS Error fetching security groups EOF/timeout <bootstrap> <ec2-provider> <intermittent-failure> <juju-core:Invalid> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1610260>
<mup> Bug #1610880 opened: Downloading container templates fails in manual environment <juju-core:New> <https://launchpad.net/bugs/1610880>
<dimitern> fwereade__: hey, do you think upgrading from e.g. 2.0-beta14.1 to 2.0-beta14 should be allowed?
<dimitern> fwereade__: currently the upgrader does not compare patch or build numbers when deciding to upgrade (or downgrade, which is more puzzling - see allowedTargetVersion's doc comment in worker/upgrader/upgrader.go)
<dimitern> fwereade__: i.e. I was wondering if there's a good reason not to just compare versions with version.Number.Compare() in the upgrader
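A minimal sketch of what comparing the full version, patch and build numbers included, could look like. Number here is a simplified stand-in for juju's version.Number, not the real type, and beta tags are ignored:

```go
package main

import "fmt"

// Number is a simplified stand-in for juju's version.Number.
type Number struct {
	Major, Minor, Patch, Build int
}

// Compare returns -1, 0 or 1 ordering n against other, taking
// patch and build numbers into account (unlike an upgrader that
// only looks at major/minor).
func (n Number) Compare(other Number) int {
	fields := [][2]int{
		{n.Major, other.Major},
		{n.Minor, other.Minor},
		{n.Patch, other.Patch},
		{n.Build, other.Build},
	}
	for _, f := range fields {
		if f[0] < f[1] {
			return -1
		}
		if f[0] > f[1] {
			return 1
		}
	}
	return 0
}

func main() {
	// Like 2.0-beta14.1 vs 2.0-beta14: differs only in build number.
	a := Number{2, 0, 14, 1}
	b := Number{2, 0, 14, 0}
	fmt.Println(a.Compare(b)) // 1: the .1 build sorts after the plain release
}
```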
<fwereade__> dimitern, thinking... but *mainly* thinking that if we're not supporting upgrades during beta it's somewhat moot
<babbageclunk> fwereade__, dimitern: take another look at http://reviews.vapour.ws/r/5365?
<dimitern> fwereade__: the issue I'm trying to fix is outlined in bug 1607749
<mup> Bug #1607749: juju bootstrap fails with MAAS trunk and juju (beta12 behind a proxy too) <bootstrap> <maas-provider> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1607749>
<dimitern> babbageclunk: will do in a bit
<dimitern> fwereade__: weird case I agree, but it did still expose that wrinkle of the upgrading logic
<dimitern> fwereade__: bootstrapping with --upload-tools from a released client binary triggers it
<fwereade__> dimitern, so I *think* what triggers it is just having a mismatch between the tools we bootstrap with and the tools we record ourselves as having bootstrapped with, right?
<fwereade__> dimitern, and so I guess --upload-tools from a released client will do that? but instant-upgrade-on-bootstrap is something we haven't done for years
<dimitern> fwereade__: I think it's the difference between version.Current and having FORCE-VERSION
<dimitern> even though --upload-tools adds the latter to fake the .1, the binary version is still sans that .1
<fwereade__> dimitern, well, maybe, but why aren't we recording the same tools version that we're deploying?
<fwereade__> dimitern, where does it think the binary version doesn't have the .1? it looks to me like the version we recorded in state doesn't have the .1
<dimitern> fwereade__: I don't know yet - still digging, but ISTM not considering versions with different patch and build number equal in the upgrader will fix the symptom
<fwereade__> dimitern, am super-confused: ISTM that the upgrader *is* quite reasonably and correctly considering the versions to be different
<dimitern> fwereade__: and there's bootstrap --auto-upgrade as well (false by default), totally not taken into account AFAICS
<dimitern> fwereade__: well: INFO juju.worker.upgrader upgrader.go:199 upgrade requested from 2.0-beta12.1 to 2.0-beta12 - doesn't that look weird to you?
<fwereade__> dimitern, looks like it's doing exactly the right thing
<fwereade__> dimitern, saw local version, server said it should run a different version, trying to do that
<fwereade__> dimitern, the problem there is the server asking us to change in the first place
<dimitern> fwereade__: ok, then I'll keep digging - wanted to confirm that's the correct behavior in the upgrader
<fwereade__> dimitern, fwiw, don't have a strong position on --auto-upgrade -- if we make sure we have the proxy settings inited from agent config, I don't see why that wouldn't work
<dimitern> fwereade__: I've verified the proxy settings are populated (into ~/.juju-proxy) on agents
<fwereade__> dimitern, ok... does that mean that the agent always uses them?
<dimitern> fwereade__: that's what I'm trying to figure out now :)
<anastasiamac> dimitern: fwereade__: in beta14, there was a fix for --upload-tools and build version mismatches... seee https://bugs.launchpad.net/juju-core/+bug/1605050
<fwereade__> dimitern, I don't see why it would
<mup> Bug #1605050: Controller doesn't use tools uploaded with juju bootstrap --upload-tools <sts> <juju-core:Fix Released by thumper> <https://launchpad.net/bugs/1605050>
<dimitern> fwereade__: btw the origAgentVersion in the upgrader comes as PreviousAgentVersion, via agentconfig.UpgradedToVersion()
<dimitern> anastasiamac: cheers - i'll look there as well
<anastasiamac> dimitern: \o/
<dimitern> anastasiamac: I see you've been on fire closing 1.25 bugs :)
<anastasiamac> not closing....just brooming the house a bit..
<anastasiamac> :)
<dimitern> anastasiamac: ;) nice to see the count going down though!
<fwereade__> dimitern, AFAICS the heart of the problem is that the *server* is asking us to upgrade
<fwereade__> dimitern, it looks like we have the right binary and the right config on the agent
<dimitern> fwereade__: I think anastasiamac nailed it - the discrepancy seems to be fixed with tim's PR https://github.com/juju/juju/pull/5879/files
<dimitern> to confirm that I'll try the same scenario, but with the next beta
<fwereade__> dimitern, sweet
<dimitern> \o/
<dimitern> fwereade__, anastasiamac: it works in beta14, so indeed a dup of bug 1605050! thanks for the help ;)
<mup> Bug #1605050: Controller doesn't use tools uploaded with juju bootstrap --upload-tools <sts> <juju-core:Fix Released by thumper> <https://launchpad.net/bugs/1605050>
<anastasiamac> dimitern: :) m glad!
<macgreagoir> win 17
<macgreagoir> bah
<mup> Bug #1607749 changed: juju bootstrap fails with MAAS trunk and juju (beta12 behind a proxy too) <bootstrap> <maas-provider> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1607749>
<dimitern> sorry babbageclunk I'm on to your PR now
<babbageclunk> dimitern: thanks, no worries
<menn0> fwereade: reviews done
<fwereade> menn0, tyvm
<fwereade> menn0, re unit tests -- I'm basically seeing this as a pure internal refactor that doesn't really want new tests because it doesn't change behaviour
<fwereade> menn0, what tests would you write that aren't just this-input-produces-this-output, who-knows-what-it-means-or-does?
<menn0> fwereade: ok fair enough.
<menn0> fwereade: some of those methods /are/ conditional ... but I see your point
<fwereade> menn0, restate conditional please?
<menn0> fwereade: sorry, they have conditional logic which could perhaps warrant a test
<menn0> fwereade: doing different things depending on what's in the DB
<fwereade> menn0, yeah
<fwereade> menn0, how about if I do a pass for the externally-visible impact of same and make sure we have suitable hooks tests?
<menn0> fwereade: sgtm
<fwereade> menn0, cheers
<dimitern> babbageclunk: reviewed
<babbageclunk> dimitern: Thanks!
<rick_h_> morning
<anastasiamac> rick_h_: \o/
<mup> Bug #1598390 changed: Juju 2.0 Resources - Issue faced in the deployment of a charm from charm store when juju-attach is used <2.0> <juju-core:Fix Released> <https://launchpad.net/bugs/1598390>
<mup> Bug #1598390 opened: Juju 2.0 Resources - Issue faced in the deployment of a charm from charm store when juju-attach is used <2.0> <juju-core:Fix Released> <https://launchpad.net/bugs/1598390>
<mup> Bug #1598390 changed: Juju 2.0 Resources - Issue faced in the deployment of a charm from charm store when juju-attach is used <2.0> <juju-core:Fix Released> <https://launchpad.net/bugs/1598390>
<fwereade> babbageclunk, dimitern: sorry, I had a couple of comments on the review but I didn't hit post
<babbageclunk> fwereade: cool, thanks
<rick_h_> macgreagoir: welcome back, good week?
<macgreagoir> rick_h_: It was, thanks. Week of sun!
<rick_h_> macgreagoir: added some small "get back in the flow of things" bugs your way on the kanban board
<rick_h_> macgreagoir: please take a look when you're through  email and looking for next task
<macgreagoir> rick_h_: Noted :-) I have been trying to catch up.
<rick_h_> macgreagoir: all good, wheeeee
<rick_h_> natefinch: katco macgreagoir dimitern ping for standup
<dimitern> omw
<rogpeppe> hmm, git blame is becoming less useful. What is "Juju bot" doing as an author in apiserver/network.go ?
<rick_h_> katco: <3 I went to create a tracking card and it yelled at me that the id already existed
<rick_h_> dimitern: if can you please look at http://reviews.vapour.ws/r/4684/ and see if it still applies and if so work to get it landed?
<dimitern> rick_h_: sure, added to my list
<rick_h_> ty dimitern
<katco> rick_h_: np
<rick_h_> fwereade: and added the one you were reviewing from eric into the review land if you can peek at that please and see if it still applies at all.
<rogpeppe> fwereade: ping
<mup> Bug #1610990 opened: list-storage in aws contains non-created items <blocker> <ci> <regression> <storage> <juju-ci-tools:Incomplete> <juju-core:Triaged by axwalk> <https://launchpad.net/bugs/1610990>
<mup> Bug #1610993 opened: UnitSuite.TestWorkers timed out waiting for workers <ci> <intermittent-failure> <regression> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1610993>
<voidspace> rick_h_: a side effect of allowing a maas server url without a prefix (so that the port can be specified unambiguously)
<voidspace> rick_h_: is that it's basically impossible to detect invalid maas server urls until we try to use them
<voidspace> rick_h_: I think the failure error is still clear enough, it's just a bit later
<voidspace> rick_h_: (for invalid urls we now add http:// prefix automatically - and according to url.Parse that makes anything a potentially valid url)
<voidspace> just noting it
<rick_h_> voidspace: yea, with the add-cloud work I think our goal is to ping and reach the server to validate it's real and there
<voidspace> rick_h_: cool
<rick_h_> voidspace: so we'll just have to pick up that "try it" work where it makes sense to fail fast for the user
<voidspace> rick_h_: I've pinged menno for an updated review
<rick_h_> voidspace: <3 ty
<babbageclunk> dimitern: Is it reasonable for this machineundertaker worker to use something from the provisioner API, if it does what it needs? Or is that bad?
<babbageclunk> fwereade: ^^
<babbageclunk> dimitern, fwereade: more specifically - ProvisionerAPI.GetContainerInterfaceInfo does pretty much exactly what I want. Is there any way to reuse that?
<jose> rick_h_: hey. do you have a sec to chat?
<rick_h_> jose: sure thing
<rick_h_> jose: you up for a HO?
<jose> rick_h_: sure, one sec
<dimitern> babbageclunk: I don't think it's a good idea
<dimitern> babbageclunk: we have cases where methods on different facades are shared via apiserver/common mixins
<dimitern> babbageclunk: but why do you think you need the same API method?
<babbageclunk> dimitern: Well, I need something that takes a set of machine ids and returns a set of networkconfigs for those machines.
<babbageclunk> dimitern: I'm writing it at the moment, but making backend interfaces + shims for State -> Machine -> LinkLayerDevice -> Address is pretty annoying.
<dimitern> babbageclunk: what shims? can I see at a diff?
<dimitern> s/at//
<babbageclunk> dimitern: Well, you can see the old version of it at http://reviews.vapour.ws/r/5366/diff/
<babbageclunk> dimitern: There it's State -> MachineRemoval -> LinkLayerDevice
<dimitern> babbageclunk: have you seen apiserver/common/networkingcommon ?
<babbageclunk> dimitern: Ooh, no - checking now
<dimitern> babbageclunk: there are useful helpers for converting configs
<dimitern> might save you some of that
<babbageclunk> dimitern: Oops, killed this buffer.
<dimitern> babbageclunk: :) I noticed
<babbageclunk> dimitern: Unfortunately none of that seems to cover the types I need. :(
<dimitern> babbageclunk: I think the shim based approach for apiserver facades didn't work quite well
<dimitern> babbageclunk: why not have a look at other examples - e.g. storage-related facades might be better designed
<babbageclunk> dimitern: ok - any one specifically?
<dimitern> babbageclunk: storageprovisioner comes to mind
<babbageclunk> dimitern: Hmm.
<babbageclunk> dimitern: That kind of does what I'm doing. The main difference is that it can get away without shims because all of the methods return either interfaces or publicly constructible types.
<babbageclunk> dimitern: Without many shims, I mean.
<babbageclunk> dimitern: But Machine, LinkLayerDevice and Address all have private constructors and embed a *State, so I need to cover them with interfaces to test.
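The shim pattern babbageclunk is describing looks roughly like this, with hypothetical names; the real state types are far more involved, but the point is that a facade depending on a small interface can be tested with fakes even when the concrete type has a private constructor:

```go
package main

import "fmt"

// linkLayerDevice is the narrow view the facade actually needs,
// standing in for *state.LinkLayerDevice (which has a private
// constructor and embeds *State, so tests can't build one).
type linkLayerDevice interface {
	Name() string
	MACAddress() string
}

// deviceShim adapts a concrete value to the interface; in tests any
// fake satisfying linkLayerDevice can be used instead.
type deviceShim struct {
	name, mac string
}

func (d deviceShim) Name() string       { return d.name }
func (d deviceShim) MACAddress() string { return d.mac }

// describe is facade-style code written against the interface only.
func describe(dev linkLayerDevice) string {
	return fmt.Sprintf("%s (%s)", dev.Name(), dev.MACAddress())
}

func main() {
	fmt.Println(describe(deviceShim{"eth0", "00:16:3e:aa:bb:cc"}))
}
```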
<dimitern> babbageclunk: let's discuss it tomorrow? I need to go soon..
<babbageclunk> dimitern: Oops, it's late for you! I'll bug you and Will about it tomorrow.
<babbageclunk> Ha ha snap
<babbageclunk> dimitern: Bye!
<dimitern> ok ;)
 * rick_h_ grabs lunchables
<natefinch> wow uh, juju credentials <cloud> is completely useless
<natefinch> how is the first step of setting up juju with azure "install node" ?
<sinzui> natefinch: bug 1611067 is a recent regression that related to bug 1418139 that is assigned to you.
<mup> Bug #1611067: kill-controller of manual failed to clean up models <blocker> <ci> <kill-controller> <manual-provider> <regression> <trusty> <juju-core:Triaged> <https://launchpad.net/bugs/1611067>
<mup> Bug #1418139: Can't reprovision a machine with manual provider <bootstrap> <destroy-environment> <manual-provider> <manual-story> <juju-core:In Progress by natefinch> <juju-core 1.25:Fix Committed by natefinch> <https://launchpad.net/bugs/1418139>
<rick_h_> natefinch: there's WIP around that node azure stuff
<natefinch> rick_h_: oh good
<mup> Bug #1611067 opened: kill-controller of manual failed to clean up models <blocker> <ci> <kill-controller> <manual-provider> <regression> <trusty> <juju-core:Triaged> <https://launchpad.net/bugs/1611067>
<rick_h_> natefinch: yea, https://github.com/Azure/azure-sdk-for-go/issues/360
<natefinch> rick_h_: nice
<natefinch> sinzui: those two bugs are definitely different, even though they may have similar external behavior.  The previous bug left a very obvious panic at the end of the controller's machine log, and this one does not.
<mup> Bug #1611076 opened: we need a juju rename-credential command <juju-core:New> <https://launchpad.net/bugs/1611076>
<mup> Bug #1611076 changed: we need a juju rename-credential command <juju-core:New> <https://launchpad.net/bugs/1611076>
<mup> Bug #1611076 opened: we need a juju rename-credential command <juju-core:New> <https://launchpad.net/bugs/1611076>
<natefinch> katco: responded.  I need to review the tests (didn't get a chance last time) and then pretty sure I can just LGTM it.
<katco> natefinch: cool ty
<katco> natefinch: your point about logging the retries at a different level is interesting. i'm not sure where i come down on that yet... i like the idea of being able to look at the log while it's happening to understand what's going on, but not sure if i would expect to flip into trace mode to do that
 * katco wonders if anyone else has opinions on that
<natefinch> katco: my opinion - there's errors and there's non-errors. This is not an error, because it's being retried.  Once it fails forever, then it's an error.  until then, it's just info (or debug).
<katco> natefinch: there are warnings too
<mup> Bug #1611093 opened: "juju models" hangs <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1611093>
<natefinch> katco: I'm with Dave on warnings. Warnings, IMO, don't have a good use case.  Warning is basically "we didn't know if we should make this an error or not, so we put it at warning"... except that the code definitely has to treat it as an error or not, and usually it's treated as not an error.
<natefinch> sinzui, mgz: have you seen this azure error? Provisioning failed. Shrinking a disk from 136367309312 bytes to 34359738880 bytes is not supported.. ResizeDiskError
<katco> natefinch: warnings are definitely not an error (i.e. the system is not in an errored state), but i disagree that one only arrives at utilizing warnings because they cannot decide between info and error.
<katco> natefinch: e.g., this code. juju is warning you that something is going wrong, but it may recover.
<natefinch> katco: it's a bit of a simplification, for sure.
<natefinch> katco: I guess, if it works in the end, then I feel like it's not super noteworthy.  Like dropped tcp packets.
<katco> natefinch: except dropped tcp packets can forewarn of a failure and are often logged :)
<natefinch> katco: sure, and I'm not saying we shouldn't log retries at all.  They definitely 100% should be recorded somewhere. I'm just arguing to make them less in your face.
<katco> natefinch: yes, that is the part i'm on the fence about
<sinzui> natefinch: We have not seen ResizeDiskError before
<katco> natefinch: it seems like a normal part of the system's operation, and not super-detailed tracing information
<natefinch> katco: it's a matter of degrees.  The problem is, I think whether it's info or warning is very much dependent on exactly what caused the retry... so do we err on the side of visibility or the side of keeping noise low?
<katco> natefinch: i don't think you should be worrying about noise at the raw log data level. that is a problem for log views
<natefinch> sinzui: it's unfortunate - Juju seems not to realize that this instance will never come up. It still says the agent is allocating
<natefinch> katco: we should definitely worry about noise in the log.  Most of our interactions with the log are at the raw text level.  There have been bugs filed and work done to remove spam from the logs.  I think that's worth the effort.
<katco> natefinch: good point; we do seem to come down on that side of the argument (although i disagree with it)
<katco> natefinch: still, these messages a) won't always be there, b) will happen at most 3 times
<natefinch> sinzui: crap, I seem to be getting that consistently now (well, 2 out of 2 tries)
<mgz> natefinch: hm, are you doing anything differently from the runs that worked?
<natefinch> mgz: ¯\_(ツ)_/¯
<natefinch> mgz: using 2.0-beta14
<natefinch> mgz: forgot to use upload-tools, so I presume it's using whatever is in streams
<natefinch> also, interestingly:
<natefinch> $ juju add-machine --series win2012
<natefinch> ERROR empty apt-https-proxy in model configuration
<mgz> hmph
<mgz> that seems like a bug
<natefinch> yeah, I'm going to rebuild and make sure I'm running master with upload tools and see what happens then
<sinzui> natefinch: in azure I think you need to use series win2012r2
<natefinch> sinzui: ahh, maybe that's the problem
<natefinch> sinzui: hmm... getting the same error if I use win2012r2
<sinzui> :/
<sinzui> natefinch: Our tests from this morning get farther http://reports.vapour.ws/releases/4221/job/azure-arm-deploy-windows-amd64/attempt/45
<mup> Bug #1611097 opened: model already exists but can't be destroyed because it's not found <oil> <oil-2.0> <juju-core:Incomplete> <https://launchpad.net/bugs/1611097>
<sinzui> natefinch: We never use upload-tools since it doesn't work with multiple os/arch. we used streams. We also deployed a charm from lp:juju-ci-tools/repository
<sinzui> juju --show-log deploy -m azure-arm-deploy-windows-amd64:azure-arm-deploy-windows-amd64 /var/lib/jenkins/repository/charms-win/dummy-source --series win2012r2
<natefinch> man, our pedantry for credential vs credentials is really annoying
<natefinch> mgz: fwiw, that "no apt-proxy set" error went away with a rebuild, so probably I was just in some weird bad state in my local code (hopefully).
<mgz> hm
<mgz> yeah, hopefully just a version incompat
<natefinch> wow, juju add-model takes a heck of a long time for something that should be almost a noop
<natefinch> $ juju kill-controller azure -y
<natefinch> panic: empty value for "firewall-mode" found in configuration (type <nil>, val <nil>)
<natefinch> woo
<rick_h_> natefinch: did destroy-controller not work?
<natefinch> rick_h_: I never use destroy controller
<rick_h_> natefinch: :(
<rick_h_> natefinch: that's the official way for users to perform that operation
<natefinch> rick_h_: kill-controller is shorter and doesn't make me type out that extra flag about killing models too
<natefinch> rick_h_: FWIW destroy gives that error too :0
<natefinch> rick_h_: I gotta run to make dinner, but will be back in a few hours
<mup> Bug #1611110 opened: Need remove-{space,subnet} commands <juju-core:New> <https://launchpad.net/bugs/1611110>
<mup> Bug #1611111 opened: Model still exists for a while after running destroy-model <oil> <oil-2.0> <juju-core:Triaged> <https://launchpad.net/bugs/1611111>
<thumper> :(
<bdx> other then specifying `--upload-tools`, should there be anything else to keep in mind when bootstrapping/deploying from a build of master ?
<thumper> bdx: I don't think so
<bdx> thumper: thx
<tvansteenburgh> any one have an example of deploying a local charm via the api?
<tvansteenburgh> with juju 2, specifically
<tvansteenburgh> i have an example from juju1 that uploads a charm archive and then deploys it with the local:series/charm nomenclature
<tvansteenburgh> does that still work?
<tvansteenburgh> and if so, what would the charm-url passed to the api be?
<menn0> wallyworld: I've updated my snap. Could you have another go at those instructions again please when/if you have a chance?
<menn0> wallyworld: the weird directory hierarchy under ~/snap was being caused by the setup script calling another binary in /snap/bin
<menn0> wallyworld: the setup script now calls $SNAP/bin/juju directly instead and that fixes the issue
<menn0> wallyworld: not sure exactly what was going on but the environment variables seen by juju weren't right due to the nested invocation
<wallyworld> menn0: sure. yeah that was weird. seems like a bug to me. i'll try after standup
<redir> brb reboot
<redir> wallyworld!
<wallyworld> yo
<redir> wallyworld: axw wanted a test (see RB), I am curious if there's already a related test for other inherited config and if so where it lives.
<redir> wallyworld: he suggested cloudconfig, but at a glance I don't see anything similar.
<wallyworld> there's no feature test yet (still todo). but there are other tests, i'll look at rb and see what's been requested
<alexisb> redir, welcome back
<alexisb> hope you had a lovely holiday
<redir> alexisb: tx
 * redir nods
<redir> very relaxing:)
<redir> and more sun than is probably healthy
<alexisb> :) sun and relaxation both sound great
<redir> Highly recommended:)
<alexisb> thumper, wallyworld ping
<mup> Bug #1605008 changed: juju2beta12 and maas2rc2:  juju status shows 'failed deployment' for node that was 'deployed' in maas <oil> <oil-2.0> <juju-core:Invalid> <MAAS:Invalid> <https://launchpad.net/bugs/1605008>
<mgz> alexisb: I think we need to have a plan over bug 1570650 - what happens if we get a security bug reported against 1.25/db in the trusty support lifespan
<mup> Bug #1570650: Use juju-mongodb2.6 for 1.25 on xenial <local-provider> <packaging> <juju-core:Fix Released> <juju-core 1.25:Won't Fix> <juju-core (Ubuntu):Invalid> <https://launchpad.net/bugs/1570650>
<mup> Bug #1610990 changed: list-storage in aws contains non-created items <ci> <regression> <storage> <juju-ci-tools:Incomplete> <juju-core:Won't Fix by axwalk> <https://launchpad.net/bugs/1610990>
<perrito666> redir: what is the bug?
#juju-dev 2016-08-09
<redir> perrito666: which bug?
<redir> axw: go a minute?
<redir> got even ^
<mup> Bug #1611159 opened: model not successfully destroyed, and error on "juju list-models" <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1611159>
<wallyworld> redir: did you want to catch up?
<wallyworld> or you all good?
<redir> wallyworld: sure, gimme 1 minute
<redir> k
<redir> standup?
<redir> wallyworld: ^
<wallyworld> ok
<axw> redir: sorry, was afk
<redir> np just off HO with wallyworld, going EoD but will prolly hit you up tomorrow.
 * redir goes EoD
<natefinch> axw: let me know when/if you'd like to spend a little time discussing the jsonschema stuff... otherwise, I'll get to writing that proposal for it that I was supposed to do earlier.
<axw> natefinch: happy to chat whenever, I think having a proposal first would be best though - something concrete we can discuss and mould
<natefinch> axw: cool, will whip something up.
<natefinch> weird, somehow chrome is linking zoom size to domain... if I have two different chrome windows open - if they're the same domain, they both increase or decrease in size. If they're different urls they change independently.
<natefinch> which is super annoying when I want to have godoc open on two different monitors with vastly different pixel densities
 * natefinch switches one window to gowalker.org ... oh the indignity
<axw> anastasiamac: I have added this change to the PR, as it was failing with mongo 2.4: https://github.com/juju/juju/pull/5877/commits/ac97bffd0261f06df93255bed0a48ce35d1a5eb1
<axw> anastasiamac: tested with mongo 2.4 and 2.6 (2.6 works with and without the change)
<axw> 2.4 works only with the change
<anastasiamac> axw: 2.4 doesn't like "or" :( ..?..
<axw> anastasiamac: not really sure what it doesn't like, seemed kosher to me
<axw> anastasiamac: but they're both happy with $ne even if the field doesn't exist
<anastasiamac> excellent \o/ lgtm
<wallyworld> menn0: one thing that i miss when using juju from a snap is tab completion :-)
<menn0> wallyworld: yeah true. I guess there might be some kind of workaround where juju completion workers for *.juju too but that's a bit hacky
<menn0> s/workers/works/
<wallyworld> i'm sure we can fix it somehow
<mup> Bug #1580391 changed: juju/errors tests do not pass with go 1.6 <tech-debt> <unit-tests> <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1580391>
<axw> anastasiamac: PTAL at http://reviews.vapour.ws/r/5392/ when you have a moment
<axw> anastasiamac: i.e. please see my replies
 * axw goes for lunch
<anastasiamac> wallyworld: could u do a quick ho in a-team call?
<wallyworld> ok, give me a minute
<anastasiamac> wallyworld: k. ping u r available
<anastasiamac> when*
<natefinch> my proposal for conversion to jsonschema for anyone interested: https://docs.google.com/document/d/137ejUl8qRAU98F4eg9jfO9L90fkkvRZ5aGINmaxKlM0/edit#
<wallyworld> anastasiamac: there now
<natefinch> axw: I'm a little stuck on the actual implementation of getting the non-jsonschema values into useful places in our code... without terrible hacks.  Any ideas you have there are welcome.  I'm going to go to bed.. will check my email overnight, so when william gets up, you guys figure out a time to talk, and I'll get up and talk.
<natefinch> axw: unless you think it would be useful to talk now for a bit, without william
<axw> natefinch: I need to think about it some more first I think
<axw> ... think think
<natefinch> axw: no problem. I've gone around and around on it.  I welcome more brains being applied to the problem.
<fwereade> axw, menn0, jam: shall we talk schemas?
<axw> fwereade: happy to, but shouldn't we wait for natefinch?
<fwereade> axw, perhaps, but I'm not sure when he'll arrive; and several of us look likely to be here; and I got the impression that you were pretty happy with the proposal in the doc
<axw> fwereade: okey dokey, I can chat with natefinch about it tomorrow if need be
<fwereade> axw, so, well, *I'm* happy with the proposal in general, and have no attachment to juju/gojsonschema if a different library gives us something we need
<axw> fwereade: agreed, as long as we end up with one or the other
<axw> (and not both)
<fwereade> axw, right, yes, we should replace the action bits if we switch
<axw> fwereade: do you have an opinion on whether we should retain the EnvVar bits?
<axw> or Group?
<axw> I'm *fairly* sure Group is redundant now
<axw> anything that's "juju defined" shouldn't really be in the model config anyway
 * axw just waits for doc comments to roll in
<fwereade> axw, I'm very happy to drop group
<fwereade> axw, and I am a bit snooty about the env var bits
<fwereade> axw, do you think we can cast them as interactive-only? in which add-credential is considered interactive?
<axw> fwereade: that's all they would be used for
<fwereade> axw, it's the "magically grab them from the env" stuff that I always hated
<axw> fwereade: autoload-credentials and add-credential
<fwereade> axw, yeah
<axw> fwereade: yeah, that should definitely not happen post-add
<axw> fwereade: I guess I can live with it. but it feels kinda awkward that there will be some credential detection methods that just won't work, like looking for files
<fwereade> axw, (if I were to shoot for the moon I'd prefer a schema that didn't allow defaults, tbh, it's just another way to handwave the difference between ingestion and validation)
<fwereade> axw, well, we *could* define file paths as well...
<fwereade> axw, not that I really want to
<fwereade> axw, but the *important* thing in my mind is not to have the *validation* bits go charging off through the env, filesystem, network, whatever, just to find some value to wedge in and make things look right
 * axw nods
<axw> fwereade: these defaults should only apply when creating. providers can/should take care of adding defaults as necessary on upgrade
<fwereade> axw, yeah, which is why I'd like to keep default-insertion out of validation too, if we can
<fwereade> axw, I just don't see a path that doesn't involve us writing our own schema thing
<axw> fwereade: I mentioned this in London: https://github.com/juju/juju/blob/master/environs/interface.go#L121
<fwereade> axw, we could just tell people not to specify json-schema defaults (ha), but if people *did* eschew them then it is just the same need as the env/file cases: if not specified, get a value <like this>
<axw> fwereade: isn't it enough to just implement that in each provider, and stop modifying config in ValidateConfig?
<axw> fwereade: ah I see what you mean, JSON-Schema validation will insert the defaults?
<fwereade> axw, yeah
<fwereade> axw, (regardless, I'm +1 on breaking up the ValidateConfig god-methods any way we can)
<axw> fwereade: I would think we would just ignore the defaults post-creation
<axw> same for env vars
<fwereade> axw, yeah -- that's easy if we can clearly separate them -- but I don't see how we can avoid default-insertion basically every time we validate a config
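The split fwereade is arguing for — validation that only checks, with default-insertion as a separate step that runs only at creation time — can be sketched in Go. This is a minimal illustration, not juju's actual API; `Attr`, `Validate`, and `ApplyDefaults` are hypothetical names:

```go
package main

import (
	"errors"
	"fmt"
)

// Attr describes one config attribute. Default == nil means "no default".
type Attr struct {
	Required bool
	Default  interface{}
}

// A toy schema: "name" is required, "storage" has a default.
var schema = map[string]Attr{
	"name":    {Required: true},
	"storage": {Default: "ebs"},
}

// Validate only checks what is present; it never mutates cfg and never
// goes charging off through the env, filesystem, or network for values.
func Validate(cfg map[string]interface{}) error {
	for key, attr := range schema {
		if _, ok := cfg[key]; !ok && attr.Required {
			return errors.New("missing required attribute: " + key)
		}
	}
	return nil
}

// ApplyDefaults runs only at creation time (e.g. when a model is first
// added), never on the validation path, so post-creation validation
// can't silently resurrect defaults.
func ApplyDefaults(cfg map[string]interface{}) {
	for key, attr := range schema {
		if _, ok := cfg[key]; !ok && attr.Default != nil {
			cfg[key] = attr.Default
		}
	}
}

func main() {
	cfg := map[string]interface{}{"name": "test"}
	fmt.Println(Validate(cfg)) // nil: a missing defaulted attr is fine
	ApplyDefaults(cfg)
	fmt.Println(cfg["storage"])
}
```

The point of the separation is exactly what the conversation says: a JSON-Schema library that inserts defaults during validation fuses these two steps together.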
<anastasiamac> axw: wallyworld: menn0: any suggestions? or workaround for this? https://bugs.launchpad.net/juju-core/+bug/1611159
<mup> Bug #1611159: model not successfully destroyed, and error on "juju list-models" <oil> <oil-2.0> <juju-core:Triaged> <https://launchpad.net/bugs/1611159>
<axw> anastasiamac: nope, sorry
<anastasiamac> axw: tyvm!
<mup> Bug #1449044 changed: juju add-unit resets AWS security groups <add-unit> <ec2-provider> <security> <juju-core:Invalid> <https://launchpad.net/bugs/1449044>
<mup> Bug #1548564 changed: restart failed upgrade <juju-core:Invalid> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1548564>
<mup> Bug #1517499 changed: i/o timeout on bundle deployment <juju-core:Triaged> <https://launchpad.net/bugs/1517499>
<rogpeppe> axw: hiya
<axw> rogpeppe: hey
<rogpeppe> axw: i just ran across mongotest.DialTimeout...
<rogpeppe> axw: it was causing a test to take 5 minutes
<rogpeppe> axw: is 5 minutes really necessary as the dial timeout?
<rogpeppe> axw: it seems like quite a long time...
<axw> rogpeppe: per the comment, I think so. mostly for the unit test merge jobs, because the machines thrash quite a bit after starting up
<axw> rogpeppe: why is the test taking 5 minutes? sounds like an error in the test?
<rogpeppe> axw: well, arguably it's a problem with state.State.Close
<rogpeppe> axw: or with the whole strategy used to manage mgo.Sessions in juju
<rogpeppe> axw: it's an interesting thing though
<rogpeppe> axw: the test that takes 5 minutes is apiserver.serverSuite.TestNewServerDoesNotAccessState
<rogpeppe> axw: well... 7/10 times it takes 0.08 seconds; the other times it takes 5 minutes
<rogpeppe> axw: it depends how far the state workers get before the state is shut down
<rogpeppe> axw: the problem is that the session is copied (with Session.Copy) whenever a mongo operation is performed
<rogpeppe> axw: and if the mongo is unavailable (as it is in this test - deliberately), the mongo operation will block until a connection can be made (or it times out)
<rogpeppe> axw: and while worker is trying to perform the operation, it cannot be stopped
<rogpeppe> axw: and the State cannot be closed until all the workers have been stopped
<mup> Bug # opened: 1611267, 1611269, 1611271, 1611273, 1611275
<axw> rogpeppe: hrm :(  I don't suppose we can tell when the workers are idle?
<rogpeppe> axw: how would that help?
<axw> rogpeppe: don't close the proxy until nothing should be accessing it
<axw> rogpeppe: which is... never? the transaction watcher will always be active won't it
<rogpeppe> axw: one mo, in a call
<babbageclunk> fwereade: so I was bugging dimitern about api stuff yesterday evening, but it got late.
<babbageclunk> fwereade: (ping)
<axw> mgz: ping? would now be a good time for me to run some tests against finfolk?
<axw> scratch that, will be back later
<fwereade> babbageclunk, hey, sorry
<fwereade> babbageclunk, I did do a little review this am, not sure if that's relevant?
<babbageclunk> fwereade: no worries - just noticed the review, going through it now.
<babbageclunk> fwereade: I'm not sure what you mean by "wrap the txn-building in a func that guarantees a sensible error"
<babbageclunk> fwereade: You mean something other than onAbort?
<fwereade> babbageclunk, ah sorry
<fwereade> babbageclunk, there's run, which takes a `func(attempt int) error`
<babbageclunk> (I mean, I understand the problem with letting ErrAborted get out, I'll fix that, just wondering if there's a deeper fix than chucking an onAbort around it.)
<fwereade> babbageclunk, which will get called again if the txn is aborted
<fwereade> babbageclunk, and if you structure the buildTxn func nicely, you have OAOO (once-and-only-once) place where you check state, and can construct the associated assert at the same time
<rogpeppe> axw: yes, the transaction watcher (and the pinger) will always be active
<fwereade> babbageclunk, so when it fails it just gets called again and you never have to worry about, e.g., having two places to complain about non-dead machines that produce slightly different error messages
<babbageclunk> fwereade: Ah, so run can recheck the preconditions each time to work out which one failed and abort with a nicer error?
<fwereade> babbageclunk, exactly
<fwereade> babbageclunk, https://github.com/juju/juju/wiki/mgo-txn-example
<babbageclunk> fwereade: Right, so if I change Remove, MarkForRemoval and CompleteMachineRemovals to use those then places that add preconditions get a chance to return the right kind of error at the same time as they add the assertion.
<fwereade> babbageclunk, yes please :)
<babbageclunk> fwereade: Ok, thanks, I'll do that.
<fwereade> babbageclunk, incidentally, if run failed enough you'll get ErrExcessiveContention out, which has text something like "state changing too fast, try again soon"
<fwereade> babbageclunk, but what it almost invariably *actually* means is "some developer messed up the correspondence between state checks and txn asserts"
<babbageclunk> fwereade: Ah, right.
<fwereade> babbageclunk, but that's generally easier to understand and fix than when an onAbort triggers in response to ops defined somewhere else, because that means you often just get the *wrong* error message
<babbageclunk> fwereade: ok
<fwereade> babbageclunk, (because someone changed the ops and the client is still assuming there's only one way for the ops to trigger abort)
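The buildTxn pattern discussed above can be sketched with stdlib-only stand-ins (the real machinery lives in gopkg.in/mgo.v2/txn and github.com/juju/txn; `Op`, `run`, and `removeMachine` here are simplified, hypothetical versions):

```go
package main

import (
	"errors"
	"fmt"
)

// Op is a minimal stand-in for a txn.Op: an operation with an assertion
// that must hold for the whole transaction to apply.
type Op struct {
	C      string
	Id     string
	Assert string
}

var (
	errAborted             = errors.New("transaction aborted")
	errExcessiveContention = errors.New("state changing too fast; try again soon")
)

// run keeps calling buildTxn after each abort, so the state checks are
// re-done in exactly one place on every attempt.
func run(buildTxn func(attempt int) ([]Op, error), apply func([]Op) error) error {
	for attempt := 0; attempt < 3; attempt++ {
		ops, err := buildTxn(attempt)
		if err != nil {
			return err
		}
		if err := apply(ops); err != errAborted {
			return err
		}
		// Aborted: loop and rebuild the ops with fresh state checks.
	}
	return errExcessiveContention
}

// removeMachine shows the shape: the precondition check and the matching
// assert are built side by side, so an abort re-runs the check and yields
// a sensible error instead of leaking errAborted to the caller.
func removeMachine(isAlive func(id string) bool, apply func([]Op) error, id string) error {
	buildTxn := func(attempt int) ([]Op, error) {
		if isAlive(id) {
			return nil, fmt.Errorf("machine %s is still alive", id)
		}
		return []Op{{C: "machines", Id: id, Assert: "life: dead"}}, nil
	}
	return run(buildTxn, apply)
}

func main() {
	apply := func([]Op) error { return nil } // pretend the txn applied cleanly
	fmt.Println(removeMachine(func(string) bool { return false }, apply, "7"))
	fmt.Println(removeMachine(func(string) bool { return true }, apply, "7"))
}
```

If the checks and asserts drift apart, the failure mode is the one fwereade describes: every attempt aborts, and the caller sees the excessive-contention error.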
<rogpeppe> axw: i think the real solution is not to Copy the session on every database operation
<rogpeppe> axw: but that's a significant change
<rogpeppe> a small juju-core code cleanup: https://github.com/juju/juju/pull/5954
<rogpeppe> reviews appreciated. fwereade, you'll appreciate this change
<fwereade> rogpeppe, very nice, LGTM with a trivial
<rogpeppe> fwereade: tvm
<rogpeppe> fwereade: that trivial is not *quite* so trivial. AFAICS there is no need for that remoteError value at all. it couldn't work even if the call did return an error. and i don't see why that code is invoking FacadeCall directly rather than using an api package.
<rogpeppe> fwereade: ha, that *is* the api package :)
<babbageclunk> fwereade: Can you take a look at http://reviews.vapour.ws/r/5365/diff/# again? Everything's using buildTxn/run now, but I'm worried I might be doing too much in completeMachineRemovalsOps.
<natefinch> axw, fwereade: morning... anything you need to talk about?
 * babbageclunk goes for a run.
<wallyworld> looking for a review of a small CLI change for stakeholders to go into beta15 http://reviews.vapour.ws/r/5399/
<axw> natefinch: I think you're good to get deeper into it - not sure that there's anything to discuss. there's a few comments on the doc. I'm here for a bit but will be disappearing soon - will be more free to chat tomorrow if you're uncertain about anything
<axw> mgz sinzui balloons: any of you about?
<natefinch> axw: no problem.  The comments seems clear, and I think we'll be good to go with it.
<axw> natefinch: did you look at replacing gojsonschema with the other one already?
<axw> for actions
<axw> I suspect it'd be trivial, I don't think we do anything particularly complicated there
<fwereade> babbageclunk, I think that's all fine
<fwereade> babbageclunk, completeMachineRemovalsOps is quite big but it doesn't seriously upset me
<babbageclunk> fwereade: great! Thanks.
<sinzui> axw: I am now about
<sinzui> fwereade: babbageclunk What is the sig we send to jujud to stop and uninstall?
<babbageclunk> sinzui: I think it's a file that gets left in a directory somewhere, but I haven't found the bit of code yet.
<sinzui> babbageclunk: touch uninstall-agent, then SIGABRT?
<mup> Bug #1506225 changed: Failed bootstrap does not clean up failed environment w/o --force and error message is unhelpful <bootstrap> <destroy-environment> <jujuqa> <juju-core:Invalid> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1506225>
<mup> Bug #1611379 opened: api/backups: restore code needs improvement <juju-core:New> <https://launchpad.net/bugs/1611379>
<babbageclunk> sinzui: not sure about the signal, but that sounds plausible
<sinzui> babbageclunk: I am reading notes I left to myself more than a year ago. Plenty of time for juju to change
<babbageclunk> sinzui: True that. I'm just trying it out.
<wallyworld> natefinch: I think you're ocr? can i get a small review for a stakeholder request http://reviews.vapour.ws/r/5399/
<natefinch> wallyworld: will do
<wallyworld> ta
<rick_h_> fwereade: standup ping
<mup> Bug #1516150 changed: LXC containers getting HA VIP addresses after reboot <canonical-bootstack> <juju-reboot> <juju-core:Invalid by dimitern> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1516150>
<tvansteenburgh> Hi everyone...in juju1 I could upload a local charm to the apiserver, then immediately deploy it via the Deploy api using the 'local:...' url returned by the upload. That doesn't work with juju2 - I get a 'charm not found' error from the api when I Deploy, and the Charms.List api doesn't show my charm. What am I missing?
<natefinch> mgz, sinzui: for https://bugs.launchpad.net/bugs/1604474, can you leave up a windows VM that CI tried to deploy to?  I wasted a lot of time trying to get azure to start a windows machine yesterday
<mup> Bug #1604474: Juju 2.0-beta12  userdata execution fails on Windows <azure-provider> <ci> <juju2.0> <oil> <oil-2.0> <regression> <vpil> <windows> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1604474>
<sinzui> natefinch: sure, we just add --keep-env to the job, or just run the test locally with the same flag
<babbageclunk> Hi tvansteenburgh - you can deploy a local charm just by specifying the charm directory in juju 2. Or do you want to upload it specifically?
<mgz> there's then a slight wiggle making sure you get the env details to actually inspect that machine
<mgz> but it's doable
<tvansteenburgh> babbageclunk: yeah but only via the cli right? i'm using the api
<babbageclunk> tvansteenburgh: oh, I see. Sorry, not sure.
<babbageclunk> tvansteenburgh: Reading the code it sounds like what you're describing should still work. Posting to the /charms endpoint gives you a url back?
<tvansteenburgh> babbageclunk: yeah, a local: url
<tvansteenburgh> but then when i try to deploy using that url, i get charm not found
<tvansteenburgh> and if i call the Charms.List api, my charm isn't in the list
<natefinch> axw: you're not actually around are you?
<sinzui> natefinch: I run the test scripts locally all the time. I think this will run the windows charms in azure and keep the env about after you send a control-c https://pastebin.canonical.com/162690/
<babbageclunk> tvansteenburgh: Sorry, I'm not familiar with that part of the code at all.
<rick_h_> tvansteenburgh: katco and wallyworld were just talking about some work needed around that code
<rick_h_> tvansteenburgh: I defer to katco on the exact work required/possible bug there but funny that you bring it up right as they're investigating that area of things
<mup> Bug #1516150 opened: LXC containers getting HA VIP addresses after reboot <canonical-bootstack> <juju-reboot> <juju-core:In Progress by dimitern> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1516150>
<mup> Bug #1611391 opened: model migration fails while removing from original controller <model-migration> <juju-core:Triaged> <https://launchpad.net/bugs/1611391>
<natefinch> sinzui: thanks for the detailed example
<babbageclunk> fwereade: around?
<katco> tvansteenburgh: i'll ping you when i open that bug. from my discussions with wallyworld though, i would think that what you're doing should work (i.e. deploying from the api). there is likely something wrong with how we're caching charm archives
<fwereade> babbageclunk, heyhey
<babbageclunk> fwereade: migrations!
<babbageclunk> fwereade: I think I should migrate machineRemovals records?
<fwereade> babbageclunk, I think you should add a note saying they're not migrated because we won't migrate when we're in an unstable state
<babbageclunk> fwereade: I *could* treat them like cleanups, but then migration would be blocked by the provider?
<fwereade> babbageclunk, and double-check with menn0 that "any entity in any state other than alive" == unstable (from migrations perspective)
<babbageclunk> fwereade: Ok, so I should treat them like cleanups then - don't let migration happen while there are machineRemovals.
<babbageclunk> fwereade: ok
<fwereade> babbageclunk, yeah -- and just sync up with menn0, I am pretty certain he's about to start on that side of it
<tvansteenburgh> katco: thanks! really eager to get past this - trying to help a customer who is stuck here
<katco> tvansteenburgh: ack. other than asap, what kind of timeframe would be ideal for you?
<babbageclunk> fwereade: ok cool
<tvansteenburgh> katco: yesterday? :)
<tvansteenburgh> katco: i know you guys are swamped, whatever you can do
<katco> tvansteenburgh: not a problem. in this timeline you'll have to wait 40y or so while i invent time travel, but the alternate-you will have it yesterday :)
<tvansteenburgh> haha!
<niedbalski> dimitern, re: 1610037, I just uploaded the cloud init logs for both container/host.
<balloons> jam, thanks for opening the conversation again about sharing configs for snappy
<natefinch> katco: ship it with one minor question
<katco> natefinch: tal, ty
<katco> natefinch: ah, yeah the reason i do that is so that the formatting is completely left to the errors package. basically for consistency
<katco> natefinch: e.g. if the errors package ever changes its string formatting (or we use different errors), this would still format correctly in the log
<mup> Bug #1611404 opened: failed migration leaves model unkillable <model-migration> <juju-core:Triaged> <https://launchpad.net/bugs/1611404>
<natefinch> katco: It was a little confusing to me.  My first thought was that you were accidentally not saving the annotated error to a local variable.... took me a second to realize you were probably doing it on purpose to let it do the formatting for you.
<balloons> I believe my PR's are still invisible, so does someone mind having a look at https://github.com/juju/juju/pull/5956 today? It's upstreaming the snapcraft.yaml file
<dimitern> niedbalski: thanks, I'll have a look
<dimitern> niedbalski: /var/log/cloud-init.log from the container is not very useful I'm afraid - I'd like to see /var/log/cloud-init-output.log from the container please, and thanks for adding the machine's /v/l/c-i-o.log
<niedbalski> dimitern, ok. cloud-init and cloud-init-output http://pastebin.ubuntu.com/22811404/
<dimitern> niedbalski: is there nothing else in the container's /var/log/cloud-init-output.log? That's very strange..
 * rick_h_ grabs lunchables
<niedbalski> dimitern, yep, it seems not complete. btw; http://pastebin.ubuntu.com/22811951/ this is the hypervisor.
<katco> natefinch: hey you're OCR, can i get a quick ship it? +8L: http://reviews.vapour.ws/r/5401/
<mup> Bug #1611427 opened: Code which performs retries should consolidate on github.com/juju/retry <tech-debt> <juju-core:New> <https://launchpad.net/bugs/1611427>
<rick_h_> katco: I'd feel free to self-review that one and just run with it
<katco> rick_h_: k
<katco> rick_h_: wondering... in the time it took you to type that message, could you have clicked "Ship It!" ;)
<mgz> katco: I'm happy to stamp a comment-only change :)
<rick_h_> katco: yes, but then I'd not have said that in the future you should be able to self-review trivial changes such as that
<rick_h_> katco: so figured I'd type that vs both :)
<katco> hehe
<katco> mgz: ta
<babbageclunk> fwereade: still around?
<fwereade> babbageclunk, heyhey
<babbageclunk> fwereade: hey, so now I'm trying to understand auth in the API.
<fwereade> babbageclunk, basically, your facade is passed a facade.Authorizer that tells you who made the (already-authenticated) connection to your facade
<fwereade> babbageclunk, often you can just return common.ErrPerm from the facade ctor when you know that the caller isn't allowed -- e.g. a user connecting to a facade intended for a worker
<babbageclunk> fwereade: Ok, so there's the AuthModelManager() method, that is only true for something that's running on the controller? I'm using that in the ctor.
<babbageclunk> fwereade: You mentioned in the review that AllMachineRemovals should take entities and check that it's the model tag.
<fwereade> babbageclunk, ah yes
<fwereade> babbageclunk, right, the AuthModelManager is the right one for the ctor
<babbageclunk> Do I do that just by checking that the type of the tag is model, and then calling authorizer.AuthOwner(modelTag)?
<fwereade> babbageclunk, and... it is sort of ludicrous that the Authorizer *doesn't* have a ModelUUID representing which model it's connected to, but you *can* get that out of state
<fwereade> babbageclunk, AuthOwner isn't quite right
<babbageclunk> fwereade: Maybe HasPermission?
<fwereade> 	// AuthOwner returns true if tag == .GetAuthTag(). Doesn't need
<fwereade> 	// to be on this interface, should be a utility fun if anything.
<fwereade> GetAuthTag will be the machine that connected
<fwereade> babbageclunk, I would (1) reject non-model-tags, and (2) reject model tags that don't match the state's modeluuid
<fwereade> babbageclunk, for bonus points, a ConnectedModel or something on Authorizer would be much nicer than having to ask state
<babbageclunk> fwereade: Ah, right - not really looking at auth at all.
<fwereade> babbageclunk, yeah, but I think conceptually that info *should* be coming from the authorizer
<babbageclunk> fwereade: yeah, that does sound nicer.
<fwereade> babbageclunk, it's just that it hasn't really evolved much at all over the years
<babbageclunk> Ok, I'll try to thread that through.
<fwereade> babbageclunk, <3
<babbageclunk> And should it accept multiple models? Presumably that would always be denied - the agent's connected to one model specifically, right?
<fwereade> babbageclunk, connections to a controller muddy the waters a little bit
<fwereade> babbageclunk, I would probably go no further than reporting *what* model the connection is for, and leave the interpretation of that in the facade, at least to begin with
<babbageclunk> fwereade: Right, sorry. I meant that check in AllMachineRemovals facade method.
<fwereade> babbageclunk, ah, got you, yeah -- the connection is specifically for just one model
<fwereade> babbageclunk, so constructing a canAccess AuthFunc and passing that on to the rest of your logic is probably ideal
<babbageclunk> fwereade: Cool. I'll do the same check on WatchMachineRemovals.
<fwereade> babbageclunk, that's a `func(names.Tag) bool` iirc
<fwereade> babbageclunk, perfect
<babbageclunk> fwereade: And then the CompleteMachineRemovals and GetNetworkConfig ones already get lists of machines - so I do the checks against them?
<fwereade> babbageclunk, I think they're implicitly authorized by the AuthModelManager -- whereas if it were just some machine agent, we'd want to restrict info about any *other* machines
<fwereade> babbageclunk, but this is very much situational IYSWIM
<fwereade> babbageclunk, deciding what gets access to what is pretty much the facade's primary job
<fwereade> babbageclunk, we should generally be expecting to write the logic in terms of an AuthFunc chosen by the facade, or something very similar
<babbageclunk> fwereade: Yeah, I think I see that.
<fwereade> babbageclunk, so that when we end up wanting to do the same stuff elsewhere, it's easy to pull that logic into a common/whatever type and parameterise it with an AuthFunc on facade construction
<fwereade> babbageclunk, e.g. apiserver/lifeflag, where I was very happy to just grab a couple of existing common services and specialise them
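The canAccess AuthFunc fwereade describes — reject non-model tags, and reject model tags that don't match the connection's model — might look like this. `Tag` here is a hypothetical stand-in for names.Tag, and `modelAuth` is an illustrative name, not juju's actual helper:

```go
package main

import "fmt"

// Tag is a minimal stand-in for names.Tag: a kind plus an id.
type Tag struct {
	Kind string
	Id   string
}

// AuthFunc mirrors the shape mentioned above: func(names.Tag) bool.
type AuthFunc func(Tag) bool

// modelAuth builds an AuthFunc for a connection to one specific model:
// (1) reject non-model tags, (2) reject model tags for other models.
func modelAuth(connectedModelUUID string) AuthFunc {
	return func(tag Tag) bool {
		return tag.Kind == "model" && tag.Id == connectedModelUUID
	}
}

func main() {
	canAccess := modelAuth("deadbeef")
	fmt.Println(canAccess(Tag{"model", "deadbeef"})) // true
	fmt.Println(canAccess(Tag{"model", "cafef00d"})) // false
	fmt.Println(canAccess(Tag{"machine", "0"}))      // false
}
```

The facade constructor builds the AuthFunc once (its primary job: deciding what gets access to what) and passes it down, so the same logic can later be pulled into a common type and parameterised, as with apiserver/lifeflag.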
<babbageclunk> fwereade: Oh, right - I see some things like that in the machiner.
<babbageclunk> fwereade: Actually, something related that I was going to ask you about this morning but then got distracted by your review...
<fwereade> babbageclunk, yeah -- it's not all super-elegant but at least it separates the responsibilities, or lets us chalk out a path towards doing so
<fwereade> babbageclunk, oh yes?
<babbageclunk> fwereade: ProvisionerAPI.GetContainerInterfaceInfo is very close to what I want for getting the network config for machines.
<fwereade> babbageclunk, then that's a great candidate for just that sort of extraction :)
<babbageclunk> fwereade: Right, that was what I was thinking/wondering. It's slightly different because it only uses the first address on the device though.
<fwereade> babbageclunk, hmm... I'm not sure I have the context for a firm judgment there
<fwereade> babbageclunk, if there's no drawback to sending extra info to the one that doesn't need all of it, might still be sensible
<babbageclunk> fwereade: ok, I'll think about that.
<fwereade> babbageclunk, there might be something genuinely in common, but there might not ;)
<fwereade> (not always easy to tell)
<babbageclunk> fwereade: might be better to just do my one and then see what's common, rather than the other way around.
<fwereade> babbageclunk, ack
<babbageclunk> fwereade: ok, great - thanks! Sorry to bug you late!
<fwereade> babbageclunk, http://thecodelesscode.com/case/233
<fwereade> babbageclunk, no worries!
<babbageclunk> fwereade: cute
<fwereade> babbageclunk, I like that site a lot, there's some good stuff there
<fwereade> babbageclunk, occasionally self-contradictory in the very best possible way
<mup> Bug #1611453 opened: manual-provider: systemd services left behind <ci> <kill-controller> <manual-provider> <regression> <systemd> <xenial> <juju-core:Triaged> <https://launchpad.net/bugs/1611453>
<natefinch> katco: sorry, was out.  Looks like you got it though.
<rick_h_> perrito666: got a sec?
<perrito666> rick_h_: certainly
<rick_h_> perrito666: so pulled that pr down and built a juju with it and verified my local show.go has your patch in it
<rick_h_> perrito666: but it's not working for me when I run $ djuju show-model controller
<rick_h_> perrito666: I get the output for model "default"
<perrito666> rick_h_: mm, something might have changed since I PR'd that, thanks for the test, I'll re-review it and fix it and re-propose it
<rick_h_> perrito666: rgr ok ty
<rick_h_> perrito666: and let me know if I'm doing something wrong, only built things a couple of times so problem could be on my end without a doubt
<perrito666> rick_h_: well the way you describe it, the issue is most likely on my end :)
<rick_h_> perrito666: k, let me know if I can be of any service
<mup> Bug #1611463 opened: GUI: test_upgrade_gui failed KeyError: 'bootstrap-config' <ci> <juju-gui> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1611463>
<katco> sinzui: is there any easy way to check if lp:1596462 is fixed in master?
<sinzui> katco: Let's be optimistic and close it. http://reports.vapour.ws/releases/issue/5770f063749a56783c8c36fc shows we saw it once a week, but we haven't seen it in 4 weeks
<katco> sinzui: well, i just landed something today that (in theory) should fix it
<katco> sinzui: (last comment in the bug)
<balloons> rogpeppe, so I started making the changes to migrate everything from launchpad.net/gnuflag to github.com/juju/gnuflag. I fixed everything in juju/juju, but I see there's imports in several other repos. What's the best way to coordinate this change?
<sinzui> katco: I suspect this bug was caused by an overloaded server. We restructured tests to avoid cpu starvation
<katco> sinzui: ah. well in any case, even under load the bug should be fixed
<sinzui> why was it only seen on Fridays?
<katco> sinzui: not sure if that was a question for me. i don't know. weekend test run?
<sinzui> katco: just a public musing. CI doesn't schedule extra work on late Thursday/early Friday. The pattern is odd
<mup> Bug #1609494 changed: grant-revoke: reenabled users missing from list-users <ci> <grant> <regression> <user> <juju-ci-tools:Invalid> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1609494>
<balloons> natefinch, can I get you to review something for me / make sure I've done things properly?
<natefinch> balloons: sure
<balloons> natefinch, https://github.com/juju/juju/pull/5956. I'm attempting to land snapcraft.yaml and updating 2 depends off bzr to git instead. tomb and gnuflag
<natefinch> balloons: wow, those were our last two launchpad dependencies, huh?
 * perrito666 cries in spanish
<natefinch> balloons: for romulus, you updated tomb in dependencies.tsv to use tomb.v1... but didn't update any dependencies that use tomb.  that'll probably break something in CI.
<balloons> natefinch, yes indeed
<natefinch> (I assume gnuflag doesn't use tomb)
<balloons> they are the last. I was trying to find all the places it was used, but juju/* is quite big obviously. I also assumed perhaps I need to land the supporting PRs before the main PR in juju/juju
<natefinch> yes, you do need to land those first, and then update juju's dependencies to point to the new commits for those repos
<natefinch> it's just a giant PITA
<balloons> natefinch, so i'm not sure what you mean by the romulus comment
<perrito666> brb
<natefinch> balloons: you changed what dependencies.tsv says the repo depends on... but you didn't actually change any code that imports launchpad.net/tomb to import the github repo
<natefinch> balloons: oh, I think I see what it is.  utils has the dependency on tomb, and romulus uses utils
<balloons> natefinch, right. utils has the inverse problem
<natefinch> actually... romulus also depends on juju/juju ... which is just a horrible idea
<balloons> i updated gnuflag, but it only has tomb underneath
<natefinch> wait, gnuflag depends on tomb?
<natefinch> oh nevermind
<balloons> natefinch, no sorry.. I meant utils only uses tomb, not gnuflag, but depends on romulus that does use gnuflag
<natefinch> I misunderstood
<balloons> I'm not sure why those sub-repos also have dependencies.tsv..
<natefinch> they really shouldn't.  The only thing that should have a dependencies.tsv is a repo that makes a binary
<perrito666> natefinch: that is a strong statement
<balloons> natefinch, ok, well do we feel I've gotten all the places that need updated?
<natefinch> anyway, my point is.... you can't change a dependency path without changing either some code in the repo that uses it, or changing another dependency that had code changed to use it.
<natefinch> this change can't be correct: https://github.com/juju/utils/pull/230/files
<natefinch> nothing is using the new gnuflag path
<natefinch> which means something is likely using the old gnuflag path, which means this is wrong
<balloons> natefinch, why is it in the dependencies.tsv to begin with then? nothing under utils uses it
<balloons> that was, I assumed, the utils <-> romulus relationship... I guess
<natefinch> relationships are never two way.  always only one way
<natefinch> something utils imports is using gnuflag
<balloons> I can find no other uses anywhere
<balloons> does godeps actually take out dependencies that aren't needed?
<balloons> I felt like juju/juju has stuff it might not use for instance; not that I've explored it
<natefinch> if you run godeps ./... it'll print out the actual dependencies
<natefinch> or godeps -t ./... to include testing dependencies
<perrito666> natefinch: what is wrong with the link you passed?
<natefinch> and that includes gnuflag somehow..
<natefinch> perrito666: gnuflag was updated in dependencies.tsv, but no imports that actually referenced the old one were updated to use the new one (and no other dependencies changed)
<perrito666> ah I missed that
<natefinch> ahh, I see what it is
<natefinch> juju/cmd still references the old one.  You need to check in that change, then update utils to reference the new one
<balloons> that makes sense to me
<natefinch> (to reference the new juju/cmd)
<perrito666> in github.com/juju there are 237 imports of gnuflags
<natefinch> this is why it's a PITA
<natefinch> perrito666: almost all of those are in juju/juju
<perrito666> sorry 153
<natefinch> so it's a rolling update - juju/cmd is the leaf, check that in, update juju/utils to reference the new juju/cmd, check that in, update juju/romulus to reference the new juju/utils. check that in
<natefinch> althouhg romulus depends on juju/juju, so you'll need to do that before romulus
<perrito666> yup, ironically utils only depends on gnuflag but indeed has no dependency as you said, then there is terms-client, romulus and cmd
<natefinch> and this is why only the top level should use dependencies.tsv.... because it's only the top level one that actually matters anyway.  It's not like juju runs the dependencies.tsv in juju/utils
<perrito666> natefinch: nah, that is bogus
<perrito666> if you create a library, it makes perfect sense that the library has dependencies.tsv
<perrito666> godeps is just an imperfect method, it should know how to handle cascades
<natefinch> I guess you may need dependencies.tsv to build juju/utils to be even able to check stuff in
<perrito666> natefinch: a library has the same issue as a binary
<perrito666> we need vendoring :p
<natefinch> vendoring is good, but it doesn't fix this problem.  utils would still need to vendor its own dependencies if master of the things it depends on won't build together
<natefinch> then you're just updating the vendor directories for each... not much better
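The rolling update natefinch lays out is just a topological sort of the repo dependency graph: each repo lands only after everything it imports has landed. A sketch, with the edges taken from the conversation above (the graph and `landOrder` are illustrative, not tooling that exists):

```go
package main

import "fmt"

// deps maps each repo to the repos it imports, per the discussion:
// cmd is the leaf; utils uses cmd; juju/juju uses both; romulus uses
// utils and juju/juju.
var deps = map[string][]string{
	"juju/cmd":     nil,
	"juju/utils":   {"juju/cmd"},
	"juju/juju":    {"juju/cmd", "juju/utils"},
	"juju/romulus": {"juju/utils", "juju/juju"},
}

// landOrder returns the repos in an order where every dependency lands
// before any repo that imports it (a depth-first topological sort).
func landOrder() []string {
	var order []string
	done := map[string]bool{}
	var visit func(string)
	visit = func(repo string) {
		if done[repo] {
			return
		}
		done[repo] = true
		for _, d := range deps[repo] {
			visit(d)
		}
		order = append(order, repo)
	}
	// Fixed visiting order keeps the output deterministic.
	for _, repo := range []string{"juju/cmd", "juju/utils", "juju/juju", "juju/romulus"} {
		visit(repo)
	}
	return order
}

func main() {
	fmt.Println(landOrder()) // [juju/cmd juju/utils juju/juju juju/romulus]
}
```

After each repo in the list lands, the next one's dependencies.tsv is updated to point at the new commit — which is exactly why the whole exercise is, as natefinch puts it, a giant PITA.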
<balloons> natefinch, ok, I'll start with $$merge$$ on juju/cmd
<natefinch> balloons: there, I gave you provisional LGTMs.  Just roll in the updates to dependencies.tsv as things get committed.
<natefinch> balloons: and then feel free to land
<balloons> on it
<natefinch> balloons: why are you pushing these dependency updates, btw?  I mean, it's a good idea, but why you why now?
<balloons> natefinch, ahh. I need it to make building a snap in launchpad possible
<balloons> natefinch, there's a bug using bzr to pull depends inside the builder. This 'fixes' that bug
<natefinch> balloons: I thought it might be that.  It's kind of nice, actually... the go tool complains when you tell it to download stuff from launchpad, since it's over http
<balloons> right.. this cleans up things for everyone, and gets rid of the bzr dependency as well
<balloons> so why not :-)
 * natefinch tries not to cheer too loudly.
 * katco quietly mentions that all our ci stuff is hosted in bzr
 * balloons notes it should not only be in git, but in the same repo(s) as juju itself
 * natefinch cries in the corner.
<katco> balloons: go get 'em :)
<perrito666> well it's also written in python, so if you are going for consistency you have a long road ahead
<natefinch> I was going to mention that...  sigh.
<katco> i don't know that it being written in python is a huge deal
<katco> and this is coming from someone who doesn't know python
<balloons> yea actually I think you should be glad they aren't written in go
<natefinch> katco: it's a barrier to entry for our go developers
<balloons> natefinch, looks like loggo depends is out of date; I'm going to update it too. build failed
<natefinch> balloons: cool
<natefinch> I'm outta here for a while.  back in 3-4 hours.
<balloons> thanks natefinch-afk for the review
<katco> natefinch: not a really high one imo. i'd sooner complain about the custom infrastructure built up for interacting with juju
<balloons> mmm.. indeed katco
<mup> Bug #1611514 opened: "local" charm schema should allow deploying previously deployed local charms <feature> <juju-core:New for cox-katherine-e> <https://launchpad.net/bugs/1611514>
<alexisb> thumper, ping
<thumper> yus?
<alexisb> http://reviews.vapour.ws/r/5386/
<alexisb> can you please review this for perrito666 ^^
<thumper> k
<thumper> perrito666: conflicts against master
<thumper> perrito666: also, bigger than 500 lines
<thumper> :)
<perrito666> aghh I am never again changing code that is called all over the place
<perrito666> thumper: fixing the merge, could you make an exception with the lines ?
 * thumper is reviewing
<thumper> ssh
<perrito666> I am prepared to pay in alcoholic beverages
<perrito666> rick_h_: take a look at the branch, I fixed the issue
<rick_h_> perrito666: ty, will have to wait until the morning
<rick_h_> perrito666: but will do
<alexisb> menn0-exercise, liking dev snap, the setup script is mighty handy
<menn0> alexisb: yeah, it took a little while to figure out how to make that work (it's easy once you know how however)
<perrito666> we should rename ship-it to amen so I can ask "can I get an amen" when asked for a shipit
<redir> perrito666: halleluja!
<alexisb> perrito666, did thumper give you a ship it?
<alexisb> if so I will second redir's halleluja
<thumper> almost
<thumper> have now with one potential change
<alexisb> halleluja!
<perrito666> alexisb: implementing change after standup and then its ok to go
<perrito666> and we can use $$halleluja$$ for the merge
<alexisb> :)
<redir> axw: got a minute?
<menn0> alexisb, thumper: tech board agenda now has your items for today's meeting
<alexisb> thank you menn0
<axw> redir: did you want to talk to me?
<redir> yeah just a couple questions
<redir> axw: ^
<axw> redir: yup, hangout or here?
<redir> either but HO is prolly faster
<redir> unless you remember where we put region config in globalSettingsC and have an idea where I would get the region and cloud from in modelConfigSources
<redir> axw: ^
<perrito666> If the standup had been its usual length I could have finished my recipe :p
<perrito666> Standup + cookingshow
#juju-dev 2016-08-10
 * redir eod
<redir> thanks for the clue axw
<menn0> thumper: http://reviews.vapour.ws/r/5404/ pls
<thumper> menn0: done
<menn0> thumper: thanks
<menn0> thumper: dropped your issue (with an explanation)
<thumper> ok
<natefinch> axw: I couldn't tell, did you and William come up with a magic way to make (un)marshaling work nicely with jsonschema and our extra properties?  Seems like the least amount of work would be to define schemas in yaml strings and marshal into Schema, with helpers to get metadata out of the "extras".  Not elegant, but a lot less work than making our own custom jsonschema struct.
<axw> natefinch: we did not
<natefinch> axw: any preference?
<axw> natefinch: I'd prefer if it wasn't just a blob of text, but I'm unsure of how it'd look in the end. I guess you would have to duplicate the entire Schema type
<axw> and SchemaList etc.
<natefinch> I think forking that schema package and adding our properties would also be easy enough... just sort of a maintenance burden to keep up with upstream (which, looking at gojsonschema, we usually just don't do).
<axw> natefinch: I'd prefer if we didn't fork
<axw> for that reason
<natefinch> it sucks that there's no nice way to make recursive data structures easily extensible
<natefinch> axw: so... you're saying we're boned :)
<axw> natefinch: I don't think so? I think we can duplicate the *types* from the jsonschema package, and then introduce functions to extract/inject the from/into the Extras field
<axw> natefinch: we would still rely on the jsonschema package for all of the standard parsing and validation
<natefinch> axw: interesting, ok.
<natefinch> axw: the tricky thing is that unmarshaling into the Schema type is all hand coded in that go-jsschema package, I think mostly because the rules for jsonschema are really wacky (values that can be a boolean or a list of strings, for example).  Some of it may be overengineered... but I'm guessing we might want to copy a lot of the logic in how to unmarshal... which might make it better to just fork the package outright.
<natefinch> Note that the validation is actually in a subpackage, the main package is just the types and unmarshalling.
<axw> natefinch: I'll have a poke and see if I can come up with something that looks workable
<axw> natefinch: https://gist.github.com/axw/0d395b86aa5ac9ee0c4f18a3ca81cb68
<axw> natefinch: you would need to have a "toInternal" to do the opposite, putting the juju bits back into the Extras map
<menn0> axw or thumper: fairly easy one: http://reviews.vapour.ws/r/5405/
<axw> natefinch: I dropped all of the json tags from the cloned struct. instead of having tags, you'd have a MarshalJSON and UnmarshalJSON that use from/toInternal
<axw> menn0: looking
<natefinch> axw: right
<natefinch> axw: it's funny, the jsschema thing has custom json (un)marshaling too, so the json tags aren't really used.
<natefinch> axw: that looks pretty good. Reusing as much as possible from the internal type, just replacing the recursive part.  Very nice.
<axw> menn0: LGTM
<menn0> axw: (delayed) tyvm
<axw> np
 * thumper waits for the featuretest package to run tests...
<thumper> should have used -check.v
<thumper> hmm...
<thumper> how long to wait...
<thumper> killed it
<thumper> PASS: dblog_test.go:149: debugLogDbSuite.TestLogsAPI	0.005s
<thumper> OK: 1 passed
<thumper> PASS
<thumper> ok  	github.com/juju/juju/featuretests	7.662s
<thumper> 5ms to run the test, the rest of 7.6 seconds for setup / teardown
<thumper> w00t
<jam> menn0: are you still around?
<menn0> jam: yes, for a few more minutes
<jam> menn0: sorry I ended up missing you, I'll see you at the tech board if you can make it
<rogpeppe> balloons: the best way is to migrate all the leaf repositories first, then juju-core itself
<babbageclunk> menn0: ping?
<menn0> babbageclunk: hi, in tech board call atm
<babbageclunk> menn0: ok
<menn0> babbageclunk: done!
<menn0> babbageclunk: I imagine you're bugging me about the email you sent which I didn't reply to?
<babbageclunk> menn0: Hi! Yes!
<menn0> babbageclunk: sorry... I was trying to stay focussed on migrations so email took a back seat today
<menn0> babbageclunk: I'll reply now but the short answer is that you don't need to worry about migrations for what you're working on
<menn0> babbageclunk: a migration will abort early if there's any dying/dead machines or units
<babbageclunk> menn0: Ok - so I can ditch that commit then?
<babbageclunk> menn0: No need to reply right now if that's the takeaway.
<menn0> babbageclunk: yep, you can ditch that... there will be a higher level check for dying machines
<babbageclunk> menn0: Sweet, thanks!
<menn0> babbageclunk: should be working on prechecks this week
<fwereade> menn0, if you're still around... the nsWhatever thing
<babbageclunk> fwereade: I can't see any way of testing apiHandler.ConnectedModel without either adding the test to a suite based on JujuConnSuite (so I can call TestingApiHandler) or adding another TestingApiHandler* func to apiserver/export_test (so I can specify a UUID directly).
<fwereade> menn0, I feel like I want *some* way to visibly distinguish these (basically) static/stateless types
<fwereade> babbageclunk, hmm
<babbageclunk> fwereade: I'm aware that both are eyebrow-provoking.
<fwereade> babbageclunk, let me look
<fwereade> babbageclunk, wouldn't surprise me if the use-existing-JujuConnSuite were the pragmatic approach
<babbageclunk> fwereade: ok, I'll put my test in one of those for now.
<fwereade> perrito666, ping re Authorizer.HasPermission
<fwereade> perrito666, ehh, don't worry about it
 * fwereade needs someone to talk to about structuring/naming some things in state, is off for a ciggie, would be delighted if someone were willing to chat in a few minutes
<perrito666> fwereade: i am just waking up, if you want I can ping you in a moment?
<fwereade> perrito666, ofc
<fwereade> jam, dimitern, babbageclunk: I am going to waffle vaguely about the ns* types I've been adding to state, please jump in if anything springs to mind
<fwereade> so
<dimitern> fwereade: sure ;)
<fwereade> in the payloads work, we made a stab at separating persistence from business logic
<fwereade> and it wasn't entirely successful, but when I moved the necessary payloads bits into state, I tried to preserve the distinction
<fwereade> and I ended up with an exported Payloads type that clients use, and an unexported `type nsPayloads_ struct{}` to hold all the internal methods that actually implement the persistence side
<fwereade> (and an unexported `var nsPayloads nsPayloads_` to actually call methods on)
<fwereade> and it's nice that it's entirely stateless, and it has methods like `untrackOp(coll mongo.Collection, docID string) (txn.Op, error)`
<fwereade> so I'm pretty happy with the *structure*
<fwereade> but the "nsFoo" naming style has raised many eyebrows
<fwereade> so the *easy* thing to do is just to drop the ns stuff
<fwereade> and just have refcounts.CreateOrIncRefOp, and life.notDeadOp, and so on
<fwereade> but
<fwereade> the style, while I think it is good, is somewhat at odds with the stateful-model-type approach we're currently using almost everywhere
<fwereade> and I can't help but feel it needs *something* to clearly and visibly distinguish it from... all the other code
<fwereade> am I overthinking this? from a certain perspective the prefix tells you very little, whatever it might be
<dimitern> fwereade: so you separate the external interface of Payloads from the low-level ops, which go in the ns* internal type?
<fwereade> dimitern, yeah
<dimitern> fwereade: ns for namespace.. I could say it's not quite obvious at first
<dimitern> fwereade: I mean namespaces vs entities
<fwereade> dimitern, (and in payloads, because it uses that mechanism exclusively, I can implement the Payloads thing largely in terms of distinct Change types which themselves use the ns type)
<dimitern> fwereade: things we store
<fwereade> dimitern, yeah, indeed
<fwereade> dimitern, it is evidently not obvious
<dimitern> fwereade: how about persistence ? too long I guess..
 * xnox is failing to bootstrap localhost provider with 2.0 beta14 on xenial =(
<fwereade> dimitern, yeah, I guess that's the obvious one, but I rue all the characters it costs every time you want to use it
<dimitern> fwereade: backing is shorter, and already used here and there
<dimitern> it's not perfect though..
<dimitern> xnox: about lxdbr0?
<fwereade> dimitern, the other solution is to pull them out into their own packages, which is clearly what they *want* to be, but I don't think I can do it non-invasively
<fwereade> dimitern, state/persistence/life etc
<xnox> dimitern, tools info mismatch
<fwereade> dimitern, you know, the more I talk, the less I care about having any prefix at all
<dimitern> xnox: oh..
<fwereade> dimitern, if they ever make it into packages it'll just be `life.notDeadOp(...)` anyway
<dimitern> fwereade: having the prefix is fine I think
<fwereade> dimitern, that's seemed to be the major sticking point for people though
<dimitern> fwereade: the underscore suffix is making me wince a little
<xnox> does 2.0 need agent-stream: devel ?
<xnox> ERROR failed to bootstrap model: cannot start bootstrap instance: tools info mismatch ({2.0-alpha1-xenial-amd64  b67c1484745bd58e7fac6ad672a7f6e45042ebef7a1e0e995f3f0f3c2baa7d33 18556414}, {2.0-alpha2-xenial-amd64  ceb165a45206eddadc06a7c986b44a3f76195c71a317d0c87810727c71bcc0f8 18073871})
<xnox> $ juju bootstrap --config agent-stream=devel localhost localhost
<xnox> seems to work better,
<xnox> however specifying --config cancels "interactive" mode of juju bootstrap
<dimitern> xnox: nice! I usually always bootstrap from src with --upload-tools
<dimitern> xnox: yeah any args do that
<dimitern> fwereade: if not ns how about stored? e.g. storedPayloads
<dimitern> fwereade: saved ?
<dimitern> fwereade: dbPayloads even :D
<fwereade> dimitern, yeah, if you can see a way around the suffix that'd be nice, but you only see that when you're implementing it
<fwereade> dimitern, hmm
<fwereade> dimitern, dbPayloads
<fwereade> dimitern, I think maybe I like that
<dimitern> db is short and more obvious yeah
<fwereade> dimitern, awesome, tyvm
<dimitern> :) np
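[Editor's note: the pattern fwereade describes — an unexported, stateless struct holding the persistence-side methods, called through a single package-level value — looks roughly like the sketch below, using the dbPayloads naming just settled on. The Op type and method names are invented stand-ins for the real mgo/txn and juju state code:]

```go
package main

import "fmt"

// Op is a stand-in for mgo/txn.Op, with only the fields the sketch needs.
type Op struct {
	C      string // collection name
	Id     string
	Remove bool
}

// dbPayloads_ holds the persistence-side methods; it carries no state,
// so one package-level value is enough to call methods on.
type dbPayloads_ struct{}

var dbPayloads dbPayloads_

// untrackOp builds the txn op that removes a payload doc, mirroring the
// untrackOp(coll, docID) shape described in the discussion.
func (dbPayloads_) untrackOp(coll, docID string) Op {
	return Op{C: coll, Id: docID, Remove: true}
}

func main() {
	// The exported Payloads type would call through to these stateless
	// helpers rather than implementing persistence itself.
	op := dbPayloads.untrackOp("payloads", "m#0#p#sidecar")
	fmt.Println(op.C, op.Remove)
}
```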
<fwereade> jam, babbageclunk: any thoughts? ^^ or anyone else? :)
 * dimitern as usual has a lot of things needing attention.. sinks back to bugs
<perrito666> ok, this is completely new to me: alarm notification buffer full
<babbageclunk> fwereade: Sorry, was following but didn't have much to add. I definitely like db better than ns. I don't think the suffix is a problem, since you only have to look at it in the implementation.
<fwereade> babbageclunk, cheers :)
<babbageclunk> fwereade: Also, ns is used a lot in iOS objective-C development (from NextStep), so it's weird seeing it in Go code. :)
<babbageclunk> Oops NeXTSTEP apparently.
<fwereade> babbageclunk, a sense of happy familiarity with the NS prefix did sort of come into it, even if it's ultimately unhelpful
<dimitern> fwereade: m_lpszDataSource evokes that sort of familiarity for me, but with the opposite sign :)
<fwereade> dimitern, hahaha
<babbageclunk> dimitern: lol
<fwereade> dimitern, I wish they'd done hungarian notation right
<dimitern> oh MFC days...
<dimitern> :)
<perrito666> does anyone know if the clock interface recently changed?
<dimitern> perrito666: rogpeppe was talking about changing something there recently IIRC
<rogpeppe> dimitern, perrito666: i haven't actually changed anything recently AFAIR
<rogpeppe> dimitern, perrito666: though I have an unfinished PR that revamps the testing clock stuff and the interface
<dimitern> rogpeppe: ah ok - wasn't the discussing around alarms though? timers?
<rogpeppe> dimitern, perrito666: i just need to find time to write some tests for it
<rogpeppe> dimitern: i wanted NewTimer
<dimitern> rogpeppe: that's it
<rogpeppe> dimitern, perrito666: FWIW this is the PR in question: https://github.com/juju/testing/pull/108/files
<perrito666> rogpeppe: I am getting a strange panic in tests (pastebining now)
<perrito666> http://pastebin.ubuntu.com/22902510/
<perrito666> I'll now go see how in the universe I triggered that without even going near it
<rogpeppe> perrito666: you can fix that by reading from the clock.Alarms channel
<perrito666> rogpeppe: I am curious on how I originally broke that (more than the actual fix) :)
<rogpeppe> perrito666: tbh i think that that clock code shouldn't panic in that case
<perrito666> I mean, I know what I changed, I agree with you that this thing should definitely not panic
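[Editor's note: the "alarm notification buffer full" panic and rogpeppe's fix — read from the clock's Alarms channel — follow from a test clock that notifies a buffered channel on every timer creation and panics on overflow. This is a guess at the mechanism behind the panic, with a made-up minimal clock, not the real juju/testing API:]

```go
package main

import "fmt"

// testClock mimics a test clock that notifies a buffered Alarms channel
// each time a timer is created, and panics when nobody drains it.
type testClock struct {
	alarms chan struct{}
}

func newTestClock(buf int) *testClock {
	return &testClock{alarms: make(chan struct{}, buf)}
}

// Alarms is the channel a test should read to acknowledge new timers.
func (c *testClock) Alarms() <-chan struct{} { return c.alarms }

// After registers a timer; a non-blocking send panics once the buffer fills.
func (c *testClock) After() {
	select {
	case c.alarms <- struct{}{}:
	default:
		panic("alarm notification buffer full")
	}
}

func main() {
	c := newTestClock(2)
	c.After()
	c.After()
	<-c.Alarms() // the fix: drain Alarms so later timers don't overflow
	c.After()    // would panic without the read above
	fmt.Println("ok")
}
```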
<dimitern> perrito666: I suspect you're using juju/testing/clock whereas before you used juju/juju/testing/clock
<perrito666> dimitern: I suspect someone made that change and I made a change and we both clashed :p
<rogpeppe> dimitern, perrito666: please, please can we change everything to use juju/testing/clock and remove juju/juju/testing/clock ?
<perrito666> rogpeppe: sure you can, ill review the PR :p
<rogpeppe> dimitern, perrito666: there's no need for both to exist
<dimitern> rogpeppe: I don't mind that :)
<rogpeppe> perrito666: most of what i do these days seems to be juju-core code cleanups
<perrito666> rogpeppe: and we love you for it
<dimitern> rogpeppe: indeed <3 :)
<rogpeppe> perrito666: speaking of which... did you see this? https://bugs.launchpad.net/juju-core/+bug/1611379
<mup> Bug #1611379: api/backups: restore code needs improvement <juju-core:New> <https://launchpad.net/bugs/1611379>
<perrito666> "needs improvement" is perhaps the biggest understatement in the history of mankind
<dimitern> lol
<rogpeppe> perrito666: well, i thought i might be out of order to say "this code is fucking shit" :)
<rogpeppe> perrito666: in a bug report
<rogpeppe> perrito666: anyway, i rewrote it, but didn't have time to write the tests (the current code has no tests)
<rick_h_> macgreagoir: can you review https://github.com/juju/juju/pull/5747 please, I did the QA to make sure it worked when you get a sec
<macgreagoir> rick_h_: ack
<macgreagoir> dimitern: Ready for HO when you are.
<dimitern> macgreagoir: ok, let's use the standup HO I guess?
<voidspace> babbageclunk: ping
<babbageclunk> voidspace: pong, sorry!
<niedbalski> dimitern, https://pastebin.canonical.com/162703/ latest run
<niedbalski> dimitern, after this the boot gets stuck.
<mup> Bug #1611764 opened: upgraderSuite.SetUpTest An existing connection was forcibly closed <ci> <intermittent-failure> <regression> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1611764>
<mup> Bug #1611766 opened: upgradeSuite.TearDownTest sockets in a dirty state <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1611766>
<rick_h_> voidspace: time to sync?
<voidspace> rick_h_: yep
<balloons> rogpeppe, so I worked with Nate and perrito666 yesterday to get everything settled to do the changeover to the git version of gnuflag, but I've hit a snag. juju/romulus and juju/juju both reference each other. It's a circular dependency.
 * fwereade has a problem; is anyone up to date with the details of charm migrations?
<fwereade> babbageclunk, voidspace perhaps? ^^
<rogpeppe> balloons: that is annoying
<rogpeppe> balloons: but
<rogpeppe> balloons: there's a way around it
<balloons> I guess my thought is does juju/romulus really need to depend on juju/juju? And if so, we have to land without attempting to build for the first commit of one of them -- unless you know something cooler we can do
<rogpeppe> balloons: make a feature branch of either juju-core or romulus that uses only the github gnuflag
<rogpeppe> balloons: unfortunately it does
<rogpeppe> balloons: it's ok (temporarily) for a project to depend on a commit in a feature branch
<rogpeppe> balloons: once you've landed romulus depending on the juju feature branch, you can make juju depend on that just-landed romulus branch
<rogpeppe> balloons: then you can update romulus to depend on juju master again and delete the juju feature branch
<rogpeppe> balloons: or you can do it the other way, making a feature branch in romulus
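[Editor's note: the juju/juju <-> juju/romulus cycle that forces the feature-branch dance above can be spotted mechanically with a depth-first search over the repo-dependency graph. A toy sketch; the graph below is illustrative, not a real inventory of the repos' dependencies:]

```go
package main

import "fmt"

// hasCycle runs a DFS over a repo-dependency graph, reporting whether
// any cycle exists (a "visiting" node reached again is a back edge).
func hasCycle(deps map[string][]string) bool {
	const (
		unseen = iota
		visiting
		done
	)
	state := make(map[string]int)
	var visit func(string) bool
	visit = func(n string) bool {
		switch state[n] {
		case visiting:
			return true // back edge: cycle found
		case done:
			return false
		}
		state[n] = visiting
		for _, m := range deps[n] {
			if visit(m) {
				return true
			}
		}
		state[n] = done
		return false
	}
	for n := range deps {
		if visit(n) {
			return true
		}
	}
	return false
}

func main() {
	deps := map[string][]string{
		"juju/juju":    {"juju/romulus", "juju/utils"},
		"juju/romulus": {"juju/juju"},
		"juju/utils":   {},
	}
	fmt.Println(hasCycle(deps))
}
```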
<balloons> this sounds educational
<balloons> I think a feature branch in romulus would be preferred perhaps -- avoid invoking a CI run for instance
<rogpeppe> balloons: good point
<babbageclunk> fwereade: sorry, not me - voidspace has been doing a fair bit on migrations though.
<voidspace> fwereade: babbageclunk: not charm migrations, no
<fwereade> babbageclunk, voidspace: not to worry
<fwereade> babbageclunk, voidspace: (fun fact: the data model *never* actually checked that a given charm existed before creating an application that used it)
<voidspace> nice
<fwereade> babbageclunk, voidspace: (which is somewhat cramping my style re refcounting charms, there are a number of state tests that add apps without charms... and *all* migrated apps are added without charms, which are then filled in later)
<natefinch> sinzui, mgz: the CI scripts ask for an environment... how do I pass a cloud and credentials?
<natefinch> (running deploy_job.py)
<babbageclunk> fwereade: doh
<rick_h_> voidspace: natefinch dimitern standup ping
<dimitern> omw
<mup> Bug # changed: 1502130, 1523608, 1556961, 1586197
<sinzui> natefinch: the ci script only take an env. uses the clouds.yaml and credentials.yaml in cloud-city ($JUJU_HOME). the env is still needed to provide the config.
<natefinch> sinzui: so, the thing is, I need to be able to inspect the broken machine, and I thought we had established that I can't rdp into the machines brought up by the cloud-city creds
<mup> Bug #1611463 changed: GUI: test_upgrade_gui failed KeyError: 'bootstrap-config' <ci> <juju-gui> <regression> <unit-tests> <juju-core:Invalid by jcsackett> <https://launchpad.net/bugs/1611463>
<mup> Bug #1611789 opened: GUI: test_upgrade_gui failed KeyError: 'cloud' <ci> <juju-gui> <regression> <unit-tests> <juju-core:Triaged by jcsackett> <https://launchpad.net/bugs/1611789>
<sinzui> natefinch: I don't know how to do that with azure. Can we open rdp using the azure portal?
<natefinch> sinzui: yes.  If I can log into azure with the right credentials
<sinzui> natefinch: ah. with azure you login as yourself. I can add you to the subscription
<natefinch> sinzui: cool
<sinzui> natefinch: Register with azure with your canonical address. When Azure knows it, I can add you
<arosales> any work arounds folks know to kill a controller
 * arosales getting http://paste.ubuntu.com/22918062/
<arosales> I destroyed all the models separately
<natefinch> sinzui: oh, can you just use my current azure account? it's nate.finch@gmail.com
<sinzui> natefinch: no, this is a canonical account
<natefinch> sinzui: is that a can't or won't?  I really don't want to have to keep track of multiple microsoft accounts if I can avoid it
<sinzui> natefinch: I wont. and I understand the pain
<natefinch> sinzui: I guess I don't understand why it matters what email address I used to sign up with
<sinzui> natefinch: this isn't my account. it is canonicals and it gets audited
<natefinch> sinzui: ahh.  OK, I get it.
<natefinch> sinzui: ok, I have a nate.finch@canonical.com azure account now
<sinzui> natefinch: reload the page in azure, I think you can see all the resource groups and vms
<natefinch> sinzui: ahh yes, once I switch to your directory
<natefinch> perrito666: works for me
<mgz> balloons: you're not on the other network...
<mgz> balloons: see lp:~juju-qa/ubuntu/yakkety/juju/juju-1.25.6 for the base branch without bug fixes or the per-series versions
<perrito666> aghh 4 hours chasing an isolation problem
<balloons> mgz, ty
<mgz> balloons: there are some existing lintian warnings that I didn't fix
<balloons> mgz, sorry can you link me again.. I bounced my bouncer
<mgz> balloons: lp:~juju-qa/ubuntu/yakkety/juju/juju-1.25.6
<redir> morning
<balloons> natefinch, sadly looks like updating a dependency perhaps is causing a unit test failure? What do you make of http://juju-ci.vapour.ws:8080/job/github-merge-juju/8699/artifact/artifacts/trusty-out.log/*view*/?
<balloons> natefinch, I'm still trying to land that gnuflags update from yesterday :-)
<mgz> balloons: did fail the same way both times
<balloons> yes it did
<mgz> what's the diff between the gnuflag packages before and after update?
<balloons> nothing. it was merely a move from bzr to git and lp to github
<balloons> but I pulled forward some outdated depends in a few other places
<natefinch> balloons: looking
<natefinch> balloons: just a spurious failure... retry
<balloons> what's our opinions on these spurious failures -- Are we proactively fixing these / filing bugs / disabling?
<mgz> natefinch: I haven't seen the uniter tests fail like that before
<mgz> cannot set invalid status "rebooting" seems... not good
<natefinch> yeah I was just looking at that line
<natefinch> but that's obviously nothing to do with gnuflags
<natefinch> the reboot tests have always been a little flaky
<mgz> we've not had that test fail in CI since May
<mgz> and then it was a catacomb race
<mgz> not this error
<natefinch> ERROR juju.apiserver Unable to prime /mnt/tmp/check-2402074302189717898/174/logsink.log (proceeding anyway): chown /mnt/tmp/check-2402074302189717898/174/logsink.log: operation not permitted
<mgz> ...also interesting, though likely not fatal
<katco> natefinch: mgz: i've seen that before
<mgz> katco: which?
<katco> mgz: the error message natefinch just posted
<mgz> seems like we need to file at least two bugs...
<mgz> and I still think balloons' branch is causing this test to fail somehow
<katco> mgz: i assumed it was something to do with the test environment
<mgz> though I don't have a good guess remaining on why
<mgz> katco: yeah, I think it probably is, and likely is just fine in general because the tests probably don't care about logsink
<mgz> but the code is obviously expecting to be root when running but happening as a normal user when called from a unit test
<mgz> really it's an isolation escape...
<mgz> anyway, not causing this failure, just visible as the log is shown
<mgz> balloons: so, I don't think it will hurt to try landing again, but I bet it will fail again
<balloons> it got conflicted again
<balloons> i'll have to fix
<mgz> yeah, joy for mass renaming
<mgz> balloons: can you split out the bits?
<mgz> just do one branch with one dep change
<natefinch> balloons: gah, sorry... that's one of the reasons I was trying to rush with LGTMs last night, because I knew it had to go in quick or get conflicted to death
<mgz> get that landed
<mgz> then do the next
<mgz> then do the snapcraft bits?
<balloons> I don't mind the conflicts, it's fine
<mgz> well, it adds another point of failure
<balloons> and I had to make a feature branch this morning to depend on it
<mgz> if you misresolve after review
<balloons> impossible to conflict with snapcraft almost -- it's new
<ionutbalutoiu> Hello, guys! How are the upstream juju tools for 2.0 beta generated for Windows? I tried generating them myself and got this weird error when running the juju service on Nano Server: http://paste.ubuntu.com/22928687/. But on the other side, if I use the upstream ones, they work on Nano without a problem.
<natefinch> ionutbalutoiu: that's a weird error
<ionutbalutoiu> Wondering if when generating the tools you guys pass any extra parameters or something. I just pull the sources, do godeps and go install.
<natefinch> you're sure the ones we build on nano work?  because I see some Go bugs about not running on nano
<alexisb> redir, ping
<redir> pong
<alexisb> redir, can you please get this in your review q: http://reviews.vapour.ws/r/5403/
<redir> yup
<alexisb> thanks
<ionutbalutoiu> natefinch: Even today I bootstrapped Nano with beta14 and upstream tools and Nano worked.
<ionutbalutoiu> natefinch: bootstrapped an env and deployed Nano **
<redir> do I need to set up maas to test the proposed change? uh
 * redir has no maas
<natefinch> ionutbalutoiu: oh, looks like it's maybe not a bug in 1.6.0
<natefinch> sinzui: what version of Go do we build our windows tools with?
<ionutbalutoiu> natefinch: 1.6.3
<natefinch> ionutbalutoiu: there's a bug on this issue for the Go language: https://github.com/golang/go/issues/15286  it seems to indicate that building with 1.6.0 should work.  I'd recommend trying that.
<redir> I am seeing http://paste.ubuntu.com/22931893/ this locally, but don't think it should be related to my current branch. Does anyone know what it is?
<sinzui> natefinch: 1.6-0 from Ubuntu trusty. it is cross-compiled
<ionutbalutoiu> natefinch: Thanks! I'll give it a shot right now and see if it works.
<natefinch> sinzui: is there a reason we're not building the tools with a newer version of Go?  There are some security fixes in 1.6.1 and bug fixes in general in .2 and .3
<natefinch> (aside from the fact that it'll evidently break windows nano)
<sinzui> natefinch: we use what Ubuntu provides. It is possible to use a newer version if the work is scheduled.
<ionutbalutoiu> natefinch: Go 1.6.0 worked. :)
<natefinch> sinzui: I guess I was just thinking that we control what's in streams, so we can build with whatever version we like, as long as the code also builds with what's in Ubuntu
<natefinch> ionutbalutoiu: awesome
<sinzui> natefinch: well I am not too keen on arbitrary Gos. That is what got us into a mess with utopic, vivid, and wily. I want one go version to do everything. We do want to control the go version. I don't want yakkety's Go 1.7 without using it everywhere.
<sinzui> natefinch: balloons and I want to make CI build all agents separately from Ubuntu. Our packages will only contain the client. The client will be built with the Go Ubuntu chooses, but the agents use the best Go we can use everywhere
<natefinch> sinzui: that sounds awesome
<sinzui> natefinch: yeah, I wish the plan was a priority
<natefinch> ahh ffs... OSError: [Errno 17] File exists: './logs/controller'
<natefinch> there goes 10 minutes of my day :/
<redir> natefinch: I think that twice every time I run the test suite
<natefinch> yeah
<natefinch> sinzui:  I keep seeing this: Provisioning failed. Shrinking a disk from 136367309312 bytes to 34359738880 bytes is not supported.. ResizeDiskError
<sinzui> natefinch: on the bootstrap machine?
<natefinch> sinzui: this is using origin/master
<natefinch> sinzui: no, looks like only on the non-controller model
<natefinch> so, windows
<sinzui> natefinch: ah, that is interesting. We cannot get logs from the windows machines so we cannot see what is going wrong
<natefinch> sinzui: looks like we're not even successfully starting the vm
<sinzui> :/
<natefinch> sinzui: I'm going to go write some code and poke axw when he comes on.  Maybe he'll have an idea what's going on.  Google hasn't been much help
<sinzui> natefinch: ack
<mup> Bug #1606278 changed: juju (2.0) deploy <charm-name>/<revision#> fails <juju-core:Triaged> <https://launchpad.net/bugs/1606278>
<mbruzek> I am having some problems with the "charm publish" command, I got this : ERROR cannot publish charm or bundle: cannot publish charm or bundle: cannot update base entity for "cs:~containers/kubernetes-6": Field name duplication not allowed with modifiers
<balloons> sinzui, do you think there is a risk in removing jujud from 1.25 packages?
<mbruzek> oh sorry, wrong channel.
<mbruzek> forgive me
<sinzui> balloons: from our packages or ubuntus?
<balloons> sinzui, ubuntu's really, but I suppose our stable ppa's too
<balloons> sinzui, my thought is it's not a kosher change, and we should seek to do it in 2.0 only
<sinzui> balloons: you cannot remove jujud from our packages until you change CI to make all the agents and change the release scripts to retrieve them instead of getting them from the debs
<balloons> sinzui, right, apart from that
<balloons> I feel like 1.25 workflows might use it (who knows what some folks are doing), while since 2.0 isn't released yet, and --upload-tools is dying, we can feel better about removing it
<natefinch> manual provider needs it
<natefinch> doesn't it?
<balloons> well, our adt tests will break without it for instance, :-)
 * natefinch forgets
<sinzui> balloons: by removing jujud from ubuntu's packages, we ensure no bad jujuds get into production. This prevents users from using --upload-tools. Users need to use sync-tools and bootstrap --metadata-source to use a closed network
<sinzui> natefinch: manual does not need it because manual works across arch
<natefinch> sinzui: good point
<balloons> sinzui, I agree but I guess I'm saying removing a binary isn't really a bugfix SRU, and I think it's saner just to leave it packaged for 1.x series
<sinzui> natefinch: I wish add-machine worked with windows. We could start a win2012r2, then add-machine to prove the agent works if the machine comes up
<natefinch> sinzui: is there a bug filed for that?  I didn't realize add-machine didn't work for windows.
<sinzui> natefinch: i thought there was. I brought it up many times in the past. windows doesn't come with ssh and juju doesn't support winrm.
<natefinch> sinzui: I don't understand why that matters.  We obviously have the ability to bring up a windows machine using juju during deploy.  There's no reason we can't reuse that code and just not deploy anything.
<sinzui> natefinch: with the azure provider, it should not matter. add-machine will ask azure to bring up a machine.
<rick_h_> perrito666: heads up that macgreagoir looked at the pr, I qa'd it and he's got a question before it can land in case we want to get it into the beta tomorrow https://github.com/juju/juju/pull/5747
<perrito666> rick_h_: yup its on my today queue
<rick_h_> perrito666: coolio
<perrito666> rick_h_: no need to throw bad 90s music at me ill do it
<rick_h_> perrito666: :P
 * perrito666 sees there is an upgrade for the fortran library... what in the universe is using fortran in my computer
<natefinch> lol
<mup> Bug #1611789 changed: GUI: test_upgrade_gui failed KeyError: 'cloud' <ci> <juju-gui> <regression> <unit-tests> <juju-core:Invalid by jcsackett> <https://launchpad.net/bugs/1611789>
<balloons> natefinch, 3 runs, 3 fails all on that uniter test
<balloons> I have to believe it's something in the diff
<mgz> balloons: can you land just the tomb change or just the gnuflag change?
<balloons> it may be possible to land just the tomb change. But I'm in lockstep now. it's already landed in the underlying repos; it can't be split now
<balloons> I guess I was hoping for understanding why the test is failing -- gnuflag and tomb did NOT change at all
<balloons> so mgz, the only possible source of difference is in juju/utils: https://github.com/juju/utils/pull/230/files
<balloons> But to be fair, that's landed, so in theory that should affect everyone's builds right? So there is no reason I can think of why it's failing
<balloons> ohh right.. they aren't using it, so indeed, I can say that's the issue :-)
<mgz> balloons: so, that utils rev hasn't changed anything, but you mean there are other intervening changes?
<balloons> mgz, that utils rev pulls newer versions of other dependencies. newer commits
<mgz> balloons: that's actually unimportant
<balloons> that's the only place code could have actually changed though.. nothing else has changed
<mgz> as when juju itself builds, it goes off its dependencies.tsv which overrides the one in any deps
<balloons> true.. I just am at a loss as to what the issue could be
<mgz> balloons: run our build tarball script for before and after
<mgz> and do a diff on the whole tree, and send me that
<mgz> it's possible there's actually no code change and this is compiler oddness, but I bet we do have a functional change hidden in there
<anastasiamac> alexisb: rick_h_: release call :)
<alexisb> anastasiamac, be there soon
<mup> Bug #1418139 changed: Can't reprovision a machine with manual provider <bootstrap> <destroy-environment> <manual-provider> <manual-story> <juju-core:Fix Released by natefinch> <juju-core 1.25:Fix Released by natefinch> <https://launchpad.net/bugs/1418139>
<balloons> mgz, pulling the source tarball shows only the expected changes
<balloons> I'm giving up... I'll split it tomorrow as you suggest and not land it together
<balloons> perhaps we can piecemeal the problem out
<mgz> balloons: I see changes in utils exec code and changes in arg parsing
<mgz> the uniter tests are unfortunately a twisty mess so it's unclear why it's now unhappy
<mgz> probably just want to bug william (or maybe horacio?) to have a look at the test output vs diff
<mup> Bug #1520571 changed: Juju destroy-environment stacktraces on local provider. <landscape> <juju-core:Invalid> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1520571>
<mgz> balloons: this is perfectly reproducible locally
<mgz> balloons: it fails because of the utils exec change
<mgz> juju/utils pr #203
<mgz> presumably because the test framework is somewhat bogus and doesn't expect the error right away, which it should
<mup> Bug #1610037 changed: Juju2 beta14, missing network stanzas. <sts-needs-review> <juju-core:Invalid> <https://launchpad.net/bugs/1610037>
<mup> Bug #1611981 opened: LXD guests not configured due to the lack of DHCP on the interface selected as eth0 <sts> <juju-core:New> <https://launchpad.net/bugs/1611981>
<menn0> wallyworld, alexisb: no hangout?
<menn0> standup even
<wallyworld> menn0: yeah, we are here
<wallyworld> https://hangouts.google.com/hangouts/_/canonical.com/a-team-standup?authuser=1
<perrito666> menn0: you are in the old one
<mup> Bug #1611990 opened: maas bootstrap fails if maasrc is missing <bootstrap> <maas-provider> <juju-core:In Progress by wallyworld> <https://launchpad.net/bugs/1611990>
<alexisb> axw, I need a few minutes of your time
<alexisb> if you don't mind
<axw> alexisb: sure, gtg help with kids soon though
<alexisb> it should be quick
<alexisb> https://hangouts.google.com/hangouts/_/canonical.com/alexis-bruemmer
#juju-dev 2016-08-11
<wallyworld> menn0: you have a sec? https://hangouts.google.com/hangouts/_/canonical.com/ian-horatio
<menn0> axw or redir or wallyworld: http://reviews.vapour.ws/r/5408/ pls
<wallyworld> menn0: otp, will look soon
<redir> menn0: looking
<menn0> redir: thanks
<axw> redir: are you a graduated reviewer yet?
<redir> axw: nope
<redir> IANAGR
<anastasiamac> axw: do we have the idea of graduating reviewers, now that we have a checklist?
<axw> anastasiamac: dunno
<anastasiamac> axw: for the graduating idea to work, each un-graduated reviewer should have a graduated mentor. I don't think that was followed thru...
<perrito666> anastasiamac: we used to have a checklist in a spreadsheet, I presume it's no longer there
<anastasiamac> perrito666: m talking about code review checklist on wiki
<anastasiamac> perrito666: gimme a sec to pull up the link..
<perrito666> ahh I mixed up what you were saying
<anastasiamac> perrito666: https://github.com/juju/juju/wiki/Code-Review-Checklists
<perrito666> anastasiamac: yes, that one is always open
<axw> anastasiamac: do you have time for a review? http://reviews.vapour.ws/r/5407/
<axw> fairly straight forward
<anastasiamac> axw: for u- always :D
<axw> anastasiamac redir: thanks both for the reviews
<anastasiamac> axw: it's an awesome addition! thanks for doing it :)
<redir> axw: np
<redir> menn0: I haven't been able to make it through the QA steps as expected. I have to go now
<redir> but I'll look when I get home and fill in details
<redir> no QUIES... in logs
<redir> bbl
<anastasiamac> axw: wallyworld: scream-related http://reviews.vapour.ws/r/5411/ :)
<wallyworld> joy
<anastasiamac> wallyworld: this one is the easiest by far :D
<axw> anastasiamac: did you run the QA steps on one of those at least?
<axw> "desired provider" is a bit vague :)
<anastasiamac> yes. i'll change to "from the list above" or could just list them in [,,,]
<axw> anastasiamac: as long as a live test has been done, I'm happy
<anastasiamac> axw: \o/ i'll double- and triple-check tonight :) don't need it in beta15 anyway \o/ but it will make the diff with anything else I propose so much more manageable :)
<wallyworld> so long as supportedArchitectures() only hits mongo
<wallyworld> before we stored metadata in state, looking up image metadata was expensive
<wallyworld> hence the caching
<anastasiamac> wallyworld: ? unrelated... this change only removes cached supported arch collection from provider environ
<wallyworld> exactly
<wallyworld> and hence the caching is now gone
<wallyworld> so if supportedArchitectures() only hits mongo, that's fine
<anastasiamac> wallyworld: the problem was that we'd never pick up new arch even if images are updated
<anastasiamac> wallyworld: caching was undesired from the start i believe.. it was an oversight
<wallyworld> nope
<wallyworld> was deliberate
<wallyworld> the expectation was that the providers would have all images defined
<anastasiamac> wallyworld: if it was deliberate, it was shooting juju in the leg/foot..
<wallyworld> and generally they do
<wallyworld> aws, google, all public clouds have images defined
<wallyworld> and the expectation with openstack clouds also is that they are set up with all images in glance
<axw> anastasiamac wallyworld: once we stop query simplestreams, this is all moot. the caching will then be done by the image metadata update worker only
<wallyworld> exactly, that's what i was asking
<axw> stop querying simplestreams from the providers*
<wallyworld> s/asking/confirming
<wallyworld> i just wanted to make sure that the query to mongo was confirmed before we removed the caching
<axw> wallyworld anastasiamac: we're still doing it. I assumed that the "precursor to..." statement in the description referred to that being done as a follow-up
<anastasiamac> axw: exactly, so this work is a precursor to stopping the provider looking in simplestreams.. m expecting the whole lot will end up in one beta without biting anyone... but wanted to break it into separate PRs for ease of review :)
<wallyworld> if we can do it all in one beta cycle that would be desirable
<wallyworld> yay, ty
<anastasiamac> :D
<wallyworld> just wanted to be sure before not having the cache escapes into a release
<wallyworld> otherwise performance will regress badly
<wallyworld> s/will/probably
<axw> anastasiamac: maybe you could line up the next one and get a LGTM and land at the same time then?
<axw> anastasiamac: no value in landing this one by itself I think
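The caching pitfall being discussed (L6912-L6928) — a memoised supported-architectures lookup that never picks up newly published images — can be sketched roughly like this. All names here are illustrative, not Juju's actual environ code:

```go
package main

import "fmt"

// environ sketches a provider environ that memoises the result of an
// expensive lookup. Once populated, the cache never refreshes, which is
// the staleness problem anastasiamac describes.
type environ struct {
	cached []string        // never invalidated once set
	lookup func() []string // e.g. a cheap query against state/mongo
}

// SupportedArchitectures returns the cached value if present, otherwise
// performs the lookup once and caches it.
func (e *environ) SupportedArchitectures() []string {
	if e.cached == nil {
		e.cached = e.lookup()
	}
	return e.cached
}

func main() {
	archs := []string{"amd64"}
	e := &environ{lookup: func() []string { return archs }}
	fmt.Println(e.SupportedArchitectures()) // [amd64]

	archs = append(archs, "arm64")          // new images published
	fmt.Println(e.SupportedArchitectures()) // still [amd64]: the cache is stale
}
```

Dropping the `cached` field is safe only if `lookup` is cheap, which is wallyworld's point about confirming the query goes to mongo rather than simplestreams.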
<menn0> redir: really? did you use the debug-log command in the QA steps?
<mup> Bug #1612026 opened: depoloyer/jujuclient broken: show-model flag provided but not defined: -m <ci> <deployer> <regression> <juju-core:Incomplete> <juju-deployer:Confirmed> <https://launchpad.net/bugs/1612026>
<anastasiamac> axw: agreed
<mup> Bug #1612026 changed: depoloyer/jujuclient broken: show-model flag provided but not defined: -m <ci> <deployer> <regression> <juju-core:Incomplete> <juju-deployer:Confirmed> <https://launchpad.net/bugs/1612026>
<mup> Bug #1612026 opened: depoloyer/jujuclient broken: show-model flag provided but not defined: -m <ci> <deployer> <regression> <juju-ci-tools:Triaged> <juju-core:Incomplete> <juju-deployer:Confirmed> <https://launchpad.net/bugs/1612026>
<menn0> redir: I'm going to merge as I'm pretty sure it's right and I have two positive reviews
<menn0> redir: let me know if you really can't repro the QA steps
<redir> menn0: OK, I am back and running through QA one more time cut and paste style
<menn0> redir: ok great.
<redir> just waiting for apt-get up*
<redir> seems slow right now
<menn0> redir: make sure you're waiting long enough for the migration to finish (check the top of "juju status -m A:foo --format yaml")
<menn0> redir: apt-get update always seems a bit slow
<redir> menn0: yeah was doing  watch juju status
<redir> but re running to make sure I am on correct branch
<menn0> redir: you should set up squid-deb-proxy if you haven't already
<redir> that I did go install ... and not juju install ....(which is in my history so I've definitely done that at some point)
<redir> I should set that up, it sounds like it would be awesome.
 * redir googles
<menn0> redir: also for this test using --config enable-os-refresh-upgrade=false and --config enable-os-upgrade=false is totally fine
<menn0> redir: I usually use this config when testing stuff locally
<menn0> default-series: xenial
<menn0> logging-config: "<root>=DEBUG"
<menn0> apt-http-proxy: http://10.0.8.1:8888
<menn0> enable-os-refresh-update: true
<menn0> enable-os-upgrade: false
<redir> I thought i had that set as defaults in my yaml config
<menn0> where the service on 8888 is squid-deb-proxy
<redir> is that on the wiki?
<redir> :)
<menn0> probably not
 * menn0 adds that to his todo list
<redir> meh, I'll bug you for details tomorrow afternoon and put it up while I do it.
<redir> menn0: ^^
<redir> my afternoon
<menn0> redir: ok
<mup> Bug #1587993 changed: lxc creation fails, invalid include <canonical-bootstack> <intermittent> <juju-core:Invalid> <juju-core 1.25:Incomplete> <https://launchpad.net/bugs/1587993>
<menn0> redir: how's the bootstrap going?
<redir> menn0: on the second one
<menn0> redir: another thing you can do is bootstrap multiple controllers in parallel
<menn0> redir: that makes a big difference
<redir> with above config... so hopefully much faster
<redir> I'm in cut an paste mode:)
<menn0> redir: I didn't put it in the instructions because it complicates things
<redir> no worries
<redir> I just figure I fat fingered something earlier
<redir> ok
<redir> I see no quies
<redir> menn0: ^
<menn0> redir: hmmm ok
<redir> I can try migrating it back to A
<redir> :)
<natefinch> axw: you around?
<menn0> redir: did the migration complete? i.e. juju status -m A:foo now fails, and juju status -m B:foo now works?
<redir> yeah it disappeared from status
<mup> Bug #1469077 changed: Leadership claims, document larger than capped size <landscape> <leadership> <juju-core:Invalid> <juju-core 1.24:Won't Fix by fwereade> <https://launchpad.net/bugs/1469077>
<mup> Bug #1612043 opened: improvement to juju register text <juju-core:New> <https://launchpad.net/bugs/1612043>
<veebers> axw, wallyworld: I have a query re: model config. If I have a clouds.yaml file as shown in this paste and run the bootstrap shown in the paste, I should expect to see ftp-proxy set to the controller value when using 'juju model-config' right? (Or have I misconfigured something): http://pastebin.ubuntu.com/22991841/
<menn0> redir: what does "juju debug-log -m A:controller --replay -T | grep migrationmaster" show?
<wallyworld> veebers: no, you are overriding it
<wallyworld> --config ftp-proxy=ftp://fake.com
<redir> menn0: nada
<axw> natefinch: I am around, what's up?
<menn0> redir: nothing at all?
<redir> correct
<wallyworld> veebers: if you juju unset ftp-proxy, it wil revert to the controller value
<natefinch> axw: saw your comments on that windows bug after I pinged you.  Was going to ask about the ResizeDiskError
<menn0> redir: ok, what about: juju list-controllers?
<redir> menn0: http://paste.ubuntu.com/22992085/
<axw> natefinch: I have a patch ready. just need to change the azure provider to set a minimum for the root-disk constraint to 127G
<veebers> wallyworld: ah right that's the one. I had in my mind that model overwrites default and controller overwrites model
<axw> natefinch: it seemed that there were other errors after that though
<wallyworld> ah, i see
<redir> menn0: wanna live look for a minute?
<menn0> redir: that's probably easier
<natefinch> axw: I didn't look much further than that... I poked around, but I don't know the azure interface very well
<natefinch> axw: were there other logs that indicated an error?
<veebers> wallyworld: So it's actually, model overwrites controller and controller overwrites default?
<wallyworld> yes, and soon model > region > controller > default
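The precedence chain wallyworld describes (model > region > controller > default) can be sketched as a first-match walk over ordered layers. The `layer`/`resolve` names are illustrative only, not Juju's actual implementation:

```go
package main

import "fmt"

// layer is one level in the config precedence chain under discussion:
// model > region > controller > default.
type layer struct {
	name  string
	attrs map[string]string
}

// resolve returns the value for key from the highest-precedence layer
// that defines it, plus that layer's name -- mirroring how model-config
// reports where a value came from.
func resolve(key string, layers []layer) (value, source string) {
	for _, l := range layers {
		if v, ok := l.attrs[key]; ok {
			return v, l.name
		}
	}
	return "", "unset"
}

func main() {
	layers := []layer{
		{"model", map[string]string{"ftp-proxy": "ftp://fake.com"}},
		{"region", map[string]string{}},
		{"controller", map[string]string{"ftp-proxy": "ftp://proxy.internal"}},
		{"default", map[string]string{}},
	}
	v, src := resolve("ftp-proxy", layers)
	fmt.Println(v, src) // ftp://fake.com model
}
```

This also matches wallyworld's earlier point: `--config ftp-proxy=...` at bootstrap sets the model layer, so it shadows the controller value until the model-level setting is unset.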
<axw> natefinch: in the portal, if you go to the VM and click on VM Extensions, you can see the output of the script we run on machine startup that effectively does the same thing as cloud-init
<redir> menn0: pm'ed link
<axw> natefinch: there were some errors in there to do with contacting the controller for tools
<natefinch> axw: ok... yeah, that's the bug I was trying to look into :)
<axw> natefinch: so if you want to repro and get past the root-disk resizing thing, pass "--constraints root-disk=127G"
<natefinch> axw: I'm surprised it got that far... seemed like the VM failed to even start, but maybe I was misinterpreting the UI
<natefinch> axw: nice
<axw> natefinch: yes it did fail to start, because of the root disk thing
<axw> natefinch: I mean after you get past that issue, you'll see the other one :)
<natefinch> oh I see. Ok :)
<veebers> wallyworld: cool thanks. Oh and you said the other day that if model and controller values are the same it should say controller in the output?
<wallyworld> yes
<wallyworld> and that is calculated dynamically
<mup> Bug #1612043 changed: improvement to juju register text <juju-core:New> <https://launchpad.net/bugs/1612043>
<mup> Bug #1469077 opened: Leadership claims, document larger than capped size <landscape> <leadership> <juju-core:Invalid> <juju-core 1.24:Won't Fix by fwereade> <https://launchpad.net/bugs/1469077>
<veebers> ah cool, I'll ensure that's tested too.
<mup> Bug #1469077 changed: Leadership claims, document larger than capped size <landscape> <leadership> <juju-core:Invalid> <juju-core 1.24:Won't Fix by fwereade> <https://launchpad.net/bugs/1469077>
<mup> Bug #1612043 opened: improvement to juju register text <juju-core:New> <https://launchpad.net/bugs/1612043>
<mup> Bug #1612046 opened: inconsistent error text when attempting unauthorised command <juju-core:New> <https://launchpad.net/bugs/1612046>
<axw> wallyworld: I don't think we should be testing the provisioningState != Succeeded in the provider. That's handled by the Azure SDK.
<wallyworld> axw: yeah, that is true and i agree. i just wanted to be 100% sure by asking
<axw> wallyworld: okey dokey. just bringing it up here in case you wanted to debate :)
<wallyworld> that we didn't need to have a unit test for a failure case where provider didn't succeed
<wallyworld> you can drop, feel free
<veebers> wallyworld, axw: When you have a moment could you take a quick shufti at my revised testplan for the config tree and confirm if it's sensible please? https://pastebin.canonical.com/162911/
<axw> anastasiamac natefinch menn0: http://reviews.vapour.ws/r/5412/ is just over 500 lines, do any of you have a little time to spare for a second review?
<axw> a what now :)
<wallyworld> veebers: sure, give me 5
<axw> veebers: sounds fine to me
<veebers> cool thanks.
<veebers> oh axw, can I set a global setting in clouds.yaml, or does it need to be under a clouds -> cloudname -> config stanza?
<axw> veebers: the latter
<veebers> axw cool thanks
<natefinch> axw: I'm a bit knackered, otherwise I'd take a shufti at your review
<axw> natefinch: heh ;p
<axw> natefinch: no worries
<natefinch> axw: btw, I'm thinking of putting the json tags back on the schema object, if for informational purposes only
<natefinch> axw: I would imagine that someone wanting to write a schema would go look at this package... even though jsonschema is standard, it's nice to give the standard names here where they'd already be looking
<axw> natefinch: if they're not being used, seems more like misinformation to me. if I come across a type with json tags, I expect that they're used for json un/marshaling
 * axw shrugs
<axw> not terribly fussed
<natefinch> axw: well... they do indicate the correct json tags for marshaling/unmarshaling. The fact that we ignore them is an implementation detail ;)
<axw> natefinch: I guess it doesn't matter since they can't reasonably be changed anyway
<natefinch> yep
<mup> Bug #1612048 opened: juju needs a "list-shares" command for a controller <juju-core:New> <https://launchpad.net/bugs/1612048>
<anastasiamac> axw: sorry was afk - school pickup. did u get a 2nd review on 5412?
<axw> anastasiamac: no, if you have time I would greatly appreciate it
<anastasiamac> axw: looking now \o/
<natefinch> wallyworld: what does logout actually do?
<wallyworld> natefinch: logs out of the specified controller, meaning yuo need to login again to do stuff
<natefinch> wallyworld: so like uncaches whatever credentials?
<wallyworld> it is reflected in list-controllers
<wallyworld> yeah
<natefinch> ok
<natefinch> axw: I ended up tweaking the schema struct a bit more to make it more friendly to struct literals, but it's working for marshal/unmarshal now: https://godoc.org/github.com/natefinch/cloudschema
<natefinch> axw: this let me drop all visible references to go-jsschema, so consumers don't even need to know about it (presuming we add a validation function to this package)
<axw> natefinch: yeah, I was thinking about that. sounds good
<natefinch> cool.  Bedtime for me.
<axw> natefinch: good night
<anastasiamac> axw: LGTM and saw that u have already confirmed that u've live-tested :)
<axw> anastasiamac: thank you
<anastasiamac> axw: \o/
<axw> anastasiamac: next time in the code I'll make that number a var. it's a bit magic either way - I'll see if I can think of a better way to do it
<anastasiamac> axw: nps - just saw that it was shuffled in couple of places from previous codebase :) was just a note \o/
<veebers> axw: Another clouds.yaml file, if I have a bootstrap arg like in this paste, I should need option B (in the paste) in clouds.yaml right? http://pastebin.ubuntu.com/23001462/
<axw> veebers: correct
<veebers> axw: awesome, thanks :-)
<axw> veebers: assuming you meant shouldn't :)
<veebers> axw: oh hah, um. Better asked, which of A or B should work in this example?
<axw> veebers: you don't need either of them
<axw> veebers: (unless you want to use config of course; I think you should just use B)
<axw> veebers: --of course. I don't think we're yet merging config with existing cloud definitions. so if you want to define cloud config, you need an entry in clouds.yaml
<axw> veebers: gtg pick up my daughter, be back in a little while
<veebers> axw: To clarify I'm wanting to set a controller config setting (which I do via clouds.yaml as my understanding), using --config for bootstrap will set a model config option right?
<axw> veebers: correct. so use B from the example, and add a "cloud:" section as a sibling of "regions:"
<veebers> axw: should 'cloud:' contain anything/
<axw> veebers: gah, I meant "config:" not "cloud:"
<axw> veebers: i.e. with the controller-level attributes in it
<veebers> axw: ah right, that makes sense :-) Thanks
<mup> Bug #1612099 opened: mark currently logged in user in "list-users" <juju-core:Triaged> <https://launchpad.net/bugs/1612099>
<mup> Bug #1612105 opened: juju help usage output <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1612105>
<mup> Bug #1612110 opened: update README.md to include running godeps <adoption> <juju-core:Triaged> <https://launchpad.net/bugs/1612110>
<mup> Bug #1612112 opened: upgrade-charm should warn when the same version is detected <juju-core:Triaged> <https://launchpad.net/bugs/1612112>
<babbageclunk> fwereade, dimitern: I've just realised I also need to include subnet info in the NetworkConfig I return from the API (if I want it to be complete), which means my mocks and shims need to cover State -> Machine -> LinkLayerDevice -> Address -> Subnet.
<babbageclunk> fwereade, dimitern: I feel like this is increasingly impractical - is there a better way of doing it?
<dimitern> babbageclunk: I don't think that's necessary
<babbageclunk> dimitern: Good! :) What should I do instead?
<dimitern> babbageclunk: having the IP and MAC addresses, along with the provider IDs of the interface and the address will be sufficient I think
<dimitern> babbageclunk: in fact, I'm OK with changing the definition of ReleaseContainerAddresses not to take []params.NetworkConfig, but a slice of a new, simpler type
<babbageclunk> dimitern: Yeah, I was reaching towards something like that too.
<dimitern> babbageclunk: e.g. params.InterfaceAssignments { InterfaceName, InterfaceProviderID, MACAddress, IPAddress, IPAddressProviderID }
<dimitern> (not necessarily with these names
<dimitern> )
<babbageclunk> dimitern: Then I think why include the ip address details? We don't need them at the moment.
 * dimitern *thinks*
<fwereade> babbageclunk, dimitern: I would say that the db-backed-object style of state is the root problem here
<dimitern> babbageclunk: we do, if we're going to do IPAM for MAAS as well
<dimitern> babbageclunk: and it shouldn't be a huge deal to add it
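A rough Go sketch of the simpler wire type dimitern is proposing in place of `[]params.NetworkConfig`. The field names follow his message and are tentative, not a landed API:

```go
package main

import "fmt"

// InterfaceAssignment carries just the identifiers needed to release a
// container's addresses: the device, its provider ID, and the address
// with its provider ID -- rather than a full NetworkConfig. Name and
// fields are a sketch of dimitern's suggestion, not final.
type InterfaceAssignment struct {
	InterfaceName       string
	InterfaceProviderID string
	MACAddress          string
	IPAddress           string
	IPAddressProviderID string
}

func main() {
	ia := InterfaceAssignment{
		InterfaceName: "eth0",
		MACAddress:    "52:54:00:12:34:56",
		IPAddress:     "10.0.0.7",
	}
	fmt.Printf("release %s (%s) addr %s\n", ia.InterfaceName, ia.MACAddress, ia.IPAddress)
}
```

Keeping the IP address fields even though they aren't needed immediately is what leaves the door open for MAAS IPAM later, per dimitern's follow-up.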
<babbageclunk> fwereade: Yeah, I think so too, but I'm worried about trying to change that as part of that work.
<babbageclunk> fwereade: s/that/this
<fwereade> babbageclunk, dimitern: I don't think we need to *change* it so much as just route around it
<dimitern> fwereade: I tried to make the model inside network/ and extract the bits to store in the db and pass over the api from there
 * dimitern needs to step out to the ironmonger quickly..
<babbageclunk> fwereade: So I could make other methods in state that return LinkLayerDeviceArgs instead of LinkLayerDevices?
<fwereade> babbageclunk, dimitern: `func (*State) ContainerAddressStuff() *ContainerAddressStuff` with the methods you need, calling down to implementations shared with machine et al, means you need to write and test that... but it also means you can make the apiserver tests much simpler
<fwereade> babbageclunk, I'm -1 on exposing params stuff from state vs core types
<fwereade> babbageclunk, I think it's the apiserver's job to convert dependency-free core types into wire representations
<fwereade> babbageclunk, similarly to how it's state's job to (internally!) convert core types back and forth into db representations
<fwereade> babbageclunk, making any sense?
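The layering fwereade describes — state hands out dependency-free core types, and the apiserver alone converts them to wire representations — could be sketched like this. All type names here are hypothetical:

```go
package main

import "fmt"

// LinkLayerDeviceInfo stands in for a dependency-free core type: plain
// old data, no *State handle, no params import. Tests can construct it
// directly, with no shims or mocks over state internals.
type LinkLayerDeviceInfo struct {
	Name       string
	MACAddress string
}

// paramsDevice stands in for the wire type owned by the apiserver layer.
type paramsDevice struct {
	Name string `json:"name"`
	MAC  string `json:"mac-address"`
}

// toParams is the apiserver's job: core type in, wire representation out.
// (State's symmetric job is converting core types to db documents,
// internally.)
func toParams(d LinkLayerDeviceInfo) paramsDevice {
	return paramsDevice{Name: d.Name, MAC: d.MACAddress}
}

func main() {
	d := LinkLayerDeviceInfo{Name: "eth0", MACAddress: "52:54:00:aa:bb:cc"}
	fmt.Println(toParams(d).MAC)
}
```

This is the "route around it" idea: the db-backed objects stay as they are, hidden behind a narrow interface that returns POD.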
<babbageclunk> fwereade: So what about moving the {LinkLayerDevice,Address}Args types from state into core and making methods to return them?
<babbageclunk> fwereade: And SubnetInfo
<babbageclunk> fwereade: Sorry, got the name of the address one wrong - it's LinkLayerDeviceAddress.
<babbageclunk> fwereade: Then I think that matches what you're saying - they're just plain old data that the tests can construct, no nasty *States or private ctors.
<babbageclunk> fwereade: I think this is what katco was suggesting in a review a while ago but I didn't really understand because I hadn't experienced the full horror of all of these nested shims and mocks.
 * babbageclunk thinks maybe fwereade's popped out to the apothecary?
 * babbageclunk should really visit the barber-surgeon sometime.
<babbageclunk> fwereade: Oh, hang on - I'd forgotten that those types already exist in core/description. Can I use those?
<babbageclunk> fwereade: Annoyingly they also have private constructors, so what I'd end up returning from the methods in state would be core.description.{LinkLayerDevice,IPAddress,Subnet}Args.
<fwereade> babbageclunk, sorry -- but, yes, I think what you're saying makes sense
<fwereade> babbageclunk, I'm not 100% sure whether they're description types, or whether description should be pulling from core/network or something, but it seems like a good start
<fwereade> babbageclunk, the more things we can represent and communicate as POD the better, I think
<babbageclunk> fwereade: Ok - I'm going to start adding the methods to state to return core.description.*Args first - if we want to move those types elsewhere we can do that. Should I go as far as State.Machine(id) -> State.MachineArgs(id)
<babbageclunk> ?
<babbageclunk> fwereade: There must already be something that converts from eg a state.LinkLayerDevice to a core.description.LinkLayerDevice, right?
<fwereade> babbageclunk, apart from suspicion re "Args" being a suboptimal name vs "Spec" or "Info" or something, I think that's the right structure
<fwereade> babbageclunk, I'm sure there is, but I suspect it's hidden away in migration_export
<babbageclunk> fwereade: Yeah, the name was putting me off too.
<babbageclunk> fwereade: Gah! BDSM languages with their access rules and their enforced interfaces! :)
<fwereade> babbageclunk, oddly enough I was just fulminating privately about the CharmInfo type we use to add charms but don't ever return from state
<fwereade> hahaha
<dimitern> babbageclunk: I was trying to follow along, but getting distracted - please ping me when you have those changes to have a look though
<mup> Bug #1612163 opened: state refcounts should be consolidated <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1612163>
<babbageclunk> fwereade, dimitern: I feel like this might be a boil the ocean solution.
<fwereade> babbageclunk, if it's too much, you're closer than me and your judgment is probably clearer
<fwereade> babbageclunk, I'd been hoping we could add a relatively clean/narrow interface to the functionality we need
<fwereade> babbageclunk, i.e. we don't need to rewrite what exists, just hide it behind something more fit for purpose
<voidspace> fwereade: team meeting?
<fwereade> voidspace, thanks
<voidspace> dimitern: do you have an example /e/n/i for a maas rack controller that defines vlans? I just have one subnet (and one nic) on the rack controller - but can reconfigure if needed (all KVM).
<dimitern> voidspace: yep, just a sec
<voidspace> dimitern: sure
<dimitern> voidspace: http://paste.ubuntu.com/23010779/ here's my 2.0 vmaas
<voidspace> dimitern: thanks!
<rick_h_> morning
<anastasiamac> rick_h_: \o/
<rick_h_> katco: dimitern ping for standup
<rick_h_> katco: can you take a peek at https://github.com/juju/juju/pull/5978/files please for ui folks?
<katco> rick_h_: sure
<rick_h_> katco: ty
<mup> Bug #1520292 changed: Upgrade from 1.21.3 -> 1.22.8 -> 1.23.3 fails with 'ERROR a hosted environment cannot have a higher version than the server environment: 1.23.3.5 > 1.22.8.1' <bug-squad> <upgrade-juju> <juju-core:Invalid> <https://launchpad.net/bugs/1520292>
<mup> Bug #1520292 opened: Upgrade from 1.21.3 -> 1.22.8 -> 1.23.3 fails with 'ERROR a hosted environment cannot have a higher version than the server environment: 1.23.3.5 > 1.22.8.1' <bug-squad> <upgrade-juju> <juju-core:Invalid> <https://launchpad.net/bugs/1520292>
<katco> frankban: hey, left a new comment on http://reviews.vapour.ws/r/5419/
<frankban> katco: sure, I like these new rules actually, good to know
<mup> Bug #1427257 changed: Juju backup doesn't contain .juju files <backup-restore> <juju-core:Invalid> <https://launchpad.net/bugs/1427257>
<mup> Bug #1520292 changed: Upgrade from 1.21.3 -> 1.22.8 -> 1.23.3 fails with 'ERROR a hosted environment cannot have a higher version than the server environment: 1.23.3.5 > 1.22.8.1' <bug-squad> <upgrade-juju> <juju-core:Invalid> <https://launchpad.net/bugs/1520292>
<mup> Bug #1545686 changed: Multi-series not backwards compatible <openstack> <uosci> <juju-core:Triaged by uros-jovanovic> <https://launchpad.net/bugs/1545686>
<frankban> katco: updated instructions at http://reviews.vapour.ws/r/5419/
<katco> frankban: ta, tal
<frankban> katco or axw: could you please take a look?
<frankban> we'd like to include that in next beta
<katco> frankban: did you publish your changes?
<frankban> katco: I just did, sorry
<katco> frankban: np, ty
<katco> frankban: this is great, ty. please also amend your commit message with this information (you can elide the qa portion if you like)
<frankban> katco: done
<frankban> katco: thanks for the review, shipping it
<katco> frankban: thanks for the change! <3
<babbageclunk> dimitern: Could you take a look at http://reviews.vapour.ws/r/5420/? Should be a quick one!
<dimitern> babbageclunk: looking
<dimitern> babbageclunk: LGTM
<babbageclunk> dimitern: Yay, thanks!
<babbageclunk> dimitern: This is the beginning of the end for this bug, I can feel it in me waters.
<dimitern> :D indeed
<redir> morning
<mup> Bug #1466152 changed: [RFE] log locks and other relevant information when jujud receives SIGUSR1 <feature> <logging> <juju-core:Invalid> <https://launchpad.net/bugs/1466152>
<balloons> mgz, so I saw you got a little further on my PR woes
<mgz> balloons: yeah, I hoped that would be enough to get you unstuck
<balloons> mgz, so do you have any thoughts then on getting it landed? I broke it up a little, but my attempt failed (beyond landing just the snapcraft.yaml)
<mgz> I'm not totally clear from william what his preferred solution is, but we need to change the utils/exec code at least partly back to how it was before
<balloons> is fwereade still about?
<balloons> mgz, if not, I'd appreciate your thoughts on what we can try
<balloons> shall I land something in juju/utils to undo the updates to exec?
<mgz> that's certainly an option
<balloons> mgz, seems simple enough honestly I suppose
<balloons> bogdanteleaga are you about?
<balloons> mgz, thank you for figuring out the bad commit btw :-)
<mup> Bug #1612335 opened: Azure rate limit leads to catastrophic failure for subscription <azure-provider> <bootstrap> <ci> <destroy-controller> <kill-controller> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1612335>
<balloons> mgz, I like that option. See https://github.com/juju/utils/pull/232
<balloons> I'll notify bogdanteleaga he'll need to redo his PR
<mgz> balloons: okay, poke ocr for a review (I probably shouldn't as I was also involved in the change)
<balloons> mgz, ack
<balloons> mgz, I believe that's frobware though and he's EOD
<balloons> redir, perhaps you can review for me? https://github.com/juju/utils/pull/232
<mgz> pretty sure he's on holiday so it must be someone else?
<balloons> calendar says Andy; which I suppose is ambiguous :p
<balloons> I'm going to pretend it's Wednesday and ask Reed
<dimitern> if anyone has half a minute, I'd appreciate a review on this: http://reviews.vapour.ws/r/5421/
<dimitern> it's quite tiny
<dimitern> rick_h_: here's the extra logging PR ^^
<redir> balloons: looking, lost in apt-get failures:/
<redir> balloons: so reverting a revert?
<balloons> redir, reverting https://github.com/juju/utils/commit/8a9dea08709bd9f65bbf7ed2c9bcfa0656eb8a7b, which causes a unit test failure
<balloons> and I'd like to land my otherwise clean PR without being hit by it (because I need a newer version of juju/utils than juju/juju has)
<redir> changes made to 10adcbf.
<natefinch> man I hate middle click paste
<redir> natefinch: weechat multiline paste to msg command
<natefinch> I can't tell you how many times I have random bits of text appear in my code because I pressed too hard while scrolling with scroll wheel
 * rick_h_ can't live w/o middle click paste
<redir> I don't have a scroll wheel:)
<natefinch> if I had a button separate from the scroll wheel it might be ok
<natefinch> redir: OMG, how can you live like that?
<redir> natefinch: can you give a graduated review to balloons PR above?
<balloons> natefinch, touchpad scrolling doesn't work for you? 2 fingers!
<redir> natefinch: with a trackball when I really have to
<balloons> natefinch, i have itchy fingers, but if it's not up to snuff, I'll can the merge
<redir> balloons: touchpad constantly reads my palms
<natefinch> balloons: sure, when i'm forced to use a touchpad like a sucker.  When I'm at a desk, I always have a real mouse.
<balloons> redir, ahh.. yea, the laptop matters SO much. The one I have behaves really nicely. No palm reading!
<natefinch> yeah, what redir said... linux "disable touchpad while typing" 100% does not work on my laptop (not sure about others)
<redir> anyone else having 404 issues wiht archive.ubuntu.com?
<redir> I cannot bootstrap a controller:/
<natefinch> balloons: ship it
<rick_h_> redir: just bootstrapped here in the last 3min
<balloons> :-)
<rick_h_> redir: pipes your way to your mirror cranky?
<natefinch> redir: I can get to it via my browser
<natefinch> balloons: oops... I reviewed dimtern's change, not yours
<redir> natefinch: http://archive.ubuntu.com/ubuntu/pool/main/q/qemu/qemu-utils_2.5+dfsg-5ubuntu10.2_amd64.deb
<balloons> natefinch, :p. I trust you'll come to the same conclusion
<rick_h_> and dimiter goes "winning!"
<redir> rick_h_: ^
 * balloons waves hand in front of natefinch's eyes
<natefinch> redir: that's a 404, yeah
<natefinch> yeah
<redir> whelp, seems bootstrap in LXD lands on a handful of those for me
<natefinch> balloons: link me?
<redir> tried deleting the lxc images for fun, cleaned out dns, etc
 * redir scratches head
<natefinch> I think the people who wrote Ubuntu SSO forgot what that first S stands for
<rick_h_> redir: apt-get update fail? there's a 10.3 in the archive not a .2
<redir> Serial Sign On
<rick_h_> redir: looks like it's just out of date index atm
<redir> rick_h_: k
<redir> I'll just do something else for a while
 * redir looks at email!
<rick_h_> redir: so http://archive.ubuntu.com/ubuntu/pool/main/q/qemu/qemu-utils_2.5+dfsg-5ubuntu10.3_amd64.deb is there
<mup> Bug #1583771 changed: RunnerSuite.TestOneWorkerStartWhenStopping timed out <ci> <intermittent-failure> <race-condition> <regression> <unit-tests> <juju-core:Fix Released by fwereade> <https://launchpad.net/bugs/1583771>
<mup> Bug #1596462 changed: Deployment failed because state DB is locked <ci> <deploy> <reliability> <juju-core:Fix Released by cox-katherine-e> <https://launchpad.net/bugs/1596462>
<mup> Bug #1600301 changed: cmd/jujud/agent MachineSuite.TestHostedModelWorkers fails because compute-provisioner never really starts <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Fix Released by menno.smits> <https://launchpad.net/bugs/1600301>
<redir> rick_h_: grumble unfixing enable-os-refresh-update for speed fixes my problem. but also unfixes speedup.
<redir> I guess speedup doesn't matter if it doesn't work
<rick_h_> redir: if you want it to go fast...add caching. If you want it to work...
 * rick_h_ thinks that's famous ancient chinese proverb
<redir> I want both
 * rick_h_ goes to find some sort of ancient proverb about greed
<redir> all three really
<redir> rick_h_: Yu Gong can move mountains
<natefinch> I think a meeting with 97 people is gonna be a little crowded
<perrito666> natefinch: ?
<natefinch> perrito666: just got an invite to the juju cross team meeting... which evidently includes everyone except you
<natefinch> perrito666: nope, I'm wrong, you're on there
<perrito666> natefinch: it includes me
<perrito666> i just ignore it
<perrito666> I just went when I was replacing you
<natefinch> I thought it might be new, I didn't remember it being on my calendar
<alexisb> perrito666, ping
<perrito666> alexisb: pong
<alexisb> sounds like some brokeness with your permissions commit
<alexisb> can you please reach out to sinzui and team
<perrito666> alexisb: uff, life can't be easy on me apparently
<perrito666> alexisb: sure
<alexisb> thanks
<mup> Bug # changed: 1558061, 1566011, 1570657, 1577776, 1587653, 1588390, 1594720, 1597170, 1597601, 1597720, 1599972, 1604482, 1605096, 1607457, 1607620, 1608421, 1608818, 1608821, 1609343, 1610012
<mup> Bug #1612395 opened: TaaS RestAPI fails due to "Unknown column 'tap_services.project_id'  <tap-as-a-service> <juju-core:New> <https://launchpad.net/bugs/1612395>
<mup> Bug #1612397 opened: juju 2.0 does not have an unset-config command anymore <juju-core:New> <https://launchpad.net/bugs/1612397>
<balloons> morning wallyworld
<wallyworld> hey
<katco> wallyworld: you're up early
<katco> or maybe just on early
<wallyworld> gotta drive the kid to school in a sec, early start today
<katco> cool
<balloons> wallyworld, anyways as you guessed, we're releasing beta15 atm, but yea, I would like it to land right after
<katco> that will be me in a week
<wallyworld> :-)
<wallyworld> balloons: no worries, willdo asap
<wallyworld> after release
<wallyworld> been wanting to land
<wallyworld> balloons: we still got to sort out how to deal with settings, as per email conversations
<balloons> wallyworld, brilliant. I'd like to also talk with you about how you want to snap the agents as well
<balloons> wallyworld, right. We need to sync up a bit on the next steps
<wallyworld> plus i wish the godeps plugin could be pointed to a source dir
<balloons> it's just simple python. I'd like it fixed too, and I can't believe it's a difficult thing to do
<balloons> but for my needs, re-pulling is fine, heh
<wallyworld> yeah agreed, patches accepted i'm sure
<wallyworld> it works, but is sloooooow
<wallyworld> and doesn't really fit dev workflow very well
<balloons> well, it's horrible for developer workflow
<balloons> but for releasing, it's just fine. I have to pull anyway
<wallyworld> true for release
<wallyworld> right, bbiab, need coffee and breakfast after school drop off
<natefinch> gah, why does yaml.Unmarshal spit out map[interface{}]interface{} instead of map[string]interface{}?
<katco> gah... my computer's been freezing all day. finally figured out gocode was eating 8GB of memory o.0
<katco> ...and wants to continue doing so? wtf
<redir> katco: I blame emacs
<redir> egacs
<redir> eight gigs and continuous swapping
<redir> jk
<katco> redir: emacs irc module, erc, has this great "fool" feature which de-emphasizes/ignores fools who don't know what they're talking about. ;p
<rick_h_> natefinch: what's up with this one? https://bugs.launchpad.net/juju-core/+bug/1552274 is that something you're still looking at?
<mup> Bug #1552274: juju list-credentials inconsistencies between format output <2.0-count> <bitesize> <conjure> <juju-release-support> <rc1> <usability> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1552274>
<mup> Bug #1612417 opened: revoke failed: User checked status without read permission <ci> <regression> <revoke> <juju-ci-tools:In Progress by sinzui> <juju-core:In Progress by hduran-8> <https://launchpad.net/bugs/1612417>
<mup> Bug #1484606 changed: bootstrap fails if control bucket not specified and exists <config> <openstack-provider> <juju-core:Invalid> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1484606>
<mup> Bug #1612395 changed: TaaS RestAPI fails due to "Unknown column 'tap_services.project_id'  <tap-as-a-service> <juju-core:Invalid> <https://launchpad.net/bugs/1612395>
<redir> katco: and psychoanalyze-pinhead
<katco> redir: lol
<redir> wallyworld: you back?
<wallyworld> in a meeting, but yeah
<redir> k
<niedbalski> rick_h_, Hey rick, how's it going? I saw that you assigned 1611981 to yourself :), let me know if you have any question.
<redir> holler if you get free wallyworld
<wallyworld> redir: will do, got another meeting straight after this one, will ping as soon as i can
<redir> wallyworld: no sweat
<rick_h_> wallyworld: want to have our chat?
<wallyworld> rick_h_: ok, if you are free, can you give me 5? i thought i saw a decline so i am having another meeting, so give me a sec
<rick_h_> niedbalski: cool, that's my way of saying my team will look at it in the near future. will bug if we have questions
<wallyworld> rick_h_: i'm free now if you are
<mup> Bug #1605986 opened: Confusing error: Creating container: can't get info for image 'ubuntu-trusty' <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1605986>
<mup> Bug #1612397 changed: juju 2.0 does not have an unset-config command anymore <juju-core:Invalid> <https://launchpad.net/bugs/1612397>
<wallyworld> redir: finally finished other meetings if you wanted to talk. oh how i love meetings
<perrito666> I definitely need an apt proxy or lxd provider is sloooow
<alexisb> perrito666, it has gotten a lot slower recently
<alexisb> I meant to fire up beta8 and see if it was faster
<perrito666> alexisb: this seems to be just cloudinit downloading ~50M from a repo that might not be too close
<perrito666> and then juju most likely downloads another 50
<alexisb> yes but the image I have should be cached at this point
<perrito666> alexisb: the image is, notice how it goes fast until you reach cloudinit
<perrito666> then aptget update/upgrade
#juju-dev 2016-08-12
<wallyworld> redir: did you want to talk?
<redir> sure
<redir> where?
<wallyworld> redir: standup HO is fine with me
<redir> OK
<redir> brt
<mup> Bug #1333162 changed: cannot bootstrap on openstack provider:  Multiple security_group matches found for name 'XYZ', use an ID to be more specific. <bug-squad> <charmers> <openstack-provider> <repeatability> <security> <uosci> <juju-core:Invalid> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1333162>
<menn0> wallyworld, axw: can we schedule some time next week to talk about the impact of model config tree on migrations?
<axw> menn0: sure. any day after 9am is fine for me. some days I can do earlier, some I can't
<menn0> axw: 9am your time right?
<axw> menn0: yup
<menn0> cool
<axw> menn0: gotta do school dropoff
<menn0> axw: let's wait until wallyworld responds and we'll make a time
<wallyworld> anytime suits me
<wallyworld> 11am works
 * menn0 opens time zone planner
<menn0> ok, that works for me
<menn0> i'll put it in the calendar
<menn0> wallyworld, axw: done
<wallyworld> menn0: not shown up for me yet - you made it next monday?
<menn0> checking
<wallyworld> you made it last monday
<wallyworld> :-)
<menn0> wallyworld: I'm a moran
<menn0> fixed
<wallyworld> we've all been there :-D
<wallyworld> so it is
<menn0> https://cdn1.vox-cdn.com/thumbor/cdRv1O8Rg-_aVC7N81Z6ROrV6_A=/0x0:524x349/1310x873/cdn0.vox-cdn.com/uploads/chorus_image/image/25006307/moran.0.jpg
<wallyworld> lol
<wallyworld> classic
<mup> Bug #1428439 changed: retry-provisioning launches instances for containers; cannot retry containers at all <juju-core:Invalid> <juju-core 1.24:Fix Released by hduran-8> <https://launchpad.net/bugs/1428439>
<mup> Bug #1612478 opened: remove retry-provisiosing command <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1612478>
<natefinch> am I doing something wrong?  When I marshal from json to a generic structure in Go, I get map[string]interface{}, when I do it with gopkg.in/yaml.v2, I get map[interface{}]interface{} ... what's up with that, anyone know?
<perrito666> I would guess that yaml is wrong (but no, yaml does that iirc)
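perrito666 remembers right: gopkg.in/yaml.v2 decodes generic mappings to map[interface{}]interface{} because YAML keys need not be strings, while encoding/json guarantees string keys. A common workaround is a recursive key-stringifying pass; a minimal sketch (the literal map below stands in for yaml.Unmarshal's output, so no third-party import is needed):

```go
package main

import "fmt"

// toStringKeys recursively converts the map[interface{}]interface{}
// values that gopkg.in/yaml.v2 produces for generic documents into
// map[string]interface{}, matching what encoding/json gives you.
func toStringKeys(v interface{}) interface{} {
	switch v := v.(type) {
	case map[interface{}]interface{}:
		m := make(map[string]interface{}, len(v))
		for k, val := range v {
			m[fmt.Sprint(k)] = toStringKeys(val)
		}
		return m
	case []interface{}:
		for i, val := range v {
			v[i] = toStringKeys(val)
		}
		return v
	default:
		return v
	}
}

func main() {
	// Stand-in for yaml.Unmarshal's output on "a:\n  b: 1\n".
	raw := map[interface{}]interface{}{
		"a": map[interface{}]interface{}{"b": 1},
	}
	converted := toStringKeys(raw).(map[string]interface{})
	inner := converted["a"].(map[string]interface{})
	fmt.Println(inner["b"]) // prints 1
}
```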
<perrito666> Menn0 I would be grateful if you publish the instructions (even if crude) for the proxy in juju, the creation of a new lxd instance takes forever
<menn0> perrito666: ok, I'll try and do that today
<perrito666> Just a hint would be enough
<perrito666> I am at near 25 per bootstrap
<perrito666> Or more
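While waiting for menn0's write-up: juju's model config does expose apt proxy keys, so a likely setup is pointing machines at a local cache such as apt-cacher-ng or squid-deb-proxy (the address and port below are assumptions, 3142 being apt-cacher-ng's default):

```shell
# point new machines in the current model at a local apt cache
juju model-config apt-http-proxy=http://10.0.3.1:3142

# or set it at bootstrap so the controller model gets it too
juju bootstrap lxd dev --config apt-http-proxy=http://10.0.3.1:3142
```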
<natefinch> wallyworld: can you make a new repo in github.com/juju called cloudschema?
<wallyworld> sure
<wallyworld> cloudschema? did we want it more general?
<natefinch> not sure
<wallyworld> is this for the jsonschema stuff?
<natefinch> right now it's not cloud config specific, but it is "juju config that we'll want to use for interactive prompts" specific
<natefinch> wallyworld: current API: https://godoc.org/github.com/natefinch/cloudschema
<wallyworld> right, so i can't see why "cloud" is relevant, it's just a schema lib
<natefinch> wallyworld: because there's metadata that isn't validation schema, like "secret" and metadata for generating CLI prompts
<wallyworld> that's not cloud either
<wallyworld> passwords are a generic concept
<natefinch> yes, true
<natefinch> clischema?
<wallyworld> gojsonschema is taken it looks like
<wallyworld> is it really just CLI? the same thing could be used for GUI
<natefinch> true
<natefinch> uxschema?
<wallyworld> what's in gojsonschema?
<natefinch> that's the old jsonschema we forked from someone else
<natefinch> it's what this will replace
<natefinch> that and environschema
<wallyworld> hmmm, so we really want this new one called gojsonschema. why not gojsonschema.v2
<natefinch> no I really don't want to call it gojsonschema
<wallyworld> that's what it is
<wallyworld> isn't it?
<natefinch> it's jsonschema... you don't need "go"
<natefinch> if you want, we can call it jsonschema
<wallyworld> the convention with go projects is goblah right?
<wallyworld> but jsonschema works
<natefinch> the problem is that it's not really just jsonschema.  it's jsonschema with extra junk
<wallyworld> hmmm
<wallyworld> i hate naming stuff
<wallyworld> natefinch: because it's for juju, maybe jsonschema is ok, since it's a jsonschema lib for use in juju? if it does jsonschema plus extras, i think that's ok?
<wallyworld> it's not not jsonschema
<natefinch> that's fine
<wallyworld> natefinch: done
<natefinch> wallyworld: sweet, thank you
<natefinch> wallyworld: and thanks for the input on the name
<wallyworld> sure :-)
<wallyworld> bikeshed :-)
<natefinch> names matter more than we think :)
<natefinch> lol
<natefinch> $ ls
<natefinch> LICENCE  LICENSE  README.md  schema.go  schema_test.go
<natefinch> crazy brits
<wallyworld> natefinch: it's only you guys in the US that spell differently to the rest of the whole world
<wallyworld> we know the difference between nouns and verbs
<wallyworld> licence is a noun. license is a verb
<wallyworld> advice is a noun. advise is a verb
<wallyworld> etc
<natefinch> hmm, good example.
<wallyworld> we also understand the difference between least to most significant. YYYY-MM-DD or DD-MMM-YYYY
<wallyworld> not YYYY-DD-MM
<natefinch> no no, we do MM-DD-YYYY
<wallyworld> same diff :-)
<wallyworld> still out of order
<natefinch> but yes, still dumb
<natefinch> code review anyone?
<natefinch> https://github.com/juju/jsonschema/pull/1
<mup> Bug #1612500 opened: Can't migrate back to a controller due to user permissions <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1612500>
<axw> wallyworld: would you please take a look: http://reviews.vapour.ws/r/5423/
<wallyworld> axw: sure, just got to go to school pickup, will look in a bit
<axw> wallyworld: np, thanks
<wallyworld> axw: so did you discover why rate limiting wasn't kicking in? or was that a red herring?
<axw> wallyworld: red herring. it is, I verified it works - I think their subscription is just so busy it never stopped getting rate limited
<wallyworld> wondered as much
<wallyworld> i think storage account key used to be a pointer rather than a slice before the sdk change right?
<axw> wallyworld: it was a pointer to a struct that had multiple keys in it
<axw> wallyworld: also it never used to have this permission thing
<wallyworld> ah, right ok. lgtm. code looks ok, i'll take your word that it works
<axw> wallyworld: thanks
<wallyworld> np
<mup> Bug #1603910 changed: model-level log forwarding not supported <oil> <oil-2.0> <juju-core:Fix Released> <https://launchpad.net/bugs/1603910>
<rogpeppe> anyone up for a review of this please? it just adds a (failing) test for some currently broken functionality. fix in subsequent PR. https://github.com/juju/juju/pull/5984
<dimitern> rogpeppe: LGTM
<rogpeppe> dimitern: thanks!
<rogpeppe> dimitern: i just realised that to save CI test time, it's better if I submit the fix as well in the same PR, so I've proposed this: https://github.com/juju/juju/pull/5985/files
<rogpeppe> dimitern: that has the PR you just reviewed as the first commit
<rogpeppe> dimitern: and the fix as the second
<dimitern> rogpeppe: on it
<rogpeppe> dimitern: feel free to comment directly on the second commit: https://github.com/juju/juju/pull/5985/commits/9d3478263f1d5fd4be0adb038e0811c25c9641a0
<dimitern> rogpeppe: ok, but where are your QA steps? :)
<rogpeppe> dimitern: check that juju still works, i guess :)
<dimitern> rogpeppe: that's preferable, but it should be possible to trigger the issue you're fixing?
<rogpeppe> dimitern: no, the issue only occurs in tests that I've seen
<dimitern> rogpeppe: or it's one of those races that's hard to trigger?
<rogpeppe> dimitern: because generally apiserver instances aren't torn down
<dimitern> rogpeppe: ah, so it's not observable otherwise?
<rogpeppe> dimitern: yes
<dimitern> rogpeppe: ok, housekeeping then ;)
<rogpeppe> dimitern: you could look at it like that, i guess
<rogpeppe> dimitern: but it would impact in any real situation where we try to tear down workers synchronously when there are outstanding blocked API calls
<dimitern> rogpeppe: right
 * dimitern adds rogpeppe's fork as remote to run the tests
<dimitern> rogpeppe: wow, you're like voidspace - hardly ever deleting merged branches
<rogpeppe> dimitern: i never delete merged branches
<rogpeppe> dimitern: why bother? :)
<dimitern> rogpeppe: for the sake of anybody pulling regularly from your fork I guess ;)
<rogpeppe> dimitern: it'll only make a difference the first time they pull...
<dimitern> rogpeppe: and seeing a couple of screenfuls each time heh
<mup> Bug #1612624 opened: Bootstrap fail on MAAS if ipv6 is disabled <juju-core:New> <https://launchpad.net/bugs/1612624>
<dimitern> rogpeppe: LGTM
<rogpeppe> dimitern: ta!
<rogpeppe> mhilton: i just realised i pushed $$merge$$ having assumed you'd LGTM'd the branch
<rogpeppe> mhilton: and you haven't, sorry!
<rogpeppe> mhilton: any chance of a retrospective... ?
<mgz> rogpeppe: eheh
<rogpeppe> mgz: :)
<rick_h_> morning, happy friday
<babbageclunk> dimitern, katco: could you take another look at http://reviews.vapour.ws/r/5366/diff please?
<dimitern> babbageclunk: sure, in a moment
<babbageclunk> dimitern: Thanks!
<rogpeppe> here's the branch that exposed the bug fixed by my earlier PR. it splits the controller and model APIs. reviews much appreciated! http://reviews.vapour.ws/r/5426/
<rogpeppe> dimitern: ?^
<dimitern> rogpeppe: will do after I finish babbageclunk's review
<mup> Bug #1612645 opened: New EC2 region in India (ap-south-1 - Mumbai) <canonical-is> <juju-core:New> <https://launchpad.net/bugs/1612645>
<babbageclunk> rogpeppe: I'll take a look at yours while he's doing mine! :)
<rogpeppe> babbageclunk, dimitern: ta!
<babbageclunk> rogpeppe: Would it be hard to split the Api -> API change into a separate diff? It'd be a lot easier to follow the controller API / model API change.
<rogpeppe> babbageclunk: i hoped you wouldn't say that :)
<rogpeppe> babbageclunk: it would be quite hard now, unfortunately
<babbageclunk> rogpeppe: oh no - squished commits?
<rogpeppe> babbageclunk: yeah
<babbageclunk> oh well
<rogpeppe> babbageclunk: perhaps you can just slide your eyes over the spelling changes :)
<babbageclunk> rogpeppe: I guess I'll have to! :)
<rogpeppe> babbageclunk: thanks
<rogpeppe> babbageclunk: i should've thought at the time
<mup> Bug #1612653 opened: bootstrap fails in beta15 when installing via snap <juju-core:New> <https://launchpad.net/bugs/1612653>
<babbageclunk> rogpeppe: reviewed - looks good, just one bit of confusion on my part.
<rogpeppe> babbageclunk: tyvm
<mup> Bug #1612658 opened: LXD no activate interface is bonding is used in MAAS2rc4 <juju-core:New> <https://launchpad.net/bugs/1612658>
<rogpeppe> babbageclunk: OpenControllerAPI landed in a PR just recently
<babbageclunk> rogpeppe: Ah, right - thought it could be something like that.
<natefinch> rick_h_: I'm going to miss standup.  My wife is sick in bed, so I'm with the kids.  Might get some work done this afternoon, but not sure.  I'll send a summary email of where I am.
<natefinch> and if anyone wants to do a review: https://github.com/juju/jsonschema/pull/1
<rick_h_> natefinch: k
<rick_h_> natefinch: please make sure the kanban board is up to date with links/etc
<rick_h_> natefinch: of particular interest is where the windows bug is
<rick_h_> katco: dimitern standup time
<dimitern> omw
<natefinch> rick_h_: I didn't do much with the windows bug, unfortunately... maybe that was a prioritization failure on my part, if so, I am sorry.  I was finally getting somewhere with the jsonschema stuff, so I was trying to push through with it.
<rick_h_> natefinch: I've promised that bug by next beta. We need to get it in. it's carried over and blocked OIL for multiple weeks atm
<rick_h_> natefinch: so that bug is priority #1 please
<natefinch> rick_h_: right.  My apologies.  I'll do my best to get it done today.
<dimitern> babbageclunk: sorry it took me so long, but you have an initial review
<dimitern> babbageclunk: I'm happy to chat / clarify anything after standup
<dimitern> rogpeppe: reviewed yours as well, finally
<rogpeppe> dimitern: thanks!
<macgreagoir> katco: If you have five minutes now, stand-up HO?
<katco> macgreagoir: still there :)
<macgreagoir> Cool, cheers :-)
<katco> macgreagoir: well for 1, it looks like there's more recent precedent for utilizing testing.RunCommand without the goroutine
<katco> macgreagoir: so i would say at the very least, the way you started to do things is more correct
<katco> macgreagoir: e.g. take a look at cmd/juju/controller/listmodels_test.go
<katco> axw: has some very nice tests
<katco> macgreagoir: and it is done by mocking the api client, which is what i remembered doing, so i would definitely hold this up as the way to properly write tests against commands
<katco> macgreagoir: does that sound ok?
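The pattern katco points at — writing the command against a small client interface and substituting a fake in tests instead of spinning up a goroutine or a live API — can be sketched generically (the names below are illustrative, not juju's actual interfaces):

```go
package main

import "fmt"

// ModelClient is the narrow surface the command needs; the real API
// client satisfies it, and tests substitute a fake.
type ModelClient interface {
	ListModels() ([]string, error)
}

// fakeClient is the test double: canned data, no controller required.
type fakeClient struct{ models []string }

func (f fakeClient) ListModels() ([]string, error) { return f.models, nil }

// listModels is the command body, written against the interface so a
// test never needs a live connection.
func listModels(c ModelClient) (string, error) {
	models, err := c.ListModels()
	if err != nil {
		return "", err
	}
	out := ""
	for _, m := range models {
		out += fmt.Sprintln(m)
	}
	return out, nil
}

func main() {
	out, _ := listModels(fakeClient{models: []string{"default", "test"}})
	fmt.Print(out)
}
```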
<babbageclunk> dimitern: cool, thanks
<babbageclunk> dimitern: The connected model checking was something Will asked for. Is he around today? Can't see any holiday on the calendar.
<macgreagoir> katco: Cheers
<katco> macgreagoir: hope that helps :)
<macgreagoir> (my irssi bell seems quiet, sorry)
<katco> macgreagoir: lol no worries
<macgreagoir> katco: That might be where unregister tests ideas were scraped from :-)
<katco> macgreagoir: :) the cargo culting is real. just gotta make sure you're copying the right stuff! ;p
<rick_h_> babbageclunk: sorry, not around today
<babbageclunk> rick_h_: Stink - thanks!
<dimitern> babbageclunk: he may have forgotten about GetAuthTag
<dimitern> babbageclunk: sorry, I guess you got conflicting suggestions :/
<dimitern> babbageclunk: let's have a chat with Will next week I guess..
<babbageclunk> dimitern: I don't think GetAuthTag does what he wanted, but I can't really explain why he wanted the model check - I do feel like it's a bit redundant given the check in the ctor.
<dimitern> babbageclunk: ok
<dimitern> babbageclunk: I'll have a look at his earlier comments
<dimitern> babbageclunk: and possibly update mine, but a bit later (need to go out ~1h)
<ejat> anyone here use juju at azure?
<mup> Bug #1612717 opened: Pinger facade not implemented on controller websocket connection <juju-core:New> <https://launchpad.net/bugs/1612717>
<mup> Bug #1612717 changed: Pinger facade not implemented on controller websocket connection <juju-core:New> <https://launchpad.net/bugs/1612717>
<mup> Bug #1612717 opened: Pinger facade not implemented on controller websocket connection <juju-core:New> <https://launchpad.net/bugs/1612717>
<mup> Bug #1612722 opened: Juju cannot destroy models/machine, all testing substrates are exhausted <blocker> <ci> <destroy-controller> <kill-controller> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1612722>
<mup> Bug #1612687 opened: Juju not showing real states of deployed nodes via MAAS. <juju-core:New> <https://launchpad.net/bugs/1612687>
<rick_h_> katco: can you please review https://github.com/juju/juju/pull/5987
<rogpeppe> here's some apiserver cleanup; reviews appreciated: https://github.com/juju/juju/pull/5988/files
<katco> rick_h_: sorry, was eating lunch. tal
<rick_h_> katco: all good, decided it was a clean revert so let it rip
<katco> rick_h_: ok. redir's +1 is probably sufficient as well. verifying a revert requires no knowledge of juju :)
<rick_h_> katco: :)
<perrito666> man, we really need to stop writing doc lines that are tautological
<rick_h_> katco: can you peek at https://github.com/juju/juju/pull/5988/files then which is more interesting?
<katco> rick_h_: ok
<rick_h_> katco: ty
<mup> Bug #1612744 opened: MachinerSuite.TestMachinerMachineAssignedUnits agent should be terminated <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1612744>
<mup> Bug #1612745 opened: rackerSuite.TestGainLeadership number of ClaimLeadership is wrong <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1612745>
<mup> Bug #1612747 opened: DebugHooksServerSuite.TestRunHook signal: killed <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1612747>
 * rick_h_ gets some lunch with that done
<kwmonroe> ejat: i use juju/azure
<ejat> kwmonroe: okie .. its working now .. thanks
<redir> bbiab
 * rick_h_ runs to get my truck and the boy from summer camp. have a good weekend folks
<mup> Bug #1612775 opened: resource get fails, resource already staged <juju-core:New> <https://launchpad.net/bugs/1612775>
<perrito666> it would be extremely nice of us to name the same thing in the same way across facades
<redir> perrito666: novel idea
<perrito666> authorizer and state are called something different across all facades
<perrito666> and just found one without authorizer
<natefinch> sinzui: are you still having problems with that windows deploy CI test?  I seem to be able to deploy a windows charm just fine on azure
<sinzui> natefinch: I think not. I am still wating for one to complete to say all is well
<sinzui> natefinch: once juju broke the rate limit, azure locked us out for many hours
<natefinch> sinzui: great :)
<natefinch> oops
<sinzui> natefinch: I see some azure successes. most failed though. I suspect the failures were at the tail end of the lockout. I will retest.
<katco> sinzui: hey does qa have any plans to release the qa tools as a snap?
<sinzui> katco: we expect to do dailies
<katco> sinzui: daily releases of the qa tools?
<sinzui> katco: oh, no. no plans to snap juju-ci-tools or juju-release-tools
<katco> sinzui: sorry, juju-ci-tools
<katco> sinzui: having just gone through the setup process, it would be awesome to just download a snap
<sinzui> maybe balloons will want to try it. Since snap doesn't work with windows, centos, or osx, we are dedicated to working with a common solution.
<katco> sinzui: ah good point
<mup> Bug #1612793 opened: bootstrap azure/southcentralus is failing in 2.0-betaX <juju-core:New> <https://launchpad.net/bugs/1612793>
<perrito666> ... obtained int = 2
<perrito666> ... expected int = 2
<perrito666> evidently
<katco> yay my refactor compiles for the 1st time
<mup> Bug #1612335 changed: Azure rate limit leads to catastrophic failure for subscription <azure-provider> <bootstrap> <ci> <destroy-controller> <kill-controller> <regression> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1612335>
<mup> Bug #1612722 changed: Juju cannot destroy models/machine, all testing substrates are exhausted <blocker> <ci> <destroy-controller> <kill-controller> <regression> <juju-core:Fix Released by rharding> <https://launchpad.net/bugs/1612722>
<mup> Bug #1612836 opened: Cannot bring up hosted  model machines in azure <azure-provider> <ci> <deploy> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1612836>
<katco> rick_h_: ping, don't suppose you're on
<perrito666> mmm, ppas dont work nicely with my deb proxy
#juju-dev 2016-08-13
<redir> EoW. Laters
#juju-dev 2016-08-14
<mup> Bug #1592179 changed: Juju2 with MAAS2, log shows errors after having created a controller <juju-core:Expired> <https://launchpad.net/bugs/1592179>
 * thumper wonders how much mail there will be
<menn0> thumper: as per anastasiamac, can you take a look at this one please: http://reviews.vapour.ws/r/5418/
<thumper> yeah
#juju-dev 2017-08-07
<axw> babbageclunk: I am now
<babbageclunk> axw: have you bootstrapped on the maas that thumper pointed us at?
<axw> babbageclunk: nay
<axw> babbageclunk: still working through basic lxc to lxd conversion bits, was intending to move onto LXC/LXD on MAAS next
<babbageclunk> axw: d'oh - I can't get to the nodes that it deploys, and I was hoping you'd know how.
<axw> babbageclunk: I'll try it straight after standup
<babbageclunk> ok
<thumper> wallyworld: got a few minutes for a chat?
<axw> babbageclunk: hmm yeah not sure how we're supposed to get to that network. trying to sleuth
<babbageclunk> axw: Can we use lxc on xenial?
<axw> babbageclunk: in what capacity?
<axw> babbageclunk: I've done some testing of lxc-to-lxd on xenial hosts
<babbageclunk> axw: I was just about to try bootstrapping 1.25 locally.
<babbageclunk> axw: Actually I've got a 1.9 vmaas, I'll use that
<axw> babbageclunk: can you please take a look at https://github.com/juju/1.25-upgrade/pull/11? you can pretty much ignore the first commit I think, it's just a copy of the lxc-to-lxd script from the lxc repo
<babbageclunk> axw: hmm, bootstrapped ok, but I keep getting this error when I try to deploy anything: ERROR cannot retrieve charm "cs:trusty/mysql-38": cannot get archive: cannot open archive data for cs:trusty/mysql-38: blob not found
<babbageclunk> axw: any ideas?
<axw> babbageclunk: hmm nope sorry
<blahdeblah> babbageclunk: turn on your work IRC :-P
<thumper> menn0: ping?
<menn0> thumper: pong
<thumper> 1:1?
<menn0> thumper: sorry, coming
<babbageclunk> axw: oops - I think I got silently disconnected after asking that (I tried turning off my vpn).
<babbageclunk> axw: did you reply?
<axw> babbageclunk: I did, but only to say I don't know why that would happen
<babbageclunk> axw: ah bums. Not having much luck today.
<axw> babbageclunk: would you please review https://github.com/juju/1.25-upgrade/pull/11 (ignore first commit, it's just a copy from lxc repo) and https://github.com/juju/1.25-upgrade/pull/12 (ignore first two commits, they're from #11)
<babbageclunk> axw: sure
<babbageclunk> axw: delightfully a reboot seems to have fixed my deploy problem
<axw> babbageclunk: cool.
<babbageclunk> axw: reviewed, sorry for holdup
<axw> babbageclunk: np, thank you
<axw> babbageclunk: is there a reason we should not error out if any of the "stop-agents" or "start-agents" SSH commands fail?
<axw> babbageclunk: atm just warning, feels like it should be a failure...
<babbageclunk> axw: Not that I can see - I'd say we definitely want it to be failure.
<axw> babbageclunk: cool, will change it later today. going for a ride in a moment
<menn0> jam: ping
<mwhudson> veebers: are you the correct person to goad into doing sru verification of the juju-mongodb update?
<mwhudson> veebers: https://bugs.launchpad.net/ubuntu/xenial/+source/juju-mongodb3.2/+bug/1699354
<mup> Bug #1699354: upgrade juju-mongodb3.2 to 3.2.15 <verification-needed> <verification-needed-xenial> <verification-needed-zesty> <juju-mongodb3.2 (Ubuntu):Fix Released by mwhudson> <juju-mongodb3.2 (Ubuntu Xenial):Fix Committed by mwhudson> <juju-mongodb3.2 (Ubuntu Yakkety):Won't Fix by mwhudson>
<mup> <juju-mongodb3.2 (Ubuntu Zesty):Fix Committed by mwhudson> <https://launchpad.net/bugs/1699354>
<veebers> mwhudson: Either me or balloons have done some in the past. I think we would just need to re-do some SRU testing jobs to do so.
<mwhudson> veebers: in any case, can i leave it with you? :)
<veebers> mwhudson: sure, we'll get back to you in the next couple of days :-)
#juju-dev 2017-08-08
<menn0> wallyworld: I've been consistently getting these for a while on the develop branch: http://paste.ubuntu.com/25266422/
<menn0> just rebased and I'm still getting them
<wallyworld> interesting ok. haven't seen those
<wallyworld> oh wait
<wallyworld> that i think is a mongo bug
<wallyworld> we needed to upgrade to 3.2 to solve it
<wallyworld> it happened on 2.4 in CI
<wallyworld> but since using 3.2 it hasn't been seen since
<menn0> wallyworld: that makes sense. the machine i generally run tests on has 2.4
<menn0> wallyworld: we still need to support 2.4 b/c of trusty
<wallyworld> sure, but if there's a mongo bug....
<menn0> wallyworld: what's the bug?
<wallyworld> i can't recall now, i'd have to google it
<menn0> wallyworld, axw, babbageclunk: big model refactoring (part 1/2): https://github.com/juju/juju/pull/7710
<menn0> wallyworld: never mind
<menn0> wallyworld: about the bug, not the review
<wallyworld> :-)
<wallyworld> menn0: "which meant that the Model referred to a state.State associated with the correct model."  <----- that should be "incorrect" right?
<menn0> wallyworld, axw, babbageclunk: and just remember it was 10x more painful for me to make these changes than it will be for you to review them :)
<menn0> wallyworld: yes!
 * menn0 fixes
<babbageclunk> menn0: Oh wow, sounds like a fun change
<wallyworld> menn0: i wish we didn't have *both* Tag() and UUID() methods at the state layer - should be UUID() only. tags are for the apiserver layers
<menn0> wallyworld: I completely agree but they've got their little taggy claws in everywhere now
<wallyworld> yeah i know :-(
<wallyworld> just having a bitch
<wallyworld> i once had a gf like that
<wallyworld> menn0: although in places in this PR the UUID() variant could easily be used instead
<menn0> wallyworld: yes there are
<menn0> wallyworld: I had to draw a line somewhere
<wallyworld> that is true
<menn0> wallyworld: if I have the energy after getting rid of State.GetModel, I'll try and tidy that up too
 * menn0 makes note
<wallyworld> sgtm
<wallyworld> tbh, getting rid of ForModel() makes me very happy
<menn0> me too
<menn0> I wasn't even aiming to do that but it just fell out
<wallyworld> menn0: just going to standup, left an initial comment
<menn0> wallyworld: ok thanks
<thumper> axw: quick question for manual provisioning, the user on the host needs to have sudo permissions (I'm assuming) to configure the service, but does it have to be passwordless sudo?
<axw> thumper: no, the process will set up passwordless if it's not done already: https://github.com/juju/juju/blob/develop/environs/manual/sshprovisioner/sshprovisioner.go#L31
<thumper> sweet
<thumper> ta
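For context, the passwordless-sudo arrangement axw refers to boils down to a one-line sudoers drop-in like the following. This is a generic sketch (file path and user are invented), not the exact step sshprovisioner performs:

```shell
# Generic sketch of passwordless sudo: a one-line sudoers drop-in.
# Writing to /tmp here for illustration only; the real target would
# be a file under /etc/sudoers.d/ (validated with visudo) as root.
line='ubuntu ALL=(ALL) NOPASSWD:ALL'
printf '%s\n' "$line" > /tmp/90-juju-sudoers-example
grep -c 'NOPASSWD' /tmp/90-juju-sudoers-example
```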
<wallyworld> menn0: quick chat? easier than typing into a pr
<axw> babbageclunk: FYI, you'll need to use a proxy for external access on that MAAS: http://squid.internal:3128
<axw> babbageclunk: otherwise LXC containers won't start
<babbageclunk> axw: ooh, thanks.
<axw> babbageclunk: and if you didn't already find out, it's ppc64el-only
<babbageclunk> axw: Ahh, just saw your reply about the staging key - thanks
<axw> np
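Presumably the proxy gets wired in through model config; a sketch using the standard http-proxy/https-proxy model-config keys (that this particular MAAS setup wants both keys is an assumption, not something stated in the chat):

```shell
# Point the model at the internal squid so LXD containers can reach
# external archives (proxy address taken from the chat above; applying
# it via model-config is this sketch's assumption, not a quoted command).
juju model-config http-proxy=http://squid.internal:3128 \
                  https-proxy=http://squid.internal:3128
```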
<menn0> wallyworld: now ok?
<wallyworld> menn0: sure, griday ho?
<menn0> wallyworld: ok
<thumper> OH FFS
<thumper> every time I'm about to start writing this spec, I end up replying to another email
<thumper> now I get my physio reminder
<thumper> I feel I'll be back in the evening to work on this spec, and disconnect email
<babbageclunk> griday ho
<anastasiamac> babbageclunk: glad u said it :D
<axw> babbageclunk: in case you missed it, PTAL at https://github.com/juju/1.25-upgrade/pull/13 when you have some time
<babbageclunk> axw: sorry, I did miss it - will go through it in a bit.
<axw> babbageclunk: aw crap I just realised, this isn't going to work on the MAAS thumper provided, because it's ppc64el... the upgrade command assumes the client and server are the same arch
<axw> le sigh
 * axw wonders if the controllers we intend to upgrade are amd64
<babbageclunk> axw: oh yeah, that's annoying.
<thumper> babbageclunk: griday?
<babbageclunk> thumper: sure
<thumper> I don't get it
<babbageclunk> 14:13 <wallyworld> menn0: sure, griday ho?
<wallyworld> sigh
<anastasiamac> :D
<wallyworld> "g" and "f" are next to each other
<wallyworld> you all know what i meant
<thumper> oh...
<anastasiamac> comic relief is nice this windy tuesday :D and yes, we did and that only helped the laughter
<thumper> I didn't notice that wallyworld said it first
<babbageclunk> It was just enough typos and jargon that it tickled me.
<thumper> I thought it was a reference to me...
<babbageclunk> It can be.
<wallyworld> veebers: is the landing bot down?
<veebers> wallyworld: let me have a look, you have a link to what you're trying to land?
<wallyworld> veebers: https://github.com/juju/juju/pull/7708
<veebers> I'll take a look
<wallyworld> ta
<veebers> wallyworld: that should be unstuck now
<wallyworld> awesome ty
<wallyworld> so it is
<bradm> is there anything I can look at to see why a juju status is taking a long time?  this stack is taking anywhere between 7 and 9 minutes just on a bare juju status
<wallyworld> bradm: status is somewhat inefficient, especially on large models. there are plans to fix it but right now there's not much that can be done. it is worse if the controller is dealing with a lot of traffic like config change processing etc. it is better in juju 2.2.2 than other versions but still not great
<bradm> wallyworld: this is 2.2.2, about 50 machines
<bradm> wallyworld: and about 80 lxds on the machines it looks like
<wallyworld> it's more the number of apps and units
<bradm> its an openstack deploy, so its not tiny
<wallyworld> that would do it
<wallyworld> sadly there's nothing that can be done until a fix lands in a future release
<bradm> it seems like a machine-0 restart helps for a bit, but it gets worse over time
<wallyworld> sadly yes
<wallyworld> veebers: tests passed but merge failed :-( http://juju-ci.vapour.ws:8080/blue/organizations/jenkins/github-merge-juju/detail/github-merge-juju/97/pipeline
<bradm> the other problem is out of all those hundred or so units, one has decided its going to use its ipv6 address for juju, is there any way to fix that?
<veebers> wallyworld: looking
<veebers> wallyworld: rats, sorry about that, need to re-run it now. But the merge will work this time around
<wallyworld> ok
<veebers> wallyworld: that's done now
<wallyworld> great ty
<veebers> wallyworld: nw, sorry for the hassle there
<wallyworld> shit happens
<wallyworld> babbageclunk: maybe i can swap you reviews? this is proposed against a branch of my fork because i'm waiting on a review of that upstream branch before it can be landed https://github.com/wallyworld/juju/pull/44
<babbageclunk> wallyworld: sure!
<babbageclunk> wallyworld: also I need to do one for axw
<wallyworld> so many reviews - we must be doing work or something
<wallyworld> babbageclunk: lgtm but with a request to change the constructor - hope it makes sense
<wallyworld> axw: with bug 1706535, to refresh my memory, it's just a matter of setting the upgrade step against 2.2.3 version i think?
<mup> Bug #1706535: Upgrading juju 2.2.1 to 2.2.2 fails on Azure and vSphere <juju:Triaged by axwalk> <https://launchpad.net/bugs/1706535>
<axw> wallyworld: 2.2.2, but otherwise yes
<wallyworld> axw: 2.2.2 is released, so it has to be 2.2.3
<wallyworld> i guess it doesn't matter really
<axw> wallyworld: no... because we don't need 2.2.2 to run the step
<axw> i.e. if you upgrade 2.2.2 to 2.2.3, it shouldn't run
<wallyworld> right ok. but it will be run from 2.1.x to 2.2.3
<axw> wallyworld: right, and (crucially) also 2.2.0 and 2.2.1 to 2.2.3
<axw> it was those ones for which it was broken
<axw> was/is
<axw> brb, vbox wants me to restart
<wallyworld> axw: not sure if you'll get a chance to look at the 2nd pr today? where the macaroon auth is wired up? i re-targetted to develop since the other branch landed earlier. maybe in between tasks? thanks for reviewing the first one
<axw> wallyworld: not sure yet, I'm flailing with the LXC-on-MAAS testing. I'd like to make some progress here
<wallyworld> sure np
<babbageclunk> wallyworld: Sorry, I got pulled onto kid duty but I'm going through that PR now - hope I'm not doubling up with axw!
<axw> babbageclunk: nope
<babbageclunk> cool cool
<babbageclunk> axw: I'm looking at your PR next if it's still waiting. :)
<axw> babbageclunk: thanks, it is - but it can wait until tomorrow. no rush
<babbageclunk> ok
<wpk> babbageclunk is on review-frenzy ;)
<axw> babbageclunk: I'm trying to get a vMAAS set up, that other one seemed to be a waste of time
<babbageclunk> axw: yeah, that architecture thing really stymies it.
<babbageclunk> axw: I was thinking you could point it at another (cross-compiled) binary to do the hash, but it's a waste of time unless we actually need to do the upgrade on other archs.
<axw> babbageclunk: I did do that, but ended up with other issues that looked like ppc64 bugs. didn't want to spend too much time on it unless I can repro in another MAAS
<babbageclunk> oh stink
<babbageclunk> wpk: my power level's over 9000!
<wallyworld> menn0: if i take a review or 2 off your plate tomorrow, could i ask you to review this for me? https://github.com/juju/juju/pull/7711 also, did you need me to look again at that pr from yesterday? I think your were going to tweak some stuff right
<wallyworld> hml: free now for a bit
<hml> wallyworld: standup ho?
<wallyworld> ok
<menn0> wallyworld: will take a look
<wallyworld> \o/
<menn0> wallyworld: I'll have that other PR updated soon. will ping you.
<babbageclunk> wallyworld: hey, did you see my responses on https://github.com/juju/juju/pull/7712?
<wallyworld> babbageclunk: i did and as soon as i finish talking to heather was going to ping you
<babbageclunk> wallyworld: :)
<wallyworld> babbageclunk: wanna jump into the standup ho, almost done
<menn0> wallyworld: review done
<wallyworld> menn0: awesome just finished talking to eric, looking now. eric will be ready to do stuff for you today or tomorrow
<wallyworld> menn0: i have another PR stacked on top of the one you reviewed. i'll make the changes in that one and merge them in one go. otherwise i'll get lots of conflicts
<wallyworld> thanks again for looking
<anastasiamac> wallyworld: since u r talking to everyone today, could i take a ticket and stand in line too, plz?
#juju-dev 2017-08-09
<wallyworld> anastasiamac: sure, sorry i had to go out and buy a new sim card as someone took my phone at soccer last night :-(
<anastasiamac> wallyworld: that's awful!! all these contacts and messages and saved games... :(
<wallyworld> yeah
<anastasiamac> wallyworld: oh... just sim? u mean u had to get a new phone too, right?...
<anastasiamac> wallyworld: what did u get?
<wallyworld> i have a spare for now
<anastasiamac> u mean like a very old, very heavy, very dusty one? :)
<wallyworld> pretty much
<menn0> menn0: great * 2
<babbageclunk> axw: Forgot to say, I reviewed https://github.com/juju/1.25-upgrade/pull/13
<axw> babbageclunk: gracias
<axw> babbageclunk: actually my prefixWriter implementation is broken :) fixing now...
<menn0> babbageclunk: would mind joining a call pls? I suspect you have some state which will help
<menn0> babbageclunk: https://hangouts.google.com/hangouts/_/canonical.com/menno-tim
<babbageclunk> menn0: sure
<wallyworld> menn0: this commit (on the followup pr) refactors the api calling to be more clear as per your other pr request. if you are happy, i'll merge the 2 prs together and land https://github.com/wallyworld/juju/pull/44/commits/011102c8d3768389aa0961795e0b5eaf40a8a38d
<wallyworld> menn0: and it also exposed a bug which resulted in a test change
<menn0> wallyworld: cool. i'm in a call but will look after.
<wallyworld> np
<menn0> wallyworld: that commit looks great. exactly what I was thinking. happy for that to land.
<axw> babbageclunk: I've updated https://github.com/juju/1.25-upgrade/pull/13
<babbageclunk> axw: thanks - on a call, but I'll look after
<mup> Bug #1709520 opened: juju-db spams syslog, fills disk <juju-core:New> <https://launchpad.net/bugs/1709520>
<babbageclunk> axw: Thanks, that's heaps better.
<axw> babbageclunk: cool, thank you
<menn0> wallyworld: another look at https://github.com/juju/juju/pull/7710 pls
<wallyworld> menn0: sure
<wallyworld> menn0: lgtm with a couple of todos for the state pool stuff we discussed
<balloons> just fyi all CI is paused for a little bit; this includes check merge jobs.
<babbageclunk> wallyworld: around?
<wallyworld> babbageclunk: hi
<babbageclunk> wallyworld: fixing a weird status bug and trying to understand an existing test - the expected output seems wrong.
<babbageclunk> wallyworld: got time for a ho?
<wallyworld> sure
<wallyworld> standup
<babbageclunk> wallyworld: here's what it would look like: http://paste.ubuntu.com/25279863/
<babbageclunk> wallyworld: not sure about the headings - I guess consumes feels wrong now that we have offer/consume.
<wallyworld> babbageclunk: i don't think we need extra columns, but perhaps the endpoint names could be tabbed out like in Go with var/type blocks
<wallyworld> yeah, maybe Requires
<babbageclunk> wallyworld: so they're lined up? Might say a min of one space between them (so they're closer related than between cols)
<wallyworld> yeah
<wallyworld> just a though
<wallyworld> t
<wallyworld> instead of the :
<babbageclunk> wallyworld: http://paste.ubuntu.com/25279902/
<babbageclunk> wallyworld: not sure about that
<wallyworld> hmmm, but now it does look like it should just be an extra column :-(
<wallyworld> yeah
<wpk> wind 39
<wallyworld> babbageclunk: maybe stick with first one and get feedback
<wallyworld> wpk: i assume you're talking about the weather :-)
<babbageclunk> wpk: did you eat a curry?
<wallyworld> that's what i was thinking :-)
<thumper> morning
<thumper> babbageclunk: https://github.com/juju/description/pull/17
<thumper> veebers: ping
<babbageclunk> thumper: Oh hai - taking a look at that now. Which do you prefer out of these? http://paste.ubuntu.com/25279938/
 * thumper looks
<babbageclunk> thumper: (should be principals)
<thumper> ugh
<thumper> ok
<wpk> wallyworld: babbageclunk https://vignette1.wikia.nocookie.net/fantendo/images/8/8f/Nothing-to-see-here.gif/revision/latest?cb=20160415175201 :P
<wpk> And g'nite :)
<thumper> night wpk
<thumper> babbageclunk: I feel that we should have Relation in the heading somewhere
<babbageclunk> bye wpk
<babbageclunk> thumper: Yeah, that's definitely a problem. But according to wallyworld the fact that it's sorted by provider endpoint name makes it not very useful. I tried with the headings Relation/To but it felt a bit forced.
<wallyworld> thumper: we discussed with ante at the sprint and having the left column sorted by "provides"  was the most important thing. Ideally yes, "Relation" in heading but not at the expense of sorting
<babbageclunk> thumper: So a better heading for the provider column that lets us use Relation would be ideal.
<thumper> why not just have the heading called "Relation Provider" ?
<thumper> babbageclunk: fixed principal spelling, review plz?
<babbageclunk> thumper: yeah, I like Relation Provider
<babbageclunk> thumper: continuing review
<thumper> ta
<babbageclunk> thumper, wallyworld: http://paste.ubuntu.com/25279987/
<thumper> babbageclunk: I think that is OK
<babbageclunk> good enough for me!
<babbageclunk> thumper: Don't you also need to fix the error in migration_export?
<babbageclunk> thumper: I thought there was a missing relation scope error?
 * babbageclunk gets it
<babbageclunk> Oh, this is the description change.
<thumper> yes I do, that's next
<babbageclunk> thumper: lgtm'd
<thumper> veebers: ping-a-ling
<veebers> thumper: pong o/ Wat's the haps
<thumper> I need to talk about updating migration acceptance test
<thumper> hangout?
<veebers> thumper: yeah sure, just need 2 seconds
<thumper> ack
<anastasiamac> a small review plz: https://github.com/juju/juju/pull/7723
<veebers> thumper: release call HO?
<thumper> veebers: sure
<anastasiamac> and another equally tiny: https://github.com/juju/juju/pull/7716
#juju-dev 2017-08-10
<anastasiamac> thumper: \o/ tyvm!
<anastasiamac> and another tiny pr plz: https://github.com/juju/juju/pull/7724
<babbageclunk> anastasiamac: ding!
<anastasiamac> babbageclunk: \o/
<anastasiamac> and another one plz: https://github.com/juju/juju/pull/7725
<babbageclunk> gah
<babbageclunk> anastasiamac: done
<anastasiamac> babbageclunk: not necessarily for u to review :D anyone really ;) but TYVM
<babbageclunk> :)
<anastasiamac> another review plz - https://github.com/juju/juju/pull/7726
<thumper> wallyworld: there shortly
<anastasiamac> babbageclunk: on fire! :) tyvm
<wallyworld> thumper: ?
<thumper> babbageclunk: test added https://github.com/juju/juju/pull/7722/commits/f1c7e8618d2ad5dd37c4dacdd85ac0c0cbe690eb
<wallyworld> veebers: is this a known issue? http://juju-ci.vapour.ws:8080/blue/organizations/jenkins/github-merge-juju/detail/github-merge-juju/103/pipeline
 * veebers looks
<babbageclunk> thumper: test looks great - don't you still need the - on the index?
<thumper> babbageclunk: yes, yes I do
<thumper> thanks
<veebers> wallyworld: I'll take a look onw
<veebers> now*
<babbageclunk> thumper: Do you need an upgrade step to drop the old index as well?
 * babbageclunk might be pushing it
<thumper> ah... I'll check
<veebers> wallyworld: oh, that slave is out of space. Fixing now.
<wallyworld> ta
<babbageclunk> wallyworld: So, not changing the YAML/JSON status output - what about if I just added a top-level relations key?
<wallyworld> babbageclunk: to the yaml/json? i thought those were ok as is?
<babbageclunk> wallyworld: I can hide the new info I need for the tabular output by just making one field non-public, but it seems a little perverse.
<wallyworld> we typically have bespoke structs don't we for yaml vs tabular? i can't recall the code offhand
<wallyworld> oh, looks like we use the same one for both
<wallyworld> if what comes off the wire is ok, you'll either need a "-" in the format tag to hide it, or use a new struct for tabular. i don't see anything wrong with hiding fields
<babbageclunk> wallyworld: ok cool - hiding is nicer than just lowercasing the field name which is what I was thinking.
<wallyworld> yeah, just use "-"
<veebers> wallyworld: I have cleared out that node so we're moving forward with merge jobs now
<veebers> I have also fixed the merge job itself to clean up properly after itself and will land that now too.
<thumper> meetingology: https://github.com/juju/juju/pull/7709 updated
<thumper> menn0: https://github.com/juju/juju/pull/7709 updated
<thumper> babbageclunk: https://github.com/juju/juju/pull/7722 updated too
<menn0> thumper: k, will look in a sec
<thumper> ta
<babbageclunk> thumper: lgtm'd!
<wallyworld> thumper: do you have time to look at this fix for the instance id bug for 2.2? https://github.com/juju/juju/pull/7728
<thumper> wallyworld: just need to send an email
<thumper> not sure ... have calls coming too
<wallyworld> ok
<babbageclunk> thumper or wallyworld: can you review my status fix? https://github.com/juju/juju/pull/7729
<wallyworld> ok
<wallyworld> babbageclunk: just a couple of nits
<wallyworld> good to have this fixed
<babbageclunk> wallyworld: cool cool, thanks
<wallyworld> babbageclunk: we don't export dying or dead things do we?
<babbageclunk> wallyworld: In a migration? I don't think we'll allow a migration to happen when there are things in flux like that.
<wallyworld> ok, in a pr i have up, i added a field but it's only relevant if a machine is dead
<wallyworld> i probably can add that to the ignore list then
<babbageclunk> wallyworld: sounds right - in migration/precheck.go, checkMachines it will fail if there are any dead machines.
<anastasiamac> wallyworld: i was just looking at big merge... there were a couple of things i wanted to bring from 2.2 into develop...
<anastasiamac> :(
<mup> Bug #1709791 opened: juju deployed lxd falls back to lxdbr0 bridge when binding is specified <juju-core:New> <https://launchpad.net/bugs/1709791>
<wallyworld> anastasiamac: i realise there's more to land into 2.2 - i have a PR myself. it's trivial to do another merge whenever needed
<anastasiamac> not into 2.2.. from 2.2.
<wallyworld> but the decks needed to be cleared as the diff was getting out of hand
<wallyworld> exactly
<wallyworld> i have a pr to land into 2.2
<wallyworld> which also then needs to be in develop
<anastasiamac> everything i have in 2.2 landed. i just wanted to check that it is brought over
<wallyworld> now that the big merge is happening, anything else should hopefully be trivial
<wallyworld> babbageclunk: any chance of a small mechanical review? https://github.com/juju/juju/pull/7731
<babbageclunk> sure
<wallyworld> yay, ty
<hml> wallyworld: i *think* I have the update-series application changes done.  any chance you're ready to look again?
<wallyworld> can do
<hml> wallyworld: still towards 2.2 - changing to develop will be interesting.  a few things have changed
<hml> wallyworld: ty
<wallyworld> yeah, hopefully retarget should not be too bad
<menn0> wallyworld: have you got time for a hangout? i'm stuck with something that you might be able to shine some light on.
<wallyworld> sure
<wallyworld> team ho?
<menn0> wallyworld: sure
<babbageclunk> wallyworld: lgtm'd
<axw> babbageclunk: standup
<wallyworld> babbageclunk: awesome, ty for review
<babbageclunk> oh, sorry!
#juju-dev 2017-08-11
<wallyworld> hml: let me know if the next round of comments make sense. the txn stuff is close but the business logic and ops generation needs to be separated
<hml> wallyworld: looking.
<hml> wallyworld: do you have a few minutes for an HO?
<anastasiamac> hmmm 'user-cat'?
<wallyworld> hml: sure
<wallyworld> sorry, missed ping
<wallyworld> same one as before
<hml> wallyworld: sure
<wallyworld> babbageclunk: here's a small juju/description followup to the other juju one if you have a moment whenever https://github.com/juju/description/pull/18
<babbageclunk> wallyworld: approved
<wallyworld> awesome ty
<axw_> wallyworld: yay, finally got my VLAN setup right with openvswitch. I think.
<wallyworld> awesome
<menn0> wallyworld, axw_ : i've got to stop now but I've put up a WIP: https://github.com/juju/juju/pull/7733
<wallyworld> ok, will look
<menn0> it's almost done, a few tests still fail
<menn0> no rush though
<wallyworld> will look this arvo
<menn0> I need to write a few docstrings and tests too
<wallyworld> axw_: would you have time to look at a 98% mechanical change? essentially s/offerurl/offeruuid. i've got 1 failing unit test still (gui related), but will be trivial to fix https://github.com/juju/juju/pull/7734
<axw_> wallyworld: ok
<axw_> wallyworld: let me know when you've finished changing it please
<wallyworld> axw_: sorry, was fixing that test
<wallyworld> should be ok now. i'm about to re-run all the unit tests to be sure
<wallyworld> the changes were a test fix
<wallyworld> axw_: awesome, thanks for review. good pickup on the json tag
<axw_> wallyworld: np
<babbageclunk> wallyworld: hey, are you watching that stream on youtube?
#juju-dev 2017-08-13
<wallyworld> thumper: moaning... you would just love to start monday with a review, i just know it https://github.com/juju/juju/pull/7728
<thumper> morning
 * thumper looks
#juju-dev 2018-08-06
<seyeongkim> Hello, This issue (https://bugs.launchpad.net/maas/+bug/1668650) is fixed on 2.4.1? I'm deploying trusty on that but failed with "trusty has no kernels available which meet min_hwe_kernel(hwe-16.04-edge)" msgs
