#juju-dev 2013-07-22
<wallyworld> davecheney: looks like there was no release last week?
<davecheney> wallyworld: working on it now
<davecheney> it's an uphill battle
<wallyworld> ok, good luck
<davecheney> axw: BONJOUR!
<axw> davecheney: hey :)
<davecheney> isn't it a bit early in the year to see you in this channel ?
<axw> nope, I start today
<axw> just getting my SSO setup now
<bigjools> davecheney: I'm not sure there's any consensus yet on the tls thing, did you have a preference?
<bigjools> will prob have to wait for EU to start to get more attention
<thumper> bigjools: how horrible is it for us to carry this patch against the golang package?
<thumper> bigjools: instead of forking individual packages?
<bigjools> thumper: won't work on other platforms
<thumper> what won't?
<davecheney> bigjools: if you've been able to fork those packages
<davecheney> then go with that
<thumper> oh... like windows :)
<bigjools> we have no control over golang on non-Ubuntu
<davecheney> rog and I thought it would be more work
<davecheney> good to hear it was only 3 packages
<bigjools> well, I am a little concerned about forking net/http
<wallyworld> davecheney: so you saying we fork the golang source?
<thumper> axw: hello there, are you the new perth bod?
<davecheney> wallyworld: make private copies of a few packages
<thumper> bigjools: ack
<axw> thumper: Hi. I surely am
<thumper> axw: welcome
<wallyworld> davecheney: won't that be a maintenance nightmare?
<bigjools> since net/http is likely to get a lot of changes in core
<davecheney> wallyworld: rock, meet hard place
<davecheney> please see the discussion on the mailing list
<bigjools> giving us a higher maintenance burden
<thumper> wallyworld: hopefully not too hard
<thumper> but yes
<bigjools> g'day axw, welcome to the fray
<axw> thumper: thanks! Looking forward to working with you guys
<axw> bigjools: thanks and hello
<wallyworld> davecheney: i read it, and still think it sucks to fork go
 * thumper is currently writing a depressing email
<thumper> wallyworld: we aren't forking all of go
<thumper> wallyworld: just three packages
<wallyworld> axw: g'day from another aussie
<wallyworld> thumper: still, 3 is 3 too many
<davecheney> wallyworld: please read the mailing list before commenting further
<thumper> wallyworld: agreed, but in the perfect world, it would be fixed in core
<davecheney> i don't think anyone is happy about this situation
<wallyworld> davecheney: i have
<axw> wallyworld: Hi :)
<thumper> wallyworld: we all agree it is sub-optimal
<bigjools> do we know why there's resistance to accepting in the core Go?
<thumper> but the best option out of a collection of shitty options
<wallyworld> axw: you follow cricket? if so, ignore bigjools. he is an aussie now but still yearns for the shitty place he left behind for a new life
<davecheney> bigjools: no, please read my response on the mailing list
<bigjools> wallyworld: I was about to ask if you'd enjoyed that yesterday
<davecheney> don't read too much into what agl wrote
<axw> wallyworld: heh :)  sorry, can't say that I do
<davecheney> it was a throw away line
<wallyworld> bigjools: FO
<bigjools> :)
<bigjools> davecheney: so you think there's a chance?
<davecheney> bigjools: yes
<thumper> bigjools: even if so, won't come fast enough
<davecheney> but it won't be available til dec at the soonest
<bigjools> well if we know it's going to be in the next version, that's fine if it makes the next LTS, because the maintenance burden is not too bad
<wallyworld> thumper: you available for a chat even though you're out and about?
<davecheney> https://t.co/ycH9kpo7ZH
<thumper> wallyworld: hangout wouldn't really work in this café
<thumper> irc is fine
<thumper> will be home in an hour (ish)
<wallyworld> thumper: so, i've been reading martin's branches. looks like there's agreement they can be landed. what do you need from an lxc perspective?
<thumper> wallyworld: let me send this email, then we can talk more
<wallyworld> looks like they handle more the non container side of things
<wallyworld> ok
 * wallyworld has some food
<thumper> email sent
<thumper> wallyworld: I've not looked at mgz's latest, assuming he put some up on Friday
<thumper> I should do soon
<thumper> from what we talked about, the planned changes should be sufficient for the local provider
<wallyworld> ok. there's 2 branches there
<wallyworld> and some empty methods on ec2 etc that need implementing
<bigjools> davecheney: after 1.2 is released, how long will 1.1 be supported?
 * thumper clicks
<thumper> wallyworld: I was hoping someone would say to me about my tools branch: we should have tools.Tools not agent.Tools
<thumper> but no...
<thumper> perhaps I'll talk to fwreade about this again
<thumper> ugh...
<wallyworld> tools.Tools sounds better for sure
<thumper> this table is mildly sticky
<thumper> wallyworld: also tools package can be very targeted
<thumper> in what it does
<wallyworld> yes
<davecheney> bigjools: yes
<thumper> maybe, agent/tools ?
<thumper> not sure
<davecheney> 1.1 will not be supported
<bigjools> davecheney: so there's no crossover period at all?
<davecheney> well, not supported any more than 1.0 is supported
<davecheney> bigjools: there is
<davecheney> but the answer will always be, upgrade
<bigjools> that's really hostile to distros
<thumper> I'm fairly sure that importing agent/tools in no way implies bringing in agent
<thumper> davecheney: right?
<davecheney> thumper: right what ?
<thumper> the line above
<wallyworld> thumper: is the tools sub package only relevant to agent?
<thumper> wallyworld: it is only used by agents, and generated for the use of agents
<wallyworld> so agent/tools perhaps
<thumper> that is what I was thinking
<davecheney> thumper: that is correct, importing a/b/c in no way implies you are importing a/b
<thumper> wallyworld: then we could keep the agent.Conf in agent
<thumper> davecheney: ta
<davecheney> bigjools: i agree, i don't think it is something I am in a position to change
<thumper> wallyworld: as agent/config was an attempt to show how horrible it is :)
<wallyworld> yes
<thumper> wallyworld: I don't really want to land that branch
<davecheney> I have suggested maintaining a stable branch
<davecheney> but the proposal was not accepted
<bigjools> davecheney: wow :(
<thumper> davecheney: for golang?
<davecheney> thumper: yes
<thumper> double wow
<bigjools> this chimes perfectly with the lack of versioning in dependencies - just use trunk folks!
<thumper> \o/ WINNING!
 * bigjools wonders how Google handles this
<thumper> where did I put that tiger blood
<thumper> bigjools: I bet they have their own tool chain around it
<davecheney> bigjools: they have all their source in one big perforce repo
<wallyworld> davecheney: do those golang folks know anything about collaborative software development across multiple teams etc? seems not :-(
<davecheney> it's called /google
<thumper> a wild arse guess is that they maintain their own go versions too
<bigjools> I used to use Perforce years ago
<davecheney> thumper: the development is on golang.org
<davecheney> i think they may merge back to their /google mega repo
<davecheney> but that is just to fit their process
<davecheney> to the best of my knowledge there is no 'private' version of Go
<davecheney> can I build a lp recipe from a tag ?
<bigjools> good question
<bigjools> probably
<davecheney> bigjools: should I branch from the tag? it looks like lp recipes are happiest with branches
<davecheney> ok, i'll do that then
<davecheney> oh ffs
<davecheney> that'll mean pushing to the bot
<davecheney> that means I have to ask mgz to do it for me
<davecheney> because, reasons
<bigjools> the revision spec is defined in the recipe
<bigjools> so you can set what you like
<bigjools> why would you want a recipe for a fixed revision though?
<thumper> davecheney: I *think* you can specify a revision spec in a recipe
<thumper> davecheney: and if so, yes you can use a tag
<davecheney> https://help.launchpad.net/Packaging/SourceBuilds/Recipes#Specifying_the_branches
<thumper> davecheney: see the specifying revisions
<thumper> merge packaging lp:~bzr/bzr/packaging revno:2355
<thumper> lp:bzr tag:2.0 etc
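Putting those revision specs together, a bzr-builder recipe pinning both the main branch and a nested dependency might look like this; the tag name, revno, and nest target here are illustrative only, not the actual juju-core recipe:

```
# bzr-builder format 0.3 deb-version {debupstream}+{revno}
lp:juju-core tag:juju-1.11.3
nest goose lp:goose src/launchpad.net/goose revno:99
```

Pinning the nests as well as the top branch avoids the "otherwise it pulls tip" problem bigjools raises below.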
<davecheney> thumper: fuckin' hot sauce
<bigjools> davecheney: why would you want a recipe for a fixed revision though?
<davecheney> bigjools: because I'm still doing the releases without my big boy pants on
 * bigjools tries to unsee and fails
<davecheney> bigjools: right now the recipe builds the debs which I use to make the tools we push into s3
<davecheney> we're also putting a tarball from the same tag onto the project page
<bigjools> it sounds like you don't need a recipe, just a package
<davecheney> which jamespage uses to produce the debs that go into P, Q, R and S
<davecheney> bigjools: yes, we do need that
<bigjools> davecheney: we use recipes to produce repeatable builds when versions get bumped.  If you're fixing the version with a tag, it doesn't make sense to me.
<bigjools> unless you're going to move the tag around ... :)
<davecheney> bigjools: no, not planning on moving the tag
<bigjools> I'd just use bzr builddeb or similar then
<davecheney> shit, https://launchpadlibrarian.net/145546623/buildlog.txt.gz
<bigjools> davecheney: it's not going to find the juju tag on your packaging branch is it
<davecheney> oh fuk, right, i understand
<bigjools> also
<bigjools> shouldn't all those nests contain versions?
<bigjools> otherwise it pulls tip
<davecheney> bigjools: yes they should
<davecheney> you are correct
<davecheney> bigjools: that is why I want to move away from lp build recipes
<bigjools> like I said, recipes are only really useful for repeated builds
<bigjools> but if you don't use a recipe, you need to separately package all those nested dependencies.
<bigjools> choices choices :)
<davecheney> (and wallyworld wonders why this takes ages ... :)
<wallyworld> automation!
<davecheney> https://code.launchpad.net/~dave-cheney/juju-core/137-revert-temporary-removal-of-azure-provider/+merge/175994
<davecheney> ping
 * thumper has one more branch that stabs state.Tools through the heart
<davecheney> le sigh
<davecheney> the state is back to retrying > 1 time / second
<davecheney> i'm tired of trying and failing to fix this bug
<wallyworld> thumper: if you are looking for a shit way to finish your day, i have 3 mp's that need reviewing. or i can beg someone else in EU timezone
<thumper> wallyworld: sure
<wallyworld> thanks. i'll still need to beg for the 2nd +1 i suppose :-)
<wallyworld> the simplestreams validation one is especially important
<thumper> 66 files changed, 488 insertions(+), 417 deletions(-)
<thumper> https://codereview.appspot.com/11561044/
<davecheney> us-west-1 is slow, doo daa, doo daa
 * wallyworld looks at thumper's branch before heading off for school pickup
<thumper> wallyworld: it is pretty boring
<wallyworld> hopefully easy to review then
<davecheney> warning: do not read merge proposal while driving, may cause drowsiness
<thumper> I'm especially happy with the testing function to check juju-core dependencies
<thumper> wallyworld: the message is still valid
<wallyworld> ok
<thumper> wallyworld: it used to say that the interface was there to avoid an import loop
<thumper> now it is to not bring in the dependency
<wallyworld> fair enough
<thumper> axw: how goes the new starter tasks?
<axw> thumper: pretty much all done
<axw> just waiting on hearing back from IS on getting my SSH key added
<davecheney> anyone know anything about a sprint in August ?
<bigjools> apparently there is one here
<davecheney> apparently
<bigjools> and you all get to learn about the lovely azure
 * davecheney already has his account setup
 * thumper goes to make dinner
<davecheney> 1.11.3 is out!
<bigjools> when the core asks a provider to open ports, does it differentiate between udp and tcp or does it expect both types to be opened?
<davecheney> bigjools: the latter
<bigjools> ta
<davecheney> the open-port command does not permit you to specify the flavor
<bigjools> it's required to specify on azure, so ... two calls needed.  yay.
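A sketch of what that means for a provider implementation: the client type and method names below are hypothetical (neither Azure's nor juju's actual API), but they show the one-call-per-protocol shape bigjools describes when the core doesn't distinguish tcp from udp.

```go
package main

import "fmt"

// fakeClient stands in for a cloud API that, like Azure, requires
// the protocol on every firewall call.
type fakeClient struct{ calls []string }

func (c *fakeClient) OpenPort(proto string, port int) {
	c.calls = append(c.calls, fmt.Sprintf("%s:%d", proto, port))
}

// openPort opens the port for both protocols, since the core's
// open-port request doesn't say which one it wants.
func openPort(c *fakeClient, port int) {
	for _, proto := range []string{"tcp", "udp"} {
		c.OpenPort(proto, port)
	}
}

func main() {
	c := &fakeClient{}
	openPort(c, 80)
	fmt.Println(c.calls) // [tcp:80 udp:80]
}
```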
 * axw wonders how long before his brain stops automatically doing things the IBM way
<axw> am I better off using the openstack or local provider for familiarising with juju code? local is undergoing quite a bit of change right now?
<TheMue> axw: yes, local is in a too early stage
<TheMue> axw: using ec2 is a robust and convenient way to get into it
<TheMue> axw: good morning btw ;)
<axw> TheMue: thanks. Good morning :)
<axw> first day at Canonical - I'd like to accomplish something more than getting my accounts set up ;)
<TheMue> axw: and hey, I've still got the IBM way of thinking in my brain, even after now more than one and a half years
<axw> oh, you worked at IBM?
<TheMue> axw: oh, no, but a long time with ibm technologies. first mainframe, later websphere stuff
<axw> ah right :)
<TheMue> axw: and in very conservative companies with large and inflexible IT
<axw> ;)
<rogpeppe1> mornin' all
<axw> rogpeppe1: good morning
<rogpeppe1> axw: yo! welcome!
<axw> thanks :)
<rogpeppe1> axw: is this your first day?
<axw> rogpeppe1: yes it is
<TheMue> axw: where do you come from?
<rogpeppe1> axw: feel free to ask about anything. i imagine you'll probably be busy doing all those initial things, but if you wanna take a look through the code, we're all online
<axw> TheMue: I'm in Perth, Western Australia
<rogpeppe1> axw: i guess your day is just about ended now, right?
<axw> rogpeppe1: Thanks. Still a few hours left. I'm just getting set up with juju/openstack now
<axw> done all the new starter things
<rogpeppe1> axw: i've not done the openstack set up yet :-)
<TheMue> axw: oh, the australian/new zealand group slowly outnumbers the european one
<axw> rogpeppe1: oh, no, I'm just using canonistack
<TheMue> axw: good reason to have the next meeting in australia :)
<axw> TheMue: :)  there do seem to be quite a few people in A/NZ
<rogpeppe1> axw: yeah, i haven't tried that yet. i probably should...
<axw> rogpeppe1: what do you develop against then? EC2?
<rogpeppe1> axw: yeah
<axw> TheMue: just not Perth, it's dead boring
<TheMue> axw: so we'll select Sydney, visiting Dave
<axw> TheMue: Sounds good. I don't mind Sydney.
<TheMue> axw: never been there, so far only europe, us and india. so it would be a new experience
<axw> who knows about the openstack provider? seems there's a bug where it's expecting authentication to have occurred at bootstrap time
<dimitern> guys, I need help landing this branch lp:~dimitern/juju-core/073-apiserver-pinger-on-ma-connection - i run the tests like 30 times since friday and it never fails on my machine, but it always fails on the bot - can someone please pull it and run all the tests to see if it can be reproduced?
<dimitern> axw: can you give more details?
<axw> dimitern: 1. I pulled tip of juju-core; 2. I init'd and configured environments.yaml to target canonistack; 3. juju bootstrap -v
<axw> 2013-07-22 07:58:23 ERROR juju supercommand.go:235 command failed: cannot create bootstrap state file: cannot get endpoint URL without being authenticated
<dimitern> axw: do you have your canonistack creds in the env?
<axw> dimitern: yes
<axw> dimitern: I can get around this by forcing authentication in goose/client code
<dimitern> axw: set | grep OS_
<dimitern> axw: you should see 5 lines
<axw> dimitern: yes, they're all there
<dimitern> axw: have you done "go install ." in cmd/juju ?
<dimitern> after pulling
<axw> dimitern: go get ./...
<axw> dimitern: I don't have another juju installed
<dimitern> axw: but do you have an existing working dir from before?
<axw> ~/.juju ?
<dimitern> axw: do go install there, just to be sure it runs the correct binary
<dimitern> axw: no, the source checkout
<axw> dimitern: I guarantee you it is :)  I modified the code to fix the problem
<dimitern> axw: what was needed?
<axw> goose/client.go: authenticatingClient.MakeServiceURL checks IsAuthenticated
<axw> I changed it to call Authenticate if it's not
<dimitern> axw: gmm let me see
<axw> I'm just not sure if this is the appropriate place to do it. It's getting called during bootstrap
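A hedged sketch of the fix axw describes: the type, field, and endpoint below are illustrative stand-ins for goose's authenticatingClient, not its real implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// client models an OpenStack API client that must authenticate
// (e.g. against keystone) before it knows any service endpoints.
type client struct {
	authenticated bool
	endpoint      string
}

func (c *client) Authenticate() error {
	c.authenticated = true
	c.endpoint = "https://keystone.example.com/v2.0" // invented endpoint
	return nil
}

// MakeServiceURL authenticates lazily instead of failing with
// "cannot get endpoint URL without being authenticated".
func (c *client) MakeServiceURL(path string) (string, error) {
	if !c.authenticated {
		if err := c.Authenticate(); err != nil {
			return "", errors.New("cannot get endpoint URL: " + err.Error())
		}
	}
	return c.endpoint + path, nil
}

func main() {
	c := &client{}
	url, _ := c.MakeServiceURL("/juju-dist")
	fmt.Println(url)
}
```

The open question in the chat is whether this lazy authentication belongs in MakeServiceURL at all, or whether bootstrap should authenticate explicitly first.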
<dimitern> axw: can you paste the exact log please?
<dimitern> mgz: hey, can you help me with a branch i'm having trouble landing
<axw> 2013-07-22 07:58:09 INFO juju provider.go:115 environs/openstack: opening environment "openstack"
<axw> 2013-07-22 07:58:09 INFO juju provider.go:417 environs/openstack: bootstrapping environment "openstack"
<axw> 2013-07-22 07:58:21 INFO juju tools.go:26 environs: reading tools with major version 1
<axw> 2013-07-22 07:58:23 INFO juju tools.go:30 environs: falling back to public bucket
<axw> 2013-07-22 07:58:23 INFO juju tools.go:53 environs: filtering tools by series: precise
<axw> 2013-07-22 07:58:23 INFO juju tools.go:76 environs: picked newest version: 1.11.0
<axw> 2013-07-22 07:58:23 ERROR juju supercommand.go:235 command failed: cannot create bootstrap state file: cannot get endpoint URL without being authenticated
<dimitern> mgz: pull this and run all tests to see if they pass please: lp:~dimitern/juju-core/073-apiserver-pinger-on-ma-connection
<dimitern> axw: please use paste.ubuntu.com next time :)
<axw> sorry!
<mgz> dimitern: SURE
<dimitern> axw: hmm interesting
<dimitern> mgz: cheers
<mgz> er, caps
<dimitern> :)
<dimitern> axw: i'll look into it, might be indeed a bug
<axw> okey dokey, thanks. I'll stick with my patch while I play.
<dimitern> axw: which revision of goose you're on?
<axw> dimitern: 99
<mgz> dimitern: running tests now
<axw> dimitern: entirely possible I'm out of date...
<axw> hmm nope, seems to be the latest
<dimitern> axw: it is the latest
<axw> dimitern: would you like me to log a bug? or is this stuff too much in flux for that?
<dimitern> axw: yes, please do this, so we can triage it and see whether it's a config issue or indeed a goose bug
<dimitern> axw: against goose i think, not juju-core, as it seems
<axw> ok
<mgz> dimitern: either the test run hung after the charmload tests, or something else is very wrong...
<mgz> how do I run with -gocheck.v and do all the tests again? >_<
<dimitern> mgz: you cannot
<dimitern> mgz: but, if you just go into .. wait a sec
<dimitern> mgz: state/apiserver and run go test -gocheck.v
<dimitern> mgz: alas, gocheck doesn't support running tests recursively
<mgz> I'm failing to get cmd/juju tests to complete at all...
<mgz> with trunk
<mgz> I'll give up and just test the ones you care about in your branch
<dimitern> mgz: ok
<mgz> one failure
<mgz> http://paste.ubuntu.com/5899884
<dimitern> mgz: strange.. go version?
<mgz> 1.0.2 from distro
<dimitern> hmm.. that's probably what the bot is running, but this is wrong, should be 1.0.3 or even 1.1.1
<mgz> 1.0.2 on the distro has the borked fixes from 1.0.3 so should be much the same
<mgz> we did say we were going to switch wholesale to 1.1
<dimitern> well, i can't make it break here
<mgz> but haven't really pulled the trigger on that yet
<dimitern> really frustrating..
<dimitern> and the logic is sane, should pass
<mgz> the bot has 1.0.3
<mgz> at least that's what I get when I ssh in and run go version
<mgz> if it helps, you can run that test yourself on my setup
<dimitern> ideas how to debug this on the bot without stopping it?
<mgz> if the bot failing on the same test as me?
<dimitern> it is
<dimitern> hmm.. i'll try to find another way to test the same thing
<mgz> if so, ssh 10.55.60.158 and play around
<dimitern> ok
<dimitern> thanks btw
<dimitern> mgz: if I can bug you a bit more :)
<mgz> sure :)
<dimitern> mgz: added some logging to the branch, so if you pull it now
<dimitern> mgz: and then run go test -gocheck.vv -gocheck.f TestMachineLoginStartsPinger
<dimitern> mgz: pasting the output, I should get better idea
<mgz> ssh in and see yourself :)
<dimitern> mgz: I wouldn't want to screw up the bot
<mgz> it's not the bot
<dimitern> mgz: aah, ok
<mgz> it's my test server
<dimitern> thanks
<mgz> cd go/src/launchpad.net/juju-core
<mgz> then run the last command
<dimitern> mgz: when I do bzr pull there it says perm denied
<mgz> I already pulled
<dimitern> mgz: ah, ok
<mgz> (you need forwarding for that to work)
<dimitern> mgz: ha! it passes now
<mgz> hm, failed once for me
<mgz> run it a few more times
<dimitern> mgz: weird.. so it's a timing issue then - adding two log statements seems to have fixed it
<mgz> http://paste.ubuntu.com/5899917
<dimitern> mgz: I can't see this - I run it like 5 times, all pass
<mgz> logging probably made the race harder to hit
<dimitern> http://paste.ubuntu.com/5899920/
<mgz> it passed the last few times for me.... just failed again
<dimitern> mgz: that's indeed disturbing
<dimitern> but what i can say is that the pinger gets killed in both cases
<dimitern> just some timeout is needed perhaps after that
<dimitern> mgz: sorry, can you pull again please?
<dimitern> i'm approving it with the last fix, fingers crossed it might pass
<axw> is there a primer anywhere on the procedure for changesets? how to get it reviewed, etc.
<dimitern> axw: basically, you create a merge proposal with lbox propose and it gets reviewed on rietveld, you need 2 LGTMs, then you should set the commit message on LP and mark it as approved, the bot will land it
<axw> dimitern: thanks.
<dimitern> axw: note: you'll need go 1.1.1 to compile launchpad.net/lbox/...
<dimitern> ymmv
<mgz> dimitern: done, sorry, was off in the clouds
<dimitern> mgz: thanks, run it 10 times, passes, finally!
<dimitern> tyvm
<dimitern> rogpeppe1: ping
<rogpeppe1> dimitern: poing
<dimitern> rogpeppe1: it seems there's a problem with provisioner tests after the short/longwait stuff has landed
<rogpeppe1> dimitern: ah
<rogpeppe1> dimitern: is that why the bot is failing?
<dimitern> rogpeppe1: http://paste.ubuntu.com/5900029/
<dimitern> rogpeppe1: yeah, I've seen this on several other branches trying to land
<dimitern> rogpeppe1: I might be wrong, but it seems like the most probable cause, can you look into it please?
<rogpeppe1> dimitern: ah, i think i see where the problem might lie
<dimitern> rogpeppe1: it seems in one place it was 200ms and was changed to shortwait
<rogpeppe1> dimitern: http://paste.ubuntu.com/5900040/
<rogpeppe1> dimitern: it assumes that a short wait is enough to check for no operations *and* to wait for the operation to be performed
<dimitern> rogpeppe1: yeah
<rogpeppe1> dimitern: it should probably loop polling Status
<dimitern> rogpeppe1: it wasn't shortwait before
<dimitern> rogpeppe1: lxc-broker_test.go in ensureNoChanges
<dimitern> rogpeppe1: it was 200ms
<rogpeppe1> dimitern: agreed. but ensureNoChanges should not also be ensureEnoughTimeForSomethingToHappen
<axw> good night all
<rogpeppe1> axw: g'night!
<rogpeppe1> dimitern: it was only lucky that we weren't using a shorter timeout for ensureNoChanges
<dimitern> axw: 'night
<dimitern> rogpeppe1: seems likely yes
<rogpeppe1> dimitern: i think this is useful stuff to find out - it will make our tests less flaky in the end
<rogpeppe1> dimitern: i'd like to try running the tests with ShortWait=1 * time.Microsecond
<rogpeppe1> dimitern: and make sure nothing breaks with that
<dimitern> rogpeppe1: why? that seems extreme
<rogpeppe1> dimitern: it's important that we don't *rely* on anything happening within the ShortWait time scale
<rogpeppe1> dimitern: otherwise our tests will be flaky
<rogpeppe1> dimitern: setting it to a very short timeout is one way of checking that
<rogpeppe1> dimitern: obviously i wouldn't commit that change
<dimitern> rogpeppe1: yeah, it seems likely a lot of tests will fail with 5 orders of magnitude shorter timeout
<rogpeppe1> dimitern: who knows?
<dimitern> rogpeppe1: trunk tests do not fail on my machine though.. but I already had an issue like that with my branch - the bot is slower
<rogpeppe1> dimitern: it's a related kind of test to changing LongWait to 1 minute, which found a few issues
<rogpeppe1> dimitern: exactly
<rogpeppe1> dimitern: i bet that test would fail on your machine with a smaller ShortWait
<rogpeppe1> dimitern: i just tried it and get exactly that error
<rogpeppe1> dimitern: (only that one test fails in worker/provisioner, which is hopeful)
<dimitern> rogpeppe1: really? with 1us
<rogpeppe1> dimitern: yeah
<dimitern> rogpeppe1: awesome
<rogpeppe1> dimitern: i haven't tried any other packages yet tho
<dimitern> rogpeppe1: ah..
<rogpeppe1> dimitern: one test fails in state (but only one - i expected more)
<dimitern> rogpeppe1: so 2 so far? did you run all of them?
<rogpeppe1> dimitern: no, i thought i'd start with the likely candidates :-)
<dimitern> rogpeppe1: ok
<rogpeppe1> dimitern: i fixed the state test; other than that there are 4 tests in the whole suite that fail: http://paste.ubuntu.com/5900118/
<rogpeppe1> dimitern: that's not too bad, i think
<dimitern> rogpeppe1: very good
<dimitern> rogpeppe1: and how about the provisioner?
<rogpeppe1> dimitern: i fixed that one too
<rogpeppe1> dimitern: the fslock one was just a ShortWait which should have been a LongWait
<dimitern> rogpeppe1: awesome
<dimitern> rogpeppe1: please propose this so things that are waiting can start landing
<rogpeppe1> dimitern: i'm just fixing the remaining tests
<rogpeppe1> dimitern: or perhaps i should propose that fix now...
<rogpeppe1> dimitern: ok, will do
<dimitern> rogpeppe1: tyvm
<rogpeppe1> dimitern: https://codereview.appspot.com/11525045/
<dimitern> rogpeppe1: reviewed
<dimitern> mgz: can you take a look as well? ^^
<mgz> sure
<mgz> reviewed.
<dimitern> rogpeppe1: good to go then
<rogpeppe1> dimitern, mgz: thanks
<rogpeppe1> it's marked as approved
<dimitern> nice!
<mgz> I'm a little worried the bot is unhappy right now...
<dimitern> how so?
<mgz> dave got a version bump reject twice it seems
<dimitern> it works sometimes it seems
<mgz> yeah, randomly failing tests are not fun
<dimitern> if the stars align right, the timeouts/races do not cause trouble :)
<allenap> rogpeppe1: Do you have time to talk to rvba and myself?
<rogpeppe1> allenap: certainly
<rogpeppe1> allenap: anytime
<allenap> rogpeppe1: We'll invite you to a hangout. Should be quick.
<rogpeppe1> allenap: cool. we've got a meeting at 12.30 BTW
<allenap> rogpeppe1: We'll be done by then.
<rvba> rogpeppe1: I've just invited you to our hangout.
<rogpeppe1> rvba: could you paste a link? i haven't seen the invite yet.
<rvba> rogpeppe1: https://plus.google.com/hangouts/_/cc7900440cd9b8960e9a2ded920992997465edd2?hl=en
<dimitern> rogpeppe1: standup?
<rogpeppe1> dimitern: ah yes, one mo
<dimitern> wallyworld: ? ^^
<wallyworld> i'm already there
<wallyworld> waiting
<mgz> you're a different there to us in that case
<wallyworld> i used the one in the calendar
<rogpeppe1> wallyworld: https://plus.google.com/hangouts/_/f497381ca4d154890227b3b35a85a985b894b471?authuser=1
<mgz> so did I..
<rogpeppe1> wallyworld: me too
<wallyworld> https://codereview.appspot.com/11659043/
<wallyworld> https://codereview.appspot.com/11655043/
 * dimitern thinks codereview.appspot.com went to extreme lengths to make most of the generated numbers look very close at the end :)
<dimitern> wallyworld: looking
<wallyworld> dimitern: yeah. btw, those were for rodger and martin
<mgz> he can always review as well! :)
<dimitern> I have one of mine https://codereview.appspot.com/11463044
<dimitern> wallyworld: ah, ok, won't touch them then :)
<wallyworld> dimitern: thanks for offering!
<wallyworld> dimitern: we were in a hangout so i forgot to prefix the pastes with some explanatory text
<dimitern> wallyworld: np
<dimitern> rogpeppe1, mgz: when done take a look ^^ please
<rogpeppe1> dimitern: reviewed
<dimitern> rogpeppe1: thanks
<rvba> rogpeppe1: I see Dave Cheney has disabled the Azure provider to release a clean version of Juju… is it ok if we re-enable it now?
<rogpeppe1> rvba: if you've changed things to use the new tls fork, i don't see why not
<rvba> rogpeppe1: yep, that's currently being reviewed.
<rvba> rogpeppe1: hang on, I updated the code and the Azure provider is now already re-enabled!
<rogpeppe1> rvba: cool
<dimitern> mgz: https://codereview.appspot.com/11463044 ?
<rogpeppe1> wallyworld: posted a review
 * rogpeppe1 goes for lunch
<wallyworld> rogpeppe1: thanks, will look tomorrow
<mgz> dimitern: lgtmed
<dimitern> mgz: cheers
<dimitern> rogpeppe1: ping
<dimitern> rogpeppe1: i thought you fixed the log pollution from the api?
<dimitern> rogpeppe1: http://paste.ubuntu.com/5900721/ i'm getting this all the time
<rogpeppe1> dimitern: those messages are printed only once, at init time
<rogpeppe1> dimitern: at least... hmm, they should be
<rogpeppe1> dimitern: perhaps they're being printed once per connection - i'll just check
<dimitern> rogpeppe1: well I'm restarting all the time, so they might be printed once
<dimitern> rogpeppe1: but should they?
<rogpeppe1> dimitern: they're useful messages, yes
<rogpeppe1> dimitern: i have an old branch somewhere that makes them less necessary
<rogpeppe1> dimitern: but given that they only appear once, i don't think they're a great burden, and i do think they provide useful information ("why isn't my API call being made?" "ah, it's because it's being discarded because it has the wrong type signature")
<dimitern> rogpeppe1: can we at least make them appear only at TRACE log level?
<rogpeppe1> dimitern: Debug is probably more appropriate
<rogpeppe1> dimitern: and i think that's enabled at test time, whereas i'm not sure trace is
<dimitern> rogpeppe1: we should really have an easy way to set log level at run time for tests and production separately
<rogpeppe1> dimitern: agreed
<rogpeppe1> dimitern: BTW with my root-for-entity changes, i think we should be able to avoid the need for any of those messages to be printed.
<rogpeppe1> dimitern: which might argue for them being left at the same log level, or even higher
<dimitern> rogpeppe1: interesting idea
<rvba> rogpeppe1: care to review that branch (it's really tiny)? https://codereview.appspot.com/11492044/
<rogpeppe1> rvba: looking
<rvba> Thanks.
<rogpeppe1> rvba: i get a chunk mismatch error (occasional unavoidable rietveld glitch) - could you run lbox propose again please?
<rvba> rogpeppe1: done.
<rogpeppe1> rvba: thanks
<hazmat> is fwereade on vacation ?
<mgz> hazmat: yes, till thursday
<ahasenack> does mongodb from precise work with juju-core, for the local provider story?
<ahasenack> it looks like it doesn't support ssl
<ahasenack> it's 1:2.0.4-1ubuntu2.1
<dimitern> rogpeppe1, mgz: https://codereview.appspot.com/11666047/ - machiner uses api now
<mgz> dimitern: looking
<rogpeppe1> dimitern: reviewed
<dimitern> rogpeppe1: thanks
<rvba> rogpeppe1: btw, you'll note that my latest branch (to cope with gwacl's most recent changes) only touches test files.
<rogpeppe1> rvba: cool. LGTM
<rvba> rogpeppe1: ta
<mgz> everyone hold off marking any mps as approved for a while
<mgz> I'll give the okay when y'all can start landing again
<dimitern> mgz: ok; thanks for the review btw
<rogpeppe1> thumper: yo!
<rogpeppe1> dimitern: fairly trivial CL: https://codereview.appspot.com/11560044
<TheMue> rogpeppe1: lgtm
<rogpeppe1> TheMue: ta
<dimitern> rogpeppe1: will look a bit later, have to go out quickly
 * dimitern bbiab
<rogpeppe1> dimitern: np
<mgz> okay, can mark mps for landing again now
<mgz> everyone will want to update gwacl
<rogpeppe1> any other takers for a small review (more Go- than juju-centric)? https://codereview.appspot.com/11560044
<marcoceppi> Just wanted to drop by and let you guys know about this: https://bugs.launchpad.net/juju-core/+bug/1203795 It's blocking local provider on LTS
<_mup_> Bug #1203795: mongodb with --ssl not available in precise <juju-core:Confirmed> <https://launchpad.net/bugs/1203795>
<mgz> talk to jamespage :)
<marcoceppi> mgz: well my first thought is to just upload the raring version of mongodb to ppa:juju/devel and call it a day
<rogpeppe1> niemeyer: trivial change to mgo: https://codereview.appspot.com/11419045
<rogpeppe1> another small CL (just copying some code from goamz) if anyone fancies a review: https://codereview.appspot.com/11680043
<rogpeppe1> mgz: this is what i was referring to in a reply to you earlier
<rogpeppe1> mgz: assuming this goes in, i'll go and refactor the attempt loop code to use AttemptStrategy now.
<mgz> rogpeppe1: ta
<niemeyer> rogpeppe1: What is the implements command?
<rogpeppe1> niemeyer: it tells you which interfaces are implemented by which types.
<niemeyer> rogpeppe1: Should it be fixed instead to behave as the compiler does?
<rogpeppe1> niemeyer: that should probably be fixed indeed
<rogpeppe1> niemeyer: but changing to use *file seems reasonable - i certainly found it quite surprising when i saw that that actually worked
<niemeyer> rogpeppe1: It's just style, but sure
<rogpeppe1> niemeyer: i think it might be the go/types package that's broken here. not entirely sure.
<rogpeppe1> niemeyer: yeah, looks like it.
<niemeyer> rogpeppe1: It's changed
<rogpeppe1> niemeyer: ah, i should try go getting it again
<rogpeppe1> niemeyer: hmm, yes, the implements command is now broken
<rogpeppe1> niemeyer: against code.google.com/p/go.tools/go/types anyway, which i think is the correct path
<rogpeppe1> niemeyer: thanks
<niemeyer> rogpeppe1: Cool, glad the change makes it happier
<rogpeppe1> niemeyer: yeah, go/types still isn't fixed in that respect.
<rogpeppe1> niemeyer: BTW: https://github.com/dominikh/implements
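The `file` vs `*file` question above comes down to Go's method-set rules, which is what the compiler (and a correct `implements` tool) applies: a method with a pointer receiver is in the method set of `*T` but not of `T`. A small illustrative sketch (the `Speaker`/`file` names here are made up, not from juju-core):

```go
package main

import "fmt"

// Speaker is an illustrative interface.
type Speaker interface {
	Speak() string
}

type file struct{ name string }

// Speak has a pointer receiver, so it is in the method set of *file
// but NOT of file.
func (f *file) Speak() string { return f.name }

func main() {
	// Compile-time check, the same rule the compiler enforces:
	var _ Speaker = &file{} // ok: *file implements Speaker
	// var _ Speaker = file{} // would not compile: file lacks Speak in its method set

	var s Speaker = &file{name: "config.yaml"}
	fmt.Println(s.Speak()) // prints: config.yaml
}
```

The `var _ Speaker = (*file)(nil)` idiom is a cheap way to pin this down at compile time rather than relying on an external tool.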
<dimitern> rogpeppe1: reviewed
<dimitern> rogpeppe1: https://codereview.appspot.com/11560044/
<rogpeppe1> dimitern: thanks
<rogpeppe1> dimitern: i'd appreciate a review of https://codereview.appspot.com/11680043/ if you've a mo - it's just code moving
<rogpeppe1> right, i need to go
<rogpeppe1> g'night all!
<dimitern> rogpeppe1: g`night; will look
 * thumper just prepaid for an ubuntu edge phone
 * thumper needs another +1 on https://codereview.appspot.com/11561044/
<thumper> mramm: you available?
<mramm> yep
<mramm> need a couple min, just got off a long flight but basically available...
<mramm> also, hotel network here sucks so we will have to see how it all works out.
<mramm> but trying a google hangout now
<mramm> https://plus.google.com/hangouts/_/6d40ef8bc434ba92987e924694fb92cffb9c7e7c?hl=en
<wallyworld> thumper: you still otp?
<arosales> any folks try hp cloud with the recent juju-core release?
<wallyworld> arosales: i haven't. issues? it would be great if we had an account to test with :-)
<arosales> jcastro and m_3 reporting it is not working atm
<arosales> wallyworld, I am verifying
<wallyworld> ok, let me know if i can help
<wallyworld> arosales: btw, i'm about to land a tool to do image metadata validation. hopefully today. will let you and the guys know
<arosales> wallyworld, thanks that would be great
<arosales> wallyworld, do you know if the tools were updated on hpcloud to match 1.11.3? or is 1.11.0 tools sufficient?
<wallyworld> arosales: you will need the 1.11.3 tools. i do not know if they were updated. listing the public bucket contents will show if they were
<arosales> wallyworld, ok I will check.  currently bootstrapping on hp cloud I get:
<arosales> http://pastebin.ubuntu.com/5902079/
<wallyworld> arosales: so the 1.11.3 tools are missing. i'd like to see a --debug bootstrap to get more ideas why it is failing
<wallyworld> you could also try with --upload-tools
<m_3> arosales: might be a bucket issue
<arosales> wallyworld, ya I don't see any 1.11.3 tools on hp cloud
<arosales> m_3, what error were you getting?
<wallyworld> we should add that to the release process
<m_3> 1.11.2 works fine in hp, but 1.11.3 gets some auth grumping
 * m_3 looks
<wallyworld> i can't off hand think of what may have changed
<m_3> dev:~/talks/now $ juju bootstrap -e go-hp
<wallyworld> to cause that
<m_3> error: cannot create bootstrap state file: cannot get endpoint URL without being authenticated
<arosales> wallyworld, http://pastebin.ubuntu.com/5902083/
<wallyworld> ah, i know where the issue might be. can you do a bootstrap --debug ?
<arosales> bootstrap --debug above ^
<arosales> wallyworld, also any reason 1.11.3 tools are missing from hpcloud?
<wallyworld> arosales: they just weren't copied there as part of the release process i'm guessing
<wallyworld> so our release process needs to be updated
<wallyworld> i'll try bootstrapping on canonistack to see if i get the same error
<arosales> wallyworld, if you a pointer to the 1.11.3 tools I can upload to hp
<wallyworld> arosales: http://juju-dist.s3.amazonaws.com/
<wallyworld> or you can use sync-tools
<wallyworld> juju help sync-tools
<arosales> sync-tools upload to an arbitrary bucket?
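A sketch of how sync-tools would be used here. The flags below are assumptions from memory, not verified against this release; `juju help sync-tools` in your installed version is authoritative.

```shell
# Copy released juju tools from the official public bucket into the
# target environment's own bucket, so bootstrap can find them.
# NOTE: flag names are assumptions -- check `juju help sync-tools`.
juju sync-tools -e hp-cloud --dry-run   # preview what would be copied
juju sync-tools -e hp-cloud             # actually copy the tools
```

This is the supported alternative to manually downloading from juju-dist.s3.amazonaws.com and re-uploading by hand.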
<thumper> wallyworld: nope, around now
<thumper> although need to go let the dog in
<wallyworld> thumper: ok, just talking to some folks about hp cloud issues
<wallyworld> with 1.11.3
 * arosales uploading the precise i386/amd64 tools to the hp bucket
<thumper> wallyworld: ok, did you want to chat?
<wallyworld> thumper: yeah, i had a meeting about the addressability issues with rodger and martin and asked martin to send an email which he did. i just wanted to touch base on that
<thumper> ok, I need to read it
<thumper> been looking at the ubuntu edge stuff
<wallyworld> ok. i need to do that too
<wallyworld> $32M is a big ask
<wallyworld> arosales: fwiw, i just bootstrapped in canonistack with current trunk, which should not be too much different to 1.11.3
<wallyworld> so hopefully using the current tools will work
 * arosales retrying hpcloud with updated tools
<arosales> wallyworld, still getting http://pastebin.ubuntu.com/5902120/ with updated precise tools
<wallyworld> arosales: hmmm. i know where in the code it's happening. but without an account to debug with, it will be hard to find out why, since it works on canonistack
<arosales> wallyworld, I can get you a sub on my account for testing
<wallyworld> arosales: that would be f*cking amazing
<arosales> wallyworld, I will pm you the details
<wallyworld> thank you. now we can smoke test etc before release
<wallyworld> thumper: i have an errand to take care of, will ping you in a bit
<thumper> wallyworld: ack, talking with hazmat
<arosales> wallyworld, fyi no saucy amd64 juju-tools on amazon
<arosales> for 1.11.3 that is
<wallyworld> arosales: i'm not sure what the saucy status is. dave cheney would know a bit more
<arosales> wallyworld, you should have access to hp cloud, let me know if you see the auth error
<wallyworld> arosales: thanks. logging in now
<arosales> wallyworld, I uploaded all the tools that amazon had for HP
<wallyworld> arosales: what region are you using?
<arosales> az1
<arosales> wallyworld, ^
<wallyworld> ok, thanks
<wallyworld> arosales: right, i have reproduced the error. now to debug
<arosales> wallyworld, sometimes that is half the battle
<arosales> wallyworld, thanks for taking a look
<wallyworld> indeed. i need to sort out why it didn't show up on canonistack
 * arosales not sure on the delta between canonistack and hp
<thumper> wallyworld: I'm likely to just merge this with your ok :-) https://codereview.appspot.com/11561044/
 * thumper waits a bit longer for another +1
<thumper> would be funny to land with "trivial"
<thumper> :)
<wallyworld> thumper: do it!
<wallyworld> thumper: will talk to you later - i'm sorting out a critical hp cloud issue with the latest release for the guys at OSCON
<thumper> wallyworld: np, I need to go to the gym anyway
<thumper> and that is a *need*
<wallyworld> of course
<thumper> also I have an email to compose re: addressability
<wallyworld> arosales: so something is resetting the openstack client's credentials, just got to find out what and why
<arosales> wallyworld, ok thanks for taking a look
 * arosales is available for testing
<wallyworld> np. will let you know when i find the next bit
<arosales> I may have to step away for dinner with the family but I will log back in
<wallyworld> ok
#juju-dev 2013-07-23
<wallyworld> arosales: found what's happening - it works if you use the --upload-tools option. not using that option causes a code path to be run which resets the current client credentials. this didn't matter previously but does now
<wallyworld> arosales: ping me when you are back online and we can discuss a resolution
<arosales> wallyworld, and --upload-tools requires building from source?
<wallyworld> yes
<arosales> wallyworld, no go for OSCON
<wallyworld> yeah
<wallyworld> i can work on a fix
<wallyworld> but i guess that means cutting a new release
<arosales> wallyworld, we can't have charm school attendees building from source :-/ Kind of takes away from the juju simplicity
<wallyworld> agreed
<arosales> and a bug fix is in the same realm too :-/
 * arosales not sure how difficult it would be to do a 1.11.3.1 release with just your bug fix . .  .
<wallyworld> i guess a bug fix is more palatable though
<wallyworld> that should be possible
<arosales> wallyworld, but I guess you need to find the issue on the cred reset too
<wallyworld> arosales: i found what's causing it - not uploading tools triggers a code path that resets the creds. it didn't matter before but does now. i just need to figure out a fix
<arosales> wallyworld, ok. I'll leave you to that and we can sync with dave when he comes online with possible solutions
<wallyworld> arosales: sounds good. i have a simple fix i can do, just need to think if it's the best way
<arosales> fwiw I filed bug https://bugs.launchpad.net/juju-core/+bug/1203908
<_mup_> Bug #1203908: 1.11.3 does not bootstrap on HP Cloud <juju-core:New> <https://launchpad.net/bugs/1203908>
<arosales> wallyworld, I am going to step away from the keyboard for a few, but I'll check back online later
<wallyworld> ok
<axw> hello... who's awake?
<axw> davecheney: morning
<davecheney> axw: hey
<thumper> hi axw
<axw> thumper: hiya
<davecheney> so, dependencies really grind my gears
<thumper> axw: do you know duflu from elsewhere?
<davecheney> i'm tired of seeing jtv have to ask for the bot to be updated
<axw> thumper: heh :) yes, we worked together at Micromuse and IBM
<davecheney> i'm going to propose a script that does the checkout for it
<davecheney> so the bot checks out the code
<davecheney> runs the script which gets the deps
<davecheney> job done
<thumper> heh... if I hadn't just gone to the gym I'd be worried... I got the shakes in my hands
<thumper> amusing to watch
<thumper> first workout in two weeks due to illness
<thumper> felt good
<davecheney> thumper: man flu, or real flu
<davecheney> ?
<thumper> wasn't flu
<thumper> just a persistent cough and fever
<thumper> kid flu passed on
<thumper> can't work out coughing your guts out
<thumper> however managed the entire workout without one single cough
<thumper> so I feel over it now
<davecheney> sgtm
<axw> So, I want to fix a bug I found yesterday in juju-core. Can someone give me a brief walkthrough of creating bugfix/feature branches on LP?
<thumper> axw: happy to
<thumper> but I need to grab a smoothie first
<thumper> axw: gimmie 5 minutes?
<axw> nps
<axw> thanks
<thumper> axw: then we can do a google hangout?
<thumper> axw: are you set up for hangouts?
<axw> thumper: sounds good
<davecheney> can I get in on this one ?
<axw> thumper: should I have G+ on my Canonical account?
<davecheney> or at least listen in
<davecheney> i dunno if this laptop has a mike
<thumper> axw: yep
<thumper> davecheney: yes happy to have you listen in
<axw> I'll get it set up while you get your drink
<davecheney> cool
<davecheney> it may not work
<thumper> axw: also make sure you have your ssh key up on LP to start
<davecheney> this coworking space has the smallest internet known to man
<thumper> or wait if you need a walk through on that
<axw> yeah that's all done
<thumper> ack
<thumper> back in 5
<wallyworld> davecheney: ping
<davecheney> wallyworld: hey
<davecheney> saw your mail
<davecheney> thanks for replying
<wallyworld> the release - we need to cut a 1.11.3.1
<wallyworld> but we need to create a branch and series
<wallyworld> first
<wallyworld> so i can backport
<davecheney> _if_ we do a 1.11.3.1 we _really_ should do it from the 1.11.3 tag
<wallyworld> yes
<davecheney> wallyworld: or we could just tag what we have now as 1.11.4
<wallyworld> hence why we need to do a series and branch
<davecheney> i think that is simpler
<davecheney> wallyworld: unless you want 1.11.3 to be 1.12
<davecheney> axw: sorry about these arse backwards version numbers
<wallyworld> davecheney: well, 1.11.3 is broken right now
<davecheney> for openstack yes
<davecheney> but 1.11.2 was broken in other ways
<davecheney> i don't think that is cause to deviate from what we've done in thepast
<davecheney> i'd be happier to tag trunk 1.11.4
<davecheney> start 1.11.5
<davecheney> and then 1.11.4 can become the new stable branch
<wallyworld> davecheney: there are guys at OSCON who need a package version to work
<davecheney> that is why I want to do it as I said above
<davecheney> i know that will be the fastest way to get it going
<wallyworld> ok. i'll propose a branch
<wallyworld> mp
<davecheney> kk
<wallyworld> with the fix
<davecheney> wallyworld: are your keys also known to the bot ?
<davecheney> i might have to ask you to do the tag for me
<wallyworld> hope so
<wallyworld> i'd also want to longer term sort out if we need to change SetConfig() not to be dumb
<wallyworld> i've never tried
<davecheney> wallyworld: i can send you the words
<davecheney> you just have to type them
<wallyworld> davecheney: sure. i mean that i'm not sure if my keys work
<wallyworld> but i'm more than happy to try
 * davecheney throws a chair
<davecheney> fuck this bot
<davecheney> what fucking good is it if we have to wait for jam to do our commits for us ?!?!
<wallyworld> davecheney: i'll also commit another bug fix that is waiting to land (will help the guys using quantum)
<davecheney> sgtm
<axw> wallyworld: are you fixing the openstack bootstrap bug?
<wallyworld> axw: yes indeed
<axw> ok
<axw> sorry I'd have proposed a fix yesterday, but I wasn't sure how yet :)
<wallyworld> axw: hey, no problem :-) i didn't know it was broken till i found out on irc today
<thumper> davecheney, axw: https://plus.google.com/hangouts/_/ee0597b40c9b901559b872299a2b1678184b8e6a?hl=en
<davecheney> here goes nothing
<wallyworld> arosales: you back?
<wallyworld> davecheney: thumper: when you are free https://codereview.appspot.com/11703043
<davecheney> wallyworld: will you punch me if I ask for a test ?
<wallyworld> davecheney: yes. the live tests would have failed
<davecheney> +1 then
<davecheney> LGTM
<davecheney> approved
<davecheney> commit that shit
<wallyworld> thanks :-)
<wallyworld> thumper: you able to +1 that mp so i can get it landed for the 11.4 release?
<thumper> davecheney: wanna +1 my mega state.Tools removal branch
<thumper> wallyworld: sure, link me up
<axw> davecheney: should I have an account for leankit?
<wallyworld> https://codereview.appspot.com/11703043
<davecheney> axw: yes, i don't know how to create one
<davecheney> axw: write to mramm and ask for one
<axw> ok
<davecheney> sorry it's been a while
<davecheney> i can't remember how I came by mine
<hazmat> edge phones going fast
<davecheney> 12 hours til the price goes up
<wallyworld> thumper: do you want me to beg?
<bigjools> wallyworld: you should be used to begging after years of marriage
<wallyworld> yes indeed
<wallyworld> bigjools: like you can talk anyway
<bigjools> we're all whipped my friend
<wallyworld> davecheney: waiting on a 2nd +1 - maybe i should just land it so the release can progress
<thumper> wallyworld: sorry, no begging needed, but house guest arrived
<wallyworld> np
<wallyworld> but i can if you want
<wallyworld> i'm quite good at it
<thumper> done
<thumper> wallyworld: ok, beg
 * thumper waits
 * wallyworld is down on his knees
<thumper> done
<davecheney> wallyworld: just land that shit
<wallyworld> thank you
<davecheney> if anyone complains, they can blame me
<thumper> davecheney: I +1ed it
<thumper> davecheney: next time go "LGTM - trivial"
<thumper> although jam has suggested going with just one review to land
<thumper> which would unblock people
<thumper> davecheney: like this trivial one: https://codereview.appspot.com/11561044/
<davecheney> thumper: trivial my arse
<davecheney> it's larger than the fucking codebase
<arosales> wallyworld, back
<davecheney> wallyworld: $ bzr tag -d bzr+ssh://go-bot@bazaar.launchpad.net/~goose-bot/goose/trunk -r1494 juju-1.11.3
<davecheney> ^ magic to tag 1.11.4
<davecheney> obviously with a different tag name and revno
<wallyworld> arosales: a new release is being built. i couldn't fully test bootstrapping because i don't have permission to access the compute nodes. but the original problem is solved
<wallyworld> davecheney: will do. bot is building now
<davecheney> wallyworld: thanks mate
<arosales> wallyworld, that's good news. Thanks for working on it.  I saw you had to beg too. I appreciate that type of commitment ;-)
<davecheney> https://bugs.launchpad.net/bugs/1203935
<_mup_> Bug #1203935: Some EC2 instance prices specified in octal <juju-core:New> <https://launchpad.net/bugs/1203935>
<arosales> wallyworld, sorry I didn't fully parse, "I don't have permission to access compute nodes."
<davecheney> facepalm
<wallyworld> arosales: np. hopefully it's not too much inconvenience for the OSCON guys
<wallyworld> arosales: when i bootstrapped, it said the image id didn't exist. from the web console, i click on the compute links and it says i have no permission
<arosales> wallyworld, hmm you're a compute admin, let me make you an image admin too
<arosales> wallyworld, made you an image management admin too.
<davecheney> axw: do you want to https://bugs.launchpad.net/bugs/1203935
<_mup_> Bug #1203935: Some EC2 instance prices specified in octal <juju-core:New> <https://launchpad.net/bugs/1203935>
<davecheney> take a look at this bug
<wallyworld> arosales: thanks, i'll try again in a bit. but i'm sure it will be ok now
<davecheney> might be an easy one to work the kinks out on
<axw> davecheney: sure
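The octal-prices bug above is a classic Go literal pitfall: an integer literal with a leading zero is octal, so a hand-typed price table entry can silently mean a different number. A minimal illustration (the price values here are made up, not the actual EC2 table):

```go
package main

import "fmt"

func main() {
	// In Go, a leading zero makes an integer literal octal. So a
	// price written as 0160 (intending decimal 160) is actually
	// 1*64 + 6*8 + 0 = 112 -- the likely shape of bug #1203935.
	// These values are illustrative, not real EC2 prices.
	const priceIntended = 160   // what the author meant
	const priceAsWritten = 0160 // what the leading zero actually gives

	fmt.Println(priceIntended, priceAsWritten) // prints: 160 112
}
```

A table generated from a machine-readable source, or a test that cross-checks a few known prices, would catch this class of typo automatically.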
<arosales> bigjools, will azure be in 11.4?
<davecheney> oh son of a bitch
<arosales> 1.11.4 that is
<davecheney> i'll have to comment it out again
<arosales> davecheney, bigjools was saying it was working
<bigjools> why don't you make release branches?  this is stupid
<bigjools> arosales: no I didn't!
<arosales> bigjools, :-)
<davecheney> bigjools: i don't have a good reason for why we don't have this
<arosales> bigjools, ah reading your email now I see you had to manually deploy
<bigjools> release branches or config flags would help here
<arosales> bigjools, I thought if I said it enough it would come true
<bigjools> arosales: it will, soon :)
<arosales> bigjools, its good progress
<arosales> davecheney, bigjools, hello btw
<arosales> so this is where all the fun happens :-)
<bigjools> o/
<arosales> I should join this channel more often when I am up late
<davecheney> arosales: c'mon man, i joined your eco channel, even when #juju-dev is where it is at
<wallyworld> arosales: works :-)
<wallyworld> just bootstrapped on hp cloud
<arosales> davecheney, well said
<arosales> wallyworld, good to hear
<bigjools> davecheney: anyway not sure you need to remove azure again
<bigjools> it works w/o go-curl now
<davecheney> bigjools: ok
<davecheney> excellent
<davecheney> the build recipe always pulls the latest deps
<bigjools> excellent apart from the huge fork of 1.0.1 Go code in gwacl
<wallyworld> davecheney: that tag command above - it says goose but surely you meant juju-core? or i guess we need to tag all the dependencies too?
<davecheney> wallyworld: i just copied that from what gz sent me
<davecheney> if that isn't what he ran
<davecheney> then he's a liar
<davecheney> wallyworld: are you able to put my key on the bot so I can do this myself ?
<wallyworld> i don't think so. i'll check
<davecheney> shazbut!
<wallyworld> doesn't appear so
<wallyworld> i'll tag both goose and juju-core
<wallyworld> davecheney: tags done for goose rev 99 and juju-core rev 1514. and yes, that command was slightly incorrect
<davecheney> wallyworld: thanks mate
<wallyworld> np. hopefully it will build all ok
<davecheney> what could possibly go wrong
<davecheney> wallyworld: while i'm screwing with this
<davecheney> can you please make sure those two bugs are assigned to 1.11.4 and stuff
<wallyworld> ah yes, will do
<davecheney> ta muchly
<davecheney> bollocks, have to do release notes as well
<arosales> davecheney, just the 2 bug fixes from 1.11.3, correct?
<davecheney> arosales: yup
<arosales> fwiw I appreciate the emergency 1.11.4 build
<davecheney> no worries
<wallyworld> davecheney: arosales: there was a bug fixed for 1.11.3 not tagged as such, so i added it to 1.11.4 so folks had visibility to it. bug 1195223
<_mup_> Bug #1195223: juju all-machines.log is repetitive and grows unbounded <juju-core:Fix Committed by wallyworld> <https://launchpad.net/bugs/1195223>
<davecheney> wallyworld: did I miss that one ?
<davecheney> was it linked from the leankit card ?
<davecheney> this is the way i do the release notes
<arosales> btw, who let 1.11.3 go out the door without it bootstrapping on hp cloud ;-)
<wallyworld> davecheney: no, it was my fault. i didn't assign it to any milestone
<wallyworld> arosales: those of us without hp cloud creds :-)
<davecheney> 1. read cards in leankit, if they are bugs and have an lp link I try to reframe the bug in the release notes
<davecheney> if they sort of look like internal stuff, like the containers work, i omit them from the release notes
<wallyworld> davecheney: i suspect there was a card on leankit but the bug number may not have been there. not sure now
<davecheney> no worries
<davecheney> no one died
<arosales> wallyworld, ah so now that you have hp creds that issue should be solved
<wallyworld> davecheney: we devs need to take more responsibility for managing our bugs
<wallyworld> arosales: indeed
<wallyworld> davecheney: in launchpad land, our qa process was much stricter and all branches had to have bugs attached etc and so work couldn't slip through unnoticed
<axw> what level of access to my account does lbox/lpad need?
<davecheney> wallyworld: i'm not sure how to answer that
<davecheney> should we aspire to being more like lp, or less ?
<davecheney> axw: ahh, there is a trick
<davecheney> read the CONTRIBUTING
<wallyworld> davecheney: in respect to some process type things, more like it
<axw> ta
<davecheney> from memory you need to bzr launchpad-login
<davecheney> or something (it is documented)
<axw> didn't see that, thanks
<davecheney> to generate an oauth key so lbox can fiddle with lp on your behalf
<axw> righto
<davecheney> axw: protip, do it from a fresh terminal so it can fire 'sensible-browser'
<davecheney> wallyworld: what is the tag for goose ?
<davecheney> goose-1.11.4 ?
<wallyworld> juju-1.11.4
<davecheney> ta
<wallyworld> since it's a part of juju
<davecheney> wallyworld: https://code.launchpad.net/~dave-cheney/+recipe/juju-core
<davecheney> ^ while I was there
 * davecheney goes to rant on the agenda notes
 * wallyworld looks
<davecheney> https://docs.google.com/a/canonical.com/document/d/1JjlwYw8AhF9MxfUJo1SvG3gR9Y5wHox-ToIzUEKoRyg/edit#
<davecheney> ^ looking for the lp # for the two (3) bugs fixed
<wallyworld> davecheney: goose tag looks wrong in the recipe
<wallyworld> bug 1189507
<_mup_> Bug #1189507: juju-core duplicates quantum security groups <openstack> <serverstack> <juju-core:Fix Committed by wallyworld> <https://launchpad.net/bugs/1189507>
<wallyworld> bug 1195223
<_mup_> Bug #1195223: juju all-machines.log is repetitive and grows unbounded <juju-core:Fix Committed by wallyworld> <https://launchpad.net/bugs/1195223>
<wallyworld> bug 1203658
<_mup_> Bug #1203658: bootstrap fails with openstack provider (cannot get endpoint URL without being authenticated) <juju-core:Fix Committed by wallyworld> <https://launchpad.net/bugs/1203658>
<davecheney> thanks mate
<axw> should I be adding "-cr" to get a Rietveld review going?
<axw> er sorry, add to lbox propose
<axw> didn't see it in the CONTRIBUTING doc, but I didn't get any codereview URL from lbox propose
<bigjools> plz to be reviewing https://codereview.appspot.com/11621044
<bigjools> axw: I just set up a new lbox and didn't need to do that
<davecheney> axw: you _can_ there is a .lbox file in the root of the repo
<davecheney> which I think does that for you
<davecheney> ie, it adjusts some of the defaults
<axw> davecheney: thanks, I saw that later (and thanks bigjools). I think the propose failed due to my lack of permissions in editing bugs
 * bigjools longs for lbox death
<davecheney> bigjools: we should move to github
 * davecheney ducks and is glad he doesn't live in BNE
<bigjools> talking of gits... :)
<davecheney> touche
<thumper> pfft
<bigjools> whatever we do, as long as lbox is not involved I'm good
<wallyworld> launchpad is the company standard tool
<wallyworld> we should NEVER has used anything else
<davecheney> wallyworld: you're probably right, but nobody realised you could manage bzr checkouts in your $GOPATH with lightweight checkouts
<wallyworld> sigh
 * bigjools is using bzr native colo too
<wallyworld> did anyone ask?
<wallyworld> why assume such things?
<wallyworld> it's not like the bzr folks work for a different company or anything like that
<davecheney> wallyworld: when i joined the team I was told do it this way
<davecheney> you know very well why I wouldn't challenge that
<wallyworld> yeah, i realise. enough said ;-)
 * davecheney thinks of austin, starts to twitch
<davecheney> no
<davecheney> alabama
<davecheney> where the frack did we end up
<bigjools> bzr branch lp:juju then bzr switch -b co:mynewbranch
<davecheney> HOTLANTA!
<bigjools> in other news, the royal lizard hatched
<davecheney> ALL HAIL OUR NEW ROYAL REPTILIAN
<davecheney> i bet all the folks who campaigned for a republic back in 2004 are shitting their pants now
<bigjools> William's heir is coming out
<davecheney> right now ? this second ?
<davecheney> eww
<thumper> bigjools: what?
<thumper> bigjools: lp:juju is the old juju
<bigjools> that's what he said
<thumper> bigjools: also, co: ick
<bigjools> juju-core then, whatever
<bigjools> what's wrong with co:?
<bigjools> works fine for us Reds
<thumper> I just never liked colocated branches
<bigjools> normally I have lightweight checkouts and use a sandbox
<bigjools> and no-tree branches
<bigjools> but go fucked all that over
<davecheney> arosales: ping
<arosales> davecheney, pong
<thumper> bigjools: why, it works for me
<thumper> bigjools: my branch in GOPATH is a lightweight checkout
<thumper> and I use switch and pipelines
<bigjools> thumper: first is that "go get" is broken because I aliased bzr branch to use --no-tree
<bigjools> second is that my no-tree branches are normally in the same dir as go sources, so things get confusing quickly
<bigjools> go projects, I mean
<thumper> bigjools: I have ~/src for repos
<thumper> and branches
<thumper> and just checkouts in GOPATH
<bigjools> is there an easy way of referencing that when using bzr switch without typing the whole path?
<bigjools> if so, I am totally switching back
<thumper> yes
<davecheney> arosales: if you apt-get update && apt-get install juju-core
<thumper> bigjools: so here is what I do
<davecheney> you should have 1.11.4
<davecheney> tools are uploading now
<davecheney> but precise/amd64 is done
<arosales> davecheney, doing that now
<thumper> bigjools: lets assume we are starting with $GOPATH/src/launchpad.net/juju-core as a lightweight checkout of ~/src/juju-core/trunk
<thumper> bigjools: to make a new branch:  bzr switch -b new-work
<thumper> bigjools: to go back to trunk: bzr switch trunk
<thumper> switch looks in the parent dir of the current checkout by default
<thumper> parent dir of the branch of the checkout
<davecheney> thumper: didn't you write a document on this ?
<thumper> davecheney: yes
<thumper> doc/bazaar-usage.txt
<bigjools> thumper: ah parent of the *branch* !
<bigjools> I thought it was parent of the checkout
<thumper> bigjools: yep
<bigjools> \o/
<thumper> it may well do that too
<bigjools> that's fucking fantastic
<thumper> and then: bzr push
<thumper> because the push append_policy is linked to the branch, not the checkout
<thumper> automagically goes to the right place in LP
<bigjools> right
<bigjools> have that bit sussed already
<thumper> that is also documented
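The workflow thumper walks bigjools through above can be sketched as a short shell session. The paths are examples; doc/bazaar-usage.txt in the tree is the authoritative write-up.

```shell
# One-time setup: real branches live outside GOPATH; a single
# lightweight checkout inside GOPATH keeps the import path stable.
bzr branch lp:juju-core ~/src/juju-core/trunk
bzr checkout --lightweight ~/src/juju-core/trunk \
    "$GOPATH/src/launchpad.net/juju-core"

cd "$GOPATH/src/launchpad.net/juju-core"

# Start new work: creates ~/src/juju-core/new-work (bzr switch
# looks in the parent dir of the checkout's current branch).
bzr switch -b new-work

# ... hack, bzr commit ...

# Push: the push location follows the branch, not the checkout,
# so this lands in the right place on Launchpad automatically.
bzr push

# Back to trunk when done.
bzr switch trunk
```

The payoff is that `go build` and friends always see the same directory under `$GOPATH` while bzr swaps which branch backs it.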
 * arosales has bootstrapped on hpcloud with 1.11.4
 * davecheney high fives wallyworld 
<davecheney> down low !
<wallyworld> yay
<thumper> too slow
<davecheney> oooh, too slow
<davecheney> jynx!
<bigjools> five side
<arosales> http://pastebin.ubuntu.com/5902956/ thanks for the fast fix and release
<bigjools> scuba dive
<davecheney> ಠ_ಠ
<arosales> why do I only see this error on hpcloud
<arosales> $ juju deploy wordpress
<arosales> error: no instances found
<arosales> if I ran that right after a bootstrap I get that error
 * arosales has to wait for a bit longer after bootstrapping
 * bigjools believes this can of worms about user expectations was previously discussed
<davecheney> bigjools: expectations == feedback ?
<bigjools> I think it boiled down to either blocking, or providing feedback, yes
<bigjools> I think someone else referenced this lately: https://en.wikipedia.org/wiki/Principle_of_least_astonishment
<wallyworld> thumper: this one should be ok for you to +1 now https://codereview.appspot.com/11659043/
<axw> where do reviews get mailed to? or do I need to announce them here?
<axw> https://codereview.appspot.com/11419046
<bigjools> axw: branch mail from launchpad
<axw> bigjools: ok. who do I get to add me to that group? I'd like to be able to see changes as they come in - even if I'm not useful as a reviewer yet
<bigjools> axw: if you are a member of https://launchpad.net/~juju you should get the email by default
<thumper> wallyworld: done
<wallyworld> ty
<axw> I'm not (yet). I guess I need mramm to authorise that
<bigjools> davecheney can auth you
<davecheney> i can fix
<davecheney> shit
<davecheney> this dance
<davecheney> where we add you to groups via a process of elimination
<axw> :)
<davecheney> welcome to your new job, please mind your step
<bigjools> although personally I detest group mail subscriptions
<bigjools> should really encourage people to subscribe to the branch manually and turn off mail for ~juju
<davecheney> axw: which of the page of andrew walkers are you?
<axw> wilkins
<axw> ~axwalk
<davecheney> well, that would explain it
<axw> my username is confusing :)
<bigjools> lol
<davecheney> oh, that should do it
<axw> thanks
<davecheney> axw: re testing that octal change
<davecheney> i guess we don't have anything that is testing that the costs work properly
<davecheney> personally I think all that stuff is shit
<davecheney> and is going to make us cry trying to solve the travelling salesman problem of finding the cheapest instance for the customer
<axw> I don't know it well enough to comment yet, though it strikes me as something that should be automatically generated. Not sure if AWS provides a nice data source for that tho
<davecheney> axw: fwereade will know, he did that bit (under duress)
<davecheney> there was some grumbling that it had to be created by hand
<davecheney> obviously nothing that uses that data cared that the cost of a micro was wrong
<davecheney> newsflash, thumper: all the $600 phones are sold out
<davecheney> you scored a bargain
<thumper> davecheney: that I did
<thumper> we are at $3.21m now
<thumper> so my indicator says
<davecheney> 10% of the way there
<davecheney> 94% of the time left
<thumper> yeah, not holding my breath
<thumper> would be good though
<davecheney> 27 million dollars is a lot of money left
<davecheney> which is sort of a shame
<davecheney> because it casts a shadow on the $3.2m pledged
<bigjools> ok someone please fix CONTRIBUTING to tell me not to use lbox submit
<thumper> yeah
<thumper> bigjools: submit a patch
<thumper> bwahahaha
<bigjools> if I touch that file I'd be sorely tempted to remove references to lbox
<davecheney> bigjools: not til we get inline code reviews in LP
 * bigjools is leaning on lp devs to do that
<bigjools> heck, could almost do it myself
<bigjools> not sure of the attraction personally, but if it gets rid of lbox ...
<thumper> bigjools: sinzui had ideas too
<thumper> bigjools: perhaps he is easier to ply with alcohol to get it done
<thumper> bigjools: get him right in that bulmer curve
<bigjools> bulmer curve, lol
<thumper> bulmer peak?
<thumper> probably spelled it wrong
<thumper> ballmer peak
<thumper> http://xkcd.com/323/
<bigjools> Bulmer's is a brand of cider.  It sounded perfect. :)
 * thumper likes bulmer's cider
<thumper> nice on ice on a hot day
<arosales> axw, I saw your comment on 1173093.
 * arosales replied no strong opinion either way
<axw> arosales: yes, thanks for your reply. going to make the change now
<arosales> be nice to get that one knocked out for our docs
<arosales> ie "showing how to expose a service"
<axw> just figuring out bzr at the moment
<arosales> missing a fundamental piece on what port is exposed, regression from juju 0.7
<arosales> axw, thanks
<davecheney> arosales: pushed all the other tools
<davecheney> are you and jorge happy campers ?
<arosales> davecheney, I have deployed a few services now on hpcloud
<arosales> mysql, wordpress, and juju-gui
<arosales> looking good so far
<arosales> I haven't checked local provider yet, but I think that is unchanged from 1.11.3
<arosales> davecheney, happier camper
<arosales> thank you wallyworld and davecheney  :-)
<arosales> seeing the hardware info is nice too for machines
<arosales> davecheney, fyi agent versions are at 1.11.3, but working just fine for me.
<davecheney> arosales: that can't happen
<davecheney> the cli cannot bootstrap with older tools
<arosales> davecheney, I'll destroy and try again and confirm
<davecheney> -v (always -v)
<arosales> davecheney, did you upload 1.11.4 tools to the hp bucket?
<davecheney> no
<davecheney> how do I do that ?
<davecheney> i don't have permissions to do that
<davecheney> i guess mgz or jam must do it when i'm not watching
<arosales> I do it manually via the console
<arosales> I can add you to that group
<arosales> one sec
<davecheney> so I think it is magic
<arosales> davecheney, you're an object store admin for juju-public-tools
<arosales> I mean you were already an admin, no need for me to add you.
<arosales> davecheney, check out arosales@x230:~$ dpkg -l | grep juju-core
<arosales> ii  juju-core                                 1.11.4-1~1514~quantal1                            amd64        Juju is devops distilled
<arosales> arosales@x230:~$ juju --debug bootstrap
<arosales> 2013-07-23 05:55:00 INFO juju provider.go:115 environs/openstack: opening environment "hp-go"
<arosales> 2013-07-23 05:55:00 INFO juju provider.go:417 environs/openstack: bootstrapping environment "hp-go"
<arosales> 2013-07-23 05:55:10 INFO juju tools.go:26 environs: reading tools with major version 1
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:49 reading v1.* tools
<arosales> 2013-07-23 05:55:10 INFO juju tools.go:30 environs: falling back to public bucket
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:49 reading v1.* tools
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.10.0-precise-amd64
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.10.0-precise-i386
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.10.0-quantal-amd64
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.10.0-quantal-i386
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.10.0-raring-amd64
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.10.0-raring-i386
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.11.0-precise-amd64
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.11.0-precise-i386
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.11.0-quantal-amd64
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.11.0-quantal-i386
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.11.0-raring-amd64
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.11.0-raring-i386
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.11.3-precise-amd64
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.11.3-precise-i386
<arosales> 2013-07-23 05:55:10 DEBUG juju.environs.tools storage.go:69 found 1.11.3-quantal-a
<davecheney> -v will do
<arosales> ugh sorry
<arosales> buffer wrong
<davecheney> --debug is overkill
<arosales> http://pastebin.ubuntu.com/5903055/
<arosales> roger
<arosales> davecheney, but  I see a 1.11.3 agents running from a 1.11.4 client
<arosales> davecheney, if you have a pointer to the 1.11.4 tools I can upload those to HP cloud
<davecheney> in related news, https://github.com/andelf/go-curl
<davecheney> bug fixed upstream
<bigjools> davecheney: £$$%^~!"£$^%£
<mramm2> axw: hey
<axw> mramm2: hiya
<arosales> davecheney, I see the 1.11.4 tools on aws. I'll download from there and upload to hpcloud
<mramm2> just getting back from OSCON festivities
<mramm2> how goes?
<axw> :)
<davecheney> mramm2: nothing to see here, move along
<mramm2> davecheney: haha
<axw> not bad, people around here have been helping me get acquainted with launchpad and bzr
<mramm2> sabdfl is happy with things today
<axw> working on a couple of minor bugs
<mramm2> axw: that's good
<mramm2> axw: sorry I have not been around much
<axw> mramm2: no worries
<mramm2> axw: if you need anything let me know
<axw> sure thing
<mramm2> axw: I know we are all excited to have you on board
<davecheney> mramm2: sure he is, people have promised him 3.2 million dollars
<axw> glad to be here :)  wishing I wasn't so inexperienced with these tools though
<mramm2> thumper and bigjools will not lead you wrong -- well, at least not too much, and not more than I would ;)
<davecheney> arosales: don't worry, there isn't any prior learning assumed
<bigjools> mramm2: *innocent face*
<mramm2> axw: well there is an adjustment period to any new job
<axw> yup
<mramm2> bigjools: well, I was being generous ;)
<bigjools> mramm2: happy to return the same generosity :)
<mramm2> I know -- you are a good dude
<arosales> mramm2, wallyworld and davecheney worked on yet another release ;-)
<mramm2> arosales: davecheney: great stuff
<arosales> all wallyworld and davecheney
<mramm2> I know mims and jorge will be grateful
<mramm2> arosales: I know, you and I are just managers -- nothing to see here, please move along
 * arosales mostly getting in the way
<mramm2> davecheney: it's not just the phone stuff -- though that is awesome.   sabdfl is also very happy with the local provider and the gui updates -- he is feeling the juju love
<mramm2> and as we all know "juju is...."
<mramm2> "fucking amazing"
 * davecheney basks in sabdfl's munificence
<mramm2> haha
<davecheney> do you want a happy god, or an angry god ?
<mramm2> davecheney: well, the key is having a competent god ;)
<mramm2> and I am happy to work for one of those ;)
<mramm2> lots of winky faces from me this evening -- perhaps I have had one too many beers!
<davecheney> has anyone been able to get to Gustavo's ubuntu-edge.info ?
<mramm2> I got to it a few min ago
<bigjools> yes
<axw> yep
<mramm2> on my walk back to the hotel
 * davecheney tries his phone
<davecheney> stupid internet
<mramm2> if only you had an ubuntu phone -- it would all work perfectly!
<mramm2> we had a "ubuntu in production" bof this evening, and met lots of people using ubuntu in interesting places
<mramm2> everything from gas-station management to top 100 web sites
<mramm2> we asked everybody what they wanted from us, and they all told us that we were perfect
<mramm2> I don't believe it, but it was nice to hear ;)
<arosales> mramm2, remind me to tell hazmat that deployer configs need quotes around "no" and "yes" values or deployer chokes
<mramm2> arosales: I will try to remind you ;)
<mramm2> arosales: I will likely be busy for the next couple of days -- so you might also want to make a note of it ;)
<arosales> perhaps that is a juju-gui export though
<arosales> mramm2, roger.
<arosales> mramm2, ack
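The deployer quoting issue arosales raises comes from YAML 1.1, where bare `no`/`yes` (and `on`/`off`) resolve to booleans rather than strings. A sketch of the fix in a config fragment (the charm and option names here are made up for illustration):

```yaml
wordpress:
  options:
    debug: "no"   # quoted: parsed as the string "no"
    # debug: no   # unquoted: a YAML 1.1 parser reads this as boolean false
```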
<mramm2> arosales: the gui is looking great, got good reviews today
<mramm2> and jorge castro seems to have a geek crush on thumper -- aka: the devops bunny
<mramm2> because of the local provider, which also got good reviews from hazmat
<arosales> mramm2, ha
 * arosales glad to see it went full screen by default
<mramm2> not sure if thumper will be able to live that one down
<mramm2> oh, and BTW, if any of you wants to go to the openstack summit -- we need some good juju related talk proposals
<mramm2> shoot them my way so that I can coordinate
<mramm2> next openstack summit is in hong kong, so those of you on that side of the world have the travel advantage ;)
<davecheney> mramm2: i hear you loud and clear
<mramm2> well, I have an early morning, talk to you all later!
<axw> night
<arosales> axw, night and welcome
<axw> arosales: I'm not going, but if you are going also, good night :)
<axw> and thanks
<rogpeppe1> mornin' all
<axw> morning
<rvba> Morning guys… anyone up for a tiny review? https://codereview.appspot.com/11578043/
<davecheney> jamespage: mgz wallyworld just doing the tarball for 1.11.4
<wallyworld> awesome
<davecheney> 1.11.4 _does_ include gwacl
<davecheney> but does _not_ include go-curl
<davecheney> and does not require libcurl-dev
<wallyworld> yay
<jamespage> davecheney, ack
<davecheney> ok, that is it for me
<davecheney> releases are released
<davecheney> uploads are uploaded
<dimitern> \o/
<davecheney> buttons pressed
<davecheney> ticks boxed
<dimitern> machiner api-ed
<davecheney> openstack, stacked
<dimitern> ubuntu edged ;)
 * davecheney rimshot
<dimitern> man, more than $3.2M pledged in less than a day
<wallyworld> dimitern: i won't make the meeting. i'm about to go play soccer
<dimitern> wallyworld: ok
<wallyworld> i'm sure you won't miss me
<dimitern> :) sure
<dimitern> :P
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: rogpeppe | Bugs: 5 Critical, 76 High - https://bugs.launchpad.net/juju-core/
<axw> what LeanKit board(s) are being used for juju-core? i.e. what boards should I request access to?
<rogpeppe1> axw: https://canonical.leankit.com/Boards/View/103148069
<axw> rogpeppe1: thanks
<noodles775> Hey there. I tried to install from source yesterday, but it failed with "No package 'libcurl' found". I read above that libcurl-dev is no longer required with today's release, so just tried the source again - which doesn't error, but doesn't result in $GOPATH/bin/juju either: http://paste.ubuntu.com/5903326/
<noodles775> Hrm - nm. It looks like I'm confused by the output of the second "go install" there, the exit status is still 1.
 * noodles775 installs libcurl-dev.
<rvba> noodles775: go-curl (and thus libcurl) should not be a dependency any more… are you installing from fresh or upgrading?
<rvba> noodles775: could you make sure gwacl is up to date by running: go get -u launchpad.net/gwacl
<noodles775> rvba: thanks. That returns no output. Let me uninstall libcurl-dev again...
<dimitern> rogpeppe1: so you're doing the upgrader, right?
<rogpeppe1> dimitern: yes
<noodles775> rvba: Yep - that compiles now: http://paste.ubuntu.com/5903360/ Thanks!
<dimitern> rogpeppe1: i'll pick up the deployer then
<rogpeppe1> dimitern: cool
<dimitern> rogpeppe1: it needs a StringsWorker, following the same model as the NotifyWorker
<rvba> noodles775: welcome :)
<rogpeppe1> dimitern: are there going to be any others using StringsWorker?
<dimitern> rogpeppe1: uniter perhaps?
<rogpeppe1> dimitern: i very much doubt it
<dimitern> rogpeppe1: firewaller?
<rogpeppe1> dimitern: can't we just use the existing logic rather than refactoring everything?
<dimitern> rogpeppe1: you don't think introducing the NotifyWorker was a good thing?
<rogpeppe1> dimitern: it's a good thing if it's used more than once
<dimitern> rogpeppe1: so, that's my answer to the above :)
<rogpeppe1> dimitern: otherwise it's just unnecessary abstraction
<rogpeppe1> dimitern: the firewaller won't be able to use StringsWorker AFAICS
<dimitern> rogpeppe1: it uses an environconfigwatcher and environmachineswatcher, the second one is a stringswatcher
<rogpeppe1> dimitern: how are you going to structure the StringsWorker so that it allows more entries in the core select loop?
<dimitern> rogpeppe1: more entries?
<dimitern> rogpeppe1: it should be simple
<rogpeppe1> dimitern: currently the firewaller listens on 6 things in its select
<dimitern> rogpeppe1: couldn't we use several workers and put them together, rather than having a single loop?
<rogpeppe1> dimitern: the NotifyWorker pattern is to listen on 2 things (a watcher and the tomb) and make a callback when one of them happens
<rogpeppe1> dimitern: i don't see that that would gain us anything but obfuscation
<rogpeppe1> dimitern: the firewaller control flow is currently quite nice and obvious
<rogpeppe1> dimitern: i'd be happy to keep it that way
<dimitern> rogpeppe1: ok then, won't do it for now; we can always refactor it later if needed
<rogpeppe1> dimitern: thanks
<dimitern> rogpeppe1: fairly trivial https://codereview.appspot.com/11711043/
<rogpeppe1> dimitern: i thought i'd done that...
<rogpeppe1> dimitern: yeah, it's at https://codereview.appspot.com/11586043/
<rogpeppe1> dimitern: i'm still waiting for a second review
<rogpeppe1> dimitern: you even reviewed the code :-)
<dimitern> rogpeppe1: oh :)
<dimitern> TheMue: can you review this please? ^^
<dimitern> rogpeppe1: i need it to land to continue on the deployer
<rogpeppe1> dimitern: well, i'll need another review then...
<rogpeppe1> anyone?
<dimitern> mgz: ?
<TheMue> dimitern: sure
<dimitern> TheMue: thanks
<dimitern> rogpeppe1: I'll reject mine then
<TheMue> rogpeppe1: you've got a +1
<rogpeppe1> TheMue: thanks
<dimitern> TheMue: sorry, not that
<dimitern> TheMue: https://codereview.appspot.com/11586043/ this
 * TheMue currently building a test-env for autosync tests
<dimitern> TheMue: the one you reviewed I closed, because it duplicates rogpeppe1's
<TheMue> dimitern: done
<dimitern> TheMue: tyvm
<TheMue> dimitern: seen it after lgtm'ed it ;)
<dimitern> :) np
<dimitern> rogpeppe1: good to go then
<rogpeppe1> dimitern: ok, it's approved
<dimitern> rogpeppe1: cheers
<jtv2> mgz: your oldest pending branch looks about ready to land...
<axw> jtv2, rogpeppe1: thanks (yes I did confirm pricing on AWS). Do I do "lbox submit" now?
<rogpeppe1> axw: no; we don't use that any more
<rogpeppe1> axw: you go to the merge proposal
<axw> claim review?
<rogpeppe1> axw: copy and paste the mp description into the commit message
<rogpeppe1> axw: then mark the mp as approved
<axw> ok
<jtv2> No need to claim the review.
<jtv2> Although I always add an Approved vote listing who approved in Rietveld, but that's probably just because I'm used to Tarmac requiring an approval vote.
<rvba> rogpeppe1: if you have time for a tiny review: https://codereview.appspot.com/11578043/.  William had a look at it on Friday but asked me a question (I replied) and didn't give me a formal LGTM.
<rogpeppe1> rvba: LGTM
<rvba> Thanks.
<axw> adios
<rogpeppe1> i'd appreciate a couple of reviews of this please: https://codereview.appspot.com/11680043/
<mgz> looking
<TheMue> rogpeppe1: done
<rogpeppe1> TheMue: thanks
<mgz> rogpeppe1: so if I understood correctly yesterday, we have an older copy of this from goamz, and this basically takes the newer version?
<rogpeppe1> mgz: not quite. goamz copied the juju-core version, then updated it. this CL copies back that updated version.
<mgz> aha, fun.
<mgz> approved.
<rogpeppe1> mgz: ta
<mgz> rogpeppe1: have you got opinions on https://coderview.appspot.com/11679044 specifically the status output format?
<rogpeppe1> mgz: i get a 404 for that link
<mgz> +e
<mgz> codereview
<rogpeppe1> mgz: ha, it's funny that coderview is actually a site
<rogpeppe1> mgz: hmm, i think we should probably keep the same format as the python
<rogpeppe1> mgz: it's not as if there's a strong convention for tcp:45
<rogpeppe1> mgz: perhaps we should just change state.Port.String to print tcp/45 instead
<rogpeppe1> mgz: i can't immediately think of anything that would break
<mgz> that would be my preferred option if we can do it without breakage
 * TheMue => lunch
<rogpeppe1> mgz: you've got another review: https://codereview.appspot.com/11548044/
<mgz> ta, I should unheadinsand and land
<mgz> standup all?
<rogpeppe> TheMue: you have a review
<TheMue> rogpeppe: thanks
<dimitern> rogpeppe, TheMue: https://codereview.appspot.com/11713043
<dimitern> TheMue: thanks
<TheMue> dimitern: yw
<rogpeppe> dimitern: you've got a review: https://codereview.appspot.com/11713043/
<dimitern> rogpeppe: thanks
<dimitern> rogpeppe: the server always returns results and they are exactly the same number as the passed args
<rogpeppe> dimitern: we don't want to panic if the client's talking to a dodgy server though, do we?
<rogpeppe> dimitern: otherwise why bother checking the length at all?
<rogpeppe> dimitern: (as you do in the original if statement)
<dimitern> rogpeppe: hmm
<dimitern> rogpeppe: ok, but not sure about the return "", fmt.Errorf("no results found")
<rogpeppe> dimitern: well, yeah, choose some appropriate error msg
<dimitern> rogpeppe: should there be any? the AssignedMachineId from state doesn't behave like this
<rogpeppe> dimitern: that method isn't a bulk method
<rogpeppe> dimitern: if the server returns no results and no error, then there's something gone wrong
<dimitern> rogpeppe: and the one in the client-side api isn't either
<rogpeppe> dimitern: but the server one is
<rogpeppe> dimitern: so we need to decide what we want to do if the server returns no results and no error
<dimitern> rogpeppe: how about ErrUnexpected "unexpected server response" ?
<rogpeppe> dimitern: i'd prefer to be a bit more explicit in the error message.
<dimitern> rogpeppe: fwiw this has to change for all other api client-side methods like this
<rogpeppe> dimitern: how about ErrNoResults "no results found in server response"
<dimitern> rogpeppe: sounds good
<dimitern> rogpeppe: and I'll add a TODO to change all other methods like that
<rogpeppe> dimitern: well various places already handle it
<rogpeppe> dimitern: e.g. deployer.State.unitLife
<dimitern> rogpeppe: like where?
<rogpeppe> 		return "", fmt.Errorf("expected one result, got %d", len(result.Results))
<dimitern> rogpeppe: that's different
<rogpeppe> dimitern: i think it's exactly the same, isn't it?
<dimitern> rogpeppe: Life() returns no error, unitLife is used for Refresh()
<rogpeppe> dimitern: it handles exactly the same case
<rogpeppe> dimitern: it's handling a server call that may return no results and no errors in a robust way
<rogpeppe> dimitern: as i'm suggesting you do
<dimitern> rogpeppe: ok, i'll add the error and add todos as appropriate
<rogpeppe> dimitern: i don't see anywhere else that doesn't do it right
<rogpeppe> dimitern: most places return "expected one result..."
<dimitern> rogpeppe: ok, so why not instead of adding this extra error nobody else will probably use, change it so it works like the other calls, i.e. "expected one result..."
<rogpeppe> dimitern: sgtm
<dimitern> rogpeppe: good point about IsResponsibleForUnit(), but how should I call the client-side unit call?
<dimitern> rogpeppe: unit.IsResponsible?
<rogpeppe> dimitern: hmm
<rogpeppe> dimitern: unit.AmResponsible ?
<dimitern> rogpeppe: IsAgentResponsible?
<dimitern> rogpeppe: CanDeploy?
<rogpeppe> dimitern: i like CanDeploy
<dimitern> rogpeppe: ok
<rogpeppe> dimitern: then the server call becomes CanDeployUnits
<rogpeppe> dimitern: or perhaps just CanDeploy actually
<dimitern> rogpeppe: sgtm
<TheMue> strange, through the whole call chain a tools.ErrNoTools is returned, but when I compare this error the "if" fails :/
<dimitern> rogpeppe: updated https://codereview.appspot.com/11713043
 * dimitern bbiab
<rogpeppe> dimitern: could you remind me why the original code checked the assigned machine id, please?
<rogpeppe> dimitern: after all, we've just received the unit name from a watcher which is presumably watching only units for this particular machine
 * rogpeppe is called to lunch
<dimitern> rogpeppe: because we need to know if we are responsible for deploying that unit
<dimitern> rogpeppe: we're calling assignedmachineid to see if the unit is assigned for deployment, and yes, we only watch units of one machine (the deployer's)
<rogpeppe> dimitern: so what's a scenario where the deployer sees a unit that it's not responsible for?
<rogpeppe> dimitern: isn't the deployer now responsible for deploying subordinate units too? (or is that something that is still in the murky future?)
<dimitern> rogpeppe: if it isn't responsible for a unit it won't try deploying it
<rogpeppe> dimitern: yes, but why would it see a unit it isn't responsible for in the watcher results?
<dimitern> rogpeppe: because the unit is not assigned yet
<rogpeppe> dimitern: doesn't the watcher watch only assigned units?
<dimitern> rogpeppe: hmm
<dimitern> rogpeppe: good question, have to check
<dimitern> rogpeppe: it watches all principal units of a machine (assigned ones), and also all subs of these principals (which might not be assigned yet)
<rogpeppe> dimitern: i don't think a sub can be unassigned
<rogpeppe> dimitern: it's assigned by virtue of being a sub of an assigned principal
<dimitern> rogpeppe: yeah, that's right
<rogpeppe> dimitern: i *think* i see the issue
<dimitern> rogpeppe: but not sure assignedmachineid returns != "" for subs
<rogpeppe> dimitern: i think it probably would
<rogpeppe> dimitern: but anyway...
<rogpeppe> dimitern: it seems like the watcher doesn't tell you if it's telling you about a unit that's assigned or a unit that's been unassigned
<dimitern> rogpeppe: it returns the principal's machine id (if assigned)
<rogpeppe> dimitern: it just tells you about changes
<dimitern> rogpeppe: that seems right, yes
<rogpeppe> dimitern: even though it actually knows what the changes are (added or removed)
<rogpeppe> dimitern: this seems a bit odd, because you can't understand what the watcher's telling you without querying the state, but by that time it might be out of date.
<rogpeppe> dimitern: perhaps this was done as part of the recent StringsWatcher changes
<dimitern> rogpeppe: how could it be out of date, when you're just checking that?
<dimitern> rogpeppe: i don't get you, sorry
<rogpeppe> dimitern: well, it might be telling you that it's been just unassigned, but by the time you get around to checking, it's been assigned again.
<rogpeppe> dimitern: i guess it doesn't matter
<rogpeppe> dimitern: it just seems a bit odd, and it's the only reason for this API call you're doing
<rogpeppe> dimitern: if the watcher specified which units were assigned and which units were unassigned, we wouldn't need it
<dimitern> rogpeppe: perhaps, but we still need to check if it's assigned between when we read from the watcher changes chan and when we're about to deploy based on that decision, won't we?
<rogpeppe> dimitern: not really, because that only narrows the race window AFAICS
<rogpeppe> dimitern: anyway, i now think i understand
<dimitern> rogpeppe: :)
<dimitern> rogpeppe: it's a bit hairy i guess
<rogpeppe> dimitern: there's a minor security issue which is that this call allows any machine agent to probe for the existence of any unit
<dimitern> rogpeppe: how come?
<rogpeppe> dimitern: issue the call and see what result you get
<rogpeppe> dimitern: if you get ErrNotFound, the unit doesn't exist; if you get ErrPerm, it does
<dimitern> rogpeppe: you won't get err not found
<rogpeppe> dimitern: no?
<dimitern> rogpeppe: where?
<rogpeppe> dimitern: from CanDeploy
<rogpeppe> dimitern: pass it a unit name that doesn't exist
<rogpeppe> dimitern: d.st.Unit will return ErrNotFound (or similar)
<dimitern> rogpeppe: yeah, and you get result!=true, so you get errperm
<rogpeppe> dimitern: ah, i see
<dimitern> rogpeppe: ah, you mean client-side?
<rogpeppe> dimitern: no, i'd misread your logic, sorry
<dimitern> rogpeppe: you'll get nil always, but any next call will return errperm
<rogpeppe> dimitern: i think that code is odd though - it's odd that you can't get false without an error
<dimitern> rogpeppe: not from the server
<dimitern> rogpeppe: it always returns errPerm on false
<rogpeppe> dimitern: yeah, that's weird
<rogpeppe> dimitern: how about never returning an error.
<rogpeppe> ?
<dimitern> rogpeppe: why?
<rogpeppe> dimitern: because there's no information content in the error
<mgz> rogpeppe: ah, I'll have to look at godoc example code stuff, that's neat
<rogpeppe> dimitern: result==false => err==ErrPerm; result==true => err==nil
<rogpeppe> mgz: yeah, it works quite nicely.
<rogpeppe> mgz: it's nicer when you can actually have some output - then gotest will run the code and check the output for you
<dimitern> rogpeppe: let's not hide details like that
<rogpeppe> dimitern: i'm not sure what you mean
<rogpeppe> dimitern: i was just describing the current behaviour
<dimitern> rogpeppe: I prefer not to do that
<rogpeppe> dimitern: better might be to return an error if we get a non-not-found error from Unit; otherwise return true iff the assigned machine matches
<dimitern> rogpeppe: that might work
<dimitern> rogpeppe: ok, sounds good
<rogpeppe> dimitern: cool
<rogpeppe> dimitern: then you won't be discarding the error always like you are currently
<dimitern> rogpeppe: agreed
<rogpeppe> dimitern: i've made a couple more comments
<dimitern> rogpeppe: thanks
<rogpeppe> mgz: golang.org/pkg/testing documents the example code conventions FYI
<rogpeppe> mgz: (not the most intuitive place!)
<mgz> yeah, I was surprised you added it to the test code, but it makes some sense
<dimitern> rogpeppe: wait a moment
<dimitern> rogpeppe: if we get notfounderr from st.Unit, we still don't know whether we're talking to an authorized entity
<dimitern> rogpeppe: that's why i think we still should return errperm, like in the other facades
<rogpeppe> dimitern: i don't think it should matter
<dimitern> rogpeppe: once we have the unit and it's assigned, we can call authOwner, etc.
<rogpeppe> dimitern: the question the API call is answering is: "can i deploy this unit?"
<dimitern> rogpeppe: it matters, because returning not found is exactly what you described as "minor security risk"
<rogpeppe> dimitern: i don't think it matters if the answer is "false" for units you can't deploy
<rogpeppe> dimitern: i'm not suggesting that we return not found
<rogpeppe> dimitern: i'm suggesting we return false instead of ErrNotFound
<dimitern> rogpeppe: the question, as always, implicitly like for every API call is, "can I do this at all?"
<rogpeppe> dimitern: you can ask the question
<rogpeppe> dimitern: that's not an error
<dimitern> rogpeppe: it's an errperm if we don't know if you're supposed to ask / know that
<rogpeppe> dimitern: but you can't find out anything more than "yes you can deploy this unit" or "no you cannot deploy this unit or it doesn't exist"
<dimitern> rogpeppe: isn't that the whole point of the call?
<rogpeppe> dimitern: the point of the call is to ask if we can deploy a given unit
<dimitern> rogpeppe: it's a yes/no question
<rogpeppe> dimitern: why should we return ErrPerm if the answer is false?
<rogpeppe> dimitern: i can't see any possible security hole
<rogpeppe> dimitern: or information leak
<rogpeppe> dimitern: and it just makes the client code more complex
<rogpeppe> dimitern: because it then has to explicitly check for ErrPerm
<dimitern> rogpeppe: if it's false, and we're authorized, i agree it should be false, nil
<rogpeppe> dimitern: ha ha!
<rogpeppe> dimitern: it can't be false if we're authorized!
<dimitern> rogpeppe: not now, but it will be :)
<rogpeppe> dimitern: how so?
<dimitern> rogpeppe: i'll change it to return false, nil if we're authorized and cannot deploy it
<dimitern> rogpeppe: is that what you want?
<rogpeppe> dimitern: i'm not sure. if i'm the machine agent, the watcher tells me about a unit, the unit gets unassigned, i ask whether i can deploy the unit, then i don't think i should get ErrPerm. it should just return false.
<rogpeppe> dimitern: i can't see any security issue with doing that
<dimitern> rogpeppe: so the cases are: 1) false, errperm (not found, etc.), 2) true, nil (assignedmachineid == entity.tag), 3) any other case (currently) is the same as 1)
<rogpeppe> dimitern: and it means that the machine agent doesn't need to say "oh, ErrPerm? that actually means I can't deploy the unit"
<dimitern> rogpeppe: and we can change 3) false, nil to mean (no error, and you cannot deploy it)
<dimitern> rogpeppe: but I don't see how we can do that
<dimitern> rogpeppe: and still have permission checks
<dimitern> rogpeppe: i.e. not leak unnecessary info
<rogpeppe> dimitern: i suggest: 1) false, some-non-not-found-error; 2) true, nil; 3) false, nil
<rogpeppe> dimitern: 1) happens when we get an error talking to the state
<rogpeppe> dimitern: 2) happens if you can deploy the unit
<rogpeppe> dimitern: 3) happens if the unit isn't assigned to the given machine agent or doesn't exist.
<dimitern> rogpeppe: 1) the error has to be errperm
<rogpeppe> dimitern: why?
<dimitern> rogpeppe: and 3) you cannot know which machine agent we're talking about
<dimitern> rogpeppe: at server-side
<rogpeppe> dimitern: huh? we know the authenticated machine agent. that's how we're checking AuthOwner, right?
<dimitern> rogpeppe: I mean, we know the authed entity, but not how the unit relates to it, until we see assigned machine id
<rogpeppe> dimitern: yes, and?
<dimitern> rogpeppe: oh boy
<rogpeppe> dimitern: i don't (currently) see how a non-not-found error can leak info
<dimitern> rogpeppe: I realized I have an error there
<dimitern> rogpeppe: it has to be authOwner(entity.Tag) && entity.Tag == assignedMachineId to be true, nil
<dimitern> rogpeppe: that's for 2)
<dimitern> rogpeppe: and for 1), I already agreed non-not-found errors should be returned as is, otherwise - errPerm
<rogpeppe> dimitern: how can authOwner(entity.Tag) ever be true?
<rogpeppe> dimitern: given that the entity is a unit tag
<dimitern> rogpeppe: entity.Tag => authorized entity's tag
<dimitern> rogpeppe: that's what I meant
<rogpeppe> dimitern: what's wrong with authOwner(assignedMachineId) as i suggested?
<rogpeppe> dimitern: for true, nil
<dimitern> rogpeppe: ah, right, sorry - too many variables :)
<dimitern> rogpeppe: i agree with that
<rogpeppe> dimitern: so, i can't see a time when it's appropriate to return ErrPerm currently
<dimitern> rogpeppe: but that's what I did anyway
<dimitern> rogpeppe: machineId, err := unit.AssignedMachineId(); machineTag := state.MachineTag(machineId); AuthOwner(machineTag), no?
<rogpeppe> dimitern: istm that by returning ErrPerm there, we're trying to say "you can't even ask that question", but we're answering it anyway
<dimitern> rogpeppe: in 2 cases at least
<rogpeppe> dimitern: yes
<rogpeppe> AuthOwner(state.MachineTag(machineId)) i think i suggested in my comment
<dimitern> rogpeppe: 1) not-found error in st.Unit() -> errPerm, 2) err != nil in assignedmachineId -> errPerm, i think
<dimitern> rogpeppe: and 3) assignedMachineId == "" -> errPerm, i think as well
<dimitern> rogpeppe: i.e. until we know the assignedMachineId != "" and err == nil, the error is errPerm, after that it's errPerm only when !authOwner(assignedMachineTag)
<dimitern> rogpeppe: the truth table grew a lot :)
<rogpeppe> dimitern: sorry, had to take a parcel at the door
<rogpeppe> dimitern: i'd like it if a machine agent cannot get an error in the normal course of asking the CanDeploy question
<dimitern> rogpeppe: should I summarize it again? (i got confused myself) - 1) err = st.Unit(); err != notFound -> err, else -> errPerm, 2) assignedId, err = u..(); err != nil -> errPerm (i think), else assignedId == "", definitely errPerm, 3) !authOwner(assignedTag) -> errPerm, 4) else - true, nil
<dimitern> rogpeppe: how about that?
<rogpeppe> [15:41:44] <rogpeppe> dimitern: i'd like it if a machine agent cannot get an error in the normal course of asking the CanDeploy question
<rogpeppe> dimitern: i don't see how your suggestion addresses that
<dimitern> rogpeppe: how is that not leaking information?
<rogpeppe> dimitern: i don't think it should be a permission-denied error for a deployer to ask about a unit it's just been told about
<rogpeppe> dimitern: what info does it leak?
<dimitern> rogpeppe: suppose I'm MA for m0 and asking can I deploy u1, which is assigned to m1 - shouldn't that be errPerm ?
<rogpeppe> dimitern: if i just get the answer "false", what information have i gathered?
<dimitern> rogpeppe: that perhaps there indeed is a u1 and I'm not allowed to touch it
<dimitern> rogpeppe: whereas errPerm means you can't touch u1 anyway, so the question is irrelevant
<rogpeppe> dimitern: that's no information. it doesn't tell you anything about the existence or non-existence of u1
<rogpeppe> dimitern: "perhaps"
<dimitern> rogpeppe: no perhaps, sorry, if there isn't a u1 you'll get errPerm anyway earlier
<rogpeppe> dimitern: i suggest you should get false in that case
<dimitern> rogpeppe: it'll be not found -> errPerm
<dimitern> rogpeppe: so if u1 exists, I'll get false, nil, and if it doesn't i'll get false, nil, is that what you're saying?
<rogpeppe> dimitern: exactly
<dimitern> rogpeppe: sorry I mean u1 doesn't exist -> false, errPErm
<rogpeppe> dimitern: yeah, i suggest false, nil there
<dimitern> rogpeppe: the thing is the existence of u1 changes the result
<rogpeppe> dimitern: how so?
<rogpeppe> dimitern: you get false, nil in both cases
<dimitern> rogpeppe: u1 doesn't exist (not found) -> false, errPerm, earlier; u1 exists -> false, nil
<rogpeppe> dimitern: for the not found case, i suggest (false, nil) as i said earlier
<dimitern> rogpeppe: why?
<dimitern> rogpeppe: that's different from all other api calls so far
<rogpeppe> dimitern: because it's perfectly ok for a machine agent to ask about a unit it's just been told about but happens to have been deleted.
<dimitern> rogpeppe: all others return errPerm on not found
<rogpeppe> dimitern: i think this *is* a bit different
<dimitern> rogpeppe: don't assume what MA's been told
<dimitern> rogpeppe: we're talking about the subtle case of a rogue agent here
<rogpeppe> dimitern: i'm not making that assumption
<dimitern> rogpeppe: what stops an agent from asking for an arbitrary unit?
<rogpeppe> dimitern: it can ask for an arbitrary unit, but it can't find out any information about it at all
<dimitern> rogpeppe: and getting false, nil means either (unit exists, but you cannot deploy it) or (unit does not exist)
<rogpeppe> dimitern: unless that unit has been assigned to it
<rogpeppe> dimitern: yeah, that seems good to me
<rogpeppe> dimitern: you're allowed to ask the question
<dimitern> rogpeppe: whereas all other calls return errPerm when unit doesn't exist and may return errPerm down the line if you're not authorized
<rogpeppe> dimitern: that's usually because it's an error to try to talk to an entity that doesn't exist
<dimitern> rogpeppe: returning nil when errPerm should be returned I think is wrong
<rogpeppe> dimitern: in this case, for this particular call, i think that's not true
<dimitern> rogpeppe: either that or because you're not authorized
<dimitern> rogpeppe: I don't see how this is different from Life for example
<rogpeppe> dimitern: sure. but i don't want to clutter the nice case for the sake of the rogue case (especially when it makes no difference either way)
<dimitern> rogpeppe: why should you be allowed to ask for life of arbitrary entities and not get errperm?
<dimitern> rogpeppe: security is important, even at expense of a bit more complicated ("cluttered") code
<rogpeppe> dimitern: because a life return actually tells you something about the entity
<dimitern> rogpeppe: even when it's an edge case - designing a good api should account for rogue agents
<rogpeppe> dimitern: saying "you cannot deploy this" does not tell you anything at all about the entity
<rogpeppe> dimitern: i agree entirely
<rogpeppe> dimitern: but there is no security implication from returning (false, nil) for not found
<rogpeppe> dimitern: it just makes the client code slightly nicer
<dimitern> rogpeppe: well, if we return false, nil in most cases, should we ever return errPerm then?
<rogpeppe> dimitern: i don't think so.
<dimitern> rogpeppe: so we're keeping the error return only in case there's a problem talking to state and nothing else?
<rogpeppe> dimitern: we have no control over the units that a machine agent asks about, but we can make sure it never finds out any info on units that it cannot deploy
<rogpeppe> dimitern: yes
<dimitern> rogpeppe: that's the reverse of what we agreed to do in general, but in this specific case i'm beginning to see your point
<dimitern> rogpeppe: at least I can't see any unnecessary leaks, it's just weird
<rogpeppe> dimitern: thanks. it is indeed a specific case.
<dimitern> rogpeppe: ok, will change it like that - no errPerm, only err if st.Unit() fails with !notFound
<rogpeppe> dimitern: something like this? http://paste.ubuntu.com/5904401/
<dimitern> rogpeppe: not quite
<rogpeppe> dimitern: oops, http://paste.ubuntu.com/5904406/
<dimitern> rogpeppe: AssignedMachineId returns NotAssignedErr most of the time and NotFound only in the case of a subordinate referring to an unknown principal
<rogpeppe> dimitern: ah, good point.
<rogpeppe> dimitern: perhaps just ignore the error from AssignedMachineId
<rogpeppe> dimitern: or http://paste.ubuntu.com/5904411/
<rogpeppe> dimitern: (probably better)
<dimitern> rogpeppe: well, all I really need is machineId != ""
<dimitern> rogpeppe: I don't think so
<dimitern> rogpeppe: if it indeed is notAssigned, then machineId == ""
<dimitern> rogpeppe: and that means false, nil basically
<dimitern> rogpeppe: (according to what we agreed, substituting errPerm for nil)
<rogpeppe> dimitern: yeah. depends if you care about returning some network error if you get one when calling AssignedMachineId
<dimitern> rogpeppe: will it be that bad to just ignore the err from assignedmachineid and return false, nil then?
<rogpeppe> dimitern: i'm thinking it would be a bit naughty but ok really
<rogpeppe> dimitern: actually, i think it is worth doing properly
<dimitern> rogpeppe: err != nil && machineId == "" -> false, nil
<rogpeppe> dimitern: otherwise a machine agent could potentially destroy a unit because of a db error
<dimitern> rogpeppe: possibly logging the err, what's wrong with that?
<dimitern> rogpeppe: no, if you're not responsible for it, you won't recall it
<dimitern> rogpeppe: or deploy it
<rogpeppe> dimitern: if the unit *does* happen to be assigned to the machine and there's a network error calling AssignedMachineId, you'll see (false, nil) and erroneously undeploy the unit
<dimitern> rogpeppe: it'll be recalled only if it's already deployed and you decide you're not responsible for it anymore
<rogpeppe> dimitern: thus i think it's worth checking for NotAssignedError explicitly
<dimitern> rogpeppe: yeah, good point
<rogpeppe> dimitern: yeah, but that's a problem if you're *told* you're not responsible for it, but you actually are
<dimitern> rogpeppe: so machineId == "" || err == NotAssigned -> false, nil then?, other err != nil -> false, err
<dimitern> rogpeppe: but that seems like a security leak, because we haven't yet checked the authOwner
<rogpeppe> dimitern: yes, in the hypothetical case that a malicious agent can induce a db error for the AssignedMachineId call but not the Unit call, we can leak information about the existence of a given unit
<rogpeppe> dimitern: i considered that, but really, i think it's vanishingly likely, and it's more important to behave well in the error case.
<rogpeppe> s/likely/unlikely :-)
<dimitern> rogpeppe: hmm
<dimitern> rogpeppe: it seems remote indeed
<dimitern> rogpeppe: the check has to be err == NotAssigned || err == NotFound || machineId == "" -> false, nil, else false, err
<rogpeppe> dimitern: not quite, but i'm sure you'll get it right in the code :-)
<dimitern> rogpeppe: why not? skip the notFound?
<rogpeppe> dimitern: it's more like: err != nil && err != NotAssigned && err != NotFound { return false, err} else if err != nil { return false, nil } else return AuthOwner(machineTag), nil
<dimitern> rogpeppe: it'll mean faulty state - a subordinate referring to an invalid principal, hence machineId of the principal cannot be determined
<dimitern> rogpeppe: ah, right :)
<rogpeppe> dimitern: but as i said, i'm sure you'll get it right
<dimitern> rogpeppe: will see :)
<rogpeppe> dimitern: thanks for bearing with me; sorry it's been a bit long-winded :-)
<dimitern> rogpeppe: np, I've seen some errors in my ways anyway :)
<dimitern> rogpeppe: and thanks
<rogpeppe> dimitern: i think it's useful to have this kind of conversation anyway around the security issues, so we know where we're coming from
<dimitern> rogpeppe: definitely
<TheMue> *pheeeeeeew*
<TheMue> still a bit ugly, but autosync works
<dimitern> TheMue: \o/
<dimitern> rogpeppe: updated https://codereview.appspot.com/11713043/
<rogpeppe> dimitern: reviewed
<dimitern> rogpeppe: thanks
<mramm> hey
<mramm> anybody know if the latest tools have been uploaded to the HP cloud public bucket?
<mramm> m_3 reports that he had to upload tools with the latest, or else he got the previous version tools
<mgz> mramm: they may well not have been
<mgz> I'm not even certain who has the creds
<mramm> arosales: do you know?
<arosales> most folks here have creds.  I can upload now though
<arosales> mgz, I think you have creds to the object store for juju-public-tools
 * arosales uploading now . .  .
<mgz> arosales: I see an email to jam where you added him, I guess I could have used those details...
<arosales> mgz, I think you have your own unique login
<mgz> arosales: if so, I can't find it in my email archive ;_;
<arosales> I think ian or someone asked for a set of folks to have access sometime in may
<arosales> mgz, no worries
 * arosales uploading now
<mgz> do we also need to do anything simplestreams related?
<arosales> mgz, I think the public bucket has the image/tools info for AZ1
<arosales> but AZ2/AZ3 aren't enabled, and haven't been :-/
<arosales> mramm, 1.11.4 tools should be there now
<mramm> arosales: thanks!
<arosales> m_3, ^
<rogpeppe> ha! reviewed everything!
<rogpeppe> some test race fixes, in case anyone fancies a review: https://codereview.appspot.com/11723043
<rogpeppe> and... that's me for the day
<rogpeppe> see y'all tomorrow
<thumper> morning
 * thumper takes dog for a walk
<thumper> trivial review for someone: https://codereview.appspot.com/11745043
<thumper> hi mramm
<thumper> mramm: how goes oscon?
<mramm> great
<mramm> charm school this morning was good
<mramm> interest in juju from folks at intel, and several other places
<mramm> people seemed to get it
<thumper> cool
<mramm> gui and local provider were highlights
<mramm> lots of questions about LXC support in general
<mramm> and some definite interest in manual provisioning
 * thumper nods
<thumper> it seems that mgz is on top of the network stuff, will try to touch base with him tonight my time, Wed morning his time
<mramm> cool
<thumper> I'm poking around some manual provisioning ideas
<mramm> In about an hour I have an hour of booth duty
<thumper> trying not to get too stressed about not knowing how the network stuff is going
<mramm> haha
<mramm> ;)
<thumper> juju booth or ubuntu booth?
<thumper> btw, what sort of questions around LXC?
 * davecheney waves
#juju-dev 2013-07-24
<davecheney> arosales: ping
<davecheney> wallyworld: ping
<bigjools> morning
<davecheney> jamespage: ping,
<davecheney> thank you for uploading 1.11.4 into saucy
<davecheney> any issues ?
 * thumper waves
<davecheney> thumper: if we branch 1.12
<davecheney> is the bot going to be able to cope with this ?
<thumper> NFI
<davecheney> survey says: fuckup
<davecheney> thumper: do I need to wait for william on branching 1.12
<davecheney> or do you speak for him ?
<davecheney> or in fact does he speak for you ?
<thumper> :)
<davecheney> it is hard to make him out, from his exaltedly high architect's chair
<thumper> william is on holiday
<thumper> if we need it done now
<thumper> do it
<thumper> I'll take any heat - although I expect only the good kind
<davecheney> we don't need to do it _RIGHT_NOW_
<davecheney> but it feels like the right time
<thumper> do it
<davecheney> and that way we can have a stable release in backports when you go to Isle of Man
<davecheney> this bit was imporant to mramm
 * thumper nods
<davecheney> important either
<davecheney> sodding, shit
<davecheney> the bot will have to own the series branch right ?
<thumper> davecheney: why do we want a bot controlled series branch?
<davecheney> so the bot owns juju-core/trunk
<davecheney> is it ok for ~gopher/juju-core/1.12 to be a thing ?
<thumper> sure is
<davecheney> /s/gopher/gophers or whatever our team is called
<davecheney> ok, let me see if I can figure out how to do that
<davecheney> where can I list the series for a project ?
<davecheney> apart from the graphy thing on the front page ?
<davecheney> ah, ok, i think i have it now
<axw> hey folks. does whatever bot that does the merging not also update bug status to "fix committed"?
<axw> is that a manual step?
<axw> also, is there an up-to-date doc describing merge procedures? cos CONTRIBUTING talks about lbox submit, which apparently isn't used anymore
<bigjools> axw: the bot should mark linked bugs as committed, yes
<bigjools> but they have to be linked to the branch
<bigjools> errr not the bot sorry, Launchpad's branch scanner will do it
<bigjools> when it sees the branch merged
<bigjools> CONTRIBUTING needs fixing, as you noticed :)
<axw> I assume that would've run in the last 12 hours
<axw> https://code.launchpad.net/~axwalk/juju-core/lp1203935-ec2-octal-prices/+merge/176315
<axw> is merged
<bigjools> yes
<axw> but the bug is still in progress
 * bigjools looks
<axw> but... the mp says pending?
<bigjools> huh what's up here
<bigjools> oh the pending is because reviews are done on shiteveld not LP, so there's no approvals on the MP itself
<bigjools> lbox should do that really
<bigjools> what am I saying, lbox should die
<thumper> :)
 * thumper tries to focus
<axw> is the pending status stopping the scanner from updating the bug?
<axw> or something else is wrong?
<bigjools> axw: not entirely sure, thumper might know?
<bigjools> (he wrote most of that code)
 * thumper looks up
<axw> bigjools: nps, thank you
<axw> ah
<axw> thumper: my MP is merged, but the bug wasn't automatically updated
<bigjools> thumper: branch scanner has not set fix committed on a linked bug where the branch is merged
<axw> curious what I did wrong, or missed
<thumper> LP has never done that AFAIK
<thumper> but people wrote scripts
<bigjools> really?
<thumper> really
<bigjools> does tarmac do it?
<thumper> I think so
<bigjools> I've seen it done
<bigjools> ah
<axw> tarmac?
<bigjools> axw: was the bug linked before marking the MP approved?
<bigjools> Tarmac is the landing bot code
<axw> bigjools: yes
<bigjools> ok so either it's a bug in Tarmac or something else is wrong, but no idea what!
<axw> mkay
<bigjools> I've definitely had this working on other projects
<axw> never mind, not a big deal for now
<bigjools> yeah, you can manually mark it
<davecheney> bigjools: how do I branch from a tag ? bzr branch -r ... lp:... ?
<bigjools> davecheney: bzr help revisionspec
<bigjools> bzr branch -r tag:BLAH lp:thing
<davecheney> thank you
<davecheney> i would grumble that every dvcs has a different convention for rev specs
<davecheney> but actually i'm just a dumpass
<davecheney> dumbass
<bigjools> heh, np
<bigjools> bzr's help is actually very good, I reckon
<davecheney> thumper: would you think it is safe to say to #eco that 1.12 won't have the worlds best local provider, and they should follow 1.13 ?
<thumper> who is #eco?
<thumper> um... world's best?  it is likely to get continual improvements
<thumper> hopefully we won't be too long between non-dev releases
<davecheney> #eco is m_3 marcoceppi etc, who lurk elsewhere
<thumper> so 1.14 should be soon enough (I hope)
<marcoceppi> o/
<davecheney> to be more specific, 1.12 (stable) won't see a lot of lxc fixes
<davecheney> we'll deliver a better version in 1.14
<thumper> correct
<davecheney> thumper: cool, thanks for clarifying
<thumper> I don't see us spending much time fixing things in 1.12
 * davecheney has given up trying to branch locally
<davecheney> too slow on this tiny internet
<davecheney> https://launchpad.net/juju-core/1.12
<bigjools> davecheney: add a trunk series and you'll get a prettier graph
<bigjools> oh wait there is one, ignore me
<davecheney> bigjools: pretty isn't the word i'd use for it :)
<davecheney> elongated springs to mind
<bigjools> I did say "prettier".... it's all relative :)
<bigjools> it reflects your lack of release branches though
<davecheney> bigjools: if you squint, the shape looks like a cowboy hat
<axw> so.. I might be full of crap, but it seems that one of the cmd/juju tests is a bit wrong/broken
<axw> UpgradeJujuSuite.TestUpgradeJujuWithRealUpload always builds tools from the trunk
<bigjools> davecheney: seems appropriate
<davecheney> axw: wouldn't surprise me
<axw> is this intended?
<davecheney> axw: probably not
<axw> actually
<axw> I am full of crap
<axw> ignore me
<davecheney> axw: glad you figured it out :P
 * thumper might do some more clean up of code
<thumper> been writing documents
<davecheney> urgh, lunch coffee tastes like armpit
<wallyworld_> thumper: pingy ping
<thumper> wallyworld_: hey
<wallyworld_> so, i've finally done some work in my validation branch to satisfy rog hopefully. to do it as a plugin.... i was thinking i'd make a plugin dir, add the cmd in there, and update the release scripts
<thumper> wallyworld_: hmm... sounds interesting...
<thumper> however
<wallyworld_> so the juju-foo binary is put in the deb
<thumper> the point I raised about doing it as a plugin is to keep it out of the tree
<thumper> and out of band
<thumper> if you are just going to put it into the tree and build a binary and ship it
<wallyworld_> i think it belongs in the tree
<thumper> just keep it in the freaking tree as a command
<wallyworld_> ok
<thumper> it makes zero sense to have a plugin in the tree
<wallyworld_> it belongs in the tree because the validation is tied to the version of juju
<thumper> command is best IMO then
<wallyworld_> ok
<thumper> another trivial gc prefix branch https://codereview.appspot.com/11754043
<wallyworld_> when i say tied to..... it's not right now but there may well be things we build into the metadata that only later versions of juju can rad
<wallyworld_> read
<thumper> wallyworld_: that's good enough for me
<wallyworld_> ok, ta. will re-propose
<bigjools> I have a failing test locally, what is wrong here folks? http://pastebin.ubuntu.com/5906306/
<davecheney> bigjools: "-":"0"
<davecheney> ^ weird key in the map
<bigjools> yeah - nothing I changed either
<davecheney> bigjools: o_O, where does the "-" leak in from status ...
<bigjools> I dunno, I know very little about this stuff
 * davecheney runs tests
<bigjools> I suspect a local setup problem but I've no idea how to work out what
<bigjools> the test looks fragile
<davecheney> bigjools: it is
<davecheney> i tried to use the same test harness to validate both json and yaml outputs
<wallyworld_> thumper: you free for a hangout sometime?
<m_3> hey gang, anyone have a quick pointer to syntax for 'relation-get' from within a hook?   I'm looking for args
<davecheney> m_3: shoot
<davecheney> what have you tried ?
 * davecheney goes to look at the source
<thumper> wallyworld_: sure, if you don't mind getting interrupted by me being a manual gps service
<m_3> getting breakage from... popen("relation-get --format json - $node", 'r');
<wallyworld_> thumper: it can wait till you're free
<m_3> davecheney: I'm guessing that I can't pass in the unit-id
<davecheney> m_3: - $node looks suspect
<m_3> yeah
<davecheney> did you mean -- $node ?
<thumper> wallyworld_: ok
<davecheney> ahh, this is interesting
<davecheney>                 if c.Key = args[0]; c.Key == "-" {
<davecheney>                         c.Key = ""
<davecheney>                 }
<m_3> davecheney: nope, the syntax was a single '-' apparently
<davecheney> looks like we consume it and remove it
<thumper> m_3: do you have more context
<thumper> one line isn't helping with language either
<m_3> thumper: from within a hook, you'll typically call `relation-list`
<m_3> thumper: then loop on the results of that and do a `relation-get --format json - $remote_unit` to get info for each of the related units
<thumper> m_3: right, so I'm assuming bash?
<m_3> actually python
<thumper> in which case the $ looks wrong
<m_3> popen("relation-get --format json - $node", 'r');
<thumper> m_3: can you pastebin the whole thing?
<m_3> sure
<m_3> hahaha
<m_3> ok, sorry... php
<m_3> fuck my life
<m_3> mediawiki charm... lemme find the link
<davecheney> m_3: that's going on the quotes page
<m_3> but really, I'm guessing something changed between py and go wrt this '-'?
<m_3> do we expect the current relation get to accept a unit-id?
<davecheney> pls hold, bootstrapping
<davecheney> actually, don't need to do that
<m_3> thumper: context... http://bazaar.launchpad.net/~charmers/charms/precise/mediawiki/trunk/view/head:/hooks/slave-relation-changed
<bigjools> do you guys have any details for the Brisbane induction sprint?
<davecheney> bigjools: no, i was going to ask you
<bigjools> heh
<davecheney> it's good to see we're sticking to the code
<bigjools> the first rule of induction club ...
<davecheney> indeed, arrive at unknown destination, begin a linear search for your hotel
 * bigjools emails persons
<thumper> m_3: dude, that is PHP!
<thumper> not python
<m_3> right
<bigjools> lulz
<m_3> was only looking at the system call to relation-get, not the surrounding code
 * thumper vomits a little into his mouth
 * m_3 cries
<thumper> $node now makes sense
<thumper> but I can't help you
<m_3> but really, the same thing is done in bash and python in other charms
<m_3> relation-list... relation-get - unit-id
<thumper> sure...
<thumper> my lack of ability to help is more around NFI what the actual commands do
<thumper> or return
<thumper> etc
<m_3> :)
<m_3> yeah, I'll start grepping through core
<thumper> I do think it is funny that one uses exec
<thumper> the other popen
<thumper> to do the same shit
<m_3> I'm not even going to start on that
<thumper> heh
<thumper> wallyworld_: https://plus.google.com/hangouts/_/2e05ac3602809a0997b9970e5bf785f4c607e745?hl=en
<davecheney> same from the old code
<davecheney>         if self.options.settings_name == "-":
<davecheney>             self.options.settings_name = ""
<davecheney> this - thing is a noop
<davecheney> buuuuuut, i wonder if gnuflags parses it correctly ...
<m_3> I think I'm just gonna punt on this one.  I'll find some other stack that's working and pretty to look at in the gui
<m_3> thanks for the help
<davecheney> m_3: shit
<davecheney> that sucks
<davecheney> please log a bug
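The `-` behaviour davecheney quotes from both the Go and old Python code (consume a leading `-` key and treat it as "all settings", with an optional unit id following) can be sketched in isolation like this. This is a simplified illustration of the quoted snippet, not the actual jujuc argument parser; the function name and the two-argument shape are assumptions for the sake of the example.

```go
package main

import "fmt"

// parseRelationGetArgs sketches how `relation-get KEY [UNIT]` treats its
// positional arguments, per the snippet quoted above: a key of "-" is a
// historical sentinel meaning "all settings" and is consumed (turned into
// the empty key), and an optional second argument names the remote unit.
func parseRelationGetArgs(args []string) (key, unit string) {
	if len(args) > 0 {
		if key = args[0]; key == "-" {
			key = "" // "-" means "give me every key", i.e. no key filter
		}
		args = args[1:]
	}
	if len(args) > 0 {
		unit = args[0]
	}
	return key, unit
}

func main() {
	// What the mediawiki charm's `relation-get --format json - $node` boils
	// down to after flag parsing: all settings for one remote unit.
	k, u := parseRelationGetArgs([]string{"-", "mediawiki/0"})
	fmt.Printf("key=%q unit=%q\n", k, u) // key="" unit="mediawiki/0"
}
```

Under this reading the `-` itself is a no-op for the hook author; the open question in the channel was whether gnuflag swallows it before this code ever sees it.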
<bigjools> davecheney: is there a place we can store environment bootstrap data? we need the same data at env teardown time
<davecheney> bigjools: i'm thinking JUJU_HOME
<davecheney> we put a lot of other crap in there
<davecheney> ssl certs
<davecheney> caches
<bigjools> directory?
<davecheney> yup
<bigjools> on the client?
<davecheney> yes
<davecheney> maybe I misunderstood your question
<bigjools> I mean user's machine
<bigjools> what happens if they do stuff from a different machine later
<jtv2> What we need to store is information about resources on the cloud that we've allocated as part of setting up the environment.
<jtv2> So it has to be centralized.
<bigjools> what he saud
<bigjools> said
<bigjools> davecheney: nowhere centralised then?
<jtv> davecheney: can the provider update its own config for this sort of thing?  (It happens to be a configurable resource - provided by the user or allocated by the provider)
<davecheney> bigjools: the two options are
<davecheney> ~/.juju and the state
<davecheney> jtv: yes, in theory
<davecheney> in practice no, it probably doesn't have access to the state
<bigjools> ~/.juju is a non-starter really
<davecheney> oh sorry
<davecheney> one more place
 * davecheney smacks head
<davecheney> the private bucket
<bigjools> otherwise destroy-environment would not work except on your original client
<davecheney> bigjools: it is unlikely anything will work except on the original client
<bigjools> private bucket dear liza
<bigjools> srsly?
<davecheney> if the user does not have a whole copy of ~/.juju
<jtv> The private bucket...  I guess that's the EC2 equivalent of our storage account & container.
<davecheney> no certs baby
<bigjools> so if you local machine loses its HD .... ?
<bigjools> your*
<davecheney> you are royally fucked
<bigjools> !!!
<bigjools> dafuq
<bigjools> private bucket means you'd need to know the data to get the data ...
<davecheney> bigjools:  i can tell that you and I are going to get on like a house on fire in BNE
<davecheney> me telling you things
<davecheney> you trying not to hit me
<bigjools> that wasn't me cutting his internets, honest
<rogpeppe> bigjools: what environment bootstrap data do you want to store?
<bigjools> rogpeppe: otp, back in a while
<rogpeppe> bigjools: okeydoke
<bigjools> rogpeppe: right
<bigjools> rogpeppe: when we bootstrap, we have to create some storage account objects that need naming
<bigjools> we're configuring that to use existing storage accounts at the moment but there's also no reason why they can't get created by bootstrap code
<bigjools> but if we do that we also need to delete them when destroy-env runs
<rogpeppe> bigjools: i think the private bucket is exactly what you need here. it's where (for instance) the id of the bootstrap instance is stored.
<rogpeppe> bigjools: so the bootstrap code already writes there
<bigjools> rogpeppe: I was worried we had a chicken and egg situation
<rogpeppe> bigjools: i don't *think* so...
<bigjools> is the private bucket creds/details cached somewhere?
<rogpeppe> bigjools: you name the private bucket in your environments.yaml
<rvba> bigjools: what about creating the storage with a fixed name, derived from the environment name?  This way we won't need to store the name anywhere, because it can be derived from the env name.
<bigjools> rogpeppe: can't do that, it has to be globally unique
<rogpeppe> rvba: that's exactly how the private bucket works
<bigjools> err rvba sorry
<rogpeppe> bigjools: that's the same as the ec2 private bucket
<rogpeppe> bigjools: you need to choose something globally unique
<rogpeppe> bigjools: is that a problem?
<bigjools> azure creates a DNS entry out of your account name
<bigjools> storage account I mean
<bigjools> would you like to guess how many people are going to call their environment "azure"
<bigjools> so yes it's a problem :)
<rvba> I see :)
<rogpeppe> bigjools: how about adding a config entry to the azure environments.yaml: global-environment-name, or something
<bigjools> rogpeppe: could make it a {{rand}}
<rogpeppe> bigjools: yeah, that would probably be a good default
<bigjools> we did consider this earlier, but there is a snag
<bigjools> if someone wants to re-use an existing storage account instead of having juju create one on the  fly, how do we know whether to delete it or not later?
<rogpeppe> bigjools: two possibilities there
<rogpeppe> bigjools: 1) if we create the account at bootstrap time, mark the private data with "this needs cleaning up", and clean up the account at destroy-environment time only if that flag is set
<rogpeppe> bigjools: 2) just delete it regardless and document that juju always requires its own storage account
<rogpeppe> bigjools: for 2) you could probably do some sanity checking to make sure there's nothing in the account that you wouldn't expect to be created by juju
<bigjools> rogpeppe: if the former, where can we store a flag?
<rogpeppe> bigjools: in the private bucket? (which presumably is in the storage account)
<bigjools> inside storage accounts, you have another layer of indirection called a container, BTW.  These are separately public and private.
<rogpeppe> bigjools: hmm
<bigjools> fun isn't it :)
<bigjools> there's a pricing implication as well I think, which is why I am being careful
<rogpeppe> bigjools: the container name space is within a storage account, or global?
<bigjools> we took the cheap way out at the moment and made configuration of an existing account/container mandatory
<bigjools> within the account
<rogpeppe> bigjools: ah, you pay for a storage account?
<bigjools> I'm not sure but I expect so
<rogpeppe> bigjools: can you delete a storage account if it has containers?
<bigjools> no
<bigjools> anyway I think we can write a file to the private storage for now
<bigjools> it will suffice
<bigjools> thanks for the advice
<rogpeppe> bigjools: hmm, this might work: at destroy-environment time, you destroy the private bucket(container) and try to delete the storage account. if it fails because there are containers, ignore the error.
<rogpeppe> bigjools: this assumes that there's no other useful stuff attached to an account, i guess
<bigjools> well if someone created a storage account and didn't put any containers in it yet, they would get a surprise
<rogpeppe> bigjools: yeah, but they'll have named the account in their juju environments.yaml, and hopefully this behaviour is documented.
<bigjools> rvba: I was thinking - we might *have* to create a container on the fly to ensure it's private
<rogpeppe> bigjools: or...
<rvba> bigjools: maybe there is a way to check if a container is private and issue a warning or even an error if it's not.
<bigjools> rogpeppe: well I personally dislike surprises like that, it ought to be simpler.  I think we can make it simple.
<rogpeppe> bigjools: add a "do-not-delete-storage-account" attribute to environments.yaml, i guess
<bigjools> rogpeppe: jtv came up with that one too, I accused him of coming up with a developer solution and not a user solution :)
<rogpeppe> bigjools: the wrinkle that i see is that several juju environments might use the same storage account, presumably
<bigjools> rogpeppe: correct
<rogpeppe> bigjools: so i guess the question is: what semantics do we want? do we want the storage acct to be destroyed after the last environment is destroyed in the acct?
<rogpeppe> bigjools: but only if the account was created automatically
<bigjools> rvba: rogpeppe: my ideal scenario is to delete only if created automatically.
<bigjools> and that is decided by whether you configure it or not
<rvba> Sounds like the best story from a user pov.
<bigjools> indeed
<rogpeppe> bigjools: don't you have to configure it, otherwise you won't know how to find the private bucket?
<rogpeppe> bigjools: assuming the private bucket is addressed relative to the storage account
<bigjools> right
<bigjools> so.... seems like we can't do this then
<bigjools> privacy is defined at the container level, not the account
<bigjools> so if a private "bucket" is required in the config up front, we can't auto-generate anything can we?
<bigjools> this is my chicken and egg question from earlier
<rogpeppe> bigjools: how about this: you must specify the storage account name. within that, the private bucket is named after the environment name. if the storage account does not exist, it's created, but we never remove a storage account.
<bigjools> could work
<rogpeppe> bigjools: i think that's a clear story to the user with minimal magic
<bigjools> indeed
<thumper> mgz: ping
<bigjools> rvba: what do you think?
<bigjools> rvba is too busy deploying juju-gui on azure :)
 * thumper wanders off for a bit
<rvba> rogpeppe: I /think/ it is… and that's why leaving an account after auto-creating it is a bit nasty.
<rvba> heh
<rogpeppe> rvba: on the other hand, perhaps it's nice for a user to still have around the account that incurred the billing
<rogpeppe> rvba: the user is explicitly naming it after all.
<rogpeppe> rvba: we should find out definitively the cost implications of having a storage account
 * rvba otp
<rvba> rogpeppe: you're right, that's really the deciding factor.
<rvba> bigjools: ^
<bigjools> rvba: let's go half way for now and auto-generate the containers
<bigjools> named after the environment
<bigjools> auto generating storage accounts is fraught with problems
<rogpeppe> bigjools: that seems reasonable to me. a decent error message ('Juju storage account "foo" not found - please create it and try again') could help, and make things less obscure to the user
<thumper-afk> mgz ping
<jtv> wallyworld_: got  a moment to introduce me to the wonders of simplestreams?
<mgz> thumper: pong
<thumper> mgz: hey there
<thumper> mgz: got any voip capabilities?
<mgz> either hangout or mumble
<mgz> your pick
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: thumper | Bugs: 5 Critical, 76 High - https://bugs.launchpad.net/juju-core/
<dimitern> rogpeppe: ping
<rogpeppe> dimitern: pong
<dimitern> rogpeppe: hey, so the deployer needs to know state/API addresses for the simple context
<dimitern> rogpeppe: I missed that before, do you think we should have both in the deployer facade?
<rogpeppe> dimitern: yes, i think so
<dimitern> rogpeppe: and just return whatever's there in state
<rogpeppe> dimitern: yeah
<dimitern> rogpeppe: ok
<dimitern> rogpeppe: and CACert as well
<rogpeppe> dimitern: yes
<dimitern> rogpeppe: isn't this a security issue?
<dimitern> rogpeppe: maybe not, because it's the public cert
<rogpeppe> dimitern: exactly
<dimitern> rogpeppe: ok then
<rogpeppe> dimitern: we'll also need to provide the private key too, but only to nodes which are configured to run the state server.
<dimitern> rogpeppe: and there's no deployer there usually, right?
<rogpeppe> dimitern: there might be. i don't think we mind too much at this level.
<dimitern> rogpeppe: but it's not part of the deployer api
<rogpeppe> dimitern: the deployer wouldn't need the private key. i think that would probably be something for the machineagent API
<dimitern> rogpeppe: right
<wallyworld_> jtv: sorry, i was at soccer training and am about to go out again. tomorrow
<jtv> Ah.
<wallyworld_> dimitern: i'm going out for a late dinner and a movie so will miss the meeting
<axw> gnight
<dimitern> wallyworld_: ok
<dimitern> wallyworld_: is there a rising trend there? :) kidding
<rogpeppe> i'm looking for a review on https://codereview.appspot.com/11723043/ if anyone cares to take a look
<dimitern> rogpeppe: looking
<davecheney> what ?
<davecheney> i can have a ubuntu edge phone for $625
<davecheney> or iu can have the same phone for $675
<davecheney> or $600
<davecheney> or 800
<davecheney> or 830
<davecheney> or 700 in packs of two
<davecheney> WHAT THE FUCK
<dimitern> rogpeppe: reviewed
<rogpeppe> davecheney: weird, ain't it
<rogpeppe> davecheney: so much for the "for 24 hours only" thing
<rogpeppe> dimitern: ta
<dimitern> davecheney: you can't get it for less than 700 any more
<dimitern> davecheney: ha!
<dimitern> davecheney: they updated it again :)
<dimitern> davecheney: it seems the campaign started skidding, so they refreshed the pricing to prop up the flattening curve :)
<davecheney> if I had pledged $800 I would feel fucking ripped off
 * davecheney writes to Jane
<rogpeppe> davecheney: the 825 payers will be refunded
<rogpeppe> davecheney: apparently
<rogpeppe> dimitern: is there really a point in a blank separation line when there are only two imports?
<rogpeppe> davecheney: the only point of the blank line is so that each section is sorted, which is unnecessary there
<rogpeppe> oops
<rogpeppe> s/davecheney/dimitern/
<rogpeppe> dimitern: ignore me, sorry, i'll just make the change as requested
<dimitern> rogpeppe: the idea is to format imports as agreed
<davecheney> dimitern: what rogpeppe said
<dimitern> davecheney: uh?
<davecheney> gah
 * davecheney gets another beer
 * rogpeppe hands davecheney a few beers
<davecheney> email sent
<davecheney> this is not quality
<davecheney> we are a quality organisation selling a top shelf (and top price) product
<davecheney> but someone is acting like they are running a fire sale
<dimitern> rogpeppe: how about having a NetworkInfo call in DeployerAPI which returns Addresses, APIAddresses and CACert?
<dimitern> davecheney: +1
<rogpeppe> dimitern: thinking
<dimitern> rogpeppe: and at client-side it will be cached
<davecheney> and with that career limiting move, i leave you in the grace and favor of the lord
<rogpeppe> dimitern: yeah, i think that's reasonable; we'll probably want to watch all those things later.
<rogpeppe> dimitern: not entirely sure about the NetworkInfo name though
<dimitern> rogpeppe: i was sure about that :) let's bikeshed it
<rogpeppe> dimitern: if you're happy with it, gfi
<dimitern> rogpeppe: i'm open to suggestings
<dimitern> suggestions
<rogpeppe> dimitern: i haven't thought of anything better yet. my reservation is it isn't really information about the network - it's information about how to connect to the state servers.
<rogpeppe> dimitern: maybe ServerInfo ?
<dimitern> rogpeppe: good point
<dimitern> rogpeppe: I like ServerInfo
<dimitern> rogpeppe: will do and propose it shortly
<dimitern> rogpeppe: that's obviously a non-bulk call
<rogpeppe> dimitern: oooh noooo!
<dimitern> rogpeppe: :D
<rogpeppe> dimitern: no, but it must be a bulk call!
<dimitern> rogpeppe: well I can pretend to make it bulk
<rogpeppe> dimitern: params []struct{} :-)
<dimitern> rogpeppe: like give me this number of the same struct as results :)
<rogpeppe> dimitern: yup
<dimitern> rogpeppe: it can be without args, right?
<dimitern> rogpeppe: at server-side
 * rogpeppe still finds the bulk call thing very hard to deal with
<rogpeppe> dimitern: yes
<dimitern> rogpeppe: cool
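The call being discussed (first named NetworkInfo, then ServerInfo) might look like the sketch below: a no-argument server-side method returning all the connection details a deployer needs in one round trip. The struct and field names, and the example addresses, are illustrative assumptions, not the committed juju API (and, as the discussion further down concludes, the three values ended up as separate entry points):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DeployerConnectionValues is a hypothetical result struct: state
// (mongo) addresses, API addresses, and the public CA certificate.
type DeployerConnectionValues struct {
	StateAddresses []string
	APIAddresses   []string
	CACert         []byte
}

// ServerInfo would be the server-side entry point; it takes no
// arguments, since the only authorization needed is the machine-agent
// check already done when the deployer facade is created.
func ServerInfo() (DeployerConnectionValues, error) {
	return DeployerConnectionValues{
		StateAddresses: []string{"10.0.3.1:37017"},
		APIAddresses:   []string{"10.0.3.1:17070"},
		CACert:         []byte("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"),
	}, nil
}

func main() {
	v, _ := ServerInfo()
	b, _ := json.Marshal(v)
	fmt.Println(string(b))
}
```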
<dimitern> rogpeppe: but wait.. maybe at least a machineTag, so I can call AuthOwner on it?
<rogpeppe> dimitern: wat?
<dimitern> rogpeppe: no authorization at all? free for all?
<rogpeppe> dimitern: the machine agent has already authorized
<dimitern> rogpeppe: so any agent
<rogpeppe> dimitern: well, any agent which can create a deployer API, yes
<dimitern> rogpeppe: how do we guarantee this?
<rogpeppe> dimitern: guarantee what?
<dimitern> rogpeppe: the only thing we check is AuthMachineAgent in NewDeployerAPI
<rogpeppe> dimitern: isn't that enough?
<dimitern> rogpeppe: so any MA can call it
<rogpeppe> dimitern: definitely
<rogpeppe> dimitern: any MA can run a deployer, right?
<dimitern> rogpeppe: but arguably the info ServerInfo returns you probably already know
<dimitern> rogpeppe: except the state addresses
<rogpeppe> dimitern: yes
<dimitern> rogpeppe: which is a huge security hole I think
<rogpeppe> dimitern: really?
<dimitern> rogpeppe: maybe not return state addresses
<rogpeppe> dimitern: why is knowing the state addresses a security hole?
<dimitern> rogpeppe: well, if any rogue MA can connect directly to state, then why do we need the API?
<rogpeppe> dimitern: just having the address doesn't allow you to connect
<rogpeppe> dimitern: you need an identity and password too
<rogpeppe> dimitern: the address is not secret
<dimitern> rogpeppe: well, if you were able to connect to the API to call ServerInfo, you probably can connect to state with the same creds, no?
<rogpeppe> dimitern: no
<rogpeppe> dimitern: the two sets of credentials are separate
<dimitern> rogpeppe: ah, ok
<dimitern> rogpeppe: but setpassword sets them both, right?
<rogpeppe> dimitern: currently, yes (because everything connects directly to mongo)
<dimitern> rogpeppe: so it is a security leak then
<rogpeppe> dimitern: but in the future, SetPassword will only set the mongo password for agents that are allowed to connect to mongo
<rogpeppe> dimitern: it's not a security leak currently because the state server addresses are identical to the API server addresses
<rogpeppe> dimitern: and machine addresses are not private
<dimitern> rogpeppe: which is the api worker in the MA on 0
<rogpeppe> dimitern: yup
<dimitern> rogpeppe: so if that gets compromised we're as good as wide open
<rogpeppe> dimitern: we should never rely on machine addresses being secret - they are easily enumerable and findable by other means
<dimitern> rogpeppe: but other MAs, rogue or not, won't be able to access state
<rogpeppe> dimitern: when we've moved to API-only, that's right
<dimitern> rogpeppe: that considerably diminishes the risk, but it's still there, we need to think about it later
<rogpeppe> dimitern: i don't see that there's a risk from publishing machine addresses
<dimitern> rogpeppe: no, provided the creds are not the same ;)
<rogpeppe> dimitern: if something can compromise the system just from having a machine address, then our system is broken
<dimitern> rogpeppe: for now, i agree we're no better off publishing the addresses or not
<dimitern> rogpeppe: standup
<mgz> what the hell google
<rogpeppe> mgz: we've lost you
<mgz> google logged me out of everything... for no apparent reason
<dimitern> rogpeppe: https://codereview.appspot.com/11765043/
<dimitern> rogpeppe: the ServerInfo stuff
<rogpeppe> dimitern: i'm not sure that having three separate calls in the client API is right
<rogpeppe> dimitern: that seems to me to miss part of the point of putting them together in the first place
<rogpeppe> dimitern: if you want all the info, you're going to make three API calls, all of which return exactly the same info
<rogpeppe> dimitern: if we're going to have separate client entry points, i think we should have separate API calls
<rogpeppe> dimitern: (which i'd be just fine with - i almost suggested that)
<dimitern> rogpeppe: I prefer to limit the number of calls at the server, if possible
<dimitern> rogpeppe: I was thinking of making a single serverInfo call at construction time
<rogpeppe> dimitern: construction of what?
<dimitern> rogpeppe: and cache them, but that seemed wrong - could they change?
<dimitern> rogpeppe: client deployer facade
<rogpeppe> dimitern: yes, they could and will change
<dimitern> rogpeppe: so caching is not a good idea, hence this
<rogpeppe> dimitern: we'll want a watcher too
<rogpeppe> dimitern: eventually
<dimitern> rogpeppe: but not for the deployer
<dimitern> rogpeppe: yeah
<rogpeppe> dimitern: when you say "limit the number of calls", do you mean the number of entry points, or the overall number of API calls at runtime?
<rogpeppe> s/at runtime/made at runtime/
<dimitern> rogpeppe: entry points
<dimitern> rogpeppe: because this call is special
<rogpeppe> dimitern: i think our API is vast anyway - 2 extra entry points aren't going to make any difference
<dimitern> rogpeppe: you're probably right
<dimitern> rogpeppe: ok, will do them separately then
<rogpeppe> dimitern: thanks.
<rogpeppe> dimitern: i think that makes sense
<rogpeppe> dimitern: and means we can use a conventional StringsWatcher for watching the addresses in the future
<dimitern> rogpeppe: do I have to have an error result in an apiserver method, even though CACert doesn't return an error?
<rogpeppe> dimitern: no you don't
<dimitern> rogpeppe: cool
<rogpeppe> dimitern: read the rpc docs for details
<dimitern> rogpeppe: :) sorry, too lazy
<dimitern> rogpeppe: I did some time ago, but forgot
<rogpeppe> dimitern: the most important bit is in the Conn.Serve documentation - the list of allowed method signatures: http://paste.ubuntu.com/5907455/
<dimitern> rogpeppe: yeah, just revisited that
<dimitern> rogpeppe: updated https://codereview.appspot.com/11765043/
<dimitern> TheMue, mgz: can one of you take a look as well please? ^^
<rogpeppe> dimitern: i think i'd probably send the CA cert as a string - it's already ascii AFAIR
<dimitern> rogpeppe: if it's []byte why pretend it's a string?
<dimitern> rogpeppe: how about if some unicode char gets there?
<rogpeppe> dimitern: it's a PEM-encoded certificate
<rogpeppe> dimitern: which is defined to be ascii
<dimitern> rogpeppe: so why are we not using string for it in other places?
<rogpeppe> dimitern: and is already base64-encoded
<rogpeppe> dimitern: so we'll be base-64 encoding twice
<rogpeppe> dimitern: we're not using string for it because the crypto entry points use []byte
<dimitern> rogpeppe: we can keep it as a string and convert it to []byte before passing it to crypto calls
<rogpeppe> dimitern: i played around with quite a few permutations. it's nicer to keep it as []byte most of the time, i think
<dimitern> rogpeppe: double encoding as base64 is not such a bad thing
<rogpeppe> dimitern: anyway, i don't mind much; if some js client wants to interpret it, i'm sure it's easy for them to b64 decode
<rogpeppe> dimitern: yeah, it's not *too* much bigger
<dimitern> rogpeppe: the size grows only marginally
<rogpeppe> dimitern: well, a third bigger, but that's probably of no particular import here
<rogpeppe> dimitern: reviewed
<dimitern> rogpeppe: thanks
<dimitern> rogpeppe: MongoAddresses only server-side or both?
<rogpeppe> dimitern: both
<rogpeppe> dimitern: "Addresses" made sense as a method on the mongo-based state
<rogpeppe> dimitern: but not as an API call
<dimitern> rogpeppe: that'll require changing Addresser interface in the deployer and several other places
<rogpeppe> dimitern: yeah, i think that's worth doing
<dimitern> rogpeppe: why not StateAddresses then?
<rogpeppe> dimitern: because i think that's ambiguous... then again, we have "StateWorker" vs "APIWorker".
<rogpeppe> dimitern: yeah, go with StateAddresses
<dimitern> rogpeppe: ok
<dimitern> rogpeppe: oops
<dimitern> rogpeppe: I think I forgot to do common.ServerError(err) when I'm returning it
<rogpeppe> dimitern: sigh
<rogpeppe> dimitern: actually, that doesn't matter here
<rogpeppe> dimitern: it's not a bulk call
<dimitern> rogpeppe: hmm
<dimitern> rogpeppe: sure about that?
<rogpeppe> dimitern: the usual error return always gets translated through ServerError
<dimitern> rogpeppe: ah, right
<rogpeppe> dimitern: that's one of the things we lost by moving to bulk calls for everything
<dimitern> rogpeppe: it was a good thing, you'll see at some point - definitely in the provisioner
 * rogpeppe might be more convinced if there was a single bulk call implementation that didn't just do everything serially.
<dimitern> rogpeppe: having a bulk interface for the calls allows us to change the implementation later
<rogpeppe> dimitern: i have a feeling we standardised on a bulk call interface that's hard to implement in practice
<dimitern> rogpeppe: to be bulk, rather than serial like now
<TheMue> dimitern: seen your review demand, do it now
<dimitern> TheMue: thanks
<rogpeppe> dimitern: sure there are places where a bulk interface is appropriate, but for 95% of calls it's not and never will be IMHO
<rogpeppe> dimitern: and having a bulk call interface for calls that can only ever allow one thing is just farcical.
<rogpeppe> dimitern: sorry, you got me started.
 * rogpeppe shuts up about bulk calls
<dimitern> rogpeppe: :)
<dimitern> rogpeppe: my bad
<TheMue> dimitern: you've got a review
<dimitern> TheMue: cheers
<TheMue> interesting
<TheMue> dimitern, rogpeppe : anyone got an idea? in several places a test for err == tools.ErrNoTools works, but not where I use it in the bootstrap command
<TheMue> dimitern, rogpeppe : printing the error shows, that it is that error
<rogpeppe> TheMue: could you expand more on the context please?
<TheMue> rogpeppe: bootstrap calls environs.Bootstrap()
<TheMue> rogpeppe: and there Bootstrap() of the current environment is called
<TheMue> rogpeppe: in there it is environs.FindBootstrapTools()
<rogpeppe> s/it is/it calls/ ?
<TheMue> rogpeppe: "it is" in the sense of "it is what's called" ;)
<TheMue> rogpeppe: and there it reads the tools list from the environment's storage and then from the public storage, but both are empty (which is what I want - that's correct)
<TheMue> rogpeppe: so ErrNoTools is returned upwards until my bootstrap command, where I can evaluate it/print it
<TheMue> rogpeppe: tools.ReadList() is the one who returns it
<TheMue> hmm, will try something
<TheMue> strange
<TheMue> i'll add a print chain to see where the error is "lost" ;)
<TheMue> wow, that's really interesting
<TheMue> rogpeppe: in FindAvailableTools() == is true, but in FindBootstrapTools(), which calls FAT(), == is false *shrug*
<TheMue> dimitern: btw, you asked for a link: https://codereview.appspot.com/11588043/
<dimitern> TheMue: cheers, will take a look
<TheMue> rogpeppe: please take a look at http://paste.ubuntu.com/5907706/
<TheMue> rogpeppe: that's the most interesting part
<TheMue> hmmpf, disconnected by the German Telekom
<rogpeppe> TheMue: looking; sorry, was at lunch
<TheMue> rogpeppe: np
<rogpeppe> TheMue: what do you see if you print the type of the error (with %T) in each place?
<TheMue> rogpeppe: oh, good idea, will try
<rogpeppe> TheMue: print the error message too (with %q)
<TheMue> rogpeppe: TYPE *errors.NotFoundError MSG "no tools available"
<TheMue> rogpeppe: it mutates to a reference
<rogpeppe> TheMue: you've missed the call to convertToolsError
<rogpeppe> TheMue: you need to use errors.IsNotFoundError, i think
<TheMue> rogpeppe: ah, cool
<TheMue> rogpeppe: I'll tell you about the success
<TheMue> rogpeppe: you're fucking amazing ;) thx
<rogpeppe> TheMue: yw
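The bug TheMue hit is a general Go pitfall: once an error value is converted into a different type (here by `convertToolsError`), `==` against the original sentinel fails even though the message is identical, which is why a type predicate like `errors.IsNotFoundError` is needed. A self-contained sketch of the same pattern, with stand-in names rather than the actual juju-core types:

```go
package main

import (
	"errors"
	"fmt"
)

// A sentinel error value, like tools.ErrNoTools.
var ErrNoTools = errors.New("no tools available")

// NotFoundError is a distinct error type, like juju's *errors.NotFoundError.
type NotFoundError struct{ msg string }

func (e *NotFoundError) Error() string { return e.msg }

// IsNotFoundError is the predicate to use instead of ==.
func IsNotFoundError(err error) bool {
	_, ok := err.(*NotFoundError)
	return ok
}

// convertToolsError mimics the conversion TheMue missed: the sentinel
// is replaced by a *NotFoundError carrying the same message.
func convertToolsError(err error) error {
	if err == ErrNoTools {
		return &NotFoundError{err.Error()}
	}
	return err
}

func main() {
	err := convertToolsError(ErrNoTools)
	fmt.Println(err == ErrNoTools)    // false: same message, different value
	fmt.Println(IsNotFoundError(err)) // true
}
```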
<rogpeppe> type Removerer interface {
<rogpeppe> ha ha!
<TheMue> :D
<rogpeppe> dimitern: ping
<dimitern> rogpeppe: pong
<rogpeppe> dimitern: i'm thinking of changing the definition of params.ErrorResults
<dimitern> rogpeppe: oh?
<rogpeppe> dimitern: so that it's potentially forward compatible if we want to change a method to return a value in the future
<dimitern> rogpeppe: expand please
<rogpeppe> dimitern: currently it's defined as type ErrorResults {Errors []*Error}
<rogpeppe> dimitern: i propose defining it as http://paste.ubuntu.com/5907815/
<rogpeppe> dimitern: then if a call that previously just returned an error decides to return a value, they can just create a new structure with some data in the result struct as well as the error.
<dimitern> rogpeppe: I don't see how anything stops us from changing the methods now to do that
<rogpeppe> dimitern: you couldn't do that in a backwardly compatible way
<rogpeppe> dimitern: you'd have to change the function result to look like: type MyResults {Errors []*Error; DataResults []MyDataResult}
<rogpeppe> dimitern: whereas really we'd like to have (following the rest of the API) type MyResults {Results []MyResult} where MyResult contains the error as well as the data
<dimitern> TheMue: reviewed
<rogpeppe> dimitern: does that make sense?
<dimitern> rogpeppe: wait
<dimitern> rogpeppe: we don't currently have that
<dimitern> rogpeppe: we have func f() -> results, error, where results contain both a result and an error
<rogpeppe> dimitern: exactly
<rogpeppe> dimitern: but if you're changing an API call that used to return ErrorResults, and want to return some data, you can't get that
<rogpeppe> dimitern: because the error is directly in each result element, rather than under an Error field which it would be in the result+error case
<dimitern> rogpeppe: I hear you
<dimitern> rogpeppe: but still can't see why we need to change ErrorResults-returning methods to return something else
<dimitern> rogpeppe: I need some examples
<rogpeppe> dimitern: well, as a particular example, i'm just changing the UpgraderAPI.SetTools signature. it did return extra data (the tag) but now i'm changing
<rogpeppe> ...
<rogpeppe> dimitern: it to return just an error
<rogpeppe> dimitern: it would be good if that kind of change could be done backwardly compatibly
<rogpeppe> dimitern: in all other places we give our API calls the freedom to add extra data
<rogpeppe> dimitern: i think we should do that for calls that currently just return an error too
<dimitern> rogpeppe: how can changing a method signature ever be backwards compatible?
<rogpeppe> dimitern: easily
<rogpeppe> dimitern: if you call a method and it returns more fields than you expect, the extra fields are ignored
<rogpeppe> dimitern: if you call a method with more fields than the method expects, the extra fields are ignored too
<rogpeppe> dimitern: that's all by virtue of the json unmarshal semantics
<dimitern> rogpeppe: can you paste your proposed changes to that method, I still can't see it, sorry
<rogpeppe> dimitern: to which method?
<dimitern> rogpeppe: UpgraderAPI.SetTools
<rogpeppe> dimitern: basically this change just regularises our call conventions so that errors are always in result.Results[i].Error
<rogpeppe> dimitern: ok, i want to change UpgraderAPI.SetTools to look like this:
<rogpeppe> func (u *UpgraderAPI) SetTools(args params.SetAgentTools) (params.ErrorResults, error) {
<dimitern> rogpeppe: and now it looks like this
<rogpeppe> dimitern: it currently looks like this:
<rogpeppe> func (u *UpgraderAPI) SetTools(args params.SetAgentTools) (params.SetAgentToolsResults, error) {
<dimitern> rogpeppe: yeah
<dimitern> rogpeppe: so how will ErrorResults look like?
<rogpeppe> dimitern: as i pasted before http://paste.ubuntu.com/5907815/
<dimitern> rogpeppe: but SetAgentTools returns a tag
<dimitern> rogpeppe: as well as an error
<rogpeppe> dimitern: it did - i'm changing it so it doesn't
<rogpeppe> dimitern: there's no need for it to return a tag
<dimitern> rogpeppe: so the question is between ErrorResults{Errors []*Error} and ErrorResults{Results []ErrorResult{Error *Error}} ?
<rogpeppe> dimitern: yup
<dimitern> rogpeppe: and having an extra nested struct helps us how? we can add stuff to ErrorResult later?
<rogpeppe> dimitern: it's compatible with SomeOtherType{Results []SomeOtherResult{Error *Error; Data SomeOtherData}}
<rogpeppe> dimitern: that is, we can make a method return some extra data without compromising the backwards-compatibility of the API
<dimitern> rogpeppe: but should we do that?
<dimitern> rogpeppe: ISTM this is exactly a type of change that needs versioning
<dimitern> rogpeppe: of the api
<rogpeppe> dimitern: not necessarily
<dimitern> rogpeppe: speaking from software development best practices, if you will
<rogpeppe> dimitern: if a client doesn't care about the new data returned from the call, then it can happily ignore it
<dimitern> rogpeppe: "thou shall not break the contract"
<dimitern> :)
<rogpeppe> dimitern: i disagree. see https://developers.google.com/protocol-buffers/docs/overview under "A bit of history" for some justification.
<dimitern> rogpeppe: these are different things
<rogpeppe> dimitern: the whole API is designed explicitly so we can get this kind of backward compatibility without having many different (fragile) versions
<dimitern> rogpeppe: interface and its over-the-wire format
<rogpeppe> dimitern: sorry, i don't understand
<dimitern> rogpeppe: the API defines the interface -> F(x, y) (a, b)
<rogpeppe> dimitern: ok
<dimitern> rogpeppe: the RPC layer defines the serialization mechanism
<rogpeppe> dimitern: ok
<dimitern> rogpeppe: you say these two don't have to match 1-1
<dimitern> rogpeppe: from the client's POV
<rogpeppe> dimitern: i'm saying that there are defines ways of changing the API so as to preserve backward compatibility
<rogpeppe> s/defines/defined/
<dimitern> rogpeppe: and you still oppose versioning in general
<rogpeppe> dimitern: so we can change the API to F to, say F(x, y) (a, b, c)
<rogpeppe> dimitern: and clients that expected the old version will continue to work
<dimitern> rogpeppe: so changing it to F(a, b, c) (x, y) is also ok?
<rogpeppe> dimitern: yes (assuming that F is implemented with the knowledge that c might be unset from old clients.)
<rogpeppe> dimitern: assuming you actually means F(x, y, z) (a, b)
<rogpeppe> s/means/meant/
<dimitern> rogpeppe: yup
<dimitern> rogpeppe: and if z is required then what?
<rogpeppe> dimitern: then it's not backwardly compatible
<dimitern> rogpeppe: exactly
<rogpeppe> dimitern: but the point is that it's *possible* to change things in a backwardly compatible way
<dimitern> rogpeppe: the same applies to the client I think
<dimitern> rogpeppe: we cannot assume all clients will be as lenient in parsing the response as go is
<rogpeppe> dimitern: i think we can
<dimitern> rogpeppe: it *is* possible, but fragile
<rogpeppe> dimitern: in fact, we should probably write a set of API usage guidelines that specify that
<rogpeppe> dimitern: i think versions are more fragile in some ways
<rogpeppe> dimitern: if you get the wrong version you can't speak at all
<rogpeppe> dimitern: and that applies particularly to our API where we have many different kinds of client
<dimitern> rogpeppe: and that's probably fine, because your behavior will be undefined then
<rogpeppe> dimitern: it doesn't have to be
<rogpeppe> dimitern: i'd prefer to do versioning by renaming entry points and/or facades (only) when necessary
<dimitern> rogpeppe: could there be possible security issues with that approach?
<dimitern> rogpeppe: buffer overruns, etc?
<rogpeppe> dimitern: how so?
<dimitern> rogpeppe: at client-side
<rogpeppe> dimitern: then we still have the freedom to change things in a backwardly compatible way, but we can change things in a safe backwardly incompatible and fine-grained way too
<dimitern> rogpeppe: well, i'm attempting to deserialize 2 fields and I get 5
<rogpeppe> dimitern: the json package discards extra fields
<rogpeppe> dimitern: other clients will almost certainly just unmarshal to a map
<dimitern> rogpeppe: client-side you cannot guarantee that
<rogpeppe> dimitern: and then if there are extra fields in the map, that's unlikely to be a problem
<rogpeppe> dimitern: and particularly if we document that new fields may be added, which we should
<dimitern> rogpeppe: but not removed
<rogpeppe> dimitern: yup
<rogpeppe> dimitern: similar to the protobuf approach
<dimitern> rogpeppe: ok, seems sane, at least I can't see obvious holes, although I'm trying
<dimitern> rogpeppe: but please, as a separate CL
<rogpeppe> dimitern: please keep trying. i also try to find flaws in the approach
<rogpeppe> dimitern: which as a separate CL?
<dimitern> rogpeppe: if you're going to change ErrorResults, do it everywhere in one go, separately
<rogpeppe> dimitern: that's what i'm proposing, yes
<dimitern> rogpeppe: but please document what we discussed about policies
<dimitern> rogpeppe: adding fields is ok, changing or removing - not
<dimitern> s/policies/guidelines/
<rogpeppe> dimitern: yeah. i should spend some time updating doc/api.txt
<dimitern> rogpeppe: +100
<rogpeppe> dimitern: i've added a kanban card
<dimitern> rogpeppe: incl. stuff mentioned in the huge doc comment about method signatures
<rogpeppe> dimitern: i'm not sure that implementation-specific stuff is appropriate for that document
<rogpeppe> dimitern: but there could easily be another document talking about that stuff
<dimitern> rogpeppe: api_hacking.txt ?
<rogpeppe> dimitern: yeah, probably
<rogpeppe> dimitern: the doc comment is only 36 lines though. it's not *that* huge :-)
<dimitern> rogpeppe: :)
<dimitern> rogpeppe: the point is, it's not easy to find, once you forgot where to look
<dimitern> rogpeppe: it's better to have docs in one place
<dimitern> rogpeppe: possibly even moving most of it in a txt file and referring to it in the comment itself
<rogpeppe> dimitern: i'd prefer to go the other way
<rogpeppe> dimitern: the package documentation is complete and kept up to date
<rogpeppe> dimitern: it says exactly how to use the rpc package
<dimitern> rogpeppe: either way, there has to be a link from the doc to it
<rogpeppe> dimitern: yeah
<rogpeppe> dimitern: definitely
<rogpeppe> http://godoc.org/launchpad.net/juju-core/rpc#Conn.Serve :-)
<dimitern> rogpeppe: nice!
<mramm> hey, hey
<mramm> OSCON is going very well
<mramm> juju charm school went well
<mgz> ace
<mramm> I definitely feel like we've hit an inflection point -- people are starting to get it -- and the pace of user adoption/corp interest is definitely accelerating
<mramm> there will be a bit of juju in mark's OSCON keynote
<mramm> though also quite a bit of phone
<mramm> keynote will be in half an hour, and will be live-streamed here: http://www.oscon.com/oscon2013/public/content/video
<rogpeppe> mramm: cool
<rogpeppe> dimitern, TheMue, mgz: any chance of a review on this: a large CL but entirely mechanical in nature: https://codereview.appspot.com/11760045/
<mgz> looking
<mramm> rogpeppe: dimitern: how are things going with the API work?
<TheMue> rogpeppe: will take a look
<rogpeppe> mramm: coming along ok, i think. the machiner now actually talks to the API. deployer and upgrader in the works.
<mgz> rogpeppe: so, the relevant change is - version.Binary + Version version.Binary?
<mramm> rogpeppe: that's cool
<rogpeppe> mgz: yes
<mgz> lgtmed. I trust the bot to catch any missed bits :)
<rogpeppe> mgz: ta!
<mramm> rogpeppe: so the uniter is the big bit that hasn't been started yet...?
<rogpeppe> mramm: yes
<mramm> when do you think deployer and upgrader will land?
<mramm> haha -- conversation killing question... ;)
<TheMue> mramm: ya' know, estimation is always a bad topic
<TheMue> mramm: :D
<rogpeppe> mramm: oh sorry, didn't see your question
<rogpeppe> mramm: i'm quite hopeful for upgrader this week, assuming i don't get too bogged down with reviews
<TheMue> rogpeppe: you've got another revie
<TheMue> review
<rogpeppe> TheMue: thanks!
<TheMue> rogpeppe: yw
<rogpeppe> weird, my machine is running like a dog, the load average is 8, the system monitor shows all cpus pegged around 100%, but then adding up all the processes' cpu percentages comes to about 20%
<rogpeppe> where's my power going?!
<rogpeppe> i guess it can only be something in the kernel
<dimitern> rogpeppe: try powertop
<dimitern> rogpeppe: in these cases I usually blame either compiz or Xorg
<mramm> juju section of mark's talk starting now
<rogpeppe> dimitern: here's ErrorResults changed as we talked about: https://codereview.appspot.com/11674045/
<dimitern> rogpeppe: LGTM
<rogpeppe> dimitern: thanks. anyone else still around for a review?
<dimitern> TheMue: ?
 * rogpeppe has to go
<rogpeppe> dimitern: see ya tomorrow
<rogpeppe> dimitern: thanks as always for the prompt reviews :-)
<rogpeppe> g'night all
<dimitern> rogpeppe: 'night!
<benji> gary_poster: heh, I forgot about the presentation and was at lunch; I'm glad it worked ;)
<TheMue> rogpeppe: if you're looking: lgtm
<thumper> morning
<thumper> who knows MAAS?
 * thumper looks for bigjools
<hallyn> o/
<thumper> hey
<thumper> so, right now I have a package that wraps lxc as juju cares about it
<thumper> also right now there is one and only one network config
<thumper> that uses veth
<thumper> this works fine for the local provider
<thumper> what I'm thinking is having a way *waves hands* for the container manager to ask for network config
<thumper> and will get one of three responses
<thumper> use host - which means we don't configure any network bits
<thumper> use device - and pass in the device name
<thumper> so uses the phys setting
<hallyn> just to be clear - you do not want to be in the business of juju creating an openvswitch GRE based network right?
<thumper> or use default
<thumper> is that related to SDN?
<hallyn> yes
<thumper> not just yet
<hallyn> k
<thumper> I'm deferring any SDN discussions to IOM
<thumper> the idea of using the host networking will mean we can have semi-functional containers on hosts that won't give us extra IP addresses
<thumper> like openstack
<thumper> the default veth stuff is how we'll use local provider
<thumper> but the ideal is to have a new device created and passed through
<thumper> which means the networking *should* be set up automagically
<thumper> so we can default all the providers to use the host networking initially
<thumper> and implement the physical nic creation as we can
<thumper> with the physical bits meaning we won't have port clashes in the containers
<thumper> so two containers could have the same ports open
<thumper> which obviously we wouldn't be able to with the host network option
<thumper> does that sound reasonable to you?
<hallyn> i think it'd be worth having smoser in on this conversation (or zul) as they would know more about what openstack and maas can provide
<thumper> hallyn: well I have talked with lifeless about openstack
<hallyn> what is 'use default'  for case 3?
<thumper> and openstack isn't going to give us extra IPs
<thumper> case 3 is the veth that works for local provider
<thumper> bridge over lxcbr0
<thumper> case 1 and 2 are for cloud providers
<hallyn> ok
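For anyone following along, the three responses thumper sketches map loosely onto lxc's network types. An illustrative config fragment (key names per lxc 0.9-era docs; this is a sketch of the idea, not juju's actual implementation):

```
# "use default": veth pair bridged over lxcbr0 (what the local provider does)
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up

# "use device": pass a physical NIC straight through to the container
lxc.network.type = phys
lxc.network.link = eth1

# "use host": share the host's network stack; in later lxc releases this is
# lxc.network.type = none (no separate namespace, hence the port clashes)
```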
<hallyn> i understand what maas will do.  i'm still not clear on what you plan to get from openstack
<thumper> I'm actually feeling like this might actually work :)
<thumper> the plan for openstack at this stage is to use the host network
<thumper> and deal with the limitations of port clashes
<thumper> until we have a possible SDN
<thumper> I think SDN is the only way we'll get proper network isolation for openstack
<thumper> given what I have been told about openstack networking
<hallyn> what is lifeless' relation to openstack?
<thumper> lifeless now works for HP on their openstack on openstack
<hallyn> i see no reason to take it for granted that we *can't* support multiple ip's per instance
<thumper> hallyn: neutron as it is now won't allocate multiple ip's for a cloud instance
<thumper> and also apparently the NAT is done prior to getting to the host
<thumper> so if you did have multiple IPs, you wouldn't know anyway
<thumper> so I'm deferring caring about containers inside openstack for now
<hallyn> thumper: the 'right now' matters to your demo, but should not limit our long-term planning
<thumper> but containers to deploy openstack is important
<hallyn> that's good :)  (deferring caring on openstack)
<thumper> right
<thumper> I think the "correct" approach in the future is to have SDN for the containers
<thumper> so we have a "cloud" of containers inside our cloud
<hallyn> thumper: I'm not 100% sure, but seem to recall that was rejected at last sprint
<thumper> which is getting a little meta
<thumper> rejected because we thought we wouldn't need it
<thumper> so either
<thumper> we need to fix openstack to support our use case (like AWS)
<thumper> or have another solution
<thumper> also
<hallyn> I thought it was rejected bc we didn't want to complicate things on behalf of cloud providers which don't fully do their job
<thumper> azure only gives one IP address
<hallyn> (that's my own phrasing :)
<thumper> yeah, kinda
<hallyn> ok.
<thumper> but I guess it matters to how much we want to support containers on clouds that don't do their job properly
<thumper> from what I've seen, EC2 will work fine
<thumper> and MAAS should work
<hallyn> i don't mean to beat this to death - my only point is: openstack is OSS, so we should not let current limitations limit our long term planning
<thumper> sure
<thumper> we could have openstack updated to support our use case
<thumper> which could mean we don't need SDN at all
<thumper> it may well be the easier approach
<thumper> in fact,
<thumper> if we get containers working nicely on MAAS, and EC2
<thumper> then we can point at openstack and say "you need to work better"
<thumper> and provide concrete use cases
<hallyn> i was thinking more of working with zul to push patches to do what we need :)
<thumper> I've found that concrete problems are the best way to get new features \
<thumper> aye
 * thumper tries to remember who zul is
<hallyn> but yeah we would still use the 'look there' to help push the patches
<hallyn> Chuck Short
<thumper> ah Chuck
<thumper> yeah, just poked mup
<hallyn> THE
<hallyn> :)
<hallyn> http://whereschuck.org/
<hallyn> on that note, it's roundabout dinner time - ttyl
<thumper> ok
<thumper> thanks for your help
<hallyn> np - i'll check backlog later if you have more questions, but sounds like you have a plan
<thumper> I do
<thumper> now to see if the plan works
 * thumper goes to the gym while amazon spins things up
#juju-dev 2013-07-25
<bigjools> yo thumper
<thumper> hi bigjools
<bigjools> you wanted me earlier?
<thumper> yeah, perhaps chat about maas
<thumper> but I need to shower first
<thumper> hmm... my magic change hasn't worked
<thumper> I think I know why it failed...
<thumper> but need more investigation
<wallyworld_> thumper: i'd like a catch up too
<thumper> wallyworld_: give me 5 minutes to grab these log files.
<wallyworld_> ok
<thumper> ok, ready
<thumper> but confused
<thumper> wallyworld_: shall I start a hangout?
<wallyworld_> ok
<thumper> https://plus.google.com/hangouts/_/98238f3ad80fb6e5ea1e7c6e53442bef8dd68df6?hl=en
<bigjools> davecheney: "i can tell that you and I are going to get on like a house on fire in BNE" - is it ok if I laugh a lot?
<davecheney> yes
<davecheney> that would break the ice
<bigjools> :)
<davecheney> opp
<davecheney> changing host
<davecheney> two secs
<davecheney> aaaand, back
<davecheney> wifi, how does it even work /
<davecheney> ?
<bigjools> it uses the tears of developers
<davecheney> apparently not enough tears in my house today
<bigjools> also pixie dust
<davecheney> and unicorn vomit
<davecheney> -- that is how someone described the plague of frozen yoghurt stores that are springing up in Sydney
 * bigjools lives in the boonies so doesn't see that kind of nonsense
<bigjools> everyone here complained when McDonalds got planning permission
<davecheney> isn't that just what Australians do ?
<bigjools> whinge?  oh yes
<davecheney> we don't want airports, freeways, desalination plants, or mcdonalds' in our backyards
<bigjools> hey wallyworld_, I found out that the big empty space next to Coles here is going to get a huge department store and a Woolies!
<bigjools> the locals will go nuts :)
<wallyworld_> great
<bigjools> wallyworld_: and the phuqers put speed humps on that incline so I can't take you over it in the air any more
<wallyworld_> :-(
 * bigjools considers  going monster truck stylee
<davecheney> oh you kids, with your monster trucks and airborne shenanigans
<bigjools> davecheney: you made a series called 1.12 from a tag called 1.11.4?
<davecheney> yessir i did
<bigjools> how do you plan to work now, are you pushing revisions from trunk back to 1.12 or something else?
<davecheney> select revisions
<davecheney> possibly none
<bigjools> you plan on branching a new series from trunk at some point?
<davecheney> the reason for a stable tag is to give something to james page so that we can try to push that into backports for all series
<bigjools> ok
<davecheney> at the moment the devel versions are only going into aucys
<davecheney> saucy
<davecheney> ffs lag
<davecheney> the next stable will be 1.14, and I understand that a better local provider is the headline feature for that release
<bigjools> and azure
<davecheney> bigjools: yeah, i didn't want to overstep in my email
<bigjools> it's in our best interests to get Azure done ASAP
<davecheney> as I wasn't sure of the plan, but azure in 1.14 sounds good to me
<davecheney> right-o
<bigjools> and I'll be doing a handover when you're all in BNE
<davecheney> bigjools: so you know something about this sprint ?
<bigjools> since Red are moving to maas pastures
<bigjools> davecheney: I know that it's planned, and that's about it :)  Trying to get mramm to get it organised etc etc
<davecheney> ok, things are going about as smoothly as usual then
<davecheney> anyone know what date ?
<bigjools> I heard same week as IoM sprint
<bigjools> but I don't recall from whom that came
<davecheney> bigjools: that's arse backwards
<davecheney> thumper: must be there
<davecheney> otherwise it's a waste of time
<thumper> I don't entirely agree with that
 * bigjools shrugs
<bigjools> as much as I'd like to spend time with thumper
 * thumper deputises wallyworld_
<davecheney> thumper: axw is your boss
<davecheney> he should get to meet you in person sometime this year :)
<bigjools> guffaw
<axw> wat
<thumper> davecheney: you get that the right way round?
<axw> :)
<thumper> davecheney: and I thought that it was mramm, not me
<thumper> anyway...
<davecheney> oh noes, not the not double negative
<davecheney> thumper: you need to study the org chart more closely
<bigjools> trololol
<thumper> bigjools: https://plus.google.com/hangouts/_/98238f3ad80fb6e5ea1e7c6e53442bef8dd68df6?hl=en plz join
 * thumper pokes the directory
<bigjools> thumper: eek, hang on
<bigjools> 2 mins
<thumper> davecheney: look at axw in the directory, manager mramm
<davecheney> thumper: who do I report to ?
<davecheney> if it says mramm
<davecheney> it's wront
<davecheney> it's wrong
<thumper> davecheney: arosales
<davecheney> ok, i guess it's correct
<thumper> :)
<thumper> this is what I expected anyway
 * axw is not sure where he's meant to be placed
<axw> davecheney: did you see my comment on the warning log change, regarding transient errors?
<davecheney> axw: i sort of did
<davecheney> i threw some code at you
<davecheney> i tried to fix the problem from the other end
<davecheney> i hoped that there would be a middle group
<davecheney> ground
<axw> *looks*
<axw> same sort of thing
<axw> the problem I realised later is that there may be ERROR logs that don't result in a command failure
<axw> I was playing with the local provider yesterday, and got a bunch of ERROR logs about not being able to connect to port... then after a while it connects and succeeds
<axw> I suppose those could be changed to be not-errors though, except for the final one if it gives up retrying.
<davecheney> meeting agenda for tonight, https://docs.google.com/a/canonical.com/document/d/1eeHzbtyt_4dlKQMof-vRfplMWMrClBx32k6BFI-77MI/edit#
<jtv> wallyworld_: now, about simplestreams...
<wallyworld_> jtv: otp to bigjools, give me a sec
<wallyworld_> jtv: you rang?
<jtv> wallyworld_: otp to bigjools, give me a sec
<bigjools> heh
<wallyworld_> lol
<jtv> wallyworld_: off the phone now.  I don't know who this bigjools thinks he is, keeping us all from getting work done like that.
<wallyworld_> indeed
<bigjools> it's a rare skill I have
<wallyworld_> jtv: so what bit of my limited knowledge did you want me to share?
<jtv> wallyworld_: don't worry, he doesn't mean the skill is rare.  He means it's rare for him to have a skill.
<wallyworld_> pwned
<jtv> I'm trying to invite you to a hangout, but your name is just too common...
 * bigjools bends over
 * wallyworld_ unzips
<bigjools>  /everyone screams
<wallyworld_> bigjools screams loudest
 * bigjools loves the family channel atmosphere
<jtv> Good thing you two are alone on a separate continent.  Now all we need to do is seal the exits and let you fight it out.
<wallyworld_> bigjools: you started it
<bigjools> wallyworld_: and you're going to finish it... oh wait you already did
<bigjools> ;)
<thumper> wallyworld_: hmm... don't really want to use host network
<thumper> wallyworld_: weird shit happens
<wallyworld_> thumper: otp to jtv
<thumper> wallyworld_: I had a container with that config, didn't really start properly, and worse than that, when I did lxc-shutdown, it shut down the host
<thumper> as in, my laptop
<bigjools> haha
<thumper> bigjools: I lost those links you pasted before
<thumper> bigjools: got them handy?
<bigjools> thumper: http://bj0z.wordpress.com/2011/08/19/howto-build-a-base-lxc-container-in-ubuntu-11-04/
<bigjools> thumper: http://unix.stackexchange.com/questions/50201/how-to-configure-external-ip-addresses-for-lxc-guests
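The second link boils down to putting the host's wired NIC on a bridge and pointing the containers at it. A typical /etc/network/interfaces stanza (interface names illustrative, as an assumption of the usual setup rather than anything juju-specific):

```
# Bridge the wired NIC so containers can take addresses on the host's LAN
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```

Containers then use lxc.network.link = br0 instead of lxcbr0. Note that most wireless drivers refuse to be enslaved to a bridge, so this only works for wired interfaces.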
<jtv> thumper: I let wallyworld_ go, so you can have him now.
<wallyworld_> thumper: funny that your laptop shut down
<wallyworld_> well, funny for me :-D
<wallyworld_> thumper: well that sucks
<thumper> wallyworld_: I'm sending an email out
<thumper> end of day lack of progress email
<wallyworld_> thumper: ni, i have a new peroblem
<thumper> which is? trouble spelling?
<wallyworld_> it seems you can't bridge to a wireless interface
<wallyworld_> typing
<wallyworld_> and when i tried doing it to my eth nic which is disconnected, it killed my connectivity
<thumper> heh
<wallyworld_> and i don't have a wired connection that is easily accessible :-(
<wallyworld_> i mean, it's 2013 after all
<thumper> :)
 * thumper signs off until the meeting
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: dimitern | Bugs: 5 Critical, 76 High - https://bugs.launchpad.net/juju-core/
<TheMue> dimitern: you've seen, I handled your review yesterday evening
<TheMue> dimitern: morning btw
<dimitern> TheMue: morning :)
<dimitern> TheMue: yeah, except one suggestion about table based tests
<rogpeppe> mornin' all
<dimitern> rogpeppe: moin
<dimitern> TheMue: https://codereview.appspot.com/11588043/diff/6001/environs/sync/sync_test.go#newcode85
<TheMue> dimitern: yep, they weren't table driven in the original synctools code. this cl is intended to only move the logic with as few changes as possible.
<TheMue> rogpeppe: morning, reviewed your ...4045 yesterday
<axw> morning
<rogpeppe> TheMue: yup, saw that, thanks a lot! branch is landing now.
<rogpeppe> axw: hiya
<dimitern> TheMue: so don't you think converting them to a table-based test will improve the code, and reduce it a lot?
<dimitern> axw: hey
<TheMue> dimitern: maybe in another cl, but i dislike mixing two intentions in one cl
<TheMue> dimitern: but i'm also no real fan of table-driven tests. here you often need extra effort to follow failing tests, e.g. adding an index or description to the output to see which of the cases is failing
<dimitern> TheMue: well, instead of like 6 tests which are almost identical, you can have a 6-element table and a single case testing all cases
<dimitern> TheMue: but anyway, please at least add a TODO about that
<TheMue> dimitern: shouldn't it be a CANDO? ;)
<dimitern> TheMue: well, you asked me for a review :P
<dimitern> TheMue: i think it should be done, not necessarily by you
<dimitern> TheMue: but having a TODO at the right place is helpful to remember
<TheMue> dimitern: will change it to TDT so you've got your peace. but I still think they are harder to read and follow when a test fails. the only upside is writing less code.
<dimitern> TheMue: let's agree to disagree then :)
<TheMue> dimitern: and btw, SetUpTest and TearDownTest have to be moved inside the loop this way, instead of using the mechanisms of all mature test frameworks
<dimitern> TheMue: when I have some time, I'll do it and show you how better it looks ;)
<rogpeppe> TheMue: FWIW i agree strongly with dimitern in this case
<rogpeppe> TheMue: with the non-table test I have to scan very carefully for subtle differences between the test cases
<dimitern> rogpeppe: I knew you would
<dimitern> rogpeppe: that block there is almost 1-1 ready for a table
<rogpeppe> TheMue: whereas if it was a table i could look at one single piece of code, and then look at the changing data independently
<davecheney> evening
<dimitern> davecheney: morning :)
<rogpeppe> TheMue: for the setup/teardown thing, you can call SetUpTest and TearDownTest within the test itself, which can work (i usually define Reset as {Teardown; Setup})
<rogpeppe> davecheney: hey boss
<TheMue> rogpeppe: but i have to identify the test data that caused the fail. often enough I've found tests where i had to count cases myself instead of just reading the source
<rogpeppe> TheMue: that's why we usually put an "about" field in the table
<rogpeppe> TheMue: and log it
<dimitern> +1
<thumper> rogpeppe: that sounds terrible
<thumper> rogpeppe: why not just have an independent test?
<TheMue> thumper: +1
<TheMue> :D
<dimitern> gents, meeting time
<rogpeppe> thumper: because in this case there are 110 almost-duplicated and non-trivial lines of test code
<rogpeppe> thumper: which could probably be reduced to a single function with a static data table
<rogpeppe> thumper: personally i think that's a win
<rogpeppe> axw: ping
<dimitern> axw: https://plus.google.com/hangouts/_/bf3f4cfe715017bf60521d59b0628e5873f2a1d3
<thumper> axw: https://plus.google.com/hangouts/_/bf3f4cfe715017bf60521d59b0628e5873f2a1d3
<axw> ?
<axw> brt
<rogpeppe> axw: team meeting
<axw> ah
<thumper> axw: yeah, but no one invited you :)
<davecheney> https://docs.google.com/a/canonical.com/document/d/1eeHzbtyt_4dlKQMof-vRfplMWMrClBx32k6BFI-77MI/edit#
<davecheney> mramm: hey there
<davecheney> we're having a meeting
<davecheney> wanna come ?
<mramm> hey
<mramm> trying for the last bit
<wallyworld> thumper: got 5?
<thumper> wallyworld: for you, maybe even six
<thumper> wallyworld: hangout or irc?
<wallyworld> rejoin the hangout
<rogpeppe> i'm looking for a review of this, please: https://codereview.appspot.com/11800044/
<dimitern> rogpeppe: on it
<rogpeppe> dimitern: thanks
<dimitern> rogpeppe: reviewed
<rogpeppe> dimitern: thanks
<rogpeppe> looking for a second review of https://codereview.appspot.com/11800044/ if anyone cares to take a look
<fwereade> rogpeppe, https://codereview.appspot.com/11800044/ reviewed, just one question
<rogpeppe> fwereade: no particular reason other than it doesn't seem like it's necessary for state to define every possible interface that a client might use
<rogpeppe> fwereade: but i'll move the interface to state if you'd prefer
<fwereade> rogpeppe, I agree that it's not quite right as it stands, but I'd be happiest if we at least kept those bits all together
<fwereade> rogpeppe, LGTM with that then
<rogpeppe> fwereade: thanks
<jtv> Damn, I was about to approve that one.  :)
<jtv> fwereade: you wanted a chat about instance-type selection in Azure?
<axw> night folks
<fwereade> jtv, yeah, sorry -- I was just wondering whether anything sprang to mind re possible unifications of the various don't-deploy-worthless-instances implementations
<fwereade> jtv, IIRC there's something in openstack and ec2, as well as azure now
<fwereade> jtv, (this is not a request for you to do more work, just a quick check to see if there's stuff in your brain that should maybe also be in mine)
<jtv> What do you mean by worthless?
<jtv> You mean, for setting a minimal baseline spec?
<fwereade> jtv, unlikely to be useful
<fwereade> jtv, yeah
<jtv> Hmm...
<jtv> That does look as if it ought to be the same story for every cloud, doesn't it?
<fwereade> jtv, it does rather
<jtv> It seems like a simple problem technically...  Just one function that everyone could call.
<fwereade> jtv, and I have a feeling that at least one of them tries with the baseline and then falls back to whatever-it-can-get rather than just refusing
<fwereade> jtv, yeah -- I don't intend to schedule anything now, just to remember it as something to bear in mind
<fwereade> jtv, would you add a tech-debt bug please? and link it in the CL?
<jtv> OK
<TheMue> dimitern: the CL just landed again with table driven tests. ;) you can take a look?
<dimitern> TheMue: which one was it?
<TheMue> dimitern: https://codereview.appspot.com/11588043
<m_3> davecheney: hey man, you around?
<dimitern> TheMue: that's great, but my comment was not about cmd/juju/synctools_test.go, but about environs/sync/sync_test.go :)
<jtv> fwereade: I filed bug 1204851 - now how do I link it to the CL?
<_mup_> Bug #1204851: Different providers set different baseline instance constraints <tech-debt> <juju-core:New> <https://launchpad.net/bugs/1204851>
<fwereade> jtv, sorry, I just mean to mention it in there somewhere
<dimitern> jtv: from LP's MP directly
<TheMue> dimitern: so next round ;)
<TheMue> dimitern: but why didn't you mention that for synctools_test? :P
<dimitern> TheMue: because they were less and not so obviously similar
<fwereade> dimitern, I don't think that's quite right, this CL doesn't actually address it -- it's just the thing that caused us to think "yeah, this should exist"
<dimitern> fwereade: sorry?
<TheMue> dimitern: they are even more similar than in sync_test
<fwereade> dimitern, I don't think it's very useful to have the bug linked to the CL, because the CL doesn't address the bug; but IME it *is* useful to be able to look back at a CL and see that it spawned a bug
<dimitern> TheMue: so good that you did them as well, no?
<dimitern> fwereade: ah, I don't really know, I was just answering jtv's question, I think
<TheMue> dimitern: please rephrase
<fwereade> dimitern, yeah, you were just missing a spot of context
<dimitern> fwereade: most likely
<dimitern> TheMue: i meant "it was good that you've seen the similarity there and found a way to refactor them into a table-based test"
<TheMue> dimitern: ah, ack
<jtv> Argh.  Can we get another gwacl update on the build machine?
<dimitern> jtv: on it
<jtv> Thanks.  I'll send the email.
<dimitern> jtv: thanks
<dimitern> jtv: oops :)
<dimitern> jtv: i meant "done"
<jtv> Then thank you too.  :))
<dimitern> jtv: np
<dimitern> mgz, rogpeppe, TheMue, fwereade: standup
<mgz> ta
<dimitern> rogpeppe: standup
<mgz> hm, Addresses in state/open.go currently uses st.db.Session.LiveServers()... what's that do exactly? I'd want machine ids to get detailed address info
<benji> gary_poster: not that I'm aware of
<benji> pffe
<dimitern> fwereade: I cannot use StateInfo in apiserver
<dimitern> rogpeppe: any idea for a workaround getting the addresses?
<rogpeppe> dimitern: why can't you use StateInfo in apiserver?
<dimitern> rogpeppe: it needs a bootstrapped environment
<rogpeppe> dimitern: isn't the environment bootstrapped by the time the api server starts?
<dimitern> rogpeppe: provider-config file in the bucket
<dimitern> rogpeppe: and moreover, it's a client api
<dimitern> rogpeppe: it's not supposed to be run on the server
<dimitern> rogpeppe: how to test it?
<rogpeppe> dimitern: it'll still work though, as a temporary hack, won't it?
<rogpeppe> dimitern: how are any of the api server calls which use an Environ tested?
<dimitern> rogpeppe: http://paste.ubuntu.com/5911031/
<dimitern> rogpeppe: none of apiserver calls use the environ, except for the upgrader
<rogpeppe> dimitern: i thought deploy did
<dimitern> rogpeppe: no
<rogpeppe> dimitern: Client.ServiceDeploy definitely does
<dimitern> rogpeppe: let me see
<rogpeppe> dimitern: hmm, i *think* it should work
<dimitern> rogpeppe: no, it doesn't
<dimitern> rogpeppe: not StateInfo or anything provider specific at least
<dimitern> rogpeppe: and that's what I need
<rogpeppe> dimitern: i don't see why
<rogpeppe> dimitern: can't you call NewConnFromState?
<dimitern> rogpeppe: why do I need that?
<rogpeppe> dimitern: because a Conn contains an Environ
<dimitern> rogpeppe: NewConnFromState essentially calls environs.New(cfg)
<rogpeppe> dimitern: yes
<dimitern> rogpeppe: I'm doing this:
<rogpeppe> dimitern: the api server doesn't have environ credentials when it first bootstraps, so the StateAddresses call will fail until the credentials are pushed, but i don't *think* that's a problem because we won't be using StateAddresses until the first connection has been made. this is my only concern.
<dimitern> rogpeppe: http://paste.ubuntu.com/5911043/
<dimitern> rogpeppe: and StateInfo fails with "provider-config" not found
<dimitern> rogpeppe: which happens at bootstrap time
<rogpeppe> dimitern: oh god, StateInfo is non-provider-specific now
<rogpeppe> dimitern: that's broken
<dimitern> rogpeppe: why?
<rogpeppe> dimitern: because the local provider doesn't store anything in the private bucket
<dimitern> rogpeppe: I don't see how this matters
<rogpeppe> dimitern: you're using StateInfo wrong
<rogpeppe> dimitern: you need to call env.StateInfo()
<dimitern> rogpeppe: I'll try
<rogpeppe> dimitern: environs.StateInfo is just a way for providers to implement Environ.StateInfo
<dimitern> rogpeppe: now only APIAddresses test fails
<dimitern> rogpeppe: and each time the port is different, as reported by StateInfo
<rogpeppe> dimitern: that's expected, i think
<dimitern> rogpeppe: actually I think State.APIAddresses is too lazy
<dimitern> rogpeppe: it just replaces the port
<rogpeppe> dimitern: oh yes, that's true
<dimitern> rogpeppe: and returns the same host as the state address
<dimitern> rogpeppe: I can hack it a bit to get the actual port
<rogpeppe> dimitern: where the dummy provider starts the API server on a random port
<rogpeppe> dimitern: i think we should actually store the API addresses in the state
<dimitern> rogpeppe: yeah, perhaps, but until then I need a workaround
<rogpeppe> dimitern: i'm not sure there is one
<rogpeppe> dimitern: how can the state know what port number has been allocated by the dummy provider?
<rogpeppe> dimitern: isn't it quite trivial to add a State.SetAPIAddresses method?
<dimitern> rogpeppe: I did it
<rogpeppe> dimitern: cool!
<dimitern> rogpeppe: using JujuConn.APIInfo() in the test
<dimitern> rogpeppe: instead of s.State.APIAddresses
<rogpeppe> dimitern: i thought the test was testing APIAddresses...
<dimitern> rogpeppe: it was, but not anymore
<dimitern> rogpeppe: it's testing what's in the api info, the same as reported back
<dimitern> rogpeppe: will live test it now and see
<rogpeppe> dimitern: i'll reserve judgement until i see the CL :-)
<TheMue> dimitern: so, more table driven testing
<dimitern> rogpeppe: live tests work!
<dimitern> TheMue: will take a look soon
<TheMue> dimitern: thx
<rogpeppe> dimitern: cool. am at lunch BTW
<dimitern> TheMue: reviewed
<dimitern> rogpeppe, fwereade: https://codereview.appspot.com/11800045/
<TheMue> dimitern: thx
<dimitern> fwereade: ping
<fwereade> dimitern, pong
<dimitern> fwereade: can you take a look please? ^^
<fwereade> dimitern, ah, sorry
<dimitern> rogpeppe: you as well, after lunch?
<rogpeppe> dimitern: looking
<dimitern> thanks guys
<dimitern> ok, my mouse officially acts weird lately - a single click is frequently a double click and it drops while dragging and still holding the left button... hmm, need a screwdriver
<mgz> I find drag and drop harder with a screwdriver than a mouse
<dimitern> lol
 * TheMue dcc's dimitern his BT mouse, isn't needed anymore
 * dimitern bbi1h
<dimitern> (or so.. dentist)
<fwereade> dimitern, you'll have a review waiting when yuo return
<fwereade> does anyone recall the justification for CanDeploy?
<rogpeppe> dimitern: you have a review
<rogpeppe> fwereade: i remember the justification for CanDeploy
<fwereade> rogpeppe, oh yes?
<rogpeppe> fwereade: it's because if we implemented AssignedMachineId(unitTag) in API (which is what the deployer previously used) it's a bit of a security leak
<fwereade> rogpeppe, but CanDeploy itself doesn't need to exist AFAICS
<rogpeppe> fwereade: because it means any machine agent can ask for the assigned id of any unit
<rogpeppe> fwereade: well yes, i was puzzling over why we checked AssignedMachineId in the first place
<fwereade> rogpeppe, what does CanDeploy do that isn't already done with UnauthorizedErrors?
 * rogpeppe checks
<fwereade> rogpeppe, it's just a big copy of the deployer's authfunc as far as I can see
<fwereade> rogpeppe, it's just really well obfuscated ;p
<rogpeppe> fwereade: the deployer's authfunc?
<fwereade> rogpeppe, deployer.go:53
 * rogpeppe looks in api*server*/deployer/deployer.go:53 :-)
<fwereade> rogpeppe, ha, sorry
<rogpeppe> fwereade: no, it's different to that
<fwereade> rogpeppe, ah, ok, which units will be handled differently?
<rogpeppe> fwereade: i *think* the case it's dealing with is when we get told about a unit by the watcher but that unit is no longer assigned to this machine
<fwereade> rogpeppe, we'll get an unauthorized out of Life in that case
<rogpeppe> fwereade: oh sorry, i didn't read far enough
<rogpeppe> fwereade: wow, that getAuthFunc does quite a lot of work
<fwereade> rogpeppe, the theory is that watchers are cheap
<fwereade> rogpeppe, if not, getAllUnits can be tuned on its own
<rogpeppe> fwereade: i suspect that making a set from the results of getAllUnits is vast premature optimisation, but i take your point
<rogpeppe> fwereade: yes, i don't see any point in CanDeploy
<fwereade> rogpeppe, I think it's more about clarity of expression than runtime optimization
<fwereade> rogpeppe, cool, thanks
<rogpeppe> fwereade: personally i think it's just as clear (+ shorter + more correct) this way, but YMMV. http://paste.ubuntu.com/5911521/
<fwereade> rogpeppe, I think it's maybe implemented a little awkwardly -- would be nicer to build the set from units directly -- but I can't generally ever get very enthusiastic about replacing N map lookups with N linear searches
<fwereade> rogpeppe, not to say it's never justified
<rogpeppe> fwereade: the thing is currently N=1
<rogpeppe> fwereade: so we lose out by building the set
<fwereade> rogpeppe, I'd rather pay the fixed costs now and have something that stays sane as N grows
<fwereade> rogpeppe, if the fixed costs become a problem we can revisit it ;p
<rogpeppe> fwereade: seems like premature optimisation to me but there y'go
<rogpeppe> fwereade: it's never gonna be a bottleneck
 * rogpeppe is off for a bit of bouldering
<rogpeppe> see y'all tomorrow
 * fwereade is off too
<sidnei> oh, wow. got a panic!
<thumper> panic!
<thumper> panic!
<thumper> that type?
<sidnei> panic: runtime error: comparing uncomparable type map[string]interface {}
<sidnei> thumper: https://pastebin.canonical.com/95058/
<thumper> sidnei: what were you doing?
<sidnei> thumper: juju set u1-public-gallery general.environment_name=staging -e local
<thumper> hmm... local provider
<thumper> ?
<thumper> can I get you to file a bug with as much info as possible?
<sidnei> thumper: was about to :)
<sidnei> thumper: https://bugs.launchpad.net/juju-core/+bug/1205112
<_mup_> Bug #1205112: panic while setting a config value <juju-core:New> <https://launchpad.net/bugs/1205112>
<thumper> ta
<sidnei> thumper: somewhat related, seems like positional arguments are not supported for juju get?
<thumper> sidnei: what do you mean?
<sidnei> $ juju get u1-public-gallery general.environment_name -e local
<sidnei> error: unrecognized args: ["general.environment_name"]
<sidnei> to get a single setting
<thumper> hmm...
<thumper> seems like a reasonable request
<thumper> file a bug :-)
<sidnei> so, deploying the services needed to run a tiny part of u1 takes 18G of disk space
<sidnei> fun
<thumper> haha
<thumper> wow
<thumper> why so much?
<sidnei> 16 containers
<sidnei> i guess none of this uses overlays
<sidnei> right? each container is a full base install?
<thumper> yeah
<thumper> I think there may be plans to use overlays later
<thumper> once we get the basics working
<sidnei> or ephemeral containers yeah
<sidnei> thumper: did you talk to any of the tsuru guys?
<thumper> no I haven't
<sidnei> they are using docker to snapshot things so that add-unit is instantaneous
<sidnei> not hooked up to juju, but working around it
<sidnei> iiuc they start a unit directly with docker, stab a charm into it, then snapshot it
<thumper> so they are using charms, but not juju?
<sidnei> thumper: they are using juju to provision machines in ec2/openstack but only the provisioning part, then they inject charms into those machines, iiuc
<sidnei> still on pyjuju because they have a branch with vpc that they depend on
<sidnei> im looking at getting it moved to gojuju using the api directly instead of shelling out, since it's all in go
<sidnei> but the lack of vpc is a blocker
 * thumper nods
<sidnei> haven't looked into how much tied to docker they are, maybe local provider is good enough
 * sidnei wonders if flaviami_ is around
<wallyworld> thumper: z'up
<thumper> hi wallyworld
<thumper> trying stuff
<wallyworld> any luck?
<thumper> can't seem to log into it
<wallyworld> :-(
<wallyworld> i'm pissed my MP still hasn't been given the 2nd +1
<thumper> which one
<wallyworld> if you're going to hold it up, you need to be responsive to changes
<wallyworld> the simple streams validation
<thumper> no response from the other reviewer?
<wallyworld> nope. i mentioned it on the standup too
<thumper> ok, see if you can get a +1 from davecheney or jtv1 or bigjools
<wallyworld> and made sure my latest changes were ready for looking at
<thumper> failing that, I'd say land it and do a post landing review if needed
 * wallyworld checks to see if there's enough beer
<wallyworld> the remaining stuff is more bikeshed than critical
<wallyworld> thumper: the latest lxc email - the suggested network config looks like the setup i used for bridging
<wallyworld> with no luck
<thumper> what was the problem with yours?
<wallyworld> thumper: it just hung like yours did
<wallyworld> lxc-attach worked
<wallyworld> but i had no eth cable plugged in
<wallyworld> not sure if it didn't like that
<wallyworld> i've got access to the qa lab now though
<wallyworld> but i need to check with mgz where i can ssh to
<thumper> I have an ec2 instance up that I'm playing with manually
<thumper> resetting the networking etc
<wallyworld> you would need to reboot it after changing /etc/network/interfaces, no?
<wallyworld> or just restart networking i guess
<thumper> I just restarted networking
<thumper> and the ssh connection stayed valid
<thumper> so that is good I guess
<thumper> wallyworld: I need to be away for a bit
<thumper> mind needs to mull this over
<thumper> also taking kids to a thing in tow
<thumper> back later
<wallyworld> ok
<wallyworld> let's catch up when you get back
#juju-dev 2013-07-26
<davecheney> wallyworld: could you please tag goose for juju-1.12
<wallyworld> sure
<wallyworld> rev 99 i think?
<davecheney> hold up
<davecheney> i need this tag, juju-1.12.0 at the same revno as juju-1.11.4
<wallyworld> ok
<wallyworld> davecheney: done
 * davecheney hugs wallyworld 
 * wallyworld goes all gooey
<davecheney> wallyworld: what cmd did you use ?
<wallyworld> bzr tag -d bzr+ssh://go-bot@bazaar.launchpad.net/~go-bot/goose/trunk -r99 juju-1.12.0
<wallyworld> rev 99 was what i did the other tag for
<davecheney> ok
<davecheney> i don't understand bzr tags
<davecheney> but that dovetails nicely into all the other things i don't understand
<thumper> wallyworld: today is a bit here and there
<thumper> wallyworld: have to take the dog to the vet for a 2pm apt
<thumper> did you want a quick chat now?
<wallyworld> thumper: sure
<wallyworld> thumper: https://plus.google.com/hangouts/_/d3f48db1cccf0d24b0573a02f3a46f709af109a6
<axw> If anyone is curious wtf the new guy is doing, I'm going to look at implementing debug-hooks. Feel free to redirect my attention though.
<davecheney> axw: +1
<davecheney> axw: there is a branch for the client side part
<axw> davecheney: yep thanks, I saw you made a start
<davecheney> but the logic to intercede in the agent is an open problem
<axw> yeah I see there's a problem with replacing the ZK ephemeral node thingy
<wallyworld> thumper: found out what's up with the bot - lcy02 is down for maintenance \o/
<wallyworld> maybe we shouldn't run prod stuff on canonistack :-/
<thumper> haha
<wallyworld> thumper: we don't really need to record how a machine is started because we know that from the ContainerType attribute, hence can use that to figure out how to get an address
<wallyworld> may still need to record if manually provisioned perhaps
<thumper> wallyworld: no...
<thumper> yes, manually provisioning hits this
<wallyworld> but in manual case we would just write the ip address directly to Addresses
<wallyworld> so i'm not sure we need to record anything extra
<thumper> wallyworld: about my desktop, it seems that there is no eth0 in my /etc/network/interfaces file
<wallyworld> you need to add it
<wallyworld> it won't be there
<thumper> well I have an eth0
<thumper> problem is that it has a static ip
<wallyworld> yes, but it doesn't have an entry in that file by default
<thumper> and if I futz with that, other things fail
<wallyworld> you can add an entry in there with a static ip afaik
<thumper> wallyworld: yeah, but that doesn't help the bridge does it?
<wallyworld> or mark it as manual or something like that to tell it not to use dhcp
<thumper> this is all getting very confusing
<wallyworld> yes
<wallyworld> those web links from yesterday had examples for nics with static ips i'm sure
<wallyworld> thumper: i think i read somewhere that the host nic needs to be set to promiscuous mode using a pre-up statement on br0 bridge interface
<wallyworld> not sure though
<thumper> that sounds right
<thumper> otherwise it is likely to filter only those for its mac address
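The bridged setup being discussed (eth0 set to manual, br0 doing DHCP over it) would look roughly like this in /etc/network/interfaces. This is a sketch only; the promiscuous-mode pre-up line is the unconfirmed suggestion wallyworld mentions and may not be required on all kernels:

```text
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
    # possibly needed so the NIC passes frames addressed to container MACs:
    pre-up ip link set eth0 promisc on
```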
<thumper> wallyworld: I'm going to write up a proposal for something hacky that might work
<wallyworld> thumper: i also read that some kernels have filtering enabled which will break it and you need to use sysctl to fix that
<thumper> bigjools: can I have access to your maas instance?
<wallyworld> all very complicated
<bigjools> it will cost you
<thumper> wallyworld: we need a network specialist to work with IMO
<wallyworld> yes indeed
<thumper> to make sure we don't fuck up
<thumper> bigjools: what's the cost?
<wallyworld> yep. surely we have one of those
<bigjools> a night in your arms
<thumper> haha
<thumper> but you have boy cooties
<bigjools> don't worry, it's not transmittable in saliva
 * thumper is speechless
 * bigjools wins \\o/
<wallyworld> so that doesn't mean he won't get it via other fluids
 * thumper blocks his ears and hands over eyes
<thumper> lalalalala
<bigjools> ok give me an hour or so as I need to finish a review and then I need to refresh the package and OS
<bigjools> on my maas box itself
<thumper> bigjools: not today, but may need for testing next week if that is ok
<bigjools> sure
<bigjools> what OS do you need?
<bigjools> saucy?
<thumper> precise
<wallyworld> thumper: what are you going to try?
<bigjools> you're SoL
<thumper> SoL?
<thumper> wallyworld: some hackery
<bigjools> it's currently on raring and I ain't gonna downgrade :)
<bigjools> Shit Outta Luck
<thumper> how am I supposed to put charms on it then?
 * bigjools rolls eyes
<thumper> bigjools: isn't this easy with maas?
<bigjools> the maas server is irrelevant to what gets provisioned on nodes
<thumper> ok, in which case I don't care
<bigjools> yes it's trivial
<bigjools> good :)
<thumper> bigjools: all i need is access to something that I can bootstrap with maas and juju
<bigjools> actually you probably need the saucy daily since juju  broke maas recently
<thumper> wallyworld: so, bootstrap a maas precise image
<thumper> wallyworld: shell in
<thumper> wallyworld: tweak the interfaces file
<bigjools> it was trying to upload zero sized state files
<thumper> wallyworld: make sure I have a hacked up juju
<bigjools> thumper: I will arrange that for you, no prob
<thumper> wallyworld: so I can override the default network bridge with an environment variable
<thumper> wallyworld: then try to start some containers
<thumper> and see if they are pingable / addressable
<wallyworld> bigjools: it needed to create an empty file to get a url it could use later
<wallyworld> bigjools: why did that break? the test doubles worked ok
<bigjools> yeah.... test doubles ... might have a been a bit different to reality :/
<bigjools> it's fixed in latest trunk for maas anyway
<wallyworld> sure, but why does creating a 0 length file break?
<bigjools> maas was rejecting it
<wallyworld> well that sucks
<bigjools> indeedy
<bigjools> but no longer
<wallyworld> thumper: so you are just setting up a "standard" lxc bridged environment
<wallyworld> ?
<thumper> wallyworld: yeah, set eth0 to manual
<thumper> and bridge to dhcp over eth0
<wallyworld> i tried that and lxc didn't boot
<thumper> worked for me
<thumper> on ec2
<thumper> I want to try on metal
<wallyworld> hmmm. ok
<thumper> and maas
<thumper> may be hacky
<thumper> but if it leads to installing openstack nicely on containers in maas
<thumper> we may be ok with it...
<thumper> maybe
<wallyworld> i just has to work for iom
<wallyworld> it
<thumper> huh
<thumper> and not be entirely string, spit and duck tape
<wallyworld> but it would be nice to have a network guy say "do it like this"
 * thumper nods
<wallyworld> surely there's someone we can ask
<wallyworld> maybe from is?
<thumper> that's going to be part of my email
<thumper> will fire up to mramm and william
<wallyworld> so i still don't think we need to record the origin of a machine in state
<wallyworld> since we can write the address to martin's new data model
<thumper> I'd be happy enough if we could use hallyn's suggested hackery for IOM and MAAS with something more concrete/understandable later
<wallyworld> hallyn?
<thumper> wallyworld: hallyn is serge
<wallyworld> ah right
<thumper> wallyworld: sometimes we don't know the ip address until after the machine agent has started
<thumper> wallyworld: so the machine agent needs to know how to find out
<wallyworld> well, in the manual provisioning case, it just gets written directly into env
<wallyworld> for lxc, can't we add something to cloud init
<wallyworld> to record the correct address in state so the agent can read it
<thumper> I still think that sticking something in state is the right approach
<thumper> just gut feel right now
 * thumper untangles gut feelings into an email
<wallyworld> we are sticking something in state - the addresses :-)
<wallyworld> no need for any indirection
 * thumper punches wallyworld for not listening
<wallyworld> huh?
 * thumper punches wallyworld because he is frustrated
<thumper> perhaps more honest
<thumper> stand still wallyworld
<wallyworld> ouch
<wallyworld> i listened, didn't necessarily agree :-)
<thumper> I think we need it for later, perhaps not entirely needed now
<thumper> as we can force  it in
<thumper> wallyworld: if you are looking for something to do, we need to be able to parameterise machines with the containers they support
<wallyworld> so let's just do what we are sure we need for now and iterate later if needed
<thumper> and be able to set and update that
<thumper> and have the deployments honour it
<wallyworld> ok. i think i have hit a roadblock with my removal of control-bucket :-(
<thumper> wallyworld: so we can then only start an lxc provisioner if we support lxc
<thumper> bootstrap?
<wallyworld> bootstrap works fine. but the next time you try a juju command, it doesn't know what the control bucket is cause it's stored in state and it can't access state cause it needs a control bucket to do so
<wallyworld> which makes me wonder how we are going to support someone going to a different pc and using juju for the env they have set up elsewhere
<wallyworld> thumper: so we would parameterise machines based on the provider which created them i guess? ie all ec2 machines support x,y,z; all maas machines support a,b,c ?
<thumper> no, I don't think so
<thumper> could be kernel limited
<thumper> so some kernels support kvm, some don't
<wallyworld> so how do we know then what a machine supports?
<thumper> well...
<thumper> some job in the machine agent will need to interrogate the machine
<thumper> if lxc is not installed, no lxc containers
<thumper> if kernel foo, no kvm
<thumper> etc
<thumper> and then sets state
<wallyworld> doesn't sound very scalable to have to track all the kernel versions supporting kvm etc
<wallyworld> i think also a machine could itself write into state what it supports
<wallyworld> so a job would be called by cloud init and the machine would contact the state server and update its details
<wallyworld> ?
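A machine agent probing its own capabilities, as wallyworld sketches, could look something like this. This is a minimal illustration, not juju-core's actual detection code; the probes used (lxc-start on $PATH, /dev/kvm present) are assumptions:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// supportedContainers probes the local machine for container support:
// lxc needs the userspace tools installed, kvm needs the kernel device.
// The agent would then report this list back to the state server.
func supportedContainers() []string {
	var types []string
	if _, err := exec.LookPath("lxc-start"); err == nil {
		types = append(types, "lxc")
	}
	if _, err := os.Stat("/dev/kvm"); err == nil {
		types = append(types, "kvm")
	}
	return types
}

func main() {
	fmt.Println(supportedContainers())
}
```

Probing at agent startup sidesteps tracking kernel versions centrally: each machine reports what it actually has.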
<thumper> your guess is as good as mine right now
<thumper> and at the end of friday I have run out of fucks to give
<wallyworld> np. i'll try it and see if it works
<wallyworld> lots of dead ends this week :-(
<thumper> sometimes you have shitty weeks
<thumper> that's life
<wallyworld> and the landing bot is still fucked
<thumper> accept it and move on
 * thumper finishes early
<jtv1> wallyworld: a question about the "signedImageDataOnly" parameter to imagemetadata...  Is that about requiring images to be signed, or does it mean the simplestreams index etc. must be signed?
<wallyworld> it's about requiring that the metadata be signed
<wallyworld> not the images themselves
<jtv1> Ah OK, that explains a few things.  Thanks.
<jtv1> (This was actually documented as I recall, but it's one of those cases where you first need to know enough to rule out the potential for ambiguity.)
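For context, simplestreams distinguishes signed and unsigned metadata by file extension: signed (clearsigned JSON) indexes conventionally use ".sjson", unsigned ones ".json". A toy selector, with an illustrative path layout rather than the exact one the code uses:

```go
package main

import "fmt"

// indexPath picks the signed or unsigned simplestreams index.
// signedOnly corresponds to the signedImageDataOnly parameter being
// discussed: it gates the metadata signature, not the images themselves.
func indexPath(signedOnly bool) string {
	if signedOnly {
		return "streams/v1/index.sjson" // clearsigned metadata
	}
	return "streams/v1/index.json"
}

func main() {
	fmt.Println(indexPath(true), indexPath(false))
}
```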
<jtv1> wallyworld: do we have anything like a test double for simplestreams?
<jtv> wallyworld: also, there seem to be "releases" and "daily" versions of the base URL, but also "releases" and "daily" streams inside the indexes found at those base URLs.  Is that correct?
<wallyworld_> jtv: yes, there are releases and daily. we use releases unless specified otherwise
<jtv> wallyworld_: what I mean is, there's separate base URLs for releases and daily, but that doesn't seem to be the same thing as setting the Streams selector to "releases" or "daily," is it?
<wallyworld_> i've only ever used the base url with "releases" at the end. and within that, chosen the releases metadata as opposed to the daily metadata
<wallyworld_> i've not used a daily base url. didn't know one existed
<jtv> Maybe it's outdated...  there's a bug open for the "daily" one not having an Azure file.
<wallyworld_> jtv: we've always just used the release images for openstack and ec2
<wallyworld_> jtv: as far as a test double - we have tests that set up sample data and a matching  http service, but that hasn't been packaged into a re-usable instance
<jtv> So I may have to export it.
<wallyworld_> export it?
<jtv> Capitalize some names.
<wallyworld_> sure, but what is "it"?
<jtv> testRoundTripper.
<wallyworld_> just make a new one
<wallyworld_> var testRoundTripper = &jujutest.ProxyRoundTripper{}
<wallyworld_> i guess it could be packaged
<jtv> And the code that puts it in place.
<wallyworld_> i think it's all of 4 lines
<wallyworld_> the image data will be different for each test case i would imagine
<rogpeppe> mornin' all
<wallyworld_> g'day
<rogpeppe> wallyworld_: hiya
<wallyworld_> rogpeppe: i went to land that simplestreams branch today cause i really need it. but canonistack went down for maintenance and now our landing bot is gone
<wallyworld_> and i don't know how to restart it
<rogpeppe> wallyworld_: oh dear
<wallyworld_> yeah :-(
<rogpeppe> wallyworld_: mgz probably does
<wallyworld_> the IS guys said why the fuck are you running prod stuff on canonistack?
<wallyworld_> i didn't have a good answer :-)
<wallyworld_> rogpeppe: i'm unsure about the correct way to do stuff with the api changes. in the jujud Machineagent Run() method, it is kosher to get a state object using openState() and then go machine = st.getMachine(234) and then invoke methods on the machine object?
<rogpeppe> wallyworld_: the correct answer depends on what you're trying to do
<wallyworld_> rogpeppe: a code snippet http://pastebin.ubuntu.com/5914001/
<wallyworld_> i want to write some data to the machineDoc
<wallyworld_> on which the agent is running
<rogpeppe> wallyworld_: i *think* that will probably be best done in one of the machine agent workers
<rogpeppe> wallyworld_: probably deployer, or maybe machiner
<wallyworld_> rogpeppe: the code to set up the supported containers is in the run method
<wallyworld_> so i have the info at hand at that point
 * rogpeppe thinks
<wallyworld_> i'll see if i can move the code
<wallyworld_> cause some more lxc stuff is done in StateWorker()
<wallyworld_> so it might make sense to put it all together
<wallyworld_> i'm just reading all this code for the first time
<wallyworld_> i think i can stick it in StateWorker()
<rogpeppe> wallyworld_: i think that's reasonable for the moment. we're going to be moving away from doing anything in the state worker though, as we start to use the API for everything
<rogpeppe> wallyworld_: so we'll need a SetSupportedContainers API call in the relevant worker (or perhaps on MachineAgent.State)
<wallyworld_> rogpeppe: except i just saw that StateWorker is only run on bootstrap node. and it just calls openState() anyway. so i might leave the code in the Run method
<rogpeppe> wallyworld_: no, every client runs StateWorker currently
<rogpeppe> wallyworld_: otherwise nothing would work, because we haven't moved all the agents to using the API yet
<wallyworld_> ok. i'm still not across all the new design
<rogpeppe> wallyworld_: can we really not tell what containers a machine supports until it comes up?
<rogpeppe> wallyworld_: i'm thinking that this breaks the nice usual mode of operation: if i interpret this right, you won't be able to deploy to a given container on a machine until that machine has actually come up
<rogpeppe> wallyworld_: or am i misunderstanding what SetSupportedContainers is to be used for here?
<fwereade> rogpeppe, if you have any clever way of determining the answer to that question we would be most interested to hear it
<fwereade> rogpeppe, wallyworld_: but I don't think we can prevent adding containers until the host's up
<rogpeppe> fwereade: i'm thinking that we should probably *allow* deployment to a container before a machine's up, yes
<rogpeppe> fwereade: but the deployment should fail if the container's not supported
<fwereade> rogpeppe, wallyworld_: isn't that just a provisioning failure?
<rogpeppe> fwereade: +1
<rogpeppe> fwereade: but perhaps that's what wallyworld_'s envisaging anyway, i'm not sure
<wallyworld_> rogpeppe: there's a method called EnsureLXCContainers
<wallyworld_> that is called at the start of the Run method
<rogpeppe> wallyworld_: method on what?
<wallyworld_> so we know then if lxc is supported
<wallyworld_> MachineAgent
<rogpeppe> wallyworld_: you mean EnsureWeHaveLXC ?
<wallyworld_> yeah
<wallyworld_> sorry, bad memory
<wallyworld_> rogpeppe: and yes, set supported containers is supposed to be used so that we only attempt to create a container on a host that can support it
<wallyworld_> eg not all hosts can run kvm
<wallyworld_> or lxc (eg windows)
<fwereade> wallyworld_, I'm a bit worried about the inconsistency there
<rogpeppe> wallyworld_: i think that breaks juju's basic workflow, unfortunately
<fwereade> wallyworld_, what's the benefit of having two different ways of failing the same operation?
<wallyworld_> fwereade: rogpeppe: i have to go have dinner and play soccer, i'll talk to you later this evening when i'm back
<rogpeppe> wallyworld_: np, have fun
<fwereade> wallyworld_, sure, have fun
<wallyworld_> fwereade: also, bot is down
<fwereade> grar, ty
<wallyworld_> canonistack got nuked today for upgrade
<wallyworld_> and bot disappeared
<wallyworld_> and i have nfi how to restart it
<wallyworld_> we need to not run bot on canonistack
<wallyworld_> anyways, ttyl
<fwereade> mgz, do I recall that jam handed bot-bouncing duties over to you when he left?
 * fwereade getting breakfast quickly
<axw> fwereade: when you have a moment, can you please read my comment here? https://bugs.launchpad.net/juju-core/+bug/1027876
<_mup_> Bug #1027876: cmdline: Support debug-hooks <cmdline> <juju-core:Triaged by axwalk> <https://launchpad.net/bugs/1027876>
<fwereade> axw, that looks pretty plausible actually
<fwereade> axw, are you interested in investigating that a little bit?
<axw> yeah, I'd be happy to have a crack at it
<axw> it'll probably take me a little while longer than others, still learning things obviously :)
<axw> thanks for taking a look
<fwereade> axw, the only wrinkle I can see is clearly communicating what's going on when two users try to debug hooks on the same unit (and making sure it works if two people are doing different units on the same machine)
<fwereade> axw, cool, that would be very much appreciated
<axw> yeah I did think of that as I was writing it up... only simple solution I could think of was to time out the ssh command and print something useful
<fwereade> axw, I can live with a bit of inelegance there, though, it is very much a charmer-focused tool
<axw> actually the flock can do that
<axw> ok
<axw> cool
<TheMue> so, second part of syncing is in: https://codereview.appspot.com/11910043
<TheMue> anyone interested in a review?
<jtv> TheMue: I think I can take it.
<TheMue> jtv: thx
<axw> I'm off. Have a nice weekend everyone.
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: mgz | Bugs: 5 Critical, 76 High - https://bugs.launchpad.net/juju-core/
<fwereade> dimitern, ping
<dimitern> fwereade: pong
<fwereade> dimitern, any thoughts on CanDeploy? AFAICT we don't really need it
<fwereade> dimitern, because we'll get unauth errors out of Life for any unit we're not meant to know about
<dimitern> fwereade: well it started as AssignedMachineId
<dimitern> fwereade: but you're probably right
<fwereade> dimitern, the plan was that we could drop all of that bit, I think -- "responsible" is now handled implicitly behind the api
<fwereade> dimitern, that's what getAuthFunc does for us
<dimitern> fwereade: any chance for mistakes? like recalling a unit we think we cannot access?
<fwereade> dimitern, so long as we're barfing on errors that aren't unauth, and we're only returning unauth errors in appropriate situations I think we're good
<fwereade> dimitern, and CanDeploy just duplicates the work of getAuthFunc from the other direction AFAICT
<fwereade> dimitern, you should be able to drop the CanDeploy bit and all tests should still pass, I think
<fwereade> dimitern, if that doesn't work we should look closer
<dimitern> fwereade: i'll try that out
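The getAuthFunc idea fwereade describes can be sketched as follows: the API facade closes over a predicate saying which entities the caller may touch, and every method returns unauthorized for anything else, so a separate CanDeploy pre-check duplicates the same work. All names here are illustrative, not juju-core's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

var errUnauthorized = errors.New("unauthorized")

// facade holds the auth predicate established at login time.
type facade struct {
	canAccess func(tag string) bool
}

// Life returns the entity's lifecycle state, or unauthorized for any
// entity the caller is not responsible for - which is exactly the
// error the deployer would get, making a CanDeploy call redundant.
func (f facade) Life(tag string) (string, error) {
	if !f.canAccess(tag) {
		return "", errUnauthorized
	}
	return "alive", nil // stand-in for the real state lookup
}

func main() {
	f := facade{canAccess: func(tag string) bool { return tag == "unit-wordpress-0" }}
	l, err := f.Life("unit-wordpress-0")
	fmt.Println(l, err)
	_, err = f.Life("unit-mysql-0")
	fmt.Println(err)
}
```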
<jtv> Hi folks - I was wondering why the Virtual RoundTripper in environs/jujutest/metadata.go lists files as an array of filename/content tuples?  Wouldn't a map from filenames to contents be easier to understand?
<fwereade> jtv, sounds like it'd eliminate possible confusion too, but I don't specifically know about that code
<jtv> Thanks.  Something to keep in mind...  I found it hard to get my mind around the test setup, but I figured it was a better investment than mindlessly copying what everybody else does.  :-)
<fwereade> jtv, +1
<dimitern> fwereade, rogpeppe: I'd like to leave the UnitTag, consts, regexpes and other common stuff between api and state that's now duplicated to be cleaned up in a follow up, if you don't mind, not to complicate this CL
<rogpeppe> dimitern: sgtm
<fwereade> dimitern, ah, has duplication already started elsewhere?
<dimitern> fwereade: yeah, in a few places, not many
<fwereade> dimitern, ok, yeah, best clear it up all in one go after this then
<dimitern> fwereade: yeah
<fwereade> dimitern, does CanDeploy evaporate cleanly?
<dimitern> fwereade, rogpeppe: I also removed CanDeploy altogether and reverted the NotAssignedError changes, running tests now, then will test it live, if it works it should be done
<fwereade> dimitern, lovely, tyvm
<rogpeppe> dimitern: brill, thanks
<dimitern> rogpeppe, fwereade: yep, tests pass without it, testing live now
<dimitern> wallyworld_: standup?
 * rogpeppe goes for lunch
<frankban> dimitern: re William's question in my review, i don't see a StringsWorker in the worker package (there is only a notifyWorker), so I guess it's not yet implemented, correct?
<mgz> dimitern: so, the go-bot creds, can you send them to me?
<dimitern> frankban: yes it's not - so far only one worker could use it - the machiner, if others appear we can factor out the common code into a stringsworker
<dimitern> mgz: just a sec
<frankban> dimitern: well, the minunits worker will be the second one
<dimitern> frankban: great
<dimitern> frankban: we can follow the notifyworker pattern then
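The notifyWorker pattern being referenced boils down to a loop that fires a handler once per change event until the watcher's channel closes. A stripped-down sketch (juju's real workers also manage watcher lifecycle and tomb-based shutdown, omitted here); a StringsWorker would be the same loop with a `<-chan []string` delivering the changed ids to the handler:

```go
package main

import "fmt"

// runNotifyWorker invokes handle once per notification until the
// changes channel is closed or the handler errors.
func runNotifyWorker(changes <-chan struct{}, handle func() error) error {
	for range changes {
		if err := handle(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	ch := make(chan struct{}, 3)
	ch <- struct{}{}
	ch <- struct{}{}
	ch <- struct{}{}
	close(ch)

	n := 0
	if err := runNotifyWorker(ch, func() error { n++; return nil }); err != nil {
		panic(err)
	}
	fmt.Println("handled", n, "changes")
}
```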
<mgz> so, for now, people should manually land on trunk I think
<mgz> if anyone is not sure how to do that or wants help, poke me
 * dimitern boo we want the bot!!
<dimitern> :)
<mgz> once you bot, you don't want to go back...
<dimitern> sure!
<dimitern> fwereade, rogpeppe: https://codereview.appspot.com/11800045/ updated
<TheMue> so, first step done, a pre-check
<fwereade> mgz, rogpeppe, I might have lunch before I chat to you guys, will you be around a bit later? am I blocking you at all?
<rogpeppe> fwereade: i'll be around
<rogpeppe> dimitern: "No, if I do the test still succeeds, but it takes 10s (LongWait) more."
<rogpeppe> dimitern: i'd like to understand why that's true
<rogpeppe> dimitern: i can't see why it would be
<rogpeppe> dimitern: because isRemoved doesn't call Sync or StartSync
<mgz> fwereade: I shall lunch as well
<rogpeppe> dimitern: and anyway Sync doesn't affect anything except watchers, and waitFor doesn't use a watcher
<rogpeppe> mgz: enjoy
<fwereade> reviewing dimitern's branch but off shortly, cath's whereabouts are unknown
<dimitern> rogpeppe: I don't know but I observed so
<rogpeppe> dimitern: that's really weird
<rogpeppe> fwereade: lost in a maze of twisty passages, all alike?
<fwereade> rogpeppe, london streets have the occasional distinguishing feature, but yeah
<rogpeppe> fwereade: ah, i thought you were still in a large country house in wiltshire...
<dimitern> rogpeppe: pull the branch and try it out for yourself, if you want
<rogpeppe> dimitern: am just doing that
<rogpeppe> dimitern: i can't reproduce the behaviour
<rogpeppe> dimitern: did you change BackingState to State in waitFor too?
<rogpeppe> dimitern: because that *would* have the results you saw
<dimitern> rogpeppe: I changed them everywhere
<dimitern> rogpeppe: when I saw the delays of 10s
<rogpeppe> dimitern: ah, that was not my suggestion
<rogpeppe> dimitern: the only place it makes a difference for a call to Sync
<rogpeppe> s/for/is for/
<rogpeppe> dimitern: i think it's worth using BackingState only when necessary rather than changing it as if it's magic powder :-)
<rogpeppe> s/changing/using/
<dimitern> :)
<dimitern> rogpeppe: ok will try
<rogpeppe> dimitern: tbh, i'm not sure we should have provided BackingState at all - JujuConnSuite.Sync/StartSync would probably be a better idea
<dimitern> rogpeppe: it's useful sometimes
<dimitern> rogpeppe: with watchers through the api
<rogpeppe> dimitern: agreed, but what would you ever need to do on it other than call Sync or StartSync ?
<rogpeppe> dimitern: (genuine question)
<dimitern> rogpeppe: i can't think of other uses, but jam probably has some
<rogpeppe> dimitern: i can't think of any other case where it would make a difference
<rogpeppe> dimitern: after all, they're both talking to the same underlying mongo
<dimitern> rogpeppe: but different connections
<rogpeppe> dimitern: sure, but why would that make a difference?
<rogpeppe> dimitern: we don't do any caching
<dimitern> rogpeppe: probably nothing, just thinking out loud
<ahasenack> guys, got this in today's update of the whole juju-core stack from trunk:
<ahasenack> # launchpad.net/gwacl
<ahasenack> ../go/src/launchpad.net/gwacl/management.go:317: function ends without a return statement
<ahasenack> I'm on raring
<ahasenack> it doesn't even say if that's an error or a warning
<rogpeppe> ahasenack: ah, that's because the builder is running go1.1
<rogpeppe> ahasenack: there are no warnings
<ahasenack> go likes to change syntax every now and then, heh?
<rogpeppe> ahasenack: go1.1 is more lenient about return statement positioning
<ahasenack> rogpeppe: so what do I do?
<rogpeppe> ahasenack: you could install go1.1
<ahasenack> no
<ahasenack> that would hide the problem
<rogpeppe> ahasenack: most of us use that
<ahasenack> do you plan on backporting go1.1 to raring, quantal and precise?
<rogpeppe> ahasenack: that is the plan, yes
<rogpeppe> ahasenack: although i don't know where we are with it
<ahasenack> then I'll get go1.1 when it's backported :)
<ahasenack> although it sounds like saucy will be out before :)
<rogpeppe> ahasenack: in which case, you should file a bug with gwacl
<ahasenack> ok
<rogpeppe> ahasenack: and perhaps propose the fix (which will be trivial) if you feel charitable
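The gwacl error is this class of code: go 1.0 required a function's literal last statement to be a return, while go 1.1's terminating-statement rules also accept a final panic or an if/else where all branches return. An illustrative example (not gwacl's actual function):

```go
package main

import "fmt"

// mustPositive ends in a panic rather than a return. go 1.0 rejected
// this with "function ends without a return statement"; go 1.1 accepts
// it. The trivial 1.0-compatible fix is an unreachable trailing return.
func mustPositive(n int) int {
	if n > 0 {
		return n
	}
	panic("not positive")
}

func main() {
	fmt.Println(mustPositive(3))
}
```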
<rogpeppe> ahasenack: i'm afraid i don't know where we are currently on policy around using go1.1 features
<rogpeppe> fwereade, mramm: ^ ?
<ahasenack> https://bugs.launchpad.net/gwacl/+bug/1205331
<ahasenack> k, filed
<_mup_> Bug #1205331: r203 doesn't build in raring <Go Windows Azure Client Library:New> <https://launchpad.net/bugs/1205331>
<rogpeppe> ahasenack: i was away when the builder was changed to use go1.1, i'm afraid, so i missed whatever debate there was.
<ahasenack> seems a big deal switching compilers
<rogpeppe> ahasenack: the Go folks are very careful around backward compatibility, so it's not actually a big deal
<ahasenack> rogpeppe: oh, I thought that bug prevented building
<ahasenack> so it is a warning?
<rogpeppe> ahasenack: no, it's a backwardly incompatible feature
<ahasenack> so this one in particular is a big deal then
<rogpeppe> ahasenack: it's a big deal if people use any go1.1-specific feature if we require 1.0.x compatibility
<rogpeppe> ahasenack: that's not just a compiler-switch problem, it's a problem with using any new feature AFAICS
<rogpeppe> ahasenack: that's why i *thought* we were going to continue running the bot on 1.0.2 until we could use 1.1 everywhere
<ahasenack> sounds like the safer choice
<ahasenack> doesn't the bot need rebuilding, given that canonistack outage? Maybe it could be rebuilt with 1.0.2
<ahasenack> or is that a different bot
<rogpeppe> ahasenack: mgz is restarting it, yes. he might want to use the other version, though i suspect it was changed for a reason i'm not aware of
<ahasenack> maybe the tls stuff
<ahasenack> I don't recall which version had which problem, but go 1.1 and tls and gwacl were involved
<rogpeppe> ahasenack: it's possible that gwacl now requires go1.1 because they've forked net/http from tip
<rogpeppe> ahasenack: nope, "The fork is based on go version 2:1.0.2-2."
<fwereade> rogpeppe, I am not 100% sure that the last 1.0 uses have been excised -- mgz, is there anything preventing us from going 1.1 across the board?
<rogpeppe> fwereade: is 1.1 now backported to precise etc?
<fwereade> rogpeppe, I *think* that the most recent releases were all built with 1.1, by a variety of means which I do not have at immediate recall
<rogpeppe> fwereade: so is it a problem that gwacl is using go1.1-specific features?
<fwereade> rogpeppe, it may not be, but I'm waiting for mgz's response, because he has better command of the situation than I do
<frankban> mgz: hi, any idea about this lbox error? http://pastebin.ubuntu.com/5914998/
<mgz> frankban: you can't use lbox submit, you don't have perms on the branch
<mgz> commit to trunk locally, run the tests, push to the go-bot branch as go-bot
<mgz> so, the last command specifically is on bzr+ssh//go-bot@bazaar.launchpad.net/~go-bot/juju-core/trunk
<mgz> you may want to do `bzr info` on it now, to double check if your ssh keys got added to the bot account
<mgz> wuhah, I totally missed the hilight in the log
<mgz> ahasenack: we're moving to go 1.1 because maintaining cross-version compat for the lifetime of our support on a young language like go isn't practical.
<mgz> ahasenack: you can use ppa:juju/golang which contains what jamespage is going to get backported to older series, and what juju-core is getting built from
<rogpeppe> mgz: so we can use go1.1 features now?
 * rogpeppe drools slightly
<mgz> we really want our releases sorted first, but yeah.
<fwereade> mgz, rogpeppe: if you're both around, do you want to talk about addresses?
<mgz> let's
<rogpeppe> fwereade: sure
<frankban> mgz: thanks, I'll do
<mgz> frankban: if you get stuck I can land for you.
<fwereade> rogpeppe, mgz, I'm joining the standup call
<ahasenack> mgz: so #1205331 should be marked as invalid?
<_mup_> Bug #1205331: r203 doesn't build in raring <Go Windows Azure Client Library:New> <https://launchpad.net/bugs/1205331>
<ahasenack> or won't fix perhaps
<ahasenack> until go 1.1 lands in raring
<mgz> ahasenack: arguably we should fix that for now
<frankban> mgz: bzr info bzr+ssh://go-bot@bazaar.launchpad.net/~go-bot/juju-core/trunk gives me a "Permission denied (publickey)". If you can land my branch, that would be great, otherwise I can wait for tarmac to be up again
<mgz> I shall land shortly, which branch?
<frankban> mgz: lp:~frankban/juju-core/minuniter-worker thanks!
<dimitern> fwereade: updated https://codereview.appspot.com/11800045/
<dimitern> rogpeppe: I haven't seen your response to ^^ yet?
<fwereade> dimitern, nice timing :)
<dimitern> rogpeppe: changed the tests to use s.State, except for waitFor
<rogpeppe> dimitern: oh sorry, i got half way through then got distracted
<rogpeppe> dimitern: thanks
<fwereade> dimitern, https://codereview.appspot.com/11800045/diff/22001/worker/deployer/deployer.go#newcode176
<fwereade> dimitern, do we need an error return?
<fwereade> dimitern, ah, I guess we check tag sanity
<fwereade> dimitern, forget I said anything
<dimitern> fwereade: not really, it's always nil
<rogpeppe> dimitern: have you re-proposed with those changes?
<dimitern> fwereade: kept it like this to match the interface of state
<dimitern> rogpeppe: yes
<fwereade> dimitern, I'd probably drop it then, but don't do it unless you honestly think it's a good idea, it might just be churn
<rogpeppe> dimitern: oh yes, doh, reload doesn't change the patch no
<dimitern> rogpeppe: https://codereview.appspot.com/11800045/diff2/22001:31001/worker/deployer/deployer_test.go
<rogpeppe> dimitern: yeah, i'm stupid, ignore me
<rogpeppe> dimitern: LGTM
<dimitern> rogpeppe: great, thanks
<dimitern> fwereade: unless you have objections, i'd like to land it, so I can start working on the (few) follow-ups
<fwereade> rogpeppe, https://codereview.appspot.com/11800045/diff/22001/cmd/jujud/deploy_test.go#newcode95
<fwereade> dimitern, LGTM regardless
<dimitern> fwereade: thanks!
<rogpeppe> fwereade: it felt cleaner to have them as separate calls, especially as we might want to make them available to different clients
<dimitern> mgz: so how's the landing dance performed?
<dimitern> mgz: lp:~dimitern/juju-core/080-deployer-uses-api
<dimitern> mgz: but i'd like to know how can I do it as well
<rogpeppe> fwereade: and having them as separate calls maps nicely to the future address watchers, where we may well want to watch the individual addresses separately
<rogpeppe> fwereade: so we can just use a separate StringsWatcher for both Addresses and APIAddresses
<mgz> dimitern: so, what I'm doing,
<rogpeppe> fwereade: if it's slow making 3 request serially, then it's pretty trivial to make them concurrently as a future optimisation
<rogpeppe> s/request/requests/
<mgz> switch to trunk. merge your feature branch. commit with --author (if not you) using roughly rvsubmit style message. run `go test ./...`. bzr push to the magic bzr+ssh://~go-bot url ^above
<fwereade> rogpeppe, something's still a bit itchy... maybe it's that we will probably always want cacert with either
<fwereade> rogpeppe, but, I don't feel very strongly, I'd misinterpreted anyway
<dimitern> mgz: ok, I'll try now
<mgz> dimitern: you probably want to wait till after me for the merge though, so we serialise nicely
<mgz> ...the tests are taking too long to run :)
<rogpeppe> fwereade: yeah, we'll need CACert with either, but that will still change independently.
<dimitern> mgz: I'll merge and run the tests, so when I'm done to push it, I'll tell you
<fwereade> rogpeppe, yeah, sgtm, cheers
<rogpeppe> fwereade: cool, thanks
<mgz> yeah, if you move the commit till after running tests, you can then pull/remerge and (mostly) have got the same thing done :)
<dimitern> gwacl is broken right now
<dimitern> just pulled and I see this:
<dimitern> ../gwacl/management.go:317: function ends without a return statement
<dimitern> jtv, rvba, bigjools: any ideas?
<jtv> dimitern: you need James Page's Go PPA.
<jtv> We're standardized on 1.1.1 nowadays.
<dimitern> jtv: can you point me there please?
<jtv> The  Juju project is, that is.
<jtv> Yes I can
<jtv> ppa:james-page/golang-backports
<jtv> This was mailed out as a mandatory step a few weeks ago.
<dimitern> jtv: and then what? apt-get install golang?
<jtv> Just "apt-get update ; apt-get upgrade" should do it (as root obviously).
<jtv> Would there be anybody available for a largish refactoring branch involving attempt strategies?  It's this one: https://codereview.appspot.com/11923043
<jtv> dimitern: is it working now?
<mgz> dimitern: pushed, so you can pull trunk and run the real merge/commit now
<mgz> frankban: branch landed
<dimitern> jtv: yeah, thanks, I have to rebuild everything, but it works
<jtv> Oh, yes, forgot to say: you may get some weird errors from things that still link but no longer work.
<jtv> Best to "rm -rf $GOPATH/pkg/linux_*"
<dimitern> jtv: that's exactly what I did, and had to re-get goyaml and gwacl/...
<jtv> I'm surprised that you  had to re-get those, but who am I...  :)
<dimitern> jtv: i was using go 1.0.3 and haven't updated goyaml in quite a bit of time, since it was refactored to be pure go
<jtv> oic
<jtv> We also no longer use libcurl in gwacl.  We got an unofficial patch for crypto/tls that solves the problem for us, so gwacl now contains forked copies of net/http and crypto/tls.
<jtv> Hmm... that largish refactoring branch I've got up for review isn't that big really, it just looks big in Rietveld because of the number of files it touches.  Launchpad calls it: 629 lines (+141/-61) 22 files modified
<jtv> So it's all context.
<jtv> Any volunteers?
<rogpeppe> mgz: here's a sketchy sketch: http://paste.ubuntu.com/5915266/
<rogpeppe> mgz: it only waits until a machine has an address; it doesn't update addresses when they change later
<rogpeppe> mgz: i think we could probably leave that as a later refinement
<rogpeppe> mgz: with a couple of obvious errors removed: http://paste.ubuntu.com/5915275/
<rogpeppe> jtv: looking
<jtv> Thanks!
<rogpeppe> jtv: we always guarantee at least one attempt (and that's always been true)
<rogpeppe> jtv: so i don't think it's necessary to set Min=1
<mgz> rogpeppe: thanks, that's really useful
<rogpeppe> mgz: cool, np
<rogpeppe> mgz: it's probably full of bugs :-)
<jtv> rogpeppe: are you sure?  I was told a few weeks back that this was an issue, but I must admit the code isn't particularly clear.
<jtv> It'd be nice to have this sort of thing documented.  :/
<rogpeppe> jtv: yeah, that should be documented definitely
<rogpeppe> jtv: the original comment in the code says "	// we always make at least one attempt."
<rogpeppe> jtv: and i'm pretty sure i meant that when i wrote the comment
<jtv> rogpeppe: oh, did I miss that one?  Or are you saying it was lost somewhere along the way?
<jtv> I'd like to fix this if it's safe to do so, obviously.
<rogpeppe> jtv: it wasn't a doc comment unfortunately
<jtv> So might be nice to promote it.
<rogpeppe> jtv: the new code returns an Attempt with force==true, and looking at Attempt.Next, if force is true, it's guaranteed to return true, so it seems to be the case now too
<jtv> To avoid misunderstandings, which new code do you mean?
<jtv> I guess I'll have to see this through now and actually write the comments.  :)
<rogpeppe> jtv: how about this, as an addition to the comment on Next:
<rogpeppe> // Next always returns true the first time it is called -
<rogpeppe> // we are guaranteed to make at least one attempt.
<jtv> Yes, I'll add that, thanks.
<rogpeppe> jtv: the new code is the code that's currently in the tree (i updated it quite recently from goamz which had made some useful changes)
<jtv> The "force" bit is a bit weird...  You'd expect the "count" field to cover the same issue.
<jtv> As in: initialize "count" to min(strategy.Min, 1)
<rogpeppe> jtv: i remember going through a few iterations of the review of that code
<rogpeppe> jtv: it's a bit more subtle than it looks
<jtv> I'm sure it is.  :)
<rogpeppe> jtv: in particular, look at the way that HasNext sets force
<rogpeppe> jtv: could you reprose that branch please? rietveld is doing that annoying "chunk mismatch" thing again.
<rogpeppe> s/reprose/repropose/
<jtv> Oh dear.  Yes, hang on.
<jtv> rogpeppe: lbox just finished re-proposing.
<jtv> I also removed all the "Min: 1" and added your documentation text.
<dimitern> i merged mine
<jtv> How do we merge?
<mgz> jtv: you probably need to ask me or dimitern to do it, don't think red squad are in the bot ssh key set
<jtv> Ah
<jtv> I've got a bunch of branches lined up for landing.  :)
<TheMue> So, have to leave. Have a nice weekend.
<rogpeppe> jtv: reviewed
<jtv> Thanks rogpeppe!
<jtv> And a nice weekend, TheMue
<TheMue> jtv: thx
<jtv> rogpeppe: you discovered something interesting...  Not only does VerifyBootstrapInit() use env.Storage() a few times, but it's the only use it makes of env!
<rogpeppe> jtv: yeah, so you could just pass in the storage
<rogpeppe> jtv: not entirely sure if that would be a good idea though
<jtv> Not sure either.  I guess that means I should leave it be.  :)
<rogpeppe> jtv: i guess we might want to make it verify some other property of the environ in the future
<jtv> Yes, we might well.
<jtv> About those statements I moved out of the "if" conditions: those lines got longer during my work, so I had to break them up to keep them legible.  And then they got a lot shorter again.
<jtv> So you see how that happened.
<ackk> hi, is the ErrorCode supposed to be present for each kind of error response from juju API?
<dimitern> ackk: only if there was an error
<ackk> dimitern, so if I make a login request with AuthTag: foo I get an error without an "ErrorCode" field, just an "Error" one
<ackk> dimitern, "user-foo" gives me back an auth error with "unauthorized access" ErrorCode, instead
<ackk> dimitern,  fwiw I get 'invalid entity tag "foo"' as Error in the first case
<rogpeppe> dimitern, fwereade: finally, worker/upgrader: https://codereview.appspot.com/11932043
<fwereade> rogpeppe, ack
<rogpeppe> fwereade: much simpler than the one in cmd/jujud; hopefully ok :-)
<rogpeppe> aaaand that's me for the day
<rogpeppe> happy weekends to all
<jtv> nn robbiew
<jtv> I mean, rogpeppe
<jtv> (damn over-eager tab completion!)
<rogpeppe> jtv: :-)
<rogpeppe> jtv: g'night
<ackk> dimitern, I was wondering if the different behavior is a bug or it's intended
<dimitern> ackk: there is a test like that i think
<dimitern> ackk: "foo" is an invalid tag format, so you're not even getting to the unauthorized step
<ahasenack> dimitern: but no ErrorCode came back in that case, that's the question
<ackk> dimitern, yeah what ahasenack said
<dimitern> ackk, ahasenack: both code and message are optional, at least one of them will be set when there's an error
<ahasenack> ah, this is how software comes to display those "sorry, unknown error occurred" messages :)
<sidnei> https://bugs.launchpad.net/juju-core/+bug/1205451
<_mup_> Bug #1205451: killing instance outside of juju, doesn't get noticed <juju-core:New> <https://launchpad.net/bugs/1205451>
<ahasenack> sidnei: see this comment: https://bugs.launchpad.net/juju-core/+bug/1190715/comments/4
<_mup_> Bug #1190715: unit destruction depends on unit agents <juju-core:In Progress by fwereade> <https://launchpad.net/bugs/1190715>
<ahasenack> and the rest of the bug, of course, for context
<dpb1> a new  papercut one, maybe it's easy to fix? https://bugs.launchpad.net/juju-core/+bug/1205466
<_mup_> Bug #1205466: deploy charm uses cached copy even when all services are removed <papercut> <juju-core:New> <https://launchpad.net/bugs/1205466>
<dpb1> mramm, arosales: ^
<mramm> dpb1:  I'm not sure that's even a bug
<mramm> you are in control of what charm updates happen if we leave it that way
<mramm> and not in control if we change it
<mramm> that said, I think it would probably be easy enough to fix if that's the right thing to do
<mramm> for the local charm case it seems like it might be surprising for the charm not to update
<dpb1> mramm: I can validate what pyjuju did, but it seems very weird that I'm asking for a local deployment of a charm that is nowhere in the deployed system, and I get an old thing installed
<mramm> but for the non-local charm case it seems like having it not use the cache might be equally suprising
<mramm> and having it have different behaviors between the two seems likely to be surprising as well
<dpb1> mramm: and ya, you all get to decide if you want to fix it I can see your reasoning --- just reporting that it has bitten me a number of times, and each time wastes 15 minutes or so.
<dpb1> mramm: agreed on differing behaviors.
<dpb1> mramm: that would be very unexpected
<mramm> dpb1: if nothing else we should probably let people know what is happening
<dpb1> mramm: do you want me to verify what pyjuju did, or does it matter?
<mramm> it does matter, but not that much -- I think the important thing is that we do the "right" thing
<mramm> if it is biting people, we need to figure out a way to make it better if we can
<dpb1> mramm: OK, I leave it in your capable hands. :)
<ahasenack> mramm: fwiw, I think "destroy-service", having the word "destroy" in it, should get rid of the cached copy of the charm. It's what I would expect
<mramm> what if you have two service groups of the same service (two wordpress installs?)
<ahasenack> you mean same charm deployed under two different service names?
<ahasenack> like, juju deploy wordpress wordpress-A
<ahasenack> juju deploy wordpress wordpress-B ?
<ahasenack> and then juju destroy-service wordpress-B
<ahasenack> followed by juju deploy wordpress wordpress-C (to keep things simple, not -B again)
<mramm> right
<ahasenack> I think -C would use the charm as if it's the first time it's being deployed, no cached copy
<ahasenack> let's assume the deploy commands above were from a local charm, i.e., --repository ~/foo and local: prefix
<mramm> but what happens when you add-unit to A
<ahasenack> I would expect it to use my local on-disk copy
<mramm> since that charm is no longer in the cache?
<ahasenack> it would use the same charm that wordpress-A/0 has
<ahasenack> so the cache would need to be service-name based
<dpb1> Yes, when you upgrade-charm, they are upgraded independently.
<mramm> so what you are saying is that the cache needs to change
<dpb1> distinctly
<ahasenack> so far the cache was invisible
<ahasenack> implementation detail
<mramm> or is it already that way?
<ahasenack> but the case dpb described made it appear
<mramm> ok
<mramm> if the cache is service name based (which makes sense)
<mramm> any new deploy should not use the cache
<mramm> and we should destroy it if there are no units in that service naem
<mramm> name
<mramm> that does in fact seem reasonable
<mramm> posted as much to the bug comments
<ahasenack> mramm: +1 on your observation about unit count going down to zero not being the same
#juju-dev 2013-07-27
<ahasenack> hm, this looks like a juju bug
<ahasenack> context.go:221 worker/uniter: HOOK runtime: panic before malloc heap initialized
<ahasenack> http://pastebin.ubuntu.com/5916638/
#juju-dev 2013-07-28
<thumper> morning all
<wallyworld_> thumper: i started implementing supported containers in machine state, but william didn't like the idea, since we can only fully know what the containers are when a machine is running but want to use add-machine/deploy/etc before a machine starts
<thumper> hmm...
<thumper> this is going to be a bit weird
<thumper> I can see both sides of the coin
<wallyworld_> yeah
<wallyworld_> i think i should hold off on any more work on this till we discuss it
<wallyworld_> thumper: also, jools' son is in hospital so i'm not sure when he will be around today, so access to maas servers might be delayed. but maybe not, just depends on how things turn out
<thumper> wallyworld_: yeah, saw a tweet about it.  which son?
<thumper> and do you know what the problem is
<thumper> ?
<wallyworld_> jake, one of the twins
<thumper> PM is fine
<wallyworld_> brathing problems
<wallyworld_> breathing
 * thumper nods
<wallyworld_> i know a few other people have had the same thing with their young kids. i think there must be a virus or something going around
<wallyworld_> luckily it seems to get better after a day or three, but a hospital visit is required, and pretty scary all the same
<thumper> yeah
<thumper> children in hospital is always terrible
<thumper> re: work - I'm messing around with a local branch
<thumper> that will let me test out the bridged ethernet on maas
<thumper> when that becomes available for testing
<wallyworld_> based on what you tried with ec2?
<thumper> yeah...
<thumper> but more likely to work due to the way maas handles ip addresses
<thumper> with the simple dhcp server
<wallyworld_> and then combine that with martin's work and it should come together hopefully
<thumper> hopefully
<thumper> I also need to check pending reviews
<thumper> to see if there is anything pending
<thumper> that I should look at
<wallyworld_> i'll look too in case there's anything that needs a +1
<wallyworld_> i wonder if they got the bot working
<wallyworld_> the whole canonistack thing was a bit of a fiasco
<wallyworld_> the other thing i started work on was removing the control bucket from env.yaml. but it needs additional work done to store the bucket locally after generation. and there's an outline of how that should evolve in william's document
<wallyworld_> we also agreed that i should introduce a top level validation command and use sub commands for image and tools validation. so a bit more cleanup required on the simplestreams validation branch
<thumper> ok
 * thumper goes to the gym
#juju-dev 2014-07-21
<axw> wallyworld: will 1.20.2 just be cut off trunk?
<wallyworld> axw: there's a 1.20 branch
<axw> ok
<wallyworld> we branched when 1.20 was first released
<axw> oh yeah, I think I backported things to it
<axw> :)
<axw> wallyworld: was just wondering if #1337091 could/should be retargeted
<_mup_> Bug #1337091: maas provider: allow users to specify network bridge interface. <canonical-is> <juju-core:Fix Committed by niedbalski> <https://launchpad.net/bugs/1337091>
<axw> wallyworld: see the comments from a user at the end of the bug
<wallyworld> looing
<wallyworld> looking
<wallyworld> axw: at first glance, i thought that one was actually in 1.20.1, but my memory may be wrong
<axw> wallyworld: nah, I just checked
<wallyworld> axw: the one we really want in is bug 1341524
<_mup_> Bug #1341524: juju/MAAS deployed host with bonding configured via preseed missing eth0 from bond on first boot <maas-provider> <juju-core:Triaged by niedbalski> <juju-core 1.20:Triaged> <https://launchpad.net/bugs/1341524>
<wallyworld> i guess we can back port that other one also
<axw> wallyworld: do I need to ask sinzui or...?
<wallyworld> axw: no we'll just backport
<axw> okey dokey
<wallyworld> i could have sworn that one was targeted to 1.20
<wallyworld> axw: i've tweaked michael's juju/txn branch https://github.com/juju/txn/pull/2
<axw> wallyworld: looking in a sec
<wallyworld> no hurry, thanks
<wallyworld> axw: great, thanks. i'll fix the comments
<axw> cheers
<thumper> davecheney: https://github.com/juju/names/pull/16
<axw> wallyworld: no great rush, but here's the last PR from the core StateServerInstances changes: https://github.com/juju/juju/pull/342
<wallyworld> ok
<wallyworld> thumper: hey, you have fwereade with you this week?
<thumper> wallyworld: I do
<wallyworld> thumper: i'd love to talk about the mongo replicaset stuff
<wallyworld> at some point
<davecheney> thumper: one for you in return, https://github.com/juju/names/pull/17
<thumper> davecheney: https://github.com/juju/charm/pull/22
<thumper> wallyworld: for repos other than juju/juju, we just click the merge button on github right?
<wallyworld> thumper: yep, till martin gets stuff sorted, hopefully rsn
<thumper> kk
<wallyworld> thumper: so maybe later on when you have a spare moment, ping me and we can talk mongo
<thumper> wallyworld: sure
<thumper> davecheney: and another https://github.com/juju/juju/pull/343
<cmars> dfc, https://github.com/juju/juju/pull/344
<cmars> oh, i mean davecheney ^^
<davecheney> looking for a second review on https://github.com/juju/juju/pull/344
<davecheney> ta
<davecheney> (ノ゜Д゜)ノ ︵ ┻━┻
<davecheney> (ﾉಥ益ಥ）ﾉ ┻━┻
<thumper> (╯°□°)╯︵ ┻━┻ ︵ ╯(°□° ╯)
<axw> wallyworld: thanks. I'll wait for perrito666 before landing
<wallyworld> axw: sure
<waigani> thumper: https://github.com/juju/juju/pull/346
<davecheney> waigani: review done
<wallyworld> axw: i've set up a remote alias to another repo on gh. i've fetched it. now I want to merge a branch from it into my branch, but all the syntax i'm trying fails. any clues?
<axw> wallyworld: try just the commit hash?
<axw> or is there more than one?
<wallyworld> axw: i think there's more than one, it's michael's copy-session branch
<wallyworld> so i just want to merge all of his commits to that branch
<axw> wallyworld: I thought you just did "git merge remote/branch"
<axw> you'll need to fetch it first
<axw> git fetch $remote
<wallyworld> axw: i did git remote add voidspace https://github.com:voidspace/juju.git
<wallyworld> and then git fetch voidspace
<wallyworld> and now i want to merge in the copy-sessions branch
<wallyworld> tried git merge voidspace/copy-sessions
<wallyworld> ah hang on
<wallyworld> i'm an idiot
<wallyworld> i had a typo
<axw> remote is wrong?
<wallyworld> i left off an s
<axw> doh
<wallyworld> yup
<wallyworld> sorry
<thumper> davecheney: if you feel the desire... https://github.com/juju/juju/pull/347
<wallyworld> thumper: kvm-ok is still installed but images we are told do not have "/usr/sbin" on the path
<wallyworld> some images
<thumper> right... which is why we install the cpu-checkers package
<thumper> at least that is what we used to do
<wallyworld> no we still do
<wallyworld> kvm-ok is there
 * thumper struggles...
<wallyworld> but /usr/sbin is not on the path
<thumper> wat
<thumper> ah
<wallyworld> hence "kvm-ok" won't be found
<thumper> right
<thumper> got it now
<wallyworld> i had no idea /usr/sbin would not be on the path always
<wallyworld> axw: you got a minute to talk in tanzanite-standup?
<axw> wallyworld: just a minute
<wallyworld> sure
<axw> wallyworld: did you see my response about the raciness? do you think it's worthwhile updating?
<wallyworld> axw: yeah, i was going to suggest using the buildTxn() stuff but i see your point. if it's not too much extra work, it would be nice to have it non-racy. since someone else reading the code will have the same concerns
<wallyworld> and it would be good not to deliberately add racy code
<wallyworld> since you never know how it might manifest in a problem later
<axw> ok, I'll take another look
<wallyworld> don't waste too much time on it
<wallyworld> add a comment perhaps
<TheMue> morning
<vladk> TheMue, jam: please, take a look https://github.com/juju/juju/pull/348/files
<TheMue> vladk: *click*
<TheMue> jam: Iâve done the review of 348, looks solid to me but would prefer another short look by you.
<jam> vladk, TheMue: will do. I peeked at it a bit ago. The basic idea here is that we don't need to pass the networking information into cloud-init anymore because it is no longer responsible for setting up the initial networks, correct?
<vladk> jam: I think so. This works only on MaaS provider now, because it's the only provider that supports networks.
<jam> TheMue: just making coffee, will be in our 1:1 in a moment
<TheMue> jam: ok
<Egoist> hello
<Egoist> is there any way to use github instead of charmstore?
<jam> vladk: lgtm
<jam> https://github.com/juju/juju/pull/348
<vladk> jam: thanks
<natefinch> morning all
<TheMue> natefinch: morning
<natefinch> Morning all
<TheMue> :)
<natefinch> rogpeppe: do you know why, in go/ast, fields have a slice of names, rather than just one?  When would a field have multiple names?  http://golang.org/pkg/go/ast/#Field
<rogpeppe> natefinch: struct {a, b, c int}
<natefinch> rogpeppe: that's not 3 fields? :/
<rogpeppe> natefinch: nope - otherwise you couldn't round-trip gofmt through go/ast
<natefinch> ahh, that's true
<natefinch> still, makes the logic around that code more difficult, can't just count fields and know how many fields there really are
<natefinch> thanks
<rogpeppe> natefinch: sure, you need another loop :-)
<rogpeppe> natefinch: you could always use the types package
<natefinch> rogpeppe: yeah, not a huge deal. Just a little annoying, but understandable
<natefinch> rogpeppe: I'm using a bit of both
<rogpeppe> natefinch: what are you actually trying to do?
<natefinch> rogpeppe: extract exported function names, arguments, and the comments on them to do some code generation
<rogpeppe> natefinch: generate code to do whaT?
<natefinch> rogpeppe: I'm making a CLI-generator.  Give it some exported functions and it'll write the help docs and flag parsers based on the comments and the function signatures
<rogpeppe> natefinch: interesting
<natefinch> rogpeppe: just an idea that popped into my head when someone mentioned a self-documenting CLI, and then I was disappointed that it required retyping a bunch of stuff... so I figured I'd write my own where your code really is self-documenting
<rogpeppe> natefinch: getting flags right will be tricky
<natefinch> rogpeppe: yep, that's part of the fun
<natefinch> rogpeppe: there's a bunch of ways you could do it, and you're not going to please everyone no matter what way you choose to do it
<rogpeppe> natefinch: another question is whether you'd be able to provide a dynamic escape hatch
<natefinch> rogpeppe: not sure what you mean
<rogpeppe> natefinch: whether *all* of the cli is defined by this thing, or whether we could have a way of adding extra stuff that doesn't perhaps fit the model so well, while doing the majority generated
<natefinch> rogpeppe: certainly a good idea.
<perrito666> morning
<perrito666> axw: hey, says my backlog that you need to talk to me
<natefinch> rogpeppe: oh, unrelated - what's the correct way to get Acme?  There was a deb I tried, but it behaved oddly.  The website is not very clear.
<axw> perrito666: hey. some time today can you please review the change to restore in https://github.com/juju/juju/pull/342
<perrito666> going
<perrito666> we have a very low bus factor on b&r
<rogpeppe> natefinch: http://swtch.com/plan9port/
<perrito666> axw: I tend to believe you are right but I cannot assure it, as I see it you have two options here, you ask the original author of that line (roger) or you give me time to actually run the test on you branch
<axw> perrito666: CI is testing backup/restore now right?
<axw> I mean, successfully
<perrito666> axw: it is
<perrito666> axw: hold, let me check that again
<perrito666> axw: it is not, although ha backup and restore is passing
<axw> appears replset related
<axw> 1/2 is good enough for me
<axw> I'm confident it's fine anyway, so I'm going to land it as is. I'll fix it if it breaks
<perrito666> sg
<perrito666> if one of the two jobs is running properly, it means it's mostly a timeouts issue, since they do pretty much the same
<katco> good morning all
<perrito666> hi katco
<vladk> TheMue, jam: please, take a look https://github.com/juju/juju/pull/255
<TheMue> vladk: yep
<natefinch> jam, wallyworld: is there a handoff we need to make for the current crop of super high priority bugs?
<wallyworld> natefinch: in my email i mention 2 that we've not yet looked into
<wallyworld> the ohers are in progress
<katco> should a backport commit message state that it's a backport, or should it contain the original commit message?
<natefinch> wallyworld: ok
<wallyworld> katco: i'd mention the fact that it's been backported, but also include the details from the first commit
<katco> wallyworld: will do, ty sir
<wallyworld> natefinch: also, any input on the oplog file size, plus presence replication discussion would be appreciated
<natefinch> wallyworld: sure.  I think the problem with the oplog is that it gets put in the ephemeral storage which is like 400gigs
<natefinch> wallyworld: IIRC
<natefinch> wallyworld: not a lot of info on that landscape crash bug, eh?
<natefinch> wallyworld: oh, I missed the machine-0 log... that helps
<wallyworld> yeah
<natefinch> sorta looks like it might just be a side effect of the i/o timeout bug
<wallyworld> could be, i haven't had a chance to look
<wallyworld> natefinch: with the oplog file, it gets put in /var/lib/juju/db i think, and the size is calculated based on 5% of free space (up to max 50GB), so for a 400GB disk, the file is about 20GB
<wallyworld> which is not too bad I would think?
<natefinch> wallyworld: 20gb is pretty big, and I'm not sure exactly how much that thrashes the disk, which might be quite slow
<wallyworld> natefinch: i think the file is just allocated ie empty
<wallyworld> maybe the mongo defaults are bad, but that's what's being followed - the mongo algorithm is being used
<wallyworld> we can asjus if needed though
<wallyworld> adjust
<natefinch> wallyworld: we've seen it take a long time on some machines.... I don't know why.  One instance took like 2 minutes.  I don't know 100% for sure that the oplog was the problem, but it was basically the only thing we were doing
<wallyworld> i think for the local provider it was wound back to max 1GB for that reason
<wallyworld> i think that was the incident you're referring to
<natefinch> yep
<wallyworld> slow disks will be an issue for sure in generating the file
<natefinch> wallyworld: I'm guessing the huge, free, shared, ephemeral disk you get with a $40/month AWS instance is the worst kind of slow
<wallyworld> yeah, could be
<wallyworld> so william and i are thinking we should use the default algorithm, but may that's wrong
<wallyworld> maybe
<wallyworld> it can be changed easily enough
<natefinch> wallyworld: Mongo is designed for huge datasets.  Juju's usage is not a huge dataset.
<natefinch> We can ask on #mongo
<wallyworld> can you do that for me? i need to sleep
<natefinch> wallyworld: fair enough.  Will do
<wallyworld> thanks
<natefinch> hazmat: do you have the full logs for this bug? https://bugs.launchpad.net/juju-core/+bug/1345014
<_mup_> Bug #1345014: juju machine agents suiciding <deploy> <lxc> <juju-core:Triaged> <juju-core 1.20:Triaged> <https://launchpad.net/bugs/1345014>
<TheMue> vladk: added a minor comment
<vladk> TheMue: thanks
<perrito666> natefinch ?
<natefinch> ericsnow, perrito666: we'll have to move the standup to later, roofers are here and they need a little help
<natefinch> houses are expensive, dammit
<perrito666> natefinch: lol ok, shout if you need a hand
<natefinch> perrito666: haha thanks
<ericsnow> natefinch, perrito666: lol same here :)
 * perrito666 shows a pamphlet of argentina to natefinch
<perrito666> we have space :p
<perrito666> you can come here
<vladk> TheMue: I fixed https://github.com/juju/juju/pull/255
<TheMue> vladk: Iâm looking
<bodie_> morning
<TheMue> bodie_: morning
<TheMue> vladk: reviewed
<natefinch> rogpeppe, mgz, jam: I'm talking to the mongodb guys on #mongodb, trying to get an idea of how big our oplog should be, and they basically said it depends on the amount of modifications the DB receives.  It's my understanding that we normally aren't going to get huge amounts of modifications after an initial juju deployment... but we also support wide varieties of environment sizes from 2 to 5000 machines, and people might
<natefinch> be bringing stuff up and down a lot.  Seems like one of those "it depends" answers which is not very helpful.
<rogpeppe> natefinch: we get a modification every time a unit's relation data changes
<rogpeppe> natefinch: that could be happening continually
<rogpeppe> natefinch: but it really depends on ops/second
<natefinch> rogpeppe: even more "it depends" (now depends on the implementation of the specific charms deployed) .... but that's good to know
<rogpeppe> natefinch: because there's only a problem if the rate of operations is greater than we can cope with
<rogpeppe> natefinch: and what the likely burst rate/duration is
<mgz> natefinch: that was my understanding as well, unless we write very fast, we don't *need* a huge oplog
<natefinch> ahh, so it's getting drained as replication happens, I get it
<mgz> but I also see william's earlier point about just using default mongo settings unless we really shouldn't
<natefinch> well, the whole point is that mongo's default is causing us problems
<natefinch> in some cases
<mgz> and this doesn't seem a big issue
<mgz> except in some annoying edge cases
<mgz> not real deployments
<natefinch> ok, yeah, going back over the email thread, it does seem like this is orthogonal to the actual problems seen
<dpb1> Hi -- is hulk-smashing a supported operation by juju?
<natefinch> dpb1: depends on your definition of supported
<natefinch> dpb1: it's generally discouraged, but it's possible, so we obviously support it at some level
<dpb1> natefinch: do you all have a document as to *why* it's generally discouraged?
<natefinch> dpb1: not really... usually it's enough to say that any two charms may make conflicting changes to the environment.  If they both want to listen on port 80, for example, or they both create the same user etc etc.
<natefinch> dpb1: generally the best answer to getting more than one charm on the same machine is to put them in containers
<dpb1> natefinch: ok, thanks.
<dpb1> natefinch: appreciate the answer
<natefinch> dpb1: welcome
<natefinch> perrito666: have you seen the cloudbase emails?  How is August 25th-28th for you?
<TheMue> hmmm, my local environment doesn't like me anymore
<TheMue> can't connect to the api server, strange
<perrito666> natefinch: I was reading them during lunch
<perrito666> works for me too
<katco> natefinch: https://lh3.googleusercontent.com/-zndbSfYolfU/U8xCMtJxJ3I/AAAAAAAAGNg/oiO7QEtZACU/w426-h446/BashStartupFiles1.png
<natefinch> katco: I just want one place to put everything and not need to worry about a flowchart. That's why I like Windows' model.  One place.  Put stuff there.  If you're logged in, it's applicable.
<katco> hehe i thought you'd appreciate that
<natefinch> katco: I think I've actually seen it before, looks familiar.
<natefinch> katco: but yes, thanks for reminding me of that pain :)
<katco> lol
<perrito666> what happen to juju.NewConn ?
<perrito666> this woman couldn't be more of a bitch even if she practiced http://www.lavoz.com.ar/politica/desafortunada-frase-de-cristina-al-inaugurar-los-trenes-del-sarmiento
<perrito666> sorry wrong channel
<natefinch> heh, the english translation doesn't make sense :)
<natefinch> (of the webpage)
<perrito666> natefinch: short: our president, at a christening ceremony for new train cars, says "if we don't hurry the next train will crash into us" (figuratively, speaking of the gov making new trains) on the same spot where a train crashed and killed many people a year ago tomorrow, and it was discovered that it was partly due to a corruption case where the gov did not oversee the train company that was redirecting the maintenance money
<perrito666> and the gov did nothing about it
<natefinch> doh
<perrito666> natefinch: I was trying to post the article to people that live nearby
<perrito666> in another channel
<perrito666> so well, yeah people not very happy with that comment
<natefinch> yeah.... that's pretty unfortunate
<perrito666> I have an environs.Environ and I want to get a State out of it, how do I get that now that NewConn is no longer there? anyone knows?
<perrito666> jam: jam1: is any of you here?
<ericsnow> perrito666: how do we test backup/restore (plugins) currently?
<perrito666> in CI only (and I run the CI test every time I change anything in there)
<perrito666> takes a bit of patience
<ericsnow> perrito666: I'm trying to write one last test for backup that does a backup without any fakes involved (i.e. what we must be doing for the plugins currently).
<ericsnow> perrito666: think it's doable?
<perrito666> ericsnow: what do you mean without any fakes?
<perrito666> I can hangout if you want, so we settle this before I go to the doctor
<ericsnow> perrito666: moonstone?
<perrito666> sure
<perrito666> wallyworld: please ping me when you get up :)
<perrito666> bbl ppl, going to the doc
<natefinch> ericsnow: you around?
<ericsnow> natefinch: yeah
<natefinch> ericsnow: did you see my private messagey thing?  Not sure what they call it on irc
<ericsnow> natefinch: nothing showed up
<natefinch> ericsnow: weird
<mbruzek> my lxc foo is weak.  Can someone tell me how to get more information on this error:  http://pastebin.ubuntu.com/7832276/
<katco> i'm a bit confused by the purpose of this line: "var _ ContainerFactory = (*containerFactory)(nil)"; can anyone provide insight?
<natefinch> katco: it's a compile-time check that container factor satisfies the ContainerFactory interface
<katco> natefinch: ah ok; no other purpose?
<natefinch> katco: that's it
<katco> what is the use-case for that? if it wasn't, wouldn't that cause issues elsewhere?
<natefinch> katco: it's assigning a containerFactory pointer to a variable of type ContainerFactory.  If that's not legal, it'll fail to compile
<natefinch> katco: it's handy when you're intentionally implementing an interface, but you don't actually have code that assigns one to the other in your code
<natefinch> So, like if a third package is supposed to use your implementation as that interface.... if you change just your package to accidentally not fulfill the interface anymore, you might not notice if you didn't build that other third package
<katco> ahhh ok that makes sense as a use case
<katco> natefinch: ty for the info; much appreciated :)
<natefinch> welcome. I hadn't seen it until I started working on juju, but it's kinda cool.
<katco> natefinch: it looks like that technique is actually in the effictive go docs: http://golang.org/doc/effective_go.html#blank_implements
<katco> i had never seen that either
<katco> kind of seems like something that should be in a test and not code that's run in production
<natefinch> ahh cool.  I should reread that
<natefinch> the compiler will optimize it out, because the variable on the left is the blank identifier
<katco> natefinch: but still run it? ...?
<natefinch> There's effectively nothing to run.
<katco> well, it's at compile time, so i guess it parses it, sees it's valid or not, and then optimizes it out if it is
<hackedbellini> natefinch: hi! I haven't seen thumper in this channel anymore. Do you know how I can contact him? I couldn't report that the workaround he asked me to try didn't work, and I was hoping he had some other ideas to try atm.
<natefinch> yep
<natefinch> hackedbellini: he's in New Zealand, so it's still somewhat early for him (it's 8:41am where he is). Usually he's on by now, but may have other things going on this week, I'm not sure.
<hackedbellini> natefinch: hrm, I see. No problem. Maybe you or someone else knows how to solve the problem now, as he found the possible cause of the issue
<natefinch> hackedbellini: More than willing to help.  I sent you a private message with his and my emails.  I'll do what I can to help, but unfortunately, I have to run in about 15 minutes.
<hackedbellini> remember that our juju installation went from 1.18.3 to 1.19.2 because of this bug? https://bugs.launchpad.net/juju-core/+bug/1325034
<_mup_> Bug #1325034: juju upgrade-juju on 1.18.3 upgraded my agents to 1.19.2 <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1325034>
<hackedbellini> it seems that this is the root of the problem. thumper said juju was missing a "ha migration" or something like this because that migration was only done when upgrading from 1.18.x to 1.20.x.
<hackedbellini> he asked me to change the "upgradedToVersion" in agent.conf to read "1.18.4" and try to run the agent again, so we could possibly trick juju into doing that "ha migration", but it didn't :(
<natefinch> hackedbellini: ahh... interesting
<hackedbellini> thumper: hi
<thumper> o/
<hackedbellini> thumper: the workaround didn't work :(
<thumper> hackedbellini: hmm... bummer
<thumper> hackedbellini: "ha" is "HA" High Availability
<thumper> we introduced replica sets for mongodb
<hackedbellini> we changed upgradedToVersion to read "1.18.4" instead of "1.19.3" and restarted the agent, but the same issue happened
<thumper> which has triggered a bunch of weird edge case failures that weren't apparent before
<thumper> I really don't know what to do from here
<thumper> my main suggestion is to move the services running in that environment to a new Juju
<thumper> and please don't use local
<thumper> manual would be better
<hackedbellini> thumper: hrm, I see... The only problem with this is that we will have to migrate all of our services that were running on juju to that other environment :(
<hackedbellini> thumper: do you know if there is at least a way to access the lxc containers without sudo access? I could access them with "juju ssh <machine or service>" but since juju isn't running I can't now
<wallyworld> perrito666: hi
<jcw4> PTAL https://github.com/juju/juju/pull/351
<thumper> hackedbellini: you can just use "ssh ubuntu@<lxc ip address>"
<hackedbellini> thumper: ahh, yes that works! :) Didn't work before for some reason
<thumper> hackedbellini: probably missing the "ubuntu@"
<hackedbellini> thumper: don't really remember, but probably, yes =P
<hackedbellini> thumper: if we create a new environment using "manual", do you think we will be able to add the existing lxcs to it without any problems? Or should I avoid that?
<thumper> heh... no it won't work by default...
<thumper> and you would probably have to jump through lots of hoops to make it work
<hackedbellini> I see... it's not trivial, but it's doable?
<hackedbellini> well, I have to go now. Thank you very much for your help so far! Cheers
<thumper> fwereade: https://github.com/juju/errors/pull/4
<dpb1> anyone seen this one? https://pastebin.canonical.com/113903/
<davecheney> dpb1: yes
<davecheney> all the time in testing
<davecheney> something related to how long it takes mongo to squeeze out a repl set
<dpb1> why is it launching mongo locally?
<dpb1> davecheney: ok, good to know.
<dpb1> davecheney: you have a bug I can point to?  sorry, my LP search foo is weak
<davecheney> dpb1: i can't see where the environment is being bootstrapped from that paste
<davecheney> dpb1: no useful bug
<davecheney> well, there are loads of bugs logged
<dpb1> davecheney: ok, should I add --debug?
<dpb1> or will that help
<davecheney> dpb1: just paste the entire log
<davecheney> or just tell me
<davecheney> is this the local provider ?
<davecheney> actually it doesn't matter
<dpb1> davecheney: nope, that is maas
<davecheney> mongo == slow
<davecheney> takes a long time
<davecheney> if it can't get its repl set up and running fast enough
<dpb1> davecheney: that is the whole log, just the cmdline was not echoed
<davecheney> bootstrap fails
<dpb1> davecheney: I have another example, but the paste is the same. :)
<dpb1> davecheney: ooh
<dpb1> davecheney: you are right, the paste was truncated!
<dpb1> ok, sec
<dpb1> ok, this log is long, pasting again
<dpb1> davecheney: http://paste.ubuntu.com/7832774/
<dpb1> davecheney: I'll file another bug and paste, if you want to dup it you can
<davecheney> dpb1: ok
<davecheney> this is an important issue
<davecheney> i think many people are working on it tangentially
<davecheney> given the profile of maas and juju
<davecheney> thumper: https://github.com/juju/errors/pull/5
<dpb1> davecheney: https://bugs.launchpad.net/juju-core/+bug/1346597
<_mup_> Bug #1346597: mongo timeout? cannot get replica set configuration <cloud-installer> <landscape> <juju-core:New> <https://launchpad.net/bugs/1346597>
<thumper> fwereade, davecheney: https://github.com/juju/juju/pull/347/files
<davecheney> thumper: https://github.com/juju/juju/pull/352
<davecheney> tiny fix for the test explosion
<davecheney> ^ pretty uncontroversial
<perrito666> wallyworld: nites
<perrito666> wallyworld: hey I have a question for you
<wallyworld> sure
<perrito666> I ask you because you removed juju.NewConn, feel free to redirect me to rtfm
<perrito666> I have an environs.Environ and I want to get a State out of it, how do I get that now that NewConn is no longer there? anyone knows?
<wallyworld> there's an api, let me check
<perrito666> I used to use NewConn
<wallyworld> perrito666: there's code in restore.go which does something along those lines
<wallyworld> you need to know the cacert which you can get from environ config
<perrito666> wallyworld: ah lol, I actually avoided what was done in the old restore in favor of new conn ehehe
<perrito666> ok, back to that
<wallyworld> the address of the api server can also be got from environ
#juju-dev 2014-07-22
<wallyworld> Conn was an artifact of the old days when agents used to connect directly to mongo
<wallyworld> perrito666: what's trhe use case, is this for restore?
<perrito666> wallyworld: yup, same part, new restore
<wallyworld> perrito666: do you need state or apiState?
<wallyworld> there's NewAPIState to get you an api connection
<wallyworld> that takes an environment
<perrito666> I was using conn.State
<jcw4> https://github.com/juju/names/pull/18 <-- ActionResultTag added to complement the existing ActionTag in the names package
<davecheney> menn0: tmpfs   /tmp    tmpfs   nodev,nosuid,mode=1777  0       0
<davecheney> ^ put that in /etc/fstab
<wallyworld> perrito666: conn.State was just made from a call to state.Open(), using info extracted from the environment
<wallyworld> if you really did need access to state.State rather than api.State, you could add a helper method to state/open.go
<davecheney> jcw4: done with review
<davecheney> OOPS: 139 passed, 114 FAILED, 4 PANICKED, 5 MISSED
<davecheney> --- FAIL: TestPackage (264.62s)
<davecheney> god damnit
<menn0> fwereade: https://github.com/juju/juju/pull/353
<thumper> tasdomas:   https://github.com/juju/juju/pull/354
<jcw4> davecheney: thanks!
<jcw4> davecheney: I addressed the issues you commented on; I'm stepping away for the evening but if you get a chance to respond that will be appreciated
<jcw4> Cheers!
<axw> wallyworld: the core work for #1271144 was done in 1.17.6, so it's already in 1.20
<_mup_> Bug #1271144: br0 not brought up by cloud-init script with MAAS provider <canonical-is> <cloud-installer> <landscape> <local-provider> <lxc> <maas> <regression> <juju-core:Fix Released by axwalk> <juju-core 1.20:Triaged> <juju-core (Ubuntu):Fix Released> <juju-core (Ubuntu Trusty):Confirmed> <https://launchpad.net/bugs/1271144>
<axw> just need to do the other one I think
<axw> #1337091
<_mup_> Bug #1337091: maas provider: allow users to specify network bridge interface. <canonical-is> <juju-core:Fix Committed by niedbalski> <juju-core 1.20:Triaged> <https://launchpad.net/bugs/1337091>
<wallyworld> axw: great, ok
<axw> ah, I think I added that didn't I. doh
<axw> removing it from the milestone
<davecheney> apiclient_test.go:127: c.Assert(err, gc.ErrorMatches, `unable to connect to "wss://.*/"`)
<davecheney> ... error string = "unable to connect to \"wss://127.0.0.1:42509/environment/6e4a12fc-b1c1-4fc7-8401-6cb66fce31b2/api\""
<davecheney> ... regex string = "unable to connect to \"wss://.*/\""
<davecheney> [LOG] 0:00.249 INFO juju.provider.dummy reset environment
<davecheney> oh
<davecheney> how is this a failure ?!?
<davecheney> i see
<davecheney> thumper: waigani https://github.com/juju/juju/pull/356
<axw> wallyworld: what else would you like me to do for 1.20.2?
<wallyworld> axw: i'm still working through the i/o timeout issues getting passing tests. like hitting gophers :-( perhaps take a peek at the ec2 health failures to see if there's anything we can do. might just be an ec2 issue but i'm not sure
<axw> wallyworld: cool, that's what I started doing.
<wallyworld> great :-)
<davecheney> wss://127.0.0.1:42509/
<davecheney> but not wss://127.0.0.1:42509/api
<davecheney> and not wss://127.0.0.1:42509/environment/xxxx/api
<wallyworld> axw: also, we've been told to reduce the max oplog file size from 50GB. I'm thinking 10GB might be reasonable
<axw> wallyworld: I think it could be lower, and concur with jam that we ought to do a size-to-capacity analysis
<axw> not sure how to go about that tho
<wallyworld> yeah, me either
<wallyworld> fwereade is -1 on reducing the size from the manufacturer's recommendation so 10GB is sort of a compromise
<fwereade> wallyworld, I'm -0.5 at most, it's really just "let's analyse this, not just pull a number out of our collective ass"
<wallyworld> yeah, need to figure out how to do that
<wallyworld> we can adjust the size later if needed
<wallyworld> if we make a wrong decision now
<axw> wallyworld: the problem is that changing existing installations is difficult
<axw> not impossible though
<wallyworld> we can do it as an upgrade step can't we?
<axw> wallyworld: probably. not sure how current/correct this is, but: http://www.kchodorow.com/blog/2011/02/22/resizing-your-oplog/
<axw> http://docs.mongodb.org/v2.2/tutorial/change-oplog-size/
<wallyworld> axw: i was looking at a more recent version, but yeah, it does involve some steps http://docs.mongodb.org/manual/tutorial/change-oplog-size/
<fwereade> wallyworld, can you remember what the launchpad project with the functional tests is called?
<fwereade> wallyworld, IIRC sinzui sent round a mail saying where it was and a bit about how to write new ones, but I can't find it
<wallyworld> hmmm. i can't recall, but i can look
<axw> I thought it was under ~juju-qa, but I don't see the branch name I'm looking for
<axw> ci-cd2 something something
<axw> ci-cd-scripts2  -- that might just be the support scripts tho
<fwereade> axw, yeah, all I could remember was there was a "2" in the name
<wallyworld> lp:juju-ci-tools
<wallyworld> i think
<axw> yeah, that matches what I have on disk for ci-cd-scripts2
<axw> must've been renamed
<fwereade> axw, am I being dumb? I don't see any actual *tests* in there
<axw> fwereade: I think they're just mixed together with support code. I'll follow through from jenkins and find one
<axw> fwereade: the aws-deploy-trusty-amd64 job just runs this: http://paste.ubuntu.com/7833744/
<axw> fwereade: backup/restore runs test_recovery.py
<axw> with some args
<fwereade> axw, yeah, I just found that one, but I feel it's likely that that's not the only one...
<axw> fwereade: it's not. are you looking for something in particular, or you just want to know what our set of functional tests is?
<fwereade> axw, yeah, I just want to tell ericsnow to write big new tests as FTs directly, and to do it in <insert-place-here> instead of juju-core
<axw> I see. probably best off getting sinzui's advice then
<axw> or maybe abentley
<davecheney> just in case anyone was confused, i really hate JujuConnSuite
<fwereade> davecheney, I too have a deep and abiding loathing for that thing
<davecheney> fwereade: i want to cleave it in two
<davecheney> JujuClientConnSuite and JujuServerConnSuite
<davecheney> then at least for the moment, recombine them into some horrific Lovecraftian monster as JujuConnSuite
<davecheney> menn0:  % df -h /tmp
<davecheney> Filesystem      Size  Used Avail Use% Mounted on
<davecheney> tmpfs           3.9G 1016M  2.9G  26% /tmp
<davecheney> this is what a day of running juju tests looks like
<axw> I'd rather we just didn't use full end-to-end testing for our unit tests
<axw> why is shit getting left behind in tmp now? I don't think it used to?
<davecheney> axw: always leaks like a mofo
<axw> wallyworld: I can't reproduce the ec2 failures :/
<davecheney> especially if your fixtures panic
<davecheney> axw: note, i wasn't complaining
<davecheney> i'm used to it
<wallyworld> axw: yeah, i suspect it's on the ec2 side
<axw> davecheney: hmmok. I'm complaining, because I actually ran out of tmpfs the other day ;)
<axw> wallyworld: according to the AWS docs, it just means the server is overloaded. I guess it's more likely to happen in a US region during US-friendly hours
<axw> AWS status page has nothing much to say
<wallyworld> yep. can you comment on the bug, so that curtis knows we've looked at it?
<axw> sure
<wallyworld> axw: so, state of session copying branch. i have got a passing set of unit tests (with occasional replicaset failure). but bootstrapping a real env fails - 2014-07-22 03:18:28 ERROR juju.cmd supercommand.go:323 failed to initialize state: cannot create state servers document: not authorized for insert on juju.txns
<wallyworld> it will be something simple, so i should be able to propose something soon
<axw> hmm ok
<wallyworld> not sure just right now exactly what the issue is though
<axw> is this related to the thin michael emailed about? and that jam suggested we might be able to drop? (the mongo password dance)
<axw> thing*
<wallyworld> could be, but I think that has been changed in this branch, i need to go look into it
<axw> ok
<wallyworld> i'll get you to review later, but here's the current diff https://github.com/wallyworld/juju/compare/copy-session
<davecheney> fwereade: can you explain the rationale for this signature and comment?
<davecheney> func APIInfo(env Environ) (*api.Info, error) {
<davecheney> bugger
<davecheney>  // APIInfo returns an api.Info for the environment. The result is populated
<davecheney>  // with addresses and CA certificate, but no tag or password.
<waigani> thumper: https://github.com/juju/juju/pull/346 question in user_test.go:106
<mattyw> thumper, http://paste.ubuntu.com/7833982/
<axw> wallyworld: #1344857   <- this was fixed for 1.20, but not in the 1.18 branch
<_mup_> Bug #1344857: Crash deploying landscape-dense on MAAS <juju-core:Triaged> <juju-core 1.20:Triaged> <juju-deployer:New> <https://launchpad.net/bugs/1344857>
<axw> fixed some time during 1.19 anyway
<axw> added a comment
<axw> about to propose a fix for the machine-agent suiciding bug
<wallyworld> axw: so, using session.Copy() results in all sorts of permission errors, whereas session.Clone() doesn't
<axw> Clone doesn't create a new socket
<wallyworld> the difference at the mgo level is that Copy() makes a Refresh() call on the session which releases the socket
<tasdomas> thumper, https://github.com/juju/juju/pull/358/files
<wallyworld> yes, so now the question is - why does acquiring a new socket cause mongo to reject collection queries etc as unauthorised
<axw> wallyworld: I can only guess that it has to do with Login is being called after the session is copied
<axw> so other sessions don't inherit the credentials
<wallyworld> could be
<wallyworld> yeah, looks like i need to rearrange some code
<davecheney> fwereade: https://github.com/juju/juju/pull/359
<davecheney> a semi-serious attempt to reduce the horror
<davecheney> for comment, not review
<wallyworld> hmmm, maybe not. Login() is called first
<axw> wallyworld: getting anywhere?
<wallyworld> axw: sorta. state.Open() is first called with no credentials, and that fucks up everything from that point onwards
<wallyworld> because there are no credentials, db.Login() is never called
<wallyworld> so credentials are never set on the session
<axw> I see
<wallyworld> but it tries to use the session via session.Copy()
<wallyworld> using the session directly (which also has no creds) works
<axw> huh
<wallyworld> so i think we must be calling Open() twice
<wallyworld> once without creds and once with
<wallyworld> well, that's my theory
<wallyworld> i think it's all tied up in how we initially create the admin user
<wallyworld> ah, it's in agent/bootstrap
<wallyworld> we create an initial state as part of the bootstrap process, but without user or password
 * wallyworld bbiab, raining, gotta get kid from train station
<axw> right, cos it's the first connection to mongo and there are no users yet
<wallyworld> yeah
<wallyworld> so have to special case that somehow
<TheMue> morning
<perrito666> good morning everyone, please remember dimitern and I will be your on call reviewers today :)
<mgz> mornin'
<TheMue> perrito666: morning
<TheMue> perrito666: dimitern isn't available, diving holidays
<perrito666> TheMue: then its only me :p
<TheMue> perrito666: but I can support you, currently only fighting a bit with my machine after upgrading
<axw> wallyworld: I've got some things to do after dinner, so may be late for / miss the standup
<wallyworld> sure, np
<wallyworld> thanks for doc, haven't read it yet
<wallyworld> i think i'm going to revert michael's changes to txn runner so we can control when session is closed
<axw> mk
<wallyworld> i should say Copy()ied
<wallyworld> cause now, it copies inside txn runner even when it's not appropriate to do so
<wallyworld> the caller (state) should be in control of when a session is copied
<wallyworld> as it knows if the credentials have been set
<wallyworld> it's a bit messy
<jam> TheMue: i realized I just went through our standup time, I had an interview I was conducting. Want to touch base in about 5 min?
<TheMue> jam: ah, hey, just back from lunch. after you didn't come I thought that watching the games had been too long. :D
<natefinch> morning all
<perrito666> natefinch: morning
<TheMue> natefinch: morning
 * perrito666 uses a rather inefficient method to find out code to be reviewed
<jam1> perrito666: looking through the entrails and chicken bones for URL fragments?
<TheMue> perrito666: which one?
<perrito666> TheMue: the one jam1 just mentioned
<perrito666> TheMue: I open all the repos of gh/juju/ and then click on the ones that actually have pull requests
<TheMue> perrito666: yeah, Iâm looking at the PRs and use also the number of comments as a hint if nobody has looked at a PR so far
 * perrito666 senses a script in the near future
<perrito666> btw, what is dave cheney's nickname on irc?
<TheMue> davecheney
<TheMue> ;)
 * TheMue is happy enough that his local provider is working again. :) somehow yesterday it didn't want to start the API server :(
<perrito666> can anyone shed some light here? https://github.com/juju/utils/pull/7 I think axw and I are right and I am inclined to merge this
<TheMue> perrito666: will take a look
<perrito666> tx
<TheMue> perrito666: hmmm, I do understand dave, it hurts a bit to see those two Closes so near to each other. the defer relies on the fact that Close may only return an error but doesn't hurt more (e.g. with a panic in case of an already closed file)
<TheMue> perrito666: just looking for a way to check if the file is not yet closed, so defer would use a func() { if f.IsOpen() { … } }
<TheMue> perrito666: sadly it doesn't exist
<perrito666> I guess that could be split into a couple of functions but I believe it is a bit overkill
<TheMue> perrito666: so yeah, we have to act this way, otherwise ReplaceFile would have to be changed to use an os.File instead of prep and path
<natefinch> TheMue: I think it's a lot clearer to use two closes rather than defer and close.  Defer and close looks like a bug.  two closes looks like we're choosing not to use defer.  With a comment as to why we have to close before replacing, it'll be totally clear and ok.
<TheMue> natefinch: yep, the comment would help to not struggle about it later
<natefinch> TheMue: yeah. I can just totally see someone coming by and deleting the f.Close because it's extraneous due to the defer  (or convert both closes back to a defer in the case of the two close form)
<TheMue> natefinch: hehe, death by refactoring
<natefinch> that's half the reason I comment stuff... so the guy coming into the code 6 months from now won't go "wtf is this?  man, what an idiot, I'll just fix it...." and then break stuff (P.S. that guy in 6 months is often oneself ;)
<mgz> heh, that's a funny pr
<perrito666> mgz: it has an amazing review/changes ratio
<mgz> nate's solution seems the best, f.Close twice after f.Write, on the error and non error branches
<mgz> or put Write and the error check on separate lines so you can Close between them (and comment, of course)
<hackedbellini> natefinch: after the workaround thumper asked us to try, I just noticed this in the log: 2014-07-22 13:12:37 ERROR juju.worker runner.go:218 exited "state": failed to add "machine-0" to admin database: cannot set admin password: system.users entry must not have both 'roles' and 'readOnly' fields
<hackedbellini> after that, the db service stopped. I'm wondering if fixing this could maybe fix the HA migration
<natefinch> hackedbellini: hmm weird
<hackedbellini> do you or anyone here knows how to workaround that?
<TheMue> jam1: ping
<jam1> TheMue: pong
<TheMue> jam1: the LXC card tells that an lxc instance has no IPv6 address. who's the source of this info?
<natefinch> hackedbellini: looking
<jam1> TheMue: when I've tried the local provider, and then run "sudo lxc-ls --fancy" I see it list an IPv4 address but "-" for ipv6, has this changed?
<TheMue> jam1: when Iâm starting LXC manually as well as via juju (using the local provider) it has an IPv6 address
<TheMue> jam1: hmm, interesting, using lxc-ls it doesnât show the IPv6, yes
<natefinch> hackedbellini: it seems like the problem is that we're trying to modify the user, but that's not really how you're supposed to change passwords on users anymore
<TheMue> jam1: but when I visit an instance with juju ssh 1 and do an ifconfig -a I do see it, bound to eth0 with the same address lxc-ls show
<TheMue> shows
<hackedbellini> natefinch: that error was new after setting "upgradedToVersion" = 1.18.4. Do you think this error is a side effect of the agent trying to "fix" the ha migration?
<natefinch> niemeyer: ^^ It looks like juju is using mgo.Database.UpsertUser to change passwords, but mongo really wants us to use changeUserPassword: http://docs.mongodb.org/v2.4/tutorial/change-user-password/
<TheMue> jam1: ah, the IPv4 can be pinged, the IPv6 not, interesting
<natefinch> hackedbellini: I think you're hitting a weird upgrade path where you're half upgraded, because you're really on 1.19 and not 1.18, so some of the upgrade code is failing because it expects the user not to exist
<hackedbellini> natefinch: hrm, so this is actually good. It's trying to migrate from 1.18 to 1.20, that's what we wanted. Any way to "skip" the already-migrated code?
<jam1> TheMue: so I think the ubuntu that comes up in the LXC is auto-assigning something, but it isn't being made routable by the LXC outer machinery.
<TheMue> jam1: looks like, the standard address is link-local
<katco> good morning all
<TheMue> katco: heya
<mgz> hey katco
<katco> TheMue: mgz: hi :)
<niemeyer> natefinch: What's the actual error, and what is the version of mgo and of MongoDB
<niemeyer> natefinch: FWIW, db.changeUserPassword is just a wrapper, which just updates the user using one method or the other depending on which version of MongoDB you're using
<niemeyer> natefinch: In theory, the latest version of mgo does exactly the same..
<niemeyer> natefinch: Unless there's a bug, which I'd be happy to know about
<natefinch> niemeyer: we're getting "system.users entry must not have both 'roles' and 'readOnly' fields"
<niemeyer> natefinch: What is the version of mgo and MongoDB you are using?
<niemeyer> natefinch: This sounds like a really old behavior.. even an older mgo should be able to cope with that, unless you are actually trying to modify the database yourself, in which case you are of course on your own
<natefinch> hackedbellini: I presume you have mongodb 2.4.6?
<natefinch> niemeyer: we're just using UpsertUser: https://github.com/juju/juju/blob/1.20/mongo/admin.go#L114
<niemeyer> natefinch: Okay, so that should be working fine
<natefinch> niemeyer: mgo is labix.org/v2/mgo   20140331185009-fhnh3xzfdpicup0j
<niemeyer> natefinch: Testing..
<natefinch> niemeyer: the question is what would happen if you ran that code twice?  I have a feeling that's what's happening.  hackedbellini's state server is in a weird state - he got upgraded to 1.19 from 1.18, and now we're trying to get him to 1.20... my guess is that the UpsertUser code got called from 1.18 -> 1.19 and now is getting called again from 1.19 -> 1.20
<niemeyer> natefinch: Ah, hmm..
<niemeyer> natefinch: If you try to upgrade a user from a password-less state to a password-rich one, yeah,  you might have issues
<niemeyer> natefinch: Erm, sorry
<niemeyer> natefinch: If you try to upgrade a user from a role-less state to a role-rich one, yeah,  you might have issues
<niemeyer> natefinch: There's no "readOnly" field in there
<niemeyer> natefinch: So I assume that field is in the database already, from an ancient run
<niemeyer> natefinch: So the error is correctly saying you cannot do that
<natefinch> niemeyer: yeah, that's what I was thinking.  I wonder if we should just override readonly
<niemeyer> natefinch: This was unfortunately quite fiddled with during MongoDB 2.2 => 2.4 => 2.6
<niemeyer> natefinch: I did my best to sanitize the upgrade path for driver users, but the server did change significantly and certain cases are just not doable
<niemeyer> natefinch: You can try dropping readOnly, yes, but I suggest testing it both in 2.4 and 2.6
<niemeyer> natefinch: The manual dropping code will not work on 2.6, as user changes are now done via commands rather than directly fiddling with the db
<natefinch> niemeyer: yeah, we're working on 2.6  compatibility.... but so much changed, it's quite a process.
<niemeyer> natefinch: So probably "drop on 2.4" + "check for that one error on 2.6 and ignore"
<niemeyer> natefinch: Yeah, user management, specifically, did change both on 2.4 and on 2.6
<niemeyer> natefinch: I'm hopeful the fiddling is done now.
<perrito666> wwitzel3: stdup?
<natefinch> niemeyer: thanks for the help.  I'll make that fix, in case anyone else hits this (this is a pretty bizarre case, but shows it's not impossible)
<niemeyer> natefinch: No problem
<natefinch> perrito666, wwitzel3: brt, one minute
<hackedbellini> natefinch: sorry I was helping a workmate here. The mongodb version is 2.4 here.
<hackedbellini> I saw that you and niemeyer were discussing some things regarding my problem. Is there anything you want me to try? My installation is totally broken, so I have nothing to lose (well, I cannot lose in any case the data on the lxc machines, since they are on production, but I don't think that will happen)
<natefinch> hackedbellini: we could have you twiddle the DB manually, but I think I'd rather get the fix into production and have you retry the update after the fix
<hackedbellini> natefinch: hrm, perfect! When do you think that update will be available? I will probably have to download the tools by hand, right? Since I can't do an "upgrade-juju"
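niemeyer's suggestion above ("drop on 2.4" + "check for that one error on 2.6 and ignore") boils down to a server-version switch. A minimal sketch of just that decision, with a hypothetical helper name; the actual mgo calls (an `$unset` on `system.users` for 2.4, or catching the UpsertUser error on 2.6) are deliberately omitted:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// canDropReadOnly is a hypothetical helper sketching the strategy from the
// discussion above: a stale "readOnly" field left in system.users by an
// ancient run makes UpsertUser fail. On MongoDB 2.4 the field can be $unset
// directly in the collection; on 2.6+ user changes go through server
// commands, so the direct fix is impossible and the specific error has to
// be detected and ignored instead. Only the version check is shown here.
func canDropReadOnly(serverVersion string) (bool, error) {
	parts := strings.SplitN(serverVersion, ".", 3)
	if len(parts) < 2 {
		return false, fmt.Errorf("cannot parse version %q", serverVersion)
	}
	major, err := strconv.Atoi(parts[0])
	if err != nil {
		return false, err
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return false, err
	}
	// 2.6 and later: no direct writes to system.users are allowed.
	if major > 2 || (major == 2 && minor >= 6) {
		return false, nil
	}
	return true, nil
}

func main() {
	for _, v := range []string{"2.4.6", "2.6.3"} {
		drop, err := canDropReadOnly(v)
		fmt.Println(v, drop, err)
	}
}
```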
<jcw4> what is the normal approach to testing internal helper functions?
<jcw4> should we export the internal functions so that the tests can see and test them directly, or do we rely on the helper functions being implicitly tested by the public functions being tested?
<jcw4> perrito666, mgz, natefinch, et. al. ^^^
<perrito666> jcw4: in that case I usually add a test in the package
<jcw4> perrito666: i.e. in the actual package rather than in package_test
<perrito666> yup
<perrito666> jcw4: but dont take my word as law I am far from elder here
<jcw4> :)
<jcw4> I've heard strong opinions against that, but possibly there might be exceptions for internal helper methods
<mgz> jcw4: I pretty much always prefer the export_test.go method
<perrito666> mgz: @more
<jcw4> @more?
<meetingology> jcw4: Error: "more?" is not a valid command.
<mgz> lol
<jcw4> ah,
<jcw4> lol
<perrito666> mgz: meh, you could very well support basic irc bot commands
<mgz> for any non-trivial internal function, having dedicated unit tests makes sense, rather than just testing at public interfaces (which some prefer)
<mgz> so, just have a SomeInternalFunction = someInternalFunction in export_test.go in that package
<mgz> and test at that level
<jcw4> mgz +1
<jcw4> perrito666, mgz I'll add some tests and then ask for a review again
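The export_test.go trick mgz describes can be sketched in one file. In a real package the exported alias lives in an `export_test.go` file, which the Go tool only compiles into test binaries, so the external `foo_test` package can reach the unexported helper without it leaking into production builds. All names below are hypothetical:

```go
package main

import "fmt"

// someInternalFunction stands in for an unexported helper in the package
// under test (a hypothetical name, not juju code).
func someInternalFunction(n int) int {
	return n * 2
}

// In a real package this alias would live in export_test.go:
//
//     package mypackage
//
//     var SomeInternalFunction = someInternalFunction
//
// Files ending in _test.go are only built for tests, so the exported alias
// never appears in the shipped package API.
var SomeInternalFunction = someInternalFunction

func main() {
	// The external test package would call the exported alias.
	fmt.Println(SomeInternalFunction(21))
}
```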
<perrito666> btw, there are a bunch of prs from 26 days ago about cloud sigma that no one commented on, is there a reason for that?
<mgz> perrito666: congratulations, you get to review them I think
<jcw4> hehe
<perrito666> mgz: I am on it, just wondering if there was a deeper reason for them to be ignored for almost a month
<mgz> perrito666: not that I'm aware of, apart from some obvious reviewers being enveloped with other things
<katco> mgz: hey, i'm unsure of where to put this test for including the simple-streams URL in the container start. my options are juju/juju/container/kvm/kvm_test.go and live_test.go. live_test.go looks like it has some container-ish tests in it, but the change will be in kvm.go... thoughts?
<mgz> katco: I'd try to put in kvm_test.go even if it means adding a new suite there
<katco> mgz: k that's what i was thinking initially; thank you :)
<perrito666> I just burned frozen burgers... that must be some sort of special skill
<mgz> perrito666: I thought you were the chef in the house...
<perrito666> mgz: I am, I just got distracted by a code review
<jcw4> riveting code review
<mgz> scorching code review...
<perrito666> mgz: well this burned food will definitely affect the outcome of the review
<katco> haha
<katco> "this is the worst object instantiation i've ever seen. the formatting is crap, and i'm hungry. dnflgtm"
<perrito666> lol
<rogpeppe1> jam, jam1: ping
<rogpeppe1> cmars: ping
<katco> i need a second opinion on a design decision; anyone available to PM for a few minutes?
<mgz> katco: bug perrito666, he's in a good mood and not otherwise occupied
<katco> rofl
<katco> perrito666: if that's true, got a sec? or are you munching on some burnt lunch? ;)
<mgz> katco: more seriously, you can bug me if you like :)
<katco> mgz: that would probably be best since we're on the same team :)
<natefinch> jcw4: for what it's worth, I think export_test.go is an anti-pattern.  I think it is doubly so when you're doing it so you can test an implementation detail like an unexported method . Just make a new testing file that is in the same package as the test, and write your unit test there.  There's a reason both test forms exist.
<perrito666> mgz: mm I see you have managerial talent
<perrito666> katco: I am in a good mood, excepting for that incident with the chickenburger, so fire away
<katco> perrito666: mgz helped me out, but thank you!
<mgz> perrito666: I can be useful, as well as wind people up about their burnt burgers :)
<jcw4> natefinch: thanks
<natefinch> jcw4: as others said, not everyone agrees with me.... but I don't think anyone will stop you from writing more tests :)
<jcw4> haha
<perrito666> I have had in-package tests merged so I guess no one has very strong objections
<jcw4> I'll clear it with mgz, and decide between a separate .go file and the export_test.go file, but either way I have to make perrito666 happy
<jcw4> ;)
<perrito666> jcw4: well, you will not make me happy per se (not after my burgers charred) but you might get a lgtm from me if you add tests :p
<jcw4> lol
<jcw4> from my perspective that's the same thing
<jcw4> ;)
<perrito666> I need someone with managerial skills for a quick question
<natefinch> perrito666: you can't expense the heroin, I told you that
<perrito666> natefinch: maan, whyyy?
<natefinch> perrito666: whazzup?
<perrito666> natefinch: I pmd you
<katco> perrito666: jcw4: i am thinking about the same thing. i think the idiomatic thing to do is to keep the tests in the same package. they will only get included in the test binary, not the production binary.
<katco> perrito666: jcw4: and it avoids the export dance for tests
<jcw4> katco: okay, I haven't heard from mgz and it sounds like a bit of a quorum so I'll go with in-package tests
<katco> jcw4: mind you i am brand new, so my opinion probably doesn't count for much :)
<jcw4> katco: haha - well change is easy.  But, when I see fairly divergent opinions regularly on the project I think it just means a clear standard has yet to emerge
<katco> well then i'll throw my hat into the "keep in the package" ring. if it's good enough for the standard lib, it's good enough for me :)
<jcw4> :)
<ericsnow> FWIW, last I saw, niemeyer favored the *_test package approach, which somewhat explains our use of it :)
<ericsnow> see https://groups.google.com/d/msg/Golang-nuts/dkk0X1tIs6k/nO3CKFqbIxYJ
<niemeyer> +1
<perrito666> sounds to me like: Public API in package foo_test and Private in _test.go file of foo package but I did not read the whole link from ericsnow :)
<perrito666> testing things as they are intended to be used
<niemeyer> Yep
<niemeyer> In many cases, it's also easy to export the things that are internal so that they can be touched by tests, using the export_test.go trick in use in many packages
<niemeyer> Also like that extra mile.. encourages people to be more honest about testing what is meant to be tested, rather than implementation details
<jcw4> hmm; the quorum is crumbling.. since I report to mgz and niemeyer is an elder in the community I'm going to flip flop and go with export_test.go :)
<niemeyer> jcw4: It's hard to give reasonable advice without knowing what the case is, though.. what are you testing?
<jcw4> niemeyer: helper functions in the actions and actionresults watchers
<niemeyer> jcw4: URL to the code?
<jcw4> niemeyer: https://github.com/juju/juju/pull/351 ... specifically :
<jcw4> https://github.com/juju/juju/pull/351/files#diff-5a822a48b96dd79a90eda5ba852495c4R1673
<jcw4> niemeyer: I'm actually about to refactor them down to one watcher with a parameter for which collection to watch...
<niemeyer> jcw4: I'd be on the fence on that one.. it's all internal, and it's a relevant abstraction that deserves testing
<niemeyer> jcw4: It's previously existent code, so I'd probably just go with what is already in place unless I had an actual reason to change it
<niemeyer> jcw4: (other than purism)
<jcw4> niemeyer: meaning the refactor to one watcher?
<jcw4> or the helper functions
<niemeyer> jcw4: No, meaning the subject of our conversation
<jcw4> :D
<jcw4> okay
<jcw4> niemeyer: I'll see what in-place testing there is for helper functions and if there is none then I'll export the couple helper functions we want to test...
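The refactor jcw4 mentions (collapsing the actions and actionresults watchers into one) amounts to parameterizing the watcher by collection name. A thin, hypothetical sketch of that shape; juju's real state watchers carry far more plumbing:

```go
package main

import "fmt"

// collectionWatcher sketches one watcher type shared by both the "actions"
// and "actionresults" collections, instead of two near-identical types.
// All names here are hypothetical stand-ins for juju's state watcher code.
type collectionWatcher struct {
	collection string
	changes    chan []string
}

// newCollectionWatcher takes the collection to watch as a parameter, which
// is the whole point of the refactor discussed above.
func newCollectionWatcher(collection string) *collectionWatcher {
	return &collectionWatcher{
		collection: collection,
		changes:    make(chan []string, 1),
	}
}

// Changes exposes the usual watcher channel interface.
func (w *collectionWatcher) Changes() <-chan []string {
	return w.changes
}

func main() {
	for _, c := range []string{"actions", "actionresults"} {
		w := newCollectionWatcher(c)
		fmt.Println(w.collection, w.Changes() != nil)
	}
}
```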
<katco> ericsnow: niemeyer: ty for the context :)
<natefinch> thumper: got a minute to talk about jujud logging?
<thumper> natefinch: sure, a few
<natefinch> thumper: just as easy on irc..... basically, there's the log rotation bug that has come up again.  I wanted to use a Go package for rolling Jujud logs so it'll work when we support windows.... but the relationship between the commands and logging and stuff is kinda twisty
<thumper> natefinch: oh... kay..
<thumper> natefinch: what needs to be done is this:
<thumper> 1) at the startup of jujud, configure the logging to use the special writers you have
<thumper> 2) change the upstart scripts so it writes out to a file, not to stdout/stderr
<thumper> or at least, logging goes to the file
<thumper> I don't think we write anything else out
<thumper> yeah…..
<thumper> just that...
<thumper> changing the upstart scripts is the biggest thing
<natefinch> thumper: so, we have a few different commands, bootstrap, machine agent, unit agent, juju-run
<natefinch> thumper: to get the right directory in which to configure loggo's output on startup, we need the agentConf, which everything except juju-run has access to right now.
<thumper> don't worry about juju-run
<thumper> it sends stdout, stderr back to the client
<natefinch> ok, that was sort of my question
<thumper> and shouldn't log
<natefinch> right, that make sense
<natefinch> ok... so I think the only thing we have to do is change cloud-init to modify the upstart script, but I guess we'll need an upgrade step to modify the upstart script of existing installations
<natefinch> so... three, three things we have to do: configure loggo for the commands, change cloud-init, and update the existing upstart scripts of running machines
<thumper> right
<natefinch> thumper: and how do I set the output on loggo?
<natefinch> thumper: there's the register writer thing, but it takes a loggo.Writer, and to make one of those from an io.Writer wants a formatter and......   I just want to say "here, loggo, write to this io.Writer"
<thumper> natefinch: change the writers
 * thumper looks
<natefinch> https://godoc.org/github.com/juju/loggo
<thumper> NewSimpleWriter
<thumper> with a default formatter
<natefinch> do I replace the "Default" logger or no?
<thumper> yes
<thumper> otherwise it will continue to write to stderr
<natefinch> thumper: can I overwrite, or do I have to delete and then register?
<natefinch> thumper: the package example seems to say delete and register
<thumper> loggo.ReplaceDefaultWriter
<thumper> probably an old example
<natefinch> awesome
<natefinch> thanks.
<thumper> np
<natefinch> thoughts on max log size?
<thumper> 5 meg?
 * thumper shrugs
<natefinch> wow really?  I was going to say like 200
<natefinch> (meg)
<natefinch> I dunno
<thumper> me neither
<thumper> ask an admin type person
<natefinch> I'll PR it and let people duke it out
<natefinch> Thanks again.  Gotta run
<fwereade> perrito666, ping
<katco> gofmt sure does make for messy diffs when adding longer variable names to structs
<perrito666> fwereade: pong
<fwereade> perrito666, the semi-readable PRs for cloudsigma are here: https://github.com/Altoros/juju-cloudsigma/pulls
<perrito666> fwereade: so the ones that are in my email are not the semi readable ones?
<fwereade> perrito666, those ones I think are kinda unbearably yucky to read
<fwereade> perrito666, I got upset with them, jam pointed me at the others
<perrito666> fwereade: you have a lower tolerance level than I
<perrito666> they are ugly, not unbearable
<perrito666> fwereade: we should perhaps close those ones? for what I see in an answer I got in priv I am not the first to ask these questions
<perrito666> s/those/the ones I pointed
<fwereade> perrito666, yeah, sgtm, please close them with a note pointing at the others
<fwereade> perrito666, I got all upset with the 18 changed files, 20 changed files, 22 changed files progression
<perrito666> fwereade: mm I will email the proposer guy
<perrito666> I am not sure which is which and I fear I might close the wrong ones
<perrito666> the link you passed me contains the ones I mentioned plus other
<fwereade> perrito666, that sounds reasonable -- and, sorry I'm not completely up to date with it, jam has been liaising with them primarily
<davecheney> mattyw: http://i.imgur.com/IODqYZN.jpg
<cmars> waigani, http://sourcegraph.com/blog/ipfs-the-permanent-web-by-juan-benet-talk
<mattyw> davecheney, http://irclogs.ubuntu.com/2014/07/22/%23juju-dev.html
<davecheney> http://i.imgur.com/x6DVenz.jpg
<bodie_> anyone have thoughts on how to format a descriptor of optional flags to a command?
<bodie_> it's simply a map index which can also have lists
<bodie_> action-get outfile.format for example
<bodie_> in that example, outfile is the key to an inner map
<bodie_> format is a key in that inner map
<bodie_> picture JSON
<bodie_> however, there could also be a list index
<bodie_> action-get outfile.files[1]
<bodie_> right now, the help displays something like: [<key>[.<key>.<key>....]]
<bodie_> which doesn't seem optimal
<bodie_> but adding the list index option is much much uglier
<bodie_> jcw4 any thoughts?
<jcw4> bodie_: wouldn't the pattern extend to outfile.files.1 ?
<jcw4> bodie_: you'd have to work a bit to make sure you parsed integer indexes at the right time instead of strings
<bodie_> I figured traditional list index would be easier to parse
<bodie_> right now it indexes by the key parsed from the list, so '1' would have to be a string key to another map
<bodie_> or uh
<bodie_> files would be a map
<bodie_> '1' would be one of the keys in it
<bodie_> whereas if we have [1] we clearly know it's an int, plus it's more familiar to the user
<bodie_> it almost seems like this isn't something that is really the job of the action-get command
<bodie_> but lists are clearly supported by JSON-Schema as JSON elements
<bodie_> perhaps I'll simply PR it as WIP and query for responses on this topic
<jcw4> bodie_: agree, this is potentially common utility code.. I don't know if it's implemented anywhere though
<bodie_> hmm, good thought
<bodie_> it's not hard to recurse a map on a list of keys, but adding list indexing could be a -bit- more tedious, though I imagine it would just be an interface{} list instead of a string list, and we'd either have int keys or string keys in the list
<bodie_> this would be vastly simpler in a functional language with .apply() ... :P
<bodie_> oh well, this is fine too
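One way to cut through bodie_'s parsing question: a selector like `outfile.files[1]` splits into dotted map keys plus bracketed list indices, yielding a mixed list of string keys and int indices, exactly the `interface{}` list he describes. A minimal sketch under that assumption; the grammar and function name are hypothetical, not what landed in action-get:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseKeyPath turns "outfile.files[1]" into ["outfile", "files", 1]:
// dotted segments become string map keys, [n] suffixes become int list
// indices. Hypothetical helper, sketching the discussion above.
func parseKeyPath(path string) ([]interface{}, error) {
	var out []interface{}
	for _, part := range strings.Split(path, ".") {
		for {
			lb := strings.IndexByte(part, '[')
			if lb == -1 {
				if part != "" {
					out = append(out, part)
				}
				break
			}
			if lb > 0 {
				out = append(out, part[:lb]) // map key before the index
			}
			rb := strings.IndexByte(part, ']')
			if rb < lb {
				return nil, fmt.Errorf("unbalanced brackets in %q", path)
			}
			n, err := strconv.Atoi(part[lb+1 : rb])
			if err != nil {
				return nil, fmt.Errorf("bad list index in %q: %v", path, err)
			}
			out = append(out, n) // int marks a list index, not a map key
			part = part[rb+1:]
		}
	}
	return out, nil
}

func main() {
	keys, err := parseKeyPath("outfile.files[1]")
	fmt.Println(keys, err)
}
```

Keeping indices as ints in the result is what distinguishes `files[1]` (list element) from `files.1` (a map whose key happens to be the string "1"), which was the ambiguity bodie_ and jcw4 were circling.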
<perrito666> fwereade: ping
#juju-dev 2014-07-23
<davecheney> fwereade: thumper https://github.com/juju/juju/pull/362
<perrito666> fwereade: ping 2
<davecheney> thumper: fwereade http://en.wikipedia.org/wiki/.local
<davecheney> http://en.wikipedia.org/wiki/.local
<davecheney> Networking device hostnames ending with .local are often employed in private networks, where they are resolved either via the multicast domain name service (mDNS) and/or local Domain Name System (DNS) servers.
<wallyworld> axw: if you have a moment, could you give https://github.com/juju/juju/pull/361 the once over as well? i've tested live on hp cloud with juju-gui deployed to ensure gui is updated as services are added/removed
<axw> wallyworld: sure, looking
<wallyworld> thanks
<menn0> fwereade: https://github.com/juju/juju/pull/364
<fwereade> axw, commented quickly on your doc, thoughts?
<axw> fwereade: I was trying to avoid timers, because it seems unnecessary. we already have timer based pinging on the API, so we should be able to use connection liveness to monitor presence
<axw> fwereade: since the state server is a machine agent, it will have an entry in the presence map too
<axw> I just meant that we'll need to propagate that into state somehow
<fwereade> axw, ok, cool -- I'm mainly just saying that I don't think we need to add to the peergrouper
<axw> because EnsureAvailability needs something to decide whether a state server is healthy
<axw> fwereade: something needs to monitor mongod's health though right? doesn't need to be the peergrouper, but some worker?
<fwereade> axw, the api servers will be keeping track of their own connected clients, and those forwarded from other agents, and we may as well pipe that same info into state in the same sort of way we do for environs
<fwereade> axw, do we specifically need to monitor mongo's health if we can monitor state server health?
<axw> fwereade: no, it was just something rogpeppe and I wanted to do originally, because mongod and jujud could independently die
<axw> could be deferred
<fwereade> axw, that's true, but I worry it's an overcomplication in this specific context
<axw> let's strike that out for now, it's strictly an improvement over what we've got now anyway
<fwereade> axw, fwiw, if we were doing it, I feel we could probably keep it local to the individual state servers -- if they detect problems with their local mongo, they can mark themselves messed-up
<axw> fwereade: I suppose they could just sever their connection to the other state servers
<cmars> tasdomas, thumper: http://paste.ubuntu.com/7839596/
<axw> fwereade: I feel like I must've missed something, because the existing implementation is quite complicated. were there previously other use cases?
<axw> wallyworld: phew. lgtm
<wallyworld> axw: yeah, sorry :-)
<axw> :)
<wallyworld> axw: i just found another collection in the watcher that was missed
<axw> oops
<wallyworld> so i'm retesting
<wallyworld> axw: also, we can change oplog max size to 5GB
<axw> okey dokey
<wallyworld> but need to test a real deployment with a deploy -n X
<axw> sure, I'll do some testing
<wallyworld> i'm trying to use aws but keep running out of instances
<axw> oh
<axw> you're on it?
<wallyworld> yeah, but feel free to test on another platform
<wallyworld> but i think we'll be ok
<axw> I'll try with Azure, but I can't see it being a problem
<wallyworld> me either
<axw> wallyworld: did you just change maxOplogSizeMB in mongo/prealloc.go?
<wallyworld> axw: actually, i'll land the io timeout one and then will need to backport to 1.20 (will be messy), so if you could do the branch to reduce the max oplog size that would be great
<wallyworld> yup
<axw> sure
<wallyworld> there's been a lot of change in trunk in the same areas as touched by the collection stuff
<axw> fun :(
<axw> wallyworld: has anyone managed to look at a production juju installation's oplog?
<wallyworld> axw: i think kapil did in the bug report
<wallyworld> and nate was going to ask in #mongo for advice
<wallyworld> but i need to ping him cause i haven't heard back
<wallyworld> kapil wanted the size reduced also
<wallyworld> k
<axw> hmm it shows the cap size, but not usage
<wallyworld> yeah, i'm not sure we have that data
<axw> err, azure simplestreams is busted
<axw> wallyworld: do I poke IS or what?
<axw> there's only trusty images for China
<wallyworld> oh :-(
<axw> in the index
<wallyworld> sigh, yeah poke #is
<wallyworld> they were meant to put tests in place to catch this
<wallyworld> i gave them the tools
<wallyworld> thumper: yo
<wallyworld> or menn0
<menn0> wallyworld: yep?
<wallyworld> fresh deploy of a 1.21alpha1 environment, deployed gui, all looks ok, but
<wallyworld> machine-0: 2014-07-23 02:49:28 ERROR juju.worker runner.go:218 exited "upgrade-steps": unexpected quit
<wallyworld> machine-0: 2014-07-23 02:49:28 INFO juju.worker runner.go:252 restarting "upgrade-steps" in 3s
<wallyworld> log is full of the above
<menn0> hmmm
<wallyworld> i can ssh in and poke around. have you seen that?
<wallyworld> this is on aws
<menn0> can I get the full log?
<menn0> also, where can I get the code for that release? I'd like to see what made it in.
<wallyworld> menn0: this is trunk
<menn0> ok
<wallyworld> i'll pastebin the log
<menn0> built when? this code has changed quite a bit over the past 3 days (one change is testing for merge right now)
<axw> wallyworld: hloeung says the server team manages the index, so I guess I'll just email Ben Howard?
<wallyworld> axw: yep, and cc scott moser
<menn0> wallyworld: I will try to repro
<wallyworld> menn0: i built it just before. i have my own changes in there concerning copying sessions when talking to mongo, so it's possible that may be involved, but everything else works ok
<wallyworld> by in there, i mean in state
<wallyworld> not the upgrade code itself
<wallyworld> the changes are below the waterline
<menn0> wallyworld: ok. let me have a dig.
<axw> best merge directive evar
<davecheney> waigani: http://www.shag.com/
<davecheney> something tells me that he wouldn't have his art reproduced on a 3 buck sticker sold on ebay
<wallyworld> axw: say wot
<axw> wallyworld: https://github.com/juju/juju/pull/364#issuecomment-49827041
<wallyworld> omfg
<wallyworld> trust menn0 to be a smartarse
<wallyworld> well played, sir
<axw> wallyworld: how many instances did you get up to on ec2?
<wallyworld> axw: about 10 or so
<wallyworld> and then the account ran out
<wallyworld> very slow
<axw> ok
<axw> ah, there may be trusty images in the daily stream for azure... will try that
<wallyworld> ok
<davecheney> who wants to be mean to me ? https://github.com/juju/juju/pull/365
<davecheney> gentle ping, https://github.com/juju/juju/pull/362
<axw> wallyworld: actually, bootstrapping azure won't tell us much. its root disks aren't that big (25G), so the oplog isn't large anyway
<axw> wallyworld: did you create a large root-disk on ec2?
<wallyworld> axw: no. but the idea was to see if a smaller oplog could handle a stress test with large numbers of units etc deployed
<axw> wallyworld: ok, I see
<wallyworld> if the smaller one could handle it, then 5GB will be plenty
<axw> fair enough
<davecheney> i think maas is unique in the fact it gives you a large /
<davecheney> 'cos you get 100% of the underlying machine
<axw> I thought you could request whatever you want with ec2?
<axw> one of the cloud providers does that
<wallyworld> davecheney: even if the disk is large, i don't get why it's not partitioned to have a smaller /
<davecheney> wallyworld: maas is web scale
<menn0> wallyworld: fwereade and I are going to change Runner slightly so that a worker doesn't have to wait for the stop channel to close if it wants to exit without error
<menn0> wallyworld: this is the source of that problem
<wallyworld> ok, sounds good
<menn0> wallyworld: we thought that had already been done but it wasn't (we should have checked more closely...)
<wallyworld> np, easy enough to fix :-)
<menn0> wallyworld: at any rate, it's only log spam and shouldn't have any real adverse effects otherwise. the upgrade-steps worker will be getting restarted but then exiting immediately over and over.
<wallyworld> yeah, that's what i saw
<wallyworld> axw: yay, just merged trunk version of the copy session stuff back to 1.20, soooo many conflicts \o/
<axw> fun times ahead
<wallyworld> yeah, i so want to get this comitted today so we can build a 1.20.2 rc
<thumper> wallyworld: oh hai
<wallyworld> yo
<thumper> wallyworld: I have a question for you too
<wallyworld> wasn't me
<thumper> wallyworld: I'm looking at the disk configstore, and wanting to remove the "create an empty file" bit
<thumper> wallyworld: fwereade said to ping you because you have have experience with this
<wallyworld> i do?
<thumper> and he vaguely remembered talking to you about it
<thumper> but if you didn't remember, then to ignore it
<wallyworld> let me look at the code
<axw> damnit
<axw> wallyworld: I just found a bug in azure, load balancing is broken
<axw> possibly just on trunk, will need to verify with 1.20
<wallyworld> thumper: yeah, i must have been on crack cause i don't recall the conversation
<thumper> wallyworld: ok, cool
<wallyworld> axw: well at least you found it
<davecheney> https://github.com/juju/schema/pull/3
<davecheney> ^ anyone? anyone? Beuller ?
<axw> wallyworld: 3 state servers, 10 ubuntu units deployed = 2.34MB oplog over 1 hour
<davecheney> axw: nice
<axw> mostly quiescent tho
<wallyworld> axw: will 5GB should be *plenty*
<wallyworld> well
<axw> indeed
<wallyworld> thanks
<axw> wallyworld: I think that change I made to API host ports doesn't actually stop the oplog from getting spammed
<axw> seems the assertion still gets inserted into the transaction log
<wallyworld> ah
<wallyworld> i wondered about that
<axw> so it's either racy or spammy
<wallyworld> whether assert noops got logged
<wallyworld> given the frequency of change, it's ok to be "spammy"
<wallyworld> imo
<wallyworld> since it's not really that spammy anyway
<wallyworld> luckily mongo is web scale :-)
<wallyworld> axw: did you forward port the api host ports race fix to master?
<axw> wallyworld: yep
<wallyworld> ok, that bit of code is coming up as a conflict doing my backport
<wallyworld> and it appeared to be showing the old code in master, but like my mistake
<davecheney> thumper: while you're slcking off,  https://github.com/juju/juju/pull/362,  https://github.com/juju/juju/pull/365
<axw> wallyworld: where did the 5GB number come from?
<wallyworld> axw: out of thin air. greater than 1GB but an order of magnitude less than 50GB
<wallyworld> we use max 1GB for local provider
<axw> we use 1MB for local
<wallyworld> ah, ooops
<wallyworld> i thought it was G
<wallyworld> so maybe 1GB is sufficient
<axw> I think so
<wallyworld> ok
<wallyworld> now that there's some numbers
<waigani> axw: hello :)
<axw> waigani: ahoy
<menn0> wallyworld: btw, fix for that upgrade-steps problem on the way: https://github.com/juju/juju/pull/366
<waigani> axw: I've got a question - let's see how good your memory is
<wallyworld> menn0: awesome, thanks
<jam> axw: if you wanted a worst case, you could use the xplod charm
<waigani> axw: https://codereview.appspot.com/70190050/diff2/20001:120001/state/state.go
<jam> instead of ubuntu
<waigani> axw: I'm trying to remember while I added additionalValidation
<jam> axw: xplod
<jam> axw: xplod
<axw> ? :)
<jam> copy & paste fale
<jam> fail
<jam> https://code.launchpad.net/~jameinel/charms/precise/peer-xplod/peer-xplod
<axw> thanks
<axw> I'll give that a shot
<jam> axw: when you add units, they start a peer chatter amongst them
<jam> each one tries to increment a number and report it back to the rest of the peers.
<jam> axw: also, as it is a simple charm, you can "juju deploy --to 1" to get more of them
<jam> (well add-unit --to 1)
<jam> axw: though the fslock means they won't really scale super huge on one machine
<axw> jam: cool, thanks
<axw> waigani: umm
<waigani> axw: yeah hehe
 * axw greps for uses of UpdateEnvironConfig
<waigani> axw: none
<jam> davecheney: why no uppercase letters (A-F) ?
<waigani> axw: fwereade has suggested removing it
<waigani> axw: but just wanted to double check that there wasn't a good reason to keep it
<axw> waigani: not none
<waigani> oh?
<axw> apiserver/client/EnvironmentSet
<fwereade> waigani, there's one test that uses it to fuck things up creatively
<fwereade> waigani, axw: and, yeah, we do need that functionality
<axw> there's a bit of code in there that checks agent-version isn't set
<fwereade> waigani, axw: the fact that the test *can* fuck it up is evidence the method itself is a bit broken
<davecheney> jam: 'cos that is what we say is a uuid
<fwereade> waigani, axw: but it's gradually been becoming less so over time which is nice
<davecheney> jam: is that my old xplod charm ?
<jam> davecheney: I adapted yours and put a peer relation in it to make it easier to use
<jam> you still get N^2 by just adding units
<axw> jam: stupid question, is there a charm URL to deploy that directly?
<axw> or do I need to fetch it and local:
<jam> axw: not that I know of. branch it locally and then --repository local:
<axw> okey dokey
<davecheney> jam: be careful
<davecheney> that charm for my ec2 credentials locked out for abuse :)
<jam> davecheney: we fixed that bug :), and I've used it on EC2 and scaled up to about 200*10 units
<jam> (more than 100 doesn't help because of CPU bound on the individual machines)
<axw> why did it get you locked out?
<jam> axw: we used to have a bug where every hook
<jam> would call a Provider
<jam> would make a Provider call (for the API server IP addresses)
<jam> so hundreds of calls per second?
<axw> I see :)
<davecheney> yup
<davecheney> made it hard to kill that environment ...
<jam> davecheney: ai
<jam> aiui, you ran into rate limiting first, which meant you couldn't kill it
<jam> which let it run away until they shut it down for you :)
<davecheney> couldn't even use the aws console
<davecheney> as your quote counts towards that api as well
<davecheney> (probably calls the same endpoint under the hood)
<jam> davecheney: I wonder if you used IAM credentials you could get just the one IAM account locked out, and still access it as your super users.
<davecheney> dunno
<axw> jam, wallyworld: thoughts on https://bugs.launchpad.net/juju-core/+bug/1344940/comments/17 ?
<_mup_> Bug #1344940: Juju state server database is overly large <canonical-is> <cloud-installer> <landscape> <mongodb> <juju-core:Triaged by axwalk> <juju-core 1.20:Triaged by axwalk> <https://launchpad.net/bugs/1344940>
<wallyworld> axw: i think 1GB to be safe
<wallyworld> that's imo
<axw> wallyworld: I'll decrease min to 512MB, and max to 1024MB
<wallyworld> ok, sounds good
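The sizing axw settles on (min 512MB, max 1024MB) is a clamp over whatever the disk would otherwise dictate. A sketch of that arithmetic, assuming MongoDB's usual heuristic of roughly 5% of free disk as the starting point; the helper name and the 5% rule are assumptions here, the real constant lives in juju's mongo/prealloc.go:

```go
package main

import "fmt"

// defaultOplogSizeMB derives an oplog size from free disk space (MongoDB's
// own default is roughly 5% of free space on 64-bit Linux) and clamps it
// to the [512MB, 1024MB] range agreed in the discussion above.
func defaultOplogSizeMB(freeDiskMB int) int {
	const (
		minOplogMB = 512
		maxOplogMB = 1024
	)
	size := freeDiskMB / 20 // ~5% of free space
	if size < minOplogMB {
		return minOplogMB
	}
	if size > maxOplogMB {
		return maxOplogMB
	}
	return size
}

func main() {
	for _, disk := range []int{8 * 1024, 16 * 1024, 100 * 1024} {
		fmt.Printf("%dMB free -> %dMB oplog\n", disk, defaultOplogSizeMB(disk))
	}
}
```

The measurement at L4989 in the log (2.34MB of oplog for 3 state servers and 10 units over an hour) is what makes even the 512MB floor look generous.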
<axw> and if mark doesn't like it, we'll take it from there
<jam> axw: sounds good to me, there is a lot of "how out of date can a replica get and come back without a full sync". If we want to support 1 day? 5 hours? all that stuff is pretty arbitrary
<axw> yeah
<axw> I think 24 hours would be reasonable, but... I'm not an ops guy
<jam> axw: again, it depends a lot on how big the actual DB becomes, to determine how long a full sync is going to cost
<jam> again, our estimates are that it isn't that bad
<wallyworld> axw: do you know why "git diff X...Y" works but "git merge X...Y" doesn't?
<jam> axw: it would matter more if you had a 1TB database, and churn of less that 1GB/day, doing a full sync would be painful
<axw> wallyworld: I don't
<wallyworld> :-(
<jam> wallyworld: git cherrypick, IIRC
<jam> wallyworld: git cherry-pick -h
<wallyworld> jam: i tried that but it brings in other stuff that doesn't show up in the diff
<jam> wallyworld: so you can always do "git dif X..Y | patch -p1"
<wallyworld> tried that but it says the patch can't apply
<wallyworld> although i didn't do the -p1
<jam> wallyworld: p1 matters, because they put "a/" and "b/" prefixes
<jam> you need to be in the root as well
<jam> it may be that conflicts/moved code/etc means a patch won't do a good job
<jam> *I* would try cherry-pick as that is the intended git method (AFAICT)
<wallyworld> jam: yeah, but for some reason, although the diff shows as correct, cherry-pick was bringing in revs i didn't see in the diff, and aborts part way through
<wallyworld> i just want it to apply what's in the diff and let me resolve merge conflicts after
<axw> wallyworld: I've just been listing the commits explicitly in "git cherry-pick" and that worked fine, FWIW
<axw> I don't tend to have a lot of commits in a PR though
<axw> https://github.com/juju/juju/pull/368
<wallyworld> i just want to take the difference between master and a feature branch and backport
<axw> oplog moar smaller
<wallyworld> nfi why this is so easy in bzr and so hard in git
<jam> wallyworld: http://stackoverflow.com/questions/449541/how-do-you-merge-selective-files-with-git-merge seems to recommend rebasing until you get clean commits, then cherry-picking them...
<wallyworld> jam: thanks will look. i've not had much luck with rebasing in the past, will try again
<jam> wallyworld: the other approach is specifically rebasing the desired revisions into the target
<wallyworld> so many revisions i'd rather just apply the whole diff and resolve conflicts
<jam> and then remove the revisions you don't want merged
<wallyworld> i can't see why diff works and I can't just use "merge" in place of "diff" in the command line
<jam> wallyworld: patch can't handle merge conflicts, because it only has 2 inputs (you need 3 with a diff3 approach). I can think of how you could do it manually (checkout the common ancestor, checkout both merge tips, etc.)
<jam> wallyworld: I don't understand git merge internals
<jam> it may refuse to do merges that don't include the whole DAG which is what you are trying to do.
<jam> they have cherry-pick for that, but it only seems to support single revs
<wallyworld> tl;dr; it sucks
<axw> wallyworld: gotta go pick up my daughter, but I just confirmed that the azure bug is present in 1.20.1 too
<axw> I've added it to the milestone
<axw> bbs
<wallyworld> axw: np, thanks, i have to head out for a bit as well
<jam> cmars: are you still around?
<jam> I'd like to discuss your plans with https://github.com/juju/juju/pull/367
<TheMue> morning
<Egoist_> Hello
<Egoist_> How do I set environments.yaml for openstack to use only one security group?
<axw> jam: I'm guessing wallyworld's branch has broken the tests, but not really sure
<axw> it touched a lot of statey things
<axw> my branch is failing the tests too
<niedbalski> axw, thanks for the reply on the thread.
<axw> niedbalski: hey nps. not sure if that helped at all
<axw> hopefully gives some context tho
<niedbalski> axw, yep, i needed some context
<rogpeppe1> axw: hiya
<axw> rogpeppe1: howdy
<rogpeppe1> axw: how's tricks?
<axw> not too shabby
<axw> and with you?
<rogpeppe1> axw: pretty good. currently in london for the gui sprint
<rogpeppe1> axw: doing charm store stuff
<rogpeppe1> axw: i was just wondering if you had any opinions about charm.Reference
<axw> rogpeppe1: I don't know what it is, so nope ;)
<rogpeppe1> axw: ah, ok. i think it has to die :-)
<rogpeppe1> axw: but i'd like to speak to someone who might've been originally involved in creating it
<axw> rogpeppe1: looks vaguely related to the changes made not too long ago to support cs:<charm> without specifying series
<axw> and having the charm store tell us which one to use
<rogpeppe1> axw: yeah
<rogpeppe1> axw: the problem is that it makes it impossible to have a bunch of charm urls, some of which specify the series and some of which do not
<axw> rogpeppe1: looks like cmars is the man to talk to
<rogpeppe1> axw: unfortunately he's also sprinting... in NZ
<axw> sorry, out of my domain
<axw> ah :(
<rogpeppe1> axw: np
<bodie_> https://github.com/juju/juju/pull/370 fwiw --
<bodie_> something's wacky and the sun is coming up, so I'm putting this one down for now, but if anyone knows about how the cmd.out.Write method handles maps and --format=json, that insight would be valued
<TheMue> jam: I'm in da house
<jam> TheMue: for some reason pidgin isn't beeping at me
<axw> wallyworld: there's a "Revert" button on merged PRs if you didn't realise
<axw> no need to do a reverse PR or anything
<wallyworld> axw: ah, rightio, thanks, didn't see that
<mgz> but reverse prs are fun...
<mgz> (also, I'm not actually sure how much I trust github's merge algo stuf...)
<wallyworld_> mgz: katco: axw: sorry, network problems, be there soon
<mgz> wallyworld_: no probs
<perrito666> good morning
<mgz> hey perrito666
<natefinch> I'm going to be out for a bit, have to take my 1 year old to the doctor.  Poor thing has had a 102.5°F (39°C) fever for 36 hours.
<perrito666> ouch natefinch best of lucks with that
 * natefinch needs a bot to just automatically convert F/miles to C/km for the rest of the non-US (aka non-backwards) world
<katco> natefinch: gosh hope he starts feeling better nate
<perrito666> I am deleting and re-creating a folder in a given function, how can I test that the new folder is not the same, I tried FileInfo's ModTime, but the deletion/creation happens too fast for ModTime to change and I really do not think this justifies adding a sleep of 1 sec in the middle, ideas are welcome
<mgz> inode?
<katco> perrito666: i really hate to rely on sleeps; they're so non-deterministic. i like mgz's idea
<perrito666> katco: exactly I am trying to find an alternative
<mgz> you'll have to skip on windows but that's not too bad
<perrito666> mgz: not an issue this is restore
<perrito666> wow os.FileInfo could really have more information
<hackedbellini> natefinch: hi! Is there an eta for when I will be able to update to that new version that will correct the issue I'm having?
<katco> hackedbellini: i believe natefinch is out at the moment, taking care of a sick child.
<hackedbellini> katco: ahhh, ok. np, will talk to him later so :)
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Bugs: 4 Critical, 152 High without fixes - https://bugs.launchpad.net/juju-core/
<natefinch> back
<perrito666> natefinch: how was it?
<natefinch> perrito666: possibly an ear infection.  Not conclusive, but possible, so she's on antibiotics.
<ericsnow> natefinch: yuck
<ericsnow> natefinch: hope she feels better soon!
<natefinch> Thanks
 * TheMue always felt bad when the kids have been ill. today the kids feel bad when I'm ill. :D
 * perrito666 has no kids but knows a lot about ear pain
<TheMue> I thankfully seldom had, but the kids from time to time.
<TheMue> …oooOOO( Or at least I cannot remember if I had it as a kid too sometimes. )
<katco> have we ever discussed using the irc bot to do CI notifications in the room? maybe just failures or other important things?
 * perrito666 upgrades ISP to double speed for U$D3/month
<perrito666> I love promotional loopholes
<natefinch> katco: no, but it's a good idea
<katco> what bot are we using?
<natefinch> perrito666: I called my ISP about upgrading, and they said "Oh, you have 50/25 right?"  And I was like, no I have 70/35.  And they said "you can't have that, we don't offer that".  Well, actually you do/did due to a promotion.  But thanks for being a jerk.
<natefinch> katco: no idea  bug #1347715
<_mup_> Bug #1347715: Manual provider does not respond after bootstrap <bootstrap> <ci> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1347715>
 * natefinch wasn't sure if the bot said what it was... 
<natefinch> I've been using IRC for approximately 360 days, so I'm not really the one to ask :)
<perrito666> natefinch: what was the question?
<katco> natefinch: it did. were you just posting that bug as an example, or are you wanting me to look at that particular one?
 * perrito666 got upgraded but also disconnected
<natefinch> katco: just an example. First one that I found
 * natefinch wasn't sure if mup was the bot's name or just the name we gave it to post as
<katco> natefinch: probably a name someone gave
<katco> i'll post something to juju-dev
<natefinch> mgz probably knows
<mgz> mu p~muo mu?
<perrito666> ah, iirc, jenkins has a bot builtin or can have one
<perrito666> I had that working for a project once
<katco> perrito666: yeah it does
<katco> perrito666: it's a plugin i think
<perrito666> very annoying thing
<perrito666> a lot of finger pointing
<katco> lol
 * katco currently waiting for jenkins to fail so she can resubmit
 * katco would rather not keep flipping to the jenkins console
<hackedbellini> natefinch: hi! You weren't here before, so I'll resend the question I've made earlier =P
<hackedbellini> natefinch: hi! Is there an eta for when I will be able to update to that new version that will correct the issue I'm having?
<natefinch> hackedbellini: I'll get the fix in today, and then I think we're making a cut at end of week anyway, so.... early next week?
<mgz> mup is supybot, but there's a launchpad project somewhere with stuff in
<katco> mgz: while you're here... is there any way i can kill a jenkins job i know will fail?
<mgz> katco: only with admin access... so you can poke me
<katco> mgz: #103
<mgz> the current one?
<katco> yeah
<mgz> done
<katco> the command i used to run my tests locally skipped over one that's failing
<katco> ty sir
<mgz> you need a little magic to requeue it
<katco> mgz: oh, i just did $$merge$$
<mgz> either craft your own github comment with "merge failed: " in it, or get me to requeue via jenkins
<katco> the second $$merge$$ seemed to work... huh
<perrito666> natefinch: standing thinguie
<mgz> katco: oh, you got a tests failed message
<katco> mgz: is that surprising?
<hackedbellini> natefinch: ok, no problem! I'm just anxious for that fix because without it my juju is dead =P
<hackedbellini> let me know if you need me to test anything.
<natefinch> hackedbellini: understandable
<natefinch> perrito666: ok
<mgz> katco: if I got the abort in, that wouldn't happen - might have finished anyway or something
<katco> ahhh ok
<katco> i follow now; that's why i would have had to craft the failure manually
<TheMue> aaaaah *jump* *jump* *jump* lxc-ls now shows an ipv6 address for a container
<TheMue> sadly had to set it internally by hand, it is not set during deployment
<perrito666> ericsnow: what is the whole cleanup thing https://github.com/juju/juju/pull/334/files#diff-076396fa7fd3f93945528111df2d8319R48 ?
<ericsnow> perrito666: it's so that we don't leave any empty file behind in case of an error
<perrito666> I meant the cleanup variable
<ericsnow> perrito666: that could definitely use a comment
<perrito666> it seems to me that you only delete when there is an error?
<ericsnow> perrito666: that's right
<perrito666> ericsnow: you can name the error return and check it
<perrito666> since deferred call is after the function exit
<ericsnow> perrito666: ah, good point
<ericsnow> perrito666: note that in the review and I'll take care of it
<perrito666> will do, just wanted to make sure I understood your intentions correctly
<ericsnow> perrito666: you do
<ericsnow> perrito666: and thanks :)
<perrito666> np
<perrito666> does anyone know what is the possible error output of filepath.Abs ?
<perrito666> oh I see
<natefinch> anyone familiar with the upgrade logic?
<natefinch> mgz ^^ ?
<mgz> hmm. not very
<perrito666> sinzui: Ill take https://bugs.launchpad.net/juju-core/+bug/1342937
<_mup_> Bug #1342937: Juju restore  fails Could not get lock /var/lib/dpkg/lock <backup-restore> <ci> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1342937>
<sinzui> thank you perrito666
<natefinch> sinzui: I'm working on #1342725
<_mup_> Bug #1342725: C:/Juju/lib/juju/nonce.txt does not exist, bootstrap failed in win <ci> <regression> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1342725>
<natefinch> I had most of a fix, but it conflicts with the windows changes gsamfira is doing, so I'm going to grab the fix from his code.
<jcw4> perrito666: https://github.com/juju/juju/pull/351 is updated... if you get a chance I'd appreciate a look
<perrito666> jcw4: will do
<jcw4> tx perrito666
<perrito666> jcw4: interesting test https://github.com/juju/juju/pull/351/files#diff-7195d3d7d4a41d504d4c75799ca3e540R342
<jcw4> perrito666: I knew I couldn't get anything past your eagle eye
<jcw4> I remembered I hadn't done that test after merging in master and didn't want to make more changes... I suppose I'll have to now
<jcw4> :)
 * natefinch tries desperately to get his logs above 1 meg
<natefinch> yay, log rotation
<jcw4> natefinch: yay
 * jcw4 assumes thats a good thing
<jcw4> :)
<natefinch> haha yes
<perrito666> natefinch: if you have anything using mysql turn on debug mode
<perrito666> that should do
<perrito666> :p
<natefinch> heh
<natefinch> my log rotation package lets you specify a max size, but it's in megabytes, which is a surprisingly large amount of plaintext
<natefinch> -rw-r--r-- 1 root   root   1.0M Jul 23 12:06 machine-0-2014-07-23T16-06-26.278.log
<natefinch> -rw-r--r-- 1 root   root   288K Jul 23 12:07 machine-0.log
<wwitzel3> natefinch: I thought we were just using logrotate?
<natefinch> wwitzel3: logrotate won't work on windows
<wwitzel3> hah, dumb, didn't think of that
<natefinch> wwitzel3: and we weren't using anything.  I had investigated logrotate, and it probably would have been ok, but honestly probably more work than using a pure-go solution (which also happens to be portable to other OSes)
<wwitzel3> natefinch: makes sense
<natefinch> review for whoever https://github.com/juju/juju/pull/375
<katco> backport for v1.20: https://github.com/juju/juju/pull/376
<katco> natefinch: would we ever consider qualifying our variable names to include size units? e.g.: MaxSizeInMB?
<natefinch> katco: I wouldn't.... I might consider making a custom type like type Megabyte int   so that you can't accidentally convert from one to the other.
<katco> i still like axw's idea to duplicate the time package's implementation for size units
<katco> but i wouldn't mind more discussion on the naming; what do you dislike about that technique?
<natefinch> katco: I thought of that, and actually did that for v1 of lumberjack (had Megabyte = 1024*1024  and Gigabyte as 1024*Megabyte)
<natefinch> katco: the problem that came up was deserializing from a config file.... the config file can't use those constants, and can't just use 1024*1024, so it ended up needing to be like MaxSize: 100000000
<katco> i wonder how serialization works with the time package
<natefinch> katco: plus it just wasn't necessary to let people specify log size down to the byte.  no one's going to want a log that rolls over at 54 bytes or something
<katco> natefinch: no, agreed. it's all about clarity.
<natefinch> katco: there are custom serialization formats
<katco> i.e.: if i see MaxSize, i need to go look at the declaration to know what the unit is
<katco> which is kind of annoying
<natefinch> katco: yeah, I know
<katco> hence, MaxSizeInMB
<natefinch> it seems wrong.... like hungarian notation... but I can't explain why
<katco> hungarian notation was wrong b/c it caused tons of churn all over the codebase if you ever changed the type. i suppose you could have the same issue here
<katco> i guess the difference is that hungarian notation was trying to represent the syntax in the name, and this would be representing the value in the name; something appropriate for a variable name, and probably OK to cause refactoring since you would be evaluating all the places it's used anyhow
<natefinch> It's also the kind of thing you only ever use once in your application and then never touch again
<katco> what the variable?
<natefinch> in this specific case
<natefinch> you set the logging output in one spot one time and you're done.
<perrito666> katco: you dont need to go to the  declaration, just MaxSize = "n" and try to build and the error will tell you :p also I am surprised emacs does not tell you that
<katco> perrito666: rofl
<katco> perrito666: emacs had already simulated, and predicted the outcome of this conversation, and decided it was not worth its time.
<katco> perrito666: also, any stupid things i say are emacs trying to undermine my credibility. (nods)
<katco> natefinch: yeah it's definitely not something i would dig my heels in on, but interesting conversation.
<natefinch> katco: I'm not sure you're wrong.  I have a vague leaning against it, but like I said, I can't really explain why.
<katco> natefinch: sometimes those gut checks are correct.
<perrito666> well it reminds me of old php code where they did typing through variable naming
<perrito666> people that is
<katco> perrito666: see aforementioned comment about name trying to express syntax vs. type of value.
<katco> perrito666: that strikes me as hungarian notation, slightly different from this.
<perrito666> we will end up with a size struct and MB() GB() B() methods :p
<natefinch> I rather like go's named types, like type Megabytes int, however it doesn't help in this case because you can still just do MaxSize = 100, since constants are untyped and mold to fit the type they're assigned to.
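natefinch's caveat about untyped constants can be seen in a couple of lines: a named type documents the unit but doesn't stop a bare literal (`Config` and `Megabytes` here are illustrative, not lumberjack's actual API):

```go
package main

import "fmt"

// A named type documents intent...
type Megabytes int

type Config struct {
	MaxSize Megabytes
}

func main() {
	// ...but untyped constants convert silently, which is the
	// point above: nothing stops a bare 100 here.
	c := Config{MaxSize: 100}
	fmt.Println(c.MaxSize) // 100

	// A typed int, by contrast, needs an explicit conversion;
	// `c.MaxSize = n` would not compile.
	var n int = 100
	c.MaxSize = Megabytes(n)
	fmt.Println(c.MaxSize) // 100
}
```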
<perrito666> anyway, I was about to say, for me file size magnitudes should always be in bytes
<perrito666> which is default for most unix tools
<natefinch> perrito666: I know, but see above about deserializing from a config file.  I did that in v1, and ended up with a config file that had maxsize = 100000000
<natefinch> and that's just ugly
<natefinch> I don't ever want to have to count zeroes
<perrito666> natefinch: most unix config files hold that kind of values
<perrito666> or modern ones accept the unit as part of the value
<natefinch> yeah, the unit deserialization is a possibility
<natefinch> 200MB  or 1.5GB etc
<perrito666> true
<natefinch> meh
<natefinch> megabytes is fine
<perrito666> most unix commands evolved from times where bytes were something meaningful so they added units when 1T was something that you could have on your laptop I guess
<katco> 200 * size.MB
<natefinch> katco: that's fine for code, but doesn't work in deserialization, which is a big use case in logging configuration
<katco> oh you're talking about the use-case for fiddling with these settings in the config file where such niceties aren't present
<katco> i missed that, sorry
<natefinch> right, np
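The "200MB or 1.5GB" suffix deserialization discussed above might look like this (`parseSize` is a hypothetical sketch, not a juju or lumberjack API):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize converts strings like "200MB" or "1.5GB" to a byte
// count, so config files never need a wall of zeroes. Suffixes
// are checked longest-first so "MB" isn't mistaken for "B".
func parseSize(s string) (int64, error) {
	units := []struct {
		suffix string
		mult   float64
	}{
		{"GB", 1 << 30},
		{"MB", 1 << 20},
		{"KB", 1 << 10},
		{"B", 1},
	}
	s = strings.TrimSpace(s)
	for _, u := range units {
		if strings.HasSuffix(s, u.suffix) {
			n, err := strconv.ParseFloat(strings.TrimSuffix(s, u.suffix), 64)
			if err != nil {
				return 0, err
			}
			return int64(n * u.mult), nil
		}
	}
	// No suffix: treat as plain bytes, the traditional unix default.
	return strconv.ParseInt(s, 10, 64)
}

func main() {
	for _, s := range []string{"200MB", "1.5GB", "1024"} {
		n, err := parseSize(s)
		if err != nil {
			panic(err)
		}
		fmt.Println(s, "=", n)
	}
}
```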
<natefinch> makes me want to write a size package so I can have those niceties
<natefinch> plus, "package size" is um.... amusing
<katco> oh lord lol
<natefinch> whelp, I know what my next project is :)
<katco> haha
<perrito666> ohh man, I arrived to late for the size and package jokes
<katco> but you have to give it a name that's a double entendre
<natefinch> the key is figuring out how much double entendre you can fit in and still make it unclear if it's on purpose or not
<katco> rofl
<katco> +1 nate. +1.
<perrito666> natefinch: the thing is to actually name it size, make it a very useful package, and then get proper english speakers to compliment you on it
<katco> "One of the driving principles of the X project is that size shouldn't matter. It's how you use it. With this, we thrust our package into the go community and await feedback."
<katco> oh look, an emacs plugin to watch jenkins status. :)
<perrito666> ericsnow: ping
<ericsnow> perrito666: hey
<perrito666> ericsnow: hey, I cannot find what did you patch to make this work https://github.com/juju/juju/pull/334/files#diff-baa2cc9d463ab23cb9521ade2d84a5e9R94
<perrito666> :p
<perrito666> a little help?
<ericsnow> perrito666: it's in setData()
<perrito666> ericsnow: I guessed so much, I was not sure how that goes all the way down to backup
<ericsnow> perrito666: the actual patching happens in SetUpTest()
<perrito666> ohh I see
<perrito666> I thought you were cheating the whole thing so it would return the same hash
<ericsnow> perrito666: tempting but no :)
<ericsnow> perrito666: by cheating I would have been "done" a lot earlier :)
<perrito666> ericsnow: I am done reviewing up to api_test.go, later I'll continue with the rest
<perrito666> but you have a handful of my comments,
<ericsnow> perrito666: awesome
<perrito666> :) I am sure more savvy people can give you even better comments
<ericsnow> perrito666: thanks so much
<perrito666> actually today you could force our own natefinch or cmars to review your code :p or that's what the ocr schedule says
<perrito666> bbl, bike time
<natefinch> damn, am I OCR today?  Where's that OCR list?
<perrito666> https://github.com/juju/juju/pull/377
<perrito666> natefinch: sent you the link in priv
<perrito666> natefinch: and since you are ocr, that link I just posted is a large part of restore
<perrito666> ;)
<perrito666> ericsnow: do you mind if I take over https://github.com/juju/juju/pull/113 ? or are you going to work on it? I am really looking forward to have a functional restore (after I fix this week's chapter bug of old restore)
<ericsnow> perrito666: go ahead :)
<arosales> Hello
<natefinch> quick, everyone hide
<arosales> fyi, we have a juju core panic on Power reported in bug https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1347322
<_mup_> Bug #1347322: juju ssh results in a panic: runtime error <ppc64el> <juju-core:Triaged> <juju-core (Ubuntu):Confirmed> <https://launchpad.net/bugs/1347322>
<arosales> natefinch: lol :-)
<arosales> previous bug was https://bugs.launchpad.net/ubuntu/+source/gccgo-4.9/+bug/1304754
<_mup_> Bug #1304754: gccgo has issues when page size is not 4kB <ppc64el> <trusty> <gcc:Fix Released> <gcc-4.9 (Ubuntu):Fix Released> <gccgo-4.9 (Ubuntu):Invalid> <gcc-4.9 (Ubuntu Trusty):Invalid> <gccgo-4.9 (Ubuntu Trusty):In Progress by doko> <gcc-4.9 (Ubuntu Utopic):Fix Released> <gccgo-4.9 (Ubuntu Utopic):Invalid> <https://launchpad.net/bugs/1304754>
<arosales> mbruzek can reproduce this bug pretty easily so please feel free to ping him if any further data is needed.
<arosales> If anyone has any insights into that bug it would be much appreciated as it is blocking juju deployments on power.
<natefinch> the ssh one?
<arosales> natefinch: correct, I think mbruzek also saw it on regular deploys
<arosales> ssh reliably reproduces it though
<mbruzek> natefinch, if you need access to power system I can hook up
<arosales> natefinch: initially we thought it to be the same as the compiler bug 1304754
<_mup_> Bug #1304754: gccgo has issues when page size is not 4kB <ppc64el> <trusty> <gcc:Fix Released> <gcc-4.9 (Ubuntu):Fix Released> <gccgo-4.9 (Ubuntu):Invalid> <gcc-4.9 (Ubuntu Trusty):Invalid> <gccgo-4.9 (Ubuntu Trusty):In Progress by doko> <gcc-4.9 (Ubuntu Utopic):Fix Released> <gccgo-4.9 (Ubuntu Utopic):Invalid> <https://launchpad.net/bugs/1304754>
<natefinch> mgz: the machine agent and unit agent - they're separate processes running at the same time on the same machine, right?
<natefinch> wallyworld: are you really there?
<natefinch> mgz: you can ignore my previous question when and if you see it :)
<mbruzek> natefinch, I saw your update to 1347322
<mbruzek> Which log do you need more of?  The text in the bug body is from the console.
<mbruzek> I included all of the dmesg output from that system.
<natefinch> mbruzek: the juju machine log should
<natefinch> mbruzek: have more info
<natefinch> mbruzek: this looks like a plain old code problem, not a compiler problem, though it could be the latter that just happens to show up as the former
<mbruzek> natefinch, I don't see an error in machine-1.log, do you want to see all-machines.log?
<natefinch> mbruzek: oh, I think I was misunderstanding what I saw.  That's a panic in the CLI code.
<natefinch> I think
<natefinch> it's so hard to read when it's all wrapped wackily like that
<mbruzek> natefinch, yeah it is.
<natefinch> mbruzek: if you can repro easily, can you get me cleaner output of that text?  or is that the best you can get?
<mbruzek> natefinch, when I get the panic the screen is garbled like that always
<mbruzek> natefinch, I can work on cleaning it up for you
<mbruzek> natefinch, I also got the machine logs from the system if they would be helpful
<natefinch> mbruzek: never hurts to attach more logs to a bug :)
<mbruzek> https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1347322
<_mup_> Bug #1347322: juju ssh results in a panic: runtime error <ppc64el> <juju-core:Triaged> <juju-core (Ubuntu):Confirmed> <https://launchpad.net/bugs/1347322>
<mbruzek> natefinch, updated
<natefinch> mbruzek: thanks
<natefinch> mbruzek: i can clean up the log as easily as you, it's no problem
<mbruzek> natefinch, the panic only happens *after* I juju ssh to the ubuntu unit.  I just use the terminal a bit and it goes sideways.  I don't believe it is related to what I am running on the terminal.
<mbruzek> natefinch, to expedite the bug I juju sshed to the same unit 3 times and got the same garbled text in all three windows.
<natefinch> oh weird.  so you're connected for a bit before it actually blows up?
<mbruzek> natefinch, yes I am not sure what sets it off.  I was doing several different things
<natefinch> mbruzek: does it happen if you ssh into the machine the old fashioned way?
<natefinch> in theory, juju ssh just gets the ssh info from state and then runs ssh like a normal person would
<davecheney> thumper: https://bugs.launchpad.net/juju-core/+bug/1347939
<_mup_> Bug #1347939: build is unstable since 7524c62 <juju-core:Confirmed> <https://launchpad.net/bugs/1347939>
<davecheney> wallyworld__: sad to say your session copy fix broke the build, https://bugs.launchpad.net/juju-core/+bug/1347939
<_mup_> Bug #1347939: build is unstable since 7524c62 <juju-core:Confirmed> <https://launchpad.net/bugs/1347939>
<wallyworld__> davecheney: i reverted it last night
<davecheney> ok thanks
<wallyworld__> it passed for me locally and on the bot
<wallyworld__> but clearly there's a race in our tests
<wallyworld__> :-(
<davecheney> wallyworld__: it's not a race
<davecheney> it's livelock
<davecheney> when I run the test my cpu usage eventually goes to 0 and the test will timeout
<wallyworld__> i haven't fully looked into it yet, just making an assumption as to why it passes sometimes and not others
 * perrito666 reviewed a 30 file pr
<davecheney> wallyworld_: right-o, thanks
<perrito666> davecheney: your post about conditional compilation is very cool, thank you, I did not know about _$GOARCH.go
<davecheney> perrito666: yup, the pattern is extended to
<davecheney> _$GOOS_$GOARCH.go
<davecheney> and even
<davecheney> _$GOOS_$GOARCH_test.go
<davecheney> mattyw: tasdomas http://blog.nuclearsecrecy.com/2014/05/23/oppenheimer-gita/
<davecheney> ^ that quote
<perrito666> waigani: funny network?
<mattyw> davecheney, tasdomas the reflections of feynman: http://www.youtube.com/watch?v=6no328q_VGQ
<fwereade> wallyworld_, ping
<perrito666> sorry I meant wallyworld_
<wallyworld_> hi
<fwereade> wallyworld_, I'm trying to figure out tools selection for container proviisioners
<wallyworld_> ok
<fwereade> wallyworld_, we have a bug that needs to be fixed -- that env provisioners start machines with the agentVersion in env config, not the current running version
<wallyworld_> was there a specific question?
<fwereade> wallyworld_, and it's looking like the container ones do that as well, but very indirectly
<fwereade> wallyworld_, ie they grab the tools with the envconfig agent-version, with arch/series taken from the current machine
<wallyworld_> fwereade: should the env config version not match the running version?
<fwereade> wallyworld_, but ISTM that they're all going through the same path, ultimately, in which they hit simplestreams
<fwereade> wallyworld_, not necessarily
<fwereade> wallyworld_, after the agent version is set by upgrade
<fwereade> wallyworld_, but before the upgrade actually happens
<fwereade> wallyworld_, we could provision a machine with tools not matching those running in the provisioner
<fwereade> wallyworld_, and *that* would come up (assuming it *did*) already running new tools
<fwereade> wallyworld_, and would never run upgrade steps
<perrito666> fwereade: hey, you are here, apparently the cloudsigma links I pointed to in my mails are the new prs, none has more than 6 files and jam\d? said that those are the ones you guys are going to review
<fwereade> wallyworld_, leading to all manner of potential unhappiness
<wallyworld_> i'll have to look up the moving parts in the container provisioners to remember how the tools selection works in there
<fwereade> perrito666, are they? ok I am out of date there, I last spoke to jam about a week ago, he is undoubtedly more current on that
<fwereade> wallyworld_, in particular the brokers get initialised with one set of tools
<perrito666> fwereade: yup, he answered to the thread I think
<fwereade> wallyworld_, looked up over the api with arch/series initialised from the current machine agent
<fwereade> wallyworld_, and then when we want to run a container with a different series we just hack up the tools struct so the wrong tools look like they have the right series
<fwereade> wallyworld_, and as if by magic everything somehow currently works
<fwereade> wallyworld_, more or less
<wallyworld> fwereade: the container provisioner gets the tools to run via a call to the Tools client api method
<fwereade> wallyworld_, yeah -- and that I think is the problem, because *that too* is using envconfig's agent-version
<fwereade> wallyworld, not that actual version that's really running on the machine
<wallyworld> i *think* the assumption was that the env version should match the running version
<wallyworld> the upgrade procedure is designed to make that happen
<wallyworld> ie set env version, trigger upgrade worker, download new tools, restart agents
<wallyworld> so by design, env version should match running version (perhaps implementation doesn't match that design)
<wallyworld> i'm hand waving a bit because i didn't do the initial implementation of all this
<wallyworld> what bug are you seeing?
<wallyworld_> fwereade: sorry, dropped off again, my irc is so flakey for some reason
<wallyworld_> not sure if you responded
<fwereade> wallyworld_, sorry, i've decided that it is a rabbit warren of death and I'm not going to fix it today
<fwereade> wallyworld_, I found that the upgrader is *also* interestingly fucked wrt what tools it picks
<wallyworld_> ok. is there a bug number so i can see the issue?
<fwereade> wallyworld_, there might be, but it's really just a race we realised was important for HA upgrades yesterday
<fwereade> wallyworld_, I *think* there was a bug a while ago
<wallyworld_> ok. perhaps the original design is sound for a single state server
<wallyworld_> but not ha
<fwereade> wallyworld_, i'm just trying to understand the situation and flailing around and grabbing onto you
<perrito666> ok fine people, my brain just SIGQUITted on me, see you all tomorrow, cheers
<fwereade> wallyworld_, it's still a bug, fwiw, but probably less impactful
<fwereade> wallyworld_, we can't assume that the provisioner running on version X is capable of correctly setting up an instance running !X
<perrito666> sinzui: I have a solution for the apt lock issue but Ill code it tomorrow cheeers
<fwereade> wallyworld_, worst case non-HA we just fail with one machine
<fwereade> wallyworld_, worst case with HA is a bit worse, we end up with a state server never participating in the upgrade synchronisation
<fwereade> wallyworld_, anyway
<fwereade> wallyworld_, it's not your problem and you don't need to worry about it
<fwereade> wallyworld_, sorry noise
<fwereade> perrito666, apt lock?
<fwereade> perrito666, context please?
<fwereade> perrito666, that solution *should* be the hook execution lock
<fwereade> perrito666, if it's anything else we may have a problem
<fwereade> perrito666, ehh, you're EOD
<fwereade> perrito666, please find someone who knows about the hook lock tomorrow before writing anything else that tries to fix apt contention
<fwereade> perrito666, sleep well :)
<davecheney> FAIL
<davecheney> FAIL    github.com/juju/juju/replicaset 173.495s
<davecheney> always fails on my machine
<davecheney> can anyone confirm they see the same thing ?
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1347969
<_mup_> Bug #1347969: FAIL: replicaset_test.go:155: MongoSuite.TestAddRemoveSetIPv6 <juju-core:New> <https://launchpad.net/bugs/1347969>
<davecheney> thumper: fwereade https://github.com/juju/juju/pull/378
#juju-dev 2014-07-24
<davecheney> wallyworld: https://github.com/juju/juju/pull/379
<davecheney> mattyw: http://www.ohloh.net/projects/juju
<perrito666> fwereade: since you're freaked out I returned :p
<fwereade> perrito666, I'm in a quantum superposition of being freaked out, and not freaked out, until I know more context :)
<perrito666> fwereade: since the latest speed ups restore sometimes fails when it arrives at the part where it installs the mongo client, because someone else is running apt-get
<fwereade> perrito666, what's the apt lock issue?
<mattyw> davecheney, 19% javascript?
<fwereade> perrito666, heh
<perrito666> so we need to wait until apt-get finishes
<perrito666> :)
<fwereade> perrito666, who else might be running it?
<perrito666> fwereade: I have no clue, but I intend to figure that out tomorrow
<fwereade> perrito666, ok
<davecheney> "maintained by a very large development team"
<perrito666> fwereade: but that aside, it is reckless for restore to just run apt-get without checking if it can
<fwereade> perrito666, well
<fwereade> perrito666, restore is setting up a new state server, right?
<perrito666> fwereade: well it is stepping on a fresh one :) so my quick guess, something like apt-get update/upgrade is happening
<mattyw> davecheney, thumper https://github.com/juju/juju/pull/369
<fwereade> perrito666, it kinda feels like the apt lock is too low level
<fwereade> perrito666, we may be stepping on it while cloudinit is still finishing?
<fwereade> perrito666, in which case we can expect more things than just the apt lock to fall over, I suspect
<perrito666> fwereade: good point, you think that if I wait for apt-get to finish, something else might break?
<fwereade> perrito666, yeah
<fwereade> perrito666, I think that we should have complete control over that server
 * perrito666 wonders how to know if everything else is finished
<fwereade> perrito666, I feel like we ought to have already solved that issue for bootstrap
<fwereade> perrito666, maybe we haven't?
<perrito666> I think we think we did
<fwereade> axw, you did sync bootstrap -- do we guarantee cloudinit is finished before we start work?
<fwereade> perrito666, if we do, restore should do whatever bootstrap does
<perrito666> fwereade: well the first step of restore is to bootstrap a machine so we can't really be more bootstrappy :p
<fwereade> perrito666, if not, restore and bootstrap should both make sure they wait
<fwereade> perrito666, and ideally use the same code in the same way to do so regardless
<fwereade> perrito666, that maybe sounds like we don't wait
<perrito666> fwereade: well, new restore is far cleaner in that sense ;)
 * fwereade grumps a bit
<fwereade> perrito666, cool
<fwereade> perrito666, so, high-level, ISTM that we should be reusing bootstrap as much as possible (which we are, great) but that in either case we should wait for cloudinit to be done before we start doing anything else
<fwereade> perrito666, or possibly I don't know what I'm talking about
<fwereade> perrito666, always bear that possibility in mind
<perrito666> I definitely need someone who knows more about bootstrap than I do to go over what bootstrap does and find out why it declares itself finished when it isn't
<fwereade> perrito666, either way, forget what I said about the hook execution lock, I'm pretty sure it's irrelevant
<fwereade> perrito666, thanks for coming back to chat
<fwereade> perrito666, bah, axw not around yet
<fwereade> perrito666, would you drop him a quick mail he can answer overnight please?
 * fwereade will slope off to lunch unless there's something else?
<perrito666> fwereade: I really need some sleep so ttyt
<perrito666> lunch? where are you?
<fwereade> perrito666, new zealand
<perrito666> cool, bring back a hobbit :p
<fwereade> perrito666, haha
<fwereade> perrito666, sleep tight
<perrito666> thank you
<wallyworld> davecheney: i found a deadlock in the client api login tests
<wallyworld> that will explain a lot of the test failures in that area
<wallyworld> very much timing related so subtle changes due to session copying may have triggered it to be more frequent
<davecheney> wallyworld_: awesome
<wallyworld_> davecheney: i updated your bug, hopefully explanation makes sense
<wallyworld_> fixed locally
<davecheney> wallyworld_: thanks
<davecheney> thumper: go test -run=XXX github.com/juju/juju/...
<thumper> fwereade: https://github.com/juju/juju/pull/380
<wallyworld_> axw: a small one https://github.com/juju/juju/pull/381
<axw> looking
<axw> ahh, is this what was causing the tests to timeout?
<davecheney> wallyworld_: do you have a new PR for reapplying your txn fix ?
<davecheney> then cmars can test it
<wallyworld_> davecheney: not yet, i want to retest locally first. soon
<davecheney> ok
<davecheney> tasdomas: var uuidregex = regexp.MustCompile(`[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}`)
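The pattern davecheney pasted can be exercised directly. A minimal sketch; note the pattern is unanchored and lowercase-only, so it will also find a UUID embedded in a longer string and will reject uppercase hex:

```go
package main

import (
	"fmt"
	"regexp"
)

// The exact pattern from the line above.
var uuidregex = regexp.MustCompile(`[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}`)

func main() {
	fmt.Println(uuidregex.MatchString("deadbeef-0000-4000-8000-0123456789ab")) // true
	fmt.Println(uuidregex.MatchString("not-a-uuid"))                           // false
	fmt.Println(uuidregex.MatchString("DEADBEEF-0000-4000-8000-0123456789AB")) // false: uppercase hex
}
```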
<axw> wallyworld_: something I can do to help you with the mongo session changes?
<axw> axw> wallyworld_: something I can do to help you with the mongo session changes?
<davecheney> axw: i think wallyworld needs help with his intertubes
<axw> so it would seem
<tasdomas> davecheney, thumper https://github.com/juju/names/pull/19
<axw> tasdomas: do you know if landscape has updated their code to use the new format?
<axw> tasdomas: also, #1257587 isn't actually Fix Released until this is in juju - right?
<_mup_> Bug #1257587: environment-tag handling permits non-unique tags <landscape> <tech-debt> <juju-core:Fix Released> <https://launchpad.net/bugs/1257587>
<davecheney> axw: true
<davecheney> we need to land a companion branch to juju and dependencies.tsv
<wallyworld__> davecheney: so, gh won't let me create a pr using the original copy-session branch because it says master already has all those commits, even though i reverted. so do i need to do a whole new branch? please say i don't
<menn0> wallyworld__: add --force to the push?
<wallyworld__> menn0: i haven't pushed anything new as yet - just went to the original branch on gh and tried to create a pr
<davecheney> wallyworld__: i wish I could help
<menn0> wallyworld__: ah sorry... I misunderstood what you were trying to do
<menn0> wallyworld__: I know even less about GH than Git
<davecheney> wallyworld__: i'd grab the .diff from the original PR and try to patch it onto a fresh branch
<wallyworld__> menn0: that worked, thanks. not sure why exactly, since the branch hadn't been changed on my end and the revs it contains were already in my fork. but it seems to have done something
<axw> wallyworld__: still investigating, but I think I need to reopen #1345832
<_mup_> Bug #1345832: Juju writes to mongo without an actual change occurring <cloud-installer> <landscape> <juju-core:Fix Committed by axwalk> <juju-core 1.20:Fix Committed by axwalk> <https://launchpad.net/bugs/1345832>
<axw> don't think it's actually fixed on account of the assertion going into the oplog
<axw> may be best to take it off the 1.20.2 milestone if that's the case
<axw> -__________-
<axw> wallyworld__: your connection is awful
<axw> axw> wallyworld__: still investigating, but I think I need to reopen #1345832
<_mup_> Bug #1345832: Juju writes to mongo without an actual change occurring <cloud-installer> <landscape> <juju-core:Fix Committed by axwalk> <juju-core 1.20:Fix Committed by axwalk> <https://launchpad.net/bugs/1345832>
<axw> axw> don't think it's actually fixed on account of the assertion going into the oplog
<axw> <axw> may be best to take it off the 1.20.2 milestone if that's the case
<wallyworld__> axw: it's only freenode
<wallyworld__> nfi why
<wallyworld__> axw: how often are the api hosts updated?
<axw> umm
<axw> not sure. will need to check
<axw> but the oplog will be changing at the same rate as before
<axw> just with a different type of entry
<tasdomas> thumper, davecheney https://github.com/juju/juju/pull/369
<wallyworld> davecheney: cmars: master now updated with session copying branch
<cmars> wallyworld, thanks
<davecheney> wallyworld: ta
<axw> wallyworld: I'm reopening the bug about no-op writes, but thankfully I think it's easy to fix
<axw> I had thought there were two sources of calls to SetAPIHostPorts, but there's only one
<wallyworld> axw: np, sounds good
<wallyworld> axw: i got the copy sessions stuff backported, had to fix some additional tests. i'm doing a live test before landing. if you wanted to eyeball it, that would be good. no real hurry https://github.com/juju/juju/pull/385
<axw> wallyworld: okey dokey. so what was causing the errors last night?
<wallyworld> axw: that necessitated the revert?
<axw> wallyworld: yeah
<wallyworld> i think it was the login deadlock
<axw> what about the authorization failures tho?
<wallyworld> let me re-read the email
<wallyworld> axw: i think i saw a fair few tests fail once the login deadlock caused failure
<axw> mk
<wallyworld> once i fixed that, i wasn't able to reproduce any more failures
<davecheney> thumper: fwereade https://github.com/juju/juju/pull/378
<davecheney> fwereade: may I draw your attention to https://github.com/juju/juju/pull/378, which I have updated
<davecheney> mattyw: ping
<mattyw> davecheney, pong?
<mattyw> davecheney, at the moment all I've added is a NewUUID to the factory which just wraps utils.NewUUID
<davecheney> mattyw: that's the ticket
<davecheney> more abstraction, that's what we need
<axw> thumper: https://github.com/juju/juju/blob/master/environs/configstore/disk.go#L235
<thumper> hi axw
<axw> hello
<thumper> bugger
<thumper> forgot to remove that
<thumper> sorry
 * thumper fixes
<axw> thanks. I'd do it, but in the middle of a branch
<thumper> axw: also, just pulled trunk to get the new hotness
<thumper> but the tests seemed to be hanging
<axw> :(
<thumper> last output is:
<thumper> ok  	github.com/juju/juju/cmd/envcmd	0.145s
<axw> will see if I can repro in a moment
<thumper> proceeded the second time
<thumper> axw: I normally run all the tests when I freshly pull master
<thumper> to make sure it is all good
<thumper> and it makes me feel better
<thumper> I think it is up to the replica set tests now
<waigani> axw: I'm trying to live test manually adding a machine with a specified ssh key and I get the following error:
<waigani> WARNING failed to parse bootstrap-config: empty image-metadata-url in environment configuration
<waigani> ERROR empty image-metadata-url in environment configuration
<axw> waigani: do you have an empty image-metadata-url in your environment configuration? ;)
<waigani> lol
<waigani> axw: fair call
<axw> if it's blank in environments.yaml, that's considered an error
<axw> it needs to be either not there at all, or not blank
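The distinction axw draws — key absent is fine, key present but blank is an error — hinges on Go's two-value map lookup. A sketch of that rule; `validate` is a hypothetical helper, not juju's actual config validation code:

```go
package main

import "fmt"

// validate distinguishes an absent key from a present-but-blank one using
// Go's two-value map lookup. Hypothetical sketch, not juju's config code.
func validate(attrs map[string]string) error {
	if v, ok := attrs["image-metadata-url"]; ok && v == "" {
		return fmt.Errorf("empty image-metadata-url in environment configuration")
	}
	return nil
}

func main() {
	fmt.Println(validate(map[string]string{}))                                          // <nil>: not there is fine
	fmt.Println(validate(map[string]string{"image-metadata-url": "http://md.example"})) // <nil>: non-blank is fine
	fmt.Println(validate(map[string]string{"image-metadata-url": ""}))                  // the error waigani saw
}
```

This also explains waigani's confusion below: the key wasn't in his environments.yaml, so the blank value had to be coming from the stored environment config.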
<waigani> right, thanks
<thumper> axw: https://github.com/juju/juju/pull/386
<waigani> axw: its not there
<axw> hmm
<axw> waigani davecheney: I suspect https://github.com/juju/juju/pull/362/files#diff-2
<axw> config previously allowed "", now does not
<axw> so... somehow your environment has a blank value I guess
<waigani> axw: ugh, nice catch
<davecheney> wallyworld: can you help me revert this PR ?
<davecheney> https://github.com/juju/juju/pull/362
<davecheney> turns out we can't even remove these fields
<wallyworld> davecheney: sure, let me look
<wallyworld> davecheney: reverted and merged
<davecheney> ta
<davecheney> wallyworld: thanks
<wallyworld> np
<thumper> anyone else seeing the state/watcher tests fail?
<waigani> axw: worked out what happened: I bootstrapped, ran make install (new version), then ran addmachine. I get same error when I try to destroy my env (I'll monkey my config to make it work). davecheney's revert should fix this.
<axw> okey dokey
<wallyworld> thumper: i occasionally get failures with those. what are you seeing?
<thumper> watcher_test.go:631:
<thumper>     assertChange(c, s.ch, watcher.Change{"test", "a", revno1})
<thumper> watcher_test.go:118:
<thumper>     c.Fatalf("watch reported nothing, want %v", want)
<thumper> ... Error: watch reported nothing, want {test a 2}
<thumper> also, my replicaset test timed out at 10 minutes
<wallyworld> thumper: i regularly get replica set timeouts. i sometimes see the watcher failures, been meaning to search for a bug. definitely intermittent for me. another race :-(
<davecheney> wallyworld: it's not a race
<davecheney> it's that PR that I landed
<wallyworld> the wather failures?
<wallyworld> watcher
<davecheney> the problem is we're starting a mongodb on [::1]:something
<davecheney> and then dialing it on 127.0.0.1:something
<davecheney> they are effectively different networks
<wallyworld> ok. i've seen them pass sometimes and not others
<davecheney> thumper: confirmed, this has just started happening
<davecheney> ----------------------------------------------------------------------
<davecheney> FAIL: watcher_test.go:625: com_juju_juju_state_watcher_test.TestWatchBeforeRemoveKnown.pN55_github.com_juju_juju_state_watcher_test.SlowPeriodSuite
<davecheney> insert("test", "a") => revno 2
<davecheney> remove("test", "a") => revno -1
<davecheney> watcher_test.go:118: c.Fatalf("watch reported nothing, want %v", want)
<davecheney> ... Error: watch reported nothing, want {test a 2}
<davecheney> [LOG] 0:10.025 INFO juju.testing reset successfully reset admin password
<davecheney> OOPS: 20 passed, 4 FAILED
<davecheney> --- FAIL: TestPackage (34.06 seconds)
<davecheney> FAIL
<davecheney> FAIL	github.com/juju/juju/state/watcher	34.559s
<wallyworld> davecheney: that's what's been intermittent for me, passes mostly but sometimes fails
<davecheney> wallyworld: https://bugs.launchpad.net/juju-core/+bug/1348032
<_mup_> Bug #1348032: state/watcher: FastPeriodSuite.TestWatchAfterKnown failure <juju-core:New> <https://launchpad.net/bugs/1348032>
<davecheney> it's started to fail all over the shop today
<wallyworld> hmmmm. mostly passes here for me, with the odd failure
<wallyworld> that's why i thought it to be a race
<davecheney> wallyworld: it's not a race
<davecheney> it's load related
<davecheney> the test passes on an idle machine
<davecheney> when I load up my machine
<davecheney> the test fails
<wallyworld> so the load could trigger the race, by changing timing of thread interactions
<wallyworld> race conditions typically manifest under different load conditions
<davecheney> i'm running it under -race
<davecheney> it's not a race condition by my definition
<thumper> davecheney: https://github.com/juju/juju/pull/390
<thumper> :-(
<thumper> axw, wallyworld: above PR fixes a critical issue in a previous branch of mine
<thumper> typed nil FTL
<wallyworld> looking
<thumper> axw: ta
<axw> np
<thumper> talking with davecheney about writing a jc.IsNil that may assert the type of the interface if it is nil
<wallyworld> thumper: i'd argue the original code was wrong not to check err value anyway
<axw> +1
<wallyworld> jeez, bot is busy today
<thumper> wallyworld: it did check the err value
<thumper> wallyworld: what's the problem there?
<thumper> it wasn't the error that was the problem
<wallyworld> - if info != nil {
<wallyworld> - info.environmentDir = d.dir
<thumper> it was the expectation of nil, not-nil
<thumper> wallyworld: that's perfectly fine there
<wallyworld> should use the return value being "nil" as the determining factor
<wallyworld> shouldn't
<wallyworld> it should have done if err != nil
<thumper> still not the solution there
<wallyworld> or if err == nil
<thumper> that still wouldn't fix the problem
<axw> thumper: wallyworld is arguing that all error cases should exit early, which I agree with
<thumper> yeah...
<wallyworld> if err != nil, then you know info is bad
<wallyworld> then it doesn't matter if it is nil or typed nil
<thumper> wallyworld: well, I have a test that asserts that info is nil if error is there
<wallyworld> cause you won't use it
<thumper> but we do use it
<thumper> and we have a test that asserts it is nil
<wallyworld> you use it even if err != nil?
<wallyworld> that's bad
<thumper> right
<thumper> we assume that info is nil if error is set
<davecheney> thumper: https://bugs.launchpad.net/gocheck/+bug/1248040
<_mup_> Bug #1248040: test failed on 1.2rc3 <gocheck:New> <https://launchpad.net/bugs/1248040>
<thumper> I'm not saying it is right
<wallyworld> cause if err != nil, then you mustn't assume the return val is usable
<thumper> I'm saying that is what we do
 * thumper makes a note to fix the call site too later
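The "typed nil" that bit thumper is a classic Go gotcha: storing a nil `*T` in an interface produces an interface value that does not compare equal to nil, because the interface still carries the concrete type. A minimal reproduction with hypothetical names (not the actual configstore code):

```go
package main

import (
	"errors"
	"fmt"
)

// environInfo is a hypothetical stand-in for the configstore type involved.
type environInfo struct{}

// readInfo returns its result as an interface. On the error path it returns a
// nil *environInfo, which becomes a "typed nil" once wrapped in the interface.
func readInfo(fail bool) (interface{}, error) {
	var info *environInfo // nil pointer
	if fail {
		return info, errors.New("not found") // typed nil sneaks out here
	}
	return &environInfo{}, nil
}

func main() {
	info, err := readInfo(true)
	fmt.Println(err != nil)  // true
	fmt.Println(info == nil) // false: the interface holds (*environInfo)(nil)

	// The safe pattern wallyworld argues for: branch on err first and return
	// early, so the possibly-typed-nil result is never inspected.
	if err != nil {
		fmt.Println("early return on error")
		return
	}
}
```

This is also why a `jc.IsNil` checker that inspects the dynamic value via reflection, as thumper and davecheney discuss above, would catch the case that `== nil` misses.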
<tasdomas> davecheney, https://github.com/juju/juju/pull/393
<davecheney> log_test.go:159: c.Check(tw.Log, jc.LogMatches, []string{"foo", "bar"})
<davecheney> ... obtained func() []loggo.TestLogValues = (func() []loggo.TestLogValues)(0x455c10)
<davecheney> ... expected []string = []string{"foo", "bar"}
<davecheney> ... Obtained value must be of type []loggo.TestLogValues or SimpleMessage
<thumper> davecheney: https://github.com/juju/testing/pull/23
<wallyworld> axw: got a sec for a hangout in the standup hangout?
<axw> sure
<wallyworld> mgz: meeting?
<perrito666> axw: good morning, do you think I can get an answer to my email before your EOD?
<perrito666> good morning all
<axw> perrito666: good morning. the one I already replied to, or the one that hasn't come yet? :)
<perrito666> lool
 * perrito666 kicks his inbox
<perrito666> sorry it got buried in githubbies
<axw> no worries :)
<perrito666> aghh, then who is running apt-get
<axw> perrito666: can you see what's being installed via apt-get?
<perrito666> axw: well to be honest I've never been able to reproduce that yet, I'll give it a try
<axw> jam: are you abreast of all the IPv6 changes that dimitern has been doing? if so, would you cast your eye over this please? https://github.com/juju/juju/pull/394
<axw> if not I'll land anyway and send dimitern an email
<natefinch> jam, mgz: relatively easy code review? https://github.com/juju/juju/pull/375
<TheMue> natefinch: *click*
<natefinch> TheMue: thanks... that's what I get for not scrolling all the way down in the user list
<TheMue> natefinch: hehe
<TheMue> natefinch: lumberjack? ;)
<natefinch> TheMue: it's a log rolling package
<TheMue> natefinch: funny name
<natefinch> TheMue: borning names are so boring
<natefinch> s/borning/boring/
<TheMue> natefinch: Iâve been grown up with TLAs
<katco> good morning all
<TheMue> natefinch: itâs a bad PR
<TheMue> katco: morning
<katco> favorite TLA joke: what does idk mean?
<TheMue> natefinch: thereâs nothing to complain :D
<TheMue> katco: iiirks, dunno?
<TheMue> katco: ah
<katco> :)
<TheMue> katco: I donât know
<katco> the response is "well then i'll just ask someone who does"
<natefinch> Yeah, I've decided I like giving real names to my projects rather than just purely descriptive ones... I could have named lumberjack go-logrotate or something boring like that... but that's not memorable.  Lumberjack is fun, memorable, and fairly unique.
<katco> natefinch: you should have named it "snore"
<katco> as in, "sawing logs"
<TheMue> natefinch: I dislike all those go-prefixes
<natefinch> TheMue: me too.... but it's tricky because pretty much every decent name has already been taken
<TheMue> natefinch: not in your namespace, eg. at github
<natefinch> TheMue: I've decided that I don't care if I conflict with an existing project as long as it's not hugely well-known  and/or in the exact same functional space
 * TheMue has to admit he's using prefixes for his repositories, but not for the packages/modules/libraries in there
<natefinch> TheMue: http://www.pkgname.com/
<TheMue> so itâs clear that all the stuff in there is for one language or is an application which possibly uses multiple languages
<TheMue> natefinch: oh no, people build services to check this
<natefinch> haha, it basically just checks if you have "Go" in the name, and if so, says it's a crappy name
<natefinch> of course, it's not smart... something like "gorilla" it says is a bad name because it mentions go
<TheMue> maybe it should be no prefix
<TheMue> ihavetogo
<TheMue> letsgo
<TheMue> canwego
<TheMue> igotoyou
<natefinch> nah, because people do foo-go  (oh, hyphens are another no-no at pkgname, which I agree with)
<TheMue> those go* packages remind me of all those ISomething interfaces
<natefinch> yep
<TheMue> or upn
<TheMue> hmm, no, wrong name, don't remember what it is called
<TheMue> this paiPointerToArrayOfIntegers
<natefinch> TheMue: we were having a similar conversation yesterday...   Lumberjack has a MaxSize field, which is supposed to be in megabytes.... should it be MaxSizeInMB, or should the units be in the type  (type Megabytes int) or just in a comment?
<natefinch> TheMue: curious to hear your opinion... I went with a comment, but I sorta feel like it should have been its own type.  Not a fan of the change of the name
<TheMue> natefinch: I like the approach with its own type. It's one of the strengths of Go that it makes this so simple, also for arguments or constants
<TheMue> natefinch: think of the time package
<natefinch> Yeah... I regret not using it on lumberjack
<natefinch> oh well.... can't change it now, I'd break the API.
<natefinch> It's not so bad for lumberjack, because it's something you only ever set up once in your project, so you only have to remember it's megabytes that one time.
<TheMue> +1
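The units-in-the-type idea natefinch and TheMue are discussing can be sketched like this. It is only an illustration of the alternative design: lumberjack's real `MaxSize` field is a plain int documented as megabytes, and `Logger` here is a hypothetical stand-in.

```go
package main

import "fmt"

// Megabytes puts the unit in the type, as discussed above, so the unit is
// unambiguous at every call site instead of living only in a comment.
type Megabytes int

// Bytes converts to a raw byte count for comparison against file sizes.
func (m Megabytes) Bytes() int64 { return int64(m) * 1024 * 1024 }

// Logger is a hypothetical stand-in for a lumberjack-style rolling logger.
type Logger struct {
	MaxSize Megabytes // not MaxSizeInMB: the type already says so
}

func main() {
	l := Logger{MaxSize: 100}
	fmt.Println(l.MaxSize.Bytes()) // 104857600
}
```

This is the same trick the standard `time` package uses with `time.Duration`, which is what TheMue points to above; the cost natefinch mentions is that retrofitting it onto a released package breaks the API.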
<natefinch> TheMue: what do you think about the log size and number of backups for the log rotation?  It seemed like a reasonable config, but I don't know what people really expect.  100M, to me, is nice, because if you get something spamming the logs, you have half a chance at being able to go back far enough to find the root cause before all the spam.... and yet, 100MB in this day and age shouldn't be a hardship even on relatively small cloud disks.
<natefinch> jam: curious to hear your thoughts on the max size of the logs, too.  Note, this is for the machine-n and unit-n logs, not the all machines log, which can be much much bigger (due to being, you know, everything)
<TheMue> natefinch: size is ok to me, only maybe more than just one generation, but I'm not sure
<TheMue> natefinch: standard is often 5, but for apps with less log changes, so the files are pretty small
<axw> natefinch: is there a way to get lumberjack to always backup the first log file? it may have useful information about the agent's initialisation
<axw> e.g. the agent version/compiler
<perrito666> aghh I hate race conditions
<natefinch> heh
<natefinch> sorry, was out for a bit, electrician arrived
<natefinch> axw: there's no way to keep around the first file for forever... we could do that ourselves easily enough, though
<natefinch> axw: just copy the log file to unit-0-init.log after initialization is done
<natefinch> axw: lumberjack won't touch files it doesn't generate itself
<natefinch> (unless you happen to make a file that exactly matches origFilename-lumberjackTimestamp.origExt)
<jamespage> I have a question with regards to relation state - when does data that a remote service has set on a relation become visible to related services? is it at the point of execution of the -changed hook on the remote side or as soon as the data is set on the local side?
<perrito666> hey natefinch ericsnow can I be a couple of mins late? I am uploading some tools to a vm and my upload bandwidth is completely used
<ericsnow> perrito666: I'm good :)
<perrito666> no you are not :p
<ericsnow> haha
<mgz> ericsnow: poke about reviewboard plugin, have you got it in a state I can pick up and carry on with?
<ericsnow> mgz: I take it you didn't get my email 10 days ago :)
<ericsnow> mgz: yeah, it's ready (though I'm sure we'll find something that needs tweaking)
<mgz> ericsnow: thanks
<ericsnow> mgz: np
<mgz> (I have also been stuck in other things since the sprint mostly...)
<ericsnow> mgz: I figured :)
<ericsnow> voidspace: how's the trip?
<perrito666> ericsnow: natefinch I am all set, going in
<voidspace> ericsnow: hey, cool - just sitting in lightning talks now
<voidspace> ericsnow: you all coping without me?
<voidspace> current talk title "I hate testing"...
<ericsnow> voidspace: Gary Bernhardt?
<voidspace> ericsnow: hah, no - some European guy I don't know
<voidspace> speaking about code mutation for testing - checking that when you change your code randomly some test fails to tell you
<ericsnow> natefinch: you coming?
<wwitzel3> ericsnow, perrito666, natefinch: we are working on the inbound to juju mapping document for tosca, can't join standup right now.
<natefinch> wwitzel3: no problem.  Don't worry about the standup this week
<wwitzel3> natefinch: I'll send another update to the team EOD tomorrow that will cover the second half of the week.
<perrito666> wwitzel3: send pics :p
<ericsnow> wwitzel3: hey, don't sound so excited!
<mbruzek> Hello natefinch I just got done with the Juju Cross team meeting.  The Landscape team has a bug blocking their release that someone on core should look at. https://bugs.launchpad.net/juju-core/+bug/1318366
<_mup_> Bug #1318366: jujud on state server panic misses transaction in queue <cloud-installer> <landscape> <orange-box> <panic> <performance> <sm15k> <juju-core:Triaged> <juju-core (Ubuntu):Triaged> <https://launchpad.net/bugs/1318366>
<perrito666> unrelated question, do any of you have a thinkpad x1 carbon?
<mbruzek> natefinch,  fwereade, since Alexis is out who can I speak with for core issues?
<natefinch> mbruzek: you can talk to me... sorry I missed the cross team meeting
<mbruzek> natefinch, That is OK you were dealing with electricians
<perrito666> sinzui: ping
<sinzui> hi perrito666
<mbruzek> natefinch, This seems pretty important to the Landscape team and no one on juju except for John has made a comment.
<perrito666> sinzui: hi, sorry to bother you, has the repo for the CI tests changed?
<perrito666> I am bzr pull-ing on my copy and it is complaining
<natefinch> mbruzek: sorry... most of the Juju Core leads are indisposed this week.  William and Tim are on a sprint, and Alexis is on vacation.
<sinzui> perrito666, lp:juju-ci-tools and lp:juju-release-tools
 * perrito666 scratches his head
<perrito666> thank you
<natefinch> mbruzek: I'll look into that error today.  Seems like mongo is having trouble for some reason
<sinzui> perrito666, note that the restore tests are being run in HP this week because of the ec2 provisioning issue. So the tests are more likely to pass. They certainly always pass for 1.20.3
<mbruzek> natefinch, OK great, thank you for taking a look.  I believe this is blocking the Landscape team so if you could leave a comment that would be appreciated.
<perrito666> sinzui: I am setting up an ec2 machine to run the tests from there against ec2, so I can trigger the error
<sinzui> +1
<natefinch> mbruzek: done
<natefinch> heh.... This landscape bug has a panic traceback that's over a million lines long.  Looks like about 18000 goroutines
<mgz> amusing
<natefinch> this is why exceptions are somewhat less useful in multi-threaded environments :)
<mgz> not really, but dumping every stack when we're also leaking routines certainly does lead to silliness
<natefinch> didn't realize we're leaking goroutines. that seems... serious
<natefinch> dpb1: I hear you're the one to talk to about log files for this landscape bug... do you have the full log file? The one James Page posted is truncated, and missing a lot of info.
<natefinch> https://bugs.launchpad.net/juju-core/+bug/1318366
<_mup_> Bug #1318366: jujud on state server panic misses transaction in queue <cloud-installer> <landscape> <orange-box> <panic> <performance> <sm15k> <juju-core:Triaged> <juju-core (Ubuntu):Triaged> <https://launchpad.net/bugs/1318366>
<dpb1> natefinch: I'm looking into it. sec
<sparkiegeek> I've been pointed here to give info about bug 1318366?
<_mup_> Bug #1318366: jujud on state server panic misses transaction in queue <cloud-installer> <landscape> <orange-box> <panic> <performance> <sm15k> <juju-core:Triaged> <juju-core (Ubuntu):Triaged> <https://launchpad.net/bugs/1318366>
<sparkiegeek> sorry, wasn't connected, don't have the backscroll
<sparkiegeek> what's needed?
<sparkiegeek> natefinch: https://chinstrap.canonical.com/~acollard/maxwell-logs.tgz
<bodie_> anyone familiar with the cmd.out.Write utility for marshaling JSON?
<natefinch> sparkiegeek: thanks
<sparkiegeek> natefinch: as stated on the bug, it might contain sensitive data, hence the canonical.com link :)
<natefinch> sparkiegeek: understood
<natefinch> niemeyer: we're getting "rescanned document misses transaction in queue" from mgo during a big deployment.  Relevant stack trace: http://pastebin.ubuntu.com/7848252/
<natefinch> niemeyer: I'm not really sure what the panic means
<natefinch> niemeyer: relevant bug: https://launchpad.net/bugs/1318366
<_mup_> Bug #1318366: jujud on state server panic misses transaction in queue <cloud-installer> <landscape> <orange-box> <panic> <performance> <sm15k> <juju-core:Triaged> <juju-core (Ubuntu):Triaged> <https://launchpad.net/bugs/1318366>
<niemeyer> natefinch: I'm getting in a meeting right now, but we catch up afterwards
<natefinch> niemeyer: ok thanks
<sparkiegeek> natefinch: did you read the duplicate bug? https://bugs.launchpad.net/juju-core/+bug/1318044/comments/1 especially
<_mup_> Bug #1318044: panic in mgo transaction lib <cloud-installer> <landscape> <mongodb> <race-condition> <juju-core:Triaged> <https://launchpad.net/bugs/1318044>
<natefinch> sparkiegeek: ahh, no, thanks
<perrito666> natefinch: http://bashrcgenerator.com/ <-- this might come in handi
<perrito666> handy
<natefinch> cute
<natefinch> what's the trick to rebase a change on top of stuff merged from upstream?
<natefinch> mgz, gsamfira ^^
<gsamfira> natefinch: if your branch is based on "master" for example, you need to commit your changes, do a git checkout master, git pull, git checkout <my branch>, git rebase master
<gsamfira> your changes will be replayed on top of current master
<gsamfira> if any conflicts come up, you will be guided through resolving them
<gsamfira> make a backup of your local repo just in case ;)
<katco> https://github.com/juju/juju/pull/398
<mgz> pretty much, it's not really as dangerous as it seems either, actually destroying underlying revisions is hard
<gsamfira> mgz: depends. If they are continuous, you could just do
<gsamfira> git rebase --interactive HEAD~2
<gsamfira> and rebase the last 2 commits
<gsamfira> put a squash instead of a "pick"
<gsamfira> and they will be squashed into one
<niemeyer> natefinch: I'm here
<niemeyer> natefinch, gsamfira: You don't _have_ to rebase though.. merging works fine for that too
<gsamfira> niemeyer: true, rebasing just gives you a continuous history. If you start on a new feature, your commit messages will be at the top, instead of spread out throughout the commit log
<gsamfira> also squashing is helpful if you don't want to push 50+ commit messages for one big feature
<gsamfira> stuff like: changed based on feedback
<gsamfira> or "oops...forgot to close function"
<gsamfira> :)
<perrito666> sinzui: getting "upgrade in progress - Juju functionality is limited" this is a behaviour you see too?
<niemeyer> natefinch: Can I please have the full log for that bug?
<natefinch> niemeyer: https://chinstrap.canonical.com/~acollard/maxwell-logs.tgz
<natefinch> brb, I have to drop off the babysitter.  back in ~10 mins
<sinzui> perrito666, I have never seen that with the restore tests.
<perrito666> sinzui: btw, I see what you meant about aws being unreliable these days
<niemeyer> natefinch, jamespage: Where are the db logs there?
<niemeyer> natefinch, jamespage: Ah, they seem to be unfiltered in syslog
<natefinch> back
<sparkiegeek> niemeyer: heh, those logs are from me. Separate case of the bug
<niemeyer> sparkiegeek: Hm?
<sparkiegeek> niemeyer: I think there's a lingering "rsyslog not configured to put mongodb in a separate file" bug in there somewhere too
<niemeyer> sparkiegeek: Which logs are from you?
<sparkiegeek> niemeyer: maxwell-logs.tgz is mine :)
<natefinch> sparkiegeek: ahh,  I didn't realize these logs were separate from the ones in the bug
<niemeyer> sparkiegeek: Ah, you are the one facing the issue then?
<niemeyer> sparkiegeek: Or is that unrelated?
<sparkiegeek> niemeyer: initial bug was filed by jamespage (twice, I closed one as dupe). He hit it on a SeaMicro 15k which was put down to "lots of units, complex relations, something blew up"
<sparkiegeek> niemeyer: I hit it a couple of days ago on an OrangeBox, grabbed all the logs I could because I was aware of previous issue not being solved due to lack of logs
<sparkiegeek> niemeyer: make sense?
<natefinch> sparkiegeek: thank you for grabbing all that info.  It makes it so much easier to be sure we're not missing something.  Sometimes it can be like pulling teeth to get a full set of logs.
<niemeyer> sparkiegeek: Not yet..
<niemeyer> sparkiegeek: The question is, what's the actual bug these logs are related to?
<sparkiegeek> niemeyer: bug 1318366
<_mup_> Bug #1318366: jujud on state server panic misses transaction in queue <cloud-installer> <landscape> <orange-box> <panic> <performance> <sm15k> <juju-core:Triaged> <juju-core (Ubuntu):Triaged> <https://launchpad.net/bugs/1318366>
<bodie_> question for anyone who has input on the topic
<bodie_> config-get returns a map of config values for a service or unit
<niemeyer> sparkiegeek: Okay, thanks
<bodie_> you can also index it by key, as in config-get maxRam
<sparkiegeek> niemeyer: be sure to check the dupe bug too
<niemeyer> sparkiegeek: I don't think I understand what you meant originally by "separate case of the bug" then
<niemeyer> sparkiegeek: Which one?
<bodie_> (I'll let you guys finish :) )
<sparkiegeek> niemeyer: https://bugs.launchpad.net/juju-core/+bug/1318044
<_mup_> Bug #1318044: panic in mgo transaction lib <cloud-installer> <landscape> <mongodb> <race-condition> <juju-core:Triaged> <https://launchpad.net/bugs/1318044>
<niemeyer> sparkiegeek: Thanks, I've exchanged ideas on this one a couple of months ago already
<sparkiegeek> niemeyer: perhaps "separate instance of the bug" is clearer? I mean, I hit it separately from James and the logs are from that, not from James instance
<niemeyer> sparkiegeek: Do you still have that system running, or can you reproduce the problem easily?
<sparkiegeek> niemeyer: no, and no
<niemeyer> sparkiegeek: Ah, got it
<niemeyer> sparkiegeek, jamespage: Okay, so I'll restate my original comment from #1318044: please grab a dump of the database when you see it again
<_mup_> Bug #1318044: panic in mgo transaction lib <cloud-installer> <landscape> <mongodb> <race-condition> <juju-core:Triaged> <https://launchpad.net/bugs/1318044>
<niemeyer> sparkiegeek: The logs were the other pending detail, and you nailed it. Thanks
<sparkiegeek> niemeyer: just for future reference, how do I grab a dump of the database?
<sparkiegeek> niemeyer: the thing that jumped out of me from those logs was how often mongo seemed to be getting killed
<niemeyer> sparkiegeek: There's a command, mongodump
<sparkiegeek> niemeyer: ok :) thanks
<niemeyer> sparkiegeek: Just point it to the localhost juju-db address and it'll spit everything out
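For future reference, a hypothetical invocation might look like the following; the juju-db port, any required credentials, and the output directory all vary per deployment, so treat these values as placeholders rather than juju's documented procedure:

```shell
# dump every database served by the local juju-db mongod into ./juju-db-dump
mongodump --host 127.0.0.1 --port 37017 --out ./juju-db-dump
```

On authenticated state servers you would also need `--username`/`--password` taken from the machine agent's configuration.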
<niemeyer> sparkiegeek: It's awkward indeed.. and it's a SIGTERM, so it's being explicitly terminated
<niemeyer> Why is it being so?
<sparkiegeek> niemeyer: right, that led me to upstart logs
<sparkiegeek> but I hit a dead end at that point
<sparkiegeek> is mongodb upstart controlled?
<natefinch> sparkiegeek: for juju, yes
<niemeyer> sparkiegeek: upstart/juju-db.log
<sparkiegeek> natefinch: juju-db.conf right?
<niemeyer> Nothing interesting
<natefinch> yeah
<sparkiegeek> yeah, but the noise there implies multiple restarts? That was my take away
<niemeyer> Just to be clear, it shouldn't really matter for this one instance.. it might be shut down thousands of times and it shouldn't matter
<niemeyer> But it's a problem worth understanding on itself
<natefinch> yeah.... wish there were timestamps in there
<sparkiegeek> natefinch: +1
<niemeyer> natefinch: Where?
<natefinch> niemeyer: in the juju-db upstart log
<niemeyer> natefinch: Why? It's all boring messages..
<natefinch> niemeyer: because then you can tell if they were 1 second apart or 1 day apart.  Can make a big difference
<niemeyer> natefinch: I mean, not that I disagree.. it's very strange that these messages are not timestamped.. but that's yet another bug, this time on upstart :)
<natefinch> heh yep
<niemeyer> natefinch: We can tell how far apart they are.. just look at syslog
<niemeyer> They're under 30 seconds apart
<natefinch> right, I was going there next.  Just was surprised that any log written anywhere was done without a timestamp
<perrito666> aghh found it apt-get --option Dpkg::Options::=--force-confold --assume-yes install vlan
<sparkiegeek> the amusing thing is that the logs for mongo appear to have two timestamps :)
<sparkiegeek> take from column A, add to column B ;)
<natefinch> oh yeah... huh.  didn't notice
<natefinch>  very weird that it's getting sigterm'd like 21 seconds after it starts up
<sparkiegeek> natefinch: niemeyer: I'm about to EOD. I think I've given you guys all the help I can?
<natefinch> sparkiegeek: the logs are awesome, thanks again.  Really makes all the difference.
<sparkiegeek> natefinch: np :)
 * perrito666 senses everybody forgot core team meeting
<natefinch> mgz, TheMue, jam: team meeting?
<bodie_> https://github.com/juju/juju/pull/399 should be ready to merge
<perrito666> is anyone here familiar with networker?
<katco> just found this: https://chrome.google.com/webstore/detail/mnkacicafjlllhcedhhphhpapmdgjfbb
<katco> enables tweaking git diffs; side-by-side, ignore whitespace
<katco> here's github repository: https://github.com/KuiKui/Octosplit
<katco> firefox extension as well
<natefinch> katco: yeah, it sort of works
<katco> looks great with my sample size of 1 :)
<natefinch> it's poor man's side by side, and it mucks up if you expand stuff and then do side by side (or something like that)
<katco> ah. yeah i just went for the simple case.
<katco> does anyone know what the const aptAddRepositoryJujuStable is used for in juju/juju/provider/local/prereqs.go? i can't find any reference of it anywhere.
<natefinch> uh weird.  must be old
<natefinch> I don't see any use of it either
<katco> can i delete it?
<katco> broken windows principle
<natefinch> right
<katco> i'm dealing with apt-getty schtuff atm
<natefinch> also, it someone is using it, it won't compile, so we'll figure it out
<katco> so i figure it fits
<natefinch> s/it/if/
<natefinch> delete it, see what happens
<katco> k out it goes!
<katco> ty nate
<natefinch> katco: looks like the mongo code does the same add-apt-repository, but not using that constant... probably something missed in a refactor
<katco> ah
<natefinch> friggin' bzr
<natefinch> bzr: ERROR: Not a branch: "http://bazaar.launchpad.net/~maas-maintainers/gomaasapi/trunk/".
<perrito666> perhaps it is not a branch ?
 * perrito666 hides
<natefinch> it definitely is a branch :)
<perrito666> Ill take bzr word on this one, sorry
<perrito666> far from perfecting my go skills I am becoming even better at bash...
<perrito666> sinzui: quick question
<perrito666> do you think that restore should rather timeout trying to restore or rather fail after a few attempts knowing that if it kept trying it could have done it before timeout?
<natefinch> deleting my local launchpad.net directory and re-doing a go get fixes the problem
<sinzui> perrito666, I don't understand. My experience with juju in CI is that juju passes quickly and fails slowly. Increasing timeouts/waiting doesn't improve juju's chance of success
<perrito666> sinzui: ok you indirectly answered me
<sinzui> oh?
<perrito666> sinzui: you make me write, this is the deal
 * perrito666 writes
<perrito666> there is a worker that tries to add vlan support, for that it needs to install a package
<perrito666> "sometimesÂ®" its apt-get is not finished before our restore starts installing mongod-clients
<perrito666> that causes restore to fail
<sinzui> ah
<perrito666> which sucks bc its a dumb reason to fail
<perrito666> so, I can either retry apt-get until it works or timeout
<perrito666> oooor
<perrito666> I can retry apt-get a few times and then give up
<sinzui> perrito666, thank you for the explanation
<perrito666> I guess the first option is the right one
<sinzui> perrito666, All apt ops in juju need to retry since it shares the system. I suppose we need to know how long is reasonable?
<sinzui> perrito666, we have a bootstrap timeout of 10 minutes by default? is it reasonable to use a configured timeout to "rebootstrap" the state-server for this case
<sinzui> bootstrap-timeout?
<perrito666> sinzui: mm, at that point I am not sure how much time passed, I think it is reasonable to stay there as much as required
 * natefinch gets ready to run go test ./... on his windows VM
<sinzui> perrito666, +1
<katco> natefinch: good luck lol
<natefinch> heh... it causes like 100 "windows firewall has blocked..." windows to pop up
<perrito666> natefinch: lol
<perrito666> kill the fw if you want to live
<perrito666> sinzui: the fix should go to trunk and then backported?
<perrito666> I am not sure what is the process there
<sinzui> perrito666, +1 for trunk. I believe we use git patch to backport to stable. But we only need to do that if we believe stable is affected
<perrito666> ok anyway to be sure Ill have to run this a few times
<sinzui> wallyworld, CI blessed bc568a6d as 1.20.2. No human intervention. It took 2.5 hours. I can start the release now
<davecheney> \o/
<wallyworld> sinzui: great :-) i'd like to strongly suggest to stakeholders that we do internal testing prior to formal release
<sinzui> wallyworld, how would they do that. compile their own and upload-tools?
<wallyworld> can't they take the release you make and run that?
<wallyworld> before we publish
<sinzui> wallyworld, I can copy all the binaries that CI made
<wallyworld> several internal changes have been made to how mongo connections are handled to combat i/o timeouts and we need proper load testing on maas etc
<wallyworld> we need testing in the same scenarios that showed issues with 1.20.1
<sinzui> wallyworld, I could build into a PPA, and if they reject it, we skip to 1.20.3, because packaging never lets you reuse a version
<wallyworld> that sounds ok to me
<wallyworld> i'd rather do the extra due diligence given the high profile of this release
<wallyworld> with mark s involved also etc
<sinzui> wallyworld, Everyone using either CI or unreleased packages need to use --upload-tools or make their own streams
<sinzui> wallyworld, CI is already building its next test version, so its streams cannot be used
<wallyworld> i guess we could ask them to use upload-tools. but we are really trying to be able to kill that "feature"
<automatemecolema> So I'm trying to deploy a bundle file on juju canvas and I get an error saying no bundle name provided thoughts?
<wallyworld> i'd like us to have a pre-release mechanism for internal testing
<wallyworld> by a wider audience of stakeholders
<sinzui> wallyworld, If we provide alternate streams in a few cpcs, the users need proxy or direct access
<sinzui> wallyworld, I understand, but dev-built juju ensures it panics when it cannot find itself in the streams...
<sinzui> wallyworld, CI's packages are always available for people to test, but they need to use --upload-tools
<wallyworld> so we need to have a mechanism to publish a "daily" set of tools
<sinzui> no
<wallyworld> as opposed to "release"
<wallyworld> like daily images
<sinzui> wallyworld, we build 3 1.20.2's today
<sinzui> I don't think I will be doing that this year
<wallyworld> ok, np. it was a thought bubble
<wallyworld> so long as we have a way we can tell internal stakehholders how to test a pre-release
<wallyworld> if upload-tools is needed, then so be it
<sinzui> wallyworld, building packages and publishing streams is a tightly coupled process. CI can do it for itself. It isn't trivial to build real packages and streams because we need to ensure every version exists forever and it is unique to ensure no tampering
<wallyworld> yes, agreed
 * sinzui had a meeting with Ubuntu today and they now understand streams and juju's lack of commitment for stable cli/api means backporting is hard
<wallyworld> sinzui: we are guaranteeing a stable client api afaik
<wallyworld> starting with 1.18 in trusty
<wallyworld> we will ensure 1.18 clients are supported with future versions of juju
<wallyworld> we now support api versioning internally
<sinzui> wallyworld, so a user who deployed 1.18.1 with trusty a few months ago will get an ubuntu upgrade to 1.25.0 and CLI/API guarantees that they can talk to their env?
<wallyworld> yes, that's the plan unless i am totally wrong
<wallyworld> that's why it took a while to get 1.18 done, to get all the stuff ported across to using the api which we could keep stable
<sinzui> wallyworld, I know there are deprecated cli between 1.18.1 and 1.20-alpha1
<wallyworld> i think AddMachine may be one such api
<wallyworld> there's a V2, but we will still support V1
<sinzui> bootstrap --series changed its meaning
<sinzui> wallyworld, I will start a conversation on the list about this case. and the case is future clients always talking to older envs because users are getting automatic updates of the client
<wallyworld> ok. i think the param change in meaning we may be able to get away with, as people can change their scripts. the client will still talk to the server
<wallyworld> ie it was a change to the client
<wallyworld> but i can't recall the exact change made
<waigani> just came across this https://github.com/gocircuit/circuit. looks interesting
<wallyworld> sinzui:  the idea is though that we ask ubuntu to backport juju to because we are supposed to support old and new clients and then we publish new backend binaries via simplestreams
<wallyworld> tools are tied to deployments/workloads, not ubuntu per se
<sinzui> wallyworld, I understand, I just happen to know that cli is still getting deprecated, so the 5 year commitment isn't in everyone's head
<sinzui> wallyworld, I have a few minutes left in my day. I can send an email asking for testers and provide a link to CI's packages.
<sinzui> I may have time to make streams from those packages.
<wallyworld> sinzui: thank you, appreciated. i think a good dose of internal testing is essential given we haven't been able to see the issues arising in the field
<wallyworld> cant wait to get our own maas for ci etc
<sinzui> wallyworld, I am now skeptical about getting our own maas now that it's clear there are many configurations used by stake holders. I need many maases
<wallyworld> yeah :-(
<wallyworld> but we do need at least one set up with a reasonable workload to deploy
<thumper> wallyworld: hey
<thumper> FYI: https://bugs.launchpad.net/juju-core/+bug/1348386
<_mup_> Bug #1348386: lxc template fails to stop <clone> <lxc> <juju-core:Triaged> <https://launchpad.net/bugs/1348386>
<thumper> people seeing lxc templates not stop
<wallyworld> \o/
<thumper> is because of this
 * wallyworld wonders if it is his fault before reading the bug
<thumper> wallyworld: no, lxc
<thumper> lxc changed the meaning of a command line arg
<thumper> from a filename to a device
<wallyworld> thumper: ok, thanks for heads up, will miss the 1.20.2 release sadly
<perrito666> fwereade: ping
<fwereade> perrito666, pong
<perrito666> gnight/morning
<perrito666> fwereade: I found out what is running apt-get
<perrito666> its networker trying to install the vlan module.
<fwereade> perrito666, and not acquiring the hook lock? please make it do so
<fwereade> bbs
<arosales> anyone available to help mbruzek debug https://bugs.launchpad.net/ubuntu/+source/gccgo-4.9/+bug/1304754 and https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1347322
<_mup_> Bug #1304754: gccgo has issues when page size is not 4kB <ppc64el> <trusty> <gcc:Fix Released> <gcc-4.9 (Ubuntu):Fix Released> <gccgo-4.9 (Ubuntu):Invalid> <gcc-4.9 (Ubuntu Trusty):Invalid> <gccgo-4.9 (Ubuntu Trusty):In Progress by doko> <gcc-4.9 (Ubuntu Utopic):Fix Released> <gccgo-4.9 (Ubuntu Utopic):Invalid> <https://launchpad.net/bugs/1304754>
<_mup_> Bug #1347322: juju ssh results in a panic: runtime error <ppc64el> <juju-core:Triaged> <juju-core (Ubuntu):Confirmed> <https://launchpad.net/bugs/1347322>
<arosales> specifically mbruzek needs assistance testing utopic gcc (gccgo-4.9_4.9.1-1ubuntu3_ppc64el.deb) on trusty
<arosales> wallyworld:  thumper any folks available to help?
<mbruzek> I need some help compiling juju.  Can someone help me with that?
<arosales> mbruzek: can your power machines reach github?
<mbruzek> arosales, sure.
<arosales> does gccgo-4.9_4.9.1-1ubuntu3_ppc64el.deb install cleanly on your system?
<mbruzek> arosales, I don't know how to get it to use the utopic version
<arosales> if so reading https://github.com/juju/juju should get you a juju binary built with that gcc version
<mbruzek> arosales, how can I get apt-get to give me the utopic version?
<arosales> I thought you may be able to get it from https://launchpad.net/~ubuntu-toolchain-r/+archive/ubuntu/ppa/+packages
<arosales> mbruzek: https://launchpad.net/ubuntu/utopic/ppc64el/gccgo-4.9/4.9.1-1ubuntu3
<arosales> mbruzek: specifically http://launchpadlibrarian.net/180170369/gccgo-4.9_4.9.1-1ubuntu3_ppc64el.deb
<arosales> mbruzek: hopefully the deps don't complain
<wallyworld> arosales: sorry, was having breakfast
<wallyworld> is there anything i need to do in particular?
<arosales> wallyworld: mbruzek is working on https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1347322 which is blocking the verification of https://bugs.launchpad.net/ubuntu/+source/gccgo-4.9/+bug/1304754
<_mup_> Bug #1347322: juju ssh results in a panic: runtime error <ppc64el> <juju-core:Triaged> <juju-core (Ubuntu):Confirmed> <https://launchpad.net/bugs/1347322>
<mbruzek> arosales, depend problems let me see if I can resolve them http://pastebin.ubuntu.com/7850100/
<_mup_> Bug #1304754: gccgo has issues when page size is not 4kB <ppc64el> <trusty> <gcc:Fix Released> <gcc-4.9 (Ubuntu):Fix Released> <gccgo-4.9 (Ubuntu):Invalid> <gcc-4.9 (Ubuntu Trusty):Invalid> <gccgo-4.9 (Ubuntu Trusty):In Progress by doko> <gcc-4.9 (Ubuntu Utopic):Fix Released> <gccgo-4.9 (Ubuntu Utopic):Invalid> <https://launchpad.net/bugs/1304754>
<arosales> wallyworld: currently mbruzek is trying to test utopic's gcc https://launchpad.net/ubuntu/utopic/ppc64el/gccgo-4.9/4.9.1-1ubuntu3 on ppc64el trusty
<arosales> wallyworld: the help needed is just getting juju built with https://launchpad.net/ubuntu/utopic/ppc64el/gccgo-4.9/4.9.1-1ubuntu3
<arosales> wallyworld: so mbruzek is going to try to install the utopic gcc-go.deb manually on ppc64el trusty, and then try to issue, "go install -v github.com/juju/juju/..." to build juju
<thumper> arosales: sorry, sprinting
<arosales> wallyworld: is that the correct approach?
<arosales> thumper: understood
<waigani> menn0, fwereade: https://github.com/avelino/awesome-go
<waigani> fwereade: https://github.com/gocircuit/circuit
<wallyworld> arosales: yes, go install will produce the juju binaries. but that will work regardless of what compiler is installed
<wallyworld> ie the one without the fix or the one with
<arosales> wallyworld: ack just wanted to confirm that was the correct process
<wallyworld> np
<arosales> wallyworld: so mbruzek just need to get the fixed gccgo onto the system and then build juju with "go install -v github.com/juju/juju/..."
<wallyworld> arosales: yes, so long as the $GOPATH/bin directory is in the path then you easily run the compiled juju
<arosales> mbruzek: note you'll need to get juju source first.
<wallyworld> or else you will need to type the full path
<arosales> mbruzek: https://github.com/juju/juju#getting-juju
<wallyworld> and all the dependencies
<wallyworld> i think go get does that
<mbruzek> arosales, having dependency problem with that deb package.
<mbruzek> http://pastebin.ubuntu.com/7850125/
<arosales> mbruzek: so try to follow https://github.com/juju/juju#setting-gopath up to "Using Juju"
<thumper> wallyworld: since we are busy, any chance katco could look at the lxc clone bug?
<thumper> wallyworld: the only tricky bit is supporting old lxc versions
<arosales> mbruzek: why are you running sudo apt-get install gccgo
<wallyworld> thumper: i had assumed you wanted us to look at it
<thumper> wallyworld: I didn't assume :-) asking is nicer
<mbruzek> arosales, to resolve unmet dependencies.  I ran it without gccgo installed and got the same error
<arosales> mbruzek: so you enabled the utopic archive?
<mbruzek> arosales, no I downloaded it, but I thought the apt-get would resolve the depends for me
<wallyworld> thumper: i can target that bug to 1.20.3
<thumper> wallyworld: sure
<arosales> mbruzek: try just a "dpkg -i gccgo-4.9_4.9.1-1ubuntu3_ppc64el.deb"
<thumper> wallyworld: not sure how to best support multiple versions
<wallyworld> yeah, need to look into it
<wallyworld> maybe it needs to version sniff first
<wallyworld> when container manager is started
<mattyw> davecheney, ping?
<wallyworld> arosales: i use synaptic, does all the dependency stuff in a nice gui :-)
<arosales> mbruzek: hmm your apt get seemed to get, "Unpacking gccgo-4.9 (4.9.1-1ubuntu3) over (4.9-20140406-0ubuntu1) ..."
<mattyw> davecheney, is this close to being landable? https://github.com/juju/juju/pull/378
<davecheney> mattyw: one small fix for fwereade then will land
<davecheney> doing that now
<davecheney> hold your breath
<perrito666> fwereade: sorry you mean datadir/lockdir/uniter-hook-execution ?
<mbruzek> arosales, http://pastebin.ubuntu.com/7850160/
<mattyw> davecheney, you're a good person, thanks
<mbruzek> arosales, I get dependency problems with or without the gccgo installed
<mattyw> davecheney, we're going to make use of it in our envuser branch - do you think there's any point in having a (s *State) GetEnvironUUID() call that wraps the call to EnvironConfig?
<wallyworld> sinzui: thank you for 1.20.2 and email
<thumper> wallyworld: talked to hallyn about the bug, he said he'd look tomorrow as I asked about the best way to support 0.8 and 0.9+
<wallyworld> thumper: great, thank you. is he able to update the bug?
<fwereade> perrito666, yeah, that should be the one
<perrito666> fwereade: restore (old restore, the one failing) is a bash script run via ssh from a cmd plugin?
<fwereade> perrito666, yeah, think os
<perrito666> ouch sorry ? not intentional
<fwereade> perrito666, I just can't type "so"
<perrito666> I meant, the "?" was a typo
<tasdomas> trying to deploy on local provider gives me this: ERROR juju.networker utils.go:35 command "lsmod | grep -q 8021q || modprobe 8021q" failed (code: 1, stdout: , stderr: FATAL: Could not load /lib/modules/3.13.0-32-generic/modules.dep: No such file or directory
<thumper> wallyworld: I've updated the bug with a solution
<thumper> wallyworld: which is effectively, try using --console-log, and if the command fails with exit code 1, fall back to --console
<wallyworld> thumper: you're all over it
<thumper> wallyworld: but not actually fixing it :)
<wallyworld> close enough, you spoon fed the solution
<wallyworld> om nom nom
<davecheney> I made mongodb panic, where is my prize ?
<davecheney> http://paste.ubuntu.com/7850431/
<fwereade> davecheney, congratulations, you get to fix it
<mattyw> anyone else seeing this test failure on trunk? http://paste.ubuntu.com/7850458/
<davecheney> fwereade: shitter
<perrito666> axw: defer ping
<wallyworld> rick_h__: ping
<mattyw> thumper, hopefully I've got the right idea here https://github.com/juju/juju/pull/401
<wallyworld> fwereade: from what i can see, the charm store code does not attempt to use any configured http proxy settings, just goes straight out to the internet. is that your understanding also? if so, does that mean for private clouds, people are expected to pull down charms to a local repo and deploy from there?
<fwereade> wallyworld, oh, wtf
<fwereade> wallyworld, it is true that in general that is what people do
<wallyworld> func (s *CharmStore) get(url string) (resp *http.Response, err error) {
<wallyworld> 	req, err := http.NewRequest("GET", url, nil)
<fwereade> wallyworld, many people on private clouds specifically want their own charms and managing a local repo is a reasonable way to do that
<wallyworld> that should use any http proxy from env though
<fwereade> wallyworld, but -- I can totally believe that code got missed when we were trying to do the proxy stuff :/
<wallyworld> fwereade: so the above will use an an http proxy env var but not a juju configured http proxy
<fwereade> wallyworld, hmm
<fwereade> wallyworld, I don't think we do put the proxy stuff in the env for the machine agent do we
<wallyworld> not sure, thumper ?
<thumper> wallyworld: weird race condition for you: paste.ubuntu.com/7850518/
<wallyworld> \o/
<wallyworld> thumper: so, am i right in saying the charm store get above will fail if the user has configured a http proxy on via juju config (and doesn't have http proxy env var set)
<perrito666> sinzui: did you by any chance have errors about something corrupting your .jenv file?
<wallyworld> s/on/only
<wallyworld> thumper: because (by design) any config http proxy setting is not put into the env vars of a machine agent?
<thumper> wallyworld: by design any http proxy config *IS* put into the env of the machine agent
<thumper> see MachineEnvironmentWorker
<wallyworld> thumper: ok, so why would the charm store http gets be failing?
<thumper> NFI
<wallyworld> ok, i'll ask them for more info, just wanted to ensure we were doing the right thing
<thumper> I think we are
<wallyworld> thumper: so, if they have http proxy set up in env.yaml, we expect to be able to juju deploy a charm even from inside a private network then
<wallyworld> since proxy is propagated to machine agent on state server
<thumper> as long as the http proxy setting is right :)
<wallyworld> yep
<wallyworld> thanks
<thumper> pretty sure we actually use this with our #is folks
#juju-dev 2014-07-25
<wallyworld> thumper: juju run --machine=0 "echo $http_proxy"     <--- that should work right?
<thumper> yep
<wallyworld> ok, ta
<rick_h__> wallyworld: pong
<wallyworld> rick_h__: hi, i got my question answered. i was asking about charm store and http proxy
<rick_h__> wallyworld: cool, good because I don't know any answers around proxy use :)
<wallyworld> :-)
<mattyw> davecheney, one more look please sir https://github.com/juju/juju/pull/401
 * davecheney sir looks
<davecheney> mattyw: closer, but one more round please
<mattyw> davecheney, no problem, thanks for looking (sir)
<axw> perrito666: probably too late, but recover()
<davecheney>     c.Assert(uuidA, gc.Not(gc.Equals), uuidB)
<davecheney> ... obtained string = "6216dfc3-6e82-408f-9f74-8565e63e6158"
<davecheney> ... expected string = "6216dfc3-6e82-408f-9f74-8565e63e6158"
<davecheney> da fuq
<axw> type
<davecheney> axw: must be
<davecheney> or one implments Stringer()
<mattyw> davecheney, please sir, can I have some more https://github.com/juju/names/pull/20
<axw> katco: are you awake? I can't see the card you're talking about in your email
<axw> the link is just to the board AFAICT
<axw> wallyworld: ^^
<davecheney> axw: that test was asserting that two calls to utils.NewUUID returned two different uuids by bootstrapping two environments, then comparing the results of env.UUID
<wallyworld> axw: it's the car din coding lane
<wallyworld> card
<wallyworld> 1322302
<axw> davecheney: righto. are they []byte ?
<davecheney> (ãà² çà² )ãå½¡â»ââ»
<axw> thanks wallyworld
<wallyworld> sure
<mattyw> davecheney, one last one please https://github.com/juju/juju/pull/401
<mattyw> davecheney, you know you love it
<davecheney> mattyw: don't presume to know what I would, or would not, "love"
<waigani> dunedin sprinters: http://www.eventfinder.co.nz/2014/the-internet-party-presents-the-party-party/dunedin
<davecheney> the party of the first party invites the party of the second party to the party of the party party
<davecheney> thumper:
<davecheney> func SampleConfig() testing.Attrs {
<davecheney>         return testing.Attrs{
<davecheney>                 "type":                      "dummy",
<davecheney>                 "name":                      "only",
<davecheney>                 "uuid":                      "90168e4c-2f10-4e9c-83c2-feedfacee5a9",
<davecheney> thumper: https://bugs.launchpad.net/juju-core/+bug/1348458
<_mup_> Bug #1348458: configstore: if the size of the serialised jenv decreases the .jenv file will be corrupt <juju-core:In Progress by dave-cheney> <https://launchpad.net/bugs/1348458>
<thumper> davecheney: paste.ubuntu.com/7851754/
<davecheney> thumper:  https://github.com/juju/juju/pull/402
<cmars> jam2, good morning. can you take a look at my latest versioned login PR? https://github.com/juju/juju/pull/392
<jcw4> fwereade: ping
<davecheney> thumper: fwereade https://github.com/juju/juju/pull/403
<davecheney> fighting a few failing tests, but this PR is close to being ready
<davecheney> reviews appreciated
<axw> davecheney: I think we want Prepare to set the UUID? then it will be valid in the .jenv
<davecheney> (â¯Â°â¡Â°)â¯ï¸µJUJU
<davecheney> (â¯Â°â¡Â°)â¯ï¸µâ©Å¿â©Å¿
<jcw4> davecheney: 403 looks good to me :)
<davecheney> axw: ok, i'll move that logic to prepare
<davecheney> you are right
<axw> davecheney: cool, thanks
<davecheney> axw: where is this Prepare method ?
<axw> davecheney: so each Environ defines one, but I think we'd want it to be done in a provider-independent part. there's a Prepare function in the environs package
<axw> which calls the Environ-specific one
<axw> davecheney: update environs.prepare (lowercase), which does similar things for certs and admin-secret
<davecheney> ok
<davecheney> axw: thanks, that is much clearer
<davecheney> axw:  ptl
<davecheney> ptal
<davecheney> i'm running the tests again now
<axw> looking
<davecheney> 'no
<davecheney> stop
<davecheney> i fucked up
<axw> k
<davecheney> ok, fixed
<davecheney>  % juju destroy-environment -y $(juju switch)
<jcw4> davecheney PTAL : https://github.com/juju/juju/pull/351
<jcw4> I won't presume to say you'd love to
<tasdomas> cmars, https://github.com/juju/juju/pull/369
<mattyw> jcw4, even though he does
<jcw4> mattyw: :D
<jcw4> it looks like I lost davecheney to the party of the second party at the party of the party
<davecheney> i'm partied out
<jcw4> hehe
<mattyw> davecheney, less chatting, more reviewing
<davecheney> mattyw: less colonies, more britannia
<jcw4> Britannia? mattyw is a pommie?
<jcw4> or wait... vice versa
<davecheney> I AM ZUUL, THE DESTROYER OF ENVIRONMENTS
<davecheney> CHOOSE
<davecheney> CHOOSE THE FORM OF THE DESTRUCTOR
<jcw4> :)
<mattyw> davecheney, sounds like you're done for the day as well
<mattyw> jcw4, and yes, me = pom
<jcw4> visiting the convicts this week mattyw ?
<jcw4> I guess that wouldn't be NZ
<jcw4> I'm getting confused with AUS
<mattyw> jcw4, I have to keep an eye on things for auntie Liz
<davecheney> no convicts in NZ
<jcw4> hehe
<davecheney> well, none that I am related too
<tasdomas> davecheney, iirc convicts escaping from aus usually ended up in nz
<tasdomas> so yes, there are convicts - "the ones that got away"
<jcw4> ex-cons ... perfect
<jcw4> I better not antagonize davecheney until he's done with https://github.com/juju/juju/pull/351
<davecheney> jcw4: i'm sensing you want something
<davecheney> no
<davecheney> wait
<davecheney> don't tell me
<davecheney> i'll figure it out
<jcw4> hehe, well, since it's after 5pm there, it may end up being mgz that has to suffer through it
<jcw4> fwereade: did the rest of the sprint team loose internet?
<jcw4> lose even
<axw> wallyworld: I've got all the tests passing with 2.6 again, and local bootstraps. need to make sure 2.4 still works, then I'll get onto proposing a bunch of branches
<axw> need to go pick up my daughter and do a couple of things at the other house right now tho - bbl
<bodie_> A review on https://github.com/juju/juju/pull/399 would be much appreciated :)
<wallyworld> axw: awesome, look forward to the branches
<axw> rogpeppe1: heya. in case you were thinking of doing it, I thought I'd better let you know I'm going to update charmstore to use gopkg.in/mgo.v2 in the near future
<axw> I'm updating everything from labix.org/v2/mgo -> gopkg.in/mgo.v2
<rogpeppe1> axw: ah, cool
<rogpeppe1> axw: does that mean i can finally move forward with using gopkg.in/juju/charm.v2 ?
 * axw shrugs
<axw> I don't have a problem with it
<rogpeppe1> axw: as AFAICS the only argument against it was that it introduced a new dependency
<axw> we now have gopkg.in/natefinch/lumberjack.v2, so I don't think that's a valid argument anymore
<rogpeppe1> axw: could you submit a PR for the v4 branch of the charm store too, please?
<rogpeppe1> axw: right
<rogpeppe1> axw: the v4 branch is what we're currently working on
<axw> rogpeppe1: sure, will make a note
<rogpeppe1> axw: ta
<wallyworld_> katco: mgz: i made it back in time
<katco> wallyworld_: :)
<wallyworld_> mgz: katco: sorry, google has died, trying to reconnect
<katco> wallyworld_: np
<natefinch> morning all
<TheMue> natefinch: morning
<perrito666> axw: still here?
<perrito666> morning everyone
<mgz> mornin'
 * perrito666 sharing office with friends... one gets to miss the connection all for himself
<perrito666> sinzui: morning are you around?
 * perrito666 notices that we all add each other on g+ just to get rid of the "there are people in this call which are not part of your circles" message
<katco> perrito666: :)
<katco> perrito666: it boggles me that Google didn't anticipate this issue
<perrito666> what do you mean it didn't? that message is a clear plot from google to make people use g+
<gsamfira> katco: they may have. I can just imagine a comment above the notification code saying: trolololo
<katco> gsamfira: lol
<katco> in tests, we alias the various testing subdirectories, e.g.: kvmtesting "github.com/juju/juju/container/kvm/testing"
<katco> what would be a good alias for "github.com/juju/testing"?
<perrito666> katco: we do that when we have two testings involved usually
<katco> coretesting is already taken in this instance by "github.com/juju/juju/testing"
<perrito666> coretesting iirc is the alias used for that
<perrito666> ah
<perrito666> lol
<katco> ;)
<perrito666> well you can use that one as testing
<katco> i am psychic! or at least a good neural network
<perrito666> and then alias the regular testing
<perrito666> as stdtesting
<katco> so github.com/juju/testing as stdtesting?
<katco> or maybe just "testing" since it's most root?
<perrito666> katco: ah if you have no testing other than that I guess testing is a good reason
<perrito666> s/reason/name
<perrito666> hint: if you talk to me when i am typing i will write whatever you tell me
<katco> no, sorry, there _is_ other testing, but i don't know what's more general than "coretesting"
<katco> perrito666: LOL
<katco> i do that too sometimes
<katco> actually, there's a funny story that when they used to pay typists to type in handwritten books, sometimes the typists would slip in whatever they were thinking at the moment
<katco> there were some funny entries :)
<perrito666> well, it is a side effect of touch typing I guess
<perrito666> sometimes the wrong thread will irq and then you start dropping the wrong stream to stdout
<katco> LOL
<katco> nerd humor is the best humor :)
<perrito666> natefinch: you around? my cal says I need to call you in 6 mins
<sinzui> Something bad landed in trunk in the last 14 hours. All the upgrade tests for all the substrates failed. I am looking for the first cause. Once there is a failure, all subsequent attempts to use the env fail because there are invalid mappings in the jenv.
<perrito666> sinzui: I just noticed the same
<perrito666> sinzui: in my case the last N bytes of the jenv are repeated
<sinzui> I also need to purge all the clouds of the dead zombie instances.
<perrito666> uff, thank god i thought I was going crazy
<mgz> sinzui: hm, any rough idea of the blame rev?
<sinzui> mgz, I am still looking.
<perrito666> I put my money on 5b54fcbfc1a651c5e2435f13b2b479187cc5499f
<perrito666> but I just read the PR messages so dont take my word for it
<perrito666> sinzui: mgz
<sinzui> mgz, perrito666 the issue happened after commit f81c377f, first seen in commit 7c98f388 :(
<sinzui> CI was testing the 1.20 branch for more than 20 hours, so many commits were merged into master
 * perrito666 checks
<perrito666> sinzui: f81c377f is most likely not related
<sinzui> perrito666, 7c98f38891eb98e6f1a8b01a46e3a9700eb51003 might be the corrupter of the env. http://juju-ci.vapour.ws:8080/job/aws-upgrade-precise-amd64/1569/console end with a report of bad yaml
<perrito666> sinzui: I doubt it
<perrito666> that change is harmless
<mgz> 3613c1I wonder if trunk would actually pass
<mgz> whooops
<perrito666> mgz: /
<mgz> and it's just the juggling of environ config that borked it
<perrito666> ?
<mgz> pr 362 + pr 387
<tvansteenburgh> hey guys, i'm getting a nil pointer error running juju status: http://pastebin.ubuntu.com/7856024/
<perrito666> mgz: sinzui let me run my current branch with a test I know it fails and I can tell you fast enough (~15 mins)
<mgz> perrito666: thanks
<sinzui> perrito666, mgz: this is a jenv left behind. I don't see anything wrong with it other than it was left behind: https://pastebin.canonical.com/114224/
<ericsnow> natefinch, perrito666: standup?
<natefinch> ericsnow yep
<natefinch> tvansteenburgh: weird... there's a nil check that should stop that, at least in trunk
<perrito666> sinzui: my quick guess: it re-writes the same file without truncating the prev content
<perrito666> natefinch: ericsnow gimme a sec
<mgz> hm, and that would suggest a windows change...
<ericsnow> perrito666: no worries
<perrito666> sinzui: mgz if that is the case, the second write is shorter and therefore leaves a dirty file
<perrito666> I am actually surprised we can do that :p
<perrito666> ok, running test_recovery with master tip
<perrito666> that should tell us if this is still failing
<perrito666> mgz: anyway if it works it means we are still writing improperly but being lucky
<perrito666> ok running we will have to wait a moment
<tvansteenburgh> natefinch: any suggestions for working around this problem?
<natefinch> tvansteenburgh: investigating
<ericsnow> natefinch: sorry, I kept interrupting you!
<perrito666> sinzui: mgz master seems to be working
<mgz> sort-of ace
<perrito666> oooor, its writing more in the file
<perrito666> :p
<mgz> um..
<mgz> so not then
<perrito666> because I am getting uuid which i did not before
<natefinch> frig, godeps is platform specific
<natefinch> godeps ./...  on linux prints out all the dependencies for linux, but leaves out dependencies that only exist in windows-specific files :/
<perrito666> mgz: I suspect the bug was there longer than we supposed
<mgz> probably
<mgz> bleh
<perrito666> I see some changes from tim which might be touching how the jenv is saved check 5b54fcbfc1a651c5e2435f13b2b479187cc5499f
<mgz> right, that revert and the initial change was my guess for the breakage
<sinzui> perrito666, mgz: the problem is
 * perrito666 drumrolls
<bodie_> existential tension
<sinzui> line 138 from the pastebin. "zonaws.com/testing/tools" is missing the end colon
<sinzui> ^ mgz, perrito666 , so it is a package, not juju itself, that is evil?
<sinzui> Oh, I see, that is truncation of tools-metadata-url: http://juju-dist.s3.amazonaws.com/testing/tools from line 135 or 136
<perrito666> true
<perrito666> in my case its the same but eople.canonical.com... blah and then a newline that says type:ec2
<perrito666> sinzui: yup same issue
<perrito666> it is consistent-ish with my guess
<perrito666> but i have no evidence to support it
<natefinch> rogpeppe1: I just realized using godeps to print out dependencies is platform-specific to the platform you're running on... e.g. we have dependencies that only exist in _windows.go files, and those don't get picked up by running godeps on linux.  Do you know if that's fixable?
<perrito666> natefinch: do you want to fix that?
<rogpeppe1> natefinch: it is fixable, but not trivially
<perrito666> i mean, there is a chance that the dep is also windows only
<rogpeppe1> natefinch: the default is just to use go/build's defaults
<natefinch> perrito666: yes, but if you use godeps on the wrong platform you'll get the wrong dependencies
<natefinch> or rather... you'll get a partial list which is ok for you on your platform
<perrito666> natefinch: which makes some sense
<perrito666> godeps will give you what you need to build your app on that platform
<perrito666> sinzui: my test run was wrong, re-running now, just in case /cc mgz
<natefinch> perrito666: but in *generating* the list of dependencies... you want it to find *all* dependencies for *all* platforms
<natefinch> man, TSV is the worst format ever
<natefinch> at least for hand editing :/
<natefinch> not worst ever, obviously... just frustrating when it tells me my file is invalid
<perrito666> natefinch: missing the tab at the end?
<natefinch> perrito666: does it need a tab at the end?
<perrito666> yup
<perrito666> iirc
<perrito666> 4 columns
<perrito666> you have to love the 70s
<perrito666> :p
<natefinch> maybe the newline at the end of the file is the problem?
<perrito666> natefinch: also
<perrito666> it needs only 4 column lines
<perrito666> so
<perrito666> if you lack the tab at the end and perhaps if you have a newline it might fail
<perrito666> yes, it is a really sad set of limitations
 * perrito666 is going to screengrab restore test and release it as the next thriller movie hit
<perrito666> sinzui: perritoec2 restored
<perrito666> PASS
<perrito666> although i would not vouch for the jenv error to be fixed, just covered
<perrito666> (sinzui, In other news restore works again, ill pr)
<natefinch> what are you supposed to do when rebase says it can't apply a revision?
<perrito666> natefinch: fix the conflict
<perrito666> then git add the conflicting file/s
<perrito666> then rebase --continue
<perrito666> iirc
<perrito666> commit might be involved but i dont remember
<natefinch> it doesn't tell me what the conflict is, though
<perrito666> git status
<perrito666> UU
<perrito666> UU is the st of the conflicting file
<natefinch> how do I cancel out of a rebase if it opens up and I realize it has the wrong stuff in it?
<ericsnow> natefinch: interactive?  don't you delete all the rev lines (I believe it says something like that)
<ericsnow> natefinch: for normal rebase I believe it's something like git rebase --abort
<natefinch> interactive, yeah...
<ericsnow> natefinch: it should say something in the comments there on what to do
<perrito666> natefinch: what do you mean the wrong stuff?
<perrito666> remember that it will replay old commits
<natefinch> perrito666: like, I do rebase -i HEAD~3 and what it brings up bears no relation to what I expect it to bring up, so I want to cancel out of it
<natefinch> I tried to rebase my log rotate PR to squash it into a single commit.... and one of the commits I wanted to squash just won't show up
<natefinch> I probably screwed up my branch with an earlier bad rebase.. I don't know
<natefinch> There's obviously some stuff about how git is working things that I don't understand.  It all went to hell after I merged from trunk to fix a conflict
<natefinch> I think my mistake was actually merging from master, instead of doing the rebasey thing where my changes get plunked back on top
<perrito666> natefinch: sounds to me like that should rebase your last 3 commits
<natefinch> rebase ~3 brings up like 100 commits
<natefinch> I think it's bringing up all the commits that were merged
<natefinch> from master
<natefinch> but it's too late, it's already merging as-is
<natefinch> finally got the tsv in working order
<perrito666> natefinch: :|
<natefinch> I've been wrestling with git for far too long to worry about having one or two extra commit lines in master
<perrito666> my push hook complains of:
<perrito666> environs/config/config.go:800: undefined: schema.UUID
<perrito666> ... I am pretty sure I was not even close
<perrito666> does anyone else see that in master?
<natefinch> godeps?
<perrito666> ah duh
<perrito666> https://github.com/juju/juju/pull/404
<perrito666> please review, ppl; it fixes a blocker bug so it's sort of important
 * perrito666 looks at mgz and natefinch 
<natefinch> looking
<natefinch> can you add the bug number to the description?
<natefinch> oh nevermind
<natefinch> in title
<natefinch> not that I can click it there
<natefinch> I need a chrome extension that'll make lp:foo actually work.
<natefinch> perrito666: what is return value 100?
<natefinch> dang, I can't read today, nevermind
<bodie_> what are the odds of getting an LGTM / LWTM on 399 today?
<bodie_> it's a fairly simple jujuc addition for Actions to get passed args from the stateservice
<bodie_> tests are a tad bit verbose but it's quite straightforward
<perrito666> natefinch: you saw the comment above that?
<natefinch> giant regexes make for really shitty test failure messages
<mgz> that they do
<katco> natefinch: i have a feeling i know exactly what code you're looking at
<natefinch> cloud init
<katco> well nm. i ran into the same thing in the upstart stuff
<natefinch> anywhere we are doing a giant regex against a script we write
<natefinch> katco: cloudinit writes upstart scripts, and yes, it's in that code
<katco> oy... my condolences
<natefinch> mgz: the cloudinit code currently does some finagling with the log files to "Ensure log files are properly protected" - chowning them and chmodding them
<natefinch> mgz: I have a CL to use my log rolling package to roll the logs, which does not currently do the same thing
<mgz> natefinch: when you do the rolling itself right? the inital log would still have correct owner/perms I'm guessing
<mgz> you probably just want to add a copy-owner-perms thing to your rotator and use that
<natefinch> mgz: that's a really good idea, yeah
<mgz> though I do wonder why we're rolling our own on this, yet again...
<natefinch> mgz: this was a package I wrote in my free time.  We could use logrotate, but that's just more external dependencies and setup.  Plus, then we'd need something else for windows machines.   Having a pure software solution is almost always better than adding dependencies
<katco> natefinch: i think mgz might be referring to this: https://github.com/avelino/awesome-go#logging
<mgz> well, not if we have to go and get told by our users about thirty years of logrotate good practice after release...
<mgz> but it's your fun to be had :)
<natefinch> the main reason is that we'd have to find something else for Windows
<natefinch> but yes, I know how linux users like to complain that you don't use 1000 external utilities for something you can do easily in your own code.
<perrito666> wow, full screen really does not work well with unity
<natefinch> how so?
<perrito666> natefinch: well I have firefox in fscreen and every time I go to that desktop unity takes focus
<katco> natefinch: hey just thinking about that a little... why not use the windows event logs? those have log rotation etc.
<katco> natefinch: and it's the more canonical place for people to check for things
<natefinch> katco: I think of the event log as more of a place to write errors and problems, not just general logging
<katco> natefinch: nah, it has an informational log type for a reason
 * natefinch wonders at katco's sudden knowledge of Windows subsystems.
<katco> haha, no! no windows work!
<natefinch> haha
<natefinch> You're safe, you're not on my team.. I can't delegate all my work to you the way I do to my team ;)
<katco> lol
<perrito666> hey, don't act as if we were not here :p
<natefinch> I haven't ever used the windows event log that way, so I'm really not that familiar with it.
<katco> gosh i am really getting mired in juju's test philosophy. i vacillate between whether we do functional style testing or unit testing.
<natefinch>  we do both, which I think is ok... but I think our functional tests tend to be overly broad and requiring way too much fragile setup
<perrito666> why do all rogpeppe mails get here duplicated?
<natefinch> haha
<katco> in a recent review, wallyworld mentioned that he wishes a test were more black-box to just test the side-effect we desire, but when i try and do that, i find that the underlying object is mocked away and won't give the desired side effect
<natefinch> he just emails from the wrong account all the time and has to resend from the right account
<perrito666> natefinch: yet I get both
<katco> so as a new person, i'm left a bit confused... do i expand the mock to mock the side effect? but then isn't that just an invalid functional test? or do i switch to unit tests and test the interaction?
<bodie_> katco, the input I've received from fwereade on that topic is that we need a pass and fail case actually integrating with the next level up or down, and to use mocks to exercise the error paths in the code to verify that they work as expected
<bodie_> so the mocks aren't for checking that the code has the desired effect, but rather to check that the code itself behaves as expected
<bodie_> I'm not sure if that answers your question, though
<katco> bodie_: would you mind taking a few minutes to walk through a concrete example with me?
<katco> (specifically the one i'm working on)
<perrito666> bbl
<bodie_> sure thing
<perrito666> sinzui: https://github.com/juju/juju/pull/404 <-- you should see restore tests passing after this lands on Ci
<perrito666> mgz: I need to leave for a moment now but I am pretty sure that the issue lies on the change to locking that tim added in https://github.com/juju/juju/commit/5b54fcbfc1a651c5e2435f13b2b479187cc5499f
<mgz> checkin'
<bodie_> any chance I can get a review on 399 today?
<bodie_> https://github.com/juju/juju/pull/399 -- relatively simple code, slightly verbose tests
<bodie_> ask me anything :)
<natefinch> bodie_: is this comment correct? ActionGetCommand implements the relation-get command.
<bodie_> good catch, no
<bodie_> fixed
<bodie_> natefinch, addressed your query
<bodie_> there, that's a bit cleaner
<bodie_> natefinch, pushed update
<bodie_> only reason I didn't comment all the exported bits was that they weren't commented in config-get -- I'll fix that as well
<bodie_> there, that should be good.
<natefinch> bodie_: thanks for the comment, that makes a ton more sense
<bodie_> natefinch, sorry to waste your time there, I'll make sure I'm keeping aware of the obvious bits before grabbing review
<bodie_> yeah, it's a little bit semantically funky.
<natefinch> bodie_: are the leaf values always strings?
<bodie_> the values might not be
<bodie_> the keys should always be
<bodie_> I just realized I think I need to add some test cases for bad args, or remove that logical branch from the test
<bodie_> should be minor
<natefinch> negatives tests are always enlightening
<natefinch> tests with non-string values would be good, too
<natefinch> not that I think there will be any problem... you're not really every touching the answer
<natefinch> s/every/ever/
<perrito666> ericsnow: in your opinion, is it worth it to build on your changes to the juju restore cli?
<ericsnow> perrito666: it's not a big change so probably not :)
<bodie_> natefinch, poked those in there as well
<natefinch> bodie_: looks good to me. I don't really know the actions spec to know if there's any errors in the logic, but the code looks solid enough as it is.
<bodie_> the hole is that it's possible to have lists in the params map, with inner maps
<bodie_> however, I think action-get key.innerkey.someList[2].anotherkey.somethingelse[3] is just ... pushing it
<natefinch> so you need another case in the type switch for []interface{}?
<natefinch> ahh I see what you mean
<natefinch> if someone wants a list, they can just make the keys to their map integers, and it basically works the same
<bodie_> It's probably arguable either way.  for now I think this makes more sense than getting too fancy.
<natefinch> bodie_: is there someone else who knows this stuff better who can review to make sure I haven't missed anything?  I feel like I don't really have enough context to give it the LGTM
<bodie_> probably fwereade or mgz would be able to make a call
<bodie_> I think it does what it does properly; I'm not totally certain it does everything it could need to do, though.
<bodie_> but, that seems like functionality that can easily be added
 * perrito666 notices he never added a cli command for juju
<katco> is there a clear example anywhere of how to utilize the simplestreams package to give you a url for an image?
 * TheMue comes nearer to IPv6 addressing and routing between LXC containers on different hosts.
<natefinch> katco: wallyworld knows the simplestreams well.... the rest of us, not so much
<katco> =/
<perrito666> katco: you are trying to get info from a dev on a friday afternoon :p
<katco> haha
<perrito666> who I believe must have kids on holidays :p
<ericsnow> fwereade: ping
<jcw4> Simple PR https://github.com/juju/juju/pull/405 - PTAL
<bloodearnest> heya all, I'm getting panics on juju set in 1.20.1.1, fyi. https://bugs.launchpad.net/juju-core/+bug/1348829
<_mup_> Bug #1348829: juju-core client panics with juju set empty string <juju-core:New> <https://launchpad.net/bugs/1348829>
#juju-dev 2014-07-26
<jcw4> Another names package PR https://github.com/juju/names/pull/21
<jcw4> PTAL
#juju-dev 2014-07-27
 * thumper goes to stretch his legs and walk the dog
<rick_h__> thumper: when you get back wonder if I can steal a few min
<thumper> rick_h__: hey
<rick_h__> thumper: party
#juju-dev 2015-07-20
<axw> wallyworld: we didn't do planning.. I'm thinking 8 for the force-destroy card, because I think I'm going to have to go and implement ListVolumes/ListFilesystems as well
<wallyworld> axw: or even break it up
<wallyworld> a separate card for ListX
<axw> wallyworld: well I'm not sure if that's how it'll work yet. I'll break it up when I have a better idea
<wallyworld> ok
<axw> wallyworld: will set 8 for now and then break up later, ok?
<wallyworld> sure
<wallyworld> we'll do more tonight
 * thumper is running tests in windows and locally
<thumper> god I hate the uniter tests
 * thumper cries quietly as a different test fails intermittently on windows
<mup> Bug #1476060 opened: uniter_test.go, startUpgradeError{} fails for windows <test-failure> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1476060>
<thumper> wallyworld: do you know much about the unit tests around "update-status"
<thumper> wallyworld: I have an intermittent failure here
<wallyworld> test name?
<wallyworld> thumper: ?
<thumper> uniter_test.go: 1351
<thumper> nope
<thumper> not that line
<thumper> line 1310
<wallyworld> looking
<thumper> test desc: 			"all relations are available to config-changed on bounce, even if state dir is missing",
<thumper> it blocks waiting for hooks
<wallyworld> for idle?
<thumper> start_hooks(false) expects an "update-status" hook call
<thumper> then on line 1340, it waits for: 			waitHooks{"update-status", "config-changed"},
<thumper> in the logs, there is only one "update-status" hook call, not two
<wallyworld> my line numbers are different
<thumper> ah... yeah
<thumper> I'm looking in a branch where I added a few lines further up
<thumper> due to fixing another bug
<thumper> sorry
<wallyworld> is the next comment after the waitHooks "// Check the state dir was recreated."?
<wallyworld> i can't see anything immediately obvious but horatio has done all that code, reviewed by william. i'll ask him to look
<wallyworld> how often does it fail?
<thumper> regularly now on ppc64el and windows
<thumper> enough to be a critical blocker
<wallyworld> but on ubuntu is ok?
<thumper> yep
<thumper> wonderful eh?
<wallyworld> sigh
<thumper> and I just fixed a different bug
<wallyworld> in uniter?
<thumper> well, fix is an extreme word
<thumper> skip a test
 * thumper is submitting now
<wallyworld> ah :-)
<thumper> the bug mentions a failing test
<thumper> there were two
<thumper> and I fixed the one this bug didn't mention
 * thumper sighs
<wallyworld> ok, we'll take care of this other one
<wallyworld> maybe we need to finally gate landings on windows and ppc tests
 * thumper nods
<wallyworld> like we've been talking about forever
<thumper> I have a windows vm running
<wallyworld> ooh yuck
<thumper> there is one thing that looked strange to me
<thumper> replacing a real timeout value with zero
<thumper> I'm not convinced that that time.After always works properly with that
<thumper> seen evidence in other places
<thumper> perhaps replace with 1 millisecond
<thumper> rather than zero
<thumper> wallyworld: http://reviews.vapour.ws/r/2206/diff/#
<wallyworld> looking
 * wallyworld forgot he was ocr
<wallyworld> sigh
<thumper> oh...
<thumper> it is 1 millisecond now
<wallyworld> thumper: i think also a mock timer was introduced
<thumper> why do we have both?
<wallyworld> to control the triggering of the idle event
<thumper> 	s.PatchValue(uniter.LoopIsIdleCheckTime, 1*time.Millisecond)
<thumper> 	s.PatchValue(uniter.EnterLoopIsIdleTime, 1*time.Millisecond)
<thumper> the timing is different on ppc and windows
<wallyworld> in what way?
<thumper> hence, I'm betting this is why we are getting different numbers of calls
<thumper> scheduler works differently on different architectures / platforms
<wallyworld> 1 milli second is too small anyway
<thumper> these tests aren't checking for idle
<wallyworld> no, but update status hook depends on idle
<thumper> just the update status
<thumper> ah
<thumper> poo
<wallyworld> update status only fired when idle
 * thumper goes to make a coffee
 * wallyworld is jealous - coffee machine here is broken :-(
<menn0> thumper or wallyworld: http://reviews.vapour.ws/r/2207/diff/#
<wallyworld> menn0: +1 - did horatio's fix need to be backed out now this is handled generically? ie i guess the whole doc can be passed in now
<wallyworld> that would be more robust than naming the fields
<menn0> wallyworld: that's the plan. I'm going to undo the fixes he made.
<wallyworld> ok, ty
<menn0> wallyworld: there's also going to be a PR with DB migrations which fix the currently incorrect records
<wallyworld> yay
<menn0> thumper or wallyworld: http://reviews.vapour.ws/r/2208/
<wallyworld> looking
<wallyworld> yay
<thumper> poke me with a fork
<thumper> I'm done
<thumper> laters
<dooferlad> TheMue: hangout?
<TheMue> dooferlad: omw, had phone
<dooferlad> fwereade: are you hangoutable?
<mup> Bug #1476214 opened: --repository option for 'juju deploy' sometimes has issues with symlinked directories <juju-core:New> <https://launchpad.net/bugs/1476214>
<tasdomas> hi
<tasdomas> has anyone seen test failures for github.com/juju/juju/service ?
<tasdomas> http://paste.ubuntu.com/11909054/
<perrito666> tasdomas: yes I attributed them to my system being dirty
<tasdomas> perrito666, ;-]
<perrito666> tasdomas: seems that it is something else
<perrito666> series "\" is certainly wrong
<tasdomas> perrito666, yeah - that was my thinking too
<dooferlad> TheMue: could you take a look at http://reviews.vapour.ws/r/2197/ when you have the time?
<TheMue> dooferlad: will take it for you *rofl*
<sinzui> katco: alexisb: 1.24.3 needs one commit and bless to be released. This review has "Ship It" http://reviews.vapour.ws/r/2210/, may I queue it to merge?
<mgz> sinzui: it's not clear which part of the fix for the bug that is...
<sinzui> mgz: The bug lists 3 branches, that is the only one not merged.
<mgz> okAY, MEENO ADDED TH INFO IN THE BUG
<mgz> caaaaps
<mgz> sinzui: landing it seems good to me, I'll queue
<sinzui> mgz: meno also added the same comment on his other in progress bug https://launchpad.net/juju-core/+milestone/1.24.3. I think bug 1474195 is fix committed
<mup> Bug #1474195: juju 1.24 memory leakage <cpec> <deployer> <performance> <regression> <juju-core:Triaged> <juju-core 1.24:In Progress by menno.smits> <https://launchpad.net/bugs/1474195>
<mgz> sinzui: see uniter tests thread on juju-dev list for more fun
<sinzui> mgz :/ I am reading the argument to merge jes-cli and make it master
<lazyPower> does anyone have a moment to help debug juju-db not running on a state server? I'm out of ideas as to why its failing to start up and syslog is not giving me anything helpful to go on.
<lazyPower> lukasa in #juju would appreciate it i'm sure.
<katco> natefinch: standup
<katco> wwitzel3: ericsnow: natefinch: doh just remembered. i will be out over lunch. doc. appt.
<ericsnow> fwereade: thanks for those reviews!
<fwereade> ericsnow, yw, hope they helped
<ericsnow> fwereade: definitely
<fwereade> ericsnow, do you see what I mean about having details nested inside what one whight thing were model packages?
<fwereade> s/thinng/think/
<ericsnow> fwereade: haven't read through the reviews in detail yet :)
<fwereade> ah god I just can't type
<fwereade> ericsnow, let me know if anything bears further discussion :)
<ericsnow> fwereade: will do
<mup> Bug #1476214 changed: --repository option for 'juju deploy' sometimes has issues with symlinked directories <juju-core:New> <https://launchpad.net/bugs/1476214>
<mup> Bug #1442149 opened: UniterSuite.TestUniterUpgradeConflicts fails <ci> <intermittent-failure> <test-failure> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1442149>
<TheMue> dooferlad: have to swap offices in a few moments, so I'm not sure if I'll be at the network hangout
<dooferlad> TheMue: OK, join when you can
<TheMue> dooferlad: will do
<dooferlad> TheMue: you merged didn't you... *slap*
<TheMue> dooferlad: I should, shouldn't I ... ;)
<dooferlad> TheMue: rebase! rebase!
<dooferlad> TheMue: it is easy to fix
<dooferlad> TheMue: just a bit annoying
<TheMue> dooferlad: ah, the newest one, hmm, I thought I rebased
<dooferlad> TheMue: don't worry. The best thing to do is only make changes on a branch that are related to the topic you are working on
<dooferlad> TheMue: we can do a rebase on master as a specific task
<TheMue> dooferlad: I do so
<dooferlad> TheMue: odd. Oh well, merge-commit from master onto net-cli, then rebase net-cli on master works to clean things up.
<TheMue> dooferlad: I currently don't work on net-cli
<katco> ericsnow: wwitzel3: so the status api server has the same interesting property as state, which is to say it's difficult to abstract without embedding your type in the existing status apiserver struct
<ericsnow> katco: lovely
<katco> ericsnow: wwitzel3: we could begin a pattern wherein we register accessors that return either a json string or interface{}
<katco> ericsnow: wwitzel3: returning a structured string (json) doesn't strike me as horrible, but it may be a bit different than we're doing things currently
<katco> ericsnow: wwitzel3: we'll have to see
<wwitzel3> katco: hrmm, I think json sounds reasonable
<katco> wwitzel3: i do too... so we would register functions/types that get status for something and have sig func() (string, error)
<katco> wwitzel3: and then the common apiserver stuff would iterate through those and bundle it in the result
<katco> wwitzel3: and then the client would have to understand how to parse that
<katco> wwitzel3: it's just a bit different than we do it now, b/c we have our RPC functionality do that bit for us
<katco> wwitzel3: i.e. FacadeCall("FullStatus", &statusType) instead of FacadeCall("FullStatus", &jsonString)
<wwitzel3> katco: yeah, the thing that is nice about that is that state and registered clients then have an implicit contract, and we aren't having to force things for future clients into some interface that is likely to be different depending on the needs of the component
<katco> wwitzel3: i don't think it solves our versioning problem, if that's what you're saying
<katco> wwitzel3: we still have to account for combinations of client & server version
<wwitzel3> katco: no, not versioning, it simplifies how things look when we start to add and/or refactor other things to components
<katco> wwitzel3: ah, yes
<sinzui> katco: Do you have a few minutes to review http://reviews.vapour.ws/r/2216/
<alexisb> sinzui, katco is away atm
<sinzui> thank you alexisb
<alexisb> maybe another moonstone team member can help
<alexisb> wwitzel3, natefinch, ericsnow ^^^
<ericsnow> sinzui: done :)
<sinzui> thank you ericsnow
<natefinch> damn beat me
<alexisb> ericsnow is lightning fast!
<katco> alexisb: ty :)
<alexisb> katco, thank you for keeping the team calendar up-to-date :)
<perrito666> sinzui: hey, is there a way I can procure a ppc machine for a moment to try out some things for the uniter blocker?
<sinzui> perrito666: I can put your key on stilson-09. You can hack on the same machine that runs the test
<perrito666> sinzui: that would work
<katco> ericsnow: is your state branch landed yet?
<ericsnow> yep
<katco> ericsnow: cool... do wwitzel3 and natefinch know so they can rebase?
<ericsnow> katco: right now trying to land the branch with the fixes for bugs we found while trying to get the demo working
<katco> ericsnow: ah ok. so hold off?
<ericsnow> katco: for the moment
<ericsnow> katco: actually, that shouldn't be a real blocker
<wwitzel3> yeah, once those fixes land I'll be able to give another end to end run of the wordpress-wpm charm
<wwitzel3> I cleaned up the charm and fixed up the errors I hit along the way, so it is in pretty good shape now
<katco> wwitzel3: cool ty
<katco> ericsnow: wwitzel3: natefinch: if anyone is interested in what i'm doing, i can pair. some interesting problems
<wwitzel3> katco: yeah, that sounds good
<katco> wwitzel3: cool i'll hop in moonstone
<katco> ericsnow: hey which struct stores juju's notion of process state?
<ericsnow> katco: in what context?
<natefinch> ericsnow or wwitzel3: can one of you give me edit rights to the Juju Charm Workload Process Management doc? I have to tweak the juju status output, because what's there is invalid yaml :)
<wwitzel3> natefinch: will do that now
<katco> ericsnow: so when we run juju status, where do i go to get the new process info out of state?
<ericsnow> katco: see State.UnitProcesses() (in state/processes.go)
<katco> ericsnow: can you join moonstone rq?
<ericsnow> katco: sure
<natefinch> back later
<thumper> sinzui: if we want ppc64el tests to pass, we NEED to increase the timeout
<waigani> katco, ericsnow: wasn't this fix landed in 1.25: https://bugs.launchpad.net/juju-core/+bug/1468815
<mup> Bug #1468815: Upgrade fails moving syslog config files "invalid argument" <ci> <regression> <upgrade-juju> <juju-core:Triaged> <juju-core 1.24:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1468815>
<ericsnow> waigani: wwitzel3 is working on that
<waigani> ericsnow: okay, thanks
<sinzui> thumper: I can do that. 30 minutes? 40?
<thumper> 20 should be fine
<thumper> the default is 20
<thumper> sorry, 10
<sinzui> thumper: 20 right. I changed the safe and robust options to use a 20 minute timeout
<perrito666> back, sorry
<perrito666> k Ill revert the update status patch, the issue will not be solved (as it is there) but it wont be visible (as it was before)
<sinzui> katco: Your observation about feature branches versus a blocked master is relevant to my experience when Lp development did the same. The maintenance teams became a source of blockers, and since they rotated off maintenance quickly, others had to undo the fixes
<katco> sinzui: i suppose we're just getting started with this. we'll see how it plays out. i just wanted to make sure to raise the point that i think regardless of whether jes is landed now or after master is unblocked, the end result will be the same both in terms of time to unblock master, and quality of master.
<katco> sinzui: but that if we landed jes now, that could be a large positive. so no negatives, one positive
<perrito666> http://reviews.vapour.ws/r/2220/
<sinzui> katco: Indeed. I gave jes a higher rank in CI testing because that team had more incentive to fix master. :(
<thumper> perrito666: shipit
<sinzui> thumper: the 1200s change is in place
<thumper> sinzui: cheers
<thumper> sinzui: fyi, I'm never going to fix master issues on the jes branch
<thumper> sinzui: will always fix in master, then merge master into jes
<thumper> sinzui: doesn't make sense to fix in the jes-cli branch
<sinzui> fair enough
<thumper> so once perrito666's branch lands in master, I'll merge master in jes
<thumper> that and the timing fix *should* give us a good shot
<thumper> however there are still racy tests in the uniter code, particularly around metrics
<thumper> so we may need to hammer it a few times
<thumper> fixing the uniter issues is not going to be a quick fix
<perrito666> thumper: so, I $$ed the branch, now we wait
<perrito666> about racyness
<perrito666> the metrics races are a bit less racy since I merged the update-status extra hooks, since that made clear what was failing (fake timers)
<perrito666> and regarding uniter, the race is still there it just lacks coverage
<perrito666> idle is still happening at different times in every run we are just not looking at it
<perrito666> thumper: merged
<axw> wwitzel3: thanks for forward porting my rsyslog fix, forgot about it last week
<wwitzel3> axw: thanks for fixing it :)
#juju-dev 2015-07-21
<natefinch> wwitzel3, ericsnow: btw, I made a change to the juju status --processes output to 1) be valid yaml (a list of hashtables) and 2) present valid-looking data.  Would appreciate some eyes on it: https://docs.google.com/document/d/1PcRQXaerlsACro4y1y5LWD-uvhfHya2CkOcoljyFyCU/edit#heading=h.3ypclmmspp3g
<thumper> sinzui: is there any way I can watch the current CI run?
<sinzui> thumper: login to http://juju-ci.vapour.ws:8080/view/Juju%20Revisions/, sort the columns on last success
<thumper> sinzui: this doesn't tell me which are voting etc
<thumper> I want to get a jump on any potential failure...
<sinzui> thumper: the ones in red at the bottom
<thumper> also, how can I tell which rev from which branch is being checked?
<thumper> I know I can click through
<thumper> but wondering if there was a summary page
<thumper> for the current one
<thumper> perhaps a nice future feature :)
<sinzui> thumper: voting isn't Jenkins, it is a ci-directory feature. Jenkins also corrupts the description so it is hard to read the rules for each job
 * thumper nods
<sinzui> thumper: I might break that view. I am trying to add columns that give more insight
<thumper> i'm linda sad that vivid and wily unit test runs have failures
<thumper> damn left hand typing
<sinzui> thumper: They are close to passing though. I think wily is fixable in a few hours
<thumper> eta on current ci run?
<sinzui> thumper: I cannot make the description viewable, but... The weather icon is an indicator: cloudy means there is a real chance of failure, and a black cloud often means released revisions with failures
 * thumper nods
<sinzui> thumper: about 2 hours. I removed all the utopic jobs last hour. I am trying to make the openstack on maas job run faster
<sinzui> something bad just happened: revision results is queued. I think the win tests failed for 1.24
<thumper> wat?
<sinzui> I think the machine needs to be restarted. I am doing that now and requeuing the tests. The failures are not consistent
<ericsnow> thumper: FYI, looks like your most recent change in the cmd repo breaks a few tests in juju core
<thumper> what are you referring to?
<ericsnow> thumper: https://github.com/juju/cmd/pull/19
<thumper> ericsnow: it all works in jes-cli :)
<thumper> so I know I've fixed it where I'm bringing it in
<ericsnow> thumper: k
<thumper> ericsnow: do you need the new rev?
<ericsnow> thumper: just need to know how you fixed jes-cli
<thumper> with magic and fairie dust
<ericsnow> :)
<ericsnow> thumper: looks like you took care of it in https://github.com/juju/juju/pull/2540
 * thumper nods
<thumper> see... fairie dust
<thumper> sprinkled over 37 files
<natefinch> lol
<axw> thumper: what's the plan for hosted env destruction, in light of fwereade's reply?
<thumper> axw: I'm thinking we'll need to do something new
<thumper> but I'm thinking through what
 * thumper afk taking kids to jiu-jitsu
<axw> thumper: okey dokey. I think I'll go ahead and clean up storage in destroy-env --force for now, but leave the apiserver-side destruction. we can use what I'm doing if needed though
<axw> later
<thumper> urulama: hey
<urulama> hey, thumper
<thumper> urulama: when is good for you?
<thumper> if you need an hour or so
<thumper> I can come back
<urulama> tomorrow same time is ok for you?
<thumper> yeah...
<thumper> I guess
<urulama> my mother got sick so kids stayed at home an i need to figure out what to do
<urulama> :(
<thumper> kk
<thumper> np
<thumper> chat tomorrow
<urulama> thanks, thumper
<wallyworld_> jam: saw your email, shared doc with you as i could access
<jam> thx
<jam> got it
<wallyworld_> axw: if you get a chance before eod, could you look at http://reviews.vapour.ws/r/2224/ for me?
<axw> wallyworld_: ok
<wallyworld_> ta
<axw> sigh, yet another UPS that is not U
<axw> wallyworld_: LGTM
<TheMue> dooferlad: regarding our talk about postgresql see here https://twitter.com/rpy/status/622291301002670080/photo/1
<dooferlad> TheMue: arg!
<TheMue> dooferlad: hehe
<TheMue> dooferlad: dunno how to ever have this idea
<bogdanteleaga> does ec2 support agent-metadata-url?
<mgz> bogdanteleaga: yes, it does
<bogdanteleaga> mgz: I'm trying to set agent-metadata-url after the secret key
<bogdanteleaga> mgz: and it still goes to streams.canonical.com
<bogdanteleaga> mgz: am I missing something?
<mgz> bogdanteleaga: well... in practice our ci envs use tools-metadata-url but that's just for compat with older versions
<bogdanteleaga> mgz: I've tried that too, it gave a few warnings, but it still didn't work
<bogdanteleaga> mgz: I've been doing this with maas for a while now, but it just doesn't seem to pick it up here
<mgz> bogdanteleaga: did you bootstrap with --debug? paste the log?
<bogdanteleaga> http://paste.ubuntu.com/11913639/
<bogdanteleaga> http://paste.ubuntu.com/11913647/
<mgz> bogdanteleaga: nothing jumps out at me
<mgz> I'll have to try enabling debug with one of our scripts and compare the simplestreams bits more closely
<mgz> bogdanteleaga: have you done this before for earlier juju versions?
<bogdanteleaga> mgz: nope, I've always done it for maas, but I haven't tried the other providers
<mgz> wait, why does it say "trying keystone catalog"
<mgz> that's not ec2 code
<bogdanteleaga> heh
<mgz> bogdanteleaga: have you tried `juju metadata validate-tools`?
<bogdanteleaga> http://paste.ubuntu.com/11913726/
<bogdanteleaga> I'm still not sure wtf is happening with the keystone thing
<mgz> bogdanteleaga: that was with -e amazon?
<bogdanteleaga> nope but it's set as default
<bogdanteleaga> and you can see it's trying amazon in the logs
<bogdanteleaga> the first ones
<mgz> bogdanteleaga: http://paste.ubuntu.com/11913764
<mgz> looks like if there are no matches you'll only see the last fallback, which is streams
<mgz> so I think I'd start by trying to get validate-tools to see the matching tools
<bogdanteleaga> I gave you a paste of validate-tools
<bogdanteleaga> it still doesn't try the ones specified in the url
<bogdanteleaga> mgz ^^
<mgz> bogdanteleaga: compare with my paste
<bogdanteleaga> mgz: I should have a tools-url?
<mgz> when I tried a version that wasn't in the streams I had, it doesn't show my different agent url
<mgz> bogdanteleaga: no, that's spurious junk in the environments.yaml file I'm using there
<mgz> bogdanteleaga: so, it may just be that your streams data is somehow not matching when you think it should
<bogdanteleaga> I might have an idea
<bogdanteleaga> I'm using devel tools and I don't think it's looking for those by default
<mgz> right, yeah, you should add agent-stream: devel
<bogdanteleaga> I'll try to upload them again
<mgz> no, leaving as devel in the streams data is fine
<mgz> you should just add that in your environments.yaml
<bogdanteleaga> oh yeah, I meant I'll upload them again just in case
<mgz> then run the validate-tools again and see if it helps
<bogdanteleaga> the issue with doing devel in the environments.yaml
<bogdanteleaga> is that the bootstrap does not fail
<bogdanteleaga> and it uses local tools in case it doesn't find them on simplestreams
<bogdanteleaga> which doesn't work on windows
<bogdanteleaga> nah, somethings badly broken
<bogdanteleaga> it shouldn't output anything about keystone
<bogdanteleaga> I think I'm missing something, I'll try it from the top
<perrito666> morning
<bogdanteleaga> hey horacio!
<mgz> bogdanteleaga: http://paste.ubuntu.com/11913877
<bogdanteleaga> mgz: I managed to get it working, it was a problem with how I uploaded them. It still says crap about keystone though
<mgz> with debug, you can see it's trying the config location then falling back to the main
<bogdanteleaga> mgz: that's probably gonna be helpful thanks
<wallyworld> jam: not sure if you have a minute to clarify the hash as part of the naming for resources?
<jam> wallyworld:  /wave
<wallyworld> hey
<wallyworld> jam: might be easier to have a hangout if you had time. if not now, then i can hang around
<jam> wallyworld: a hangout is probably warranted, I'm currently in another conversation, so I don't want to be too distracted to talk both yet
<wallyworld> jam: no problem, just ping me if you are free later
<perrito666> sinzui: isn t https://bugs.launchpad.net/juju-core/+bug/1475724 fixed?
<mup> Bug #1475724: UniterSuite.TestUniterRelations <blocker> <ci> <ppc64el> <regression> <unit-tests> <windows> <juju-core:In Progress> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1475724>
<mgz> perrito666: probably not, but it's not a completely consistent failure so it's hard to say
<perrito666> mgz: I retracted update-status patch
<perrito666> so at least That should be fixed (not the underlying problem)
<mgz> well... maybe we should mark it as fixed and see
<mgz> the batch of failures we have on it from the 16th rev 5b05541 are probably due to that branch landing
<mgz> at least the trusty one I just looked at was a failure due to having one rather than two update-status calls
<perrito666> mgz: this is master?
<mgz> we currently have an updated jes-cli being tested that has got passes on tests that were failing pretty reliably
<mgz> perrito666: yup, and feature branches from master
<perrito666> mgz: odd, this landed last night
<mgz> perrito666: I'm looking at the history of failures since the 16th
<mgz> not a CI run with last night's change
<mgz> master hasn't been run with your revert in yet it seems
 * perrito666 is a human popsicle
<mgz> you're melting in the sun?
<mgz> falling off your stick?
<perrito666> mgz: argentinian buildings are old, completely made of brick and certainly not prepared for cold
<mgz> ah yes, you are in the other hemisphere
<mgz> I guess you don't want to hear about how sunny and hot it is here then? :)
<perrito666> mgz: you live in Britain, you probably call "sunny and hot" the same thing I am calling freezing cold
<mgz> :P
<perrito666> mgz: it's 4C which must be a typical hot summer morning for the uk, right?
<perrito666> doesn't go support a way for us to have blobs of text in something other than a const?
<perrito666> I am reviewing a go file that contains 2 lines of go and a lot of powershell
<dooferlad> perrito666: I am sure that the right way of having lots of powershell is to read in a powershell file
<perrito666> dooferlad: I am sure of it, the problem is how do you distribute that?
<dooferlad> perrito666: other than distributing the .ps1?
<perrito666> dooferlad: this is something I wish could be embedded as a resource on compile time
<perrito666> dooferlad: yes
<perrito666> dooferlad: the thing is, our software should not depend on packaging particularities
<dooferlad> perrito666: can we really not just distribute the file? That seems kind of wrong.
<perrito666> dooferlad: dunno, I would say not as we have the same thing for bash at some points
<dooferlad> perrito666: sounds like something worth starting a conversation about. I would hope that after we move to go 1.4 we could use go generate (http://blog.golang.org/generate) to pull in external files into consts
<perrito666> dooferlad: take a look http://reviews.vapour.ws/r/2199/ I accept suggestions
<mgz> bogdanteleaga: see last comment in bug 1287949
<mup> Bug #1287949: {image,tools}-metadata-url not usable w/ ec2 provider <arm64> <config> <hs-arm64> <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1287949>
<mgz> bogdanteleaga: poké for hangout
<perrito666> ericsnow: please get the blessing from fwereade too on my review
<ericsnow> perrito666: k
<ericsnow> perrito666: and thanks :)
 * fwereade just wrote some disgustingly smug test code, and can't figure out if it's good or bad: http://paste.ubuntu.com/11914787/
<mgz> fwereade: cuteness depends on map ordering? :)
<fwereade> mgz, the cuteness depends on the order I wrote them in, but not on any runtime properties AFAIAA
<perrito666> fwereade: cute to read, might be confusing to debug
<mgz> it's always what the failures would look like that I think about
<fwereade> mgz, I'm not sure the word choice helps or hinders much -- it's deliberately a tangled racy write because that's what I'm testing
<bogdanteleaga> can anybody check this out? http://paste.ubuntu.com/11914363/
<bogdanteleaga> I'm trying to get aws with a custom image up
<bogdanteleaga> it seems to detect the custom image metadata and then throw it away and resort to the official one again
<mgz> fwereade: to be clear, it's fine as written, I'm just chatting about paint
<fwereade> mgz, cool -- I'm still on the fence for some reason
<mgz> bogdanteleaga: did you add in the unsigned selection hack?
<fwereade> mgz, mainly because I look at it and I can see me admiring my own cleverness and that's a bit unattractive ;)
<bogdanteleaga> mgz: nope, because from what I gathered from the bug it wouldn't work for either tools or images
<bogdanteleaga> mgz: and tools work for me
<mgz> then I think you need to add in more logging in the simplestreams code till you understand why it's not selecting an image from the first stream
<bogdanteleaga> it does seem to select it at first
<mgz> sure, but it's not matching an image, need to work out why
<bogdanteleaga> if you look at line 7 it does match it, it just fails after that and I think after the failure it tries again with the official sources
<bogdanteleaga> can't seem to understand why it fails though
<mgz> bogdanteleaga: the 1.24.3 - 1.24 tip upgrade does seem to be broken by the registry thing
<mgz> bogdanteleaga: http://data.vapour.ws/juju-ci/products/version-2906/maas-1_8-upgrade-win2012hvr2-amd64/build-62/machine-1.log.gz
<wwitzel3> katco, ericsnow: ping if you need me, I think this --extend bug is the last hurdle for me *fingers crossed*
<ericsnow> wwitzel3: cool
<katco> wwitzel3: ty
<mgz> bogdanteleaga_: ah, you dropped
<mup> Bug #1473517 changed: juju environment not usable after the upgrade <juju-core:Fix Released by wallyworld> <juju-core 1.22:Fix Committed by wallyworld> <juju-core 1.24:Fix Released by wallyworld> <https://launchpad.net/bugs/1473517>
<mgz> bogdanteleaga: http://data.vapour.ws/juju-ci/products/version-2906/maas-1_8-upgrade-win2012hvr2-amd64/build-62/machine-1.log.gz
<mgz> sinzui, ocr: http://reviews.vapour.ws/r/2228
<perrito666> mgz: youve got to be kidding me right?
<perrito666> that is 7 pages of reviews
<mgz> perrito666: you are not intended to review-review it
<mgz> just check I haven't screwed up the merge
<mgz> the reviews happened in the landings onto the jes-cli branch
<perrito666> mgz: rb is failing for some files
<mgz> and ci has done a full run on the branch
<mgz> perrito666: you probably need to use github, rb is bad with moves
<sinzui> perrito666: I added a comment about the testing done
<sinzui> perrito666: http://reports.vapour.ws/releases/2906 is blessed and confirms https://bugs.launchpad.net/juju-core/+bug/1475724 is fixed
<mup> Bug #1475724: UniterSuite.TestUniterRelations <blocker> <ci> <ppc64el> <regression> <unit-tests> <windows> <juju-core:In Progress> <juju-core jes-cli:Fix Released> <https://launchpad.net/bugs/1475724>
<perrito666> how wonderful it would be if the sign-on would redirect me to the link I was going to
<perrito666> sinzui: mgz: the evidence seems to say it is all ok, but I don't think I could notice a merge mishap in 139 files
<sinzui> :/
<perrito666> sinzui: I think we have been careful enough to be able to say this is ok to merge
<perrito666> bbl, lunch
<mup> Bug #1300692 changed: relation-get fails with return code 2 <charm> <hooks> <relations> <juju-core:Invalid> <https://launchpad.net/bugs/1300692>
<fwereade> katco, if you're OCR, can I point you at http://reviews.vapour.ws/r/2229/ and http://reviews.vapour.ws/r/2230/ ?
<fwereade> katco, 2229 is a minor interface change, already reviewed elsewhere, but maybe of interest to you
<fwereade> katco, 2230 actually uses it
<mup> Bug #1300692 opened: relation-get fails with return code 2 <charm> <hooks> <relations> <juju-core:Invalid> <https://launchpad.net/bugs/1300692>
<mup> Bug #1300692 changed: relation-get fails with return code 2 <charm> <hooks> <relations> <juju-core:Invalid> <https://launchpad.net/bugs/1300692>
<katco> fwereade: not ocr today... if i have time i'll give it a look later :)
<fwereade> katco, ...hmm, apparently it's not wednesday
<fwereade> who knew?
<fwereade> ahem
<katco> haha
<fwereade> perrito666, ^^
<wwitzel3> ericsnow: so I've narrowed it down to command.go not properly applying --extend
<wwitzel3> ericsnow: which is highly probable since we don't have tests for that
<ericsnow> wwitzel3: k
<wwitzel3> ericsnow: so I'm adding some tests and hopefully they will point me to the problem :)
<natefinch> katco: wwitzel3, ericsnow: back
<ericsnow> wwitzel3: cool
<wwitzel3> ericsnow: as I'm attempting to make tests for this, I think I am getting a sense of why it wasn't tested :P
<ericsnow> wwitzel3: :)
<katco> wwitzel3: hey where is that charm at?
<bdx> charmers, core: how do I access predefined hooks like relation-changed or relation-joined in python when writing a charm? Must I import those charmtools or charmhelpers modules?
<marcoceppi> bdx: I'm not sure I understand what you're asking
<bdx> marcoceppi: for example, in the tutorial https://jujucharms.com/docs/stable/authors-charm-writing
<bdx> hooks are referenced like "db_host=`relation-get private-address`"
<bdx> marcoceppi: how can I access similar hooks in python?
<marcoceppi> bdx: I see, those aren't hooks, those are hook-tools
<marcoceppi> hooks are the relation-joined, relation-changed, config-changed, etc
<marcoceppi> bdx: there's a charm-helpers library, charmhelpers.core.hookenv, which has all those available as methods
<marcoceppi> bdx: https://pythonhosted.org/charmhelpers/api/charmhelpers.core.hookenv.html
<marcoceppi> bdx: relation_get will be what you're looking for
<bdx> marcoceppi: ok, that's sweet..... I think I was just a little confused about using hooks between bash and python
<katco> wwitzel3: natefinch: ericsnow: i'm back. what can i help with?
<ericsnow> katco: not sure
<katco> wwitzel3: please lmk when the charm is ready to go. i think we're blocked on that atm
<natefinch> katco: I was thinking that ComponentStatus should be a map[string]interface{}; that way a component can just dump a struct into it, and have strongly typed values that automatically get converted in and out.
<katco> natefinch: ok, noted. let's discuss more after the demo
<natefinch> katco: ok
<katco> natefinch: can you still manage your piece w/o doing that?
<natefinch> katco: yeah
<mup> Bug # changed: 1370896, 1430245, 1461354, 1463480, 1465115, 1468653, 1469130, 1469199, 1471332, 1472014, 1472729, 1475163
<marcoceppi> Can I just make a simple request? I'm not sure who thought juju scp would be better over rsync, but it's not, it should be juju rsync ;)
<xwwt_> sinzui: Can we delay for 1/2 hr and meet at 3:30?
<sinzui> xwwt_: sure
<perrito666> btw, I am back
<katco> wwitzel3: ping?
<wwitzel3> katco: yep, I'm still trying to figure out why the --extend args are not being properly populated
<katco> wwitzel3: need any help? running out of time
<wwitzel3> katco: other people exploring it would be fine, I'm just walking through and re-compiling different logging outputs right now trying to find out how far it is actually getting
<natefinch> btw, I'm trying to test the status output, just whipping up a test charm that uses the docker plugin
<wwitzel3> katco: right now I'm poking around in process/context in the launch command and launch_test
<katco> ericsnow: natefinch: wwitzel3: let's plan on touching base in moonstone in 15m or so?
<ericsnow> katco: k
<wwitzel3> katco: k
<natefinch> ditto
<thumper> OMG, is that a bless on jes-cli I see?
<cherylj> thumper: yeah!
 * fwereade cheers at thumper, wonders if he'd like to talk environment lifecycles shortly? am approaching sleepiness but also sort of want to stay up and talk to wallyworld
<thumper> fwereade: on calls for a while now
<thumper> fwereade: but yes, at some stage
<thumper> probably not right now
<fwereade> thumper, sgtm, I'm not *especially* in the mood myself right now :)
<wallyworld> fwereade: i have a meeting in 10, should be short hopefully
<fwereade> wallyworld, cool, ping me when you're done, I may or may not still be both present and coherent
<wallyworld> ok
<wallyworld> or we can talk at yuor SOD
<wallyworld> we have one scheduled
<xwwt> sinzui: ping
<menn0> fwereade: I'm forward porting this https://github.com/juju/juju/pull/2801 but of course everything has changed in master
<menn0> fwereade: I'm going to remove the insertWithoutEnvironment collection attribute. yell at me if you don't think that's right.
<fwereade> menn0, that's exactly right, and it's what I did when I backported all that stuff to 1.24
<menn0> cool
<fwereade> menn0, I think if you check 1.24 the multiEnvRunner already does the right thing
<menn0> fwereade: well yes, I removed the always-assert-on-env-life feature from there (that's what 2801 is about)
<fwereade> menn0, I think I had a point but it escapes me right now
<menn0> fwereade: np :)
<perrito666> why is 1475724 fix committed yet still blocking?
<_thumper_> perrito666: possibly  because the blocker tag is still there and the revision hasn't passed the relevant CI test?
<perrito666> thumper: ah makes sense
<alexisb> sinzui, ^^^
 * perrito666 was falling asleep on the keyboard, decided to cook to wake up a bit, now is falling asleep and laptop has garlic smell
<sinzui> perrito666: a fix being committed is not CI's proof that the fix works and doesn't introduce other regressions.
<sinzui> perrito666: alexisb thumper : CI is almost done proving that the fix works and will open master with jes-cli merged
<sinzui> Joyent is very ill.
<menn0> wallyworld: is http://reviews.vapour.ws/r/2225/ a back/forward port?
<wallyworld> menn0: forward port, already landed in 1.24, waiting for master to unblock
<menn0> wallyworld: cool, i'll ignore it unless you say otherwise
<wallyworld> np
<sinzui> Who wants to be first to break master
<perrito666> memememme
<sinzui> perrito666: thumper alexisb wallyworld : CI blessed master (with jes-cli merged) and it closed the blocking bug. We can merge now
<anastasiamac_> sinzui: \o/
<perrito666> mail client with notification ftw, instant bless notice
<perrito666> :p
<sinzui> Don't panic (as I did) about the email with all the failed joyent jobs. Joyent is/was very ill. Machines were not being provisioned in our region
<wallyworld> yay
<mup> Bug #1475724 changed: UniterSuite.TestUniterRelations <blocker> <ci> <ppc64el> <regression> <unit-tests> <windows> <juju-core:Fix Released> <juju-core jes-cli:Fix Released> <https://launchpad.net/bugs/1475724>
<thumper> sinzui: is that rev going to be 1.25-alpha1?
 * wallyworld off to pick up newly repaired coffee machine :-D bbiab
#juju-dev 2015-07-22
<menn0> ericsnow: ping?
<ericsnow> menn0: hey
<menn0> ericsnow: quick question... why is the vsphere provider set up to not build under gccgo or go 1.2?
<ericsnow> menn0: dependencies
<menn0> ericsnow: i'm reviewing the rackspace provider PR and the sshclient from vsphere has been moved and generalised to support it and the build tags came along
<ericsnow> menn0: (govmomi requires 1.3+)
<menn0> ericsnow: but the ssh client should work fine?
<ericsnow> menn0: yep
<menn0> ericsnow: ok, i'll mention that the build tag should go
<ericsnow> menn0: it's just the Go bindings to the vsphere API that were the problem
<menn0> ericsnow: thanks
<ericsnow> menn0: np
<menn0> ericsnow: actually, one more thing, the sshclient code assumes the host is Linux so won't work with windows hosts. is this ok/expected? (i know nothing about vsphere)
<ericsnow> menn0: for vsphere I expect it's okay
<menn0> ericsnow: any idea when it comes to rackspace? this PR is using the same code there.
<ericsnow> menn0: rackspace is openstack under the hood
<menn0> ericsnow: and openstack does appear to support windows hosts, but I don't know about rackspace
<menn0> ericsnow: i'l raise it as a possible issue. thanks.
<ericsnow> menn0: k, cool
<natefinch> katco, ericsnow, wwitzel3: evening
<katco> natefinch: o/
<katco> natefinch: i'm in moonstone. eric and wayne are taking a break
<wallyworld> thumper: have you forward ported bug 1468581?
<mup> Bug #1468581: juju bootstrap fails - Waiting for API to become available ERROR cannot get all blocks: EOF <api> <bootstrap> <oil> <juju-core:In Progress by thumper> <juju-core 1.24:Fix Released by thumper> <https://launchpad.net/bugs/1468581>
<thumper> I think so...
<thumper> let me check
<wallyworld> axw: and bug 1474614
<mup> Bug #1474614: rsyslog connections fail with certificate verification errors after upgrade to 1.24.2 <regression> <juju-core:Triaged by axwalk> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1474614>
<wallyworld> and there's a few from menno as well
<wallyworld> menn0 even
<axw> wallyworld: from the bug, "Forward-porting fix to 1.25 is currently blocked on other changes to the rsyslog worker being forward ported (i.e. the change that the upgrade step is concerned with)."
<wallyworld> ah ok
<wallyworld> didn't read the bug :-)
<axw> wallyworld: I'll look at doing that after I finish up what I'm working on
<wallyworld> just the milestone page
<wallyworld> ty
<wallyworld> we're aiming to cut 1.25 alpha 1 real soon
<wallyworld> hence the follow up
<menn0> wallyworld: i'm planning on doing the forward port for bug 1474195 this afternoon
<mup> Bug #1474195: juju 1.24 memory leakage <cpec> <deployer> <performance> <regression> <juju-core:In Progress by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1474195>
<wallyworld> awesome tyvm
<menn0> wallyworld: actually, I lie. I have the PR ready for that but want Will's review (have emailed him)
<thumper> wallyworld: no
<menn0> wallyworld: i'm doing bug 1474606 this afternoon
<wallyworld> menn0: and the other 2 against you?
<mup> Bug #1474606: $set updates may clear out the env-uuid field <juju-core:In Progress by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1474606>
<thumper> wallyworld: I must have forgotten
<thumper> bad me
 * wallyworld spanks thumper
<thumper> oi
<thumper> none of that
<menn0> wallyworld: bug 1474588 is nowhere near investigated let alone fixed
<mup> Bug #1474588: Many hook failures after upgrade <regression> <juju-core:Triaged by menno.smits> <juju-core 1.24:Triaged by menno.smits> <https://launchpad.net/bugs/1474588>
<wallyworld> menn0: the other one i was referring to is marked in progress, bug 1457225
<wallyworld> wake up mup bug 1457225
<wallyworld> oh right, you answer that
<wallyworld> sorry
<menn0> wallyworld: mup knew that :)
<mup> Bug #1476895 opened: ec2: auto-created EBS volumes are not tagged <ec2-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1476895>
<mup> Bug #1476895 changed: ec2: auto-created EBS volumes are not tagged <ec2-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1476895>
<katco> ericsnow: wwitzel3: if by any chance you're lurking around, natefinch and i are in moonstone
<thumper> menn0: ignore http://reviews.vapour.ws/r/2234/
<axw> wallyworld: would you PTAL at https://github.com/go-goose/goose/pull/12
<wallyworld> sure
<menn0> thumper: happily ignoring :)
<wallyworld> axw: +1
<axw> wallyworld: ta
<menn0> wallyworld: cherry picking the fixes for bug 1474606 is proving difficult because it depends on the fixes for bug 1474195 (which i'm waiting on a review from will on)
<mup> Bug #1474606: $set updates may clear out the env-uuid field <juju-core:In Progress by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1474606>
<mup> Bug #1474195: juju 1.24 memory leakage <cpec> <deployer> <performance> <regression> <juju-core:In Progress by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1474195>
<menn0> wallyworld: can it wait until tomorrow?
<wallyworld> menn0: for both? can wait if necessary
<wallyworld> i'll ping william later to make sure he reviews
<menn0> wallyworld: thanks. the second one doesn't need will's review, just the one that's up.
<menn0> wallyworld: i've emailed him about it as well.
<wallyworld> ty
<mup> Bug #1457797 changed: Juju bootstrap doesn't work behind proxy <bootstrap> <juju> <juju-core:Expired> <https://launchpad.net/bugs/1457797>
<mup> Bug #1457797 opened: Juju bootstrap doesn't work behind proxy <bootstrap> <juju> <juju-core:Expired> <https://launchpad.net/bugs/1457797>
<mup> Bug #1476918 opened: juju switch doesn't work without an environments.yaml file <cli> <jes> <juju-core:Triaged> <https://launchpad.net/bugs/1476918>
<axw> wallyworld: another small review if you don't mind: https://github.com/go-amz/amz/pull/56
<wallyworld> sure
<wallyworld> jam: morning, would you have time soonish to talk about resources? in maybe 15 minutes when anastasia gets back from school pickup?
<jam> wallyworld: sure, are you available now?
<wallyworld> jam: almost, just waiting to hear back from anastasia who is caught in traffic. may have to defer if it gets too close to my school pickup time
<jam> k
<wallyworld> jam: you free now?
<jam> wallyworld: sure
<wallyworld> https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand
 * thumper looks around for urulama
<mup> Bug #1476996 opened: Network communication failed during juju bootstrap <juju-core:New> <https://launchpad.net/bugs/1476996>
<mup> Bug #1477010 opened: provider/openstack: volumes may not attach if instance takes a long time to provision <openstack-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1477010>
<dooferlad> fwereade: hangout?
<frankban_> ocr: quick deps update http://reviews.vapour.ws/r/2238/ thanks!
<fwereade> wallyworld, you still around?
<fwereade> wallyworld, I'm wondering whether we need to read the whole previous status doc every time we set
<fwereade> wallyworld, could we just write the same data to both the watched and the raw collections every time?
<fwereade> wallyworld, (and fwiw I think it might be more reliable?)
 * fwereade was doing some python the other day and just spent *far* too long wondering why gofmt didn't like "def"
<natefinch> lol
<bogdanteleaga> wallyworld: https://bugs.launchpad.net/juju-core/+bug/1287949
<mup> Bug #1287949: {image,tools}-metadata-url not usable w/ ec2 provider <arm64> <config> <hs-arm64> <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1287949>
<wallyworld> bogdanteleaga: by design, we currently only allow signed image metadata for ec2. i don't know the reason for that decision
<wallyworld> if the metadata were to be signed, it would work
<bogdanteleaga> yes, but as per the guy above me in the bug report
<bogdanteleaga> there's no way of specifying a key
<bogdanteleaga> so it has to be official?
<fwereade> wallyworld, bogdanteleaga: nor do I, but it crosses my mind that we want to be sure that anything we think is the official source is signed with our key
<wallyworld> fwereade: that is true, but for ec2 it currently requires user specified image metadata to be signed
<bogdanteleaga> definitely makes sense, but then there's no way of specifying custom images
<fwereade> wallyworld, bogdanteleaga: I can't see any reason not to allow either additional keys or to allow unsigned -- so long as we check our source
<wallyworld> and i don't know why
<fwereade> wallyworld, nor do I but I strongly suspect it was an all-we-had-time-for deal
<wallyworld> maybe because - can people add custom images to ec2?
<fwereade> wallyworld, yes, and IMO that's a good thing
<wallyworld> maybe the reasoning was, well if you can't add custom images....
<bogdanteleaga> no, but there's no way of using an ec2 AMI that's not specified on cloud-images.ubuntu.com
<bogdanteleaga> i.e. a windows ami
<wallyworld> this was all done prior to windows :-)
<wallyworld> so the code needs changing for windows
<bogdanteleaga> yeah, I'm aware and I do understand
<fwereade> wallyworld, bogdanteleaga: I strongly agree that is a bug and it's symptomatic of a wider one -- we have metadata generate-image, but a cloud configured to never work with it?
<bogdanteleaga> was just signaling it's an actual bug and it's getting more important
<wallyworld> fwereade: ec2 was configured to only use signed metadata before generate-metadata came along
<wallyworld> generate metadata was only for openstack
<wallyworld> for private clouds
<bogdanteleaga> and I wasn't aware it was a design decision until now
<wallyworld> where there were no official images
<wallyworld> bogdanteleaga: it made sense at the time because there were no unofficial images possible
<fwereade> wallyworld, heh, not in my mind, I always thought that the ability to specify custom images was important, just that the way we were originally doing it was badwrongevil
<wallyworld> fwereade: at the time, everything was based on simplestreams - that was our language
<wallyworld> and only openstack had the need for custom images
<wallyworld> and ec2 i think deliberately precluded them
<fwereade> wallyworld, custom images are super-cool all over the place
<bogdanteleaga> ok, so for a fix: do we want to allow unsigned or to allow specifying a key?
<fwereade> wallyworld, if we *just* let people do this we can have the big data folks deploying an image with their JVM already in place
<fwereade> wallyworld, and cut their deployment times like anything
<wallyworld> fwereade: i agree, but at the time it was impossible to have custom images for aws
<wallyworld> so why support something impossible
<fwereade> wallyworld, I am certain that I was building custom images for aws before even ensemble was a thing
<wallyworld> i'm guessing - maybe it was a policy decision
<wallyworld> not sure now
<wallyworld> bogdanteleaga: allow unsigned i think
<wallyworld> to be consistent with openstack
<bogdanteleaga> wallyworld: so I'm guessing ec2 is asking for signed only right? I was disabling it for everything for testing
<wallyworld> fwereade: to answer your other question - i'm not sure why the whole status record is written each time - hysterical reasons?
<wallyworld> bogdanteleaga: from memory there's a bool constant
<fwereade> wallyworld, no, read each time, for copying into hysterical statues
<fwereade> wallyworld, bogdanteleaga: it *would* IMO be best to allow specification of acceptable keys as well
<wallyworld> fwereade: that's a bigger change, but sure
<wallyworld> but do we really need that straight up?
<fwereade> wallyworld, bogdanteleaga: and that feels like something that shouldn't take too much per-env configuration
<fwereade> wallyworld, I am encouraging bogdanteleaga to do it, because apart from anything else I think security-conscious people *will* want to use signed simplestreams, and we're not going to sign the metadata for every OS we can deploy
<wallyworld> i agree, just ensuring it's the right time to do it
<bogdanteleaga> how about I enable the unsigned one, and add a todo+bug report for doing signed with key
<fwereade> wallyworld, bogdanteleaga: that sgtm
<wallyworld> that's what i was expecting to happen
<fwereade> wallyworld, bogdanteleaga: indeed
<fwereade> bogdanteleaga, please talk it through with gsamfira_ though, I think it will be relevant if we expect Serious People to deploy a bunch of different OSs with juju
<fwereade> wallyworld, anyway re status
<fwereade> wallyworld, what I really want to do
<fwereade> wallyworld, is have every setstatus method be
<fwereade> go setHistoricalStatus(key, doc) // logs but ignore errors
<fwereade> ...and then to just set txnal status as usual
<fwereade> wallyworld, if I *just* do that, we'll miss one historical value around an upgrade
<wallyworld> fwereade: looking at the code, it seems that it is excluding the actual current status from the history?
<fwereade> wallyworld, and if I take on an upgrade step as well I will feel I've strayed too far from "enable leadership"
<fwereade> wallyworld, hmm, hadn't looked there
<fwereade> wallyworld, any particular reason?
<fwereade> wallyworld, seems surprising
<wallyworld> maybe "history" was taken as literally meaning in the past
<wallyworld> but yeah surprising
<fwereade> wallyworld, <pedant>latest known status is still part of history because it was set in the past
<wallyworld> fwereade: i can't see a reason not to make the change you suggest
<fwereade> wallyworld, cool, thanks
<fwereade> wallyworld, oh hell, status history is still txnal in 1.24
<fwereade> wallyworld, shouldn't it be backported?
<wallyworld> damn, i'll ask horatio to backport - i thought it was done there
<fwereade> wallyworld, no worries, thanks
<wallyworld> sorry
<fwereade> wallyworld, that's the trouble with branches, so many exciting opportunities to miss stuff
<wallyworld> yeah
<wallyworld> and we branched 1.24 off too early
<wallyworld> way too early
<fwereade> wallyworld, that too
<fwereade> wallyworld, if you're still here: if I were to make SetStatus accept a StatusInfo, and apiserver responsible for setting the time, would that be a Bad Thing?
<fwereade> wallyworld, I probably won't either way, am trying not to distract myself
<wallyworld> i'd rather apiserver not contain any business logic
<wallyworld> business logic should be in a separate service layer
<wallyworld> other services may call the status service to set the status
<wallyworld> and we'd want the status service to set the time
<fwereade> wallyworld, so, apiserver->model->persistence, and the rules in model?
<wallyworld> those other services may be co-located, hence the network layer would be short circuited
<wallyworld> yes
<wallyworld> i'd characterise it as apiserver->business_services->domain_model->persistence
<wallyworld> business services sit on an enterprise bus
<fwereade> well, that implies the model knowing about persistence rather than vice versa, but that's an aside
<wallyworld> they operate on a domain model, the model is independent of persistence
<fwereade> right
<wallyworld> yea, the linear representation doesn't work
<wallyworld> and the enterprise bus would provide pubsub, rpc, service discovery etc
<fwereade> wallyworld, so something akin to https://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html
<fwereade> wallyworld, or not?
<wallyworld> looking
<fwereade> wallyworld, because the representation as services doesn't *necessarily* fit
<wallyworld> yeah, my concept doesn't quite fit that particular model
<fwereade> wallyworld, and my current background thoughts are directed towards extracting an in-memory model layer, and moving business-rule responsibilities into there from state (but state still referencing the model)
<fwereade> wallyworld, the main driver for this is the uncommitted-state/transactionality stuff
<wallyworld> agree with the above, need to think about the bit in brackets
<wallyworld> by state, do you mean "persistence"?
<fwereade> wallyworld, in which I contend that having a dependency-free model representation, with a *single* and authoritative implementation of the business rules, is critical for composing and checking sanity of large logical operations without reference to db state
<fwereade> wallyworld, yeah
<fwereade> wallyworld, although it's sorta tricky because it's still going to have a bunch of consistency concerns that are hard enough to deserve thought
<fwereade> wallyworld, so, "state" as shorthand for "what's in state today, referencing business rules defined elsewhere"
<fwereade> state without the business rules
<wallyworld> +1 for getting business rules right out of state
<wallyworld> that would be a big win
<wallyworld> for not that much effort in the big scheme of things
<wallyworld> ideally, persistence would be abstracted so it were pluggable
<wallyworld> that would help ensure a clean design
<fwereade> wallyworld, well, if you have insights, I want you to share them, because... *all* our business rules are encoded in dynamically generated strings of transaction operations
<wallyworld> fwereade: that's an artifact of our current implementation, and sadly ties us to mongo, and sadly weaves separate concerns together
<fwereade> wallyworld, and I need a sane and comprehensible model of the world, its changes, and the conditions those changes require -- that I can effectively render down into txn operations
<wallyworld> fwereade: we should be able to hand off model changes to a persistence layer, over a well defined interface boundary
<wallyworld> we should nut this out next week
<wallyworld> when i am less tired
<fwereade> wallyworld, sgtm :)
<wallyworld> flagging a bit atm
<fwereade> wallyworld, sorry to keep you up
<wallyworld> np, only 10pm but i'm tired
<wallyworld> brain needs rest
<wallyworld> hard to give decent answers and think deeply about complex issues
<wwitzel3> it's best to just avoid it ;)
<perrito666> morning
<TheMue> heya perrito666
<jam> fwereade: did I completely miss you guys on actions?
<jam> jw4: ^^
<jam> I realize I'm late coming out of the last meeting.
<fwereade> jam, heyhey, jw4 is having mic troubles
<jam> ah
<natefinch-afk> nick natefinch
<natefinch> heh
<TheMue> *lol*
<natefinch> man, I *love* that we made workload processes use plugins and not hardcoded stuff from juju-core.  It makes it trivial to produce specific fake technology plugins that mimic real ones without having to muddy juju-core codebase
<mgz> mattyw: bug 1477157
<mup> Bug #1477157: Broken windows dependencies <blocker> <ci> <regression> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1477157>
<mup> Bug #1477157 opened: Broken windows dependencies <blocker> <ci> <regression> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1477157>
<mattyw> mgz, ah crap, ack, I'll fix it
<mgz> ta!
<mattyw> mgz, https://github.com/juju/juju/pull/2860
<dooferlad> TheMue: This is what I have so far for CreateSpace: https://github.com/juju/juju/compare/net-cli...dooferlad:net-cli-api-spaces-create?expand=1
<mgz> mattyw: shipit
<TheMue> dooferlad: one moment, will take a look in a few secs
<dooferlad> TheMue: I am not doing any param checking in CreateSpace on the client side since the CLI already does that. This is also the case in api/action/client.go, but I wanted to start that conversation early.
<TheMue> dooferlad: spaces.go line 34 has a typo
<dooferlad> TheMue: wasn't expecting a full code review, I just want to discuss client side params checking.
<TheMue> dooferlad: only seen it while flying over the code, eagle eyes *cough* *cough*
<TheMue> dooferlad: what do you think, could it make sense to rename the params type now, from add to create? simply using the opportunity
<dooferlad> TheMue: sure
<TheMue> dooferlad: and when adding the subnetIds as tags into the params, don't build it with append but make it with the right length and use the index
<TheMue> dooferlad: rest looks fine so far, I maybe would move testCreateSpace as a closure into TestCreateSpace.
<fwereade> perrito666, did wallyworld mention backporting status-history txnality to 1.24?
<TheMue> dooferlad: a very good example for this kind of testing is dimitern's instancepoller
<fwereade> perrito666, should be simple, 1.24 has state.Database now
<perrito666> fwereade: yup. already assigned me a card :)
<fwereade> perrito666, cool
<fwereade> perrito666, would you ping me when it's ready to review, I will at some stage become blocked on it
<perrito666> fwereade: sure
<perrito666> fwereade: want to tell me more?
<fwereade> perrito666, cheers
<fwereade> perrito666, I have to do gated service.SetStatus
<fwereade> perrito666, which means I need to at least touch the status-history storage
<fwereade> perrito666, and I would rather someone else had that change in place before I got involved in the details
<perrito666> fwereade: ill do as fast as possible :)
<fwereade> perrito666, oh, and if you're doing that, would you please make the updateStatusHistory signature match all the other similar funcs? (st, globalkey, doc)
<perrito666> fwereade: k
<dooferlad> TheMue: http://reviews.vapour.ws/r/2241/
<TheMue> dooferlad: reviewed
<dooferlad> TheMue: Thanks!
<TheMue> dooferlad: yw
<mattyw> mgz, can we close this yet? https://bugs.launchpad.net/juju-core/+bug/1477157
<mup> Bug #1477157: Broken windows dependencies <blocker> <ci> <regression> <windows> <juju-core:In Progress by mattyw> <https://launchpad.net/bugs/1477157>
<mattyw> bogdanteleaga, yeah - totally forgot to do the GOOS=windows thing
<mattyw> bogdanteleaga, copied and pasted an old script :(
<mgz> mattyw: really the process is to let ci bless the branch and close the bug itself
<mattyw> mgz, ack
<bogdanteleaga> mattyw: heh, I remember doing that a few times, but I got scared when I did git diff :)
<mattyw> bogdanteleaga, I saw it in the diff, and got over excited by deps being removed
<bogdanteleaga> mattyw: haha
<dooferlad> TheMue: could you take a look again at that review?
<mgz> okay, new run on 1.24 branch has started
<perrito666> if you ever feel bad about a hack https://twitter.com/adrianchm/status/623485623538712576
<natefinch> perrito666: lol, that's amazing
<mup> Bug #1476996 changed: Network communication failed during juju bootstrap <bootstrap> <mongodb> <juju-core:New> <https://launchpad.net/bugs/1476996>
<mgz> juju upgrade on windows just passed for the first time.
 * perrito666 cries
<wwitzel3> ericsnow: ping
<ericsnow> wwitzel3: hey
<wwitzel3> ericsnow: never mind, I'm going to look at the board
<ericsnow> wwitzel3: k
<wwitzel3> ericsnow: we have an issue with the charm in that without having destroy available .. we can't really do much other than add-relation
<wwitzel3> katco: ^
<katco> wwitzel3: ah, because docker containers are static and need to be torn down anytime something changes?
<wwitzel3> katco: right
<ericsnow> wwitzel3: you could add a basic proc-destroy script to the charm that calls juju-process-docker destroy...
<ericsnow> wwitzel3: in case you missed it, I didn't merge that --extends patch (we didn't get it quite right)
<katco> wwitzel3: ericsnow: natefinch: wondering how critical it is to get that right just for the demo. we probably only need the container to spin up once, yeah?
<katco> wwitzel3: ericsnow: natefinch: i.e. i don't know if it's worth the time to write a destroy script
<ericsnow> katco: well, the script would just be a one-liner call to the plugin
<wwitzel3> I don't even have to make a script
<wwitzel3> I can just call it directly from the charm
<wwitzel3> which I just tested, it works
<ericsnow> wwitzel3: true
<katco> wwitzel3: pft. over achiever
<wwitzel3> I'm doing an end to end of the latest charm now, no intervention, will see how it goes
 * katco crosses fingers
<wwitzel3> katco: it worked here, so please give it a try
<wwitzel3> ericsnow: how did we get the parseUpdates wrong? ..
<wwitzel3> ericsnow: nvm I'll look at the review
<katco> wwitzel3: cool, pull from gh?
<wwitzel3> katco: yep
<ericsnow> wwitzel3: we weren't supposed to apply the updates to c.info
<ericsnow> wwitzel3: I left a review comment
<mup> Bug #1477263 opened: Enabling allow-lxc-loop-mounts can cause error when destroying an environment <juju-core:New> <https://launchpad.net/bugs/1477263>
<wwitzel3> ericsnow: so when I implement the changes you suggested, our test suite fails all over the place :(
<ericsnow> wwitzel3: :(
<ericsnow> wwitzel3: likely due to s.setMetadata calls
<ericsnow> wwitzel3: I can take over that patch if you need to work on the charm
<wwitzel3> ericsnow: that would be great
<ericsnow> wwitzel3: k
<katco> wwitzel3: hm. i think i'm doing something wrong
<katco> wwitzel3: i deploy wordpress-wpm, i deploy mysql
<katco> wwitzel3: i relate them
<katco> wwitzel3: but wordpress-wpm is stuck in the install hook
<wwitzel3> katco: the install can take a while .. since we pull the image down
<wwitzel3> katco: how long has it been stuck?
<katco> wwitzel3: most likely longer than it should be
<katco> wwitzel3: like... 15m maybe? i dunno i'm hopping around a lot
<katco> wwitzel3: actually more like 25 doing some timestamp math
<katco> wwitzel3: what order do you do things in?
<wwitzel3> katco: it shouldn't matter :)
<katco> wwitzel3: very true :)
<wwitzel3> katco: sounds like the install hook is getting hung up on the pull
<katco> wwitzel3: debug-log just spams: unit-wordpress-wpm-0[2980]: 2015-07-22 19:40:44 INFO unit.wordpress-wpm/0.install logger.go:40
<katco> wwitzel3: empty log message
<katco> wwitzel3: so i'm not missing any steps? just need to relate the two, yeah?
<wwitzel3> yep
<wwitzel3> katco: I'm doing a fresh deploy, see if I get that same install hook hanging issue
<katco> wwitzel3: k, i'm retrying as well
<katco> wwitzel3: weird... docker.io is consuming most of the cpu
<katco> wwitzel3: but the image isn't listed
<katco> wwitzel3: any ideas why docker.io would be spinning? never seen that
<wwitzel3> katco: no idea, my install hook is spinning as well
<wwitzel3> katco: I am wondering if there is just not enough time between installing the docker.io service and starting it and issuing the pull
<wwitzel3> katco: going to try it with a sleep, see if that helps
<katco> wwitzel3: doesn't apt-get install docker.io start the docker.io service already?
<katco> wwitzel3: instead of a sleep, query the status of the service
<mup> Bug #1477281 opened: machine#0 jujud using ~100% cpu, slow to update units state <canonical-bootstack> <canonical-is> <juju-core:New> <https://launchpad.net/bugs/1477281>
<katco> running into bug 1168154 too
<mup> Bug #1168154: Destroying a service in error state fails silently <destroy-service> <juju-core:Triaged> <https://launchpad.net/bugs/1168154>
<natefinch> ericsnow: process.Info.Id()'s comment states "ID composes a unique ID for the process" .... does that mean that we don't expect the process name to be unique?
<ericsnow> natefinch: potentially
<natefinch> ericsnow: the reason I ask is that the status output as stated in the spec is using the process name as the key to a map, so if there's a collision.... badness
<ericsnow> natefinch: we had discussed support for launching multiple copies of a proc
<ericsnow> natefinch: I suppose we can cross the bridge later
<natefinch> we seem to be doing a lot of "let's worry about that later" with this feature, which makes me worry about it now.
<ericsnow> natefinch: well, currently we don't support multiple copies of a proc (and we may never)
<natefinch> ericsnow: fair enough
<natefinch> ericsnow: I'm all good with YAGNI, so long as it's not YAGNIUNW  (You ain't gonna need it ... until next week)
<ericsnow> natefinch: :)
<bogdanteleaga> mgz: yay!
<bogdanteleaga> perrito666: tears of joy? :p
<perrito666> bogdanteleaga: ??
<bogdanteleaga> <mgz> juju upgrade on windows just passed for the first time.
<bogdanteleaga> * perrito666 cries
<perrito666> ah si
<perrito666> yes
<wwitzel3> katco: looks like there is an issue with the registry
<wwitzel3> katco: I can't pull the wp container at all, atm
<katco> wwitzel3: as in the official docker registry?
<wwitzel3> katco: yeah
<katco> wwitzel3: well that explains things lol. do they have a status page or something?
<natefinch> oh great, one more thing to break
<katco> wwitzel3: https://status.docker.com/
<wwitzel3> katco: nope, but after running it 4 or 5 times it started working :)
<wwitzel3> 2015/07/22 20:19:45 Could not find repository on any of the indexed registries.
<wwitzel3> Pulling repository wordpress
<wwitzel3> 48e924db07d1: Pulling dependent layers
<wwitzel3> so now it works, lol
<wwitzel3> aaaand now it doesn't work again, wtf
<katco> wwitzel3: i wonder if there's a route issue b/t ec2 <-> registry?
<wwitzel3> katco: something weird for sure, it is working locally just fine
<katco> wwitzel3: again, top shows docker.io is spinning
<katco> wwitzel3: i'll try pulling it manually, but yeah something odd is happening
<katco> wwitzel3: whoa: $ docker pull wordpress
<katco> 2015/07/22 20:35:39 Post http:///var/run/docker.sock/images/create?fromImage=wordpress&tag=: dial unix /var/run/docker.sock: permission denied
<katco> wwitzel3: does it have to be sudo pull?
<katco> wwitzel3: says job is already started. log doesn't have anything interesting
<wwitzel3> katco: you have to be in the group or you have to be sudo, yes
<katco> wwitzel3: cycled docker.io service, now: Could not find repository on any of the indexed registries.
<katco> [a0bad6af] -job pull(wordpress, latest) = ERR (1)
<wwitzel3> katco: yeah, something odd is going on since it was working with out incident the dozen other times I did it today
<wwitzel3> and yesterday
<mup> Bug #1477293 opened: Bootstrap attempts to use a private network address instead of the public address <bootstrap> <ec2-provider> <network> <juju-core:Triaged> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1477293>
<mup> Bug #1477293 changed: Bootstrap attempts to use a private network address instead of the public address <bootstrap> <ec2-provider> <network> <juju-core:Triaged> <juju-core jes-cli:Triaged> <https://launchpad.net/bugs/1477293>
<katco> wwitzel3: so i'm pulling this manually from the ec2 instance, and it's doing "stuff", but it keeps returning to a header "pulling fs layer"
<wwitzel3> katco: yeah, it should do that for a while
<wwitzel3> katco: and download little bits and fs layers
<katco> wwitzel3: just seems to be taking a long, long time
<wwitzel3> katco: yeah, the registry isn't fast
<wwitzel3> katco: I've had it take 20 minutes or longer to get an image
<katco> wwitzel3: k
<wwitzel3> katco: once I get another copy of it downloaded successfully
<wwitzel3> katco: I'll put it in the charm and we can just load it locally
<wwitzel3> katco: so we won't have to worry about network for it
<katco> wwitzel3: good idea. or i wonder if we have a docker repo charmed up :)
<perrito666> katco: can I get an amen http://reviews.vapour.ws/r/2244/ ?
<katco> perrito666: tal
 * perrito666 brain makes an odd connection and its taken back to the past to the zope template attribute language
<katco> perrito666: did we address fwereade's concern in the todo there?
<perrito666> yup, that is what the pr does :D removing all trace of txn in status history
<perrito666> pure raw metal.. err mongo insertion :p
<katco> perrito666: i recall him disagreeing with that approach pretty strongly. i can't remember if it was ever resolved. hm
<perrito666> katco: he did disagree with a previous patch, which led to uncovering an issue in envuuid automatic patching
<perrito666> I recalled that pr and made this new one
<katco> perrito666: well not just the patch, but the idea of doing "raw" mongo ops w/o txns
<perrito666> katco: in this particular case it was his idea to completely remove, history works much like logs
<perrito666> we push on the top and remove on the bottom
<katco> perrito666: yeah i recall the counter-argument, i guess i never saw it get resolved
<katco> perrito666: i have a meeting now, ian is in it, so i'll talk to him too
<perrito666> fantastic :)
<katco> perrito666: you're good! :D
<cmars> waigani or menn0, could one of you review http://reviews.vapour.ws/r/2167/?
<perrito666> oh, blocked again, it was good while it lasted
<menn0> cmars: i'll take a look
<cmars> menn0, thanks!
<ericsnow> wwitzel3: http://reviews.vapour.ws/r/2245/
<perrito666> bbl
<menn0> cmars: done. just a few little things.
<menn0> cmars: fwereade is right though: this will need to change soon when env destruction is done properly. But this is fine for now.
<sinzui> katco: https://launchpad.net/juju-core/1.26 exists, as does https://launchpad.net/juju-core/+milestone/1.26.0
<katco> sinzui: ty sir!
<cmars> menn0, ok, we'll have to watch out for that .. any advance notice would be helpful, we're ready to make whatever changes necessary
<menn0> cmars: you guys might not need to worry about it. we just have to move when the last ditch metric sending happens to somewhere else.
<cmars> menn0, gotcha
<menn0> cmars: the plan is to have a state server worker that manages environment life cycle instead of doing env destruction synchronously in the api call
<menn0> cmars: b/c as it stands if anything goes wrong (e.g. machine destruction get stuck) the user has very little visibility
<menn0> cmars: we also need to shut down things like storage in a more controlled fashion
<menn0> cmars: all this will happen "soon". we'll keep you in the loop but you don't need to worry too much about it.
<cmars> menn0, ack. cool stuff. the env lifecycle mgmt sounds especially interesting
<katco> sinzui: i proposed a blueprint for 1.26... who does that approval go to?
<menn0> cmars: interesting/necessary whatever :)
<sinzui> katco: I think mramm, the driver of the project
<katco> sinzui: ah ok
<sinzui> wallyworld: abentley my go1.4 juju on vivid is succeeding
<wallyworld> oh interesting
<wallyworld> can we drop the bug then :-)
<katco> sinzui: what is feature-proc-mgmt in this context? https://bugs.launchpad.net/juju-core/feature-proc-mgmt
<sinzui> katco: There is/was a branch in github that was tested and failed. We registered the series to report bugs against it
<katco> sinzui: ah ok. i just created a blueprint... would it make more sense to report bugs against that? or is series considered a branch in lp parlance?
<sinzui> katco: I wish the latter part of your statement was 100% true. A series should be associated with a branch: a line of development that changes will be made to. Lp doesn't enforce the branch
<katco> sinzui: would it be an imposition to ask you to target bugs against the blueprint instead?
<sinzui> katco: That is difficult to do
<katco> sinzui: in an automated fashion, or just in general?
<sinzui> katco: bugs can be linked to a blueprint, but since the blueprint feature is broken in many places, I cannot do it
<katco> sinzui: oh, it's a permissions/lacking feature thing?
<sinzui> also the blueprints cannot be managed by the team. You for example could make the milestone and series, but blueprints don't have sane ownership
<katco> sinzui: not sure what you mean? i am the starter, drafter, and assignee, and i can link bugs... do you mean no one BUT the starter can assign bugs?
<sinzui> katco: I am not writing software to use blueprints; you are free to link issues you want fixed in blueprints. CI is not concerned with feature planning
<thumper> wallyworld: menno said he'd look at the go 1.3 on vivid issue
<menn0> sinzui: ping?
<menn0> sinzui, wallyworld: i'm trying to be sure what the aws-deploy-trusty-amd64-on-vivid-amd64 CI job does
<menn0> I think it means, deploy an EC2 env using trusty instances, with the client running on vivid
<menn0> sinzui, wallyworld, mgz: is that right?
<mgz> menn0: that's correct
<mgz> menn0: it's mostly just exercising the vivid client
<mgz> menn0: unrelated, I added some comments to the rackspace provider review
<menn0> mgz: ok cool. i only managed to look at some of it.
<menn0> mgz: i'm currently looking at bug 1477293
<mup> Bug #1477293: Bootstrap fails to connect on vivid/go 1.3 <blocker> <bootstrap> <ci> <ec2-provider> <network> <vivid> <juju-core:Triaged> <juju-core 1.24:Triaged by menno.smits> <https://launchpad.net/bugs/1477293>
<menn0> mgz: I can only see one CI run where bootstrap failed due to not being able to connect
<mgz> menn0: on the ssh bug, I feel like it's just as likely to be another manifestation of ssh config screwing up juju as anything else
<menn0> mgz: and it works for me
<menn0> mgz: the other failed runs with the same rev seem to be something else
<menn0> mgz: so i'm looking at that
<mgz> juju 1.24 makes ssh generally dodgy in the presence of any existing ssh setup
<menn0> mgz: what do you mean by "any existing ssh setup"?
<mgz> eg, I have ~/.ssh/config with stuff in it and an agent and several keys
<mgz> juju doesn't like me much.
<wallyworld> thumper: menn0: sorry, just got out of meeting, reading backscroll
<menn0> mgz: hmmm, I have that too and it works for me. but of course it might be something specific in your config.
<mgz> menn0: the other failed runs at the same rev look like job setup debugging
<menn0> mgz: the bootstrap is failing: "Bootstrap failed, destroying environment"
<menn0> mgz: but i'm not sure why
<wallyworld> menn0: mgz: sinzui switched to go 1.4 and said it worked
<mgz> nah, he tried the same thing via osx which happens to use go 1.4 and it works
<menn0> Juju cannot bootstrap because no tools are available for your environment.
<wallyworld> if it is a go 1.3 issue, then i say we don't fix anything
<menn0> mgz: there it is
<wallyworld> mgz: oh, i thought we were going to use everything the exact same, but just switch go version
<menn0> so the earlier CI failures were due to missing tools
<mgz> oh, he did also use go 1.4 on ubuntu, but wily
<mgz> but that's still proxying through a different machine
<wallyworld> but we want a controlled experiment
<mgz> so it's hard to make it identical
<wallyworld> only change one thing
<wallyworld> can't we switch go version on the vivid machine
<mgz> we could download and build go 1.4 on that machine and make the job use it maybe
<katco> wwitzel3: /entrypoint.sh apach   19 minutes ago      Exited (1)
<katco> wwitzel3: 2015/07/22 23:36:38 Stopping proxy on tcp/[::]:8080 for tcp/172.17.0.2:80 (accept tcp [::]:8080: use of closed network connection)
<mgz> I think going through the workspace runner just tickles ssh issues
#juju-dev 2015-07-23
<mgz> menn0: bet you can reproduce if you try to do ssh-agent forwarding and try to bootstrap juju using a remote machine
<menn0> mgz: ok i'll try that
<mgz> menn0: look at the configuration for the job
<menn0> mgz: i did notice that it took a long time for the bootstrap command to think it could connect when I could already connect using ssh myself in another terminal
<menn0> mgz: maybe openssh on vivid has changed in terms of timeouts or retries
<menn0> wallyworld, mgz: I wonder if we should always be passing "-F /dev/null" to ssh to take the user's .ssh config out of the equation?
<menn0> I guess that might cause other problems when people have config that is required for ssh to work for them.
<mgz> right, we've always subtly depended on some bits of ssh config
<mgz> really I think we should explicitly take some bits and otherwise have a completely isolated env to run ssh in
<menn0> that sounds sensible
<menn0> mgz: but first i'd like to be able to repro the issue so we're not guessing
<mgz> menn0: so, I think we just want to run the underlying workspace-run command, using the real vivid-slave, as that job is
<mgz> menn0: bzr branch lp:workspace-runner
<mgz> ...how is this new code actually harder to run manually than the old way >_<
<katco> wwitzel3: do i not have the latest code? error: missing WORDPRESS_DB_HOST and MYSQL_PORT_3306_TCP environment variables
<katco>   Did you forget to --link some_mysql_container:mysql or set an external db
<katco>   with -e WORDPRESS_DB_HOST=hostname:port?
 * menn0 looks
<sinzui> mgz: sorry, family wants me. All we need to do is copy one of the jobs again and force it to download the wily juju to the vivid machine
<menn0> sinzui, mgz: although that would be useful to try, I suspect it's less about the juju version and more about the ssh config and openssh version of the test host
<menn0> I can't repro the issue from my vivid host, bootstrapping a trusty env on ec2
<sinzui> menn0: Host 10.* 192.168.*
<sinzui>   StrictHostKeyChecking no
<sinzui>   UserKnownHostsFile /dev/null
<sinzui>   User ubuntu
<sinzui>   IdentityFile /var/lib/jenkins/cloud-city/staging-juju-rsa
<sinzui> ^ all the config
<menn0> sinzui: no global config?
<sinzui> menn0: it's what ships with vivid
<menn0> ok
<menn0> good to know
<sinzui> Host *
<sinzui>     SendEnv LANG LC_*
<sinzui>     HashKnownHosts yes
<sinzui>     GSSAPIAuthentication yes
<sinzui>     GSSAPIDelegateCredentials no
<menn0> sinzui: i'll try with that config locally in case it's relevant
<menn0> mgz: so what does workspace-runner do?
<menn0> mgz: never mind, saw the readme
<menn0> :)
<sinzui> menn0: vivid-slave is unique in that it doesn't have a jenkins on it, instead we use the workspace running to execute the commands via ssh. Jenkins also uses the same ssh connection string (because we knew that worked)
<menn0> sinzui: ok.
<sinzui> menn0: would you like to visit the machine?
<menn0> sinzui: I wonder if the whole running juju commands over ssh setup is the root cause here. not that you shouldn't be able to but that might be what's triggering the different behaviour for the vivid tests.
<menn0> sinzui: yes please
<mgz> menn0: okay, here's what I'm trying
<mgz> running workspace-run directly, as that job would be doing, with a few test things filled in
<mgz> first, ssh config for me, note key from cloud-city
<sinzui> menn0: try ubuntu@vivid-slave.vapour.ws
<mgz> http://paste.ubuntu.com/11923154
<sinzui> mgz: Why are you up? I am already burned out
<mgz> then config json file for the runner command like so http://paste.ubuntu.com/11923178
<mgz> then running this command in the workspace runner directory, changing cfg.json and s3cfg paths as needed http://paste.ubuntu.com/11923163
<mgz> that's running deploy-job and stuck on ssh currently
<mgz> I bet if I do basically the same bootstrap that the runner is doing, from inside vivid-slave with my creds fiddled, it would not hang
<mgz> sinzui: it's hot. also, I have to do the drying up before going to bed and I'm putting it off.
<mgz> menn0: that make sense at all?
<menn0> mgz: yep, makes sense
<menn0> mgz: i was just waiting to see if another bootstrap attempt was going to work. Was using the ssh config that sinzui pasted in.
<menn0> mgz: it did work. i'll try your instructions
<menn0> sinzui: I can get to vivid-slave. thanks.
<menn0> mgz: workspace-runner is doing its thing now
<mgz> so, on vivid-slave, I can ssh to the 52. address that the workspace runner is stuck connecting to, *if* I add the -i param with the key from the local cloud-city dir
<menn0> mgz: could that be the problem then?
<menn0> mgz: the ssh config file Host section that includes the key is only for "10.* 192.168.*"
<mgz> menn0: let me check what jenkins master has exactly for this
<sinzui> mgz: it certainly doesn't
<mgz> jenkins master has way more, ... seems reasonable to try making the vivid-slave config hosts bits on host * and maybe adding agent forwarding as well
<menn0> mgz: do you want me to try that
<menn0> ?
<menn0> mgz: using workspace-runner I was able to repro the 10min timeout
<mgz> menn0: trying this change on vivid-slave http://paste.ubuntu.com/11923214
<mgz> note I have not touched the identity stuff, if it's id related I expect it to still hang
<menn0> mgz: looks good
<menn0> mgz: I'm going to bet it's the identity stuff
<menn0> mgz: but it's good to pin it down
<menn0> mgz: if this is the problem we need to change juju to report the ssh failures it's seeing (and swallowing) during bootstrap
<mgz> if this fails, will make it use the defined key locally on the slave
<mgz> agent-forwarding can work, but can also not.
<mgz> okay, that failed, but failed fast
<menn0> mgz: what was the error?
<mgz> "TLS handshake timeout" refreshing addresses, get https://ec2.us-east-1.amazonaws.com?...
<menn0> mgz: hmmm, that's not related to what we're looking at
<menn0> mgz: that's an error talking to the API
<menn0> (Amazon's API)
<mgz> nope, probably just bad handling of intermittent https
<mgz> juju shouldn't fail there
<menn0> mgz: no it shouldn't
<mgz> should just let the retry loop continue
<menn0> mgz: we should file a bug about that
<menn0> mgz: but also try again with the current ssh config
<mgz> now... that didn't actually do a destroy-environment
<mgz> okay, it gets done before starting the next run, we do handle that
<mgz> going again, end of last run: http://paste.ubuntu.com/11923231
<menn0> mgz: want me to file the bug for the TLS issue?
<mgz> menn0: please do
<mgz> okay, okay that timed out.
<mgz> what's somewhat annoying is juju doesn't log what key it thinks it's using
<mgz> vivid-slave *has* the cloud-city key copied into ~/.ssh/id_rsa
<mgz> so I presume that... but I guess it's then not using the same key to try and connect?
<menn0> mgz: the cloud-city key and the id_rsa key are not the same. i just diffed them.
<mgz> well, that'll be it then
<menn0> mgz: i've just added debug logging to the SSH connection attempt code used by bootstrap so that at least we can see why the attempts are failing
<menn0> mgz: i'll get that merged today
<menn0> mgz: maybe symlink the key? (and remove mention of it from the ssh config file to avoid confusion?)
<menn0> although ... ssh can be picky about the perms on the key files so that might be tricky
<mgz> how did I manage to get in... I guess it must be creating with the cloud-city key but then trying to connect with whatever the hell this key in ~/.ssh is
<mgz> menn0: ln -s just made my current run get in and start installing stuff
<menn0> mgz: \o/
<mgz> I'm going to invalid off this bug and make you a nice new small one on our ssh output being a pain to debug
<menn0> mgz: assign that new one to me b/c I already have a fix for it
<sinzui> Our current blocker then is https://bugs.launchpad.net/juju-core/+bug/1477355
<mup> Bug #1477355: MachineSuite.TestDyingMachine fails on windows <blocker> <ci> <regression> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1477355>
<menn0> mgz: one last thing: when you ran into that TLS handshake issue, Juju didn't clean up after itself, is that right?
<mgz> menn0: right, and neither did our scripts, but I *think* that's as designed now? somewhere I think we're setting the leave-failed-bootstrap-around thing
<menn0> oh ok
<menn0> i'll mention it as a possible issue on the bug at least
<menn0> mgz: here's bug 1477357
<mup> Bug #1477357: EC2 API TLS handshake failure aborts bootstrap <juju-core:New> <https://launchpad.net/bugs/1477357>
<mgz> menn0: bug 1477358
<mup> Bug #1477358: No output from ssh problems during bootstrap <bootstrap> <usability> <juju-core:Triaged by menno.smits> <https://launchpad.net/bugs/1477358>
<wallyworld> menn0: sinzui: axw is going to fix that new blocker above
<mup> Bug #1477157 changed: Broken windows dependencies <blocker> <ci> <regression> <windows> <juju-core:Fix Released by mattyw> <https://launchpad.net/bugs/1477157>
<mup> Bug #1477293 changed: Bootstrap fails to connect on vivid/go 1.3 <bootstrap> <ci> <network> <juju-core:Invalid> <juju-core 1.24:Invalid> <https://launchpad.net/bugs/1477293>
<mup> Bug #1477355 opened: MachineSuite.TestDyingMachine fails on windows <blocker> <ci> <regression> <unit-tests> <windows> <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1477355>
<mup> Bug #1477357 opened: EC2 API TLS handshake failure aborts bootstrap <juju-core:New> <https://launchpad.net/bugs/1477357>
<mup> Bug #1477358 opened: No output from ssh problems during bootstrap <bootstrap> <usability> <juju-core:Triaged by menno.smits> <https://launchpad.net/bugs/1477358>
<menn0> mgz: cheers
<axw> wallyworld: https://github.com/juju/juju/pull/2867
<axw> please
<wallyworld> looking
<mgz> axw: do you need to verify that branch actually passes tests on windows, or are you sure anyway?
<wallyworld> axw: we don't need a linux build directive, right? the compiler is smart enough to know what to do?
<axw> mgz: I'm 99% sure, enough that it would be a productivity killer to set up a VM to test in windows, rather than just watch CI
<axw> wallyworld: correct
<wallyworld> mgz: yeah, i think the pr will fix the issue
<axw> wallyworld: it's 2013 because that's just moved code
<wallyworld> ah, doh, sorry
<mgz> axw: 99% is high enough for me to not volunteer either. you saw the LoopUtilSuite failures at the bottom of the page as well as the /proc/1/cgroup ones?
<axw> mgz: doh, no I did not. I'll fix that separately.
<axw> actually, the bot didn't see my PR yet
<axw> I'll update it
 * wallyworld should have noticed too
<mgz> separate branch and fixes-1477355 to land both is fine.
<axw> wallyworld: PTAL, I added another commit
<wallyworld> axw: will do as soon as team meeting finishes :-)
<axw> oh crap
<wallyworld> thumper: didn't want to hold up the meeting with our inane small talk
<wallyworld> that's for our 1:1 tomorrow
<thumper> :)
<thumper> oh yeah, can't make that
<wallyworld> \o/
<thumper> wallyworld: wanna do it now?
<wallyworld> sure
<mup> Bug #1474195 changed: juju 1.24 memory leakage <cpec> <deployer> <performance> <regression> <juju-core:Fix Released by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1474195>
<menn0> wallyworld, axw: http://reviews.vapour.ws/r/2248/diff/#
<axw> menn0: looking
<axw> menn0: shipit
<menn0> axw: cheers
<menn0> i'm going to get that into 1.24 as well given that it's tiny but potentially quite useful
<mup> Bug #1474195 opened: juju 1.24 memory leakage <cpec> <deployer> <performance> <regression> <juju-core:Fix Released by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1474195>
<wwitzel3> katco: you would need to pull either my branch or eric's branch to get the envvars fixes
<wwitzel3> katco: neither of them have landed on the feature branch yet
<mup> Bug #1474195 changed: juju 1.24 memory leakage <cpec> <deployer> <performance> <regression> <juju-core:Fix Released by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1474195>
<natefinch> niemeyer: are you around?
<niemeyer> natefinch: Depends.. :-)
<natefinch> niemeyer: quick question that I'm pretty sure I know the answer to.  Is there any way to force goyaml to write out strings with surrounding quotes, presuming the quotes are not strictly necessary?
<niemeyer> natefinch: Nope.. Not right now
<natefinch> niemeyer: ok, I was pretty sure.  Thanks.
<niemeyer> natefinch: np
<axw> waigani_: do we not need to upgrade the last-login/conn info? it's not really important, right?
<wwitzel3> katco: so I put the wordpress image up in s3 for now and updated the charm to pull it down from there and load it
<wwitzel3> katco: it pulls fairly quick from an EC instance
<waigani_> axw: yeah, that is what I was thinking. But fair call, I'll double check with thumper
<thumper> wallyworld: if this is about changing the last login / connection time, then yes, I would prefer an upgrade step
<wallyworld> thumper: waigani_  perhaps?
<thumper> sorry wallyworld
<wallyworld> :-)
<thumper> meant waigani_
<wwitzel3> katco: just did an end to end with the wpm charm pulling the image tar from S3 and it worked great
<jam> fwereade: standup?
<TheMue> dooferlad: did you ping me here on irc? somehow the notification failed
<dooferlad> TheMue: I pinged you yesterday, but got on with other fixes. No worries.
<TheMue> dooferlad: ah, ok, will take a look now
<TheMue> dooferlad: so, done.
<fwereade> https://github.com/juju/juju/wiki/MongoDB-and-Consistency expanded a little
<fwereade> comments, corrections, critiques actively sought
<bogdanteleaga> fwereade: http://reviews.vapour.ws/r/2252/
<bogdanteleaga> fwereade: I added azure as well since I was gonna start with it today and I realized it might have the same problem
<mup> Bug #1477464 opened: juju does not support custom signed image metadata <juju-core:New> <https://launchpad.net/bugs/1477464>
<fwereade> bogdanteleaga, the thing I want to be certain of is that we will reject unsigned metadata that purports to come from streams.c.c
<fwereade> bogdanteleaga, but I'm afraid I don't have the bandwidth to think about this properly today -- so wallyworld is surely going to be a better reviewer than me
<dooferlad> TheMue: thanks!
<TheMue> dooferlad: yw
<frankban> hi code devs, when do you expect trunk to be unlocked?
<fwereade> axw, wallyworld: do either of you know why we have such weird restrictions on sets of status data?
<fwereade> axw, wallyworld: specific messages only allowed for allocating
<fwereade> axw, wallyworld: extra data forbidden here and there
 * dooferlad is back on line again. First a server problem (power button broke off its wires so couldn't turn back on) then the ISP flakes out. Yay.
<mup> Bug #1477293 opened: Bootstrap fails to connect on vivid/go 1.3 <bootstrap> <ci> <network> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1477293>
<xwwt> mgz: Can I delay us for 5 min?
<mgz> xwwt: no problem, poke me when you're free
<mup> Bug #1477293 changed: Bootstrap fails to connect on vivid/go 1.3 <bootstrap> <ci> <network> <regression> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1477293>
<xwwt> mgz: ty
<xwwt> mgz: sorry, I am back now. If you still have time to meet, let me know.
<mgz> xwwt: lets go
<xwwt> sinzui: I was able to finally get into the call
<wwitzel3> ericsnow: ping
<ericsnow> wwitzel3: hey
<wwitzel3> ericsnow: currently, when running the launch command, the register call internally errors and complains about at least one bad arg in bulk api call
<ericsnow> wwitzel3: k
<wwitzel3> ericsnow: so checking launch for a non-zero exit and setting status is getting hung up there
<ericsnow> wwitzel3: is there any indication on why that bulk call is having a problem?
<wwitzel3> ericsnow: that one of the args in the bulk call is bad ;P
<ericsnow> wwitzel3: just what I thought :)
<ericsnow> wwitzel3: sounds like a bug to me
<wwitzel3> ericsnow: error: &params.Error{"", "at least one bulk arg has an error"}
<ericsnow> wwitzel3: also, why is the plugin succeeding even though it failed to start?
<wwitzel3> ericsnow: it didn't fail to start
<wwitzel3> ericsnow: it fails to register
<wwitzel3> ericsnow: this started happening after the flush fix went in
<ericsnow> wwitzel3: I thought the point was that status is "Running" even though it failed to start
<ericsnow> wwitzel3: ah, flush
<wwitzel3> ericsnow: no, the launch command is returning a non-zero exit, because register fails, but the container is successfully run
<ericsnow> wwitzel3: in the case of the bulk call, the specific problem will be in the Error field of one or more of the individual results
<ericsnow> wwitzel3: k
<mup> Bug #1474291 opened: juju called unexpected config-change hooks after read tcp 127.0.0.1:37017: i/o timeout <hooks> <openstack> <sts> <uosci> <juju-core:New> <ceilometer (Juju Charms Collection):New> <https://launchpad.net/bugs/1474291>
<jw4> bodie_: happy birthday :)
<aisrael> I'm not sure who's working on storage, but I'm wondering if a storage-list command is planned, for running within a hook/action context?
<fwereade> aisrael, I would *think* so; axw would know for sure
<fwereade> aisrael, he won't be awake for... 6 or 7 hours I think?
<aisrael> fwereade: Ok, thanks! I'll follow up with him before I eod.
<alexisb> aisrael, axw would love feedback, so please do send mail or ping him when he is online
<alexisb> aisrael, wallyworld can also help
<aisrael> alexisb: ack, will do. I've been building a charm to put storage through its paces, particularly as it applies to benchmarking. I'm sure I'll have some feedback to give.
<alexisb> awesome
<wwitzel3> ericsnow: wrapping up some touches on the charm now, did you get a chance to look at the register/apiserver bulk thing?
<ericsnow> wwitzel3: sorry, didn't realize
<mup> Bug #1477464 changed: juju does not support custom signed image metadata <metadata> <juju-core:Triaged> <https://launchpad.net/bugs/1477464>
<mup> Bug #1320312 opened: fallback to unsigned stream metadata may have security issues <improvement> <metadata> <security> <simplestreams> <juju-core:Triaged> <https://launchpad.net/bugs/1320312>
<natefinch> That feeling when you realize the conversion you're doing you've already written 2 weeks ago.
<mup> Bug #1477709 opened: default config lacks state-port <config> <juju-core:Triaged> <https://launchpad.net/bugs/1477709>
<lazyPower> wwitzel3: ping
<wwitzel3> lazyPower: pong
<lazyPower> wwitzel3: have you seen this before? http://paste.ubuntu.com/11926451/
<lazyPower> happy to file a bug, just trying to figure out what happened so its not a pebkac issue living in the tracker.
<wwitzel3> lazyPower: doesn't matter what it is, the juju command should never panic and barf in the console for a user ;)
<wwitzel3> lazyPower: so it is a bug, for sure
<lazyPower> fair enough
<wwitzel3> lazyPower: looks like a result of the google environment missing a config value
<wwitzel3> lazyPower: it should probably print something like .. "missing <config-value>"
<wwitzel3> ;)
<natefinch> rule #1 of software - never show a stack trace to the user
<natefinch> and especially true of Go where panics are almost always due to a programmer error.
<lazyPower> are the docs right, its project-id and not project_id? Seems odd to have one config option that doesn't conform
<lazyPower> https://bugs.launchpad.net/juju-core/+bug/1477712
<mup> Bug #1477712: GCE provider dumps stacktrace when missing a config option/value <juju-core:New> <https://launchpad.net/bugs/1477712>
<lazyPower> bug for reference
<wwitzel3> lazyPower: yep, project-id, auth-file
<wwitzel3> lazyPower: OR project-id, private-key, client-email, client-id
<lazyPower> ok note: when supplying the json file, no stacktrace was emitted, it behaved
<lazyPower> it seems to be related to embedding the data in the environments.yaml vs supplying the auth-file
<wwitzel3> lazyPower: can you add that in the ticket real quick
<lazyPower> ah and if its dashes, the docs are wrong
<wwitzel3> then the docs are wrong, it is for sure dashes
<lazyPower> ack, i'll tear down and try again with dashes and see if it chokes again
<lazyPower> and fix the docs while i'm in here
<lazyPower> thanks wwitzel3
<wwitzel3> np, thank you
<wwitzel3> you are a gentleman and a scholar
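As lazyPower found, the GCE provider's config keys use dashes, not underscores. A hypothetical environments.yaml stanza with the dashed keys from the chat (all values are placeholders):

```yaml
environments:
  gce:
    type: gce
    project-id: my-project          # not project_id
    auth-file: /path/to/creds.json  # or: private-key, client-email, client-id
```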
<mup> Bug #1477712 opened: GCE provider dumps stacktrace when missing a config option/value <juju-core:New> <https://launchpad.net/bugs/1477712>
<katco> natefinch: sorry meeting running over, we'll probably cancel?
<natefinch> katco: that's fine. I still have a bunch of work to do anyway
<lazyPower> wwitzel3: took me a minute to circle back, but that was it. convert from underscores to dashes and the panic goes away.
<katco> wwitzel3: hey can you hop in moonstone rq?
<wwitzel3> katco: ping
<katco> wwitzel3: pong
<wwitzel3> katco: hey, was stuffing food in my face, still want to hangout?
<katco> wwitzel3: nah the moment hath past
<wwitzel3> katco: k, sorry
<katco> wwitzel3: wasn't anything too important
<katco> wwitzel3: no worries at all
<mup> Bug #1431286 changed: juju bootstrap fails when http_proxy is set in environments.yaml <bootstrap> <openstack-provider> <proxy> <ubuntu-openstack> <juju-core:Invalid> <https://launchpad.net/bugs/1431286>
<thumper> morning folks
<thumper> wwitzel3: hey there
<wwitzel3> thumper: hey
<thumper> wwitzel3: LXD now supports cloud images as of 0.14
<thumper> wwitzel3: I'll be chatting with stgraber next week about any other bits we need
<perrito666> oh, thumper, that is my cue to leave
 * thumper tips hat
<wwitzel3> thumper: that is great
<thumper> wwitzel3: so Friday work to resume after annecy
<wwitzel3> thumper: ok, I'll be sure to touch base with you on your Friday standup after Annecy
<sinzui> axw: Sorry, your fix for the windows test is incomplete. I updated https://bugs.launchpad.net/juju-core/+bug/1477355 with the error windows now sees
<mup> Bug #1477355: MachineSuite.TestDyingMachine fails on windows <blocker> <ci> <regression> <unit-tests> <windows> <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1477355>
<fwereade_> wallyworld, perrito666: please both of you give this CL some close attention when you can: http://reviews.vapour.ws/r/2255/
<wallyworld> fwereade_: sure
<perrito666> aye
<wallyworld> fwereade_: btw, i saw this morning i had missed some messages last night - the reason for only allowing status data sometimes etc - NFI, the semantics were in place before the workload status changes, they were simply preserved with the new work
<fwereade_> wallyworld, awesome
<fwereade_> wallyworld, I remember them being developed, I see no reason to keep them, if you don't either then awesome :)
<wallyworld> i can't see any reason ottomh
<wallyworld> fwereade_: "no longer mixes txn and non-txn writes to statushistory collection"
<wallyworld> hasn't perrito666  already done that?
<fwereade_> wallyworld, it includes perrito666's backport
<wallyworld> ok
<fwereade_> wallyworld, added when I thought I'd finish earlier, and he hadn't done it yet
<fwereade_> wallyworld, I can suck up the merge if his lands
<wallyworld> np, ta just reading the mp description
<fwereade_> wallyworld, and it's a *shitty* diff :(
<fwereade_> wallyworld, you will probably get more out of cloning and grepping for Status in state
<wallyworld> fwereade_: the nowToTheSecond() stuff - that was cargo culted from elsewhere. i believe mongo has timestamp issues
<wallyworld> issues with precision in vs precision out
<perrito666> fwereade_: if you added the same change I think it would be dumb to merge my backport
<fwereade_> wallyworld, that's why you convert to nanoseconds and store an int64 ;p
<fwereade_> perrito666, fair point :)
<wallyworld> fwereade_: yuk
<fwereade_> wallyworld, better than silently discarding precision imo
<wallyworld> depends on the use case
<fwereade_> wallyworld, I think "don't discard precision until you have to" is a pretty solid principle
<wallyworld> most dbs i've worked with handle timestamps properly
<wallyworld> nfi why mongo doesn't
<fwereade_> wallyworld, it keeps ms accuracy I think?
<wallyworld> "have to" - do we really need to know a machine status changed at 12:31:22.374747473
<perrito666> I would have expected it to store with more precision, 0-padding
<fwereade_> wallyworld, I would also suggest that that is a better source of ordering than the sequence thing
<wallyworld> fwereade_: it may be m2, not sure, but i seem to recall (maybe incorrectly) there was an issue reading back out?
<fwereade_> wallyworld, which is one extra db write/read-result, on the same doc, every time we write status history
<fwereade_> wallyworld, there is a certain amount of whiny editorialising in the comments
<wallyworld> fwereade_: it's not so much ordering but also the culling of old records, i can't recall the details now
<fwereade_> wallyworld, I have Opinions about that too ;)
<wallyworld> i'm sure you do :-)
<sinzui> wallyworld: maybe you want to try a quick followup branch to address a complication in the windows fix https://bugs.launchpad.net/juju-core/+bug/1477355. I can pause CI for a few hours to ensure we test master again.
<mup> Bug #1477355: MachineSuite.TestDyingMachine fails on windows <blocker> <ci> <regression> <unit-tests> <windows> <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1477355>
<wallyworld> sinzui: yeah saw that :-( i'll get axw to fix asap
<axw> bah
<sinzui> wallyworld: I am going to force CI to make it pause for up to 5 hours. We can manually enable build-revision if we get a fix in sooner
<wallyworld> ok, tyvm
<wallyworld> fwereade_: why was all the status validation (including for old stuff like machine status) removed from helper functions in status.go?
<fwereade_> wallyworld, it moved to the methods on the entities themselves
<wallyworld> hmmm
<fwereade_> wallyworld, which now call down to a common implementation
<fwereade_> wallyworld, we can fight this out in annecy -- but I think you are very wrong in your view that the business rules need to be deeper embedded in state
<wallyworld> i would have preferred the validation to be kept in a separate method
<wallyworld> like we do for configs etc
<wallyworld> i don't think that
<wallyworld> at all
<wallyworld> state is for persistence
<wallyworld> the model and validation is separate
<wallyworld> the old implementation had the validation deep in state
<fwereade_> wallyworld, I am glad then, you seemed to be suggesting that earlier
<wallyworld> i am not defending that
<wallyworld> no
<fwereade_> wallyworld, this has it slightly less deep
<wallyworld> +1 to that
<wwitzel3> katco: pushed up the latest charm that works against the latest feature-proc-mgmt branch
<wallyworld> i do think we are in violent agreement, just some tinkering on the details
<fwereade_> wallyworld, with a view to pulling it out to the facades, because this is a situation where there's no need to weave business rules in with persistence
<wallyworld> fwereade_: but yeah, let's get into this next week
<wwitzel3> katco: I had to remove the current retval check from process-launch because I still haven't fixed the issue with register
<katco> wwitzel3: k
<axw> wallyworld: http://reviews.vapour.ws/r/2256/
<katco> wwitzel3: i'll try it out asap
<wwitzel3> katco: but it is all there if you wanted to do another deployment and have the updated status-history
<wallyworld> axw: looking
<axw> wallyworld: confirmed it builds in windows, using go 1.5 cross compiling
<wallyworld> axw: lgtm
<wallyworld> sinzui: fix on the way
<sinzui> faboo
<fwereade_> wallyworld, and it was also a bit of a reaction to the N types and N methods and N funcs all just doing the same incorrect serialization prep plus a bit of validation, far from where the data entered the package
<wallyworld> fwereade_: sadly that tends to be *everywhere*
<wallyworld> our code has sort of devolved as it has grown organically
<wallyworld> especially when the api layer was added
<fwereade_> wallyworld, yeah, people have to look after it as they go
<fwereade_> wallyworld, but I'm not sure what you mean about validation done deep in state
<fwereade_> wallyworld, do you mean consistency concerns? lots of them for sure
<wallyworld> fwereade_: sorry, had to join meeting
<fwereade_> wallyworld, but the tolerable pattern is ExportedMethod() { Validation(); TransactionLoop() }
<fwereade_> dammit late again
 * fwereade_ bed
<sinzui> thank you axw. I see the merge into master
<axw> sinzui: great, now for the real test ;)
#juju-dev 2015-07-24
<natefinch-afk> ericsnow: you around?
<mup> Bug #1477355 changed: MachineSuite.TestDyingMachine fails on windows <blocker> <ci> <regression> <unit-tests> <windows> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1477355>
<sinzui> axw: We need to let ci mark the fix released so that the master is unblocked with a bless
<axw> sinzui: sorry, I just saw that the bug was resolved, so thought it shouldn't be blocking anymore. will leave it next time.
<sinzui> axw: i am pretty sure the bug is resolved since the remaining tests passed in our previous efforts. CI though wants a bless rather than some tests passing. The bless can happen before all tests have run if non-voting tests are the only ones left running
<axw> sinzui: okey dokey
<sinzui> I think the last two voting jobs are running, but they are long. master will be opened in about 45 minutes...get ready to merge
<mup> Bug #1477355 opened: MachineSuite.TestDyingMachine fails on windows <blocker> <ci> <regression> <unit-tests> <windows> <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1477355>
<mup> Bug #1477355 changed: MachineSuite.TestDyingMachine fails on windows <blocker> <ci> <regression> <unit-tests> <windows> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1477355>
<axw> wallyworld: FYI: https://github.com/juju/juju/pull/2879. probably won't get it backported today, so no rush for a review. in case you're queried though, the fix is there
<wallyworld> axw: reviewed, i meant to make the comment not an issue but a question, gotta go do school pickup, will catch you next week, have a good weekend
<axw> wallyworld: cheers, have a good week
<wallyworld> will try to :-)
<mattyw> fwereade_, ping?
<fwereade_> waigani, couple of comments on http://reviews.vapour.ws/r/2250/ -- happy to discuss if you're around
<TheMue> fwereade_: hangout?
<mup> Bug #1477920 opened: juju bootstrap failed with "discarding API open error" <juju-core:New> <https://launchpad.net/bugs/1477920>
<fwereade_> perrito666: early-morning complaining
<fwereade_> perrito666, Service.Status()
<fwereade_> perrito666, returns a service status document if it exists, and otherwise aggregates unit status
<fwereade_> perrito666, how is that meant to work?
<fwereade_> perrito666, (what if nobody's set a service status for 6 hours?)
<fwereade_> perrito666, actually, when you come on -- please look in detail at http://reviews.vapour.ws/r/2255/
<fwereade_> perrito666, then we can talk in that context
<perrito666> Morning
<perrito666> Fwereade i am in my cell phone as soon as I get to a pc I'll answer and look at your patch
 * mgz imagines horacio strunk down to teeny size to fit inside his phone
<perrito666> Very short answer service shows its status, they can only be set by the leader, or an aggregated status from its units
<perrito666> Wow autocorrect on this phone really sucks in English
<perrito666> And my English too
<mgz> hey, I accidentally made up a word when making fun of your english
<mgz> so should really not talk
 * perrito666 is in a bar getting breakfast after having his blood sampled
<perrito666> I really wish they could find a less cumbersome way to get metrics from me
<mgz> maybe you should provide an api
 * perrito666 is back in a computer
<fwereade_> perrito666, cool
<fwereade_> perrito666, so I contend that we're doing it in the wrong place
<fwereade_> perrito666, http://reviews.vapour.ws/r/2255/diff/1/?file=81513#file81513line1217
 * perrito666 reads
<perrito666> fwereade_: ok, throw some justification to that?
<fwereade_> perrito666, um, in the comment?
<fwereade_> perrito666, the given approach gives bad values on a wide range of not-very-unhappy paths
<perrito666> duh, I ignored the comment
<perrito666> if you broke something its certainly not there
<fwereade_> perrito666, well, yes, there is also the clear fact that service status code does not exist because it's not tested, so we are merely discussing the features of a shared hallucination
<fwereade_> perrito666, so, the set-service-status code used to insert if the doc wasn't there
<fwereade_> perrito666, wallyworld told me that was to work around one of the not-understanding-mgo/txn things, and could be safely removed
<fwereade_> perrito666, it turns out that, no, there is logic to do things when the service status doc is not there
<fwereade_> perrito666, so it is significant
<perrito666> yes, iirc when we talked its because there is no upsert?
<fwereade_> perrito666, please reread the comment
<fwereade_> perrito666, you should not want to upsert
<fwereade_> perrito666, upserting is wrong
<fwereade_> perrito666, and causes us to lie to the user all the time
<fwereade_> perrito666, it's the leader's job to set the service status
<fwereade_> perrito666, not the state server's job to backstop for it
<perrito666> fwereade_: you are mixing layers, the leader's job is to set status, it doesnt really care if the service has a statusDoc already in place or not
<fwereade_> perrito666, you said it yourself
<fwereade_> perrito666, it is the leader's job to set the service status
<fwereade_> perrito666, just as it is a unit's job to set its own status
<fwereade_> perrito666, if the unit doesn't bother, the *uniter* supplies a default implementation
<fwereade_> perrito666, otherwise you have two distributed components each responsible for setting the same value, and no sane synchronisation between them
<fwereade_> perrito666, which does not generally lead to happy outcomes
<fwereade_> perrito666, and in this particular case will *obviously* deliver stale values in several scenarios
<fwereade_> perrito666, and *probably* doesn't have any worse behaviour
<fwereade_> perrito666, but *only* because the second thing with an opinion never writes it to the db, and only sometimes tells the user what its opinion is
<fwereade_> perrito666, am I making sense?
<perrito666> you are being a bit socratic, but yes, basically yes, you are not making a point from which to extract action though
<fwereade_> perrito666, what I am primarily seeking is acknowledgement that trying to jam all our logic into state is a disaster
<fwereade_> perrito666, in the hope that you and others will more carefully consider the impact of their layering decisions
<fwereade_> perrito666, and I guess that's what I'm also hoping for from that review -- I made an effort to only fix the very worst problems in state, and do so only to the minimum standard I felt professionally comfortable with
<fwereade_> and that's still, what, a 2800-line diff
<fwereade_> perrito666, and if there is any doubt whatsoever as to why any of the changes were made, or disagreement over whether they should be, we should talk it out
<fwereade_> perrito666, in particular I am now pissed off because it looks like there's no way to introduce sane behaviour without an upgrade step
<fwereade_> perrito666, because service status is *sometimes* calculated differently from other types of status
<fwereade_> perrito666, and therefore apparently we should write new apis and new model types and new serialization rules and new document-existence conventions and ARRRRGH
<fwereade_> and NO FRICKING TESTS
 * fwereade_ is done for now
<fwereade_> perrito666, I am sorry to rant at you, it is not productive :(
<perrito666> well you did attach a patch to your rant, it makes up a bit for it ;)
<bogdanteleaga> is any gwacl expert around?
<fwereade_> perrito666, and now I start thinking about it it's not just an upgrade step it's ofc the uniter change too...
<perrito666> fwereade_: it is not a light decision, I think that, if service status is going to grow, such growth must be first reconciled with the existing health spec
<fwereade_> perrito666, I do not think it is a hard decision -- it is clearly wrong, because it performs unhelpfully in poor conditions, and we have to write code that assumes and tolerates poor conditions
<perrito666> not hard, I dont think I can express that particular problem in english, whatever we do with this status is going to stick with us, needs to accommodate the spec already decided (introduce changes that get accepted) and needs to take into account all our immediate future wishes for the health work
<sinzui> wow 3 blesses for master in a row.
<anastasiamac> sinzui: we must b mid-cycle ;D
<sinzui> indeed :)
<mup> Bug #1472749 changed: github.com/juju/utils has contradictory licences <juju-core:Fix Released by sinzui> <juju-core 1.24:Fix Released by sinzui> <https://launchpad.net/bugs/1472749>
<natefinch> ericsnow: you around?
<natefinch> or wwitzel3 ?
<wwitzel3> natefinch: yeah
<natefinch> wwitzel3: are we supposed to be showing processes in yaml status by default or with a --processes flag?  I had thought I'd seen a --processes flag somewhere, but I don't see it in the spec now
<wwitzel3> natefinch: by default now
<natefinch> wwitzel3: ok, good, that's what I have done :)
<perrito666> rick_h_: you beat me to the joke to mattyw
<rick_h_> perrito666: lightning paws!
<mattyw> perrito666, rick_h_ since when is twitter a medium that requires people not to duplicate messages?
<mattyw> lol natefinch wins
<rick_h_> mattyw: twitters approved form of dupes is the RT :P
<perrito666> so mattyw wanna tell us about your new job?
<mattyw> perrito666, can't think of anything funny to reply with :(
<natefinch> dammit, another package with friggin' table driven tests... man those things make it so hard to figure out what is failing AND you can't then rerun just a single test.
<perrito666> natefinch: yep, at least there should be a way to make the runner pick an element of the table and just try that
<perrito666> it shouldn't be all that hard to implement, since you are there :p
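What perrito666 is wishing for did arrive later: Go 1.7 (released about a year after this log) added named subtests via `t.Run`, which both report which table entry failed and let you rerun a single case with `go test -run`. A `_test.go` sketch, with a hypothetical `add` function standing in for the code under test:

```go
package main

import "testing"

// add is a stand-in for the function under test.
func add(a, b int) int { return a + b }

// TestAdd names each table entry and runs it as a subtest. A failure
// reports the case name, and one case can be rerun on its own with:
//   go test -run 'TestAdd/negative'
func TestAdd(t *testing.T) {
	cases := []struct {
		name       string
		a, b, want int
	}{
		{"zero", 0, 0, 0},
		{"positive", 2, 3, 5},
		{"negative", -2, -3, -5},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if got := add(c.a, c.b); got != c.want {
				t.Errorf("add(%d, %d) = %d, want %d", c.a, c.b, got, c.want)
			}
		})
	}
}
```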
<natefinch> wow.... Sublime's open file dialog (maybe it's the linux one, I don't know if there is such a thing?)  sorts files by name very weirdly.  It seems to ignore underscores when sorting by name.  so status.go  status_formatters.go then statushistory.go
<natefinch> boggle
<natefinch> oh 3500 line file, fantastic
<dooferlad> natefinch: you don't just hit ctrl+p and start typing bits of path and file name?
<TheMue> hehe, pretty small and very maintainable
<natefinch> dooferlad: No, I'm not really a super user of sublime.   I should probably learn more of this stuff.   Didn't realize ctrl+p could do that
<dooferlad> natefinch: I am not a super user of sublime either, but I try. With gosublime installed,
<dooferlad> ctrl + . then ctrl + g is very useful
<dooferlad> that or F12
<perrito666> I hate being an above-average user of an editor, it makes me an idiot in any other editor
<dooferlad> both try and get you to a definition.
<TheMue> take vim
<TheMue> or katco would say you should take emacs
<TheMue> ;)
<dooferlad> the JetBrains go to definition logic is the best I have found, but the go plugin is still in heavy development and it can cause the editor to lag.
<perrito666> TheMue: well I tried to go from vi to emacs, I completely failed, and I dont think emacs had anything to do with it, I just became too vi dependent
<katco> TheMue: i am but a messenger for the divine creator which is emacs
<TheMue> perrito666: once being trained on a product it always is hard to switch seamlessly
 * TheMue remembers good old times with SPF/2
<perrito666> TheMue: the problem is I was unable to do fairly reasonable stuff
<natefinch> dooferlad: yeah, I remapped ctrl+shift+g to go to definition.  I don't do two key shortcuts.
<perrito666> natefinch: so hardcore
<TheMue> perrito666: I'm no emacser too. using vim when hopping from server to server and bbedit today as my development environment
<natefinch> perrito666: there should be no "then" in a shortcut, otherwise it's not a shortcut
<perrito666> you will all laugh at me, but I really miss turboC editor
<TheMue> perrito666: which inherited from turbo pascal, yeah, which inherited from wordstar
<natefinch> wtf? https://github.com/juju/juju/blob/master/cmd/juju/commands/status_test.go#L3712
<katco> natefinch: standup
<mup> Bug #1478024 opened: Looping config-changed hooks in fresh juju-core 1.24.3 Openstack deployment <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1478024>
<natefinch> ericsnow: not going to get that review up this second... unfortunately there was a merge in there that is confusing git, so my PR looks like it has 100 of your commits in it
<ericsnow> natefinch: np
<katco> perrito666: the only thing i am adamant about with editors is this: if it's your job to edit text, find an editor and learn everything about it
<katco> perrito666: and it would behoove you to find an editor you can use for the rest of your life
<katco> perrito666: this is why i like emacs; it has grown with me and followed me across windows, linux, os x, and bsd
<katco> natefinch: lmk when your changes are pushed to a branch i can merge
<bogdanteleaga> perrito666: you should try spacemacs
<natefinch> katco: everything is on my fix-status branch, but I think cherry-picking one of your changes may have confused git or something
<natefinch-afk> gotta run
<mup> Bug #1478027 opened: supportedSeriesWindowsSuite.TestSupportedSeries mismatch with "arch" <blocker> <ci> <regression> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1478027>
<bogdanteleaga> http://reviews.vapour.ws/r/2259/ for the supportedseries bug
<bogdanteleaga> anybody knows why ubuntu only returns ubuntu series when SupportedSeries is called?
<mattyw> evilnickveitch, ping?
<evilnickveitch> mattyw, hey
<mup> Bug #1478051 opened: KVM provider should expose host CPU to guest <kvm> <local-provider> <juju-core:New> <https://launchpad.net/bugs/1478051>
<katco> ericsnow: looks like i have what i need from the feature branch. merge away!
<ericsnow> katco: thanks
<katco> ericsnow: ty for waiting
<natefinch> gah, sometimes I really hate git
<natefinch> ericsnow: somehow I think when I merged some of katco's stuff into my branch, it brought in a bunch of featuretest stuff as well.  Are those going to get into the feature branch anytime soon?
<ericsnow> natefinch: probably not today but Monday
<perrito666> Bbl
<natefinch> think i finally got the correct revisions cherry-picked into a new branch that I'll be able to propose
<natefinch> ericsnow: is the launch command proc-launch now or something?
<ericsnow> natefinch: process-launch
<natefinch> ericsnow: ok, cool.  gotta update my charms
<ericsnow> natefinch: k
<mup> Bug #1477920 changed: juju bootstrap failed with "discarding API open error" <bootstrap> <cloud-installer> <maas-provider> <juju-core:New> <https://launchpad.net/bugs/1477920>
<natefinch> ericsnow: http://reviews.vapour.ws/r/2261/
<ericsnow> natefinch: yeah, I'll take a look soon
<natefinch> ericsnow: thanks
<natefinch> gah, I cannot for the life of me get goyaml to omit an empty map
<natefinch> it works in a simple testcase, but in the status output... it's not.
<natefinch> also - goyaml seems to always lowercase the field names, which means like half our struct tags are unnecessary
<natefinch> wow, man.... don't ever typo a struct tag
<perrito666> natefinch: there was a policy about where to put tags
<natefinch> perrito666: can you find the typo?
<natefinch> Components         map[string]interface{} `json:"components,omitempty", yaml:"components,omitempty"`
<natefinch> the yaml wasn't doing the omit empty
<perrito666> I give up
<natefinch> perrito666: the comma between the json definition and yaml definition is incorrect
<natefinch> should not exist
<natefinch> correct version:
<natefinch> Components         map[string]interface{} `json:"components,omitempty" yaml:"components,omitempty"`
<natefinch> that one frigging comma was making a ton of tests fail.  Stupid comma.
<perrito666> yeah, the comma
<natefinch> I wish omitempty on a map would omit it if the len of the map was zero, not just if the map was nil
<natefinch> I bet go vet catches the struct tag problem
<natefinch> yep, it does
<marcoceppi> katco: ping
<marcoceppi> katco: for when you get back https://bugs.launchpad.net/juju-core/+bug/1478156
<mup> Bug #1478156: summary format does not give enough details about machine provisioning errors <charmers> <juju-core:New> <https://launchpad.net/bugs/1478156>
<mup> Bug #1478156 opened: summary format does not give enough details about machine provisioning errors <charmers> <juju-core:New> <https://launchpad.net/bugs/1478156>
#juju-dev 2015-07-25
<natefinch> ericsnow: you around?
<ericsnow> natefinch: more or less :)
<natefinch> ericsnow: sorry, stepped out.  still here?  Wanted to talk the API objects for component status
<mup> Bug #1478232 opened: juju 1.24 poor performance <cpec> <juju-core:New> <https://launchpad.net/bugs/1478232>
<mup> Bug #1478287 opened: Juju bootstrap fails on Azure <azure-provider> <juju-core:New> <https://launchpad.net/bugs/1478287>
#juju-dev 2016-07-25
<menn0> anastasiamac: ok, good to know. better just let curtis know.
<anastasiamac> menn0: i have emailed Curtis. There is a tiny possibility too that the master was taken from a commit that precedes ur and my changes... hence our work was re-targeted to next milestone although it is in the codebase..
<menn0> anastasiamac: could be, although the specific issue xtian and I were working on was one of the reasons for beta13 and I'm fairly certain it did make it
<anastasiamac> menn0: \o/ then it's just an oversight and will get cleared tonight
<menn0> anastasiamac/thumper: quick review please: http://reviews.vapour.ws/r/5298/
<anastasiamac> menn0: looking
<thumper> axw: I have some questions around volumes when you have some time
<blahdeblah> axw: FWIW, re: https://bugs.launchpad.net/juju-core/+bug/1599503 I reverted the charm storage change which was exercising this bug.  So the production urgency is gone from our perspective.
<mup> Bug #1599503: Cannot upgrade charm if storage is modified, even if the service doesn't use said storage <juju-core:In Progress by axwalk> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1599503>
<axw> blahdeblah: OK, thanks for letting me know. I was thinking that would be the fastest course of action. I'm on the case anyway, hope to have it fixed today or tomorrow in the 2.0 branch.
<axw> thumper: fire away
<thumper> axw: probably best over hangout
<blahdeblah> axw: FWIW, we don't care about 2.0 for production envs. :-)
<axw> blahdeblah: yeah I just mean I'm fixing it there, then will back port :)
<axw> back port shouldn't be long behind
<thumper> axw: https://hangouts.google.com/hangouts/_/canonical.com/volumes?authuser=1
<blahdeblah> axw: cool - thanks
<thumper> see ya tomorrow folks
 * dimitern waves ;)
<dimitern> morning all
<macgreagoir> \o
<hoenir> morning
<frobware> dimitern: welcome back; want to sync?
<dimitern> frobware: thanks! sure, just give me ~15m need to sort out the sprint stuff quickly first
<frobware> dimitern: ok
<dimitern> frobware: hey, let's sync? joining the new "Standup" HO now..
<babbageclunk> dimitern: welcome back!
<dimitern> babbageclunk: thanks! how's it going ? :)
<dimitern> babbageclunk: I see you'll be deserting our motley team for NZ :D
<babbageclunk> dimitern: pretty good, I think! How was gophercon?
<babbageclunk> dimitern: Yeah :( but :)
<dimitern> babbageclunk: awesome! lots of good talks and 2x as many people as 2014
<babbageclunk> dimitern: how big is it?
<dimitern> babbageclunk: >1500
<babbageclunk> dimitern: nice
<dimitern> babbageclunk: I'll prep a summary and send it some time this week
<dimitern> jam1: ping?
<dimitern> jam1: np, ignore that :)
<babbageclunk> fwereade_: ping?
<babbageclunk> dimitern: got a moment for some advice?
<dimitern> babbageclunk: sure
<babbageclunk> dimitern: I'm working on bug 1585878
<mup> Bug #1585878: Removing a container does not remove the underlying MAAS device representing the container unless the host is also removed. <2.0> <hours> <maas-provider> <network> <reliability> <juju-core:Triaged by 2-xtian> <https://launchpad.net/bugs/1585878>
<dimitern> babbageclunk: yeah..
<babbageclunk> Talking to Will it turns out to need a bit more structure than it first seemed from the bug.
<dimitern> babbageclunk: wanna HO or IRC is ok?
<babbageclunk> dimitern: Actually HO would be better, now that you say.
<babbageclunk> dimitern: In juju-sapphire?
<dimitern> babbageclunk: ok, joining the upcoming standup call - "core" I think it's called
<dimitern> I don't think I have that one anymore on my cal
<babbageclunk> ok, I still see sapphire.
<babbageclunk> how about that
<babbageclunk> ?
<dimitern> babbageclunk: I see it from last friday
<fwereade_> babbageclunk, sorry! what can I do for you?
<fwereade_> babbageclunk, dimitern: shall I join somewhere?
<mup> Bug #1605714 changed: juju2 beta11: LXD containers always pending on ppc64el systems <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1605714>
<mup> Bug #1605747 changed: [ juju2 beta11 ] Maas system is deployed but agent remains pending <oil> <oil-2.0> <juju-core:Invalid> <https://launchpad.net/bugs/1605747>
<mup> Bug #1605756 changed: [ juju2 beta11 ] system show up in juju status as pending but there is no attempt to deploy in maas <oil> <oil-2.0> <juju-core:Invalid> <MAAS:New> <https://launchpad.net/bugs/1605756>
<mup> Bug #1605790 changed: Unable to initialize agent <vpil> <juju-core:New> <https://launchpad.net/bugs/1605790>
<mup> Bug #1605790 opened: Unable to initialize agent <vpil> <juju-core:New> <https://launchpad.net/bugs/1605790>
<mup> Bug #1605790 changed: Unable to initialize agent <vpil> <juju-core:New> <https://launchpad.net/bugs/1605790>
<mup> Bug #1605986 changed: Creating container: can't get info for image 'ubuntu-trusty' <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1605986>
<mup> Bug #1605986 opened: Creating container: can't get info for image 'ubuntu-trusty' <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1605986>
<mup> Bug #1605986 changed: Creating container: can't get info for image 'ubuntu-trusty' <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1605986>
<frobware> dimitern: apt-get update generally working for you atm?
<dimitern> frobware: I've been using apt update instead, for a while now
<dimitern> frobware: but apt-get update worked - just checked
<frobware> dimitern: my network is flaky, possibly because I'm setting up IPv6, but lots of things seem to work, equally lots don't...
<dimitern> frobware: oh I see :/
<frobware> dimitern: I can generally do enough but any update eventually fails. some machines make lots of progress, others stop after 1 hit
<dimitern> frobware: maas-proxy could be messing some reqs?
<rick_h_> fwereade_: can you make sure to have a card in kanban with the links and such for your PR: https://github.com/juju/juju/pull/5863 please?
<frobware> dimitern: ping - standup
<dimitern> oops omw
<rick_h_> dimitern: natefinch dooferlad ping for standups and such
<frobware> dooferlad: ^^
<rick_h_> fwereade_: ping for standup ^
<mup> Bug #1606256 opened: AWS failed to bootstrap environment refreshing addresses <bootstrap> <ci> <ec2-provider> <reliability> <retry> <juju-core:Triaged> <https://launchpad.net/bugs/1606256>
<mup> Bug #1606256 changed: AWS failed to bootstrap environment refreshing addresses <bootstrap> <ci> <ec2-provider> <reliability> <retry> <juju-core:Triaged> <https://launchpad.net/bugs/1606256>
<mup> Bug #1605756 opened: [ juju2 beta11 ] system show up in juju status as pending but there is no attempt to deploy in maas <oil> <oil-2.0> <juju-core:New> <MAAS:Invalid> <https://launchpad.net/bugs/1605756>
<mup> Bug #1606256 opened: AWS failed to bootstrap environment refreshing addresses <bootstrap> <ci> <ec2-provider> <reliability> <retry> <juju-core:Triaged> <https://launchpad.net/bugs/1606256>
<katco> fwereade_: hey, how did you come up with 100 for the batch size? i'm trying to find documentation for that flag
<fwereade_> katco, "more pessimistic than the 1000 many reported success when using"
<fwereade_> katco, I drew a blank on docs too, went entirely by empirical "does it still work"
<katco> fwereade_: ahh :)
<katco> fwereade_: yeah is this a flag on mongod? doesn't seem to be there
<katco> fwereade_: or looks like maybe mongorestore
<fwereade_> katco, I have half a suspicion that a sufficiently-runaway txn-queue could make specific docs problematic in themselves, which is why I asked for a dump
<fwereade_> katco, yeah, mongorestore
<dimitern> gtv
<katco> fwereade_: sorry got distracted. that sounds interesting (i.e. horrible)... what is a "run away txn-queue"? writes/s txns > reads/s txns?
<fwereade_> katco, in particular, if you run transactions that *only* have asserts, the txn-queue fields in the affected documents never get cleaned up and grow without bound
<fwereade_> katco, mgopurge fixes it; we have code we run on a timer to catch it; but it's a Thing That Can Happen
<katco> ew
<fwereade_> katco, especially in older environments from before we discovered this
<fwereade_> katco, well put ;)
<katco> everything about that is ew haha. the situation, our fix, ew
<fwereade_> katco, indeed, we *should* better filter the txns so we don't even allow them to hit mgo/txn, to prevent the issue in the first place
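The growth mechanism fwereade_ describes can be shown with a toy simulation (plain Go, not mgo/txn itself; the `doc`/`runTxn` names are illustrative): each transaction pushes a token onto the touched document's txn-queue, but cleanup only happens when a transaction actually modifies the document, so assert-only transactions leave their tokens behind forever.

```go
package main

import "fmt"

// doc is a toy stand-in for a mongo document with a txn-queue field.
type doc struct {
	value    string
	txnQueue []string // grows without bound under assert-only txns
}

// runTxn appends the txn token, then flushes the queue only if the txn
// wrote something -- mimicking cleanup-on-apply.
func runTxn(d *doc, token string, writes bool) {
	d.txnQueue = append(d.txnQueue, token)
	if writes {
		d.txnQueue = d.txnQueue[:0] // applied writes get cleaned up
	}
}

func main() {
	d := &doc{value: "active"}
	for i := 0; i < 1000; i++ {
		runTxn(d, fmt.Sprintf("txn-%d", i), false) // assert-only
	}
	fmt.Println("queue length after 1000 assert-only txns:", len(d.txnQueue))
}
```

This is why filtering assert-only transactions before they reach mgo/txn, as suggested above, prevents the problem at the source rather than patching it up with mgopurge.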
<katco> fwereade_: ship it, with a request for just a little documentation
<fwereade_> katco, ack, tyvm
<babbageclunk> fwereade_: I don't really understand the relationship between state/watchers and watcher/watchers. Can you explain?
<rogpeppe1> a question for anyone: given that model names are relative to usernames, how can i switch to a model owned by someone else that has the same name as a model owned by me?
<frobware> dooferlad: does this work on your network: wget -6 http://security.ubuntu.com/ubuntu/
<babbageclunk> I *think* at the bottom everything ends up being a state watcher, right?
<babbageclunk> fwereade_: The admonitions in Boring Techniques against workers depending on state watchers is just a specific case of "workers should use the api rather than talking directly to state".
<mup> Bug #1606265 opened: Bogus upgrade in progress <ci> <list-controllers> <regression> <reliability> <juju-core:Triaged> <https://launchpad.net/bugs/1606265>
<fwereade_> babbageclunk, hey, sorry
<fwereade_> babbageclunk, it *is* a special case of that, yes; but more generally it's "don't use watchers that close their channels except where obliged to by reasons surrounding state"
<babbageclunk> fwereade_: Ok.
<fwereade_> babbageclunk, the relationship is pretty much *just* that one closes its Changes chan and the other one doesn't
<fwereade_> babbageclunk, and they have different types in a sort of attempt to encourage people to distinguish between them
<babbageclunk> fwereade_: So in my case, I'll add a state one, and then expose that in the API as a watcher/watcher and make a worker that uses that?
<fwereade_> babbageclunk, watchers are basically all implemented either in state, or in api/watcher
<fwereade_> babbageclunk, exactly, yeah
<babbageclunk> fwereade_: Also, I can't find an implementation of a notifywatcher that watches a whole collection (rather than a specific doc).
<fwereade_> babbageclunk, hmm; cleanup watcher maybe? *checks*
<babbageclunk> fwereade_: Aha, thanks!
<babbageclunk> ok, cool.
<fwereade_> babbageclunk, state.newNotifyCollWatcher?
<fwereade_> babbageclunk, ah yes indeed
<fwereade_> babbageclunk, (derail: I don't think there's anything stopping one from implementing a `watcher.WhateverWatcher` against whatever one chooses -- I think the watcher model is pretty useful -- but it is true that the vast majority, and perhaps all, of our live watcher.WhateverWatchers *are* backed by state.WhateverWatchers on a controller somewhere)
<fwereade_> babbageclunk, (if one *doesn't* swap out the implementation when writing worker tests, one generally has an unpleasant time and ends up with slow and/or flaky tests)
<babbageclunk> fwereade_: Makes sense.
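The distinction fwereade_ draws can be sketched with two minimal watcher styles (illustrative types, not juju's real interfaces): one closes its Changes channel when it stops, which is the state-watcher convention, and the other leaves Changes open and reports shutdown via Kill/Wait, which is the worker-facing convention.

```go
package main

import "fmt"

// closingWatcher signals shutdown by closing its Changes channel;
// consumers detect stop via the channel-closed zero receive.
type closingWatcher struct{ ch chan struct{} }

func (w *closingWatcher) Changes() <-chan struct{} { return w.ch }
func (w *closingWatcher) Stop()                    { close(w.ch) }

// nonClosingWatcher never closes Changes; lifecycle is reported
// out-of-band through Kill and Wait instead.
type nonClosingWatcher struct {
	ch   chan struct{}
	done chan struct{}
}

func (w *nonClosingWatcher) Changes() <-chan struct{} { return w.ch }
func (w *nonClosingWatcher) Kill()                    { close(w.done) }
func (w *nonClosingWatcher) Wait() error              { <-w.done; return nil }

func main() {
	cw := &closingWatcher{ch: make(chan struct{})}
	cw.Stop()
	_, ok := <-cw.Changes()
	fmt.Println("closing watcher channel open after stop:", ok) // false

	nw := &nonClosingWatcher{ch: make(chan struct{}), done: make(chan struct{})}
	nw.Kill()
	fmt.Println("non-closing watcher Wait error:", nw.Wait()) // <nil>; Changes stays open
}
```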
<frobware> dimitern: ping - care to debug some ipv6 issues? Only if you have time...
<dimitern> frobware: I have some time before I go out at the top of the hour
<frobware> dimitern: 1:1 HO?
<dimitern> frobware: omw
<frobware> dimitern: oh, think I've deleted it. link? :)
<dimitern> frobware: me too :)
<dimitern> frobware: let's use the last one (standup)
<mup> Bug #1606278 opened: juju (2.0) deploy <charm-name>/<revision#> fails <juju-core:New> <https://launchpad.net/bugs/1606278>
<mup> Bug #1606282 opened: juju (2.0) deploy <bundle-name> fails as current working directory has a bundle <juju-core:New> <https://launchpad.net/bugs/1606282>
<katco> does the phrase "ABA problem" mean something to anyone?
<katco> ah... should have searched first: https://en.wikipedia.org/wiki/ABA_problem
<mgz> liking swedish pop too much?
<katco> mgz: ha, the same joke occurred to me
<perrito666> uh, I arrived way too late for the joke
<perrito666> katco: interesting how they try to convey the issue in the name but just make it way more unrelated
<katco> perrito666: yeah
<katco> perrito666: i feel like it should have "race condition" somewhere in the name
<katco> "ABA race condition" at least hints at what's happening
<mup> Bug #1606300 opened: Race in github.com/altoros/gosigma <cloudsigma-provider> <intermittent-failure> <race-condition> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1606300>
<mup> Bug #1606302 opened: testsuite.TestWatchUnitAssignment got Next <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1606302>
<redir> morning juju-dev
<katco> hey redir
<dooferlad> frobware: sorry about the delay, yes, that works for me
 * dooferlad goes back to running around like a crazy person
<frobware> dooferlad: yep, things work from my router, not my clients.
<dooferlad> frobware: ip -6 route ?
<frobware> dooferlad: I've fiddled as this was mostly working before. :)
<mup> Bug # opened: 1606303, 1606308, 1606310, 1606313
<mup> Bug #1606337 opened: Change single to multiple 'auto' stanzas in generated network configuration. <juju-core:New> <https://launchpad.net/bugs/1606337>
<mup> Bug #1606354 opened: Created user has no display-name <juju-core:Triaged> <https://launchpad.net/bugs/1606354>
<mup> Bug #1606354 changed: Created user has no display-name <juju-core:Triaged> <https://launchpad.net/bugs/1606354>
<endomorphosis_> does anyone know how to deal with this error message?
<endomorphosis_> cmd supercommand.go:458 storing charm for URL "cs:juju-gui-130": cannot retrieve charm "cs:juju-gui-130": cannot get archive: Get https://api.jujucharms.com/charmstore/v5/juju-gui-130/archive?channel=stable: dial tcp 162.213.33.122:443: getsockopt: connection timed out
<mup> Bug #1606354 opened: Created user has no display-name <juju-core:Triaged> <https://launchpad.net/bugs/1606354>
<natefinch> sinzui, thumper: do you know what kind of cert we're using for TLS on the juju server?  i.e. RSA or ECDSA (or both if that's possible?  I don't know).   Working on https://bugs.launchpad.net/juju-core/+bug/1604474
<mup> Bug #1604474: Juju 2.0-beta12  userdata execution fails on Windows <azure-provider> <juju2.0> <oil> <oil-2.0> <windows> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1604474>
<sinzui> natefinch: I don't know
<natefinch> gotta run for a while, but will be back later
<natefinch> sinzui: ok, np
<mup> Bug #1488245 changed: Recurring lxc issue: failed to retrieve the template to clone  <canonical-bootstack> <landscape> <lxc> <oil> <juju-core:Invalid> <https://launchpad.net/bugs/1488245>
<mup> Bug #1605669 changed: grant-revoke User could not check status with read permission <ci> <grant> <regression> <juju-core:Fix Released by gz> <https://launchpad.net/bugs/1605669>
#juju-dev 2016-07-26
 * thumper stretches
<thumper> that is some tests passing now...
<thumper> time to take the dog for a walk
<mwhudson> anastasiamac: building a new juju-mongodb3.2 now in my ppa \o/
<mwhudson> at least half the time on the update was spent getting confused by dos line endings, grr
<anastasiamac> mwhudson: m sure there'd be lots of ppl very happy :) tyvm!
<axw> thumper: apart from the two things I've commented on your PR, it looks fine to me
<thumper> axw: the pubsub one?
<axw> thumper: sorry yeah
<axw> thumper: and they're not showstoppers, just my preference to avoid reflection, and I think the channel stuff could be simplified but otherwise it's ok
<menn0> thumper: http://reviews.vapour.ws/r/5303/ pls
<thumper> menn0: done
<rogpeppe2> axw: ping
<axw> rogpeppe: pong
<rogpeppe> axw: hiya
<rogpeppe> axw: i was just wondering about user-namespaced model names
<rogpeppe> axw: if there are two models with the same name in a controller but under two different user namespaces, how would i switch to each?
<axw> rogpeppe: juju switch owner/model
<axw> rogpeppe: it was done and dusted, but got backed out as it broke a CI test. waiting on a change to CI being merged before landing again
<rogpeppe> axw: ah! that's why i don't see it!
<rogpeppe> axw: i thought you'd done something but couldn't find it
<axw> rogpeppe: should be back in tomorrow I think
<rogpeppe> axw: cool
<axw> assuming master remains unblocked
<rogpeppe> axw: did you update the docs for juju switch too?
<axw> rogpeppe: hmm not sure now, but I don't remember doing it. I'll make sure they're up to date when I re-land
<rogpeppe> axw: thanks
 * dimitern is going offline for ~30m (scheduled power cut)
<hoenir> so if the fix is commited why does it appear on this ? http://juju.fail/
<mup> Bug #1606487 opened: apt-mirror does not override security.ubuntu.com for controller <juju-core:New> <https://launchpad.net/bugs/1606487>
<dimitern> hoenir: because it's not Fix Released yet
<dimitern> should be once 1.25.7 is out
<axw> fwereade_: what's wrong with OpenParams? I'm hesitant to change only because it's consistent with just about every other param struct we use
<fwereade_> axw, it's not specific to Open
<fwereade_> axw, it's naming the type for one of its uses, instead of for what it really *is*
<axw> fwereade_: environs.New shouldn't even exist. it's just a short-cut around the global provider registry, and EnvironProvider.Open
<axw> fwereade_: if it didn't exist, it would be specific to Open :/
<axw> fwereade_: trying to think of another name. Definition doesn't really sit right with me, because I don't think credentials define an environ
<fwereade_> axw, yeah, nor with me
<fwereade_> axw, so if you can't think of anything better, go for it -- but I still really doubt this type will be used *exclusively* when calling Open
<hoenir> thanks dimitern
<axw> fwereade_: I'm trying very hard to make Open be the only thing that cares about creds/cloud definition, so I really hope it is. I'll ponder, but otherwise I think I'll defer renaming until we have another user - it would probably be clearer what to call it then anyway.
<fwereade_> axw, ack, thanks for pondering :)
<fwereade_> axw, tyvm for all the recent refactoring, btw, it is gruelling to review but well worth it
<axw> fwereade_: cool, and thanks for reviewing. pretty gruelling to change too :p
<axw> I'm especially not looking forward to adding cloud/creds into OpenParams
<fwereade_> axw, ha, yeah
<axw> only 65 call sites
 * fwereade_ shudders gently
<mup> Bug #1606506 opened: juju should tell maas to use a kernel with all required features <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1606506>
<axw> mgz: any chance of a review on the juju-ci-tools MP?
<mup> Bug #1606506 changed: juju should tell maas to use a kernel with all required features <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1606506>
<dimitern> dooferlad: cheers for the write-up!
<dimitern> dooferlad: I got a bit carried away suggesting and commenting :)
<dooferlad> dimitern: thanks!
<dooferlad> dimitern: no worries!
<dimitern> dooferlad: it also helped me get into the right train of thought w.r.t. the sprint prep around IPAM
<dooferlad> dimitern: would you like to hop into a conversation with macgreagoir and me?
<dooferlad> dimitern: just in a hangout
<dimitern> can do in 2m though
<dooferlad> dimitern: https://hangouts.google.com/hangouts/_/canonical.com/release-team
<dooferlad> dimitern: see you when you get here
<babbageclunk> dimitern: ping?
<babbageclunk> fwereade_: you around?
<dimitern> babbageclunk: I am, but I'm just going out - will be back ~1h though
<babbageclunk> dimitern: Ok cool - will go for a run now. Talk to you later!
 * babbageclunk goes for a run.
<frankban|afk> fwereade_: hi, can I ask you a question a bout controller's and model's config?
<rogpeppe> axw: ping
<katco> everyone, pencils down. we can stop working on juju. emacs has solved the problem: https://github.com/cjohansson/emacs-ssh-deploy
<mup> Bug # opened: 1606566, 1606568, 1606569, 1606571
<natefinch> dooferlad: looks like we screwed windows when we dropped most of our ciphersuites
<dimitern> rick_h_: standup ?
<katco> frobware: standup?
<jam1> mgz: I just filed https://github.com/juju/juju/pull/5870 but it doesn't look like reviewboard picked it up
<dimitern> babbageclunk: wanna chat in the other HO (networking tests)?
<dimitern> https://plus.google.com/hangouts/_/canonical.com/networking
<babbageclunk> dimitern: oh, yes please! See you there.
<frobware> balloons: sorry I missed our meeting; was debugging IPv6 and my network lost it!
<frobware> katco: ^^ ditto
<katco> frobware: lol that is a good excuse! :)
<anastasiamac> jam1: reviewed
<jam1> thanks anastasiamac, still hasn't shown up
<jam> (on Reviewboard)
<anastasiamac> jam: i have seen some proposals that behave this way. I've reviewed on github, You should b good to land \o/ Do we want to backport it to 1.25 too?
<jam> anastasiamac: did we backport the other 'status' changes? Like putting the 'model' 'controller' sections?
<jam> if so, then yes
<anastasiamac> jam: m not sure if model/controller exist on 1.25
<anastasiamac> :)
<anastasiamac> jam: if address/port order change needs to be backported, it'll need to be handcrafted (!) rather than cherry-picked \o/
<babbageclunk> dimitern: ipAddressDoc doesn't have mac address! Should I use linkLayerDeviceDocs to populate the interfaceInfo?
<babbageclunk> dimitern: duh, reading that it's obvious - of course an ip address doesn't have a mac address.
<dimitern> babbageclunk: yeah :)
<mup> Bug #1606611 opened: test log spammed with TLS handshake <logging> <tech-debt> <testing> <juju-core:Triaged> <https://launchpad.net/bugs/1606611>
<dimitern> any reviews welcome btw: https://github.com/juju/juju/pull/5871
 * dimitern is outta here
<mup> Bug #1606611 changed: test log spammed with TLS handshake <logging> <tech-debt> <testing> <juju-core:Triaged> <https://launchpad.net/bugs/1606611>
<mup> Bug #1606617 opened: juju not using image-stream-url for container images <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1606617>
<mup> Bug #1606617 changed: juju not using image-stream-url for container images <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1606617>
<frobware> rick_h_:  With my ipv6 only environment now working (huzzah) my first bootstrap using the manual provider failed (boo!).  Will start to accumulate my setup notes tomorrow.
<mup> Bug # changed: 1440097, 1455629, 1572634, 1572637, 1573286, 1584626, 1592537, 1600539, 1602231, 1603133, 1603585, 1604006
<natefinch> dude our tests suck
<natefinch> gaahhhh
<mup> Bug #1606658 opened: local charm does not deploy updated version after removing application and deploying application <juju-core:New> <https://launchpad.net/bugs/1606658>
<katco> looks like github is acting up
<katco> annnd we're back
<mup> Bug #1606684 opened: upgrade-charm fails to upgrade local charm <juju-core:New> <https://launchpad.net/bugs/1606684>
<mup> Bug #1606663 changed: cannot juju ssh to lxd machine in non-default model <juju-core:Invalid> <https://launchpad.net/bugs/1606663>
<mup> Bug #1606569 changed: transient "agent is lost" <oil> <oil-2.0> <v-pil> <juju-core:Invalid> <https://launchpad.net/bugs/1606569>
<mup> Bug #1606658 changed: local charm does not deploy updated version after removing application and deploying application <juju-core:New for anastasia-macmood> <https://launchpad.net/bugs/1606658>
<katco> perrito666: ping
<perrito666> katco: pong
<katco> perrito666: hey is that offer to get connected up to my environment's mongo still good?
<perrito666> katco: always </snape>
<katco> hehe
<mup> Bug #1606706 opened: MachinerSuite.TestMachinerMachineAssignedUnits fails intermittently <intermittent-failure> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1606706>
<thumper> oh poop
<thumper> I had assumed that the factory method for creating machines with volumes actually took all the params into account... but it appears not to
<mup> Bug #1606709 opened: manual provider shows started machine as "pending" <manual-provider> <status> <juju-core:Triaged> <https://launchpad.net/bugs/1606709>
<thumper> or at least it removes the pool name
<thumper> ah... go, sometimes you suck badly
<thumper> ffs
 * thumper thinks
<thumper> nope, that was me
<mup> Bug #1606711 opened: When add-machine exits with nonzero status, rerunning add-machine can produce a duplicate <add-machine> <ci> <intermittent-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1606711>
<perrito666> bb just in time for standup
<menn0> thumper: http://reviews.vapour.ws/r/5308/ pls
<thumper> menn0: shipit
<menn0> thumper: thanks
<thumper> perrito666: ping
<thumper> volume export tests passing, now to work in imports after lunch
#juju-dev 2016-07-27
<mup> Bug #1606569 opened: transient "agent is lost" <oil> <oil-2.0> <presence> <v-pil> <juju-core:Triaged> <https://launchpad.net/bugs/1606569>
<redir> so i should expect to see aws under known cloud types when running juju help add-cloud, yes?
<anastasiamac> redir: i think it's "ec2"...
<redir> anastasiamac: tx
<anastasiamac> redir: \o/
<redir> interesting that it's azure:azure but aws:ec2 and google:gce
<redir> so would it be openstack:nova?
<redir> or since it is kind of ec2 compatible would it be openstack:ec2
<redir> anyhow
<redir> naming things
<anastasiamac> i like consistency too but this has grown over time and there is a difference between cloud name and cloud type I guess :)
<mwhudson> veebers: juju mongodb3.2 is in yakkety-release now, so i'd hope it will be picked up by whatever yakkety CI you guys are doing...
<thumper> anastasiamac, redir: mark has wanted ec2 to be renamed as aws for some time
<thumper> just an FYI
 * redir backs away slowly
<redir> is there an easy way to see what config options are available for the ec2 type?
<redir> easier than grep
<anastasiamac> thumper: sounds like it's worthy of a bug then :) it seems to be a simple change on the surface but will probably have a big effect :) might b worthwhile tracking
<anastasiamac> and might be a good idea to try to do for 2.0... considering all the breaking changes we're already tackling \o/
<thumper> redir: look at provider/ec2/config.go
 * thumper guesses at the filename
<natefinch-afk> yeah, pretty much always config.go
<natefinch> redir: btw, aws is ec2, because there's also aws-china and aws-gov   and there's azure-china
<natefinch> I guess the type could still be aws with aws and aws-china and aws-gov for cloud names
<redir> works for azure...
<redir> but naming things...
<natefinch> anyone know if we're getting our help topics back before 2.0 is released?  We used to have help for constraints and clouds and any number of other things and they were all removed for some reason
<redir> I saw at a glance it was alpha sorted, but missed ec2...
<redir> of course the first cloud listed under juju clouds was aws
<redir> I just missed the whole type thing
 * redir backs away further, slower
<redir> natefinch: I think I had the issue to remove them my first week on the job.
<thumper> ok... have a failing import test written
<redir> so I couldn't tell you much about them.
<thumper> I don't understand why those topics were removed
<redir> thumper: w00t
<thumper> they were useful
<redir> It was bug #1564017
<mup> Bug #1564017: 'juju help glossary' and 'juju help topics' are deprecated <helpdocs> <juju-core:Fix Released by reedobrien> <https://launchpad.net/bugs/1564017>
<anastasiamac> natefinch: i wanted to talk to u but it must be getting late for u.. do u think we could catch up tomorrow before the standup?
<anastasiamac> mayb 30mins before?
<natefinch> anastasiamac: I'm online for some time yet. Totally willing to talk now if you like.
<anastasiamac> natefinch: k. gimme 10 min :)
<redir> 10 minimum
<redir> :)
<anastasiamac> lol
<anastasiamac> natefinch: HO we used yesterday?
<natefinch> anastasiamac: sure
<anastasiamac> natefinch: there :) here :)
<redir> axw: yt?
<axw> redir: I am
<redir> got a few minutes for a HO?
<redir> axw: ^^
<axw> redir: sure, see you in standup
<redir> k
<redir> tx again axw that was very helpful
<axw> redir: no worries
<menn0> thumper: ping?
<thumper> menn0: ya?
<menn0> opinion required
<thumper> I'm full of those :)
<menn0> i'm making a change to the migrationmaster so that each instance (for each model) has its own logger
<menn0> so that it's possible to distinguish the activity for each
<menn0> it's done but i've realised it's kinda ugly
<menn0> right now it's using "juju.worker.migrationmaster:model-uuid"
<menn0> which is really long
<menn0> for example:
<menn0> 2016-07-27 02:56:00 INFO juju.worker.migrationmaster:fcc877db-3451-440b-84f4-0d817000b4b9 worker.go:259 opening API connection to target controller
<thumper> what about just: "juju.migration:model-uuid" ?
<thumper> what about just: "juju.migration:model-uuid[:6]" ?
<thumper> we use the last 6 chars of the uuid for the instance id
<thumper> model-uuid[-6:]
<thumper> python equivalent
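The slice thumper describes can be written in Go as a small sketch; the helper name `shortModelUUID` is illustrative, not juju's actual code:

```go
package main

import "fmt"

// shortModelUUID returns the last six characters of a model UUID,
// mirroring the Python slice model_uuid[-6:] mentioned above.
func shortModelUUID(uuid string) string {
	if len(uuid) <= 6 {
		return uuid
	}
	return uuid[len(uuid)-6:]
}

func main() {
	// UUID taken from the log line quoted earlier in this discussion.
	fmt.Println("juju.worker.migrationmaster:" + shortModelUUID("fcc877db-3451-440b-84f4-0d817000b4b9"))
}
```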
<menn0> changing the prefix (juju.worker.migrationmaster) makes it sufficiently different from other workers
<menn0> I think that should probably stay the same
<menn0> thumper: what about using the model name instead of the UUID?
 * menn0 wonders if they are unique in a given controller
<menn0> suspect they are
<thumper> 2016-07-27 02:56:00 INFO juju.worker.migrationmaster:00b4b9 worker.go:259 opening API connection to target controller
<thumper> how does that look?
<thumper> I think the uuid is better
<thumper> the models aren't unique
<thumper> but namespaced to the owner
<thumper> model names that is
<menn0> right
<menn0> what about: "juju.worker.migrationmaster:owner/name"
<menn0> I guess that could get pretty long too
<menn0> but at least it's readable...
<thumper> consider...
<thumper> "juju.worker.migrationmaster:foobar@external/production-foo"
<menn0> yeah, I think the UUID is better
<thumper> one thing it should probably log first would be owner/name :)
<menn0> there is likely to be tooling which extracts the migration logs for a given model
<menn0> thumper: that's not a bad idea
<menn0> will require some API changes but nothing difficult
<thumper> I'm really close to having volume migration done
<menn0> just to round out the discussion, what about:
<menn0> 00b123:juju.worker.migrationmaster
<thumper> will be off line for about an hour shortly as I take Maia and my laptop to her hockey game
<thumper> that will have a different base logger
<thumper> so not caught with the juju=debug
<thumper> logger hierarchy is based on "."s
<menn0> thumper: true
<thumper> w00t import tests pass
<menn0> thumper: on the flip side it makes it easier to filter all the logs for a given model
<natefinch> juju.00b123.worker.migrationmaster?
<thumper> migration.00b123 ?
<menn0> really, all of this is a moot point if we had the model specific logs generated on the controller attached to the correct model when the logs are recorded
<menn0> but we haven't quite figured that out
<thumper> which would be WAY nicer if we had one 64 bit int as the primary key
<thumper> rather than a futzing UUID
<menn0> yep yep
<natefinch> IMO the best idea is not to munge the log hierarchy but give another dimension on which users can filter
<thumper> menn0: hmm... $ git diff master | wc -l
<thumper> 1492
<thumper> and a few more tests to write for migration_internal_test.go
<menn0> natefinch: we have that (debug-log only shows you the logs for the model you're connected to)
<menn0> natefinch: it just doesn't work well for model-specific workers which run in the controller
<menn0> natefinch: if we did that right then none of this would matter
<menn0> I'm going to raise this at the tech board meeting
<natefinch> menn0: then let's do that right ;)
<menn0> natefinch: that's what I intend to raise at the meeting - doing it right
<menn0> it might not even be that hard
<thumper> I do think we should store some unique DB integer keys for models
<thumper> and store those against the logs
<thumper> and translate as necessary
<thumper> less junk in the db
<thumper> rather than storing perhaps two different uuids for every log line
<menn0> thumper: to save space? probably
<thumper> and faster queries
<thumper> integer comparison is faster than string comparison
<natefinch> we did that at my last job... the real IDs were UUIDs, but there was a translation table to ints to save memory (we had millions of items in memory)... they are also more user friendly when counts are small
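The translation table natefinch mentions can be sketched like this; the type and its methods are hypothetical, not how juju's log storage actually works:

```go
package main

import "fmt"

// modelIDs assigns a small stable integer to each model UUID the
// first time it is seen, so each log record can store an int instead
// of repeating the 36-byte UUID string.
type modelIDs struct {
	byUUID map[string]int
	byID   []string
}

func newModelIDs() *modelIDs {
	return &modelIDs{byUUID: make(map[string]int)}
}

// ID returns the integer key for uuid, allocating a new one if
// this UUID has not been seen before.
func (m *modelIDs) ID(uuid string) int {
	if id, ok := m.byUUID[uuid]; ok {
		return id
	}
	id := len(m.byID)
	m.byID = append(m.byID, uuid)
	m.byUUID[uuid] = id
	return id
}

// UUID translates an integer key back for display.
func (m *modelIDs) UUID(id int) string { return m.byID[id] }

func main() {
	ids := newModelIDs()
	a := ids.ID("fcc877db-3451-440b-84f4-0d817000b4b9")
	b := ids.ID("fcc877db-3451-440b-84f4-0d817000b4b9")
	fmt.Println(a == b, ids.UUID(a))
}
```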
 * thumper nods
<menn0> thumper: file a ticket!
<thumper> ha
<thumper> menn0: I'm offline for a bit, but taking laptop to write more tests
<thumper> volumes should be good to go when I get back
<menn0> thumper: that's great news
<thumper> now to do all the same shit with filesystems
<thumper> hazaah
<veebers> mwhudson: re: yakkity/mongo sweet thanks, I'll make sure that's passed on to the team
<natefinch> quick review anyone?  Fixing windows by adding ciphersuites it actually supports: http://reviews.vapour.ws/r/5307/
<natefinch> (it's two lines and a comment)
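The kind of change natefinch describes (re-adding RSA key-exchange suites for older Windows clients) looks roughly like this in `crypto/tls`; the exact suite list juju settled on may differ, this is only a sketch:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// windowsCompatibleSuites keeps the preferred ECDHE+AES suites but
// adds plain-RSA key-exchange fallbacks that older Windows TLS
// stacks can negotiate. Illustrative only, not juju's actual list.
func windowsCompatibleSuites() []uint16 {
	return []uint16{
		tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
		tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
		// Fallbacks for clients without ECDHE support:
		tls.TLS_RSA_WITH_AES_128_CBC_SHA,
		tls.TLS_RSA_WITH_AES_256_CBC_SHA,
	}
}

func main() {
	cfg := &tls.Config{CipherSuites: windowsCompatibleSuites()}
	fmt.Println(len(cfg.CipherSuites))
}
```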
<thumper> axw, menn0: pretty sure I have everything for migrating volumes -  http://reviews.vapour.ws/r/5310/
<thumper> menn0: when we are both back, I'd like to chat about the bootstrap and upload tools issue
<thumper> menn0: ping
<menn0> thumper: pong
<menn0> thumper: 1:1?
<thumper> menn0: quick hangout?
<thumper> yeah
<menn0> axw, fwereade__: tech board?
<axw> menn0: coming
<balloons> axw, you still here?
<axw> balloons: sure am
<balloons> axw, I wanted to chat with you about your MP
<axw> balloons: sounds good. here or hangout?
<balloons> axw, sure, let's do a hangout
<axw> balloons: there is a bug with "juju register". easy to fix, just adding a test for it now.
<balloons> axw, ahh brilliant. So indeed there's a bug
<balloons> sorry I didn't file something specific for this
<axw> balloons: if you added a user with access to exactly one model, register would try to switch to it but failed to qualify its name
<axw> balloons: np
<balloons> axw, right. When you talked about switching models, I had thought register did that for you
<balloons> and that's how your change caused the issue :-)
<axw> balloons: only when you have exactly one, which I didn't when I tested
<balloons> the world makes sense again
<axw> balloons: :)
<babbageclunk> frobware: Is it possible to install maas1.9 on xenial? Or do I need to make a trusty vm?
<babbageclunk> frobware: Ignore me, worked it out.
<balloons> axw, so there's an MP that already moves that function to jujupy
<axw> balloons: which what?
<balloons> axw, https://code.launchpad.net/~nealpzhang/juju-ci-tools/assess_multi_users/+merge/299463
<frobware> babbageclunk: you need trusty
<babbageclunk> frobware: Yeah - I see that there's no xenial in the stable ppa.
<balloons> axw, so you can kind of see how it would work. You need a EnvJujuClient2B14 that inherits EnvJujuClient2 and overrides that func
<babbageclunk> frobware: Do you know if the trusty version of cloud-init needs your hack to the image?
<frobware> babbageclunk: remind me which one :-D
<babbageclunk> frobware: The one to let it bring up the network when there's no dhcp.
<frobware> babbageclunk: HO?
<babbageclunk> frobware: sure - which one?
<babbageclunk> frobware: https://hangouts.google.com/hangouts/_/canonical.com/standup?hl=en&authuser=1
<balloons> axw, it also looks like the code change could be made easier -- you can pass the model name to clone
<macgreagoir> frobware dooferlad: I have a more complete (albeit ugly) work-around for this ip addr sorting issue. Is either of you available for a quick HO to discuss?
<dooferlad> frobware, macgreagoir: give me two minutes then I can.
<frobware> macgreagoir: otp with babbageclunk :)
<macgreagoir> I'll let y'all ping me when ready, cheers :-)
<mup> Bug #1597941 changed: juju2.0beta10: websockets API usability Application Deploy failure to inform of required addCharm pre-requisite <2.0> <conjure> <usability> <juju-core:Won't Fix> <https://launchpad.net/bugs/1597941>
<dooferlad> macgreagoir: back
<macgreagoir> dooferlad: stand-up HO?
<dooferlad> macgreagoir: https://hangouts.google.com/hangouts/_/canonical.com/james-mick ?
<dooferlad> macgreagoir: we can put anything after canonical.com it seems
<axw> balloons: doh, thanks for that. that other MP will need to be updated too.
<balloons> axw, if you want to redo your MP with that as a base, I'm approving it to land, so they can both go
<axw> balloons: okey dokey
<balloons> axw, would you like a bug for the register issue?
<axw> balloons: do you want to go ahead and land that other MP? I'll need to retest, and also update the other new tests and test them with my juju PR
<balloons> axw, I was going to let Leo do it when he awakes in a few hours
<axw> balloons: don't worry about the bug, I'm on it and it's not on master anyway.
<balloons> axw, do you need it before then? If so, that might be possible
<axw> balloons: ok, no worries
<axw> balloons: I'll just merge and work off his commit
<axw> rev
<balloons> right, i asked him to remove a comment, but I don't think you should have a bunch of conflicts remerging trunk when it's in :-)
<balloons> axw, I am curious though -- did the grant revoke test pass with your changes to the test itself?
<balloons> axw, it should still have exposed the register bug -- did it not?
<axw> balloons: it did not
<axw> balloons: which is curious...
<balloons> axw, any thoughts on why not? That would be useful to make sure it's caught
<balloons> the manual test is very similar to what it's doing -- I'm not sure how it worked
<balloons> axw, perhaps you can tweak assert_read_model to add some additional asserts?
<axw> balloons: yep, will look into it
<balloons> axw, sorry for the scope creep, but we do want to make sure we have coverage :-)
<axw> balloons: no worries
<axw> balloons: actually leo's branch works with my juju PR without any further changes. I think because the API methods being used to verify read/write access
<balloons> axw ohh really?
<mup> Bug #1605986 opened: Creating container: can't get info for image 'ubuntu-trusty' <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1605986>
<axw> balloons: so to answer your question about why it wasn't failing: the test wasn't checking the exit status of the "juju register" pexpect process. register is failing right at the end, when it goes to set the current model. so the controller is set up, and further commands succeed because they're specifying a model with "-m"
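The gap axw identifies (scraping output but never checking the process exit status) generalizes beyond pexpect; a minimal Go sketch of the missing check, with a hypothetical `run` helper:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and reports whether it exited 0. A test
// that only scrapes a subprocess's output, without checking its exit
// status, will miss a failure that happens right at the end of the
// command's run.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("%s exited non-zero: %w", name, err)
	}
	return nil
}

func main() {
	fmt.Println(run("false") != nil, run("true") == nil)
}
```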
<babbageclunk> frobware: bug 1585878 - does it affect maas 1.9 as well?
<mup> Bug #1585878: Removing a container does not remove the underlying MAAS device representing the container unless the host is also removed. <2.0> <hours> <maas-provider> <network> <reliability> <juju-core:In Progress by 2-xtian> <https://launchpad.net/bugs/1585878>
<frobware> babbageclunk: I would expect it to if we don't issue calls to release when we do $(juju remove-machine 0/lxd/0).
<babbageclunk> frobware: I can't see the device allocated anywhere
<frobware> babbageclunk: sorry, was otp.
<babbageclunk> frobware: no worries - worked it out - it was the version of my code where I'd added a method to the interface but hadn't added it to the impls yet, so no providers would allocate container addresses.
<babbageclunk> frobware: just noticed that the vsphere provider tries to implement the Networking interface but doesn't. Do you think I should add the other missing methods (Spaces, SupportsSpaceDiscovery) with stub implementations as well as adding ReleaseContainerAddresses?
<frobware> babbageclunk: raise a bug for this. at the very least track it as an independent change.
<babbageclunk> frobware: ok
<frobware> babbageclunk: discrete fixes are easy and trivial to revert -- should the need arise. And cherry-pick too.
<babbageclunk> frobware: you mean do it in a separate commit? Or in a separate PR?
<frobware> babbageclunk: bug + separate PR. your change should be so trivial and obvious and discrete that it's a no-brainer to accept it. :)
<babbageclunk> ok cool
<babbageclunk> thanks
<babbageclunk> frobware: This whole area seems weird - there are providers for which SupportsNetworking returns true (and they implement the Networking interface) but the method implementations return errors.NotSupportedf. Why is this one interface?
<babbageclunk> frobware: I'm putting arch notes to this effect in commits.
<mup> Bug #1606917 opened: vsphere provider is missing some Networking methods <juju-core:New> <https://launchpad.net/bugs/1606917>
<mup> Bug #1600301 changed: cmd/jujud/agent MachineSuite.TestHostedModelWorkers fails because compute-provisioner never really starts <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Invalid by dooferlad> <https://launchpad.net/bugs/1600301>
<frobware> macgreagoir: per our HO earlier... any chance you could try this: http://pastebin.ubuntu.com/21139080/
<frobware> macgreagoir: will look at the LXD / eth.cfg / cloud-init fix now
<mup> Bug #1606922 opened: private-address isn't updated on machine reboot <juju-core:New> <https://launchpad.net/bugs/1606922>
<macgreagoir> frobware: ack
<frobware> macgreagoir: https://bugs.launchpad.net/juju-core/+bug/1602054/comments/14
<mup> Bug #1602054: juju deployed lxd containers are missing a default gateway when configured with multiple interfaces <2.0> <network> <regression> <juju-core:In Progress by dooferlad> <https://launchpad.net/bugs/1602054>
<frobware> macgreagoir: tree is here: https://github.com/frobware/juju/tree/master-lp1602054
<frobware> natefinch: standup?
<mup> Bug #1606939 opened: UNIQUE constraint failed <ci> <intermittent-failure> <lxd-provider> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1606939>
<kwmonroe> ahhh.. roger that.  i'm VM'd.  i heard of people trying ubuntu native on the 2014 macbooks, but had problems with wifi and screen resolution, so i never went down that road.  only guy i know with native ubuntu on a mac is curtis.
<kwmonroe> damn wrong channel :/
<katco> can someone else TAL at http://reviews.vapour.ws/r/5305 ? i feel like i'm missing something and i'm not comfortable landing it
<katco> it's a very short review
<katco> natefinch: perrito666: frobware: et. al: ^^^
<perrito666> checking
<katco> perrito666: ta
 * perrito666 finally has internet
<perrito666> glorious 6Mdown 1Mup
<katco> :)
<katco> perrito666: btw thanks for your help yesterday. i was actually able to write a script to connect to mongo from outside the lxc container
<perrito666> katco: niice, care to share ?
<katco> perrito666: which is a huge relief! no more worrying about adding packages from within the container
<katco> perrito666: yeah i'll send it out to the list
<perrito666> katco: the diff from fwereade seems to break something else, I added a comment
<katco> perrito666: k tal
<katco> perrito666: if i'm reading that code correctly, doesn't the loop remove directories? so prior to fwereade's change, it would have deleted everything *except* for backups?
<perrito666> ah I see that is  poc
<katco> poc?
<perrito666> not something I can say in front of children
<perrito666> so, the previous iteration would keep everything but backups, but done in the most awful way possible
<perrito666> it would loop over ignored list but do nothing unless one of those was backups
<perrito666> the new code will do the same, but, I missed a part, and he is right, yes that code can land, despite how disgusting it is, it is a bit less so than the previous one
<katco> oh i see, i inverted the test case
<katco> wow that is weird code
<perrito666> katco: there, I added a comment
<perrito666> katco: I presume it was done with an "in the future we can expand this already complicated code with bleh" mindset, something that happens too often in backups
<perrito666> restore instead is mostly written with "they asked me to quick patch this as restore is being replaced rsn"
<katco> lol yeah
<katco> perrito666: btw sent that script out
 * katco goes for a walk
<mup> Bug #1606991 opened: TestWaitMinionNeverBecomeMinion wrong minion <ci> <intermittent-failure> <regression> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1606991>
<mup> Bug #1606991 changed: TestWaitMinionNeverBecomeMinion wrong minion <ci> <intermittent-failure> <regression> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1606991>
<babbageclunk> frobware: quick review please? https://github.com/juju/gomaasapi/pull/55
<frobware> babbageclunk: should Devices() return a copy or the pointers/refs?
<babbageclunk> frobware: Yeah, I wondered that - I followed the example of .Nodes(). It's more powerful, and since it's a test server returning the actual thing seemed ok.
<frobware> babbageclunk: before continuing would like to understand the rationale for not returning copies.
<babbageclunk> frobware: I did that for consistency with the other similar methods (Nodes, Files, OwnedNodes, IPAddresses). I don't need it in this case, but it's conceivable that a future test might.
<frobware> babbageclunk: just wondering if it makes test code brittle w.r.t. things changing underneath you
<babbageclunk> frobware: Not sure whether it's obvious, but only the tests use those methods - the live code only interacts with the server over HTTP.
<frobware> babbageclunk: so the immediate question... why does it need exporting?
<babbageclunk> frobware: So that the test that I'm writing (for ReleaseContainerAddresses) can monkey with the devices to set them up.
<frobware> babbageclunk: ah, I see
<babbageclunk> frobware: (Sorry, should probably put that on the PR.)
<babbageclunk> frobware: Added it.
 * babbageclunk should be home already, whoops bye!
<mup> Bug #1607000 opened: Immortal zombie Juju controller on MAAS <landscape> <juju-core:New> <https://launchpad.net/bugs/1607000>
<mup> Bug #1607044 changed: WorkerSuite.TestUpdatesAfterPeriod timed out <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607044>
<mup> Bug #1607044 opened: WorkerSuite.TestUpdatesAfterPeriod timed out <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607044>
<natefinch> anyone seen a problem where c.Mkdir panics on windows?
<natefinch> nvm, looks like it's just cruft left from an earlier run... but evidently the random number generator on windows always produces the same random string for gocheck's mkdir
<natefinch> which uh... makes it somewhat less useful
<bodie__> Anyone know when Dimiter is off vacation?  He wanted me to talk to him at Gophercon, but I wasn't able to catch him.
<perrito666> bodie__: no clue, sorry
<katco> bodie_: he was back, but is currently at a sprint. he should be back next week
<bodie__> Thanks katco :)
<katco> bodie__: hth
<thumper> hmm...
<thumper> bootstrapping lxd and having that feeling that it won't start because my squid-deb-proxy is not up to date...
 * thumper waits to see
<ahasenack> thumper: thanks for helping troubleshoot https://bugs.launchpad.net/landscape-client/+bug/1605002
<thumper> ahasenack: no worries
<thumper> was the subordinate hook still running?
<ahasenack> thumper: it was your comment about the subordinate apparently still doing stuff, and the "machine lock", that led me in the right direction
<ahasenack> thumper: it was hung, like in an infinite loop
<thumper> right
<thumper> it got me thinking about how to surface that
<ahasenack> so it was running according to juju, but not doing a thing really
<thumper> if a hook has been running for a "long" time
<thumper> we should highlight somehow
<ahasenack> yeah, don't know
<ahasenack> timeouts are tricky
 * thumper nods
<menn0> redir: nice catch on my dumb test failure... embarrassing
<redir> go test caught it
<redir> menn0: ^
<menn0> redir: I changed from using the full UUID to using the suffix and forgot to re-run the tests
<redir> figured that or it was a WIP
<bdx> bdx
<bdx> bdx
<katco> bdx: are you a pokemon?
<redir> ok buck
<bdx> omg lol
<bdx> whoops
<bdx> must be
<menn0> thumper: no leads call today right?
<menn0> thumper: also, can I have a chat with you regarding migration status pls?
<thumper> sure
<thumper> menn0: I don't think the leads call is on
<mup> Bug #1605986 changed: Creating container: can't get info for image 'ubuntu-trusty' <oil> <oil-2.0> <juju-core:Invalid> <https://launchpad.net/bugs/1605986>
<thumper> menn0: 1:1 for chat?
<menn0> thumper: yep
<mup> Bug #1607109 opened: juju needs to warn about lxd on trusty earlier <juju-core:Triaged> <https://launchpad.net/bugs/1607109>
<menn0> perrito666: ping
<perrito666> menn0: pong
<menn0> perrito666: so i'm trying to make another juju user a controller admin (for testing something) and I can't make it work
<menn0> I did "juju add-user foo"
<menn0> did the registration thing (in another Linux account)
<menn0> then juju grant user --acl=write controller
<perrito666> controller admin not yet landed, trying to now actually
<menn0> ahhh
<menn0> that explains it then
<menn0> never mind
<menn0> perrito666: is there also something coming to show the currently assigned ACLs?
<perrito666> menn0: enoparse
<menn0> perrito666: where can you see a list of users with access to a model and the ACLs granted to them?
<perrito666> menn0: I "think" there is something in place, but I don't recall having "add more stuff" on my plate, although now that I try to remember, what is in place might be to get info about one user only
<menn0> perrito666: it seems like "juju shares" is close but it doesn't show the ACLs.
<menn0> and the name doesn't really make sense (the word "share" is only used there)
<perrito666> menn0: it was used elsewhere before
<menn0> perrito666: I suspected it was but it might need renaming now
<perrito666> anyone found mgo failing a lot of tests with socket left in blah state?
<menn0> perrito666: that usually happens when you've forgotten to close a collection or State
<perrito666> menn0: tx a lot, very possible
<perrito666> meh, not that
<axw> thumper: I need to go help with kids, then will be out for a while at school - if you're still stuck later I'll see what I can do to help. we should be updating the agent-version in bootstrap config in environs/bootstrap/bootstrap.go
<thumper> yeah, that's where I'm poking around
<thumper> menn0: I have a fix for the bug, but need thoughts on how to test
<thumper> menn0: got a few minutes?
<menn0> thumper: yep
<menn0> 1:1?
<thumper> yeah
#juju-dev 2016-07-28
<redir> axw: can I borrow you to clarify a couple things?
<rick_h_> thumper: do you have a sec to peek at https://github.com/juju/utils/pull/226/files and ship if it's ok today please?
<rick_h_> sorry, that's not the RB link but close enough
<thumper> rick_h_: ack
<redir> bbiab
<thumper> menn0: I found out why that ARCH patch was there...
<thumper> menn0: fails on s390
<thumper> without it
<menn0> thumper: argh right
<thumper> proposing fix now
 * menn0 doesn't really understand why 
<menn0> but fair enough
<thumper> menn0: http://reviews.vapour.ws/r/5314/
<menn0> thumper: should that setup live in SetUpTest or will that cause other problems?
<menn0> thumper: never mind... looking at the code it's only done in one other place
<thumper> my gut tells me it may cause other problems
<menn0> better to just do it when necessary
<menn0> thumper: ship it with one suggestion
 * thumper looks
<thumper> ack
<axw> redir: sorry was out before. if you're still around I'm here now
<axw> thumper: I don't really understand why your change was needed. we're already setting agent-version earlier in the function -- why doesn't that work?
<axw> thumper: ahh, because building the tools bumps the version?
<thumper> yep
<redir> axw: np, I'll just dig in tomorrow.
<mup> Bug #1607161 opened: generate-image metadata command help needs updating <helpdocs> <juju-core:Triaged> <https://launchpad.net/bugs/1607161>
<mup> Bug #1607161 changed: generate-image metadata command help needs updating <helpdocs> <juju-core:Triaged> <https://launchpad.net/bugs/1607161>
<mup> Bug #1607161 opened: generate-image metadata command help needs updating <helpdocs> <juju-core:Triaged> <https://launchpad.net/bugs/1607161>
<menn0> thumper: apiserver changes for migration status in the FullStatus response
<menn0> http://reviews.vapour.ws/r/5315/
 * thumper looks
<mup> Bug #1607170 opened: enable-ha requires '-m controller' (i.e. doesn't do the 'right thing' by default). <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1607170>
<menn0> thumper: by error state do you mean ABORT?
<menn0> if so yes it is (well actually ABORTDONE is the final, terminal phase)
<thumper> menn0: chat?
<thumper> abort done is only set after aborting is finished?
<thumper> I was thinking about an intermediate error state
<menn0> thumper: sure
<menn0> going to to 1:1
<mup> Bug #1605986 opened: Creating container: can't get info for image 'ubuntu-trusty' <oil> <oil-2.0> <juju-core:Triaged> <https://launchpad.net/bugs/1605986>
<ejat> <ejat> 2016-07-28 08:04:34 INFO config-changed subprocess.CalledProcessError: Command '['/opt/opendaylight-karaf/bin/client', '-r', '20', '-h', 'localhost', '-a', '8101', '-u', 'karaf', 'feature:list']' returned non-zero exit status 1
<ejat> odl-controller charm
<ejat> anyone can help me ?
<babbageclunk> ejat: If you run that command on the machine, do you get any useful output?
<ejat> u mean this : /opt/opendaylight-karaf/bin/client', '-r', '20', '-h', 'localhost', '-a', '8101', '-u', 'karaf', 'feature:list
<ejat> or '['/opt/opendaylight-karaf/bin/client', '-r', '20', '-h', 'localhost', '-a', '8101', '-u', 'karaf', 'feature:list']'
<babbageclunk> ejat: yes, but without the quotes and commas.
<ejat> /opt/opendaylight-karaf/bin/client -r 20 -h localhost -a 8101 -u karaf feature:list <-- ?
<babbageclunk> ejat: yes
<ejat> $ /opt/opendaylight-karaf/bin/client -r 20 -h localhost -a 8101 -u karaf feature:list
<ejat> client: JAVA_HOME not set; results may vary
<ejat> Logging in as karaf
<ejat> 175 [pool-2-thread-2] WARN org.apache.sshd.client.keyverifier.AcceptAllServerKeyVerifier - Server at localhost/127.0.0.1:8101 presented unverified key:
<ejat> Error executing command: java.lang.NullPointerException
<ejat> juju not set the java_home ?
<babbageclunk> ejat: It sounds more like there's a problem with the key, but I don't know anything about opendaylight.
<babbageclunk> ejat: You might be better asking in #juju - there are more openstack people and charmers there.
<ejat> ok thanks babbageclunk
<babbageclunk> ejat: good luck!
<ejat> gratitude
<babbageclunk> voidspace: welcome back! Just around for 2 days?
<babbageclunk> frobware, voidspace: a couple of reviews if you have a moment? http://reviews.vapour.ws/r/5312/ and http://reviews.vapour.ws/r/5317/
<frobware> babbageclunk: will get to it later; syncing with macgreagoir
<babbageclunk> frobware: cool, thanks - no big rush
<mup> Bug #1607000 changed: Immortal zombie Juju controller on MAAS <landscape> <juju-core:Invalid> <https://launchpad.net/bugs/1607000>
<mup> Bug #1607303 opened: Help text for juju kill-controller needs to point to unregister <landscape> <juju-core:New> <https://launchpad.net/bugs/1607303>
<frobware> macgreagoir: combined patch updated - https://github.com/frobware/juju/tree/master-lp1566801-strike3%2Blp1602054
<macgreagoir> frobware: FTR, woo hoo!
<voidspace> babbageclunk: yeah, just here for a couple of days
<voidspace> babbageclunk: off to cornwall with my dad for a week after that
<babbageclunk> voidspace: nice
<voidspace> babbageclunk: looks like everything changed whilst I was away...
<babbageclunk> voidspace: of course!
<voidspace> babbageclunk: your doing no doubt...
<voidspace> babbageclunk: I got a microbit from EuroPython!
<voidspace> babbageclunk: I haven't played with it yet though
<voidspace> babbageclunk: I'm currently trying to fix MAAS (again)
<voidspace> babbageclunk: a recent update broke my install
<babbageclunk> voidspace: stink. I recently redid my MAASes using frobware's scripts - took a bit of time to get set up, but now I can add/remove nodes really easily.
<voidspace> babbageclunk: ah cool - got a link?
<voidspace> babbageclunk: I'll try and get that working
<babbageclunk> voidspace: In KVMs, or hardware?
<voidspace> babbageclunk: KVM
<babbageclunk> I hacked this one up: https://github.com/frobware/ubuntu-fan-hacks
<voidspace> babbageclunk: have you pushed your changes anywhere, or are they babbageclunk specific?
<frobware> babbageclunk, voidspace: I should really update the docs there ....
<babbageclunk> ...to get the initial VM, then installed maas manually.
<babbageclunk> then used https://github.com/frobware/kvm-maas with minimal hacks to add nodes.
<frobware> babbageclunk: what changes did you need for kvm-maas?
<babbageclunk> frobware: checking - only cut down the disk sizes a bit - I couldn't get it to respect the env vars. Nothing that needs pushing.
<frobware> babbageclunk: hmm - I wonder why
<babbageclunk> voidspace: The only tricky thing with the initial maas controller setup is that you need a hacked cloud image from frobware that enables cloud-init to do network setup in a network where there's no dhcp
<voidspace> babbageclunk: ah, right
<frobware> babbageclunk: so I'm reliably informed that that's now the default
<babbageclunk> voidspace: ok - so that might be fixed!
<frobware> voidspace: if that's not true there's always my enhanced version: http://178.62.20.154/~aim/xenial-server-cloudimg-amd64-disk1.img
<voidspace> frobware: babbageclunk: cool, thanks - will have a play
<babbageclunk> voidspace: hang on, I'll push my hacked (and now badly named) ubuntu-fan-hacks branches - I've got one for maas2 and one for maas1.9
<mattyw> frobware, hey there, are you OCR today?
<frobware> mattyw: yes
<mattyw> frobware, I have a tiny one for you. http://reviews.vapour.ws/r/5319/. I'm about to got afk but if you had any questions I'll be back in an hour or so
<mattyw> frobware, but if you're busy - it's not that urgent :)
<frobware> mattyw: me too - lunch is required
<mattyw> frobware, I'm planning on combining it with a cycle as the sun is out :)
<frobware> babbageclunk: running VIRT_DISK_SIZE=12 ./add-node disk-size-tes maas19-net maas19
<frobware> babbageclunk: gives me: http://178.62.20.154/~aim/disk-test.png
<frobware> mattyw: heh, sounds fab!
<babbageclunk> frobware: yeah, I may have been getting confused by something VMM was telling me - I think that's working fine.
<babbageclunk> voidspace: https://github.com/babbageclunk/ubuntu-fan-hacks/tree/maas-2-vm
 * frobware really really really goes for lunch. Really. Almost. Really.
<voidspace> babbageclunk: thanks
<babbageclunk> voidspace: https://github.com/babbageclunk/ubuntu-fan-hacks/tree/maas-2-vm
<babbageclunk> voidspace: https://github.com/babbageclunk/ubuntu-fan-hacks/commit/5f5b736308bfdfb15af84fe7986d252b7ef0990b
<babbageclunk> voidspace: That should show the kinds of things you're likely to need to change.
<voidspace> babbageclunk: cool
<babbageclunk> voidspace: Still need to install maas manually and click around in the admin a little bit, but it's a bit simpler.
<voidspace> babbageclunk: I'm just doing the mass install at the moment
<voidspace> frobware: what's the right ppa for the latest and greatest maas
<voidspace> frobware: ah, experimental3
<mup> Bug #1607347 opened: Password for juju-gui not showing up after a change <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1607347>
<mup> Bug #1607362 opened: TestMachinerMachineNotFound agent should be terminated <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607362>
<mup> Bug #1607362 changed: TestMachinerMachineNotFound agent should be terminated <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607362>
<macgreagoir> frobware: I think I'm showing --bind working here (with your latest eth0 patch and an lp:1603473-like net config): https://pastebin.canonical.com/161866/
<frobware> macgreagoir: yes, looks good
<mup> Bug #1607362 opened: TestMachinerMachineNotFound agent should be terminated <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607362>
<mup> Bug #1607365 opened: PresenceSuite.TestScale reported false <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607365>
<mup> Bug #1607368 opened: bootstrapSuite.TestBootstrapUploadTools model "foo" of type dummy does not support instances running on "ppc64el" <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607368>
<mup> Bug #1607365 changed: PresenceSuite.TestScale reported false <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607365>
<mup> Bug #1607368 changed: bootstrapSuite.TestBootstrapUploadTools model "foo" of type dummy does not support instances running on "ppc64el" <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607368>
<mup> Bug #1607365 opened: PresenceSuite.TestScale reported false <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607365>
<mup> Bug #1607368 opened: bootstrapSuite.TestBootstrapUploadTools model "foo" of type dummy does not support instances running on "ppc64el" <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607368>
<frobware> katco: standup?
<frobware> rick_h_: you doing standup today?
<natefinch> sinzui, mgz: it looks like we're building the CI tests with an old version of go?
<sinzui> natefinch: ouch, I will look into it. Which test/host?
<natefinch> sinzui: http://juju-ci.vapour.ws:8080/job/github-merge-juju-utils/125/console
<sinzui> natefinch: not ci then
<natefinch> sinzui: oh, right, landing bot, sorry
<sinzui> natefinch: found the problem. Not every merge job has the path set right. I can fix this job in a minute. I need to review about 8 jobs though
<natefinch> sinzui: ok... glad to help find problems :)
<sinzui> natefinch: replaying the job with the right juju http://juju-ci.vapour.ws:8080/job/github-merge-juju-utils/126/console
<natefinch> sinzui: yay, thanks.  And glad that we'll be using the right version of Go.  I intentionally removed the guardrails for go 1.2 vs 1.6 in this change.... we really just won't build with go 1.2 anymore.
<babbageclunk> Is there anything that logs the actual userdata we send when starting an instance (particularly in maas)?
<sinzui> babbageclunk: natefinch are your bugs the same: bug 1604474 and bug 1605777
<mup> Bug #1604474: Juju 2.0-beta12  userdata execution fails on Windows <azure-provider> <juju2.0> <oil> <oil-2.0> <windows> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1604474>
<mup> Bug #1605777: munna/juju2 is too slow to deploy windows images <ci> <maas-provider> <regression> <windows> <juju-ci-tools:Triaged> <juju-core:Triaged by 2-xtian> <https://launchpad.net/bugs/1605777>
<babbageclunk> sinzui, natefinch - I think they are, yes.
<babbageclunk> Good spotting!
<babbageclunk> sinzui: Should I just mark 5777 as a duplicate of 4474? natefinch's quite far along with it, as far as I can see.
<sinzui> babbageclunk: yep.
<mup> Bug #1605777 changed: munna/juju2 is too slow to deploy windows images <ci> <maas-provider> <regression> <windows> <juju-ci-tools:Triaged> <juju-core:Triaged by 2-xtian> <https://launchpad.net/bugs/1605777>
<voidspace> natefinch: ping - some advice
<voidspace> natefinch: if you have the mental bandwidth (juju advice - not life advice, don't worry)
<voidspace> for that I go to babbageclunk
<babbageclunk> voidspace: for *bad* life advice
<perrito666> uh uh uh, bad life advice, I can do that, who is in need?
<voidspace> babbageclunk: well, naturally...
<voidspace> perrito666: hah, thanks - if the advice from babbageclunk ever turns out to not be bad enough I'll turn to you
<voidspace> perrito666: babbageclunk: frobware: juju bug 1599972
<mup> Bug #1599972: juju2 beta11 unable to parse PORT during a maas bootstrap <2.0> <conjure> <maas-provider> <juju-core:In Progress by mfoord> <https://launchpad.net/bugs/1599972>
<voidspace> perrito666: babbageclunk: frobware: the bug is that specifying the MAAS server in the form cloud.lol:5240 doesn't work
<voidspace> with an error that it's a malformed url
<voidspace> perrito666: babbageclunk: frobware: specifying instead http://cloud.lol:5240 however works
<voidspace> do you think that is worth fixing or should be a "won't fix"...
<frobware> voidspace: is 'cloud.lol:5240' actually a URL?
<voidspace> frobware: well it is if you have cloud.lol in /etc/hosts
<voidspace> which I do
<voidspace> frobware: because ip:port worked so I needed a domain
<frobware> voidspace: I'm really asking do you need http:// or ftp:// or ...
<voidspace> frobware: if you don't specify the scheme but do specify a port then the url is "incorrectly" parsed
<frobware> voidspace: so GIGO
<voidspace> frobware: yep
<stokachu> maas/172.0.0.1:5240 doesn't work
<voidspace> stokachu: ah, it worked for me - trying again
<perrito666> voidspace: cloud.lol is a hostname
<perrito666> http://cloud.lol is a url
<perrito666> voidspace: cloud.lol:port is incredibly ambiguous as it could be both
<stokachu> oh maybe I'm mistaken
<stokachu> maas/cloud.lol:5240 is what doesn't work
<stokachu> and you're saying that we should be using maas/http://cloud.lol:5240?
<voidspace> stokachu: yep
<voidspace> stokachu: what I'm *really* saying is that if you don't specify a malformed url you won't get the "malformed url" error...  ;-)
<voidspace> stokachu: but I was trying not to be a dick about it
<perrito666> stokachu: voidspace I was just clarifying why the error makes sense, I lack context for more detail
<voidspace> not trying very hard though... :-p
<voidspace> stokachu: although cloud.lol:5240 is what I would have tried as well
<frobware> voidspace: in the same vein that $(firefox cloud.lol:5240) would just work, should juju?
<voidspace> frobware: it would be nice if it did
<perrito666> frobware: firefox is lenient, $(firefox agoogleableterm) will also work
<voidspace> frobware: I don't think it's a critical bug that it doesn't
<frobware> voidspace: nope, but I do like DWIM
<perrito666> this said, we should be a bit nicer in how we treat urls, if we expect a url and its not a url we could try a hostname
<voidspace> perrito666: frobware: on error just adding http:// and retrying would work - if it still fails return the original error
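[editor's note] The ambiguity perrito666 describes is concrete in Go's net/url: `url.Parse("cloud.lol:5240")` succeeds but treats "cloud.lol" as the scheme and "5240" as opaque data (and "172.0.0.1:5240" fails outright, since a scheme cannot start with a digit). voidspace's retry-with-http:// idea can be sketched like this (`parseEndpoint` is a hypothetical helper, not juju's actual parsing code):

```go
package main

import (
	"fmt"
	"net/url"
)

// parseEndpoint accepts either a full URL ("http://cloud.lol:5240") or a
// bare host:port ("cloud.lol:5240"). When url.Parse yields no usable
// Host, retry with "http://" prefixed; if that also fails, return the
// original error so the user sees what was actually wrong.
func parseEndpoint(s string) (*url.URL, error) {
	u, err := url.Parse(s)
	if err == nil && u.Host != "" {
		return u, nil
	}
	// "cloud.lol:5240" parses "successfully" with Scheme="cloud.lol"
	// and an empty Host; assume http and try again.
	if u2, err2 := url.Parse("http://" + s); err2 == nil && u2.Host != "" {
		return u2, nil
	}
	if err != nil {
		return nil, err
	}
	return nil, fmt.Errorf("cannot parse %q as a URL or host:port", s)
}

func main() {
	for _, s := range []string{"cloud.lol:5240", "http://cloud.lol:5240", "172.0.0.1:5240"} {
		u, err := parseEndpoint(s)
		if err != nil {
			fmt.Println(s, "->", err)
			continue
		}
		fmt.Println(s, "-> host", u.Host)
	}
}
```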
<natefinch> voidspace: sorry, was afk.
<natefinch> sinzui:  are we sure that #1605777 is the same as the bug I fixed?  The bug I fixed was not that windows was slow - it was entirely broken
<mup> Bug #1605777: munna/juju2 is too slow to deploy windows images <ci> <maas-provider> <regression> <windows> <juju-ci-tools:Triaged> <juju-core:Triaged by 2-xtian> <https://launchpad.net/bugs/1605777>
<mup> Bug #1607457 opened: juju migrate fails to find model after migration <migration> <juju-core:Triaged> <https://launchpad.net/bugs/1607457>
<sinzui> natefinch: The bug does not say windows is slow. It says juju 2 cannot deploy a windows machine but 1.25 can. The substrate is known to be slow
<natefinch> sinzui: I don't know what munna is, I assumed it was a windows machine in CI
<natefinch> sinzui: regardless, it sounds like it's completely different from the bug that I fixed, which was that the userdata script we run on windows couldn't connect to the controller to download the jujud binary, because we didn't support a ciphersuite that windows supported.
<sinzui> natefinch: okay. The issue we see is that the machine agent never comes up. Looks like a cloud init issue
<natefinch> sinzui: that *could* be the same problem, then, since it's the userdata script failing.  But it would have been a new failure around July 7th
<natefinch> That's when we broke windows with the thing I just fixed
<sinzui> natefinch: We first saw the issue on July 15 http://reports.vapour.ws/releases/issue/5789759d749a5616aa83b491 We stopped testing on the host last week. The previous failures were timeouts, but the machine was not checked
<sinzui> natefinch: The last pass of the test for juju 2 was July 7! the first failure is on the 8th http://reports.vapour.ws/releases/4121/job/maas-1-9-deploy-win2012hvr2-amd64/attempt/624
<natefinch> sinzui: sounds suspicious :)  Let's try it again with this fix and see how it goes.
<natefinch> / Fields holds a map from attribute name to information about that attribute.
<natefinch> ....so why isn't it called Attributes? :/
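[editor's note] The comment natefinch quotes describes a map from attribute name to per-attribute validation info. A stdlib-only sketch of that shape (illustrative; juju/schema's real `Fields`/`Checker` API differs in detail):

```go
package main

import "fmt"

// Checker validates a single attribute value.
type Checker func(v interface{}) error

// Fields maps attribute name to the checker for that attribute --
// the "map from attribute name to information about that attribute"
// the doc comment describes.
type Fields map[string]Checker

// Validate checks every supplied attribute against its checker,
// rejecting unknown names and values of the wrong shape.
func (f Fields) Validate(attrs map[string]interface{}) error {
	for name, v := range attrs {
		check, ok := f[name]
		if !ok {
			return fmt.Errorf("unknown attribute %q", name)
		}
		if err := check(v); err != nil {
			return fmt.Errorf("attribute %q: %v", name, err)
		}
	}
	return nil
}

func isString(v interface{}) error {
	if _, ok := v.(string); !ok {
		return fmt.Errorf("expected string, got %T", v)
	}
	return nil
}

func main() {
	f := Fields{"name": isString}
	fmt.Println(f.Validate(map[string]interface{}{"name": "glance"}))
	fmt.Println(f.Validate(map[string]interface{}{"name": 42}))
}
```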
<perrito666> Bbl
<redir> brb reboot
<thumper> natefinch: ping
<thumper> natefinch: re: https://github.com/juju/utils/pull/226#event-737900037 I saw this fail to land yesterday, and don't see a change in code, did the landing bot change?
<thumper> sinzui: ^^
<natefinch> thumper: yep
<thumper> was it using an older go?
<natefinch> thumper: yep
<thumper> ok
<natefinch> thumper: so, win-win... it landed and now we have a canary for jobs that use a too-old version of go.
<natefinch> I had intentionally removed the guardrails for some go 1.6 code in there, since we officially support it everywhere now.
<sinzui> natefinch: I updated all the merge jobs to include go 1.6 in the path. We wont see any go 1.2 in the current jobs
<natefinch> sinzui: huzzah
<natefinch> man I hate middle-click paste
<natefinch> simple PR for anyone - https://github.com/juju/schema/pull/13
<redir> anyone here know anything about the model-config-tree details?
<redir> natefinch-afk: LGTM, FWIW
<mup> Bug #1607557 opened: wrong JSON field for storage endpoint in API <juju-core:New> <https://launchpad.net/bugs/1607557>
<menn0> thumper: additional MigrationMaster.SetStatusMessage API (small and boring): http://reviews.vapour.ws/r/5324/
<redir> axw: same bat station?
#juju-dev 2016-07-29
<redir> couple easy ones: http://reviews.vapour.ws/r/5325/
<redir> http://reviews.vapour.ws/r/5323/
<redir> one last easy straggler: http://reviews.vapour.ws/r/5323/
<menn0> thumper: TestHostedModelWorkers is sooo unreliable on Windows
<menn0> I might take a look if I get a chance later on
<thumper> ok
<natefinch> pretty simple review anyone? https://github.com/juju/schema/pull/13
<natefinch> redir: thanks for the review btw
<menn0> thumper: I just assigned you this as I think you knew what it was :-p https://bugs.launchpad.net/juju-core/+bug/1607599
<mup> Bug #1607599: new unit ends up on wrong machine after migration <migration> <juju-core:New for thumper> <https://launchpad.net/bugs/1607599>
<thumper> ok, no problem
<mup> Bug #1607599 opened: new unit ends up on wrong machine after migration <migration> <juju-core:New for thumper> <https://launchpad.net/bugs/1607599>
<menn0> thumper: migrationmaster now does status reporting: http://reviews.vapour.ws/r/5326/
<redir> natefinch: np
<mup> Bug #1501569 changed: MachineSuite failed <ci> <intermittent-failure> <unit-tests> <juju-core:Won't Fix> <https://launchpad.net/bugs/1501569>
<mup> Bug #1501569 opened: MachineSuite failed <ci> <intermittent-failure> <unit-tests> <juju-core:Won't Fix> <https://launchpad.net/bugs/1501569>
<mup> Bug #1501569 changed: MachineSuite failed <ci> <intermittent-failure> <unit-tests> <juju-core:Won't Fix> <https://launchpad.net/bugs/1501569>
<mup> Bug #1582881 changed: destroying a subordinate returns an error but seems to work <juju-core:Expired> <https://launchpad.net/bugs/1582881>
<mup> Bug #1607601 opened: There is no way to see assigned user ACLs <juju-core:Triaged> <https://launchpad.net/bugs/1607601>
<mup> Bug #1607601 changed: There is no way to see assigned user ACLs <juju-core:Triaged> <https://launchpad.net/bugs/1607601>
<mup> Bug #1582881 opened: destroying a subordinate returns an error but seems to work <juju-core:Expired> <https://launchpad.net/bugs/1582881>
<mup> Bug #1582881 changed: destroying a subordinate returns an error but seems to work <juju-core:Expired> <https://launchpad.net/bugs/1582881>
<mup> Bug #1600301 opened: cmd/jujud/agent MachineSuite.TestHostedModelWorkers fails because compute-provisioner never really starts <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1600301>
<mup> Bug #1607601 opened: There is no way to see assigned user ACLs <juju-core:Triaged> <https://launchpad.net/bugs/1607601>
<mup> Bug #1607608 opened: "juju shares" doesn't make sense any more <juju-core:New> <https://launchpad.net/bugs/1607608>
<mup> Bug #1600301 changed: cmd/jujud/agent MachineSuite.TestHostedModelWorkers fails because compute-provisioner never really starts <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1600301>
<mup> Bug #1607608 changed: "juju shares" doesn't make sense any more <juju-core:New> <https://launchpad.net/bugs/1607608>
<blahdeblah> Anyone know how I can work around this error?  ERROR cannot add unit 1/1 to service "glance": cannot add unit to service "glance": inconsistent state
<blahdeblah> Seems to be coming from here: https://github.com/juju/juju/blob/1.25/state/service.go#L899
<blahdeblah> It happened when I tried to "juju add-unit glance --to lxc:21"
<mup> Bug #1600301 opened: cmd/jujud/agent MachineSuite.TestHostedModelWorkers fails because compute-provisioner never really starts <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1600301>
<mup> Bug #1607608 opened: "juju shares" doesn't make sense any more <juju-core:Triaged> <https://launchpad.net/bugs/1607608>
<mup> Bug #1607611 opened: User commands "See also" help sections need work <juju-core:New> <https://launchpad.net/bugs/1607611>
<menn0> axw: you should really stop biting yourself
<axw> menn0: too bad you can't edit launchpad comments ;p
<axw> that's two people now
<menn0> axw: https://bugs.launchpad.net/juju-core/+bug/1600301/comments/4
<mup> Bug #1600301: cmd/jujud/agent MachineSuite.TestHostedModelWorkers fails because compute-provisioner never really starts <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1600301>
<menn0> haha
<blahdeblah> axw: Why are you biting yourself? https://www.youtube.com/watch?v=W_bQ_vMtnbo
<axw> blahdeblah: :p
<mup> Bug #1607620 opened: vsphere provider doesn't use region/endpoint from clouds.yaml <juju-core:Triaged> <https://launchpad.net/bugs/1607620>
<menn0> axw: ping
<axw> menn0: pong
<menn0> axw: I've been looking at bug 1600301
<mup> Bug #1600301: cmd/jujud/agent MachineSuite.TestHostedModelWorkers fails because compute-provisioner never really starts <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1600301>
<menn0> and I think I see what's happening
<menn0> it's a miracle it even works at all
<axw> menn0: was I right with my comment about multi-model? or is it something else?
<menn0> the only reason it works is that the test spins in a tight loop and just happens to see the compute-provisioner come up just long enough before it dies with a "model is not prepared" error
<menn0> on windows it fails a lot more because things are slower or the timings are different
<menn0> if I reduce the test's loop frequency it fails reliably
<menn0> axw: there appears to be some stuff in there to support multi-model
<menn0> axw: maybe there just needs to be some extra setup done in the dummy provider... right now the test just creates the hosted model in state with no other setup
<menn0> axw: I know very little about the dummy provider
<axw> menn0: ok. the code's a bit inscrutable, but it looks like we only enter things into the "state" map during bootstrap. and if there's nothing in there for a model UUID, we get "model is not prepared"
<axw> hence my comment. I may be missing something though
<menn0> axw: yep that's what I had figured out too
 * menn0 checks if there's another way to get something into dummy.state
<menn0> axw: failing that, what would you say to having this test - and the other 2 or 3 that use a similar structure - skipped
<menn0> axw: i'll keep working on it but that seems like a reasonable short term solution
<axw> menn0: seems fine. we need a better way of telling that workers are started
<menn0> axw: I'll bring it up with will
<menn0> axw: he last touched these tests
<axw> menn0: we'll find out pretty quick if the compute provisioner isn't working ;)
<menn0> axw: I wonder if it's enough to just statically check the manifolds config to ensure the workers we expect are listed
<menn0> axw: and then rely on other tests (featuretests and CI) if something isn't wired up
<axw> menn0: yeah, I think that'd probably be enough. it's not something that's likely to break often - and if it did, it would be very obvious
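[editor's note] menn0's proposed static check boils down to comparing expected worker names against the keys of a manifolds map, without starting anything. A minimal sketch (worker names and the `checkManifolds` helper are illustrative, not juju's actual manifold wiring):

```go
package main

import "fmt"

// checkManifolds reports which expected workers are missing from a
// manifolds map. Because it only inspects configuration, it cannot flake
// on timing the way a start-the-workers-and-poll test can.
func checkManifolds(manifolds map[string]interface{}, expected []string) []string {
	var missing []string
	for _, name := range expected {
		if _, ok := manifolds[name]; !ok {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	manifolds := map[string]interface{}{
		"compute-provisioner": struct{}{},
		"firewaller":          struct{}{},
	}
	fmt.Println(checkManifolds(manifolds, []string{"compute-provisioner", "firewaller"}))
	fmt.Println(checkManifolds(manifolds, []string{"compute-provisioner", "undertaker"}))
}
```

Whether a worker actually starts would then be left to feature tests and CI, as discussed above.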
<axw> well, I think it would? maybe not. might be wondering why the machine never gets provisioned...
<axw> menn0: the other thing we could do is stop using the dummy provider...
<menn0> axw: but what then?
<menn0> for this test I mean
<axw> menn0: something more programmable. it wouldn't need to do much really - just as long as EnvironProvider.Open works
<menn0> axw: here's the PR that skips the 3 flaky tests: http://reviews.vapour.ws/r/5328/
<menn0> axw: I'm emailing Will now regarding a better fix.
<axw> menn0: ok, ta
<blahdeblah> anastasiamac: So, re: juju add-unit, this failed on 8 services across 2 different nodes, but retrying them all again worked! \o/
<axw> menn0: shipit, thanks
<anastasiamac> blahdeblah: \o/
<blahdeblah> Which is kinda weird, but yay anyway
<blahdeblah> I was only ever adding one unit at a time
<anastasiamac> blahdeblah: worthy of a bug :) and the reference to the code that you've pasted is brilliant!
<anastasiamac> plz add it to the bug too ;)
<blahdeblah> For the record, all I did was pattern match the error message + guess.  I find go code incredibly hard to read, and couldn't trace the logic of it to save my life.
<blahdeblah> Is it really worth a bug report when it's not likely to get fixed?
<anastasiamac> blahdeblah: it's worth a bug and i think it's some kind of race :)
<blahdeblah> OK
<blahdeblah> will do
<anastasiamac> \o/
<menn0> axw: merging and i've emailed Will
<menn0> axw: I need to EOD
<menn0> axw: if the merge fails can you retry pls?
<axw> menn0: no worries. have a good weekend
<menn0> axw: you too, cheers
<frobware> dooferlad: you up? about?
<mup> Bug #1607689 opened: Cannot launch lxd container in MAAS provider with local image mirror <juju-core:New> <https://launchpad.net/bugs/1607689>
<TheMue> morning
<frobware> why oh why does azure take so long to tear itself down
<mup> Bug #1607727 opened: JUJU_GUI_SIMPLESTREAMS_URL cannot be specified except by environment variable <juju-core:New> <https://launchpad.net/bugs/1607727>
<frobware> dooferlad, macgreagoir: PTAL @ http://reviews.vapour.ws/r/5330/
<frobware> fwereade_: looks like your OCR - any opinion on http://reviews.vapour.ws/r/5330/ welcome...
<fwereade_> frobware, ack
<frobware> fwereade_: ty - needs +2 from somebody else on sapphire
<fwereade_> frobware, yeah, I can't pretend to expertise there ;)
<frobware> fwereade_: it's sooooo much better than what we have. in fact, it appears to work! OoO
<fwereade_> frobware, nice :D
<macgreagoir> frobware: I'm retesting with my lp:1603473 -based env.
<frobware> macgreagoir: is this different to the testing you were doing yesterday?
<macgreagoir> No, just now with the pr, really.
<macgreagoir> Any worth?
<mup> Bug #1607727 changed: JUJU_GUI_SIMPLESTREAMS_URL cannot be specified except by environment variable <juju-core:Won't Fix> <https://launchpad.net/bugs/1607727>
<mup> Bug #1607689 changed: Cannot launch lxd container in MAAS provider with local image mirror <juju-core:New> <https://launchpad.net/bugs/1607689>
<fwereade_> frobware, can we reasonably drop the e/n/i and e/n/i-juju globals and supply them explicitly in this CL?
<fwereade_> frobware, in good healthy ioc style ;p
<mup> Bug #1607749 opened: juju bootstrap fails with MAAS trunk and juju beta12 <juju-core:New> <https://launchpad.net/bugs/1607749>
<frobware> fwereade_: we can - I'm just wary that a) I'm away for two weeks and b), unsatisfactorily w.r.t. your comment, I'm trying to close a few other PRs too.
<frobware> fwereade_: it is just wrong
<frobware> fwereade_: I very deliberately made -- func raiseJujuNetworkInterfacesScript(oldInterfacesFile, newInterfacesFile string) -- take arguments. :)
<frobware> fwereade_: let me come back to this once I've closed on a few other things.
<fwereade_> frobware, fair enough, but please do come back to it: bug#, or card, or whatever's most likely to ensure it gets dealt with as soon as practically possible
<frobware> fwereade_: https://canonical.leankit.com/Boards/View/122969419/123547832
<fwereade_> frobware, <3
<fwereade_> frobware, LGTMed
<mup> Bug #1607749 changed: juju bootstrap fails with MAAS trunk and juju beta12 <juju-core:New> <https://launchpad.net/bugs/1607749>
<mup> Bug #1607749 opened: juju bootstrap fails with MAAS trunk and juju beta12 <juju-core:New> <https://launchpad.net/bugs/1607749>
<mup> Bug #1607766 opened: juju-upgrade-mongo should ask for confirmation <juju-core:New> <https://launchpad.net/bugs/1607766>
<Mmike> Hi, lads - can I get some traction (or at least info) on bug #1457575 ?
<mup> Bug #1457575: archive/tar: write too long <backup-restore> <canonical-bootstack> <intermittent-failure> <juju-core:Fix Released> <juju-core 1.24:Fix Released> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1457575>
<mgz> Mmike: weeelll... that's a confusing bug status
<Mmike> mgz: what can I do to un-confuse it? :) (the subject is a bit vague, yup)
<mgz> I wish my link was still valid...
<mgz> Mmike: I'm guessing we just marked it fixed because we stopped seeing it in CI
<mgz> not because anyone actually fixed it
<mgz> this is probably not much good to you unless I can work out what changed
<Mmike> mgz: I just tested it locally (in our stsstack), and while it is fixed in latest 1.24, it is still failing in 1.25.6
<mgz> welp
<mgz> I wonder what the diff looks like if I try to merge 1.24 to 1.25
<Mmike> Have a customer that is hitting this issue - I'll suggest a workaround, i.e. to delete stale log files and then try backups again. But the customer would appreciate it if he sees some traction on this bug :)
<Mmike> mgz: maybe it's just a matter of some tactical include missing :D
 * Mmike is joking, of course
<mgz> there's no commit on the 1.24 branch mentioning that bug number or the symptom...
<Mmike> mgz: maybe the issue never existed in 1.24 (or was already fixed by the time bug was first reported), so it was 'Invalid' for 1.24
<Mmike> s/was/should be/
<frobware> fwereade_: re my comment: the lack of a default gateway need not be catastrophic, hence warning vis-a-vis error?
<mgz> Mmike: maybe. we have one failure in CI on 1.24 from 2015-05
<mgz> the others are on master/feature branches (1.25)
<mgz> Mmike: given you have repro steps, I think it's probably just something we need to get assigned to someone to fix
<Mmike> that'd be neat
 * Mmike would look to cheryl, for no apparent reason, had she been here when I wanted to look :D
 * Mmike thinks his english broke
<fwereade_> frobware, hmm... if it *is* reasonably expected, can or should it be an INFO? honestly I was pretty much convinced by davecheney's rubbishing of most logging levels
<fwereade_> frobware, I think the place we generally need more sophistication is in the provenance of log messages
<frobware> fwereade_: that logging statement I added today - I've had the patch for a while. One wonders whether we should log anything at all. And if so, let's make it just informational.
<fwereade_> frobware, +1
<mup> Bug #1607786 opened: juju backups won't backup whole /var/log/juju directory <juju-core:New> <https://launchpad.net/bugs/1607786>
<mgz> Mmike: so... this looks reasonably fixable, though the difference between 1.24 and 1.25 isn't totally clear
<mgz> they do have different utils versions but nothing in the log looks like a relevant utils/tar change
<Mmike> mgz: also, it seems that the actual size of the all-machines.log file which triggers the bug is not the same each time - I tested this against the maas provider (so my bootstrap machine was a node in a maas cluster), and I had to kick all-machines.log up to 1.5GB for the bug to kick in
<Mmike> now I'm testing against openstack provider (so my state machine is some kvm instance), and it worked ok for 300MB file, failed for 600MB file
<mup> Bug #1607794 opened: withoutControllerSuite.TestWatchMachineErrorRetry unexpected change <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607794>
<mgz> Mmike: oh...oooo
<mgz> I wonder
<mgz> I bet the size is just helping us hit the bug by making the process slower
<mgz> Mmike: the log is still being written during backup right?
<mgz> so... theory-of-bug
<mgz> we start backup, pass the name all-machines.log to tar.TarFiles to stick in our tar
<mgz> that code stats the file to get the size
<Mmike> mgz: keep in mind that if I repeat the test with the exact same file sizes, it doesn't fail on 1.24.7
<mgz> starts io.Copy-ing the contents across
<mgz> a new line goes into the log - it's now longer than it was when stat-ed
<mgz> we hit the error case in tar.Write
<mgz> which doesn't let you write more than you said you were going to
<mup> Bug #1607794 changed: withoutControllerSuite.TestWatchMachineErrorRetry unexpected change <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607794>
<mgz> Mmike: I can try a trivial patch to test that theory if you're up for trying a custom binary
<Mmike> mgz: sure thing, would be happy to
<mgz> how are even the utils tests slow..
<mgz> 0.004s runtime 5 seconds compiling I guess
<mgz> okay, passed
<mgz> http://paste.ubuntu.com/21401859
<mgz> Mmike: building you some binaries... presume amd64 is fine?
<dooferlad> frobware: If you have a moment, http://reviews.vapour.ws/r/5333/ is ready to fix that LXD gateway bug
 * dooferlad goes to get tea
<Mmike> mgz: yup, amd64
<frobware> dooferlad: I wasn't sure what stage things were for you so I was also working this: http://reviews.vapour.ws/r/5331/
<mgz> ...now I wish I added some more debug statements as well
<mgz> build takes aaages
<mup> Bug #1607794 opened: withoutControllerSuite.TestWatchMachineErrorRetry unexpected change <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607794>
<natefinch> does anyone know how to use juju/schema?  I can't understand the API
<natefinch> rogpeppe: ^
<rogpeppe> natefinch: sure, what don't you understand?
<rogpeppe> natefinch: FWIW it's one of the very oldest packages in juju...
<mgz> Mmike: okay, I went back and added a bunch of debugging in case I'm wrong
<natefinch> rogpeppe: I expect there to be some sort of "Validate" method or something... I'm adding some logic to validation options, except that I can't find code that actually does validation
<rogpeppe> natefinch: Coerce is the method to use
<rogpeppe> natefinch: it's not just validation, but it also converts to a standard form
<rogpeppe> s/method/function/
<Mmike> mgz: ack, just let me know where to get the binary from
<natefinch> rogpeppe: ahh, I think it's the FieldMap function that is the entrypoint I'm trying to find
<frobware> mgz: how often is the daily(??) PPA generated?
<natefinch> lol, I hope there's only one answer to that
<mgz> frobware: by default, when master is blessed
<frobware> mgz: ty
<mgz> so, 99 days ago
<mgz> hm, what happened to lillypilly
<mgz> Mmike: so, I have a file, but I don't have routing from my build machine to mombin, which is the new lillypilly
<mgz> Mmike: so, if you can get to my canonistack box, pull it straight from there?
<Mmike> mgz: how large is it, can you email it to me?
<Mmike> mgz: or you can add my keys from https://launchpad.net/~mariosplivalo/+sshkeys
<mgz> Mmike: scp 10.55.60.255:~/juju-1.25-gztardebug.tar.xz .
<mgz> done ssh-import-id already
<Mmike> ack
<Mmike> mgz: what username should I use?
<mgz> ubuntu
<mgz> has the juju and jujud binaries for 1.25 tip with this change applied, so unpack somewhere and ./juju bootstrap --upload-tools
<Mmike> ack
<Mmike> it's downloading
<mgz> expectation is either no error with your steps, or same error but with +GZ and extra details on the end
<Mmike> Building tools to upload (1.25.6.1-trusty-amd64)
<Mmike> mgz: shouldn't the version string be changed?
<Mmike> eh, sorry, my fault
<Mmike> wrong binary :)
<mgz> :D
<mgz> it should report 1.25.7.1
<Mmike> mgz: testing it now
<Mmike> -rw------- 101/4    1572890664 2016-07-29 14:45 var/log/juju/all-machines.log
<Mmike> mgz: it worked ok, it put that large file into root.tar
<Mmike> sec, I'll pastebin
<Mmike> mgz: http://paste.ubuntu.com/21407167/
<mgz> Mmike: now I want to build one that I think will fail with the extra debugging to be sure...
<Mmike> sure thing
<Mmike> i'm going to be around for the next hour and a half
<mgz> I guess it looks like we have the fix though
<mgz> hm, the other thing that could matter is the go version
<mgz> Mmike: scp ubuntu@10.55.60.255:~/juju-1.25-gztarbad.tar.xz .
<mgz> I worry that may actually pass too given it's go 1.6 but hopefully now
<mgz> *not
<Mmike> mgz: few mins, pls
<mup> Bug #1605714 opened: juju2 beta11: LXD containers always pending on ppc64el systems <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1605714>
<perrito666> this is glorious https://twitter.com/fatih/status/759049292531109889
<Mmike> mgz: sorry, had mtgs, testing it now
<mgz> Mmike: no worries, thanks for all your help
<Mmike> mgz: it failed, as expected: http://paste.ubuntu.com/21412575/
<natefinch> perrito666: gah, I should have finished my code to do that: https://github.com/natefinch/graffiti
<mgz> Mmike: woho
<Mmike> that's a good 'woho', right? :)
<mgz> Mmike: yeah, though... I did get my debug statement slightly wrong, annoyingly
<mgz> Mmike: I'm happy to propose this
<mgz> will be in 1.25.7 and it seems like the best workaround is minimising the logs/avoid getting the logfile written to mid-backup
<Mmike> mgz: excellent, thank you very much!
<Mmike> mgz: may I ask you to throw a short update on the bug so that I can point the customer there too?
<mgz> Mmike: I shall indeed summarize from irc and assign to me
<Mmike> mgz: muchos gracias, senor!
 * Mmike signs off for the week
<mup> Bug #1457575 opened: archive/tar: write too long <backup-restore> <canonical-bootstack> <intermittent-failure> <juju-core:In Progress by gz> <juju-core 1.24:Fix Released> <juju-core 1.25:In Progress by gz> <https://launchpad.net/bugs/1457575>
<mgz> perrito666: so... you want to review some code for me?
<mgz> I have a slight issue in that I can fix this much easier than I can test it
<mup> Bug #1457575 changed: archive/tar: write too long <backup-restore> <canonical-bootstack> <intermittent-failure> <juju-core:In Progress by gz> <juju-core 1.24:Fix Released> <juju-core 1.25:In Progress by gz> <https://launchpad.net/bugs/1457575>
<mup> Bug #1457575 opened: archive/tar: write too long <backup-restore> <canonical-bootstack> <intermittent-failure> <juju-core:In Progress by gz> <juju-core 1.24:Invalid> <juju-core 1.25:In Progress by gz> <https://launchpad.net/bugs/1457575>
<mup> Bug #1607855 opened: introspectionSuite.SetUpTest unable to listen to socket <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1607855>
<mup> Bug #1607858 opened: MachinerStateSuite.TestSetsStatusWhenDying timeout waiting for status to change <ci> <intermittent-failure> <regression> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1607858>
<mgz> any OCR today?
<perrito666> mgz: fwereade_  or what is left of himm
<mgz> ...what did you do to him...
<mgz> perrito666: this one is right up your alley though, https://github.com/juju/utils/pull/227 (and 228)
<fwereade_> mgz, LGTM
<mgz> I looked quite seriously at adding a test, but that meant rewriting so much of the implementation to take non io/os things
<fwereade_> mgz, so I imagined
<fwereade_> mgz, perhaps a comment is warranted?
<mgz> // Limit data copied to the size of the file on first stat to prevent ErrWriteTooLong from tar if file grows
<mgz> or something like that?
<perrito666> mgz: oh, that is not right
<mgz> from archive/tar I guess
<perrito666> mgz: where in the world did you see tar not whining about files changing during copy?
<mgz> perrito666: for the log case it's reasonable. for other cases of getting files rewritten under you it's perhaps a higher level code fault, but can only be recovered by abandoning the tar and recreating
<fwereade_> mgz, yeah, I had "CopyN lest f grow during Copy" in my head, which is unhelpfully terse
<mgz> perrito666: it does whine, that's the issue. breaks backup.
<fwereade_> mgz, perhaps it's worth a doc comment actually
<perrito666> mgz: do add a doc comment, out of the top of my head I can think quickly in another file that might change and we definitely dont want to truncate
<mgz> yeah, doesn't hurt to be explicit about the function behaviour
<perrito666> which is agent.conf
<fwereade_> mgz, "if it succeeds, it has copied at least the full contents of the file at the time it was passed" or something
<perrito666> fwereade_: not entirely true
<mup> Bug #1607859 opened: MachineWithCharmsSuite.TearDownTest left sockets in a dirty state <ci> <intermittent-failure> <regression> <unit-tests> <xenial> <juju-core:Triaged> <https://launchpad.net/bugs/1607859>
<mgz> if the file is being in-place mutated we're at the mercy of the filesystem implementation anyway
<perrito666> if it succeeds will copy at least size(file) at the time it was passed  <- this is more realistic
<fwereade_> heh, yes
<mgz> if we're doing rewriting correctly by linux rules, agent.conf is not a problem
<mgz> because we have a handle to the old file
<mgz> so we just back up the old version
<perrito666> mgz: I suddenly fear we are not
<mgz> not the agent.conf.new that gets renamed agent.conf
<mgz> perrito666: well, that's a bug then :)
<perrito666> are we actually doing that?
<mgz> we really should be
<perrito666> someone, not me, should check
<fwereade_> perrito666, we darn well should be
<mgz> truncate and write is asking for bugs on linux
<perrito666> fwereade_: never even looked at that part of the code
<fwereade_> perrito666, agent/agent.go:622 has utils.AtomicWriteFile
<perrito666> sweet
<natefinch> relatively easy review anyone?  https://github.com/juju/schema/pull/13/files
<mgz> fwereade_: added some text on the master branch for your delectation
<Guest16266> abentley, sinzui is there a way to populate my local db with issues?
<mgz> issue #1: I have lost my irc nick
<mgz> I suspect you can dump the live reports db, or at least some of it, and pull it in locally
<sinzui> Guest16266: import issues from reports.vapour.ws?
<natefinch> gah, juju/schema is so weird
<perrito666> k bbl, about 1h
<mup> Bug #1607895 opened: juju2, maas2, two credentials yields confusing error message <landscape> <usability> <juju-core:New> <https://launchpad.net/bugs/1607895>
<mgz> if anyone is alive still, I have the utils dep bump branches up
<mgz> natefinch: you have a vested interest in reviewing, also picks up one of your bug fixes
<natefinch> mgz: link me?
<mgz> http://reviews.vapour.ws/r/5338/ http://reviews.vapour.ws/r/5337/
<natefinch> mgz: the CopyN thing is weird.... what causes that?
<natefinch> mgz: i.e. - when would copy copy more than that?
<mgz> the way the writeContents function works is that it opens the file at the start
<mgz> stats it
<mgz> that information (including the filesize) it then put in the tar header
<mgz> then we kick off io.Copy with the file handle
<mgz> if io.Copy takes long enough, the file might have been appended to by the time it finishes
<mgz> so, we're writing more data than we promised we would in the header
<natefinch> I get it
<mgz> to prevent borked tarfiles, the archive/tar package has a check for that and throws
<mgz> as we're talking logfiles in practice, just taking their contents (length) at time of opening is sane
<mgz> anything else should be using atomic file updates
<natefinch> right
<natefinch> lgtm'd
<mgz> natefinch: ta!
<sidarta>  /j #lxcontainers
<perrito666> mgz: Y U ESCALATE ON FRIDAY?
<mgz> perrito666: so I can land and get a CI run while I frolick over the weekend? :)
<perrito666> you are a sick puppy (and just sent me to look up a word)
<mgz> perrito666: much of my pleasure on irc is making you learn more obscure english language and culture
<mgz> though, this one really is pretty innocent (and lose the k when not being archaic)
<perrito666> mgz: you wont stop until you see me having afternoon tea with scones, arent you?
<mgz> I feel scones are not a great stretch for you
<perrito666> mgz: can I get more english than that?
 * perrito666 has an actual tea pot in his work desk
<mup> Bug #1607964 opened: juju2, maas2, lxd containers started with wrong IP, rely on dhclient to switch things <kanban-cross-team> <landscape> <juju-core:New> <https://launchpad.net/bugs/1607964>
<mup> Bug #1607971 opened: [juju 2.0 ] cannot remove or destroy machine in pending state <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1607971>
<redir> I am going EoW soon and will be on Holiday next week. See you all the following week.
#juju-dev 2016-07-30
<redir> if anyone get a round-tuit and gives http://reviews.vapour.ws/r/5340/ a shipit! can you also shepherd it through merging, please.
<mup> Bug #1608105 opened: LXD no longer activates all interfaces on initial deploy when using MAAS2rc3 and JUJU Beta13 <juju-core:New> <https://launchpad.net/bugs/1608105>
#juju-dev 2017-07-24
<axw> burton-aus: standup?
 * babbageclunk goes for a run
<blahdeblah> Anyone want a look at my live controller hitting https://bugs.launchpad.net/juju/+bug/1695894 before I hulk smash it?
<mup> Bug #1695894: [2.2.b4] model deletion hung, unable to completely destroy the model or create a new one with the same name, errors in logsink.log <canonical-is> <juju:New> <https://launchpad.net/bugs/1695894>
<blahdeblah> I guess you're all still recovering from travel
<babbageclunk> blahdeblah: sorry, just saw this - is the controller still in the same state? Can you get machine-0.log from a controller machine?
<blahdeblah> babbageclunk: I restarted all the jujuds
<blahdeblah> but I can get you a controller log
<babbageclunk> blahdeblah: ok thanks
<babbageclunk> blahdeblah: and restarting the jujuds resolved it, presumably?
<blahdeblah> babbageclunk: at least from the perspective of juju status not hanging for 10 minutes, yes
<blahdeblah> Still seems an awful lot of log spew
<blahdeblah> babbageclunk: https://private-fileshare.canonical.com/~paulgear/lp1695894/ has log
<babbageclunk> blahdeblah: thanks - taking a look now
<babbageclunk> blahdeblah: I was hoping that might have something useful/interesting at the time the problem started, but it looks like it was happening for long enough that the spam from the i/o timeout is all that's in the log.
<blahdeblah> babbageclunk: I've got two previous logs; let me see if I can find something
<babbageclunk> ok thanks
<blahdeblah> when was it I was talking to you about my 2.2.2 upgrades?
 * blahdeblah checks scrollback
<babbageclunk> blahdeblah: you mean in Mooloolaba?
<blahdeblah> nah - looks like last Monday
<babbageclunk> oh, I'd forgotten, sorry!
<blahdeblah> so I'll have 2 more logs for you - 1 sec
<blahdeblah> babbageclunk: two more logs there now
<babbageclunk> blahdeblah: and they both had the same thing happen?
<blahdeblah> babbageclunk: same controller; just earlier logs
<blahdeblah> hopefully you'll find what you're looking for sometime between last Monday (when I left it in a working state), and this afternoon, when I came back and juju status didn't give me anything for 10 minutes before I gave up waiting.
<blahdeblah> babbageclunk: Are you the only one around?  Who's the best person to ask about the details of rick_h's post?  http://mitechie.com/blog/2017/7/7/call-for-testing-shared-services-with-juju
<blahdeblah> rick_h: FWIW, went through your blog post step by step, worked out my problem, and got a successful cross-model relation.  Seems to work.  Where and what sort of further testing feedback are you looking for?
<rick_h> blahdeblah: awesome
<rick_h> blahdeblah: testing is basically making sure it works for the charms, use cases, network setups, how's it work with permissions, stability, error reporting/logging information usefulness. Basically everything lol
<blahdeblah> rick_h: I sent you a reply on the mailing list; hopefully that will prompt a few others to try it out.  Once Ian gets the reverse direction cross-controller relations going, I'll be able to do a lot more of the testing you'd like to see.
<rick_h> blahdeblah: cool, thanks for testing it out and the email. Good stuff
<bdx> heya
<bdx> I just had a user set a model config that didn't previously exist, now it exists in my model config and I can't seem to remove it
<bdx> http://paste.ubuntu.com/25165776/
<bdx> notice the 'env' key
<bdx> I'm getting this is my log
<bdx> machine-0: 18:27:16 ERROR juju.provisioner cannot start instance for machine "7": cannot run instances: The requested configuration is currently not supported. Please check the documentation for supported configurations. (Unsupported)
<bdx> but I cant remove the model-config
<bdx> but it's squashing any other functionality in the model
<mup> Bug #1672306 changed: unit stuck executing update-status <canonical-is> <juju-core:Confirmed> <https://launchpad.net/bugs/1672306>
#juju-dev 2017-07-25
<hml> babbageclunk: standup?
<babbageclunk> d'oh
 * babbageclunk goes for a run - it's a glorious day!
<menn0> axw: do you know of a simple charm that uses storage?
<menn0> I'm looking to sanity check some changes
<axw> menn0: postgresql is the thing I usually test with, but it's not exactly simple. there's http://jujucharms.com/u/axwalk/storagetest, it's pretty rough and ready but I think it counts as simple
<menn0> axw: thanks!
<menn0> axw: the filesystem storage seems to work but the block storage doesn't
<menn0> axw: fallocate: /var/lib/juju/storage/loop/volume-0-0: fallocate failed: Operation not supported: exit status 1
<axw> menn0: lxd?
<menn0> axw: I don't think that's due to my changes
<menn0> axw: yep
<menn0> axw: well loopback on lxd
<axw> menn0: you have to open up permissions, there's docs... just a sec
<menn0> axw: it doesn't matter
<axw> https://jujucharms.com/docs/stable/charms-storage#lxd-loop-devices
<axw> ok
<menn0> axw: as long as that's expected
<axw> menn0: we should detect support and reject, but yeah it's expected atm
<menn0> axw, mmcc: large (and boring) PR to break off storage bits onto new type
<menn0> https://github.com/juju/juju/pull/7667
<axw> menn0: ok. probably won't finish till after lunch
<mmcc> menn0: thanks, will start to look in a couple hours. probably don't need to wait on me either way though
<menn0> mmcc: no worries
<axw> menn0: LGTM
<menn0> axw: thanks
<thumper> babbageclunk: morning
<thumper> babbageclunk: we should chat
<babbageclunk> thumper: hey, yes!
<babbageclunk> thumper: are you in a release call atm?
<thumper> babbageclunk: no
<babbageclunk> thumper: ok, in 1:1 then?
<thumper> babbageclunk: sure
#juju-dev 2017-07-26
<axw> veebers: are the full CI runs being converted to a pipeline too?
<veebers> axw: that's the plan, although still a little bit off as we line the ducks up
<axw> veebers: ok, cool
<wallyworld> axw: got 10 minutes for a HO?
<axw> wallyworld: sure, 1:1? standup?
<wallyworld> ok
<axw> wallyworld: hang on, delivery here
<thumper> I have a parent teacher interview shortly, should be back for tech board
<wallyworld> axw: here's the first PR when you are free https://github.com/juju/juju/pull/7672
<axw> wallyworld: did you already push fixes? WatchRelationUnits is still using RemoteRelationDetails
<wallyworld> axw: yeah, i missed committing that bit, pushed now
<babbageclunk> rogpeppe: have you got a moment to give me some hints about a macaroon problem I'm hitting with the 1.25 upgrade?
<rogpeppe> babbageclunk: i'm in a call right now. be with you shortly.
<babbageclunk> rogpeppe: ok, thanks
<axw> wallyworld: I'll miss standup tomorrow, going to Charlotte's school assembly. my PR got held up, I just need to fix a couple of tests now and will propose tomorrow
<wallyworld> axw: no worries, enjoy the assembly
<babbageclunk> axw: MOAR PRIZES
<axw> babbageclunk: heh no, not this time - just her class item :)
<wallyworld> babbageclunk: if you have a chance at some stage, here's a PR which renames two structs and  splits a worker into separate files for maintainability (verbatim cut and paste) https://github.com/juju/juju/pull/7674
<wallyworld> there's no urgency as the followup is still under development
<wallyworld> i need to land at some point this before proposing the next bit of work
<babbageclunk> wallyworld: ok, looking now
<wallyworld> \o/
<babbageclunk> wallyworld: LGTM'd
<wallyworld> you rick tyvm
<wallyworld> *rock
<babbageclunk> I rick as well, sometimes.
<rick_h> Lol
 * babbageclunk is popping out very briefly
#juju-dev 2017-07-27
<menn0> axw: ping
<axw> wallyworld: finally got this up, can you please take a look when you're free? https://github.com/juju/juju/pull/7675
<wallyworld> sure
<axw> wallyworld: apologies for the size
<wallyworld> that's what they all say
<wallyworld> axw: did you want to fix the conflicts in that branch?
<axw> wallyworld: yep, in progress
<axw> wallyworld: done, sorry about that
<wallyworld> np
<wpk> wallyworld axw: Q: do you think that placement directive (--to <foo>, add-machine <foo>) should ignore model constraints?
<axw> wpk: yes
<wallyworld> i would have thought so
<wallyworld> the user is asking for a specific machine
<wpk> ok
<wallyworld> axw: lgtm, far out, how many more env->model cases will we find; it's never ending
<axw> wallyworld: heh yeah :/
<axw> wallyworld: thanks
<wallyworld> sure
<axw> wallyworld: the cleanup args thing felt a bit yucky, don't suppose you had any ideas on alternatives?
<wallyworld> axw: not really. it seemed ok to me tbh
<axw> wallyworld: ok
<wallyworld> i mean we could encode specific attrs in the struct, but generic args is ok too?
<rogpeppe> is anyone around that might be able to do a second review on this goose swift change, please? https://github.com/go-goose/goose/pull/54
<rogpeppe> i've got one review but it's reasonably involved code, so i'd really like another
<stokachu> axw: are you still online?
<wallyworld> rick_h: you around for 1:1?
<rick_h> wallyworld: yep omw
<babbageclunk> axw_: some reviews for you! https://github.com/juju/1.25-upgrade/pulls
<babbageclunk> axw_: (if that really is you)
 * babbageclunk narrows eyes
<axw_> babbageclunk: okey dokey. probably won't get to it until after school dropoff
<babbageclunk> axw_: cool cool. Also don't get freaked out by the line counts! :)
#juju-dev 2017-07-28
<wallyworld> externalreality_: juju/charm tests failed
<wallyworld> externalreality_: also, you need to land the juju/charm change, pull tip of master locally, and then do the dep updates in the bundledata PR and push
<wallyworld> you can't do the dependencies.tsv updates until the other repo pr has landed
<wallyworld> iianm
 * babbageclunk goes for a run in the horrible cold weather
<menn0> externalreality_: any chance I could get a quick summary of what you've been working on? (email if possible)
<menn0> wallyworld: those 2 weekly status emails you sent are about the same thing right?
<wallyworld> menn0: oh, i didn't realise both got sent - the first one had a typo
<menn0> wallyworld: no worries
<menn0> axw, babbageclunk, or wallyworld: https://github.com/juju/juju/pull/7679
<wallyworld> ok
<wallyworld> menn0: what a stupid bug, lgtm
<menn0> wallyworld: thanks
<axw> babbageclunk: oops, I pulled the trigger on https://github.com/juju/1.25-upgrade/pull/5 too soon
<axw> babbageclunk: ignore that review, it was meant for just one commit
<babbageclunk> axw: ok, holding fire.
<axw> babbageclunk: https://github.com/juju/1.25-upgrade/pull/5/commits/d936c14a4504ca3463f95037659b950c7c0be7a1 makes me think we should keep any changes to the trees separately. code is probably too inflexible to support that though
<babbageclunk> axw: yeah, I was thinking something similar - it's going to be a pain if we need to update juju2 again after 2.2.3 comes out.
<babbageclunk> axw: Could keep those changes as a set of patches that get applied when building?
<babbageclunk> axw: nasty though - works ok in CI where we're throwing away working trees each time.
<axw> babbageclunk: yeah, I think it'll be a PITA to work that way
<axw> not sure yet
<axw> babbageclunk: ok I think you're good to merge
<babbageclunk> To be honest it wasn't *too much work* to unpick them - I think if we keep changes to juju2 in separate commits from changes to the other parts it might be ok.
<babbageclunk> axw: but yeah, a better way would be good.
<babbageclunk> axw: awesome, thanks.
<axw> babbageclunk: ok. we're not going to have as huge a change between 2.2.2 and 2.2.3 anyway
<babbageclunk> axw: true
<babbageclunk> axw: actually, I don't think there's a merge-bot on that repo and I don't have commit privs. Do you?
<axw> babbageclunk: nope
<axw> and no thumper in sight
<babbageclunk> he's on a swap day
<axw> wallyworld: do you have special powers over https://github.com/juju/1.25-upgrade? can you merge?
<wallyworld> i'll try
<wallyworld> which pr?
<babbageclunk> yeah, I was just going to hassle wallyworld too
<babbageclunk> start with #1 please - I think I'll need to rebase each one.
<babbageclunk> wallyworld: ^
<wallyworld> yep, on it
<babbageclunk> wallyworld: I don't think there's a mergebot.
<wallyworld> babbageclunk: done, next one has conflict
<babbageclunk> yeah, unsurprising - I'll need to do some rebasing for some of them
<anastasiamac> can someone plz stamp a straightforward move of resources cmd? https://github.com/juju/juju/pull/7680
<anastasiamac> purely mechanical :D
<wallyworld> anastasiamac: lgtm
<anastasiamac> wallyworld: menn0: \o/ i've missed one little refactoring thing... but thank you for reviews :D
<wallyworld> axw: here's a PR against the upstream branch yet to land, it adds the api/apiserver layers for the relation status watcher https://github.com/wallyworld/juju/pull/40
<axw> wallyworld: ok. prob won't get to it until after lunch
<wallyworld> axw: no worries, i still need to land the first one. i'll keep going on the 3rd one
<babbageclunk> wallyworld: can you please land the other upgrade PRs in order now? I've rebased them all, so I think they should go one after the other. https://github.com/juju/1.25-upgrade/pulls
<wallyworld> ok, just for you
<babbageclunk> wallyworld: thanks! Whoa, that "now" sounds really demanding where it is, I actually meant "I've rebased them now".
<wallyworld> nah, all good :-)
<wallyworld> if you had said, "get me a beer, now"
<wallyworld> that would have been different
<wallyworld> anyway, all landed
<babbageclunk> wallyworld: awesome! Get yourself a beer, now.
<babbageclunk> Hmm, maybe a bit early.
<wallyworld> soon, soon
<axw> wallyworld: I've added another commit to https://github.com/juju/juju/pull/7676, which does the storage counts. can you PTAL?
<wallyworld> sure
<wallyworld> axw: looks great, thanks for updating
<menn0> wallyworld, babbageclunk: could you pls do a quick review of the summary email I just sent you guys
<babbageclunk> menn0: sure
<wallyworld> menn0: looks great. we can save local resources in bundles till next week when it lands
<babbageclunk> menn0: oops, replied to the email instead.
<menn0> wallyworld: i'm happy to include local resources if that's close
<wallyworld> it is
<wallyworld> should land tomorrow hopefully
<menn0> wallyworld: can you send me 2 sentences on it?
<wallyworld> ok
<menn0> babbageclunk: good call on the ordering
<menn0> wallyworld: when you said "revision number" did you mean "charm URL"?
<menn0> wallyworld: never mind
<menn0> i'm dumb
<wallyworld> no :-)
<menn0> wallyworld, babbageclunk: thanks guys. sent.
<wallyworld> ty
<babbageclunk> menn0: nice one
<axw> I reviewed your PR if you didn't notice
<axw> wallyworld: ^
<wallyworld> axw: oh great ty
<wallyworld> axw: yeah, there was already a RelationStatus. I couldn't think of a name, but just now "RelationStatusValue" popped into my head
<axw> wallyworld: that's certainly better, thanks
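The rename being discussed is a common Go pattern: when the natural struct name is already taken, the enum-like value type gets a "Value" suffix. A minimal sketch of that pattern, with hypothetical field names and status values, not Juju's actual definitions:

```go
package main

import "fmt"

// RelationStatusValue is the enum-like string type. The obvious name,
// RelationStatus, was already in use for the struct below, hence the
// "Value" suffix.
type RelationStatusValue string

// Illustrative status values (hypothetical, not Juju's real set).
const (
	Joined    RelationStatusValue = "joined"
	Suspended RelationStatusValue = "suspended"
	Broken    RelationStatusValue = "broken"
)

// RelationStatus is the pre-existing struct: a status value plus metadata.
type RelationStatus struct {
	Status RelationStatusValue
	Info   string
}

func main() {
	rs := RelationStatus{Status: Joined, Info: "relation established"}
	fmt.Println(rs.Status) // prints "joined"
}
```

Suffixing the value type rather than the struct keeps the struct's widely-used name stable and makes call sites read naturally (`rs.Status == Joined`).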
<wallyworld> axw: changes pushed
<axw> wallyworld: and approved
<wallyworld> \o/
<anastasiamac> axw: here is the PR - https://github.com/juju/juju/pull/7682 whenever u get a chance
<axw> anastasiamac: LGTM
<anastasiamac> axw: \o/
<wallyworld> axw: if you get a chance later, here's a small(ish) PR stacked on the previous one to wire up the workers to complete the functionality. just waiting on menno to look at the first state watcher pr and i'll land together after that https://github.com/wallyworld/juju/pull/41
<wallyworld> there's no hurry
<axw> wallyworld: looks good but I'm going to take another pass on monday, my brain is just about cooked
<wallyworld> axw: no worries, have a good weekend
#juju-dev 2017-07-30
<thumper> veebers: morning
<veebers> thumper: o/
<wallyworld> menn0: can I ping you for that review this morning :-)
<menn0> wallyworld: ah shit. of course!
<wallyworld> ty :-)
<babbageclunk> wallyworld: hey, what version did max-logs-age and max-logs-size get released in?
<wallyworld> 2.2.1 i think
<babbageclunk> dammit
<babbageclunk> No, looks like it was 2.2, but that still doesn't help me
<babbageclunk> since the log splitting was also added in the same version.
<wallyworld> veebers: is the merge bot broken for the juju/bundlechanges repo?
<veebers> wallyworld: otp, will check asap
<wallyworld> no worries, ty
<menn0> wallyworld: i'd like some further info on that PR. see comment
<wallyworld> ok, just filling out an ESTA, will look in a sec
#juju-dev 2018-07-27
<externalreality> guild, before my machine crashed I was going to say that during the refactor I wanted to add some acceptance tests
<rick_h_> externalreality: sounds good
<externalreality> Generally I run through the same routine of installing the ubuntu charm, upgrade-series prepare, upgrade-series complete
<rick_h_> externalreality: I'd encourage looking into adding something to the test plan doc heather had done to help make a note of things we want to have and we can help make sure to check them off
<externalreality> rick_h_, makes sense
<rick_h_> externalreality: https://docs.google.com/document/d/1PH6GeOelTzG2gn3ki5nTwseljc_O9nLC5kDR83oZp1M/edit
<rick_h_> externalreality: <3
<externalreality> One slightly different topic: does anyone know why, if I make a symbolic link from snap land to real-world land, the snap refuses to run? Actually, snaps that have nothing to do with the link I just made refuse to run too.
<externalreality> I guess it's a security thing :-(
<rick_h_> externalreality: right, you can try to see if you can install the snap with --classic to make it more open, or the dev mode flag as well I think
<rick_h_> externalreality: e.g. there are some escape hatches that break the security/etc but might do the job if you need it to
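The workaround rick_h_ suggests can be sketched as follows. This assumes the snap in question publishes a classic variant (not all do), and "foo" is a placeholder name, not a real snap:

```shell
# Strictly confined snaps run in a sandbox and cannot follow symlinks
# that point outside the directories their confinement allows.
# Reinstalling with looser confinement is one way around that.

# Classic confinement: no sandbox at all; the snap must be published
# with classic support for this flag to work.
snap install foo --classic

# Devmode: confinement violations are logged instead of blocked,
# useful for debugging which access is being denied.
snap install foo --devmode

# Check the confinement mode of installed snaps.
snap list
```

Note that a broken symlink or denied access in one snap should not normally affect unrelated snaps; if every snap stops running, the problem is more likely snapd itself (e.g. `systemctl status snapd` is worth checking) than the single symlink.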
