[00:35] <wallyworld_> thumper: hey. can you look at a simple mp? https://codereview.appspot.com/44420044
[00:44] <sinzui> ha ha. I am preparing a release and Azure is having network issues.
[00:44] <wallyworld_> sinzui: is this 1.17?
[00:44] <sinzui> wallyworld_, I mean azure, not juju.
[00:45] <wallyworld_> no i mean is the release you are preparing a juju 1.17 release?
[00:45] <sinzui> CI failed 2166 for azure with networking errors. When I logged into azure, they said they were having networking issues
[00:46] <wallyworld_> :-(
[00:46] <sinzui> I am going to replay the tests, but given that azure is wobbling, I may need to wait
[00:48] <sinzui> wallyworld_, I am going to release 1.17.0. CI loves juju at the moment so I have let everyone keep landing (I have a lot of good revisions to choose from).
[00:49] <wallyworld_> great :-)
[00:49] <wallyworld_> sinzui: btw, there is a mp for a restore plugin in progress. so backup/restore will definitely be ready for 1.17.1 i think
[00:49] <sinzui> rock
[00:54] <wallyworld_> sinzui: i forget whose court the proverbial ball is in - streams.canonical.com still has the test data we made at the sprint. were there plans to get that sorted out for the 1.17 release also?
[00:54] <sinzui> wallyworld_, yes, Ben is on holiday and didn't want to touch it while other staff were also missing
[00:54] <wallyworld_> ok, np :-)
[00:55] <sinzui> wallyworld_, We have the build/release process tested and will set everything up in the first week of Jan
[00:55] <wallyworld_> \o/
[00:55] <sinzui> wallyworld_, oh, actually...
[00:56] <sinzui> The lost OS images on azure happened when Azure added Japan regions
[00:56] <sinzui> I need to add them to the mirrors file
[00:56] <wallyworld_> ah
[01:02] <thumper> hi wallyworld_
[01:02] <wallyworld_> hey
[01:09] <thumper> wallyworld_: reviewed
[01:09] <thumper> wallyworld_: just a small thing
[01:09] <wallyworld_> great thanks
[01:11] <wallyworld_> thumper: so i hacked up some code for charm version in status - it shows version info for both service and units since i figure units can be out of date separately from the service if a node is down while the service is updated
[01:11] <thumper> yeah...
[01:11] <wallyworld_> but i need martin's stuff to land before i can go any further
[01:11] <thumper> I was thinking about that
[01:11] <wallyworld_> so i'll park it for now
[01:11] <thumper> well, mgz is off now
[01:11] <wallyworld_> :-(
[01:11] <wallyworld_> too bad status didn't get done
[01:12] <wallyworld_> thumper: i could propose against trunk as is maybe
[01:12] <wallyworld_> i would need to clean up my code - i just hacked it together
[01:12] <thumper> at least we'll get it done for 1.18
[01:13] <thumper> I'd like to talk through the status before you run with it
[01:13] <wallyworld_> sure, next year then :-)
[01:13] <wallyworld_> or we can talk now
[01:17] <thumper> we can chat now
[01:17] <wallyworld_> thumper: i don't understand "otherwise if you get a comment that starts with a space, you return it."
[01:18] <thumper> do you still have that pastebin?
[01:18] <wallyworld_> yeah
[01:18] <thumper> let's hangout
[01:18] <wallyworld_> https://pastebin.canonical.com/102221/
[01:18] <thumper> I still have it open too
[01:18] <wallyworld_> https://plus.google.com/hangouts/_/7ecpi24kekk8g5rnnpd44j64e0
[01:25] <bradm> https://pastebin.canonical.com/102275/ <- interesting response to a juju destroy-environment
[01:33] <thumper> bradm: :-(
[01:34] <bradm> looks like swift is a bit confused with what's in that bucket
[01:34] <bradm> swift download tells me there's files not found
[01:35] <bradm> fwiw juju seems to have done the right thing, and blown away my environment, just the swift bucket still exists with some dodgy content
[01:38] <bradm> interested in any debugging from this?  otherwise I'll just blat the swift bucket
[01:47] <wallyworld_> bradm: 409 normally means there's still content in the container that's being asked to be deleted. not sure how that happened unless a call to delete a file in the container failed. i wonder if anything was logged?
[01:47] <wallyworld_> thumper: changes pushed
[01:47] <thumper> already approved
[01:50] <bradm> wallyworld_: I can't see anything, and if I rebootstrap, then destroy-environment, it looks clean
[01:50] <wallyworld_> blame it on santa's elves or something
[01:51] <bradm> seems as good a reason as any!
[01:51] <wallyworld_> if it happens more regularly we can look at it
[01:53] <bradm> definitely, I'd hate to file a bug for an unreproducible issue, it just wastes everyone's time
[02:33] <wallyworld_> axw: i've landed a change to more robustly ignore empty lines in the auth keys string in juju env config
[02:34] <axw> wallyworld_: thanks!
[02:34] <wallyworld_> i meant to do it right the first time but clearly didn't
[02:34] <axw> no worries
[02:35] <axw> I mean to do lots of things right the first time but clearly don't ;)
[02:37] <wallyworld_> don't we all :-)
[02:44] <axw> thumper: do you have any juju-run things you'd like me to do while you're off? otherwise I have manual-provider destroy-environment, and Windows SSH/bootstrap stuff to work on
[02:46] <wallyworld_> thumper: also, don't forget to email us the hotel details
[02:57] <thumper> wallyworld_: ok
[02:57] <thumper> axw: work on the windows bootstrapping and ssh stuff
[02:57] <thumper> axw: I'm parking the rest of the juju run client until after the break
[02:59] <axw> thumper: okey dokey
[03:00] <thumper> axw: how do I get the long hash for the tip of go.crypto?
[03:00] <thumper> hg tip just shows the short hash
[03:00] <axw> thumper: not sure sorry, I've just used "godeps -t" in the past
[03:00]  * axw looks at the source
[03:00] <thumper> -t?
[03:01] <axw> thumper: -t generates the contents of dependencies.tsv
[03:01] <axw> godeps -t <some package that imports go.crypto/ssh>
[03:01] <thumper> I don't have godeps installed
[03:02] <axw> thumper: hg log -l 1 -r . --template "{node} {rev}\n"
[03:02] <axw> that's what godeps does
[03:03] <thumper> cheers
[03:04] <thumper> axw: where is godeps? and is its use documented?
[03:04] <axw> thumper: go get launchpad.net/godeps
[03:04] <axw> thumper: there's a bit about it in our README I think
[03:04] <thumper> ta
[03:04] <axw> thumper: juju-core/CONTRIBUTING
[03:04] <axw> right at the bottom
[08:09] <jamespage> wallyworld_, it's not canonistack - jam + mgz have access to it
[08:42] <rogpeppe> thumper: the -t flag asks for testing dependencies to be included (probably not necessary in your case)
[08:42] <rogpeppe> thumper: (but was undocumented - i've just fixed that)
[09:33] <dimitern> thumper, you around?
[09:33] <dimitern> thumper, I'm thinking of taking over martin's branch about status via the api and landing it
[09:37] <axw> wallyworld_: am I a bit thick, or is the authorized_keys updater modifying /root/.ssh, rather than ~ubuntu/.ssh?
[09:38] <axw> wallyworld_: I can't see where the ubuntu user is specified
[10:38] <rogpeppe> axw: ping
[10:38] <axw> rogpeppe: pong, sorta. holding my baby, may take a while to respond
[10:38] <rogpeppe> axw: np
[10:38] <rogpeppe> axw: just a quick question on dying environments
[10:39] <rogpeppe> axw: what operations is a dying environment supposed to forbit?
[10:39] <rogpeppe> forbid, even
[10:39] <axw> rogpeppe: machine and service creation
[10:39] <rogpeppe> axw: what about container creation?
[10:39] <rogpeppe> axw: i.e. nested machines
[10:40] <axw> rogpeppe: hmm, I think I had intended it to, but now I'm not sure it's necessary
[10:40] <rogpeppe> axw: i just noticed that there's no check for that, and it'll be awkward to do...
[10:40] <axw> rogpeppe: since taking down the instances will take out the containers, I think it doesn't matter
[10:41] <rogpeppe> axw: ok, that's good
[10:42] <axw> rogpeppe: will it be difficult to retrofit if I think of a reason? ;)
[10:42] <rogpeppe> axw: not *too* bad - it would just need to go in about three or four different places
[10:42] <rogpeppe> axw: one other thing: i was wondering about caching the Environment object in the state, so we don't have to fetch it every time we add a machine. is there anything about it that can change?
[10:42] <axw> ok
[10:43] <axw> rogpeppe: only life at the moment
[10:43] <axw> so yeah, that's probably a good idea
[10:44] <rogpeppe> axw: if it was an object in the state, it would make the retrofitting easier, because we wouldn't need to go through the environment object fetching dance in each place
[10:44] <axw> rogpeppe: that'll work for now, but maybe not when JAAS comes along
[10:44] <axw> but of course we can deal with that when it comes
[10:45] <rogpeppe> axw: i suspect that we'll want to maintain a one-to-one State<->Environment mapping
[10:45] <rogpeppe> axw: and add a higher level object.
[10:45] <rogpeppe> axw: or perhaps rename State to Environment
[10:45] <rogpeppe> axw: otherwise every single method gets a new argument, and that seems wrong
[10:46] <axw> rogpeppe: yeah, makes sense
[10:46] <rogpeppe> ah, standup
[10:46] <dimitern> rogpeppe, jam, others? last standup of the year! get it while it's hot https://plus.google.com/hangouts/_/calendar/am9obi5tZWluZWxAY2Fub25pY2FsLmNvbQ.mf0d8r5pfb44m16v9b2n5i29ig
[10:47] <axw> gotta go - have a nice Christmas all
[10:47] <rogpeppe> axw: you too
[10:47] <rogpeppe> axw: have fun!
[10:47] <dimitern> axw, all the best!
[10:47] <axw> thanks :)
[12:39] <rogpeppe> natefinch: what would you think about moving the replicaset to juju-core/replicaset, or perhaps juju-core/utils/replicaset ?
[12:40] <rogpeppe> natefinch: environs doesn't seem quite right as a place, as it's a building block and doesn't really have any logic related to juju environments
[13:04] <rogpeppe> lunch
[13:43] <natefinch> rogpeppe: juju-core/replicaset works.  environs was a suggestion by I forget who.  Ideally I'd just move it out to be a standalone package, but no time etc
[13:53] <sinzui> rogpeppe, dimitern, Do either of you expect to land anything in the next 2 hours?
[13:54] <dimitern> sinzui, yes, status using the API
[13:54] <dimitern> sinzui, I'm proposing it now
[13:54] <sinzui> sweet. I will wait.
[13:55] <dimitern> thanks sinzui
[13:58] <rogpeppe> sinzui: i'm also expecting a couple of branches to land, but the bot seems stalled currently - i've had them approved for the last hour
[13:59] <dimitern> rogpeppe, I'll look into the bot
[14:02] <sinzui> rogpeppe, thank you.
[14:05] <dimitern> rogpeppe, in the mean time, can you take a look at the status CL: https://codereview.appspot.com/38420046
[14:05] <rogpeppe> dimitern: looking
[14:08] <rogpeppe> dimitern: reviewed
[14:08] <dimitern> rogpeppe, thanks
[14:14] <dimitern> rogpeppe, mongod on the bot seems to have hung, so I killed it and removed the sockets, now it seems it works again
[14:14] <rogpeppe> dimitern: ok, thanks
[14:14]  * rogpeppe should work out how to do that some time
[14:21] <sinzui> jcsackett, can you take some time to read my proposed juju-core 1.17.0 release notes? https://docs.google.com/a/canonical.com/document/d/1-xisuLkhozMm40e4w5M_Nl1l4yzBBxhzrYhj5eGUoas/edit
[14:22] <dimitern> rogpeppe, I usually check for a lot of /tmp/gocheck-* or /tmp/test-mgo-* dirs, which sometimes take up all the space, then I look at ~/.bash_history to see what the others are doing :) - mainly tail -f ~/tarmac/tarmac.log
[14:23] <dimitern> rogpeppe, btw, now I can feel what you were saying about waiting forever for lbox to finish - it seems on a weak wireless link it really takes 5m to run, if not more
[14:29] <rogpeppe> dimitern: i don't actually know how to log in to see what the bot is doing
[14:29] <rogpeppe> dimitern: or look at its files
[14:30] <dimitern> rogpeppe, just a sec
[14:30] <dimitern> rogpeppe, you need to source a file like this (sent you a pm)
[14:31] <dimitern> rogpeppe, then if you have the nova python client, you can do "nova list" and should see 2 machines, the bot is running on the second one
[14:32] <dimitern> rogpeppe, and you'll need to set up your canonical sshbang stuff, so you can do ssh 10.55.x.x (whatever the ip is)
[14:35] <rogpeppe> dimitern: what do you mean by "sshbang"?
[14:36] <natefinch> rogpeppe: https://wiki.canonical.com/InformationInfrastructure/IS/SSHebang
[14:36] <dimitern> natefinch, rogpeppe, yeah - you need to
[14:36] <dimitern> set up chinstrap access, so you can ssh tunnel to canonistack machines
[14:37] <natefinch> or you can just ssh into chinstrap and ssh from there to the machine
[14:49] <jcsackett> sinzui: release notes look good.
[14:50] <sinzui> thank you jcsackett
[15:20] <dimitern> natefinch, once it's merged it's gone btw
[15:21] <dimitern> natefinch, you can't re-merge it
[15:23] <natefinch> dimitern, aww, there's no way to modify it and merge it in again, I have to make a new branch?
[15:24] <dimitern> natefinch, yes, sorry :)
[15:24] <natefinch> dimitern, bah
[15:24] <natefinch> dimitern, at least it's a trivial change
[15:28] <sinzui> natefinch, dimitern rogpeppe. I just disabled CI auto build so that I can start the builds. CI takes about 30 minutes. I want to start the CI in the next 60 minutes to verify tip is ready for release
[15:29] <dimitern> sinzui, I'm done with my stuff, the CLI API has all landed
[15:29] <sinzui> thank you very much dimitern
[15:29] <rogpeppe> sinzui: you'd better go ahead without my juju-restore then, because that'll take at least 15 minutes to merge
[15:30] <sinzui> rogpeppe, I can wait a bit. I am ever so hopeful that this release will take less than 4 hours, most of the time being spent waiting for the packages to reach the archives
[15:33] <natefinch> sinzui: the stuff I've landed isn't actually used yet, so no worries from my end
[15:33] <sinzui> thank you natefinch
[15:35] <rogpeppe> sinzui: well, it's approved, and hopefully will land soon
[15:36] <sinzui> excellent
[15:38] <dimitern> rogpeppe, ah, Client().Status(nil) is what you need for that build failure
[15:38] <rogpeppe> dimitern: oh damn, another merge problem
[15:41] <rogpeppe> approved again
[15:44] <rogpeppe> right, that's cool. i now have a "watchgobot" script that i can run to tail the tarmac log file
[15:45] <dimitern> rogpeppe, nice!
[15:45] <rogpeppe> dimitern: http://paste.ubuntu.com/6606340/
[15:45] <natefinch> dimitern, rogpeppe: can you check this out https://codereview.appspot.com/34660045 ...you can't tell from the diff, but I did a bzr mv environs/replicaset ./replicaset  (you can tell if you browse the code on the branch: http://bazaar.launchpad.net/~natefinch/juju-core/027-mongoha2/files
[15:46] <rogpeppe> dimitern: the "gobot" script just runs a command with the env vars you gave to me set
[15:46] <rogpeppe> natefinch: looking
[15:46] <rogpeppe> natefinch: LGTM
[15:47] <rogpeppe> natefinch: thanks
[15:47] <dimitern> rogpeppe, heh, nice script, too bad it won't work out of the box on bash :)
[15:47] <rogpeppe> dimitern: oh, sorry
[15:48] <rogpeppe> dimitern: rc is my default shell scripting language
[15:48] <dimitern> rogpeppe, np, it's trivial to change it so
[15:48] <rogpeppe> dimitern: you'll need to change the sed command so it takes the argument that allows extended regexps
[15:48] <dimitern> rogpeppe, i tend to stay away from bash and any shell scripting language that's not at least python :)
[15:49] <natefinch> dimitern: ditto.  bash is one step below perl in my book.... and I hate perl
[15:49] <rogpeppe> dimitern: for trivial stuff that's just calling commands, rc works pretty well
[15:49] <rogpeppe> dimitern: and is a bunch better than sh
[15:49] <dimitern> natefinch, oh, perl is somewhere far down on my list, a bit higher than java
[15:49] <natefinch> seriously, what the hell is this doing?  if(! ~ $#* 0) {
[15:49] <rogpeppe> natefinch: ! is not
[15:50] <rogpeppe> natefinch: ~ is the pattern matching operator
[15:50] <rogpeppe> natefinch: $#var gives the number of elements in a variable (all variables hold lists)
[15:50] <rogpeppe> natefinch: $* are the arguments
[15:50] <dimitern> yeah, weird i tell you..
[15:51] <rogpeppe> if len(os.Args) > 1 {
[15:51] <dimitern> yep
[15:51] <dimitern> and looks sooo much better :)
[15:51] <rogpeppe> dimitern: yeah, that's not a fantastic idiom, although the primitives are simple
[15:52] <rogpeppe> rc is actually *really* simple
[15:52] <dimitern> rogpeppe, I got my share of highly arcane bash (and the like) stuff some years back, and I was never the same person since :)
[15:53] <rogpeppe> dimitern: the main problem with bash is the hideous quoting rules
[15:53] <rogpeppe> dimitern: it's almost impossible to deal well with stuff that has spaces in
[15:53] <dimitern> rogpeppe, and let's not even get into arrays and fancy substitutions
[15:53] <rogpeppe> dimitern: ha
[15:54] <rogpeppe> dimitern: i still use vanilla bourne shell when writing bash
[15:54] <rogpeppe> dimitern: apart from $( )
[15:54] <dimitern> that's useful, as is ``
[15:54] <rogpeppe> dimitern: thing is with bash, it seems that almost noone knows about the crucial role of "$@"
[15:55] <rogpeppe> sinzui: $() is better than ` `; that's why i use it
[15:55] <rogpeppe> s/sinzui/dimitern/
[15:55] <sinzui> rogpeppe, I also agree. I cannot tell backticks from specks of dust on my screen
[15:56] <rogpeppe> sinzui: merged
[15:56] <dimitern> rogpeppe, I can show you some CMD batch files I had to write like 10 years ago, which make bash look like a really sensible language
[15:56] <rogpeppe> the main reason it's better is that it nests correctly
[15:56]  * sinzui once scrubbed his screen for 10 minutes, then realised the script had a backtick in it
[15:56] <rogpeppe> dimitern: ha, .bat is utterly braindead
[15:56] <dimitern> but backticks are so much easier to write, and with proper vi syntax highlighting it's a breeze
[15:57] <dimitern> rogpeppe, .bat was the precursor to .cmd and the "extensions" winNT introduced
[15:57] <rogpeppe> dimitern: ah, i don't know about that
[15:58] <rogpeppe> dimitern: the basic syntax of sh is actually quite nice
[15:58] <natefinch> I only use bash scripts for putting together commands that I would actually type out on the commandline otherwise. If I need branching or looping, I use a real language. Same for batch scripts on windows
[15:58] <rogpeppe> dimitern: i'm still happy with the shell i wrote. i will probably port it to unix at some point.
[15:59] <rogpeppe> sinzui: you can go ahead with the release now if you like, BTW
[16:00] <dimitern> anyway
[16:00] <dimitern> i think i'll sign off for this year
[16:00] <sinzui> rogpeppe, I expect CI to see the new revs in a minute http://162.213.35.54:8080/
[16:00] <rogpeppe> dimitern: ok
[16:00] <dimitern> with the warm fuzzy feeling we did all we could :)
[16:01] <rogpeppe> dimitern: have a great christmas
[16:01] <sinzui> if not, I will force a build
[16:01] <rogpeppe> dimitern: and holidays afterwards
[16:01] <dimitern> thanks!
[16:01] <rogpeppe> dimitern: thanks for all your work
[16:01] <dimitern> have great holidays all of you guys!
[16:01] <dimitern> rogpeppe, cheers, thanks for all the reviews and help ;)
[16:02] <dimitern> (but remember, I'm coming BACK :)
[16:02] <rogpeppe> dimitern: np. hope i wasn't too harsh :-)
[16:02] <dimitern> rogpeppe, it's fine, not to worry :) i appreciate it
[16:03] <rogpeppe> natefinch: FYI here's something nice someone wrote about my shell: http://debu.gs/entries/inferno-part-1-shell
[16:10] <natefinch> rogpeppe: pretty cool
[16:12] <rogpeppe> natefinch: it had one or two warts, but it was actually amazingly powerful for its size
[16:14] <natefinch> rogpeppe: very cool. When did you write it?
[16:14] <sinzui> rogpeppe, natefinch 2173 is hated by CI http://162.213.35.54:8080/job/prepare-new-version/540/console
[16:15] <sinzui> was juju-update-bootstrap moved?
[16:15]  * sinzui can update packaging rules if so
[16:15] <rogpeppe> natefinch: the initial version was probably written around '97, but it was developed over a few years
[16:16] <rogpeppe> sinzui: yeah, juju-update-bootstrap was moved to juju-restore
[16:16] <rogpeppe> sinzui: sorry, i didn't realise it needed an explicit rule
[16:16] <rogpeppe> natefinch: the core shell is still only ~2700 lines though
[16:16] <natefinch> rogpeppe: that's not bad at all
[16:17] <sinzui> rogpeppe, packaging rules from ubuntu make it explicit. We just inherit it. This is good because we can tell jamespage to review our packaging changes when we release 1.18.0
[16:17] <rogpeppe> natefinch: it would probably be smaller in Go, although there are some things Go can't do that are kinda fundamental to it
[16:18] <rogpeppe> natefinch: and in Go you'd have to deal with all the hideous unix process group management
[16:19] <natefinch> yeah, but if it was written in go, you could call it gosh :)
[16:19] <natefinch> inferno doesn't even have sh in it... I don't know how it can be considered a valid shell ;)
[16:19] <rogpeppe> natefinch: :-)
[17:04] <sinzui> rogpeppe, natefinch: Do either of you have time to review the 1.17.1 inc branch? I want to land it the moment I release 1.17.0. https://codereview.appspot.com/44600044
[17:04] <natefinch> sinzui, sure
[17:05] <rogpeppe> sinzui: i'll look too
[17:06] <natefinch> sinzui, what's with the change in client.go?
[17:07] <sinzui> natefinch, lbox reported this when I proposed my number changes http://pastebin.ubuntu.com/6606702/
[17:08] <sinzui> natefinch, I am on trusty which provides golang 1.2, which I think is the reason for the message
[17:08] <rogpeppe> sinzui: i'd prefer to just delete one of those lines. i said that in the code review, but didn't check that it had been done.
[17:09] <rogpeppe> sinzui: but tbh i don't mind
[17:09] <rogpeppe> sinzui: we'll fix it in trunk
[17:29] <rogpeppe> natefinch, sinzui: trivial review of fix for the above? https://codereview.appspot.com/44620044
[17:34] <sinzui> rogpeppe, LGTM
[17:34] <rogpeppe> sinzui: thanks
[17:35] <sinzui> We are releasing r2173 as 1.17.0. CI loves it
[18:06] <sinzui> rogpeppe, do you mind landing your import errors fix before me? I can reconcile conflicts and then land mine
[18:07] <rogpeppe> sinzui: ha, i thought i'd approved it some time ago, but i'd only set the commit message
[18:08] <sinzui> that is also my secret technique for not accomplishing anything
[18:13] <nate_finch> rogpeppe: anything you want me to look into in the next couple days? I'll be working on EnsureMongoServer, but if that goes faster than expected, I might have some time.
[18:13]  * rogpeppe thinks
[18:16] <rogpeppe> nate_finch: API server address caching, perhaps?
[18:17] <rogpeppe> nate_finch: essentially you just need to call configstore.EnvironInfo.SetAPIEndpoint at the right time
[18:18] <nate_finch> rogpeppe hmm, ok
[18:19] <rogpeppe> nate_finch: you might also want to think about how we might do agent failover.
[18:19] <rogpeppe> natefinch: it's a little tricky and i'm not sure of the best approach there
[18:20] <natefinch> rogpeppe: yeah, I can see it being pretty tricky
[18:39] <rogpeppe> natefinch: i just did a little bit of mongo stats digging - if we deploy a service with one unit, the API server does 170 consecutive round trips to mongo
[18:45] <natefinch> rogpeppe: dang!  That's um... a lot
[18:46] <rogpeppe> natefinch: at least they're all round trips to localhost (in the non-HA case anyway)
[18:49] <natefinch> rogpeppe: right... not the end of the world, still, seems like a lot
[18:49] <rogpeppe> natefinch: indeed
[18:50] <rogpeppe> natefinch: takes about 30ms all told
[18:50] <rogpeppe> natefinch: (but that is on an otherwise empty database)
[18:51] <rogpeppe> natefinch: sorry, 40ms
[18:51] <natefinch> rogpeppe: any idea how it scales?
[18:51] <rogpeppe> natefinch: linearly i imagine
[18:51] <rogpeppe> natefinch: pretty much
[18:52] <rogpeppe> natefinch: our database is pretty tiny, even with a large env
[18:53] <natefinch> rogpeppe: yeah, I was assuming that.  We just don't have that much information to keep in there.  Honestly, barely even need a DB.
[18:53] <rogpeppe> natefinch: indeed. there would be no problem with holding the whole thing in memory if we wanted to
[18:54] <rogpeppe> natefinch: i occasionally think about redoing the whole of state with a much simpler in-memory representation and a raft protocol to keep state servers in sync
[18:55] <rogpeppe> natefinch: in time, that's definitely something we should consider - it would speed things up by a couple orders of magnitude i think
[18:55] <rogpeppe> natefinch: because the db operations could be juju-specific
[19:35] <sinzui> we may have a problem with 1.17.0
[19:36] <sinzui> rogpeppe, natefinch My own upgrades to 1.17.0 result in config errors in the services. I cannot destroy the environments. CI didn't see this.
[19:37] <rogpeppe> sinzui: darn
[19:37] <rogpeppe> sinzui: what errors?
[19:38] <rogpeppe> sinzui: i've really got to go though
[19:38] <sinzui> rogpeppe, I wont keep you then. I have not released the packages yet. The tools are there and I have 1.17.0 installed using them
[19:38] <natefinch> rogpeppe: I can work with him on it.  No sense staying late on the friday before vacation week
[19:39] <natefinch> (any later than it already is over there)
[19:40] <rogpeppe> natefinch: ok
[19:40] <sinzui> natefinch, rogpeppe : http://pastebin.ubuntu.com/6607413/
[19:41] <rogpeppe> sinzui: those aren't errors AFAICS
[19:41] <rogpeppe> sinzui: it's just warning you about deprecated fields in your configuration
[19:41] <sinzui> the agent did upgrade, but each service shows config-changed hook failed
[19:41] <sinzui> This is on aws, hp, azure, and canonistack
[19:42] <rogpeppe> sinzui: that's probably unrelated to what you pasted
[19:42] <rogpeppe> sinzui: the paste is about environment config
[19:42] <rogpeppe> sinzui: a hook is about service config
[19:42] <sinzui> rogpeppe, that is all that all-machines.log reports for each
[19:43] <rogpeppe> sinzui: could you paste the entire all-machines.log ?
[19:43] <rogpeppe> sinzui: or actually
[19:43] <rogpeppe> sinzui: just the log for a single machine with a failed hook would be better
[19:43] <rogpeppe> sinzui: the uniter log in particular
[19:44] <sinzui> this is the all-machines.log. I will get the unit log http://pastebin.ubuntu.com/6607428/
[19:45] <sinzui> natefinch, rogpeppe, this is the machine-log http://pastebin.ubuntu.com/6607431/
[19:46] <rogpeppe> sinzui: the machine log won't tell us anything about failed hooks
[19:46] <natefinch> rogpeppe: required environment variable not set for credentials attribute: User
[19:47] <rogpeppe> natefinch: where's that from?
[19:47] <natefinch> rogpeppe: the machine log
[19:47] <natefinch> http://pastebin.ubuntu.com/6607431/
[19:47] <rogpeppe> natefinch: oh yes, i see it
[19:48] <natefinch> seems like there's an environment variable for the openstack user that isn't set
[19:48] <rogpeppe> natefinch: i wonder how it worked before
[19:48] <rogpeppe> natefinch: the agents should not rely on env vars
[19:48] <sinzui> sorry rogpeppe, natefinch I typed too fast and went to the wrong machine. This is better http://pastebin.ubuntu.com/6607451/
[19:49] <natefinch> rogpeppe: maybe a red herring, then
[19:49] <sinzui> natefinch, this was an upgrade from 1.16.5 to 1.17.0. I am tempted to take the azure stack down and do a simple deploy with 1.17.0
[19:50] <rogpeppe> hmm, the error seems to be that config-get couldn't find the charm: "cs:precise/mysql-31"
[19:50] <natefinch> rogpeppe: yeah, was looking at that
[19:53] <sinzui> 1.16.5 can destroy the envs. A small plus.
[19:54] <rogpeppe> oh frick, i know the problem
[19:54] <rogpeppe> sinzui, natefinch: it's related to dimiter's work on charm uploads
[19:54] <rogpeppe> i *think* the problem is on this line:
[19:54] <rogpeppe> 	err := st.charms.Find(D{{"_id", curl}, {"pendingupload", false}}).One(cdoc)
[19:55] <rogpeppe> in State.Charm
[19:56] <sinzui> I am going to test a simple 1.17.0 deploy of the same thing to aws. If it goes well, I will update the release notes to state upgrades from stable to 1.17.0 are not supported
[19:57] <natefinch> rogpeppe: why is that wrong?
[20:06] <rogpeppe> natefinch: sorry, had a call
[20:07] <rogpeppe> because i think that won't match charm docs with no pendingupload field (the old environment won't have that field)
[20:07] <rogpeppe> natefinch: because false!=nil
[20:07] <natefinch> ahh
[20:07] <rogpeppe> natefinch: i think it should be an easy fix
[20:08] <natefinch> rogpeppe, yeah
[20:08] <rogpeppe> natefinch: could you take it forward please - i'm done
[20:09] <rogpeppe> natefinch: it'd first be worth verifying if that is actually the problem
[20:09] <rogpeppe> g'night all.
[20:09] <rogpeppe> sinzui: hope you manage the release...
[20:09] <sinzui> rogpeppe, It still looks good
[20:11] <sinzui> natefinch, simple deploys look very good with the new data. Since we don't officially support stable to dev upgrades, I am going to add a disclaimer to not upgrade; it is not supported
[20:23] <rogpeppe> sinzui: sgtm
[20:25] <natefinch> sinzui: sounds good.  We should still fix it, though, and like Roger said, it's not hard
[20:26] <sinzui> natefinch, +1 for a fix. It can go out in 1.17.1
[20:27] <natefinch> sinzui:  I can get it done for Monday morning if that works?  I'm short on time today.
[20:27] <sinzui> natefinch, I think I can manage that.
[20:27] <sinzui> Most of my time is still spent waiting for builders and azure to do something
[20:33] <rogpeppe> sinzui: BTW this problem will occur upgrading from 1.17.0 to 1.17.1 too; i don't know if that's a problem
[20:33] <sinzui> rogpeppe, noted
[20:35] <natefinch> sinzui, rogpeppe: I gotta run, but I'll submit a fix, it's very easy
[20:35] <sinzui> natefinch, thank you.
[20:35] <natefinch> er, I'll submit a fix this weekend / early monday morning