[00:37] <wallyworld> thumper: can i pick your brain? let me know if you have time for a hangout
[00:47] <thumper> how about in 15 min?
[00:47]  * thumper is just eating
[00:53] <wallyworld> sure
[00:53]  * wallyworld reboots
[01:00] <thumper> wallyworld_: https://plus.google.com/hangouts/_/gyff6xf73v7pnxrzkv4lm3t7fqa?hl=en
[01:01] <LinStatSDR> Party is over ;(
[01:29] <thumper> do we have a good tool for navigating the history of the git branch?
[01:31] <thumper> wallyworld_: here is a suspect comment: // Write non-empty contents to the file, otherwise delete it
[01:31] <thumper> so the question becomes, why is the info value empty
[01:31] <wallyworld_> thumper: almost finished standup, will look soon
[01:32] <thumper> hmm... it seems like it should be in the agent config
[01:49] <wallyworld_> sinzui: you around?
[01:50] <sinzui> I am
[01:50] <wallyworld_> sinzui: thumper has questions about bug 1392745
[01:50] <mup> Bug #1392745: juju run doesn't after upgrade to 1.20.11 <canonical-bootstack> <regression> <run> <upgrade-juju> <juju-core:Triaged> <juju-core 1.21:Triaged> <https://launchpad.net/bugs/1392745>
[01:50] <wallyworld_> we know 1.19 was broken
[01:50] <thumper> sinzui: oh hai
[01:50] <wallyworld_> so comment #1 we can put down to that
[01:51] <thumper> also...
[01:51] <thumper> the bug targets 1.22 and 1.21-beta3 but the bug talks about 1.20.11
[01:51] <sinzui> yep, CI was 1.18 and 1.20
[01:51] <thumper> sinzui: first question, what is machine 6?
[01:51] <thumper> sinzui: is it a state server or not?
[01:52] <thumper> sinzui: are you running in HA or not?
[01:52] <sinzui> thumper, CI itself... http://juju-ci.vapour.ws:8080
[01:52] <sinzui> thumper, no HA
[01:52] <thumper> sinzui: is machine 6 a state server?
[01:52] <thumper> or is it just a normal machine?
[01:53] <thumper> how many machines are in the environment?
[01:53] <thumper> sinzui: can I get access to the state server machine?
[01:53] <sinzui> thumper, no, not a state-server, that is 0, 6 really is juju-ci's jenkins master.
[01:53] <sinzui> sure
[01:54] <thumper> sinzui: isn't there some magic script to import someones launchpad ssh keys to a machine?
[01:54] <sinzui> thumper, https://drive.google.com/a/canonical.com/?ddrp=1#search/juju-ci3.jenv will show the jenv
[01:55] <sinzui> thumper, sure I can do that if you just want access to the machine, the staging key in cloud-city also gives you that power
[01:55]  * sinzui places keys
[01:57] <thumper> sinzui: I could just use the juju command to add my ssh key
[01:58] <axw> sinzui: has there been any change to the github-merge-juju job in the last day? it just failed three times in a row with intermittent test errors... at least two mongo related
[01:59] <sinzui> axw, I don't think so. mgz didn't mention he made changes
[01:59] <axw> okey dokey
[01:59] <thumper> wallyworld_: when adding an ssh key, do I specify the path or the actual key?
[01:59] <thumper> wallyworld_: help isn't clear
[02:00] <wallyworld_> um
[02:00] <wallyworld_> let me look to refresh memory
[02:01] <wallyworld_> the key itself
[02:01] <wallyworld_> or you can import from lp
[02:01] <wallyworld_> juju authorised-keys import lp:thumper
[02:02] <thumper> nm, there already
[02:03] <thumper> haha
[02:03] <thumper> ha
[02:03] <thumper> oops
[02:03]  * thumper does some digging
[02:03] <thumper> this could be 'orrible
[02:04] <sinzui> thumper, I think I put your keys on manually because the above command exited with error 1
[02:04] <thumper> sinzui: I also put them in through the environment
[02:04] <thumper> :)
[02:05]  * thumper facepalms
[02:07] <thumper> sinzui: well, I can fix the problem but it doesn't make it clear why it happened in the first place...
[02:07] <thumper> although I do have some ideas
[02:07]  * thumper continues digging
[02:08] <thumper> sinzui: do you want me to fix the environment?
[02:08] <sinzui> thumper, just helping us fix it will let us get on with our lives
[02:08] <thumper> kk
[02:28] <thumper> sinzui: my attempted fix didn't work :-(
[02:29] <sinzui> :(
[02:29]  * thumper tries something else
[02:32] <ericsnow> davecheney: thanks for taking a look at http://reviews.vapour.ws/r/485/
[02:33] <ericsnow> davecheney: if you like I can roll the rest of removing-backups-dependency-in-state into the patch
[02:42] <thumper> wallyworld__: managed to get into the inside of the mongo database, it says "system-identity" is ""
[02:42] <wallyworld__> \o/
[02:42] <wallyworld__> that explains why the file is being removed
[02:42] <wallyworld__> but not why system identity in db is ""
[02:43] <thumper> right
[02:43] <thumper> wallyworld__: did you want to talk this through?
[02:43]  * thumper needs a teddybear
[02:44] <wallyworld__> thumper: i'm just about to get a pie out of the oven, give me 5 mins to eat it
[02:44] <thumper> kk
[02:44] <wallyworld__> then i'll spoon you
[02:45] <jw4> wallyworld__, thumper I'm guessing that means something different in that hemisphere?
[02:45]  * jw4 is from South Africa though and spooning meant the same there
[02:45] <thumper> jw4: no... I think he just used the wrong word
[02:45]  * thumper hopes
[02:45] <jw4> lol
[02:52] <wallyworld__> thumper: am in our 1:1 waiting for a cuddle
[02:52] <thumper> jw4: he means "huddle"
[02:52]  * jw4 blushes and tiptoes away
[03:06] <menn0> LOL
[03:06] <menn0> i just looked at the channel after having not looked for a while and oh dear...
[03:13] <jw4> menn0: I feel safer now that theres someone else here
[03:13] <menn0> :)
[03:33] <wallyworld__> menn0: well, he is very cuddly, you should try it sometime
[03:33] <anastasiamac> wallyworld__,thumper: "huddly"?..
[03:38] <jw4> anastasiamac: :-D
[03:44] <anastasiamac> jw4: m relieved this channel is PG-rated ;-)
[04:00] <jw4> anastasiamac: hehe - I'm sure thumper is squirming right now
[04:00] <thumper> nah
[04:01] <thumper> calm but annoyed by minions running around the house
[04:01] <jw4> hehe
[04:01] <jw4> for some reason my minions have been CRAZY the last few days
[04:03] <anastasiamac> thumper: u'll need to teach me someday how u manage to "be calm but annoyed" :-)
[04:04] <thumper> anastasiamac: outer calm
[04:05] <thumper> inner rage
[04:05] <thumper> deep breathing
[04:07] <thumper> wallyworld__: so... about this system identity problem...
[04:07] <thumper> wallyworld__: are you looking after it?
[04:07] <wallyworld__> no, i was hoping you would :-)
[04:08] <wallyworld__> i was delegating :-D
[04:08] <wallyworld__> as i'm stuck on the other one
[04:08] <wallyworld__> sharing the love, john and nate's teams have already fixed stuff for beta3
[04:09] <anastasiamac> thumper: yep, I know the feeling. Just got off the phone with the school re: discipline ;-(
[04:09] <thumper> :-(
[04:09] <anastasiamac> thumper: it's less than 2 weeks left!
[04:09] <anastasiamac> :-(
[04:10] <thumper> wallyworld__: ok, I'll take it, and look in the morning
[04:10] <wallyworld__> \o/ ty
[04:11] <thumper> wallyworld__: I expect it will be an upgrade step specifically for 1.20.12
[04:11] <thumper> wallyworld__: and put back in for 1.21 too
[04:11] <wallyworld__> yeah, cause 1.21 and on run all the other steps
[04:12] <thumper> also, was only going to put the upgrade step in the 1.20 branch for 1.20.12
[04:12] <thumper> and in the normal 1.21 steps for the 1.21 branch and trunk
[04:12] <thumper> wallyworld__: sound reasonable?
[04:12] <wallyworld__> yep, do it as a 120 step, right
[04:12] <thumper> um... no
[04:12] <thumper> well
[04:13] <wallyworld__> if done as a 120 step, 1.21 and later will run it
[04:13] <thumper> no it won't
[04:13] <thumper> not if on 1.20 already
[04:13] <thumper> you may have upgraded, and just be broken
[04:13] <wallyworld__> so to unbreak, the step needs to reinsert the file into the db
[04:14] <wallyworld__> shit, i guess it becomes an agent startup step
[04:14] <wallyworld__> like ensureMongo()
[04:14] <thumper> it needs to create a new system identity file and put it in the authorized keys and update the db
[04:15] <thumper> wallyworld__: no, just an upgrade step
[04:16] <wallyworld__> maybe i'm missing something - if the install is 1.20.11 and broken, having new 1.20.12 won't rerun the 1.20 upgrade steps will it?
[04:16] <thumper> no, but I can make an upgrade step for 1.20.12
[04:16] <wallyworld__> oh, i thought we only did 1.20
[04:16] <wallyworld__> not 1.20.x
[04:16] <thumper> so far we have…
[04:16] <wallyworld__> cool
[04:16] <thumper> but that doesn't mean we are limited to it
[04:17] <wallyworld__> rightio
[04:59] <menn0> waigani: here's the NewEnvironment change
[04:59] <menn0> http://reviews.vapour.ws/r/489/diff/
[05:00] <menn0> waigani: this is what's required to easily allow multiple test environments to be set up
[06:55] <dimitern> morning all
[07:01] <jw4> morning dimitern :)
[07:02] <dimitern> hey jw4 :)
[08:35] <TheMue> morning
[08:35] <TheMue> jw4: still awake?
[08:36] <dimitern> morning TheMue
[08:36] <jw4> Yeah, but off to bed in minutes
[08:36] <jw4> TheMue: :)
[08:36] <TheMue> jw4: good, already wondered
[08:36] <TheMue> dimitern: o/
[08:50] <mattyw> morning all
[10:04] <jam> wallyworld__: any more bugs to tackle today ?
[10:06] <wallyworld__> jam: just eating, be with you soon
[10:07] <jam> wallyworld__: np, I looked at the milestones and everything at least looks assigned
[10:11] <wallyworld__> jam: so bug 1392745, the system identity was moved into the state database from the file system, presumably for HA, but there was no upgrade step written. tim is onto that one.
[10:11] <mup> Bug #1392745: juju run doesn't after upgrade to 1.20.11 <canonical-bootstack> <regression> <run> <upgrade-juju> <juju-core:Triaged by thumper> <juju-core 1.21:Triaged by thumper> <https://launchpad.net/bugs/1392745>
[10:11] <wallyworld__> and then there's bug 1392602
[10:11] <mup> Bug #1392602: local provider agent fails to restart on reboot of host - log dir missing <local-provider> <lxc> <regression> <juju-core:In Progress by wallyworld> <juju-core 1.21:In Progress by wallyworld> <https://launchpad.net/bugs/1392602>
[10:11] <wallyworld__> i can't see what the root cause is
[10:11] <wallyworld__> i can reproduce it, but not find the cause to fix it
[10:12] <wallyworld__> i have manually stopped the juju agents, and rsyslog, removed the update conf files, everything is good
[10:12] <wallyworld__> ie the log dir is still there
[10:12] <wallyworld__> then a reboot results in the whole log dir being deleted
[10:12] <wallyworld__> with juju seemingly not involved because it had been stopped and the conf files moved away to avoid upstart starting the agents
[10:13] <wallyworld__> a manual stop and start of the agents (but no reboot) does not reproduce the issue
[10:13] <wallyworld__> it's only a reboot
[10:13] <wallyworld__> i have nfi right now
[10:16] <wallyworld__> is there some diagnostic process that can be run to see what process is removing a dir?
[11:22] <davecheney> wallyworld: fuser
[11:23] <wallyworld> i'm trying auditd but can't get the rules to stick
[11:26] <wallyworld> fuser shows nothing is referencing the file after bootstrap
[11:50] <voidspace> dimitern: should ReleaseIPAddress take instance id and net id - or just net id (plus address of course)?
[11:50] <voidspace> dimitern: I think only net id is sufficient
[11:51] <dimitern> voidspace, you mean Environ.ReleaseAddress?
[11:53] <dimitern> voidspace, I think, for consistency, it can take the same args as AllocateAddress (in maas we won't use instId and netId, but for ec2 we will right?)
[11:55] <voidspace> dimitern: for maas we'll use netid I think
[11:56] <voidspace> dimitern: for ec2 we might *need* instance id
[11:56] <voidspace> dimitern: so yeah, probably taking the same is best
[11:56] <voidspace> dimitern: ah no, for maas we just need ip address
[11:56] <dimitern> voidspace, +1
[11:57] <voidspace> cool
[11:57] <bodie_> morning folks :)
[12:30] <perrito666> hey, anyone knows what the Total attribute of utils.AttemptStrategy is for?
[12:40] <bodie_> Total Warfare
[12:40] <bodie_> sorry....  hehe
[12:43] <perrito666> oh I get how it works.. mm
[13:26] <rogpeppe> perrito666: it bounds the total length of time of all attempts
[13:27] <rogpeppe> anyone know how local charm revisions are meant to work?
[13:27] <rogpeppe> dimitern: ^
[13:27] <perrito666> rogpeppe: yes, I read the code because it was not all that clear. It is odd that there is no clear way to ask for a given number of attempts instead; the only way seems to be to set a short time and min retries
[13:27] <rogpeppe> perrito666: just specify min retries, but don't specify Total
[13:28] <dimitern> rogpeppe, yes, so the revision in the revision file is respected only if there's no duplicate charm with the same revision in state (it's bumped then until unique)
[13:28] <bodie_> am I the only one getting breakage on master again?
[13:29] <bodie_> I rolled back to go1.2.2 thinking it would make things work better, cleaned my $GOPATH/pkg, ran godeps, and now everything is broken. lol
[13:29] <bodie_> throw in a few go get -u -v ./... for good measure
[13:29] <bodie_> ... with godeps at the end, of course
[13:29] <rogpeppe> dimitern: so if there's a local charm with no revision file and there's an existing charm in the environment, but no existing charm with revision 0, what should happen?
[13:30] <rogpeppe> bodie_: what breakage are you seeing?
[13:30] <rogpeppe> dimitern: i'm seeing what i think is broken behaviour, but perhaps it's deliberate
[13:31] <bodie_> rogpeppe, looks like the same newline breakage we were getting last night when we updated the yaml.v1 version
[13:31] <dimitern> rogpeppe, if there's no rev file and no local charm with the same url, revision is set to 1
[13:31] <bodie_> jw4 and I went over that, but it looks like the update changed some of the behaviors expected by many tests in master
[13:32] <bodie_> rogpeppe, so, I wasn't able to get the new dep in, perhaps somebody poked it into the deps without checking the tests, or possibly I'm just doing something silly :P
[13:32] <rogpeppe> dimitern: ok, so that will upload to revision 1 in the environment? so won't actually upgrade the charm?
[13:32] <rogpeppe> bodie_: remind me what the issue was again?
[13:32] <dimitern> rogpeppe, hmm.. wait a sec - how can you upgrade a charm not yet deployed (i.e. missing from the env and state)?
[13:32] <bodie_> rogpeppe, we needed to update yaml.v1 version in order for the new charm content to pass tests
[13:33] <rogpeppe> dimitern: it's already deployed
[13:33] <bodie_> rogpeppe, but when updating yaml.v1, many tests in master broke with slight changes to newline behavior
[13:33] <rogpeppe> dimitern: but the local charm doesn't have a revision file
[13:33] <bodie_> rogpeppe, so we didn't push the change since it was already past EOD for us
[13:33] <rogpeppe> bodie_: example test output?
[13:34] <dimitern> rogpeppe, iirc, the revision file is only used if present, and only if the revision is unique in state; if it's missing it's assumed rev 1 (and if rev 1 exists, rev 2 and so on)
[13:34] <bodie_> rogpeppe, can't promise that's what I'm seeing right now, jw4 was the one who was seeing the test breakage, as I wasn't able to roll back to 1.2.2 due to some network issues and was seeing different breakage
[13:34] <bodie_> but, it looks like this, as far as I can tell --
[13:35] <rogpeppe> dimitern: the behaviour i'm seeing is that i run "juju upgrade-charm myservice" and it says "Added charm "local:trusty/myservice-41" to the environment." (that's later than any other revision) but the service isn't actually seeing an upgrade
[13:35] <bodie_> rogpeppe, http://paste.ubuntu.com/9072227/
[13:35] <rogpeppe> bodie_: ok. so those are actually broken tests, i'd say
[13:35] <dimitern> rogpeppe, you mean looking at the uniter logs you don't see the upgrade happening?
[13:36] <rogpeppe> dimitern: yup, and the charm source didn't change
[13:36] <rogpeppe> dimitern: (when i looked in /var/lib/juju/agents/unit-myservice-0/charm)
[13:36] <dimitern> rogpeppe, weird
[13:36] <bodie_> rogpeppe, this is my master with latest godeps, so I'm hoping it's an issue with my rolling back to 1.2.2, but yes
[13:37] <dimitern> rogpeppe, sounds like a bug to me
[13:37] <rogpeppe> bodie_: so if we don't change those tests, we won't be able to use the latest version of goyaml
[13:37] <rogpeppe> dimitern: me too :)
[13:37] <rogpeppe> bodie_: but the plus side is that it should be relatively easy to fix them
[13:37] <bodie_> yes, rogpeppe, this is on my master with current godeps, however
[13:38] <bodie_> unless we upgraded the yaml version in master already
[13:38] <rogpeppe> bodie_: i'll see if i can replicate
[13:39] <bodie_> hoping it's just a failure with my packages
[13:50] <rogpeppe> bodie_: this is such awesome test output http://paste.ubuntu.com/9072392/
[13:50] <rogpeppe> bodie_: now where's the difference again? :)
[13:51] <bodie_> rogpeppe: what, isn't it obvious to you?
[13:51] <bodie_> hmmm...
[13:51] <rogpeppe> bodie_: if i update yaml deps to current juju-core version, all tests in that package pass for me
[13:52] <bodie_> hmm... okay, good to know
[13:55] <rogpeppe> bodie_: ok, here's the diff: http://paste.ubuntu.com/9072508/
[13:57] <bodie_> rogpeppe, for?  deps.tsv?
[13:57] <bodie_> I don't know if I'm supposed to know what I'm looking at here, heh
[13:57] <rogpeppe> bodie_: the odd thing is why the obtained value doesn't seem to be formatted with the same goyaml
[13:57] <rogpeppe> bodie_: that's the diff between expected and obtained output from that test output i pasted above
[13:58] <rogpeppe> bodie_: ah! i have a suspicion
[13:59] <rogpeppe> bodie_: hmm, unfounded
[14:24] <rogpeppe> bodie_: ah, i've found the problem, and i'm not sure of the best way of fixing it
[14:25] <rogpeppe> bodie_: in windows_userdata_test.go, it has an expected literal agent configuration
[14:25] <rogpeppe> bodie_: that seems bogus to me
[14:27] <rogpeppe> bodie_: i think that test should be using assertScriptMatch
[14:28] <rogpeppe> bodie_: then it could just leave out the problematic places, i think
[14:37] <bodie_> rogpeppe, I apologize, family woke
[14:37] <rogpeppe> bodie_: np
[14:42] <rogpeppe> bodie_: ok, i've got that package passing its tests
[14:43] <bodie_> rogpeppe, excellent
[14:52] <rogpeppe> bodie_: i've pushed the branch to github.com/rogpeppe/juju under the 019-fix-environs-cloudinit-for-new-yaml  branch
[14:52] <rogpeppe> bodie_: if you could move forward with that, that would be great, as i've already spent too much time on it
[14:53] <rogpeppe> bodie_: ideally that enormous string in windows_userdata_test.go would be honed down to stuff we actually care about.
[14:56] <bodie_> rogpeppe, sure thing.  I need the charm working to push forward on my work anyway, so this will have to come first either way :)
[14:57] <bodie_> but, we are pretty slammed with trying to push actions forward
[14:57] <bodie_> I don't imagine this will be a big timesink though
[14:59] <perrito666> natefinch: wwitzel3 ericsnow brt, restarting a few things to see if hangouts works better
[15:01] <rogpeppe> bodie_: sorry about that. it is paying off tech debt, at least, though
[15:19] <perrito666> natefinch: ericsnow wwitzel3 could you hear any of that?
[15:19] <natefinch> perrito666: when are you going to upgrade to that 5800 baud modem? :)
[15:19] <natefinch> perrito666: nope
[15:19] <perrito666> natefinch: I am saving to buy the serial cable
[15:19] <natefinch> lol
[15:20] <perrito666> well, apparently chrome decided to take my 4 cores
[15:20] <perrito666> on the good side, almost no ram
[15:20] <perrito666> except for the 84G virt memory
[15:20] <perrito666> that is
[15:21] <perrito666> I have no clue what is going on there but I fear I will have to switch to firefox if this keeps going on
[15:21] <perrito666> natefinch: btw, I did not see your rant
[15:21] <natefinch> perrito666: I'm still writing it.  Almost done.
[15:21] <perrito666> natefinch: rant early, rant often
[15:25] <perrito666> well that explains it... it is a freaking virtual machine https://en.wikipedia.org/wiki/Google_Native_Client
[15:28] <voidspace> dimitern: ping
[15:28] <voidspace> dimitern: why do some providers use errors.NotSupportedf for not implemented methods
[15:28] <voidspace> dimitern: whereas others use NotImplementedf
[15:30] <dimitern> voidspace, well supposedly because some *can* support something (once implemented) while others don't even support that feature
[15:30] <voidspace> dimitern: ok
[15:30] <voidspace> dimitern: makes sense
[15:30] <voidspace> if true in actual usage patterns...
[15:31] <dimitern> voidspace, it's not a strictly followed policy :)
[15:32] <natefinch> perrito666: rant sent
[15:34] <voidspace> dimitern: looks like they're used in local and manual
[15:34] <mgz> anyone remember how I turn off the 10m timelimit for tests?
[15:34] <voidspace> dimitern: is it true that local and manual will not support AllocateAddress and ReleaseAddress?
[15:34] <natefinch> mgz: -timeout
[15:35] <natefinch> mgz: also go help testflag
[15:35] <dimitern> voidspace, possibly not, at least not in the near future I think
[15:35] <mgz> natefinch: ta
[15:35] <voidspace> dimitern: cool
[15:37] <voidspace> dimitern: for the dummy environ, do I just write an "OpReleaseAddress" to the estate.ops ?
[15:38] <voidspace> dimitern: or do we need to mimic the behaviour of the real release and undo the previous allocate?
[15:39] <voidspace> natefinch: I had to do that very thing (report a bug) and had exactly the same problem as you...
[15:39] <voidspace> natefinch: I just picked "unity" as the package name in the end
[15:40] <dimitern> voidspace, let's have a OpReleaseAddress - that way we can test it gets called
[15:40] <natefinch> voidspace: heh... on my second try, I picked "X.org", just because it was a UI related thing, but then it went to a screen asking what particular thing crashed or whatever, and none of that was right either, so I finally said screw it and wrote that rant instead.
[15:40] <voidspace> dimitern: sure
[15:41] <voidspace> natefinch: hah
[15:41] <dimitern> voidspace, and the implementation should allow us to test it, without expecting too smart behavior of the dummy provider :)
[15:41] <perrito666> natefinch: I think it's lightdm
[15:41] <perrito666> although I have no clue what the package for it is called
[15:41] <voidspace> natefinch: mine was a bluetooth/touchpad/multi-touch gesture issue
[15:41] <voidspace> fuck-knows where the actual bug is with that
[15:41] <voidspace> I shouldn't need to know what the f'ing package name is to report a bug!!</shouting>
[15:42] <voidspace> a three-fingered (or four fingered) gesture causes an event storm and the magic trackpad stops responding
[15:42] <perrito666> natefinch: really all it took is a few years of fighting with dms and dpkg -l | grep -i light
[15:42] <perrito666> :p
[15:51] <cmars> hi natefinch, could I trouble you to look at a small PR I've got, http://reviews.vapour.ws/r/488/. fixes a go 1.3 map ordering issue bodie_ and I ran into yesterday
[15:52] <mgz> cmars: interestingly, I couldn't make it fail of gccgo
[15:53] <cmars> mgz, that is interesting... does gccgo implement the same nondeterministic map ordering?
[15:53] <mgz> it's different non-determinism
[15:54] <natefinch> cmars: why do we care what order the port ranges are returned in?
[15:54] <bodie_> "non-deterministic"
[15:54] <cmars> natefinch, so that we can more easily test them
[15:54] <bodie_> natefinch, go 1.3 breaks map ordering on purpose I think, so that people won't rely on map ordering in tests for example (like us)
[15:55] <bodie_> there is no doubt also another good reason
[15:55] <mgz> natefinch: in my opinion, api returns should be deterministic
[15:55] <sinzui> natefinch, I would like your thoughts on this bug. I hope to defer it to another milestone or show that it is a local setup issue. https://bugs.launchpad.net/juju-core/+bug/1392602
[15:55] <mup> Bug #1392602: local provider agent fails to restart on reboot of host - log dir missing <local-provider> <lxc> <regression> <juju-core:In Progress by wallyworld> <juju-core 1.21:In Progress by wallyworld> <https://launchpad.net/bugs/1392602>
[15:55] <mgz> if we're providing a public method to retrieve some stuff, it's better if you get the same stuff back when nothing has changed, rather than a random shuffle
[15:56] <bodie_> rogpeppe, I love your branch naming! lol
[15:56] <cmars> natefinch, that apiserver/firewaller test fails intermittently on go 1.3. easy to reproduce if you repeat it in a loop. the test deep-equals a slice, whose order was coming from map iteration.
[15:56] <bodie_> rogpeppe, I am constantly trying to convince people to use expressive branch names :/
[15:57] <rogpeppe> bodie_: :)
[15:58] <rogpeppe> bodie_: in the old juju-core, my branch numbers had reached 562
[15:59] <hallyn> anyone have time for a go question?
[15:59] <hallyn> (^ wording Kapil suggested :)
[16:01] <bodie_> ask away, #go-nuts is also a great place if people are too busy here :)
[16:01] <hallyn> I've got a https client in go, talking to an https server in go.  On first connect, I'd like the client to grab the server's certificate from the connection and present the fingerprint to the user for verification.  In the server code I can grab the certificate from the http.Request.TLS, but that .TLS doesn't exist on the client end.
[16:01] <hallyn> I guess I should probably join there
[16:02] <hallyn> But so I'm wondering how I can grab the cert.  I.e maybe override the Dial command to intercept it, but I still don't have access to my own structs then to store the result
[16:03] <hallyn> ooh, can i get it from tr.clientsessionstate?
[16:04] <hallyn> from config.ClientSessionCache that is
[16:07] <natefinch> mgz, cmars: I guess it makes sense to return the same data each time, but I kinda hate guaranteeing an order, because then people will depend on it, and if we change the sorting mechanism, it'll break people.
[16:09] <mgz> natefinch: I think when we're returning a slice of stuff, having it in a random platform/compiler dependent order is perverse
[16:09] <voidspace> dimitern: ping
[16:09] <voidspace> dimitern: http://reviews.vapour.ws/r/491/
[16:10] <voidspace> dimitern: I'm pinging you specifically because I'm not happy that the test for the dummy provider implementation of ReleaseAddress is basically a duplicate of the one for AllocateAddress
[16:10] <voidspace> dimitern: all that changes is the op type, and in the absence of generics (or reusing the OpAllocateAddress which would be bad) I can't see an obvious way round it
[16:10] <voidspace> dimitern: suggestions welcomed!
[16:11] <natefinch> mgz: I don't ever assume lists are sorted unless it is very specifically called out as such.  That being said, if we decide we want this to be a sorted list, that's fine.... but we need tests around it so we don't break it later.
[16:15] <mgz> xwwt: https://launchpad.net/juju-core/+milestone/1.21-beta3
[16:18] <sinzui> natefinch, I really do need an opinion on bug https://bugs.launchpad.net/juju-core/+bug/1392602 it blocks the release
[16:18] <mup> Bug #1392602: local provider agent fails to restart on reboot of host - log dir missing <local-provider> <lxc> <regression> <juju-core:In Progress by wallyworld> <juju-core 1.21:In Progress by wallyworld> <https://launchpad.net/bugs/1392602>
[16:18] <natefinch> mgz, cmars:  I'd like to bring this up on the mailing list, because we should probably be consistent with any API that returns a list of things - either explicitly sorted or not sorted.
[16:19] <voidspace> natefinch: if you fancy a review
[16:19] <voidspace> natefinch: http://reviews.vapour.ws/r/491/
[16:19] <voidspace> natefinch: note what I said to dimitern above
[16:19] <voidspace> natefinch:  I'm not happy that the test for the dummy provider implementation of ReleaseAddress is basically a duplicate of the one for AllocateAddress
[16:19] <voidspace> natefinch:  all that changes is the op type, and in the absence of generics (or reusing the OpAllocateAddress which would be bad) I can't see an obvious way round it
[16:19] <voidspace> natefinch: so suggestions welcomed...
[16:21] <voidspace> In other news, it seems Nokia is still a thing...
[16:21] <dimitern> voidspace, sorry, was afk for a while, will have a look now
[16:21] <voidspace> dimitern: np, thanks
[16:21] <natefinch> voidspace: nearly duplicate tests do not bother me
[16:21] <voidspace> natefinch: heh, fair enough
[16:21] <natefinch> voidspace: merging two tests that just happen to be very similar *does* bother me
[16:21] <voidspace> natefinch: it just seems like there ought to be an elegant solution
[16:22] <voidspace> natefinch: they're *identical* other than type
[16:23] <natefinch> voidspace: it's really just a coincidence.  There's nothing inherently "the same" about allocation or deallocation.  They happen to be the same here, but that's not a contractual thing we want to ensure.
[16:23] <voidspace> natefinch: in terms of implementation in the dummy provider it's more than coincidence
[16:23] <voidspace> natefinch: for both of them it's "write a typed op into the log"
[16:24] <voidspace> natefinch: it's not even a coincidence that they have exactly the same parameters...
[16:24] <voidspace> natefinch: it's very deliberate
[16:24] <voidspace> natefinch: I'm not fighting you on saying it's alright for the tests to be duplicates though
[16:24] <voidspace> natefinch: and agreed we may not want to bake in the fact that the parameters are the same
[16:24] <voidspace> natefinch: although that would be trivially easy to change of course
[16:25] <natefinch> the fact that we have to test the dummy provider at all is appalling to me.
[16:25] <natefinch> we're testing our mock... awesome
[16:25] <voidspace> natefinch: there's code there
[16:25] <voidspace> it needs to be tested
[16:25] <natefinch> that's the problem
[16:25] <voidspace> you should certainly test your test infrastructure
[16:25] <natefinch> anyway
[16:25] <voidspace> not your tests themselves of course
[16:26] <voidspace> natefinch: I would hope the mock servers in gomaasapi and goamz are tested
[16:26] <voidspace> natefinch: but they're "merely mocks"...
[16:26] <natefinch> I would hope you could test the code without a mock server :)
[16:26] <natefinch> but, yes, for exporting a mock server for others to test against, that's valid, I suppose
[16:27] <voidspace> and dummy provider is essentially that, no?
[16:27] <natefinch> I wouldn't wish testing against the dummy provider on my worst enemy, however
[16:27] <voidspace> although "others" is us...
[16:27] <voidspace> hah
[16:27] <voidspace> ok
[16:28] <natefinch> sinzui: thanks for pinging me, I'll get someone on it
[16:39] <dimitern> voidspace, LGTM
[16:39] <voidspace> dimitern: cool, thanks
[16:39] <voidspace> dimitern: maas implementation done - just testing it
[16:39] <voidspace> not really an achievement - it's about ten lines of code or so...
[16:40] <dimitern> voidspace, sweet! :)
[16:40] <mgz> can someone on utopic either confirm bug 1392602 exists or doesn't?
[16:40] <mup> Bug #1392602: local provider agent fails to restart on reboot of host - log dir missing <local-provider> <lxc> <regression> <juju-core:In Progress by wallyworld> <juju-core 1.21:In Progress by wallyworld> <https://launchpad.net/bugs/1392602>
[16:40] <mgz> we need that to see if we have to block beta3 on it
[16:41] <mgz> dimitern: what version of ubuntu are you on atm?
[16:42] <dimitern> mgz, trusty
[16:43] <dimitern> 14.04.1
[16:43] <voidspace> mgz: I thought wallyworld_ confirmed that this morning
[16:43] <voidspace> mgz: he said he could reproduce and was working on it IIRC
[16:43] <voidspace> mgz: he just had no idea why the log dir would disappear last I heard
[16:44] <voidspace> mgz: it happened even with jujud stopped and the upstart config blown away to prevent it restarting
[16:44] <voidspace> mgz: so it didn't look like it was *juju* that was deleting the log dir...
[16:44] <dimitern> cmars, anastasiamac, hey, thanks for taking care of that port ranges bug with http://reviews.vapour.ws/r/488/ - sorting LGTM
[16:44] <voidspace> mgz: ah, you're asking about utopic specifically
[16:44] <voidspace> mgz: I missed that, sorry :-)
[16:46] <mgz> voidspace: he can, but sinzui cannot - we want a third opinion :)
[16:47] <voidspace> mgz: hah :-)
[16:47] <mgz> basically, to see if the bug should be bumped or not
[16:55] <voidspace> is gc.ErrorMatches *really* a regex match?
[16:57] <voidspace> hmmm... it really is
[16:57] <natefinch> voidspace: yes....
[16:57] <voidspace> but .* doesn't seem to be doing its job
[16:57] <dimitern> voidspace, but with some added nastiness
[16:57] <voidspace> http://pastebin.ubuntu.com/9074743/
[16:57] <natefinch> it's a substring match
[16:58] <voidspace> is it
[16:58] <natefinch> yep, you can just do "failed to release ip address 192.168.2.1"
[16:58] <voidspace> it puts ^ and $ around the regexp
[16:58] <voidspace> natefinch: nope, that failed
[16:58] <voidspace> which is why I added .*
[16:58] <natefinch> really? dang, sorry, I think it did
[16:59] <dimitern> voidspace, if it's a multiline string you're trying to match, add (.|\n)* at the beginning and at the end of the regexp
[16:59] <voidspace> dimitern: yuck
[16:59] <voidspace> dimitern: probably - it's an annotated error I'm trying to check
[16:59] <dimitern> voidspace, yeah, that's one of the ways to get around the auto-appending/prepending of ^ and $
[17:00] <voidspace> dimitern: that worked...
[17:00] <voidspace> dimitern: thanks
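The behaviour discussed above can be demonstrated in plain Go. The `matches` helper below is an illustrative stand-in for what the chat says gc.ErrorMatches does (auto-prepending `^` and appending `$`), not gocheck's actual code; it shows why `.*` fails on a multiline (annotated) error while `(.|\n)*` works:

```go
package main

import (
	"fmt"
	"regexp"
)

// matches imitates the behaviour described above: the pattern gets ^ and $
// added around it, so it must match the whole message, not a substring.
// (Illustrative stand-in only, not gocheck's implementation.)
func matches(pattern, msg string) bool {
	return regexp.MustCompile("^(" + pattern + ")$").MatchString(msg)
}

func main() {
	// A hypothetical annotated (multiline) error message.
	err := "failed to release ip address 192.168.2.1\nsome annotation"

	// ".*" does not help: "." never matches "\n" in RE2 without the s flag,
	// so no single-line span can reach from ^ to $.
	fmt.Println(matches(".*failed to release.*", err)) // false

	// "(.|\n)*" at both ends crosses newlines despite the anchors.
	fmt.Println(matches(`(.|\n)*failed to release(.|\n)*`, err)) // true
}
```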
[17:02] <voidspace> dimitern: see what you think of the test here
[17:02] <voidspace> http://reviews.vapour.ws/r/492/
[17:02] <voidspace> dimitern: hmm... I could get rid of the mocking indirection, as the test server is sufficient for testing this
[17:02] <dimitern> voidspace, there are a few places in the codebase this is used
[17:02] <dimitern> voidspace, ok, looking
[17:05] <dimitern> voidspace, reviewed
[17:06] <voidspace> dimitern: thanks
[17:06] <dimitern> voidspace, np
[17:06]  * dimitern needs to stop now 
[17:07] <voidspace> dimitern: see you tomorrow
[17:07] <ericsnow> natefinch: would you mind taking a look at http://reviews.vapour.ws/r/484
[17:08] <ericsnow> natefinch: it should be a quick review
[17:13] <natefinch> ericsnow: done
[17:13] <ericsnow> natefinch: thanks
[17:50] <ericsnow> fwereade: does it make sense to worry about bulk calls for the backups API?
[18:00] <natefinch> wwitzel3: did you look at this bug at all?  Can't remember if you were looking at it or not: https://bugs.launchpad.net/juju-core/+bug/1392602
[18:00] <mup> Bug #1392602: local provider agent fails to restart on reboot of host - log dir missing <local-provider> <lxc> <regression> <juju-core:In Progress by wallyworld> <juju-core 1.21:In Progress by wallyworld> <https://launchpad.net/bugs/1392602>
[18:10] <fwereade> ericsnow, I would say that in general yes it does, I can absolutely see myself wanting to create backups of all my envs in one shot
[18:10] <ericsnow> fwereade: ah, got it
[18:15] <sinzui> natefinch, wwitzel3 voidspace : I really want to know if this issue affects utopic users. I don't see this affecting trusty users and I don't think this is a juju-core bug https://bugs.launchpad.net/juju-core/+bug/1392602
[18:15] <mup> Bug #1392602: local provider agent fails to restart on reboot of host - log dir missing <local-provider> <lxc> <regression> <juju-core:In Progress by wallyworld> <juju-core 1.21:In Progress by wallyworld> <https://launchpad.net/bugs/1392602>
[18:16] <sinzui> natefinch, wwitzel3 voidspace : If we can prove the bug is not in juju-core, I want to release beta3 today
[18:19] <voidspace> I don't have utopic I'm afraid
[18:20] <voidspace> :-(
[18:20] <natefinch> gah, guess I can upgrade
[18:20] <natefinch> 8////
[18:20] <voidspace> g'night all
[18:20] <voidspace> natefinch: good luck!
[18:20] <natefinch> thanks
[18:23] <ericsnow> natefinch: when you have a minute, could you take a look at http://reviews.vapour.ws/r/493/?
[18:23] <ericsnow> natefinch: effectively I take state/backups.go and move it to state/backups/storage.go (plus deal with various consequences)
[18:29] <natefinch> ericsnow: sorry, gotta try to work on the release blocker...
[18:30] <natefinch> ericsnow: unless you want to upgrade to utopic while I look at your review... ;)
[18:31] <ericsnow> natefinch: no worries
[18:35] <sinzui> natefinch, I am not asking anyone to update to utopic. I don't want to ruin someone's desktop. I am hoping someone on the engineering team is on utopic
[18:36] <natefinch> sinzui: too late, I've pulled the trigger...
[18:36] <sinzui> natefinch, ouch
[18:36] <natefinch> if I don't come back, tell my family I love them
[18:36] <sinzui> natefinch, I am staying on trusty since it is where juju really needs to run
[18:55] <ericsnow> perrito666: sorry, I'm going to call it a day
[18:55] <perrito666> ericsnow: no worries
[18:55] <perrito666> are you ok?
[18:56] <ericsnow> perrito666: I've got a sudden head cold that's kicking my butt
[18:56] <perrito666> well go rest then
[18:59] <natefinch>   
[18:59] <perrito666> natefinch: wise words
[18:59] <natefinch> well, couple notifications about chrome crash-
[18:59] <natefinch> sorry 1 year old on my la
[18:59] <natefinch> lap
[19:00] <natefinch> anyway, ubuntu thinks chrome crashed a couple times, but my browser windows look ok
[19:00] <natefinch> otherwise, no problems so far in the first 2 minutes
[19:00] <natefinch> 5
[19:00] <perrito666> natefinch: same happens to me
[19:00] <perrito666> I would guess auxiliary chrome apps
[19:00] <perrito666> if you ps you will find chrome runs a load of processes
[19:00] <natefinch> and this is why I like having an off switch on my keyboard
[19:01] <perrito666> one of them a virtual machine
[19:01] <natefinch> yeah
[19:01] <perrito666> I had to go to firefox even though I don't like it at all
[19:02] <perrito666> I feel kind of dirty running xul apps
[19:04] <natefinch> reminds me of this: https://www.youtube.com/watch?v=lg7MAacSPNM&t=15
[19:05] <perrito666> lool
[19:05] <perrito666> havent seen that one in a while
[19:06] <natefinch> me either
[19:08] <perrito666> (a while <1 year)
[19:25] <natefinch> gah, dependencies.tsv is wrong on the 1.21 branch
[19:26] <mgz> where's our logging bot gone for this channel?
[19:26] <natefinch> actually, wait, now
[19:26] <natefinch> no
[19:26] <perrito666> mgz: dunno, where is chanserv?
[19:27] <mgz> nvm, maybe it is here after all...
[19:32] <natefinch> sinzui: couldn't repro that bug on my machine using 1.21 on utopic
[19:32] <sinzui> natefinch, thank you very much. I am going to defer that bug and release beta3
[19:38] <thumper> morning
[19:38] <rick_h_> party thumper
[19:39] <thumper> sinzui: hey, looked more into the juju-run issue yesterday
[19:39] <thumper> sinzui: we *think* we know what the problem is
[19:39] <thumper> sinzui: but not a quick fix, needs a patch, backport and new version
[19:39] <thumper> hi rick_h_
[19:40] <sinzui> thumper, thank you! I removed the bug from the beta3 milestone since we are certain the regression is older
[19:40] <thumper> sinzui: yeah, I think it came about from 1.18 -> 1.20 upgrade
[19:40] <thumper> I'll write an update in the bug and I agreed with wallyworld_ to take it
[19:41]  * thumper checks sprint agenda for more surprise hangouts
[19:41] <rick_h_> thumper: but why ruin the surprise?
[19:41] <thumper> rick_h_: I was trying to rest lying in bed when the hangout request came through from mramm2
[19:42] <thumper> rick_h_: when I answered and heard sabdfl in the background, I was like "oh bugger, better get up"
[19:42] <rick_h_> thumper: yea, I got home from bringing the boy home from school and saw an email "we're in this hangout"
[19:43] <rick_h_> "hmm, wonder what hangout that is...click link...guess I'll see if anyone's still there. Oh crap, that's the boss in front of a whiteboard!"
[19:43] <thumper> heh... yeah
[19:55] <sinzui> natefinch, mgz, do you have a minute to review https://github.com/juju/juju/pull/1183
[19:57] <mgz> on it
[20:22] <katco> fwereade: are you around by any chance?
[20:23] <katco> wallyworld_: what about you?
[20:36] <menn0> thumper: when you're free can you have a look at http://reviews.vapour.ws/r/489/ pls?
[20:36] <thumper> sure
[20:37] <waigani> menn0: I'm looking now too
[20:37] <menn0> waigani: thanks. that'll make 4 reviewers :)
[20:39] <waigani> menn0: so this is moving business logic out of initialize into NewEnvironment and then calling NewEnvironment from Initialize
[20:43] <menn0> waigani: that was the original plan but it didn't quite work out that way.
[20:44] <menn0> waigani: Initialize and NewEnvironment do share the code to create and initialize an environment though
[20:44] <waigani> so you've got envSetupOps now
[20:45] <waigani> menn0: ^ why couldn't that go in NewEnvironment ?
[20:46] <wallyworld_> katco: hiya
[20:48] <menn0> because Initialize needs to take those ops and add more
[20:48] <katco> wallyworld_: did you see my pms?
[20:48] <menn0> they should happen in one trxn
[20:48] <menn0> txn
[20:48] <menn0> waigani: ^
[20:49] <menn0> waigani: when the state server env is created, the initial user needs to be created and various docs about the state servers need to be created
[20:49] <menn0> waigani: these ops aren't required when setting up further envs
[20:51] <waigani> right
[21:02] <mgz> can I bother someone about an lxc container start issue/local provider file storage? see bug 1393932
[21:02] <mup> Bug #1393932: 'container failed to start' with local provider <juju-core:New> <https://launchpad.net/bugs/1393932>
[21:18] <natefinch> anyone know why I might have to source my .profile every time I start a terminal to get my prompt stuff set up?
[21:21] <perrito666> natefinch: you broke your conf most likely :p
[21:21] <natefinch> can't tell you how much I hate the linux profile configuration stuff
[21:28] <wallyworld_> mgz: it could be that an old container template is being used as tools are no longer in provider storage and the template has the bind mount entry for that still
[21:28] <wallyworld_> i'd delete the lxc template container
[21:33] <mgz> wallyworld_: as in, the template in /var/lib/juju/containers ? we tried deleting the contents of that dir
[21:33] <mgz> or should we try wiping anything else?
[21:33] <wallyworld_> mgz: no, /var/lib/lxc/juju-trusty-template or something like that
[21:34] <mgz> is this something we expect to break for users who do an apt-get upgrade and get a new version of juju?
[21:34] <wallyworld_> it could do
[21:35] <mgz> yeay.
[21:35] <wallyworld_> this is the problem of template containers being stale
[21:35] <wallyworld_> we should do something about this issue though
[21:35] <mgz> we were talking yesterday about whether a big-bang client upgrade from 1.18 to 1.21 would work, if we want to sru back a modern juju
[21:36] <wallyworld_> if i'm correct, if there's containers already running, that will work, but if you destroy env and recreate, it will break
[21:36] <wallyworld_> but
[21:37] <wallyworld_> it needs more investigation
[21:37] <wallyworld_> what i say above is an educated guess about the root cause
[21:37] <waigani> ericsnow: http://reviews.vapour.ws/r/493/diff/ "There was an error displaying this diff"
[21:41] <jw4> is there a consensus on leaving liberal loggo Debugf statements in the code?
[21:44] <wallyworld_> sinzui: not sure what you want to do with bug 1393932 - a lxc template created in 1.20 has invalid config when using 1.21
[21:44] <mup> Bug #1393932: 'container failed to start' with local provider <juju-core:Triaged> <juju-core 1.21:Triaged> <https://launchpad.net/bugs/1393932>
[21:45] <natefinch> jw4: if you're going to be really liberal, trace is better.... debug still gets turned on with --debug, but if you want super verbose, one can manually turn on trace
[21:45] <jw4> natefinch: yeah; I went back and converted my most chatty ones to Tracef
[21:45] <jw4> natefinch: thanks
[21:47] <sinzui> wallyworld_, I have seen that report
[21:47] <wallyworld_> sinzui: i just sorta confirmed the root cause
[21:48] <sinzui> wallyworld_, beta3 is in the cpcs and will be on streams.canonical.com in 30 minutes.
[21:48] <wallyworld_> there's a now obsolete bind mount entry in the template config
[21:48] <wallyworld_> deleting the template container fixes
[21:48] <wallyworld_> because juju will recreate it
[21:49] <wallyworld_> maybe that's a workaround we add to the release notes
[21:49] <wallyworld_> maybe juju should put a version in the template config file
[21:50] <wallyworld_> and if a newer juju is incompatible, advise the user to delete the template
[21:50] <wallyworld_> or something
[21:51] <sinzui> wallyworld_, to rephrase users should lxc-destroy their juju-*-lxc-templates after upgrading?
[21:51] <mgz> a comment in release notes sounds good to me
[21:51] <sinzui> wallyworld_, is this just local, or are lxc containers on other substrates affected?
[21:51] <wallyworld_> sinzui: that's a big hammer approach, but yeah. the image tarball will be cached still in /var/cache/lxc so it's not too bad
[21:51] <wallyworld_> just local i think
[21:53] <sinzui> wallyworld_, I disagree. the templates are stale within days of creating them. I purge /var/cache/lxc and juju templates when I start each development effort to be sure my charms run with new images
[21:54] <sinzui> wallyworld_, weekly purges with apt update and upgrade off are fastest
[21:55] <wallyworld_> fair enough
[21:57] <sinzui> wallyworld_, we already have a section about new local defaults in the 1.20.0 release notes. I will update it with some advice about best practice.
[21:57] <sinzui> wallyworld_, so would you like to declare that bug as documentation, not a defect?
[21:57] <wallyworld_> sinzui: it's only in 1.21 that this particular issue occurs
[21:58] <sinzui> wallyworld_, I might release 1.21.0 on Friday :)
[21:58] <wallyworld_> it's because we no longer use provider storage for tools
[21:58] <wallyworld_> sinzui: i personally am +1 with doco, but that's IMHO
[21:58] <sinzui> wallyworld_, could you review https://github.com/juju/juju/pull/1183
[21:58] <wallyworld_> there's an argument we could add an upgrade step
[21:59] <wallyworld_> done
[21:59] <wallyworld_> i hope we can release on friday too
[22:01] <sinzui> wallyworld_, since the 1.21.x user is not getting upgrades, the template needs to be recreated regularly, or we explain how to upgrade a template.
[22:01] <sinzui> wallyworld_, deletion is fastest.
[22:01] <sinzui> wallyworld_, in fact, my own preference for an upgrade step is to remove stale templates
[22:02] <wallyworld_> how do we define stale?
[22:02] <thumper> menn0: I think I see a problem with the new env branch
[22:02] <thumper> menn0: plz don't land it yet
[22:02] <thumper> menn0: just about to talk to fwereade
[22:02] <thumper> menn0: I'll chat with you after
[22:03] <menn0> thumper: no problems. i wasn't going to land it without you having a look.
[22:05] <sinzui> wallyworld_, the packages are more than a fortnight stale update settings set 2 weeks as the longest interval.
[22:05] <thumper> menn0: ah...no actually, I think it may be ok, still reading...
[22:05] <sinzui> wallyworld_, sorry, that makes no sense
[22:05] <menn0> thumper: ok good :)
[22:07] <sinzui> wallyworld_, We have dealt with ppc64el bugs caused by users who went out of their way to disable updates. juju requires current images, and now that updates and upgrades are off by default for local-provider, users need to learn to manage their images
[22:07] <wallyworld_> sinzui: updates should no longer be off by default
[22:07] <wallyworld_> only upgrades
[22:08] <sinzui> wallyworld_, maybe you want to review and revise https://docs.google.com/a/canonical.com/document/d/1SDhb5UKpsPL4jwnOhgHMJyNZb0Vj01LHY0mDgWMNU1w/edit
[22:08] <sinzui> the docs, based on previous release notes, say they are off by default
[22:09] <wallyworld_> sinzui: they were off by default at one point, but kapil asked for that to be changed so updates were on, i'll check the state of it all
[22:11] <wallyworld_> sinzui: i also have a plan brewing to hopefully better manage the template staleness issue moving forward
[22:11] <mgz> those notes are 16 pages...
[22:12] <sinzui> wallyworld_, a cunning plan?
[22:12] <wallyworld_> sinzui: yes, my lord
[22:12] <sinzui> mgz, the release was due 7 weeks ago...
[22:13] <sinzui> features kept landing
[22:13] <wallyworld_> sinzui: it is as cunning as a fox what used to be Professor of Cunning at Oxford University but has moved on and is now working for the U.N. at the High Commission of International Cunning Planning
[22:13] <sinzui> :))
[22:15] <sinzui> wallyworld_, I am sure your plan cannot possibly fail.
[22:15] <wallyworld_> sinzui: of course not, it is so cunning
[22:15] <menn0> thumper: re your comment about using StateServerEnvironment intead of StateServerInfo
[22:15] <menn0> thumper: I originally did it that way, but all I wanted was the UUID
[22:16] <menn0> thumper: so using StateServerInfo seemed like a more direct way to get it
[22:16] <menn0> thumper: StateServerEnvironment does a bunch of things I don't need at that point
[22:16] <menn0> thumper: what do you think? I don't mind changing it if you think it's better.
[22:17] <wallyworld_> sinzui: it involves a combination of changing the current EnsureCloneTemplate() method, storing lxc images in state blobstore, and redirecting lxc to download images via state server so we get to determine if we should continue to use a cached image or reach out and download a new one
[22:17] <wallyworld_> so we continue to cache for performance, but can make a decision to update images when needed
[22:19] <sinzui> wallyworld_, my that is cunning
[22:19] <wallyworld_> i hope it works, it should do
[22:19] <wallyworld_> i have a prototype
[22:19] <wallyworld_> where the redirection happens, but minus logic to look at staleness
[22:20] <thumper> menn0: I thought about that, and I think that it just makes it a little more easy to understand
[22:20] <thumper> menn0: comments could also help
[22:21] <thumper> menn0: alternatively make a function that returns just that
[22:21] <thumper> and use the new function
[22:22] <menn0> thumper: ok... i'll make this clearer one way or another
[22:44] <davecheney> menn0: if you can kill stateserverinfo, that would be great
[22:50] <menn0> davecheney: i'm not going to kill it in this PR, but I will probably avoid adding another use of it
[22:50] <thumper> menn0: can we have a chat about upgrade order
[22:50] <menn0> thumper: sure
[22:50] <thumper> menn0: just jump in the standup hangout
[23:11] <thumper> sinzui: are we merging the 1.21 branch into master periodically?
[23:11] <thumper> sinzui: and also want to confirm that we aren't merging the 1.20 branch into anything any more
[23:12] <thumper> this fix is horrible
[23:12] <thumper> but I know what to do now
[23:12] <mgz> thumper: not as a matter of course, we've been backporting changes from 1.22 to 1.21
[23:12] <mgz> we're not upmerging 1.20
[23:12] <thumper> mgz: ok, I can go with the backport option
[23:12] <thumper> will land on trunk first, then push into 1.21
[23:15] <mgz> I have proposed <https://github.com/juju/juju/pull/1184> for an edge case failure, it's possible a reviewboard thing will show up for it
[23:16] <sinzui> thumper, I don't think we are. jam has done it on occasion.
[23:16] <sinzui> thumper, I am sure we are not merging 1.20 branch into master. we are hoping to never commit to it again
[23:17] <thumper> sinzui: to fix juju-run we'll need 1.20.12
[23:19] <sinzui> thumper, really? we cannot count on everyone to upgrade to the latest in each minor. If 1.21.1 gets the fix, can't I upgrade my 1.20.11 to 1.21.1?
[23:19] <sinzui> thumper, many envs went from 1.18.1 to 1.20.5
[23:20] <thumper> sinzui: ok, what I mean is that if we want the 1.20 release to have a fixed juju-run, we'll need 1.20.12
[23:20] <thumper> sinzui: if we are fine with it just being fixed in 1.21, then that makes my job easier
[23:21] <thumper> sinzui: AFAICT, juju-run was broken for everyone with the 1.20 branch
[23:21] <thumper> sinzui: for everyone that upgraded to it
[23:21] <thumper> rather than bootstrapped it
[23:21] <sinzui> thumper, okay, yes, I agree. juju-ci will support 1.20.x for a while. we want to remove the dual support for old streams soon
[23:21] <thumper> sinzui: so... fix in 1.20 series or not?
[23:22] <sinzui> thumper, if it isn't patch hell, yes please.
[23:22] <thumper> ok
[23:22] <thumper> so what I need now is a 1.18 binary so I can bootstrap and test upgrades
[23:22]  * thumper pokes around
[23:23] <thumper> bugger
[23:23] <thumper> /usr/bin/juju is 1.20.11
[23:23]  * thumper goes to get files out of the package by hand
[23:23]  * thumper goes to make lunch first
[23:24] <sinzui> thumper, ci has a cache of what was published in trusty
[23:24]  * sinzui looks
[23:24] <thumper> sinzui: can we find the amd64 version for 1.18.x where x is the latest?
[23:24] <sinzui> yep, Lp will have it too
[23:24]  * sinzui looks
[23:26] <sinzui> thumper,  expand 1.18.4 and download the juju-core and juju-local package https://launchpad.net/ubuntu/+source/juju-core