[00:05] <katco> wallyworld_: https://github.com/juju/juju/pull/621
[00:05] <katco> ready for review
[00:05] <wallyworld_> looking
[00:09] <katco> wallyworld_: grabbing some supper, brb
[00:09] <wallyworld_> ok
[00:13] <katco> wallyworld_: back
[00:14] <wallyworld_> katco: i left a couple of droppings in your PR
[00:14] <wallyworld_> i think maybe the test coverage needs to be expanded a bit
[00:16] <katco> wallyworld_: gah i'm flipping back and forth b/t branches too much. the reason i didn't use the val in that map is b/c with the harvesting stuff, what's in the map will be a string and we want an int. here it's totally fine though. thanks :p
[00:16] <wallyworld_> :-)
[00:44] <katco> wallyworld_: ready again. running tests on my machine
[00:44] <wallyworld_> ok
[00:54] <katco> wallyworld_: http://golang.org/doc/effective_go.html#redeclaration
[00:56] <wallyworld_> katco: rightio. i thought that only applied to the first variable
[00:56] <wallyworld_> i HATE := vs = soooo much. worst design decision EVER
[00:56] <katco> wallyworld_: afaik, go doesn't do anything anywhere with regards to parameter ordering
[00:56] <katco> wallyworld_: haha
[00:57] <katco> wallyworld_: i don't mind it, but i do wish they would have standardized on new(...) vs :=
[00:57] <wallyworld_> := vs = is the cause of so many bugs
[00:57] <wallyworld_> and anyway, it's the fucking compiler's problem to sort out, not the programmer
[00:57] <katco> wallyworld_: really? i haven't experienced that directly yet
[00:57] <wallyworld_> we have in juju
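The class of bug being complained about here is `:=` silently shadowing an outer variable. A minimal sketch (hypothetical `doWork` helper, not code from juju):

```go
package main

import (
	"errors"
	"fmt"
)

// doWork is a hypothetical helper that always fails.
func doWork() (int, error) { return 0, errors.New("boom") }

// lostError shows the pitfall: the := inside the block declares a NEW
// err scoped to that block, shadowing the outer one, so the failure
// never reaches the outer err. It always returns true.
func lostError() bool {
	var err error
	if true {
		n, err := doWork() // := shadows the outer err
		_, _ = n, err
	}
	return err == nil // the outer err was never assigned
}

func main() {
	fmt.Println(lostError()) // prints "true": the error was silently dropped
}
```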
[00:58] <katco> wallyworld_: any more feedback? i think i got everything/pushed
[00:58] <wallyworld_> katco: yeah, just about to LGTM but you interrupted me :-P
[00:59] <katco> wallyworld_: sorry oh supreme leader of wallyworld! ;)
[00:59] <wallyworld_> have i told you today?
[00:59] <katco> told me what?
[00:59] <wallyworld_> fark off!
[00:59] <katco> LOL
[00:59] <katco> that'll be 2x i think today
[00:59] <wallyworld_> there, now you've been told
[00:59] <katco> haha
[01:00] <katco> ok back to the harvesting stuff
[01:00] <wallyworld_> thanks for fixing
[01:00] <katco> thanks for the review
[01:00] <katco> thumper: should be landing momentarily. sorry for the regression.
[01:00] <wallyworld_> don't apologise to him, he will expect it every time now
[01:01] <katco> i already told him i trusted him. i think i'm just off on the wrong foot with that guy
[01:01] <hazmat> axw, ping
[01:01] <hazmat> axw, was doing some research on azure earlier today and found some interesting info, wanted to run by you..
[01:01] <wallyworld_> he's just a pussy cat really, roll him over and rub his belly and he's all good
[01:02] <hazmat> axw, nutshell we don't need to associate the env to an affinity group anymore for the sole purpose of getting a vnet.. vnets can be associated to regions now
[01:03] <wallyworld_> hazmat: i have a theory why provisioning is failing, but the log files don't contain the error message i would expect to see, so it's a guess. apt contention installing container dependencies
[01:03] <wallyworld_> that explains the issue you raised, but doesn't explain the one where only one container out of several fails to start
[01:04] <hazmat> wallyworld_, do we normally have errors we don't log?
[01:04] <hazmat> wallyworld_, possibly.. i thought that was addressed already via retry?
[01:04] <wallyworld_> hazmat: yes we do, and i'm not seeing them which is confusing me
[01:04] <hazmat> wallyworld_, the issue is nothing else on that machine is installing anything
[01:04] <hazmat> wallyworld_, the one other unit on the machine.. is the ubuntu charm.. aka do nothing
[01:04] <hazmat> wallyworld_, its log is also in the tarball
[01:04] <hazmat> wallyworld_, so apt contention with what..
[01:05] <wallyworld_> hazmat: retry is only in 1.20.6
[01:05] <hazmat> wallyworld_, ah.. fair enough.. this is .5
[01:05] <hazmat> wallyworld_, but still curious as to what it would contend with
[01:05] <wallyworld_> hazmat: ok, i didn't see what the unit was. but apt contention is the only thing that i can see right now that explains why the logging cuts off at the point it does
[01:06] <wallyworld_> there may be another cause
[01:06] <hazmat> wallyworld_, why are all the container watchers being killed on all the machines at the same time
[01:06] <wallyworld_> but what's happening is that the code is calling the container setup, which calls apt, but then gets no further
[01:07] <wallyworld_> hazmat: they are killed because they are no longer needed - they exist to set up container support for the machine and then they die
[01:07] <wallyworld_> ie the apt stuff and set up to run lxc is done lazily
[01:07] <wallyworld_> once the first lxc is asked for
[01:07] <hazmat> wallyworld_, so it would be a container provisioner logic issue then
[01:08] <wallyworld_> maybe, i can't explain why things just stop
[01:08] <wallyworld_> there was an issue in 1.20.5 where the watcher was stopped twice
[01:08] <wallyworld_> but i don't think that will cause this issue
[01:08] <wallyworld_> i need to keep digging a bit
[01:08] <hazmat> wallyworld_, ack
[01:09] <wallyworld_> thanks for getting the logs
[01:09] <hazmat> np
[01:09] <wallyworld_> i'm keen to get 1.20.6 out there
[01:09] <wallyworld_> so we can see how it behaves
[01:09] <wallyworld_> lots of fixes in there
[01:09] <wallyworld_> a CI issue with azure is holding things up
[01:10] <hazmat> the --upload-tools issue on azure?
[01:10] <wallyworld_> azure was broken for CI, might be fixed now
[01:10] <wallyworld_> not sure, i just heard 2nd hand that the CI tests failed
[01:10] <wallyworld_> they passed yesterday or the day before
[01:10] <wallyworld_> but i haven't heard directly
[01:11] <wallyworld_> we'll be pushing for a release tomorrow regardless
[01:11] <wallyworld_> we have to get this 1.20.6 out and into the hands of landscape and other folks
[01:12] <hazmat> wallyworld_, so does that mean there's some coordination between container watcher and container provisioner?
[01:24] <axw> hazmat: I changed our vnet creation to use a "location" (region) a while back, but reverted it. not sure if it was a coincidence, but after that change there were a lot of problems with the vnet not being available
[01:25] <axw> it would take >5 mins for the vnet to be accessible after creation
[01:25] <hazmat> axw, interesting
[01:25] <axw> IIRC the warning message that popped up in the azure console said it was only a problem with vnets created without an affinity group
[01:26] <thumper> katco: awesome, ta
[01:26] <katco> thumper: np
[01:27] <axw> hazmat: did you see my PR for the docs on zones?
[01:27] <hazmat> axw, i did, looked good
[01:27] <axw> reading your doc now
[01:27] <wallyworld_> hazmat: yes, the container watcher starts the provisioner when a new container is requested
[01:30] <hazmat> ah ic now
[01:32] <hazmat> wallyworld_, but why would "container watcher killed" be seen before apt-get install lxc if the watcher is responsible for installing the pre-reqs
[01:35] <wallyworld_> hazmat: not sure. i think it asks the worker to die, but it won't do so until the current operation has finished ie the provisioner is started and then it exits
[01:35] <wallyworld_> i'm not 100% across the worker infrastructure
[01:35] <hazmat> wallyworld_, that's not apparent
[01:35] <hazmat> wallyworld_, ie the logs where it's successful show it die and then the provisioner come up
[01:37] <wallyworld_> hazmat: my understanding is that kill() marks the worker as dying, and it still needs the current loop invocation to finish, but i'm not sure
[01:40] <wallyworld_> i'll look at reworking it, adding more logging also
[01:49] <hazmat> ah ic.. it signals to stop itself before doing its actual work
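The Kill/Wait behaviour described above (kill only marks the worker dying; the loop notices between operations, so the operation in flight finishes) can be sketched roughly like this. This is a simplified stand-in, not juju's actual worker code:

```go
package main

import "fmt"

// worker is a minimal stand-in for a juju-style worker: Kill only
// marks it as dying, and the loop checks that flag between
// operations, so the operation currently in flight always completes.
type worker struct {
	dying chan struct{}
	done  chan struct{}
	ran   int
}

func newWorker() *worker {
	return &worker{dying: make(chan struct{}), done: make(chan struct{})}
}

func (w *worker) Kill() { close(w.dying) } // mark as dying; does not interrupt
func (w *worker) Wait() { <-w.done }       // block until the loop has exited

func (w *worker) loop(ops []func()) {
	defer close(w.done)
	for _, op := range ops {
		select {
		case <-w.dying:
			return // dying is only noticed between ops
		default:
		}
		op()
		w.ran++
	}
}

func main() {
	w := newWorker()
	ops := []func(){
		func() { w.Kill() }, // killed mid-operation...
		func() {},
	}
	go w.loop(ops)
	w.Wait()
	// ...but the first op still ran to completion before the loop stopped.
	fmt.Println(w.ran) // prints "1"
}
```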
[01:51] <davecheney> thumper: https://github.com/juju/juju/pull/614
[01:51] <davecheney> if you have a sec
[01:51]  * thumper looks
[01:51] <davecheney> this is the one from standup
[02:02] <axw> waigani: you can stop reviewing https://github.com/juju/juju/pull/547, it's redundant
[02:02] <axw> I already fixed the problem
[02:04] <waigani> axw: ah so I see, thanks I missed that
[02:10] <axw> hazmat: I don't have much to say on your doc, SGTM.
[02:10] <axw> it would be nice if we could use this to enable colocation of services in azure
[02:11] <axw> atm that's disallowed because we can't control which units communicate to which based on zone allocation
[02:11] <hazmat> axw, hmm
[02:12] <axw> (I still have no idea how it would work though)
[02:12] <hazmat> axw, zone/fault domain in azure is a logical concept that's specific to azure service and its role instances.
[02:13] <hazmat> axw, theoretically we could map to those when doing co-location, and pick the appropriate next instance (ie distribution group from co-located service's instances)
[02:13] <axw> you'd also have to make sure you don't spread the two units across fault domains though
[02:14] <axw> it's not enough to stick them in the same availability set
[02:14] <axw> and then there's upgrade domains
[02:14] <hazmat> axw, you do want to spread across fault domains.
[02:14] <hazmat> axw, we don't actually use upgrade domains afaics
[02:15] <axw> they are implicitly used
[02:15] <hazmat> axw, does azure use upgrade domains under the hood?
[02:15] <axw> when the machine is upgraded
[02:15] <axw> i.e. regular maintenance
[02:15] <hazmat> ic
[02:15] <axw> that's my understanding anyway
[02:15] <hazmat> my understanding was that it was tied to the app roll out of updates
[02:16] <hazmat> but yeah.. underlying upgrades also makes sense
[02:16] <axw> it is definitely tied to the app updates
[02:16] <axw> I thought both tho
[02:17] <axw> hazmat: re spreading across fault domains, I mean if you have a co-dependent app server & db, you surely don't want to spread them across fault domains
[02:17] <axw> but multiple units of each, yes
[02:18] <hazmat> axw, every ref i see to upgrade domain references app/deployment updates.. not iaas updates
[02:18] <hazmat> axw, multiple units of each.. and you'd want spread.. single unit of each.. does it matter ;-)
[02:19] <axw> hazmat: all I'm saying is the pairs need to be located in the same fault domain, otherwise you have a broken service if one goes down
[02:19] <hazmat> axw, single unit of each and we don't really have any real notion of trying to keep it up.. fault domains are not global
[02:19] <hazmat> their service local logical
[02:20] <hazmat> axw, ie. if they're co-located they're on the same vm.. so doesn't matter.. it's if they're separate services in azure
[02:20] <hazmat> there is no guarantee that 0 == 0 between two services fault domains
[02:20] <axw> right, they have to be in the same cloud service
[02:21] <axw> it's a bit messy, forget I said anything :)
[02:21] <axw> when they're in the same CS there's also issues of port collision
[02:21] <hazmat> axw, so we'd use them as separate roles ?
[02:22] <hazmat> within a service
[02:22] <hazmat> axw, yeah
[02:22] <hazmat> azure.. is special
[02:23] <axw> yes, separate roles. I was thinking we could deploy a service and specify the cloud service name
[02:24] <axw> (to be the same as an existing one)
[02:24] <hazmat> woah.. now you're talking crazy.. semantic service names in an iaas console ? ;-)
[02:24] <hazmat> i walk away for a few months to come back and remember how special it is.. i wrote up some code to verify the fault/upgrade domain thingy and its interaction with affinity groups. https://gist.github.com/kapilt/d326b853e4606f9203e9 i kind of wish we had a list-machines to do iaas provider specific details
[02:24] <hazmat> axw, oh.. nevermind not semantic
[02:25] <hazmat> axw, we currently do separate roles per instance as well.. there's some messiness trying to treat azure as general compute
[02:26] <axw> a (Virtual Machine) role is an instance
[02:26] <axw> there's some other roles that aren't applicable to IaaS
[02:26] <axw> web worker roles.. don't know much about them
[02:26] <hazmat> axw, so why do we/they have roles and role_instance_list separately
[02:26] <axw> nfi
[02:27] <axw> I think it's to do with deployments
[02:27] <axw> you can have prod/testing deployments
[02:27] <axw> and switch them at runtime
[02:27] <hazmat> yeah.. the slots and upgrades
[02:27] <hazmat> and rollbacks
[02:27] <axw> so you define a role and there's an instance for it in each deployment
[02:28] <hazmat> ah.. ic.. that makes a certain sense.. logical from instantiation across prod vs staging
[02:29] <axw> anyway, so what I was saying is we could do, say "juju deploy app --to cloudservice=mythingy" and "juju deploy db --to cloudservice=mythingy", then if you ensure each service has at least the same number of units as there are fault domains, the units can self-organise to talk to units in the same fault domain
[02:30] <axw> there's still the issue of port collisions, but there's not much we can do about that. only matters for exposed services anyway
[02:31] <davecheney> waigani: https://github.com/juju/juju/pull/622
[02:32] <hazmat> axw, they're on the same machine w/ co-location.. so the port collision thing is immaterial to the provider.
[02:32] <hazmat> axw, also matters for unexposed.. cause failure to bind
[02:32] <axw> hazmat: not same machine, just same cloud service
[02:32] <hazmat> axw, we don't control fault domain
[02:33] <waigani> davecheney: looking :D
[02:33] <axw> hazmat: no... but if there are 2 fault domains and we allocate 2 units, I think Azure will spread them equally?
[02:33] <hazmat> it will
[02:34] <hazmat> axw, this is where the spec comes into play.. the charms can choose to self-organize that way if they choose.. via relation-get query to remote unit matching zone
[02:34] <axw> hazmat: right, that was my point :)
[02:35] <axw> I'm saying with your proposal, this is feasible
[02:35] <hazmat> axw, aha.. finally i understand.. i should go to bed.. that took a while ;-)
[02:35] <waigani> davecheney: that's awesome.
[02:42] <waigani> davecheney: what's the -type d flag?
[02:43] <waigani> help just says: -type [bcdpflsD]
[02:43] <waigani> not very helpful...
[02:44] <davecheney> waigani: man find
[02:45] <davecheney> waigani: please review my comments to https://github.com/juju/juju/pull/617
[02:51] <davecheney> waigani: please review my comments to https://github.com/juju/juju/pull/613
[02:52] <thumper> waigani: did you want to update the envuser stuff now with the st.environTag, or as a followup?
[02:53] <waigani> thumper: followup? I've got the todos in there so should be easy/quick
[02:53] <thumper> kk
[02:53]  * thumper keeps reviewing
[02:58] <waigani> davecheney: I've got to do the school run,  I'll be back online in a bit
[03:14] <davecheney> kk
[03:46] <thumper> heh...
[03:46]  * thumper squeezed (╯°□°)╯︵ ┻━┻ into a unit test
[03:54]  * hazmat steps back from the unicode wizardry
[03:59] <bodie_> davecheney, thinking about your concern with the empty ActionTag as a signal of non-action hook
[04:00] <bodie_> davecheney, since I'm initializing the ActionTag with an empty value and only inserting a value (via api) if the hook was an Action, it seems to me like there would never be a case when it would not suffice as the switch
[04:01] <davecheney> bodie_: then you never need to check ?
[04:02] <bodie_> davecheney, well, the check is to consider whether it is an action (i.e. always has a tag value) or not, in which case the value will always be empty
[04:02] <davecheney> if it's empty then use something that can be nil
[04:03] <davecheney> otherwise you'll get fucked by the subtle difference between var a names.ActionTag, and a = names.NewActionTag("")
[04:03] <bodie_> I think the latter would only happen if the action didn't have an id, in which case we're fucked anyway
[04:04] <wallyworld_> axw: a small one https://github.com/juju/juju/pull/623 if you have a moment
[04:04] <bodie_> but, that error case should get caught by runHook
[04:04] <axw> looking
[04:04] <bodie_> i.e., the value is *always* going to either be empty = non-action, or non-empty = action, or already errored out when the id was mysteriously missing
[04:05] <davecheney> bodie_: i don't like using the zero value like that
[04:05] <bodie_> that is my feeling too
[04:05] <davecheney> please make it a pointer or use the names.Tag interface
[04:05] <bodie_> sounds like a plan
[04:06] <bodie_> :)
[04:06] <bodie_> thanks
[04:06] <davecheney> cool
[04:06] <davecheney> thanks
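The zero-value subtlety davecheney is warning about can be shown with a stand-in type (this `ActionTag` is an illustration, not the real `names.ActionTag`):

```go
package main

import "fmt"

// ActionTag is a stand-in for names.ActionTag.
type ActionTag struct{ id string }

// NewActionTag is a stand-in constructor.
func NewActionTag(id string) ActionTag { return ActionTag{id: id} }

func main() {
	// A zero-valued tag and a tag "constructed" with an empty id are
	// indistinguishable, so "empty struct means no action hook" is a
	// fragile sentinel:
	var a ActionTag
	b := NewActionTag("")
	fmt.Println(a == b) // prints "true": can't tell the two cases apart

	// A pointer (or the names.Tag interface) makes absence explicit:
	// nil means "not an action hook", no sentinel value needed.
	var p *ActionTag
	fmt.Println(p == nil) // prints "true"
}
```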
[04:08] <axw> wallyworld_: that AddInt32 test looks like crack anyway
[04:09] <axw> won't it always stop the last worker it added?
[04:09] <wallyworld_> it doesn't add workers
[04:09] <wallyworld_> it stops the container watcher once all supported container types have been initialised
[04:10] <wallyworld_> lazy init of containers
[04:10] <axw> ah, I see
[04:10] <wallyworld_> i'm not sure it was bad how it was, but it's more logical to have it in a defer i think
[04:19] <wallyworld_> axw: thanks. the defer is a hail mary. it *shouldn't* matter but the runner stuff is a bit mysterious. certainly early termination of the worker is one explanation for the logs i saw
[04:20] <thumper> waigani: if you can't use the factory, just use the state methods to create users
[04:20] <waigani> thumper: ok
[04:38] <waigani> thumper: https://github.com/juju/juju/pull/553
[04:40] <thumper> waigani: I'll look shortly, need to go make dinner
[04:40] <waigani> thumper: np, I'll have to do the same soon - at ice skating right now
[04:45] <axw> wallyworld_: can you please close https://github.com/juju/juju/pull/547?
[04:45] <wallyworld_> sure
[04:45] <axw> thanks
[04:56] <bodie_> davecheney, addressed your points.  any response to https://github.com/juju/juju/pull/617#discussion_r16817826 when you have a sec?
[05:42] <bodie_> davecheney, this code is the dep for a bunch of other stuff, so if I can get even a brief comment on that reply it would be really helpful to moving us forward
[05:42] <bodie_> otherwise I believe others may hesitate to jump in on that topic
[05:43] <bodie_> and since this is my 1:30 am, I don't have a lot of confidence I will get a chance to pester you again soon :)
[05:55] <wallyworld_> axw: something to ponder with the tools work, not 100% relevant now but good to keep in mind https://bugs.launchpad.net/juju-core/+bug/1347984
[05:55] <mup> Bug #1347984: container provisioner may choose bad tools <juju-core:Triaged> <https://launchpad.net/bugs/1347984>
[05:56] <axw> wallyworld_: thanks
[06:17] <axw> wallyworld_: you make a good point about "pending forever", but it's the same either way
[06:17] <axw> perhaps when we fix that we can put a sensible timeout in place?
[06:18] <wallyworld_> yeah, we do need to do something
[06:18] <wallyworld_> we have work scheduled to improve this area
[06:45] <TheMue> morning
[07:06] <dimitern> morning
[07:15] <TheMue> hmm, two tests running the whole story fail *checking*
[07:16] <TheMue> dimitern: btw, your latest change led to a minor but nice redesign by my side
[07:31] <dimitern> TheMue, oh yeah?
[07:34] <TheMue> dimitern: yeah, using an interface answering the questions RequiresSafeNetworker() has, instead of adding more and more arguments
[07:34] <dimitern> TheMue, cool!
[07:35] <TheMue> dimitern: and, you may believe it or not, machiner.Machine implements this interface too :D like my mock type for the tests
[07:35] <dimitern> TheMue, the IsManual thing?
[07:36] <TheMue> dimitern: and the Id of the machine, all now fetched in one versioned doc, and the params separated from the in-memory storage
[07:36] <TheMue> dimitern: John and I discussed about it these days
[07:36] <dimitern> TheMue, yep, it is better like this, isn't it?
[07:37] <TheMue> dimitern: yeah, I think so. params should simply be for transport. this also will make the implementation of versioning more simple
[07:37] <dimitern> TheMue, that's the intent, yeah
[07:38] <TheMue> dimitern: +1
[08:17] <mattyw> davecheney, morning - thanks for the review
[08:35] <TheMue> so, looks like I caught all failing tests due to the redesign. one final complete test and then PR :)
[09:25] <davecheney> mattyw: no worries
[10:11] <gsamfira> morning all
[10:17] <natefinch> wwitzel3, ericsnow, team meeting?
[10:43] <hazmat> do we have a stack trace dump signal handler on agents?
[10:44] <hazmat> wallyworld_, was thinking that might have helped container debug
[10:45] <wallyworld_> hazmat: no, would be nice though
[10:45] <natefinch> hazmat: the stack trace should get output to stderr on a panic and thus go into the log
[10:46] <natefinch> hazmat: or maybe you mean like give it a signal and it'll log the current stack trace?  We can do that easily
[10:46] <hazmat> natefinch, the latter
[10:47] <natefinch> but yeah, no, doesn't currently exist
[10:47] <hazmat> given a hung/spun .. with no log output. nothing happening on syscalls (per strace).. it would be nice to see what's broken
[10:49]  * hazmat files a bug
[10:51] <natefinch> hazmat: what's your preferred signal?
[10:53] <hazmat> natefinch, QUIT
[10:54] <hazmat> natefinch, https://bugs.launchpad.net/juju-core/+bug/1362546
[10:54] <mup> Bug #1362546: Need a way/signal handler to dump stack trace on agents <juju-core:New> <https://launchpad.net/bugs/1362546>
[10:56] <hazmat> jam, i think i totally misunderstood the context of your email yesterday
[10:56] <hazmat> re container density
[10:57] <jam> hazmat: well, some of it was just testing that we can genuinely get container addressibility, and some of it was trying to see what we could do with it for scale testing.
[10:57] <jam> natefinch: SIGQUIT is built into Go
[10:57] <jam> to trigger a panic()
[10:57] <jam> I've used it repeatedly
[10:58] <jam> hazmat: I'm pretty sure you already can
[10:58] <natefinch> jam: triggering a panic is different than just printing a stack trace though
[10:58] <natefinch> jam: but that's a good point
[11:02] <hazmat> jam, thanks x2
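The handler being discussed can be sketched as below. This is a hedged sketch, not juju's implementation: it listens on SIGUSR1 so the agent keeps running, since Go's built-in SIGQUIT handling dumps all goroutine stacks but then kills the process (which is why jam's point stands for one-shot debugging):

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"runtime"
	"syscall"
)

// dumpStacks returns the stacks of all goroutines.
func dumpStacks() string {
	buf := make([]byte, 1<<20)
	n := runtime.Stack(buf, true) // true = every goroutine, not just the caller's
	return string(buf[:n])
}

func main() {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGUSR1)
	go func() {
		for range ch {
			// Log the stacks and keep running, unlike the default
			// SIGQUIT behaviour which dumps stacks and exits.
			fmt.Fprintln(os.Stderr, dumpStacks())
		}
	}()
	// ... agent main loop would go here ...
}
```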
[12:02] <wallyworld_> axw: katco: finishing meeting, be there real soon
[12:09] <jam> wallyworld_: is aggregateSuite.TestMultipleResponseHandling one of your intermittant tests?
[12:09] <jam> because I just came across it
[12:09] <jam> and it assumes that "go foo(); go bar()" will call foo before bar
[12:09] <jam> which is *not* guaranteed.
[12:09] <wallyworld_> jam: no. i will add it. what's the jenkins link?
[12:10] <jam> wallyworld_: I just discovered it locally
[12:10] <jam> wallyworld_: I'll try to just fix it, since i'm doing some tests there
[12:10] <wallyworld_> jam: ok, thanks
[12:10] <jam> I happened to have the system change ordering, or I wouldn't have noticed.
[12:13] <jam> fortunately it is just a bug in the test, and not a more serious underlying issue
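The test bug jam describes, and the fix: `go foo(); go bar()` promises nothing about which goroutine runs first, so a test that needs an order must synchronize explicitly. A minimal sketch (hypothetical foo/bar, not the aggregateSuite code):

```go
package main

import "fmt"

// runOrdered starts two goroutines and forces "foo" to complete
// before "bar" starts, via a channel handshake. Without the handshake
// the scheduler is free to run them in either order.
func runOrdered() []string {
	order := make(chan string, 2)
	fooDone := make(chan struct{})
	go func() { order <- "foo"; close(fooDone) }() // foo
	go func() { <-fooDone; order <- "bar" }()      // bar waits for foo
	return []string{<-order, <-order}
}

func main() {
	// "go foo(); go bar()" alone guarantees nothing; the handshake does.
	fmt.Println(runOrdered()) // prints "[foo bar]"
}
```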
[12:48] <perrito666> good morning everybody
[12:48] <perrito666> natefinch: hey, did you get my email?
[13:29] <natefinch> perrito666: yep, got it.
[13:30] <perrito666> the cold medicine I took must be either made of unicorn powder or some illegal drug, these things work waaay too well
[13:31] <natefinch> perrito666: heh.... pseudoephedrine is good stuff
[13:31] <perrito666> heh, well that explains
[14:03] <wwitzel3> ericsnow, natefinch: standup time :)
[14:03]  * perrito666 notices that the only person actually standing up in those is wwitzel3 
[14:39] <mattyw> apparently landing is blocked - is anyone currently working on https://bugs.launchpad.net/juju-core/+bug/1362636 ?
[14:39] <mup> Bug #1362636: ppc64el compilation error <ci> <ppc64el> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1362636>
[14:41] <mgz_> mattyw: not that I know of
[14:46] <natefinch> mgz_, rogpeppe1, thumper, wallyworld_: do any of you know if we verify the SSL certificate of the state servers when agents connect to them?  I presume we do, but I don't actually know.
[14:46] <natefinch> dimitern, TheMue ^^
[14:46] <rogpeppe1> natefinch: we did originally, but at some point someone added InsecureSkipVerify i think.
[14:46] <rogpeppe1> natefinch: i hope that's been removed now.
[14:48] <mattyw> mgz_, curtis doesn't seem to be around - any idea how I can get started on looking into that?
[14:49] <rogpeppe1> natefinch: actually it does look as if we correctly verify the SSL cert of the state servers now
[14:49] <rogpeppe1> natefinch: look in state/api/apiclient.go
[14:53] <natefinch> rogpeppe1: blech
[14:54] <rogpeppe1> natefinch: what's the blech for?
[14:55] <natefinch> rogpeppe1: oh, sorry, misread what you said
[14:56] <natefinch> rogpeppe1: I can't really tell from the apiclient code if it's actually verifying the certs.  I see them being passed around, but I can't figure out where they're actually being checked.
[14:56] <mgz_> 's just done in the go stdlib, no?
[14:56] <rogpeppe1> natefinch: they're being checked by the websocket code
[14:56] <rogpeppe1> natefinch: and by the fact that we use a wss: address
[14:56] <rogpeppe1> natefinch: and we add a known root CA to the config
[14:57] <natefinch> ahh, ok
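The setup rogpeppe1 describes, trusting a known root CA and letting the TLS stack verify the server cert, boils down to a client `tls.Config` like this (a sketch, not juju's apiclient code; `tlsConfigWithCA` is a hypothetical helper):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
)

// tlsConfigWithCA builds a client tls.Config that verifies the server
// certificate against the given root CA. Verification is on by
// default; only setting InsecureSkipVerify would turn it off.
func tlsConfigWithCA(caPEM []byte) (*tls.Config, error) {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, errors.New("no usable CA certificates in PEM data")
	}
	return &tls.Config{RootCAs: pool}, nil
}

func main() {
	// With garbage PEM data we get an error, rather than a config
	// that silently trusts nothing (or worse, everything).
	_, err := tlsConfigWithCA([]byte("not a certificate"))
	fmt.Println(err != nil) // prints "true"
}
```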
[15:05] <natefinch> anyone know of a way to get gtalk inside gmail to make the desktop notification mail icon thingy turn blue?  Also, what is that thing called and how do I change its settings?  It doesn't seem to have any kind of menu on it.
[15:10] <perrito666> I dont think you can do that
[15:10] <perrito666> that is a part of unity iirc
[16:29] <mattyw> does anyone know how I could try to run a ppc build of core? I'm trying to take a look at https://bugs.launchpad.net/juju-core/+bug/1362636
[16:29] <mup> Bug #1362636: ppc64el compilation error <ci> <ppc64el> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1362636>
[16:35] <bodie_> my full test always times out on my pc
[16:35] <bodie_> is there some way to accelerate the tests, or increase the timeout?
[16:40] <mattyw> arosales, ping?
[16:40] <arosales> mattyw: hello
[18:42] <rogpeppe1> natefinch: just had a brief glance through lumberjack.go
[18:42] <rogpeppe1> natefinch: looks great in general
[18:43] <rogpeppe1> natefinch: a few minor suggestions:
[18:43] <rogpeppe1> natefinch: if you specified MaxAge as a time.Duration you wouldn't need the comment and your code would be simpler, and (i think) the API a little more obvious
[18:44] <rogpeppe1> natefinch: similarly, if you specified the max size in int64 bytes, you wouldn't need to mock megabytes.
[18:45] <rogpeppe1> natefinch: i think that rather than returning an error if a write is too big, you'd be best off just writing it anyway
[18:45] <natefinch> rogpeppe1: v1 used bytes, but then in config files you have like size = 100000000 which is illegible and error prone... and no one cares about anything smaller than a megabyte anyway.
[18:45] <rogpeppe1> natefinch: i don't see any particular reason why you sort the result of oldLogFiles
[18:45] <natefinch> rogpeppe1: I really appreciate the feedback btw.
[18:45] <natefinch> rogpeppe1: sorting the old logfiles may be a leftover from the v1 code.  I'll look at it again
[18:46] <natefinch> rogpeppe1: I thought it was so I could determine which were the N newest and keep those
[18:46] <rogpeppe1> natefinch: you just scan directly through the list. you *could* break, i suppose, but that would seem like severe premature optimisation...
[18:47] <natefinch> rogpeppe1: they're likely returned in last modified order, which if someone modifies an old log file might mean its last modified date is newer than the contents.
[18:48] <rogpeppe1> natefinch: i can't see how the order affects anything
[18:48] <rogpeppe1> natefinch: oh, i see
[18:48] <natefinch> rogpeppe1: maxbackups .... right
[18:48] <rogpeppe1> natefinch: yeah
[18:49] <rogpeppe1> natefinch: perhaps it would be better to put the sort just before the code that relies on it
[18:49] <rogpeppe1> natefinch: rather than sorting in oldLogFiles
[18:49] <natefinch> rogpeppe1: yeah that's probably more clear
[18:50] <rogpeppe1> natefinch: then it's more obvious why the slicing logic works
[18:50] <rogpeppe1> natefinch: trivial thing: i'd put the [:l.MaxBackups] before the [l.MaxBackups], just because it's slightly nicer to slice the start before the end
[18:53] <rogpeppe1> natefinch: i'm not entirely sure about the conflation of actual Logger and the serialisability of the logger config
[18:53] <rogpeppe1> natefinch: i *think* i'd be happier leaving all the serialisation stuff out, and leaving it for higher layers
[18:56] <natefinch> rogpeppe1: I could see splitting out the config from the logger object itself, so people won't try to do wacky stuff like change values on the fly...
[18:57] <rogpeppe1> natefinch: the thing that seems a little hooky to me is the "well we'll preguess yaml and json because we know about those formats" thing
[18:58] <natefinch> rogpeppe1: yeah, that's true
[18:59] <rogpeppe1> natefinch: i'd just leave the config as vanilla, i think, and if people outside the package want to massage it, they're free to
[18:59] <rogpeppe1> natefinch: and specify age as time.Duration and size as bytes.
[19:00] <rogpeppe1> natefinch: leaving it up to higher layers to decide about sensible formatting if need be (i'd like to see 4g, 32m, for example to specify sizes, but that's really out of the domain of lumberjack)
[19:00] <rogpeppe1> natefinch: great package name, BTW
[19:01] <rogpeppe1> natefinch: but i do see the other side of the coin
[19:01] <rogpeppe1> too
[19:01] <rogpeppe1> natefinch: it forces higher layers to know about all the lumberjack config details
[19:02] <natefinch> yeah... I struggled with that
[19:02] <rogpeppe1> natefinch: but then again, they probably will anyway - we'd probably use juju config attributes to specify some of this stuff
[19:03] <rogpeppe1> natefinch: i *think* i tend towards the "not this package's concern" p.o.v.
[19:03] <natefinch> rogpeppe1: yeah, easy deserialization definitely affected the API, that's why it's megabytes and days, not bytes and time.Duration
[19:04] <natefinch> rogpeppe1: I think you're right, that it shouldn't be this package's concern
[19:06] <natefinch> rogpeppe1: Thanks again for the review.  It's a big help having fresh eyes on it.
[19:08] <rogpeppe1> natefinch: np. it's a nice package, thanks.
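The API shape rogpeppe1 was arguing for, as a sketch (not lumberjack's actual exported struct, which at the time used megabytes and days precisely for config-file legibility):

```go
package main

import (
	"fmt"
	"time"
)

// Logger is a sketch of the suggested config: plain Go types for the
// limits, with human-friendly forms like "100M" or "7d" parsed by
// higher layers rather than by the logging package itself.
type Logger struct {
	Filename   string
	MaxSize    int64         // bytes, not megabytes
	MaxAge     time.Duration // instead of an int of days needing a comment
	MaxBackups int
}

func main() {
	l := Logger{
		Filename:   "/var/log/juju/machine-0.log",
		MaxSize:    100 << 20,          // 100 MiB
		MaxAge:     7 * 24 * time.Hour, // a week
		MaxBackups: 3,
	}
	fmt.Println(l.MaxSize, l.MaxAge) // prints "104857600 168h0m0s"
}
```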
[19:09] <katco> hey need a quick opinion: i'm looking to document the new harvest mode behavior, and also the update/upgrade settings. are those better in their own individual documents, or embedded in another file (architectural-overview.txt)?
[19:23] <katco> oops nevermind, just reviewed my notes. looks like juju/docs is the place to be
[19:37] <marcoceppi> what's the URL to download the zip from charmstore?
[19:38] <marcoceppi> having a hell of a time tracking it down in the code
[20:06] <arosales> mattyw around?
[20:19] <arosales> I think alexisb got the power machine worked out
[20:20] <arosales> re mattyw
[20:21] <alexisb> cmars, you have what you need with the power box?
[20:21] <alexisb> arosales, mattyw is probably gone for the day
[20:25] <arosales> alexisb: ack
[20:47] <abentley> natefinch: it says On-call reviewer: see calendar.  What calendar?
[20:56] <natefinch> abentley: that's the joke, which calendar keeps changing.... ask thumper, he's redoing it as of this morning.  I think it'll be on the juju core team calendar... which I doubt most people can see.
[20:57] <abentley> natefinch: Could I ask you to do a review? It's verra short.
[20:57] <natefinch> abentley: I have 3 minutes, so we'll see how short
[20:58] <abentley> https://github.com/juju/juju/pull/629
[20:58] <natefinch> :)
[20:58] <natefinch> abentley: LGTM'd
[20:58] <abentley> natefinch: TY.
[21:36] <thumper> waigani: https://github.com/juju/juju/pull/631
[21:36] <waigani> thumper: https://github.com/juju/juju/pull/632
[21:36] <waigani> ;)
[21:38] <waigani> thumper: good catch, I didn't write that test
[21:38] <thumper> I had it fail on me this morning
[21:40] <waigani> I also didn't know you could use the || in an assert like that - makes sense
[21:40] <waigani> thumper: CI blocker
[21:40] <cmars> thumper, mattyw and I were unable to land anything today, there's a ppc64el build error blocking
[21:40] <cmars> i got access to a ppc64 and about to try to reproduce
[21:41] <cmars> is davecheney around today?
[21:41] <thumper> cmars: he will be later
[21:41] <thumper> cmars: he normally starts in just over an hour
[21:41] <cmars> ok cool
[21:43] <thumper> cmars: you can do it locally
[21:43] <thumper> cmars: I have reproduced the compiler error on amd64
[21:44] <thumper> state/apiserver/deployer$ go test -compiler gccgo
[21:44] <cmars> thumper, ah, so its a general gccgo issue
[21:44] <thumper> yep
[21:44] <thumper> unlikely to be power specific
[21:45] <thumper> should find out when it last passed, and what the change was
[21:45] <cmars> git bisect might be helpful there
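The automated bisect cmars goes on to run can be sketched end-to-end. This is a toy stand-in, not the real hunt (where the test command was `go test -compiler gccgo`): it plants a "broken" marker at a known commit in a throwaway repo and lets `git bisect run` find it.

```shell
#!/bin/sh
# Toy `git bisect run`: commits 1-2 are good, commits 3-5 carry a planted
# "broken" marker standing in for the gccgo build failure.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3 4 5; do
    if [ "$i" -ge 3 ]; then echo "v$i broken" > src; else echo "v$i ok" > src; fi
    git add src
    git commit -qm "commit $i"
    if [ "$i" -eq 1 ]; then good=$(git rev-parse HEAD); fi
    if [ "$i" -eq 3 ]; then culprit=$(git rev-parse HEAD); fi
done
git bisect start HEAD "$good"
# exit 0 = good, non-zero = bad; bisect drives the checkout loop for us
git bisect run sh -c '! grep -q broken src' >/dev/null
bad=$(git rev-parse refs/bisect/bad)   # the first bad commit
git bisect reset >/dev/null
echo "first bad commit: $bad"
```

With a real build failure the run command would just be the failing test, e.g. `git bisect run go test -compiler gccgo ./state/apiserver/deployer`.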
[21:49] <thumper> damn, how do I get git log to show me the diff
[21:49] <thumper> for revisions
[21:49] <cmars> gitk might be best to browse that
[21:49] <cmars> ugly but useful
[21:50] <perrito666> yup or gitg, which is slightly less ugly but also less useful
[21:52] <thumper> so git log won't show me a diff for the revision?
[21:52] <perrito666> i dont think so, it should just tell you the commit message and some other metadata
[21:53] <perrito666> thumper: what exactly are you trying to do?
[21:53] <ericsnow> I've found qgit to be a lot nicer
[21:53] <thumper> I want to look at the files changed for every commit
[21:54] <thumper> I know what I'm looking for (ish), I just want to see the commits
[21:54] <perrito666> thumper: apparently -p does that
[21:54] <thumper> nope
[21:54] <thumper> ah...
[21:54] <thumper> hang on
[21:55] <perrito666> --stat
[21:55] <perrito666> that seems to produce a very useful output
[21:56] <perrito666> I use that kind of output for pull and it is actually very informative
[21:57] <cmars> looks like it is passing now. something must have landed to fix?
[21:57] <thumper> cmars: that is the 1.20 branch
[21:57] <cmars> oh
[21:58] <perrito666> cmars: yes, that is certainly very confusing
[21:59] <cmars> good grief, is there a way to see more build history for http://juju-ci.vapour.ws:8080/job/run-unit-tests-trusty-ppc64el/
[22:00] <cmars> i know where it is on the filesystem... grr
[22:00] <perrito666> cmars: jenkins is not actually finding it
[22:00] <perrito666> I tried going to a previous job by hand and I get 404
[22:01] <cmars> hmm
[22:01] <thumper> hmm...
[22:01] <thumper> ok, I have a commit I want to test
[22:01] <thumper> how do I revert the tree to a particular commit?
[22:02] <perrito666> thumper: you can use co or revert
[22:02] <perrito666> sorry s/co/checkout
[22:03] <cmars> not revert
[22:03]  * thumper nods
[22:04] <perrito666> aghh effing git commands
[22:04] <perrito666> thumper: apologies I meant to say reset
[22:04] <perrito666> which is like svn revert
[22:04] <perrito666> and those always get mixed in my head
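For the record, the three commands mixed up here do different things (svn's `revert` maps roughly to git's `checkout`/`reset`, while git's `revert` makes a new commit). A throwaway-repo demo, nothing juju-specific:

```shell
#!/bin/sh
# checkout vs revert vs reset, on a two-commit toy repo.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo one > f && git add f && git commit -qm one
old=$(git rev-parse HEAD)
echo two > f && git commit -qam two
two=$(git rev-parse HEAD)

git checkout -q "$old"        # detached HEAD: inspect the old tree, branch untouched
cat f                         # -> one
git checkout -q -             # back on the branch

git revert --no-edit HEAD >/dev/null   # new commit undoing "two"; history preserved
cat f                         # -> one

git reset --hard -q "$two"    # move the branch itself back to "two", dropping the revert
cat f                         # -> two
```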
[22:06] <thumper> why is git show <rev> for a commit not showing me the diff?
[22:08] <thumper> --stat shows lots of files
[22:08] <thumper> but no diff
[22:08] <thumper> ok, definitely have the error
[22:08] <perrito666> thumper: is a merge
[22:08] <thumper> yes
[22:09] <perrito666> thumper: you dont get diff on merges
[22:09] <thumper> I want to see the diff as a result of the merge
[22:09] <thumper> yes you do...
[22:09] <thumper> grr
[22:09] <thumper> dumb git
[22:09] <perrito666> thumper: one of the lines from show says merge blah and bleh
[22:10] <perrito666> git diff those two
[22:10] <perrito666> thumper: let me correct myself, you should, git sucks
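git can in fact do this, it's just not the default: `git show` on a merge prints a combined diff, which is empty unless a file differs from both parents. A small demo in a throwaway repo (all names made up):

```shell
#!/bin/sh
# Build a tiny repo with a merge commit, then extract a diff from it.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo base > file && git add file && git commit -qm base
git checkout -qb topic
echo topic >> file && git commit -qam topic
git checkout -q -
echo other > other && git add other && git commit -qm other
git merge -q --no-edit topic

git show HEAD --stat     # file summary only; no patch for a merge by default
git show -m HEAD         # full patch against each parent in turn
git diff HEAD^1 HEAD     # just what the merge brought onto the first parent
```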
[22:10] <cmars> thumper, i'm running an automatic bisect, will let you know how it goes
[22:10] <thumper> doing that now
[22:10] <thumper> cmars: I have the revision
[22:10] <thumper> looking at the change
[22:10] <cmars> oh cool
[22:10] <thumper> 3ebb3a1edbccd8e6c4211b2f5b9e1fd6d518d82a
[22:10] <perrito666> thumper: I presume that a merge, in internal git terms, adds actual git nodes and doesn't do an actual merge of diffs
[22:11]  * perrito666 never bothered to actually check how git internally works
[22:13] <thumper> the problem is that the code is perfectly fine, just triggering a bug in gccgo
[22:13]  * thumper sighs
[22:17] <thumper> hmm, ok not that bit
[22:18] <thumper> I have a bad feeling about this
[22:20] <thumper> hahaha
[22:20] <thumper> omg
[22:20]  * thumper grunts
[22:21] <waigani> well don't leave us hanging...
[22:21] <alexisb> waigani, I was thinking the same thing
[22:22]  * perrito666 eats popcorn and reads
[22:22] <thumper> here is the code that was removed:
[22:22] <thumper> -                               // TODO(dfc) comparing the two interfaces caused a compiler crash with
[22:22] <thumper> -                               // gcc version 4.9.0 (Ubuntu 4.9.0-7ubuntu1). Work around the issue
[22:22] <thumper> -                               // by comparing by string value.
[22:22] <thumper> -                               if names.NewMachineTag(parentId).String() == authEntityTag {
[22:22] <thumper> it was replaced by a line that compared two interfaces
[22:22] <thumper> well
[22:22] <thumper> one interface and one type
[22:22]  * thumper pokes
[22:22] <waigani> lol
[22:23] <perrito666> I remember that one
[22:23] <cmars> thumper, bisect tells me the breaking change is 41e8f0a7bf33d3b22a7ccf0949e988c834c4eeac
[22:24] <cmars> and i confirm this with gccgo on 41e8f0a7bf33d3b22a7ccf0949e988c834c4eeac vs 41e8f0a7bf33d3b22a7ccf0949e988c834c4eeac~1
[22:24] <thumper> cmars: didn't you trust me?
[22:24] <cmars> i did, but i wanted to see my bisect work :)
[22:24] <cmars> trust but verify? :)
[22:25]  * perrito666 never saw "no" said so elegantly
[22:25] <thumper> cmars: ok, I'll give you that
[22:26] <cmars> funny thing is, it has a conditional *very* similar to the one you pasted up there
[22:26] <thumper> oh this is so fucked
[22:26] <cmars> for some value of 'this'
[22:28] <thumper> ok I have a fix
[22:29]  * thumper runs all apiserver tests with gccgo
[22:30] <thumper> gccgo needs work
[22:30] <alexisb> thumper, that is why we are investing in golang
[22:31]  * thumper nods
[22:31] <thumper> I was just about to say something about that
[22:31] <thumper> beautiful day here today, want to take the dog for a walk around ross creek at lunch time
[22:32]  * cmars misses beautiful dunedin now. 100F outside and all the grass is dead
[22:32] <thumper> it is about ...
[22:32]  * thumper calculates
[22:32] <thumper> 50°F
[22:32] <perrito666>  now that is something useful we can teach mup
[22:32] <thumper> so quite cool
[22:32] <cmars> i'd take it :)
[22:33] <perrito666> cmars: interesting we had a couple of days like that a few days ago
[22:34] <perrito666> the only issue is that we are in winter
[22:34] <cmars> perrito666, aw man, that's not fair at all. sounds like our winters
[22:34] <perrito666> but it was an interesting change, it is quite hard to actually store summer clothes in winter
[22:35] <thumper> cmars, perrito666: https://github.com/juju/juju/pull/633
[22:35] <cmars> fixing the compiler would be best, but i wonder, if we could walk the AST to look for this bug ahead of time, to prevent this from breaking the build
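The AST walk cmars floats is doable with the standard go/types machinery rather than raw syntax, since you need type information to tell an interface operand from a concrete one. A rough sketch of the idea; the tiny `Tag`/`MachineTag` source under inspection is a made-up stand-in, not juju code:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"go/types"
)

// Source under inspection: a stand-in for the pattern that broke gccgo.
const src = `package p
type Tag interface{ String() string }
type MachineTag struct{}
func (MachineTag) String() string { return "machine" }
func cmp(a Tag, b MachineTag) bool { return a == b }
`

// findMixedComparisons reports == / != expressions where exactly one
// operand has an interface type and the other a concrete type.
func findMixedComparisons(source string) ([]string, error) {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "p.go", source, 0)
	if err != nil {
		return nil, err
	}
	info := &types.Info{Types: map[ast.Expr]types.TypeAndValue{}}
	conf := types.Config{}
	if _, err := conf.Check("p", fset, []*ast.File{f}, info); err != nil {
		return nil, err
	}
	var hits []string
	ast.Inspect(f, func(n ast.Node) bool {
		be, ok := n.(*ast.BinaryExpr)
		if !ok || (be.Op != token.EQL && be.Op != token.NEQ) {
			return true
		}
		lt := info.Types[be.X].Type
		rt := info.Types[be.Y].Type
		if lt == nil || rt == nil {
			return true
		}
		if types.IsInterface(lt) != types.IsInterface(rt) {
			hits = append(hits, fset.Position(be.Pos()).String())
		}
		return true
	})
	return hits, nil
}

func main() {
	hits, err := findMixedComparisons(src)
	if err != nil {
		panic(err)
	}
	for _, h := range hits {
		fmt.Println("interface/concrete comparison at", h)
	}
}
```

A real pre-merge check would run this over the whole tree via `go/packages` rather than a string constant, but the detection logic is the same.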
[22:35] <thumper> confirmed passes tests locally with gc and gccgo
[22:35] <thumper> for apiserver at least
[22:35] <cmars> it's comparing two interfaces that triggers the compiler bug?
[22:36] <thumper> hang on
[22:36] <thumper> I think I can simplify
[22:36] <thumper> cmars: no, it appears to be one interface, and one concrete type
[22:40]  * thumper pushing
[22:41] <thumper> cmars: https://github.com/juju/juju/pull/633/files
[22:42] <perrito666> thumper: isnt tag == authEntityTag  blowing?
[22:42] <thumper> perrito666: no, because they are both interfaces
[22:43] <thumper> it appears to be when one is a concrete type, and one is an interface
[22:43] <thumper> where the concrete type implements the interface
[22:43] <thumper> not the pointer to the concrete type
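The shape of the workaround, in miniature; `Tag`/`MachineTag` here are made-up stand-ins for the juju/names types under discussion:

```go
package main

import "fmt"

// Made-up stand-ins for the juju/names types under discussion.
type Tag interface{ String() string }

type MachineTag struct{ id string }

func (t MachineTag) String() string { return "machine-" + t.id }

// sameEntity compares an interface value against a concrete one.
// The direct form `a == Tag(b)` is valid Go, but comparing an interface
// against a value of a concrete type was the pattern that crashed
// gccgo 4.9; comparing the string forms, as the removed TODO(dfc)
// code did, sidesteps the compiler bug.
func sameEntity(a Tag, b MachineTag) bool {
	return a.String() == b.String()
}

func main() {
	auth := Tag(MachineTag{id: "0"})
	fmt.Println(sameEntity(auth, MachineTag{id: "0"})) // true
	fmt.Println(sameEntity(auth, MachineTag{id: "1"})) // false
}
```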
[22:43] <perrito666> that is a good thing to mail to the list for people to keep an eye on it
[22:43] <thumper> agreed
[22:44] <perrito666> btw, isnt there a bug filed for that in gccgo? perhaps a reference to it in the comments would be useful so future maintainers can know when to remove the workaround
[22:44] <cmars> ok, let me pull and restart the tests
[22:49]  * thumper shrugs
[22:49] <thumper> perrito666: I'll ask dave in the standup
[22:50] <thumper> cmars: sorry this blocked you and matty so much today
[22:52] <cmars> thumper, no problem. it's a good reminder to check gccgo locally. although, it's nice to have access to power8 now, in case we need it in the future
[22:52] <alexisb> thumper, I pointed wallyworld to a spreadsheet today that says you have access to multiple power vms
[22:52] <alexisb> but there was not access info
[22:53] <alexisb> it would be nice to share the info with the whole team
[22:53] <alexisb> https://docs.google.com/a/canonical.com/spreadsheets/d/1_y3BM1Fcxmc_niIMrNvqtrzOl23vrX1DdeoQQqTejbg/edit?usp=sharing
[22:53] <thumper> I've forgotten mostly how to get there ... :)
[22:53] <thumper> sure...
[22:53] <thumper> however most gccgo problems can be caught locally
[22:54] <alexisb> that way we can have power access in US timezones when there is an issue
[22:54] <thumper> people just don't know how
[22:54] <alexisb> well education would help too :)
[22:54] <thumper> I included that in my email to the list
[22:54] <alexisb> cool, thanks
[22:55] <alexisb> cmars, thank you for driving help with that bug today!
[22:57] <mwhudson> thumper: is there a gccgo bug report for that?  want me to file one?
[22:57] <thumper> cmars: I'll check with dave if there is a bug fix
[22:57] <thumper> mwhudson: oh hai
[22:57] <thumper> mwhudson: I'll see if dave has done one already first
[22:57] <mwhudson> ok
[22:57] <thumper> would be good to get a minimal test case
[22:58] <thumper> which I think I have a good grip on now
[22:58] <mwhudson> yeah, that was going to be my next question :)
[23:00] <thumper> waigani: with you in a sec
[23:00] <thumper> waigani: just testing this bug
[23:00] <waigani> thumper: okay, I'll just keep the hangout open in bg
[23:01] <waigani> thumper: nice work on getting the bug!
[23:03] <waigani> thumper: dave is here
[23:10] <mwhudson> thumper: i don't see a fix flicking through gofrontend commits
[23:23] <thumper> rogpeppe1: your suggestion works, and is less intrusive, ta
[23:24] <rogpeppe1> thumper: cool, np
[23:24] <thumper> rogpeppe1: we are looking at creating a simple reproduction of the error
[23:24] <thumper> seems to be only with nested funcs and closure issues
[23:24] <thumper> so... not simple
[23:24] <rogpeppe1> thumper: ah
[23:25] <waigani> what are the system-y tests - mentioned in the team lead minutes?
[23:59]  * thumper takes the dog for a walk
[23:59] <thumper> bbl