[00:00] <davecheney> oh ffs
[00:00] <davecheney> There is a problem connecting to this video call. Try again in a few minutes.
[00:08] <davecheney> FAAAAAAAAAAAAAAARK
[00:11] <davecheney> thumper: right, sorry about that
[00:12] <davecheney> i'll use a different computer tomorrow
[00:12] <thumper> shit internet or shit computer?
[00:12] <thumper> is your laptop not happy with trusty?
[00:13] <waigani> okay, so how do I resolve this: http://paste.ubuntu.com/7449953/
[00:13] <waigani> I'm using ec2
[00:13] <waigani> i'm trying to destroy the vanilla service and redeploy it
[00:15] <davecheney> thumper: trusty + chrome == failboat
[00:15] <davecheney> will just use a mac tomorrow
[00:16] <davecheney> thumper: how come we're doing this identity work
[00:16] <davecheney> this is something green are pushing
[00:16] <davecheney> ie, they need it
[00:16] <davecheney> they are probably in the best place to describe what they need
[00:16] <menn0> davecheney: weird. I'm on trusty + chrome and hangouts are working ok
[00:16] <thumper> davecheney: you mean emerald?
[00:16] <davecheney> thumper: i mean casey
[00:16] <thumper> :)
[00:17] <davecheney> didn't casey and rog spend a week whiteboardin this ?
[00:17] <thumper> some of it
[00:17] <thumper> there was a big focus on jaaz identity
[00:17] <thumper> which is on their plate
[00:17] <thumper> we are looking at it from the juju-core side
[00:18] <waigani> menn0: I just saw "juju resolved -r <unit-name>". -r = re-execute failed hooks
[00:19] <waigani> looks like I'll have to blow away the env and start again though
[00:19] <menn0> waigani: yep. thumper mentioned that just before the hangout
[00:19] <waigani> is this a bug that I've hit though? I've got a dying service that I can't redeploy or resolve
[00:19] <thumper> waigani: dude... use the long names
[00:19] <thumper> it is easier
[00:20] <waigani> thumper: okay, okay
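The retry flow waigani found can be sketched like this (the unit name is hypothetical, and the commands are guarded so the sketch is a no-op on a machine without juju installed):

```shell
#!/bin/sh
# "juju resolved -r <unit>" re-executes the failed hook instead of
# merely marking the error resolved; "vanilla/0" is a hypothetical
# unit name, not one from this conversation.
if command -v juju >/dev/null 2>&1; then
    juju resolved -r vanilla/0
    juju status
fi
```

As the discussion goes on to note, this only helps for failed hooks; a service stuck in a dying state whose host machine is gone still means destroying the environment.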
[00:27] <davecheney> waigani: if it is in state dying
[00:27] <davecheney> and the machine that hosts it is gone
[00:27] <davecheney> then you're sorta screwed
[00:28] <davecheney> waigani: i'm assuming the local provider ?
[00:28] <waigani> yep, this is the case.
[00:28] <waigani> davecheney: ec2
[00:28] <davecheney> ORLY
[00:28] <waigani> but how can I get into that state so easily?
[00:28] <davecheney> btw, does everyone know about the juju-local footgun ?
[00:28] <waigani> nop
[00:28] <davecheney> ie, you *MUST* install juju-local to use juju at all
[00:28] <davecheney> even from source
[00:29] <davecheney> but if you do that, /usr/bin/juju will take precedence in your path
[00:29] <thumper> davecheney: it complains now if you don't
[00:29] <thumper> davecheney: I explicitly set my PATH in .bashrc
[00:29] <thumper> to have $GOPATH/bin first
[00:29] <davecheney> so unless you're paying a lot of attention, you won't be using the juju you think you are using
[00:29] <davecheney> and bootstrap --upload-tools will upload the wrong one
[00:29] <davecheney> thumper: yes there are a few workarounds
[00:30] <davecheney> 1. put $GOPATH/bin in the top of your path
[00:30] <thumper> I do think that most people who compile from source are aware of this...
[00:30] <davecheney> 2. there is apparently a tool which can make up fake packages just to shut up apt, sinzui uses it
[00:30] <davecheney> thumper: i've found one person on this channel this morning who wasn't :)
[00:31] <davecheney> thumper: this is an unbelievable cockup
[00:31] <davecheney> is anyone looking at it ?
[00:33] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1306544
[00:33] <_mup_> Bug #1306544: developing juju requires juju-local  to be installed <juju-core:Won't Fix> <https://launchpad.net/bugs/1306544>
[00:34] <davecheney> i don't agree with this issue status
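Workaround 1 from the footgun discussion above boils down to PATH ordering; a minimal sketch, assuming the conventional single-directory GOPATH layout:

```shell
#!/bin/sh
# Put $GOPATH/bin ahead of /usr/bin so a source-built juju wins over
# the /usr/bin/juju that the juju-local package installs.
# GOPATH=$HOME is davecheney's recommended layout from this channel.
GOPATH="${GOPATH:-$HOME}"
PATH="$GOPATH/bin:$PATH"
export GOPATH PATH

# Sanity check: which juju will actually run?
command -v juju || echo "no juju on PATH yet"
```

Workaround 2 (a dummy package to keep apt quiet, presumably something like equivs) is orthogonal: it stops apt complaining but does not change which binary your shell resolves, so `--upload-tools` still depends on PATH order.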
[01:35] <wallyworld_> axw: morning, good weekend?
[01:35] <axw> wallyworld_: heya. hectic weekend
[01:35] <axw> house hunting & mothers day
[01:35] <axw> and yourself?
[01:35] <wallyworld_> well, i had the latter, but not the former
[01:36] <wallyworld_> and a wedding
[01:36] <wallyworld_> bit sick sunday morning
[01:36] <axw> self inflicted? :)
[01:36] <wallyworld_> yeah :-(
[01:36] <wallyworld_> quick catchup?
[01:36] <axw> sure
[01:53] <menn0> review please: https://codereview.appspot.com/97330044
[02:08] <thumper> menn0: done
[02:10] <menn0> thumper: thanks
[02:41] <waigani> menn0: did you use the mysql charm at all?
[02:42] <menn0> waigani: no I haven't looked at it. I found the postgresql charm to be pretty useful to look at though.
[02:42] <waigani> I have a relation-changed hook, which is meant to grab "user", i.e. relation-get user - but user never gets set by mysql
[02:43] <waigani> at least I assume it doesn't
[02:46] <davecheney> waigani: the relation-* hooks get run multiple times
[02:47] <davecheney> so generally you need to put guards in place to bail out if the relation value isn't set yet
[02:47] <waigani> davecheney: so I should be patient?
[02:49] <davecheney> waigani: you just need to exit 0 if you are waiting for more information
[02:49] <davecheney> in theory the remote side will set some more relation values later and your hook will be called again
[02:50] <waigani> davecheney: yep, already doing that. mysql has been up for ages. I fixed the install hook in my webservice a few times. Got it all working. BUT can't get user from relation-get yet.
[02:50] <waigani> now I've ironed out the bugs, I might destroy env and bootstrap/deploy again
[03:21] <thumper> jam: o/
[03:21] <thumper> jam: we probably need to find a different time for our call as my girls have ice-skating now on a monday afternoon
[03:21] <jam> thumper: np
[03:23] <jam> is there a time you have in mind?
[03:26] <thumper> not really...
[03:26] <thumper> jam: what time do you normally start your work day?
[03:29] <jam> thumper: officially in 1.5hrs
[03:30] <jam> but we had our 1:1 an hour before I started when DST was different
[03:30] <jam> thumper: and, ya know, occasionally I'm online a bit earlier than that. Espec. when my wife is away :)
[03:30] <thumper> hmm...
[03:30] <thumper> ya know
[03:31] <thumper> pretty much every day sucks :)
[03:31] <thumper> tuesdays probably suck least
[03:31] <jam> thumper: we can go in your evening if that is better for you
[03:32] <jam> It's great for me, but probably sucky for you
[03:32] <thumper> perhaps monday evening, and I could try to get you and william done together
[03:32] <thumper> :)
[03:32] <thumper> at the moment william is missing my 9am timeslot (11pm for him)
[03:32] <thumper> not entirely surprising really
[03:33] <jam> well, I've been waking up at 1UTC (5am), because I'm trying to take the dog for a walk before it gets too hot, and thus before I have to get my son ready for school, etc.
[03:33] <jam> I believe that slot is 2am for me, so that wouldn't work either
[03:33] <jam> but your evening is my mid-day
[03:38] <wallyworld_> jam: hiya, a few of us talked about having a kanban board for each team (since there's now 5 teams). would you be happy with that?
[03:38] <jam> thumper: so your monday evening or same time on Tues is fine with me. I need to know about Monday since I'll probably have to move around my team 1:1's, but that is generally more flexible than our overlap.
[03:38] <jam> wallyworld_: not very happy, though I'm not sure how to make it all work.
[03:38] <jam> I'm really concerned about people losing visibility outside of their small squad
[03:38] <jam> but I certainly understand you can't keep fine grained visibility over 20 people
[03:39] <wallyworld_> sure, but i can't see how we can manage sooo many cards, especially if we plan 2 weeks out
[03:39] <wallyworld_> maybe a lane per squad then for the 2 week cards
[03:40] <wallyworld_> ie i want to start adding cards for the next 2 weeks but there's nowhere really to put them
[03:42] <jam> wallyworld_: right, lane per squad was the hope
[03:42] <wallyworld_> jam: i haven't got editing permissions on the board - are you able to add a tanzanite lane?
[03:42] <waigani> is environments.yaml only used on the initial bootstrap to create the jenv? Is it ever used after jenv is created?
[03:42] <jam> I don't have great answers here. I was hoping to find some way to balance getting some cross coverage without having all of it.
[03:43] <jam> wallyworld_: I'll just give you edit rights, just a sec
[03:43] <wallyworld_> ta
[03:43] <wallyworld_> waigani: nope, not used after jenv is created
[03:43] <waigani> why do we keep it around then?
[03:43] <wallyworld_> jam: we can/should discuss the best approach but till then a lane is great so we can at least start planning
[03:44] <jam> waigani: well, for one, env.yaml has many environs in it
[03:44] <wallyworld_> waigani: so it can be edited for next time, and hysterical reasons
[03:44] <jam> and 2, it is the place that users wrote their stuff
[03:44] <jam> and deleting stuff that people actually wrote is usually bad
[03:44] <jam> waigani: did you hear any of the juju 2.0 conf stuff?
[03:44] <waigani> nop
[03:44] <jam> the intent is to change out what is written, so instead users have a separate .conf file that just include account descriptions
[03:45] <waigani> that makes a bit more sense
[03:45] <jam> and then on bootstrap you would do "juju bootstrap $ACCOUNT $NAME [--template some.yaml]"
[03:46] <jam> wallyworld_: I just made you and Nate Managers so that you should be able to edit the board
[03:46] <wallyworld_> thanks
[03:50]  * jam needs to head to the grocery store before they get extra busy, bbiab
[03:51] <jam> thumper: just to confirm, you're calling off our regular 1:1 today
[04:00] <wallyworld_> jam: thumper: i added lanes for the squads, i think we need to get all the cards out of general and then delete that lane?
[04:07] <davecheney> \o/ I cannot download the mongo driver because labix.org is down ...
[04:07]  * davecheney hacks it
[04:15] <thumper> jam: yes...
[04:51] <axw> woo, only 44 conflicts to resolve
[04:54] <wallyworld_> \o/
[04:55] <wallyworld_> thumper: do you think we should remove the fast-lxc config setting in trunk and just make it work that way always? i can't see why we'd ever not want to have it
[04:57] <thumper> wallyworld_: I'm not convinced that we always want it in every situation
[04:57] <thumper> willing to be convinced
[04:57] <wallyworld_> when wouldn't we want it?
[04:57] <wallyworld_> in any case, i think the default should be true if we keep the option
[04:58] <wallyworld_> and now it's a global setting, we can refactor a little - remove the local provider setting
[04:58] <davecheney> wallyworld_: what about precise
[04:58] <davecheney> or kernels that don't support aufs
[04:58] <davecheney> (or it might be btrfs)
[04:59] <wallyworld_> davecheney: if there's no aufs or btrfs we just copy the template image
[04:59] <wallyworld_> still faster than downloading 100000000GB
[04:59] <wallyworld_> davecheney: you saying that lxc on precise doesn't support cloning?
[05:03] <davecheney> wallyworld_: i know of two places in our current LTS releases that would not support fast-lxc
[05:03] <wallyworld_> these are?
[05:04] <davecheney> precise on all platforms and trusty on ppc64el
[05:06] <wallyworld_> davecheney: ok, so i reckon we can detect those and default to true otherwise
[05:06] <wallyworld_> make the setting mean "clone if supported by series and arch"
[05:06] <wallyworld_> fast-lxc-where-possible or whatever
[05:06] <davecheney> wallyworld_: +1 to dumping the setting and making it automatic
[05:07] <davecheney> less options is always more good
[05:07] <wallyworld_> yeah, and users *want* the behaviour if it is available
[05:07] <wallyworld_> no one wants to wait 30 minutes to deploy a few lxc hosted charms
[05:13] <jam> wallyworld_: AIUI all LXC implementations still use the cached .tgz that gets downloaded
[05:13] <jam> it is just a question of whether the FS is cloned or not
[05:15] <wallyworld_> jam: i must be confused then because thumper said, if i understood correctly, that we still wanted to create a template image file. if aufs or btrfs, it's cheap to copy the template, but even without, we just copy the entire template file, causing more i/o but still faster than downloading again
[05:15] <jam> and AIUI the question then is, do you always double disk space (1 for template, 1 for first container), or do you sometimes only do 1 (1 for container)
[05:16] <jam> wallyworld_: so there is the question of "when wouldn't you want it," what are the actual tradeoffs. thumper probably knows the details better than I do, but my understanding is
[05:16] <jam> 1) lxc copies down the template rootfs from cloud-images
[05:16] <jam> 2) it keeps that .tgz around
[05:17] <jam> 3) with lxc-clone, we then expand that into a template lxc
[05:17] <jam> 4) and then copy the expanded version (optionally doing a nice copy-on-write clone with btrfs) for each new lxc
[05:17] <jam> without lxc-clone
[05:17] <jam> we just create the one you asked for
[05:17] <jam> so if you ever create 2, then you have to unpack a .tgz again
[05:17] <jam> if you actually have to redownload the tools, that would be bad
[05:17] <jam> the other bits that I didn't mention are
[05:18] <jam> 3a) we do 'apt-get update' in the template before we start copying
[05:18] <jam> 4a) we never do apt-get update again
[05:18] <jam> vs
[05:18] <jam> without lxc-clone whenever you deploy a new container we apt-get update in it.
[05:18] <jam> it's possible the apt-get update was what was killing cloning for dan's guys
[05:19] <jam> if LXC wasn't caching the .tgz that would also be terrible, but it would be an LXC bug (AIUI)
[05:19] <jam> So, if the expectation is that anyone who wants to deploy 1 service into a container is always going to deploy >1, then we're likely at a net win
[05:19] <jam> if, on average, people deploy only 1, then we are at a net loss
[05:19] <jam> hence, the configuration flag.
[05:19] <jam> I would be fine with defaulting it to true, I think.
[05:20] <jam> wallyworld_: for the "no aufs or btrfs" I agree that copying an expanded template is still faster than extracting from a tarball, so we could go ahead and do that
[05:20] <wallyworld_> ok, clearly i misunderstood the machinations. i guess it was the apt-get update then. thanks for clarifying. i guess i didn't see that apt-get update would take so long
[05:20] <jam> it is more about having 2 copies when you may only want 1
[05:20] <wallyworld_> i think we'd want > 1 most of the timne
[05:21] <jam> wallyworld_: well, I'm guessing, without being there and having them say anything other than "this took a long time" it is hard to say for sure
[05:21] <wallyworld_> so we can just flip the setting to default to true
[05:21] <jam> wallyworld_: note that Tim felt strongly that you needed the "juju-local" plugin which gives you a way to refresh/update your template
[05:21] <jam> which was intentionally a hackish-plugin-that-only-works-locally
[05:21] <wallyworld_> so as per dave's comment then, what would be any issue with precise or ppc64?
[05:22] <jam> wallyworld_: so AIUI you can still copy everything, you just don't get "cheap" clones
[05:22] <davecheney> wallyworld_: no idea
[05:22] <davecheney> but when fast-lxc landed in trunk
[05:22] <davecheney> it broke precise and ppc64
[05:22] <davecheney> so we had to add a flag to *enable* it specifically
[05:22] <davecheney> thereby disabling it by default
[05:23] <jam> wallyworld_: potentially the issue is that they claim to support aufs, but it is broken
[05:23] <jam> so we have to explicitly not use it and do straight copying
[05:23] <wallyworld_> maybe we can hard code something then for now
[05:23] <jam> even though the platform says "I can do a cheap clone with aufs"
[05:24] <wallyworld_> i guess i'm saying i think we should support "use-clone" as true by default
[05:24] <wallyworld_> so the out of the box experience for most users is nice without having to read release notes etc
[05:24] <davecheney> i thought it was more that lxc didn't gracefully degrade
[05:25] <davecheney> but we didn't have time to implement the detection
[05:25] <davecheney> so turned it off by default
[05:26] <wallyworld_> i guess i'm not sure what lxc has to support - don't all versions use a cached .tgz whenever a new container is asked for?
[05:29] <davecheney> wallyworld_: thumper knows the answer
[05:29] <davecheney> all i know is this broke ppc when it landed
[05:29] <davecheney> so we had to turn it off by default
[05:29] <wallyworld_> np, i'll ask him
[05:30] <wallyworld_> i reckon/hope we can detect when supported and default to true in that case
[05:31] <davecheney> i agree
[05:32] <davecheney> wallyworld_: or
[05:32] <davecheney> fix the ppc kernel so it supports that option
[05:32] <davecheney> precise is a noop
[05:32] <davecheney> we already do not recommend precise when using the local provider
[05:32] <davecheney> or
[05:32] <davecheney> even cheaper
[05:33] <davecheney> make fast-lxc the default, and we have to turn it off on trusty/ppc64el or upgrade the kernel
[05:33] <jam> davecheney: note that this isn't about the local provider, this is turning it on everywhere that you use containers
[05:34] <wallyworld_> correct. we added it everywhere for the maas guys for ods
[05:34] <davecheney> sure
[05:34] <davecheney> but i think what i said still applies
[05:34] <davecheney> ppc is the outlier
[05:34] <davecheney> nobody should have to suffer because it's lame
[05:35] <wallyworld_> i still can't see why precise would be different
[05:35] <davecheney> wallyworld_: 12.04 <- lxc is broken
[05:35] <wallyworld_> but if it is, we can detect that and not use it on precise
[05:36] <wallyworld_> lxc in general?
[05:36] <davecheney> but that is fine because we have continually said that using juju with the local provider (insert whatever name lxc has)
[05:36] <davecheney> is *not* supported on precise
[05:36] <davecheney> there is some hand waving and floor peering about applying a backport kernel
[05:36] <wallyworld_> ok, so we don't need to check for precise then, just ppc
[05:36] <wallyworld_> if they want to try lxc on precise, it's at their own risk
[05:37] <davecheney> correct
[05:37] <wallyworld_> since it's considered broken
[05:37] <davecheney> so ppc is the only outlier
[05:37] <wallyworld_> great, ok, that makes it easier
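The "clone if supported by series and arch" idea the channel converged on could be sketched as follows. The exclusion list mirrors the conversation (precise unsupported, trusty/ppc64el known broken when fast-lxc landed); the detection helper itself is an assumption, not juju code:

```shell
#!/bin/sh
# Hypothetical sketch: default use-clone to true, falling back to
# plain template copies on the combinations known to break.
supports_clone() {
    series="$1"; arch="$2"
    case "$series/$arch" in
        precise/*)      echo false ;;  # lxc on 12.04 is unsupported anyway
        trusty/ppc64el) echo false ;;  # cloning broke here when it landed
        *)              echo true  ;;
    esac
}

supports_clone trusty amd64     # prints: true
supports_clone trusty ppc64el   # prints: false
```

Even where this returns false, copying an expanded template (no aufs/btrfs copy-on-write) is still faster than re-downloading and unpacking the image, per the discussion above.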
[05:48] <jam> wallyworld_: is it possible for you guys to size your cards?
[05:48] <jam> in kanban?
[05:48] <jam> that gives us a way to track average velocity for future planning.
[05:48] <wallyworld_> sure, just gathering a list first
[05:48] <jam> np
[05:48] <jam> I wasn't sure what stage you were at
[05:48] <jam> thanks for doing the lanes
[05:48] <jam> we probably need to clear out the "TODO" lane
[05:49] <wallyworld_> sure. i'm hoping we can get all the ones i've added done in 2 weeks
[05:49] <wallyworld_> we need to clear out a bunch of stuff
[05:49] <jam> get the items moved into the team lanes, and then close/move TODO into backlog or somesuch
[05:49] <wallyworld_> and also clear out general
[05:50] <jam> wallyworld_: I mean all of the TODO, which is general/HA/MaaS etc
[05:50] <wallyworld_> so what's in the todo/general vertical block becomes feature related lanes
[05:50] <wallyworld_> ah ok
[05:50] <wallyworld_> i was thinking the feature lanes could still be useful
[05:52] <jam> wallyworld_: perhaps, but I think the feature-related is really just the teams
[05:52] <jam> maybe not
[05:53] <wallyworld_> a team could be on more than one feature concurrently
[05:53] <wallyworld_> and folks outside core may want to look to see what's being done for feature A vs B
[05:53] <wallyworld_> just a thought
[06:00] <vladk> jam: morning
[06:00] <vladk> jam: hangout time
[06:01] <jam> vladk: yep
[07:24] <jam> wallyworld_: you merged 1.18 into trunk, but it didn't seem to include my changes to testing/mgo.go
[07:24] <jam> is there a reason?
[07:25] <jam> wallyworld_: from what I can tell you explicitly reverted everything outside of cmd/8
[07:25] <wallyworld_> jam: that was a mistake, sorry
[07:25] <wallyworld_> there were so many conflicts
[07:26] <jam> wallyworld_: k. I just noticed the test suite is just broken for me, and I thought I had fixed that
[07:26] <jam> I was trying to merge trunk to get the fix
[07:26] <jam> but… still broken :)
[07:26] <wallyworld_> i'll fix
[07:26] <jam> wallyworld_: so when doing patches with big conflicts, you can go back to stepping the merges
[07:27] <jam> so you know you can reject everything in "Backport fix for …" but you probably don't want to reject one "Init EnvCommandBase.EnvNam…" etc.
[07:27] <wallyworld_> yeah
[07:28] <fwereade> jam, wallyworld_: oops, sorry -- I knew I recognised everything when I LGTMed, didn't spot there were some things missing that I *should* have recognised ;)
[07:29] <jam> fwereade: fwiw, it did end up conflicting on trunk because store/ in trunk changed how they were doing the test suite
[07:29] <wallyworld_> nah, it was my fault, the number of conflicts confused me
[07:29] <jam> wallyworld_: I didn't realize it was going to conflict, so I can pick it up
[07:29] <wallyworld_> ok, if that works
[07:29] <jam> wallyworld_: otherwise you should be able to just "bzr merge -c 2248.26.15 ." on trunk, but mgo_test.go is gone now
[07:30] <jam> wallyworld_: I'll fix it up for trunk
[07:30] <wallyworld_> ok, thank you
[08:05] <jam> wallyworld_: fwereade: https://codereview.appspot.com/100380044/
[08:05] <wallyworld_> looking
[08:07] <wallyworld_> looks good, thanks for fixing my snafu
[08:09] <jam> wallyworld_: np, it had to be fixed differently in trunk anyway, so it wasn't like a simple clean "merge up"
[08:09] <jam> I didn't realize the mongo infrastructure had changed this much, or I would have done 2 branches to start with.
[08:21] <voidspace> morning all
[08:21] <fwereade> voidspace, heyhey
[08:28] <TheMue> morning
[08:36] <jam> morning TheMue
[08:36] <TheMue> jam: heya
[08:37] <jam> TheMue: so in Las Vegas it sounded like you were coming back to juju-core, has that started?
[08:37] <TheMue> jam: I’ll have to talk to Curtis, but I understood so too.
[08:39] <TheMue> jam: How has Vegas been? I’ve seen that one result is the discovered urgency for dev docs. That’s good news. :)
[08:42] <TheMue> jam: And what is the status about moving to GitHub? I read about this discussion some time ago.
[08:43] <jam> TheMue: vegas was pretty good. Good conversations, a giant document of possible work items with pretty good granularity
[08:44] <jam> https://docs.google.com/a/canonical.com/document/d/1XZN2Wnqlag9je73mGqk-Qs9tx1gvOH0-wxMwrlDrQU4/edit#
[08:44] <jam> I think it is about 90 pages long
[08:44] <jam> :)
[08:45] <jam> as for Github, Ian and Curtis appear to be working together to do a conversion, and get a Lander and CI, etc up and running.
[08:46] <TheMue> jam: 90 pages??? OK, I have to read a lot. :D
[08:47] <jam> not all of it is going to be done, and the work is going to be divided across 5 teams, but it all got put together in one big doc
[08:47] <TheMue> jam: Moving to GH sounds good to me. I moved my private projects too after I learned a bit more about it when working for docs (hosted in GH too).
[08:48] <TheMue> jam: 5 teams now? Wow.
[08:48]  * TheMue thinks back of the start with one little team.
[09:19] <voidspace> has anyone else seen go install not replacing binaries?
[09:20] <davecheney> voidspace: only if there is nothing to built
[09:20] <davecheney> ie, they are up to date
[09:20] <davecheney> voidspace: have you been bitten by /usr/bin/juju ?
[09:21] <davecheney> juju-local, which is required to run the tests, installs a version of juju higher up your path
[09:21] <voidspace> davecheney: no, but I've been switching branches and rebuilding a lot to test some changes
[09:21] <voidspace> davecheney: and wwitzel3 thinks he has seen cases where "go install ./..." doesn't replace old binaries with the new ones
[09:21] <davecheney> always use -v
[09:21] <voidspace> davecheney: which would screw all the results of my testing
[09:21] <davecheney> alias gb='go install -v'
[09:21] <voidspace> davecheney: ah, which will tell me when it's replacing binaries
[09:21] <davecheney> that is what I use
[09:21] <voidspace> ok
[09:21] <voidspace> good idea, thanks
[09:22] <davecheney> the only case it would not replace the final executable is where it thought there was no work to do
[09:22] <voidspace> right, that's what it *should* do :-)
[09:22] <davecheney> voidspace: go install -x will show the commands being run
[09:22] <davecheney> you can start debugging it there
[09:24] <davecheney> voidspace: one thing which might throw it off is if you have a 'creative' GOPATH
[09:24] <davecheney> the best advice is just to export GOPATH=$HOME
[09:24] <davecheney> or GOPATH=$HOME/code
[09:24] <voidspace> davecheney: my GOPATH is highly uncreative
[09:24] <voidspace> yeah, it's a fixed location
[09:24] <davecheney> voidspace: in that case I cannot explain why you are having issues
[09:24] <davecheney> use -v or -x
[09:24] <davecheney> so you have some details we can dig through
[09:25] <voidspace> davecheney: I was just worried by Wayne - I don't *think* I've seen it, but he says he has - and I wondered if this was a general issue
[09:25] <voidspace> in which case I'd need to always delete binaries when I build
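davecheney's debugging advice above, collected in one place (the `gb` alias is his; the invocation is guarded so the sketch is a no-op outside a Go source tree):

```shell
#!/bin/sh
# "go install -v" prints each package that actually gets rebuilt, so a
# silent run means the tool decided there was no work to do -- and a
# stale binary then points at PATH ordering, not the build.
alias gb='go install -v'

if command -v go >/dev/null 2>&1 && [ -d ./cmd ]; then
    go install -v ./...
    # go install -x ./...   # also print every command run, for deeper digging
fi
```

If `-v` shows packages being rebuilt but the binary you run is still old, that's the /usr/bin/juju PATH footgun from earlier in the channel, not a `go install` bug.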
[09:25] <voidspace> I have a more juju specific question
[09:25] <voidspace> on a state server I need to get the addresses of all the *other* api servers (not including itself)
[09:26] <voidspace> there is agentConfig.APIAddresses() which gets all addresses
[09:26] <voidspace> how do I determine the address of the state server this is being run on to remove it?
[09:27] <voidspace> inside cmd/jujud/agent.go (specifically inside newRsyslogConfigWorker)
[09:27] <davecheney> func (st *State) DeployerConnectionInfo() (*DeployerConnectionValues, error)
[09:27] <davecheney> state has this odd method
[09:27] <davecheney> which returns an object which has a list of api and state servers
[09:27] <davecheney> apart from that
[09:27] <davecheney> nfi
[09:27] <voidspace> a list of *all* api and state servers?
[09:28] <davecheney> type DeployerConnectionValues struct { StateAddresses []string APIAddresses   []string
[09:28] <davecheney> }
[09:28] <davecheney> apparently
[09:28] <davecheney> never used it myself
[09:28] <voidspace> davecheney: right, I have *that* information - I want just the current one
[09:28] <voidspace> (to remove it from the list)
[09:28] <voidspace> I guess I have to see how that information is populated, and where the information comes from
[09:28] <davecheney> hmm, there is this one
[09:28] <voidspace> each api server must publish this information
[09:28] <davecheney> func (st *State) APIHostPorts() ([][]instance.HostPort, error) // APIHostPorts returns the API addresses as set by SetAPIHostPorts.
[09:29] <voidspace> again, that's all of them
[09:29] <davecheney> yeah
[09:29] <davecheney> looks like that information isn't accessible
[09:29] <voidspace> davecheney: I'll dig into it - thanks though
[09:29] <davecheney> it's just regurgitating what is in mongo
[09:29] <voidspace> right
[09:29] <voidspace> so I need to find where that information is published into mongo
[09:30] <voidspace> sounds like a job requiring more coffee :-)
[09:40] <jam> voidspace: there is probably something like Machine.PublicAddress (es?)
[09:40] <jam> but yes, what you want is something like APIHostPorts but for only your agent
[09:41] <jam> potentially you only want private addresses which I think there is Machine.PrivateAddresses
[09:41] <jam> though I don't know what our exact story is for rsyslog and manually provisioned machines
[09:41] <jam> as they need to contact the API server on the Public ports
[09:42] <jam> natefinch just did a patch so that they will try to connect to API servers on both addresses in case they cannot connect privately
[09:42] <jam> though I think I convinced him it was worth changing how we cache addresses so that we can do stuff like "try the private addresses first"
[09:45] <axw> fwereade: we can take git out of the packages installed during cloud-init now, right?
[10:00] <voidspace> jam: right, interesting
[10:00] <voidspace> jam: however - I remember another conversation with you
[10:01] <voidspace> jam: I can *not* filter out the machine's own address and, instead of logging local messages locally on the state server,
[10:01] <voidspace> jam: rely on it broadcasting messages to all state servers, logging its own messages "remotely" as well
[10:01] <voidspace> jam: that way is slightly inefficient
[10:02] <voidspace> jam: but I don't need to filter the addresses
[10:02] <voidspace> jam: and will do for a first cut
[10:02] <voidspace> jam: as it makes the state server config slightly simpler too
[10:02] <jam> voidspace: sounds ~ok to me.
[10:02] <jam> if we are intending to change logging completely anyway
[10:02] <voidspace> right
[10:02] <jam> I wouldn't spend huge amounts of time figuring out all of this, though some of it might still be useful for future work
[10:02] <voidspace> we *finally* have the new rsyslog config working
[10:03] <voidspace> (units broadcasting to all state servers)
[10:03] <voidspace> that took a *long* time to get right
[10:03] <voidspace> due mainly to horrible documentation and no error messages
[10:03] <voidspace> so this shouldn't take anything like as long
[10:03] <voidspace> (i.e. today hopefully)
[10:08] <axw> voidspace: aha, you've had the pleasure of reading rsyslog docs I see
[10:08] <axw> took ages to get the original stuff working right too :)
[10:08] <axw> such cryptic configuration language
[10:17] <voidspace> axw: yeah, great fun. Not.
[10:22] <jam> axw: voidspace I think it's the lack of messages like "unable to forward messages to XXXXX" that hurts - they would give you a bit more of a hint as to why the syntax wasn't doing what you thought it was
[10:22] <jam> but the fact that the configuration file is actually a mini program
[10:22] <jam> but without all the nice things like visible control flow
[10:22] <voidspace> yep
[10:23] <voidspace> the parser must be a beast, especially for the legacy format rulesets
[10:23] <voidspace> there's no documentation on how the parser decides if a directive is global or within the ruleset definition
[10:23] <voidspace> or even how you terminate a ruleset definition
[10:23] <voidspace> the new syntax is better
[10:24] <voidspace> I'm giving it a bash with the legacy format - if that doesn't work I'll switch to the new syntax and the state server logging broadcasting will be trusty only
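For reference, a legacy-format rsyslog fan-out like the one being described is roughly this shape. Hosts and port are hypothetical, not juju's actual values; `@@` means forward over TCP (a single `@` would be UDP):

```
# Hypothetical sketch of legacy-format rsyslog broadcast: forward all
# facilities/priorities over TCP to each state server.
*.* @@10.0.0.1:6514
*.* @@10.0.0.2:6514
```

As the discussion notes, the hard part is not these lines but how the legacy parser scopes directives relative to ruleset definitions, which the newer syntax makes explicit.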
[10:27] <voidspace> right, now to test this with canonistack
[10:36] <jam> perrito666: sorry I missed our 1:1, and while I realize you're on Nate's team, I was at least planning on saying goodbye.
[10:39] <jam> fwereade: if you're around, care to join the standup chat early, there is something I'd like to bounce off of you
[10:47] <voidspace> it's always slightly disconcerting to log into a juju machine and see "System restart required"
[10:48] <voidspace> I assume that after the "apt upgrade" we don't restart
[10:48] <voidspace> so some security upgrades aren't *actually* applied
[10:49] <jam> voidspace: I thought we had a bug open about possibly restarting machines when we see that apt wants a restart, but I can't find it in my 30s googling
[10:50] <voidspace> jam: shall I create one?
[10:51] <jam> voidspace: certainly
[11:03] <voidspace> yay, I'm seeing logging from state server machine-1 on machine-0
[11:03] <voidspace> so the state server logging broadcasting is working
[11:03] <voidspace> so how exactly to write a useful test for this...
[11:03] <fwereade> jam, sorry, I was having lunch and paying no attention to the time -- are you all done?
[11:04] <fwereade> jam, I'm here for idea-bouncing regardless
[11:04] <jam> not yet, we're now in the "juju-sapphire" hangout
[11:19] <perrito666> good morning
[11:20] <natefinch> morning perrito666 (and everyone else... I've been lurking for a while)
[11:25] <voidspace> perrito666: morning
[11:25] <voidspace> natefinch: morning
[11:25] <perrito666> natefinch: I didn't, I tried that wwitzel3 trick to sleep a bit more for one day
[11:26] <natefinch> perrito666: haha... one of my cats woke me up just at the time I used to get up, and then I was screwed.
[11:31] <wwitzel3> hello all
[11:31] <natefinch> morning wwitzel3
[11:34] <voidspace> wwitzel3: morning
[11:34] <voidspace> wwitzel3: so the logging config changes you made Friday seem to work for state servers too
[11:34] <voidspace> wwitzel3: I have state servers broadcasting their logging to all the other state servers
[11:34] <voidspace> wwitzel3: just fixing up tests
[11:35] <voidspace> wwitzel3: I just put the broadcast loop into the local ruleset and it worked without changes
[11:35] <voidspace> wwitzel3: which was nice :-)
[11:36] <wwitzel3> voidspace: nice :)
[12:03] <vladk> natefinch: in ./cmd/jujud/machine.go all workers are started with startWorkerAfterUpgrade except apiserver and peergrouper, which are started with runner.StartWorker.
[12:03] <vladk> natefinch: is there any reason to start peergrouper before upgrade completes?
[12:04] <jam> natefinch: did you still want to do our 1:1 today?
[12:04] <natefinch> vladk: I don't believe so.  Seems like it would be good for it to wait until we're done upgrading
[12:04] <natefinch> jam: oh yeah, sorry, lost track of time.  Sure, give me a minute to change a diaper.
[12:05] <voidspace> vladk: my daughter loves the sochi teddy bear - thank you
[12:05] <jam> natefinch: then I'll just get my son set up quickly
[12:06] <jam> back, I'm in the hangout
[12:11] <vladk> voidspace: glad to hear it
[12:19] <voidspace> right, lunch
[12:26] <_benoit_> Hmmm the ec2 provider does not work at all without S3
[12:27] <_benoit_> This will give me a good excuse to learn how go interfaces work :)
[12:51] <perrito666> btw, now that there are more of us back, could anyone else ptal? https://codereview.appspot.com/94330043/
[12:51] <perrito666> vladk: I answered your comments
[12:59] <jam> perrito666: reviewed
[13:00] <perrito666> jam: tx
[13:21] <_benoit_> Could the SSH storage worker be used to replace S3, to support the outscale EC2-compatible provider?
[13:22] <_benoit_> I mean, does it have some serious issues preventing it from being used for this?
[13:33] <fwereade> _benoit_, the ssh storage provider is kinda broken, because it won't work in an HA context -- we're going to be moving environment storage into gridfs within mongo
[13:33] <fwereade> _benoit_, not sure offhand exactly when it's scheduled
[13:34] <fwereade> if anyone's watching irc, I'd appreciate a review of https://code.launchpad.net/~fwereade/juju-core/arch-docs-1/+merge/219190 -- it's just a beginning, but I wanted to get something in
[13:34] <_benoit_> fwereade: what is the smoothest option right now, configuration-wise?
[13:34] <_benoit_> fwereade: the http storage seems to require a lot of intervention from the user
[13:36] <_benoit_> fwereade: I would like to replace S3 without asking the user to enter a configuration for storage
[13:45] <fwereade> _benoit_, sorry, had to dash out to collect laura
[13:46] <fwereade> _benoit_, what exactly is the use case? ec2-like, but not-really-ec2, environments?
[13:49] <_benoit_> fwereade: mostly (98%) ec2 without S3
[13:49] <_benoit_> fwereade: I am looking for a way to have a smooth integration (not the manual provider)
[13:50] <_benoit_> fwereade: https://wiki.outscale.net/display/DOCU/AWS+Compatibility+Matrix
[13:50] <_benoit_> I already patched goamz to add the provider regions
[13:51] <fwereade> _benoit_, I see, cool
[13:51] <fwereade> _benoit_, well, if you don't mind that HA won't be possible in the short term, I guess ssh storage might be viable
[13:52] <_benoit_> ok
[14:02] <voidspace> anyone else have this test failure on trunk?
[14:02] <voidspace> validatetoolsmetadata_test.go:123: ValidateToolsMetadataSuite.TestEc2LocalMetadataUsingIncompleteEnvironment
[14:04] <natefinch> wwitzel3: standup?
[14:07] <wwitzel3> natefinch: sorry off in space
[14:07] <wwitzel3> coming
[14:28] <wwitzel3> natefinch, voidspace: https://codereview.appspot.com/96190043/
[15:10] <bac> juju help scp shows using the -r option.  when i try to use it i get an error saying "error: flag provided but not defined: -r"
[15:14] <bac> sinzui: have you seen ^^^
[15:15] <sinzui> bac: yes
[15:15] <sinzui> I think juju ssh has mangled the ssh and scp command lines
[15:15] <bac> sinzui: is there a work-around?
[15:15] <sinzui> bac, last week I used scp with -i
[15:16] <sinzui> bac, we always know the key that was used, and we can always ssh to the public address
[15:16] <sinzui> of the unit
[15:22] <bac> sinzui: there is a bug filed?
[15:24] <natefinch> wwitzel3: LGTM'd
[15:29] <sinzui> bac, there isn't a bug.
[15:36] <bac> sinzui: i've filed bug 1318711
[15:36] <_mup_> Bug #1318711: juju scp reports -r as an invalid option <juju-core:New> <https://launchpad.net/bugs/1318711>
[15:36] <sinzui> I see
[15:37] <sinzui> bac, which version of juju?
[15:38] <bac> 1.19.2
[15:38] <sinzui> excellent, thank you bac
[15:38] <bac> sinzui: i'll add that
[15:38] <sinzui> bac, I just did thank you
[15:46] <sinzui> hazmat`, did you provide wallyworld_ with a list of bugs that we want to fix to improve juju's reliability and usability?
[15:46] <hazmat`> sinzui, i did
[15:46] <hazmat`> spreadsheet form
[15:47] <hazmat`> sinzui, the three bug tags were usability, observability, reliability afaicr
[15:47] <sinzui> hazmat`, may I see it. I want to ensure the bugs are kept high and on our list to fix soon
[15:48] <hazmat`> sinzui, yup.. just looking up the url.. ods network is a bit spotty
[16:27] <voidspace> does lbox propose work if there's a pre-requisite branch?
[16:28] <voidspace> or is the fact that it's *not* working for me due to some other reason...
[16:29] <perrito666> voidspace: did you tell lbox that there is such pre requisite?
[16:29] <perrito666> --req iirc
[16:29] <voidspace> perrito666: hm... no, but it's set in launchpad
[16:29] <voidspace> I did not know I needed to
[16:29] <voidspace> when I run lbox propose it just says:
[16:30] <perrito666> not to sound like fwereade but "it's in the docs"
[16:30] <voidspace> Branches: lp:~mfoord/juju-core/ha-rsyslog => lp:juju-core
[16:30] <voidspace> perrito666: hah, sure - along with a million other things you don't need to know until you do
[16:30] <perrito666> lol
[16:30] <voidspace> perrito666: which docs specifically?
[16:31] <voidspace> it's not in CONTRIBUTING
[16:31] <voidspace> ah, I stand corrected
[16:31] <voidspace> it is
[16:31] <voidspace> let me try that
[16:31] <voidspace> perrito666: thanks
[16:31] <perrito666> CONTRIBUTING
[16:31] <perrito666> tx
[16:31] <perrito666> heh your grep is faster than mine
[16:32] <voidspace> hmm... nope - still does the same no-op
[16:39] <stokachu> sinzui: did juju 1.18.3 get uploaded to proposed in trusty yet?
[16:51] <sinzui> stokachu, I have not seen it
[16:52] <stokachu> sinzui: ok do i need to file a proper sru for it?
[16:52] <sinzui> thank you stokachu
[16:53] <stokachu> sinzui: np, i didn't know if juju was bound by the same sru process
[16:54] <sinzui> stokachu, yes, the issue is a bit ambiguous. we are working on an MRE (micro-release exception) for the "juju" command to ensure trusty will always have the latest command-line client
[16:55] <stokachu> sinzui: ok cool, ill create the necessary diffs and get the sru process rolling for those bugs you listed in the release note
[16:55] <stokachu> ill only do trusty though since precise only has 1.16 in the archive
[16:57] <voidspace> ok, so deleting the existing merge proposal allowed lbox to work
[16:57] <voidspace> I don't think lbox can handle an existing merge proposal with a pre-requisite branch already set
[16:57] <stokachu> wallyworld_: this fast clone is soooooo awesome
[16:58] <perrito666> voidspace: there are lots of things lbox cant handle :) yet i like it
[16:58] <stokachu> openstack deployed (not including initial cloud dl and import of glance images) in 3 minutes
[16:58] <voidspace> going to krav maga
[16:58] <voidspace> back online later
[16:58] <stokachu> voidspace: gl dont kill anyone with single strike
[16:59] <stokachu> ;p
[16:59] <voidspace> stokachu: hah, I spend more time trying to avoid being killed...
[16:59] <voidspace> first one for four weeks
[16:59] <voidspace> bye all
[16:59] <stokachu> cya :)
[20:21] <natefinch> alexisb: sorry, that took a lot longer than I expected.  $800 down and now we have AC again.
[20:21] <alexisb> the $800 sucks
[20:22] <alexisb> but yay for AC!
[20:22] <alexisb> natefinch, do you still have time to meet?
[20:22] <alexisb> if not I can reschedule for tomorrow
[20:22] <wwitzel3> EOD for me, see everyone tomorrow
[20:22] <alexisb> bye wwitzel3
[20:23] <natefinch> alexisb: yeah, I can meet more now.  Honestly, $800 sucks, but I'd pay $800 every year for central A/C if I really had to.
[20:24] <alexisb> ok natefinch, I will rejoin the hangout
[20:40] <perrito666> mongo is really annoying regarding errors
[20:47] <waigani> fwereade: you about?
[20:49] <fwereade> waigani, yeah, with you in 5
[20:50] <thumper> hmm... need socks
[20:54] <natefinch> thumper: pretty sure socks are optional in the home office.
[20:57] <waigani> natefinch: not in dunedin!
[20:58] <thumper> natefinch: a bit too cold to work sockless right now
[20:59] <natefinch> hmm good point, you guys are going into winter
[21:00] <natefinch> thumper: there's a revision on trunk, 2656, that I want to update my branch to.... bzr update -r --revision=2656 doesn't work.  How do I do it?
[21:00] <thumper> natefinch: what do you mean by update to?
[21:00] <natefinch> (bzr says the revision doesn't exist)
[21:00] <thumper> you can change the tree
[21:00] <thumper> pull first
[21:00] <thumper> then either revert the tree to the revision
[21:01] <thumper> or pull that revision
[21:01] <thumper> you can branch from a particular revision
[21:01] <natefinch> thumper: not sure what the bzr word for it is.  I'm on HEAD of trunk, I need to go back in time to 2656, without making a new branch
[21:02] <thumper> natefinch: for what purpose?
[21:02] <bac> ping jamespage
[21:02] <natefinch> thumper: diffing versus the branch the windows guys worked on
[21:02] <thumper> natefinch: if you just want to look at the files at that revision
[21:02] <thumper> you can go "bzr revert -r 2656"
[21:03] <menn0> natefinch: bzr update -r 2656
[21:03] <thumper> that will make the working tree reset to that revision I think
[21:04] <menn0> natefinch: using update will get you the same thing but bzr will warn you that your working copy is out of date. bzr update without a rev gets you back.
[21:04] <natefinch> ahh, ok, so --revision was a red herring
[21:05] <natefinch> I see what I did, -r and --revision were the same thing, so I confused it
[21:05] <natefinch> funny
[21:07] <natefinch> ok, I gotta run, thanks for the help
[21:08] <menn0> anyway... morning all :)
[21:10] <thumper> o/ menn0
[21:13] <menn0>   \o thumper
[21:15]  * thumper grunts
[21:15] <thumper> why is reversing a slice not a one liner?
[21:17] <menn0> thumper: do you mind if I reorg our identity document a little?
[21:17] <menn0> thumper: sort.Sort(sort.Reverse(sort.IntSlice(keys))) apparently
[21:17] <thumper> menn0: not at all
[21:17] <thumper> menn0: that doesn't work
[21:17] <thumper> I don't want it sorted
[21:17] <thumper> I just want it reversed
[21:17] <menn0> thumper: ah of course
[21:18] <thumper> the Reverse function doesn't actually reverse it
[21:18] <thumper> it just changes the meaning of 'less'
[21:19] <menn0> thumper: yep. my brain is still waking up.
[21:19] <menn0> thumper: re the doc. I promise I won't delete anything just shuffle things around.
[21:20] <thumper> ok
[21:29] <fwereade> fwiw I will probably land this anyway first thing tomorrow, but I would appreciate eyeballs on https://codereview.appspot.com/91390043/ regardless, because I've probably got *something* hilariously wrong
[21:29] <fwereade> (just docs, I'm not going to *break* anything with wanton landing, except possibly people's brains)
[21:32] <waigani> fwereade: yay I've been wanting something like this :D
[21:33] <fwereade> waigani, it's not complete by any means, but it's a start :)
[21:39]  * perrito666 obtains a successful restore of an HA machine and with it EODs
[21:43] <menn0> thumper: doc reorg done. I'm still going through the source doc and adding more notes about stuff nobody has commented on yet.
[21:44] <thumper> cool
[21:53] <thumper> oh ffs
[21:54] <thumper> why oh why does go make it so hard to test for instance equality?
[21:54] <thumper> I just want to know if these two errors are the same error
[21:54] <thumper> and I can't
[21:54] <thumper> (for one of the tests)
[21:54] <thumper> because of incomparable error types
[21:54] <thumper> all I want to know is if they are the same, not equal
[21:57] <voidspace> can't you get an unsafe pointer from them and compare?
[21:57]  * voidspace not entirely serious
[21:57] <voidspace> morning southern folk
[21:57] <rick_h_> southern folk, haven't run into that one
[21:57] <menn0> voidspace: hi!
[21:58] <voidspace> o/
[21:58] <voidspace> hi rick_h_
[21:59] <rick_h_> voidspace: hey, jealous of your pics and travels. The canyon heli tour is on my todo list.
[21:59] <voidspace> rick_h_: yeah, it was pretty awesome
[22:00] <voidspace> just being at the canyon is awesome. It's as spectacular as you would expect.
[22:00] <rick_h_> yea, I'm trying to talk my wife into renting an airstream and doing a tour around all the parks around there.
[22:01] <rick_h_> https://airstream2go.com/
[22:02] <jcw4> rick_h_ oooh... always wanted one of those
[22:03] <rick_h_> jcw4: yea, working on getting her used to that idea for our next camper
[22:03] <jcw4> :)
[22:05] <thumper> o/ voidspace
[22:05] <thumper> voidspace: I have been thinking of doing exactly that
[22:05] <voidspace> thumper: hi
[22:05] <voidspace> oh dear
[22:05] <thumper> it seems like a complete deficiency in the language to not allow that
[22:05] <voidspace> wrap it in a helper and hide the shame
[22:06] <voidspace> yeah, seems pretty basic
[22:06] <voidspace> but then when you have pass by value the concept of "identity" is less useful
[22:18] <jamespage> bac, pong
[22:36] <thumper> cmars: oh hai
[22:37] <thumper> cmars: we need to make sure we have regular calls
[22:58] <cmars> thumper, hi
[22:58] <thumper> cmars: how's things?
[22:58] <cmars> thumper, not too bad
[22:59] <thumper> cmars: are you up for a catch up call tomorrow?
[22:59] <thumper> say 21:00 UTC?
[22:59] <cmars> thumper, certainly. that's 16:00 here i think. definitely wfm
[22:59] <thumper> cmars: would that work weekly for you?
[22:59] <cmars> thumper, yep, sounds good
[23:00] <thumper> ok, I'll book it in
[23:00] <thumper> cmars: I want to make sure we keep in sync with the identity work
[23:05] <waigani> davecheney: https://plus.google.com/hangouts/_/canonical.com/onyx-standup
[23:05] <waigani> cmars: https://plus.google.com/hangouts/_/canonical.com/onyx-standup
[23:40] <thumper> davecheney: I lost that last play link when I closed the hangout
[23:41] <thumper> davecheney: is there a real golang reason why I can't test for interface identity across, for example, errors of incomparable types?
[23:41] <thumper> hmm... that didn't read very well
[23:42] <thumper> I have two errors, I just want to see if they are the same error (not equal, but actually the same interface)
[23:42] <thumper> effectively don't deep compare the implementation of the interface, just compare the interface tuple
[23:54] <davecheney> thumper: so you want to say 'are these two errors the same type' ?
[23:54] <davecheney> use reflection
[23:54] <davecheney> 4 lines
[23:55] <davecheney> it gets easier if you don't want to compare every possible error implementation with every other error implementation
[23:55] <davecheney> ie, is this error an instance of the errgo wrapper and that one not
[23:56] <thumper> I just care about the implementation pointers
[23:56] <thumper> (and the type I guess)
[23:56] <davecheney> and only the implementation pointers ?
[23:56] <thumper> that is sufficient
[23:56]  * thumper heads to the gym
[23:58] <davecheney> thumper: http://play.golang.org/p/F2oLtm3jd7
[23:58] <davecheney> ^ a little surprising
[23:58] <davecheney> but you just figured out how io.EOF is implemented
[23:58] <davecheney> /s/you/i
[23:59] <davecheney> hmm