[00:07] <waigani_> thanks for the review thumper
[03:31] <menn0> wallyworld, thumper: joyent fix: http://reviews.vapour.ws/r/633/
[03:32] <menn0> wallyworld, thumper: I spent a long time trying to make a unit test for this work
[03:32] <menn0> wallyworld, thumper: but it looks like there's something missing from the fake Joyent API implementation
[03:35] <wallyworld> menn0: lgtm
[03:35] <anastasiamac_> menn0: u r a Joyent champion now :D
[03:38] <menn0> wallyworld: cheers. Just fixing commit message text as it doesn't quite make sense and will then merge.
[03:38] <menn0> anastasiamac_: oh no you don't... :)
[03:39] <wallyworld> hey yeah, that gets me off the hook :-D
[03:39] <anastasiamac_> menn0: kkkk... if u get CI unblocked - then u r  a champion?...
[03:40] <anastasiamac_> wallyworld: there mayb a room for joyent fan club...
[03:40] <menn0> anastasiamac_: maybe, but I don't think this is the entire fix because Joyent is consistently failing for master
[03:40] <menn0> anastasiamac_: I think there might be another problem to fix yet
[03:40] <menn0> anastasiamac_: looking at that now
[03:41] <anastasiamac_> menn0: there must b - it would not b xmas without more problems to fix :p
[03:41] <anastasiamac_> menn0: thnx!
[03:54] <mattyw> menn0, ping?
[03:57] <menn0> mattyw: pong!
[04:01] <thumper> sorry, was drinking coffee and arguing with kids on holiday
[04:02] <thumper> anastasiamac_: I feel like you type like teenagers text now :-)
[04:02] <anastasiamac_> thumper: why?
[04:04] <thumper> 'u' 'r'
[04:05] <thumper> menn0: I see that wallyworld has reviewed already and I don't need to do anything
[04:06] <menn0> thumper: correct :)
[04:10] <thumper> \o/
[04:21] <anastasiamac_> thumper: i think and type young
[04:21] <thumper> bwahaha
[04:24]  * anastasiamac_ considers whether it's worthwhile to be offended
[04:28] <thumper> no, don't be offended
[04:31] <menn0> wallyworld, thumper: the merge job seems to have gotten stuck.
[04:31] <menn0> wallyworld, thumper: it's been like this for a long time: http://juju-ci.vapour.ws:8080/job/github-merge-juju/1595/console
[04:32] <thumper> menn0: I'd give it 20 minutes
[04:32] <wallyworld> i've seen it take a while
[04:32] <thumper> menn0: how long?
[04:33] <menn0> thumper: at least half an hour at this point in the tests
[04:33] <thumper> hmm...
[04:33] <thumper> well...
[04:33] <menn0> i'm running the apiserver tests for this branch locally now
[04:33] <thumper> the test timeout is 20 minutes
[04:33] <thumper> and there are multiple slow packages there
[04:33] <thumper> and it buffers the output
[04:34] <thumper> so it is hard to say
[04:34] <menn0> thumper: ok, so i'll wait
[04:34] <thumper> but normally it is done in under 20 minutes for everything...
[04:34] <thumper> so perhaps it is wedged
[04:34] <thumper> wallyworld: do you know how to kick it right?
[04:34] <wallyworld> yes
[04:34] <wallyworld> i can restart it
[04:35] <menn0> wallyworld, thumper: the apiserver tests have gotten past that point on my machine already
[04:36] <wallyworld> menn0: thumper: job restarted
[04:36] <menn0> wallyworld: cheers
[04:37] <anastasiamac_> do i need something special for juju/names?
[04:37] <anastasiamac_> I am getting http://pastebin.ubuntu.com/9524130/
[04:38] <thumper> anastasiamac_: shouldn't do, let me check
[04:38] <thumper> anastasiamac_: pretty sure that access control is for everything under github.com/juju
[04:39] <thumper> anastasiamac_: nope, you are there
[04:39] <thumper> anastasiamac_: what is your remote set to?
[04:40] <anastasiamac_> thumper: i ran git remote add upstream https://github.com/juju/names.git
[04:40] <anastasiamac_> remote set-url origin git@github.com:anastasiamac/names.git
[04:41] <thumper> anastasiamac_: have you forked it?
[04:41] <thumper> on github?
[04:41] <anastasiamac_> thumper: :D yes
[04:41] <thumper> only asking because I have forgotten before
[04:41] <anastasiamac_> thumper: i forked it first
[04:41] <anastasiamac_> then ran cmds above
[04:42] <thumper> anastasiamac_: FYI, I would do the following:
[04:42] <anastasiamac_> i can do pretty much everything - run status, etc... but cannot pull ;(
[04:42] <thumper> git remote set-url upstream git@github.com:juju/names.git
[04:42] <thumper> git remote set-url --push upstream no-pushing
[04:43] <thumper> the git wire protocol is much faster than https
[04:43] <axw> anastasiamac_: dumb question, have you added your public SSH key to GitHub?
[04:44] <axw> (are you using SSH for juju/juju?)
[04:44] <anastasiamac_> axw: i must have to commit to juju/juju no?
[04:44] <axw> there are other transports
[04:44] <anastasiamac_> axw: dunno... how can i find out what m using for juju/juju
[04:44] <anastasiamac_> axw: ?
[04:44] <axw> anastasiamac_: "git remote -vv" in the local repo
[04:46] <anastasiamac_> axw: fetch and pull for origin and upstream are prefixed with https
[04:47] <axw> anastasiamac_: right, so you're not using SSH. if you want to use it, then you should add your public key in your user settings on GitHub
[04:47] <axw> anastasiamac_: https://github.com/settings/ssh
[04:48] <anastasiamac_> axw: i dont want ssh :)
[04:48] <anastasiamac_> axw: but m seeing the diff btw how my juju/juju and juju/names are setup..
[04:48] <anastasiamac_> axw: let me align them with party line
[04:50] <axw> anastasiamac_: git remote set-url origin https://github.com/anastasiamac/names.git
[04:50] <thumper> anastasiamac_: why don't you want ssh?
[04:50] <thumper> ssh is awesome
[04:50] <axw> if you really don't want SSH... I would use it though, you don't need to keep putting your password in
[04:52] <anastasiamac_> axw: thnx. i've sorted and will consider ssh :D u r awesome!
[05:14] <menn0> and we have a new CI blocker....
[05:14] <menn0> https://bugs.launchpad.net/juju-core/+bug/1402495
[05:14] <mup> Bug #1402495: Joyent deploys fail in CI <ci> <joyent-provider> <regression> <juju-core:New> <https://launchpad.net/bugs/1402495>
[05:15] <menn0> even with the Joyent routing issue fixed, this Joyent CI issue remains
[06:50] <mattyw> folks - is juju/utils a review board project now?
[07:09] <anastasiamac_> mattyw: it looks like it :-)
[07:12] <mattyw> anastasiamac_, :) I was sure I saw an old email that said it was, I saw it and went for it :)
[09:02] <dimitern> morning TheMue
[09:02] <dimitern> TheMue, 1:1?
[09:05] <dimitern> fwereade, hey
[09:07] <dimitern> fwereade, if you're around today, and hopefully feeling better, can I trouble you for a review on http://reviews.vapour.ws/r/611/ please?
[09:09] <TheMue> morning
[09:10] <TheMue> dimitern: oh, eh, yes, coming
[09:24] <perrito666> morning
[09:28] <fwereade> dimitern, not better, but looking anyway, because it looks completely awesome from the description and I feel bad for not having done it on fri
[09:28] <dimitern> fwereade, cheers :)
[09:50] <dimitern> morning perrito666, voidspace
[09:55] <fwereade> dimitern, that's *awesome*, LGTM with trivials
[09:55] <fwereade> dimitern, check with TheMue about YamlEquals though
[09:55] <voidspace> dimitern: morning
[09:55] <voidspace> perrito666: o/
[09:55] <dimitern> fwereade, thank you very much!
[09:55] <voidspace> fwereade: o/
[09:56] <voidspace> fwereade: sorry you're still unwell
[09:56] <voidspace> my wife brought a cold into the house - she's had it for more than two weeks
[09:56] <voidspace> and it's been rumbling on and off with me for a week
[09:56] <voidspace> not bad, just annoying
[11:54] <rogpeppe1> fwereade: hiya
[11:56] <rogpeppe1> anyone know when it's ok to call "unit-get public-address" from a charm hook?
[11:57] <rogpeppe1> is it supposed to be ok to call it in the install hook for example?
[11:57] <rogpeppe1> dimitern: ^
[11:58] <dimitern> rogpeppe1, it's set when the uniter starts (if it's available) so I'd say right away
[11:59] <rogpeppe1> dimitern: ok, thanks
[12:00] <dimitern> rogpeppe1, it might not be available if the machine doesn't have an address yet (i.e. still pending/provisioning)
[12:05] <jam1> hey axw, wallyworld, maybe we should be chatting here instead :)
[12:05] <wallyworld> maybe
[12:05] <dimitern> jam1, o/
[12:06] <jam1> hi dimitern, how's your day been going?
[12:07] <rogpeppe1> dimitern: thanks
[12:07] <rogpeppe1> dimitern: one other thing: if you call config-get in an install hook, should you get something sensible?
[12:08] <dimitern> jam1, not bad, multitasking already :) with goamz migration, ci blockers and addressability stuff..
[12:09] <dimitern> rogpeppe1, you know what - I was just looking at the code and I can't say I'm 100% sure the install hook will have public-address set
[12:09] <jam1> rogpeppe1: given you can do "juju deploy --config=" why wouldn't you get config in install?
[12:09] <dimitern> rogpeppe1, better ask fwereade for sure
[12:10] <rogpeppe1> jam1: i *think* i'm seeing a null result from config-get in an install hook; i need to check a bit more
[12:11] <fwereade> rogpeppe1, config-get should always be fine; I *think* unit-get public address should always work but it's possible that state and/or networking changes have made it "" instead of <whatever the private address is>
[12:11] <rogpeppe1> fwereade: ok, thanks. it's probably my code.
[12:12] <fwereade> rogpeppe1, sorry I'm not very around today, I'm mostly in bed, will be on a bit later for meetings
[12:12] <rogpeppe1> fwereade: aw, gbs
[12:12] <axw> jam1: heya. chatting about what? is there a meeting?
[12:13] <wallyworld> axw: jam1 was looking at the storage spec, but the 15.04 "official" one in the master spec list pointed to the old spec, not the latest one
[12:14] <axw> ok
[12:15] <dimitern> fwereade, rogpeppe1, unit-get private-address|public-address always triggers unit.Private|PublicAddress() getting called; however looking at the filter/modes/etc. it seems until we know there's an address (i.e. the addresses watcher returned changes) config-changed won't be triggered
[12:16] <fwereade> dimitern, not *quite* true, we always get a config-changed after install, so, ha, yes, it's possible that nothing's set the machine's address by the time it's running a unit
[12:16] <rogpeppe1> dimitern: the behaviour i *think* i'm seeing (i'm just about to test it) is that config-get is returning a null value even when there's a default value set
[12:16] <rogpeppe1> dimitern: this is in the install hook
[12:17] <fwereade> rogpeppe1, that seems odd, for sure, what do you get with --all?
[12:17] <rogpeppe1> fwereade: not sure yet - i haven't got a hook context to play with
[12:17] <fwereade> rogpeppe1, juju-run?
[12:18] <rogpeppe1> fwereade: ok, trying that
[12:19] <rogpeppe1> fwereade: should this work? juju run --unit identity/0 -- config-get --all
[12:19] <dimitern> rogpeppe1, why config-get ?
[12:20] <fwereade> rogpeppe1, can't remember if you use -- or quote the command
[12:20] <dimitern> rogpeppe1,  it should be unit-get
[12:20] <rogpeppe1> dimitern: i'm interested in config values here
[12:21] <dimitern> rogpeppe1, private|public-address is not a config setting, it's a separate thing
[12:21] <rogpeppe1> dimitern: yeah, i know. i was asking about public address initially, but i think the problem may lie elsewhere
[12:22] <dimitern> rogpeppe1, ah, ok
[15:08] <sinzui> dimitern, how goes bug 1397376?
[15:08] <mup> Bug #1397376: maas provider: 1.21b3 removes ip from api-endpoints <api> <cloud-installer> <fallout> <landscape> <maas-provider> <juju-core:In Progress by dimitern> <juju-core 1.21:In Progress by dimitern> <https://launchpad.net/bugs/1397376>
[15:09] <dimitern> sinzui, fixing last suggestions from fwereade and setting to land
[15:09] <sinzui> rock
[15:11] <dimitern> sinzui, I'm already working on the backport for 1.21 as well
[15:11] <sinzui> rock
[15:14] <dimitern> with markdown everywhere how are you supposed to curse politely? s**t becomes s[bold]t[/bold] :) I guess s\*\*t is the way to go
[15:33] <dimitern> sinzui, should I JFDI it?
[15:34] <dimitern> Does not match ['fixes-1401130', 'fixes-1400358']; Build rejected, reporting on proposal - both of these are fix committed btw
[15:34] <sinzui> dimitern, __JFDI__ for master
[15:35] <dimitern> sinzui, ok
[15:41] <ackk> hi, could someone please point me to where I should look to get the info shown in "state-server-member-status" from the delta stream?
[15:43] <perrito666> marcoceppi: I think I just opened the doc for comments, could you try?
[15:44] <marcoceppi> perrito666: :thumbsup: will circle back in a bit, got other things I need to clear out first
[15:44] <perrito666> marcoceppi: np, just shout if it's still not open when you try again, google docs hates me sometimes
[15:44] <marcoceppi> perrito666: it's open, thanks!
[15:47] <dimitern> rogpeppe1, did you just merge http://reviews.vapour.ws/r/636/ into utils?
[15:48] <dimitern> rogpeppe1, sinzui, this causes the CI bot to fail with: Extant directories unknown:
[15:48] <dimitern>  golang.org/x/crypto
[15:51] <sinzui> dimitern, looks like something got imported that wasn't documented
[15:52] <dimitern> sinzui, that's right - but it's coming from juju/utils
[15:53] <sinzui> dimitern, I believe the simple fix is to add it to dependencies.tsv.
[15:54] <sinzui> dimitern, I don't think this is like the cases where conflicting or ambiguous licensing prevents the package from being included. We just need to document it
[15:55] <dimitern> sinzui, to unblock the bot, yeah
[15:55] <sinzui> right
[15:55] <dimitern> sinzui, but we need to make sure this doesn't happen again, agreed?
[15:56] <dimitern> sinzui, that fits in just perfect with what I wanted to discuss with you guys - bringing juju-core dependencies under the ci bot (or at least the more actively developed ones)
[15:57] <dimitern> sinzui, which will force the devs to actually care about the CI/release process more - like adding a dependencies.tsv for juju/utils, etc. and making sure it works before just merging changes
[15:57] <sinzui> dimitern, the check_dependencies phase of building the release tarball ensures no deps that aren't in dependencies.tsv are included. The inclusion assumes we checked the license of the package and pinned it to a version we know is good
[15:57] <dimitern> sinzui, even simpler fix will be to actually revert rogpeppe1's change to utils and ask him to do it properly :)
[15:58] <sinzui> dimitern, yep
[15:58] <dimitern> sinzui, that's totally fine and good we have this check
[15:58] <rogpeppe1> dimitern: what was the problem with my change to utils?
[15:58] <dimitern> sinzui, it actually helped to catch this
[15:59] <dimitern> rogpeppe1, you added a dependency which wasn't mentioned anywhere and caused the build to potentially fail (it didn't due to a precaution on the bot)
[15:59] <rogpeppe1> dimitern: why should that cause the build to fail?
[15:59] <rogpeppe1> dimitern: aren't juju deps pinned?
[15:59] <dimitern> rogpeppe1, because it's not in dependencies.tsv of juju-core
[15:59] <rogpeppe1> dimitern: but juju-core won't be getting the latest version of utils
[15:59] <dimitern> rogpeppe1, and the bot verifies all imports are properly listed as deps
[15:59] <rogpeppe1> dimitern: because dependencies.tsv will mention an earlier version
[16:00] <dimitern> rogpeppe1, that sounds right... hmm why did it happen then?
[16:00] <rogpeppe1> dimitern: i don't see why making a change to utils tip could have broken juju-core, unless someone updated juju-core deps improperly
[16:01] <dimitern> rogpeppe1, can you please work with sinzui to see how this could've happened and how to fix the bot?
[16:01] <sinzui> dimitern, rogpeppe1 is Juju importing a non-master branch in this case?
[16:01] <dimitern> I have a pending bugfix which can't land because of that
[16:01] <rogpeppe1> dimitern: i'm in a standup right now
[16:01] <rogpeppe1> sinzui: not AFAIK
[16:01] <dimitern> rogpeppe1, np, later then?
[16:02] <rogpeppe1> dimitern: possibly, although i'm quite busy
[16:02] <sinzui> thank you rogpeppe1 , that eliminates the common reason for phantom deps
[16:02] <rogpeppe1> sinzui: what do you mean by "importing a non-master branch" ?
[16:03] <sinzui> dimitern, rogpeppe1 : Go checks out the master branch and gets its deps. Even when we ask for a non-master branch which has different deps, Go will leave the unwanted deps behind
[16:05] <rogpeppe1> sinzui: ah, so that'll be the reason this has broken
[16:05] <rogpeppe1> sinzui: because tip has dependencies that before-tip doesn't
[16:06] <sinzui> okay.
[16:07] <sinzui> rogpeppe1, dimitern one fix might be to change the build scripts to remove golang.org/x/crypto. I will consult with mgz_
[16:07] <rogpeppe1> sinzui: yeah, ISTR some hack we had to get around this kind of problem
[16:07] <rogpeppe1> sinzui: but i can't remember what the official way to do this was
[16:09] <sinzui> rogpeppe1, mgz_ pushed a change to a master branch I think. This case is different though.
[16:10] <rogpeppe1> sinzui: what we do is we don't use go get at all AFAIR
[16:10] <mgz_> yeah, I can hackit out
[16:10] <mgz_> we kind of have to use go get in some capacity
[16:11] <mgz_> godeps doesn't implement branch/clone
[16:13] <natefinch> why are unwanted dependencies a problem?
[16:14] <natefinch> I can see missing dependencies, but unwanted should just be garbage on disk that we ignore
[16:15] <mgz_> natefinch: we package this stuff, and also don't get great feedback from go about what actually goes into the binaries
[16:17] <mgz_> so, maybe some new random github package we downloaded doesn't actually end up in the binary... but that does need asserting for licence/security raisins
[16:21] <voidspace> it would be a pain to have to do an emergency security release because of a bug in a package that we're not actually using...
[16:21] <rogpeppe1> natefinch: it's good to check that we don't have unwanted deps because it's easy to omit deps from dependencies.tsv
[16:22] <natefinch> rogpeppe1: that's true... but the way to do that should be to check that godeps -f produces the same output as dependencies.tsv ... that could also be what we use to determine the files to package
[16:23] <rogpeppe1> natefinch: unfortunately it won't
[16:23] <rogpeppe1> natefinch: because i haven't come up with a nice way of including windows deps in godeps output yet
[16:23] <rogpeppe1> natefinch: i will, but it's more work that i originally thought
[16:24] <natefinch> rogpeppe1: oh right... dang
[16:25] <natefinch> rogpeppe1: I wonder if the answer is just to switch to godep.... vendor the dependencies, then there's no question about keeping things in sync.
[16:44] <rogpeppe1> natefinch: that may well be a reasonable thing to do
[16:51] <sinzui> dimitern, could the network fix for joyent need a separate change for precise? I see the same kind of log error in cloud-init for precise as last week and the week before
[16:55] <dimitern> sinzui, the one menno did?
[16:55] <sinzui> dimitern, yes
[16:56] <sinzui> r=me mgz_. Can you merge and deliver it to unblock dimitern?
[16:57] <dimitern> sinzui, unlikely, might be intermittent
[16:58] <dimitern> sinzui, menno's fix is not precise-specific, but then again if what joyent call a "precise image" is very different from the CPC images..
[16:59] <sinzui> dimitern, I am going to re-run tests again to capture logs and hope that trusty repeatedly passes
[16:59] <dimitern> sinzui, +1
[17:00] <mgz_> sinzui: sure thing
[17:05] <mgz_> sinzui: done - I only pulled the rev in on the juju-core-slave atm as the update all script barfs on an unknown 10. address for me - doesn't seem we have all of them in the .ssh/config on master somehow?
[17:05] <sinzui> :/
[17:06] <sinzui> mgz_, ssh and pull tip. I need to rethink the ssh/config rules for our private machines
[17:08] <mgz_> hm, could I run it from the jenkins master instead of locally?
[17:15] <voidspace> natefinch: want another puzzle
[17:16] <voidspace> natefinch: ?
[17:16] <voidspace> natefinch: given a cidr I need to calculate the last allocatable IP address, knowing that the *last one* is reserved
[17:16] <voidspace> natefinch: this is what I have: http://pastebin.ubuntu.com/9530758/
[17:16] <voidspace> natefinch: (includes code to work out lowest allocatable one - the first four are reserved too)
[17:18] <voidspace> natefinch: I'm sure there must be a better way...
[17:19] <dimitern> mgz_, \o/ ta!
[17:19] <tasdomas> a quick question about FormatSmart in juju/cmd - if I pass it a struct, is it supposed to delegate the marshalling to FormatYaml or return an error?
[17:20] <dimitern> tasdomas, that's an excellent question - I was poking into it earlier today, let me check..
[17:20] <tasdomas> dimitern, the comments on FormatSmart indicate that it should delegate a struct to FormatYaml
[17:20] <tasdomas> dimitern, but it seems to me that the default: case in the switch makes that impossible
[17:20] <dimitern> tasdomas, yes it appears so
[17:21] <dimitern> tasdomas, you refer to the last 3 empty cases?
[17:22] <dimitern> tasdomas, and the default as well
[17:22] <tasdomas> dimitern, https://github.com/juju/cmd/blob/master/output.go#L75
[17:22] <tasdomas> dimitern, yes
[17:22] <dimitern> tasdomas, looking at the code invokes a WTF in me
[17:23] <natefinch> voidspace: not sure I understand exactly what you're trying to do... can you give me an example?
[17:23] <tasdomas> dimitern, I was trying to use the smart formatter in a side project and passing it a struct returns an error
[17:23] <dimitern> tasdomas, well.. I wouldn't recommend using this for anything
[17:23] <tasdomas> dimitern, heh, thanks for the recommendation
[17:23] <dimitern> tasdomas, it's a (weak) attempt to make the output more pyjuju compatible
[17:24] <dimitern> tasdomas, but it's neither well tested nor anyone cares too much it seems
[17:24] <dimitern> tasdomas, I'd even say this deserves a bug report
[17:24] <tasdomas> dimitern, I see
[17:25] <voidspace> natefinch: when we get a subnet from ec2 the last ip address is reserved
[17:25] <tasdomas> dimitern, I'll probably submit a pull request for it
[17:25] <voidspace> natefinch: given the subnet in CIDR format I need to know what is the highest *allocatable* IP address
[17:25] <dimitern> tasdomas, that'll be awesome, thanks!
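The fix tasdomas describes can be sketched as a small formatter whose default case delegates structured values to a marshaller instead of returning an error. This is an illustrative sketch, not the real juju/cmd FormatSmart: `encoding/json` stands in for the FormatYaml delegation the comments describe, and the function name is made up.

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// formatSmart prints simple values directly and delegates structured
// values (structs, maps, slices) to a marshalling formatter, rather
// than letting them fall through to an error in the default case.
func formatSmart(v interface{}) (string, error) {
	if v == nil {
		return "", nil
	}
	switch reflect.TypeOf(v).Kind() {
	case reflect.Struct, reflect.Map, reflect.Slice:
		// Delegate structured values instead of erroring on them.
		b, err := json.Marshal(v)
		return string(b), err
	default:
		return fmt.Sprintf("%v", v), nil
	}
}

func main() {
	out, err := formatSmart(struct{ X, Y int }{1, 2})
	fmt.Println(out, err) // {"X":1,"Y":2} <nil>
}
```

With this shape, passing a struct produces marshalled output, matching what the FormatSmart doc comment promises rather than the error tasdomas hit.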
[17:26] <voidspace> natefinch: so, I take the "zeros" portion of the netmask, turn them to ones and OR it with the start IP address of the CIDR
[17:27] <voidspace> natefinch: that gives me the last IP address, from which I subtract one to calculate the last *allocatable* IP address
[17:28] <natefinch> voidspace: ok, so like if you get 123.123.123.0, then 123.123.123.255 is the "reserved" address, and .254 is the last allocatable one?
[17:28] <natefinch> voidspace: sorry, my networking skills are *really* rusty
[17:28] <voidspace> natefinch: correct
[17:29] <voidspace> natefinch: a CIDR (subnet block) is an IP address plus netmask
[17:30] <voidspace> natefinch: I believe my bit twiddling is correct, just not terribly elegant
[17:31] <voidspace> natefinch: also going via a float (math.Pow) is pretty awful
[17:31] <voidspace> natefinch: is there syntax for an integer power operation?
[17:31] <natefinch> voidspace: math.pow IIRC
[17:32] <voidspace> natefinch: that's float
[17:32] <voidspace> natefinch: which I'm using currently
[17:33] <natefinch> voidspace: just cast back and forth unless you think you'll run off the top of float64
[17:33] <voidspace> natefinch:    highMask := uint32(math.Pow(2, float64(zeros))) - 1
[17:33] <voidspace> yuck :-)
[17:33] <natefinch> yep
[17:34] <natefinch> somebody might have made a copy of the package for integers
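The calculation voidspace describes (set the host bits to ones, OR with the network address, then step back one past the reserved last address) can be done with integer shifts instead of `math.Pow`, avoiding the float round-trip. A minimal sketch, assuming the EC2 convention that the last address in the block is reserved; the function name is illustrative.

```go
package main

import (
	"fmt"
	"net"
)

// lastAllocatable returns the highest allocatable IPv4 address in the
// given CIDR, assuming the last address in the block is reserved.
func lastAllocatable(cidr string) (net.IP, error) {
	_, ipNet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipNet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("not an IPv4 CIDR: %q", cidr)
	}
	ones, bits := ipNet.Mask.Size()
	zeros := uint(bits - ones)
	// Pack the network address into a uint32 and set all host bits
	// to get the last address in the block...
	addr := uint32(ip[0])<<24 | uint32(ip[1])<<16 | uint32(ip[2])<<8 | uint32(ip[3])
	last := addr | (uint32(1)<<zeros - 1)
	// ...then step back one past the reserved address.
	last--
	return net.IPv4(byte(last>>24), byte(last>>16), byte(last>>8), byte(last)), nil
}

func main() {
	ip, err := lastAllocatable("123.123.123.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 123.123.123.254
}
```

The `uint32(1)<<zeros - 1` mask replaces the `uint32(math.Pow(2, float64(zeros))) - 1` expression from the pastebin exchange above.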
[17:41] <jw4> OCR PTAL : Fixes Go 1.4 vet error that blocks pre-push hook: (got approval, just looking for senior reviewer sign off) http://reviews.vapour.ws/r/630/
[17:45] <natefinch> jw4: FYI we shouldn't be building with 1.4.... though if we're at still backwards compatible with 1.2, that's fine, I guess.
[17:46] <jw4> natefinch: I see - I figured it's fine to develop in 1.4 assuming that the CI tests are all running in 1.2.1 or whatever the supported version is
[17:47] <natefinch> jw4: yeah, like I said, it's fine, but I think it does break some tests.
[17:48] <jw4> natefinch: Interesting.  I'm pretty sure we found and fixed some tests that were broken by 1.3, which was a good thing.
[17:52] <natefinch> jw4: it was something about our version of deep equals needing a different implementation in 1.4 or something...  not actually a test failing, but something in our test frameworks not working correctly.  IIRC Rogpeppe was the one who wrote the code
[17:53] <jw4> natefinch: I see.  I'm guessing we're not moving to 1.4 (or later) until our next LTS version of Ubuntu?
[17:54] <natefinch> jw4: or until it gets backported to Trusty. I'm not 100% certain on all the ins and outs of Ubuntu packaging... but yes, Ubuntu is the limiting factor in one way or another
[17:55] <jw4> natefinch: cool
[18:17] <voidspace> are we still blocked?
[18:21] <jw4> voidspace: yep
[18:22] <jw4> voidspace: http://goo.gl/4zd1e9 (shortened because the original is so long)
[18:24] <voidspace> jw4: thanks
[18:24] <voidspace> jw4: I've saved the bookmark this time :-)
[18:24] <jw4> lol
[18:45] <voidspace> right, g'night all
[18:45] <voidspace> see you tomorrow
[18:47] <wwitzel3> fwereade: ping
[20:07] <thumper> sinzui: ci status?
[20:07] <sinzui> thumper, We think we have a working rev.
[20:07] <thumper> sinzui: seems that both critical bugs are fix committed
[20:07] <thumper> \o/
[20:07] <sinzui> thumper, we are retesting to show that we didn't get lucky
[20:08] <thumper> kk
[20:08] <katco> thumper: hey thank you for reviewing my branch
[20:09] <thumper> katco: np, was a mix of RB and github reviews, as the github diff handles moves much better
[20:09] <katco> thumper: oh... i didn't even check gh; do i need to look there, or you were just using that as a tool?
[20:09] <thumper> katco: no, all comments are on rb
[20:10] <katco> thumper: ah cool
[20:11] <katco> thumper: thanks for aggregating them for me. i responded to all of your comments. please lmk if you disagree with anything (few questions in there too)
[20:11] <thumper> kk
[20:17] <menn0> sinzui: your comments on bug 1401130 indicate that the problem isn't fixed yet. is that right?
[20:17] <mup> Bug #1401130: Joyent instances sometimes can't communicate via the internal network <ci> <joyent-provider> <regression> <juju-core:Fix Committed by hduran-8> <juju-core 1.21:In Progress by menno.smits> <https://launchpad.net/bugs/1401130>
[20:17] <sinzui> menn0, don't panic
[20:17] <sinzui> menn0, The bug has always been that 1.20 and beta4 pass, alpha1 does not. 1.20 was failing about the time of the ci tests.
[20:19] <sinzui> menn0, perrito666 and i have been doing repetitive deploys with 1.20, beta4, and alpha1. We have no failures for beta4, 1 failure for 1.20, and *no* failures for alpha1 so far. I am starting the last test to remove the regressions
[20:20] <perrito666> sinzui: tx
[20:20] <menn0> sinzui: ok
[20:20] <menn0> sinzui, perrito666:  has the joyent routing fix been backported to 1.21?
[20:21] <sinzui> menn0, no, 1.21 never failed the tests
[20:21] <perrito666> menn0: not that I am aware of it
[20:21] <sinzui> menn0, I am not against backporting. I want dimiter's backport first though
[20:23] <menn0> sinzui: the routing fix is tiny and will be trivial to backport
[20:23] <sinzui> menn0, okay, thank you
[20:23] <perrito666> +1 to menn0 backporting his fix, one cannot always be too sure
[20:23] <menn0> sinzui: I really do think that it's just luck that 1.21 has been passing
[20:24] <menn0> sinzui: master hasn't been passing because of that and this other issue
[20:24] <menn0> sinzui: where the deploy command fails
[20:24] <sinzui> menn0, It's been a good streak given the number of bundles and release tests we did during the last two weeks
[20:25] <menn0> sinzui: indeed. that I can't quite understand given how easily I ran into the problem with 1.21 when testing manually
[20:26] <menn0> sinzui: I've been meaning to ask... with these deploy tests it looks like the Juju binary is pulled from a deb created elsewhere (I presume another jenkins job)
[20:27] <menn0> sinzui: how sure can we be that the deb matches the rev we think we're testing?
[20:28] <sinzui> menn0, if you expand the listing of successes, you can see joyent has been particularly bad recently http://juju-ci.vapour.ws:8080/view/Cloud%20Health/job/test-cloud-joyent/
[20:30] <sinzui> menn0, The cloud jobs require a binary built by the publishing job. Each job downloads the deb that matches the hosts arch. many machines don't have juju installed. those that do can only have juju stable, and the log of the test would show the wrong version during bootstrap
[20:31] <sinzui> menn0, you can see that when destroy-env fails, we fallback to the stable juju to call destroy-environment --force
[20:31] <menn0> sinzui: ok
[20:32] <menn0> sinzui: my concern is that the tests pull the "last successful" build for the arch from revision-publish but how does the job know what revision that build was built from and whether it matches what the deploy test thinks it's testing?
[20:33] <menn0> sinzui: I'm not saying it doesn't work. I would just like to understand how it works :)
[20:34] <sinzui> menn0, Those jobs cannot run if publish-revision fails. CI exits early and the other jobs are all left in pending
[20:36] <sinzui> menn0, CI has a requirements chain. Jenkin's crap UI corrupts this stanza, which is the configuration of the job
[20:36] <sinzui> [ci-director]
[20:36] <sinzui> group-name: tests
[20:36] <sinzui> requires: publish-revision
[20:36] <sinzui> failure-threshold: 2
[20:36] <sinzui> vote: true
[20:36] <sinzui> tags: subjects=substrate,CPC function=deploy substrate=joyent arch=amd64 series=precise
[20:38] <menn0> right, got it. so it's ci-director plugin that does the dependencies between jobs. I hadn't noticed that stuff before.
[20:39] <menn0> sinzui: thanks for the explanation
[20:39] <rick_h_> thumper: ping
[20:46] <natefinch> ericsnow, wwitzel3: can you guys update this doc with how things are going?
[20:47] <wwitzel3> natefinch: what doc?
[20:47] <natefinch> https://docs.google.com/a/canonical.com/document/d/1eqDC5GwcLor1dpl8Wg3nYZw9xHuIqdxXXJEeIRJ883w/edit
[20:49] <ericsnow> natefinch: can't edit
[20:49] <natefinch> ericsnow: reload
[20:52] <ericsnow> natefinch: updated
[20:53] <natefinch> ericsnow: thanks.  Obviously feel free to edit bullet points to fit whatever the actual tasks are etc.  Just want that doc to be up to date since The Powers That Be are looking at it (and I just realized I'd never shown it to you and Wayne ;)
[20:53] <ericsnow> natefinch: got it
[21:23] <sinzui> natefinch, thumper I think CI will bless master, but abentley found a regression in the new weekly tests we added. We need someone to look into bug 1402826
[21:23] <mup> Bug #1402826: 1.21 cannot "add-machine lxc" to 1.18.1 <add-machine> <ci> <lxc> <regression> <juju-core:Triaged> <juju-core 1.21:Triaged> <https://launchpad.net/bugs/1402826>
[21:50] <menn0> sinzui: the joyent routing fix is now backported to 1.21
[21:50] <thumper> sinzui: hmm...
[21:51] <menn0> sinzui: should I also backport to 1.20? (will there be more 1.20 releases?)
[21:51] <thumper> menn0: what do you know about the networker job?
[21:51] <thumper> JobManageNetworking
[21:51] <sinzui> menn0, I never want to do a 1.20 release again.
[21:51] <menn0> thumper: very little
[21:51] <sinzui> menn0, I hope 1.21 is the new stable soon
[21:51] <thumper> sinzui: do we expect to be able to have 1.21 talk to 1.18?
[21:51] <thumper> hmm..
[21:52] <thumper> how can we work out the version of the server?
[21:52] <thumper> I suppose we can go "environment get agent-version"
[21:52] <thumper> omg that is hacky
[21:52] <thumper> but probably necessary
[21:53] <menn0> thumper: why do you ask about the networker?
[21:53] <thumper> bug 1402826
[21:53] <mup> Bug #1402826: 1.21 cannot "add-machine lxc" to 1.18.1 <add-machine> <ci> <lxc> <regression> <juju-core:Triaged> <juju-core 1.21:Triaged> <https://launchpad.net/bugs/1402826>
[21:53] <thumper> menn0: but it is a different issue
[21:59] <menn0> thumper: looking at that bug
[22:00] <menn0> thumper: if the error about JobManageNetworking not existing is returned could the client try again without it?
[22:00] <thumper> that could work but is also hacky
[22:00] <menn0> thumper: agreed
[22:01] <menn0> thumper: but is capability based instead of hardcoding versions
[22:01]  * thumper nods
[22:02] <menn0> thumper: either way works though
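The capability-based fallback menn0 describes (retry the add-machine call without JobManageNetworking when the server rejects it) could look roughly like the sketch below. This is illustrative only: the function names, job strings, and error text are stand-ins, not juju's actual client API.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Hypothetical error an older server might return; the real wording
// would come from the juju apiserver.
var errUnknownJob = errors.New(`invalid machine jobs: "JobManageNetworking" not supported`)

// addMachine stands in for the client API call; a server predating
// JobManageNetworking rejects that job.
func addMachine(jobs []string) error {
	for _, j := range jobs {
		if j == "JobManageNetworking" {
			return errUnknownJob
		}
	}
	return nil
}

// addMachineCompat retries without JobManageNetworking when the server
// reports it as unsupported: a capability check rather than a
// hardcoded version comparison.
func addMachineCompat(jobs []string) error {
	err := addMachine(jobs)
	if err != nil && strings.Contains(err.Error(), "JobManageNetworking") {
		var fallback []string
		for _, j := range jobs {
			if j != "JobManageNetworking" {
				fallback = append(fallback, j)
			}
		}
		return addMachine(fallback)
	}
	return err
}

func main() {
	err := addMachineCompat([]string{"JobHostUnits", "JobManageNetworking"})
	fmt.Println("add-machine succeeded:", err == nil)
}
```

As menn0 notes, the downside is string-matching on an error message; the upside is that it keeps working against any server version without the client maintaining a version table.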
[22:07] <menn0> thumper: are you looking at that one?
[22:08] <thumper> menn0: can you? I'm otp
[22:08] <menn0> thumper: ok ... another day another CI blocker...
[22:20] <lazyPower> heyyyyyy thumper
[22:20] <mbruzek> hello thumper
[22:20] <lazyPower> where are you guys stashing the lxc downloads these days?
[22:20] <lazyPower> i have a feeling i've got a cached lxc image, and i've wiped all my templates
[22:21] <lazyPower> however - watching my process list i only see gzip -d pop up, not the wget
[22:21]  * thumper hides behind the call he is on
[22:21] <lazyPower> mmhmmmm
[22:21]  * lazyPower snaps
[22:21] <lazyPower> aint nobody got time for that
[22:21] <mbruzek> thumper: all my machines are in pending and I wiped out /var/lib/juju/containers/*
[22:40] <mbruzek> thumper: can you ping me after your call
[22:40] <jose> thumper: hey, I'm having some problems with local, lxc is stuck in pending
[22:50] <thumper> mbruzek, jose, lazyPower: off the call now and here to attempt to answer your lxc questions
[22:50] <lazyPower> thumper: you've got brilliant timing
[22:50] <lazyPower> thumper: ready for the awesomeness? it was UFW
[22:50] <thumper> UFW?
[22:50] <lazyPower> yeah UFW managed to block the agent => stateserver communication
[22:50] <thumper> what is UFW?
[22:51] <lazyPower> its a firewall
[22:51] <jose> supposed to be uncomplicated
[22:51] <lazyPower> the complicated uncomplicated firewall
[22:51] <thumper> ah
[22:51] <lazyPower> so, nothing moved, nothing to blame juju core about
[22:51] <thumper> oh good
[22:53] <lazyPower> thumper: what ports would i open in a firewall for the stateserver?
[22:53] <thumper> lazyPower: the apiserver port
[22:53] <thumper> and ssh
[22:53] <thumper> and any port opened by a service deployed to it
[22:54] <thumper> lazyPower: if you are doing HA then there are probably other state ports needed too
[22:54] <thumper> lazyPower: in order for the mongo dbs to sync
[22:54] <lazyPower> thumper: just local, non ha
[22:54] <lazyPower> 17017 is the default stateserver port correct?
[22:54] <thumper> I wish I had a week to hack to do what I liked
[22:54] <thumper> then I'd fix the local provider
[22:54] <thumper> yes
[22:54] <jose> 17070*
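Pulling thumper's list together, the UFW rules for a non-HA state server would look something like the fragment below. The port numbers are assumptions based on common defaults (17070 for the apiserver, per jose's correction); check your environment's actual configuration before opening anything.

```shell
# Sketch only: ports a Juju state server typically needs open.
sudo ufw allow 22/tcp     # ssh access to the machine
sudo ufw allow 17070/tcp  # juju apiserver (agent -> state server traffic)

# For HA, the mongo replica-set members must also reach each other to
# sync; 37017 is a commonly used default state port (assumption).
# sudo ufw allow 37017/tcp

# Plus any port opened by a service deployed to that machine.
```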
[22:54] <lazyPower> you've got 2 weeks coming up
[22:54] <thumper> lazyPower: hahaha
[22:54] <thumper> nope
[22:55] <thumper> I have a busy personal project for those
[22:55] <lazyPower> dont we all :P
[22:55] <thumper> :)
[22:55] <lazyPower> i'm going to tackle my basement
[22:55] <lazyPower> allright, thanks for humoring me thumper all the same
[22:55]  * jose has left #juju-dev (part)
[23:24] <menn0> wallyworld, thumper: fix for the current CI blocker. https://github.com/juju/juju/pull/1319
[23:34] <thumper> menn0: looks good, I think it needs to be in 1.21 too
[23:34] <thumper> menn0: and thanks
[23:35] <wallyworld> menn0: i am currently looking - i have a suggestion i'm part way through writing
[23:36] <menn0> menn0: yep was planning on backporting once the PR for master got the green light. i suspect it won't be a straightforward cherry pick because the machine command has been refactored recently.
[23:36] <menn0> thumper: urgh... ^^^ this was for you.
[23:37] <menn0> wallyworld: no probs. I will wait.
[23:37] <thumper> :-)
[23:37] <wallyworld> thumper: menn0: i wonder, instead of looking at agent version, should we not query the state server for the jobs it has and use that as the determiner?
[23:38] <menn0> wallyworld: is that really sensible though?
[23:38] <menn0> wallyworld: just because the state server doesn't have JobManageNetworking doesn't mean the new machine can't
[23:39] <menn0> wallyworld: (not that I'm completely sure about this)
[23:39] <wallyworld> not sure, i'd have to check in more detail. i thought that state server would have manage network job
[23:39] <wallyworld> if it doesn't then suggestion is moot
[23:39] <wallyworld> version check is probably ok
[23:39] <thumper> wallyworld: we don't have the facility to ask the machine what jobs it has
[23:39] <wallyworld> i just generically prefer checks based on capability rather than version
[23:39] <thumper> wallyworld: unless you are the machine agent
[23:40] <thumper> the client doesn't
[23:40] <thumper> wallyworld: understood
[23:40] <wallyworld> fair enough, was just thinking out loud
[23:40] <menn0> wallyworld: I prefer capability based checks too
[23:40] <wallyworld> my real suggestion was to make the get server version method a base method on EnvCommand
[23:40] <menn0> wallyworld: but I don't think it works in this case
[23:40] <wallyworld> np
[23:41] <menn0> wallyworld: sure, moving the method to EnvCommand probably makes sense
[23:41] <menn0> wallyworld: you're thinking we'll have more of these things come up?
[23:41] <wallyworld> ty
[23:41] <wallyworld> could do
[23:42] <wallyworld> i think we may well, but of course am not sure
[23:42] <wallyworld> seems plausible though
[23:42] <menn0> wallyworld: ok, i'll move it
[23:42] <wallyworld> thanks, seems like the bestter place for it
[23:53] <menn0> LOL! bestter