[00:13] <cmars> contrib Casey Marshall
[00:13] <cmars> (that was for mup_)
[00:18] <wallyworld> thumper: i disagree that bug 1385289 should block landings. stuff isn't broken, it's slower
[00:18] <mup_> Bug #1385289: local storage migration is very slow <ci> <regression> <test-failure> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1385289>
[00:18] <wallyworld> it should be critical for sure
[00:18] <davecheney> $ go version
[00:18] <davecheney> go version devel +211c346906d0 Tue Oct 28 09:56:33 2014 +1100 linux/power64le
[00:18] <thumper> wallyworld: agreed
[00:18] <davecheney> ^ real go, on power
[00:18] <wallyworld> \o/
[00:19] <thumper> nice
[00:19] <davecheney> thumper: fixed a few bugs there today
[00:19] <thumper> \o/
[00:20] <davecheney> including some stuff that makes me wonder how the guys at google were testing this
[00:20] <davecheney> https://code.google.com/p/go/source/detail?r=cf3c9803a3d2116a9621fb1e9b2226a8fa3d5e10
[00:20]  * thumper -> physio
[00:20] <davecheney> for example
[00:32] <waigani> menn0: you about?
[00:33] <menn0> waigani: yep
[00:33] <waigani> menn0: i've hit a tricky one, can we talk it over?
[00:33] <waigani> menn0: standup channel?
[00:33] <menn0> yep and yep
[00:45] <perrito666> sorry did you say that the critical bug was fixed?
[01:25] <waigani> menn0: should we add a check for the EnvUUID as a generic test for all the upgrade steps?
[01:26] <menn0> waigani: it already does :)
[01:26] <menn0> see checkAddEnvUUIDToCollection
[01:27] <waigani> menn0: ? how did the tests pass then?
[01:27] <menn0> that checks that document has the env-uuid field
[01:27] <menn0> but it doesn't check that the struct has it added
[01:27] <menn0> there's no generic way to do that
[01:27] <menn0> well... maybe there is but it won't be pretty
[01:28] <waigani> reflection and assuming the name is EnvUUID?
[01:28] <waigani> yeah, it would be ugly
[01:28] <waigani> I'm happy to let it go
[01:28] <menn0> I think the best bet is to check in the test function after checkAddEnvUUIDToCollection has done its thing
[01:29] <menn0> so for instanceData, check EnvUUID where MachineId and Status are checked
[01:31] <rick_h_> wallyworld_: I'm going to check with frankban on adding this as a new bug but fyi on https://bugs.launchpad.net/juju-core/+bug/1336843 as it's a fix in alpha2 that has a follow up effect.
[01:31] <mup_> Bug #1336843: bootstrap without a jenv destroys an existing environment  <bootstrap> <juju-core:Fix Released by cox-katherine-e> <https://launchpad.net/bugs/1336843>
[01:32] <wallyworld_> rick_h_: no worries, file a bug if needed and katco can look into it
[01:32] <rick_h_> wallyworld_: cool
[01:33] <wallyworld_> rick_h_: if there's an issue, we'll fix asap
[01:33] <wallyworld_> thanks for letting us know
[01:37] <davecheney> thumper: $ juju version
[01:37] <wallyworld_> axw: long story wrt SyncBuiltTools, maybe my explanation was crap. i took out the stream param and wrote to a "custom" directory/stream when building the tools but it failed and i traced through the code to figure out what was happening. but the logic that syncs expects the tools in the source datasource to be in the same stream as the destination, since it needs to read from the public cloud that way. hard to explain,
[01:37] <wallyworld_> happy to discuss if needed
[01:37] <davecheney> 1.21-alpha3-trusty-power64le
[01:44] <thumper> davecheney: awesome to see
[01:45] <waigani> menn0: http://reviews.vapour.ws/r/269/
[01:45]  * menn0 looks
[01:46] <davecheney> ok, now back to what I was supposed to be doing
[01:47] <axw> wallyworld_: SyncBuiltTools needs a stream param; I was just saying --upload-tools should set an obviously arbitrary value
[01:48] <wallyworld_> axw: trouble is that an arbitrary value breaks the processing, but i'll take another look to be sure
[01:48] <menn0> waigani: done
[01:48] <waigani> sweet, thanks menno
[01:48] <wallyworld_> since upload-tools calls a method which copies from source to dest, and that copy needs to have same stream for source and dest, not an arbitrary one for source
[01:49] <axw> wallyworld_: upload-tools does not do that anymore.
[01:49] <axw> look at uploadTools in cmd/juju/upgradejuju.go; it opens the file and passes it to UploadTools
[01:49] <wallyworld_> some infrastructure did in the tests
[01:50] <wallyworld_> i'll take another look to be sure,but a whole bunch of sync tools tests failed
[01:50] <wallyworld_> if the source stream != dest stream
[01:51] <wallyworld_> and sync tools calls that same shared function as upload tools uses, from memory
[01:51] <waigani> menn0: testing for EnvUUID field, we currently are not doing it for any of the collections. Shouldn't be for all or none?
[01:51] <menn0> waigani: probably
[01:52] <menn0> waigani: your call
[01:52] <menn0> waigani: the most important thing I guess is that the field is there in the DB, which we are testing
[01:52] <menn0> waigani: it's trivial to add the field to the struct later if we've forgotten
[01:52] <waigani> menn0: yep, I'll leave it as is
[01:54] <axw> wallyworld_: http://paste.ubuntu.com/8712745/    <- all tests pass in cmd/juju with this, and upgrade-juju works as expected. haven't tested bootstrap, but should be the same.
[01:55] <wallyworld_> axw: trouble is, when i made that change, most tests in environs.sync fail
[01:55] <axw> ?? how could a change in cmd/juju affect environs/sync?
[01:55] <wallyworld_> oh, i see what your saying
[01:55] <wallyworld_> you're
[01:56] <wallyworld_> i had experimented with not having the streams parameter at all
[01:56] <wallyworld_> and just using "custom" all the time
[01:56] <axw> right, the param is needed for *other* cases, just no in this and the bootstrap case
[01:56] <axw> not*
[01:56] <wallyworld_> right ok, fair point. sorry i misunderstood you
[01:56] <axw> nps
[01:57] <wallyworld_> i'll change that when i do the index file fix
[01:57] <axw> it's a fairly minor thing, just wanted to make sure we both were on the same page
[01:57] <wallyworld_> yeah, np. thanks for pushing the issue
[02:22] <waigani> menn0: the instanceData upgrade step. To add a post-env UUID based on the machine _id, we have to either: assume machine _id is pre-env and pass it through st.docID() (in which case the upgrade step has to come before the machine upgrade), or assume the machine _id is post-env (in which case the upgrade step has to come after the machine upgrade)
[02:22] <waigani> menn0: i.e. I don't see how we can make the step not care about order
[02:23] <menn0> waigani: or you could detect if the machine is pre or post env UUID upgrade
[02:23] <menn0> waigani: by checking the env-uuid field
[02:24] <waigani> menn0: or that
[02:24] <waigani> menn0: okay, will do
[02:24] <menn0> waigani: if that seems too messy, then pick an ordering and add a comment in the list of steps explaining that the ordering is important and why
[02:26] <waigani> menn0: I didn't suggest it as it seemed a bit messy, I'll see how bad it looks and fall back to a comment
[02:26] <waigani> menn0: I also forgot about just checking the EnvUUID - thought we'd have to regex to _id
[02:27] <menn0> waigani: cool. yeah, just see how it looks.
[02:32] <axw> wallyworld_: what's the simplest way to create tools+metadata from source?
[02:32] <axw> I only want the built tools in the metadata I create
[02:35] <wallyworld_> axw: there's no easy way as such. if you have a tarball, you can easily generate metadata, but as far as i know, there's no easy way to get a tarball from source
[02:35] <axw> wallyworld_: that's what I thought - thanks
[02:35] <wallyworld_> axw: what are you trying to do?
[02:36] <axw> just want to live test the upgrade bug
[02:36] <axw> I've got a unit test
[02:36] <axw> I'll bootstrap local and pull the tarball out of there
[02:36] <wallyworld_> yep, that's what i do
[02:36] <wallyworld_> maybe we need a maketarball plugin
[02:39] <waigani> menn0: so we search for instanceData docs with the machineID - pre or post env depending on whether or not the instanceData upgrade step has been run
[02:40] <waigani> menn0: I'm thinking order just matters: migrate 1. instanceid, 2. machines envUUID 3. instanceData envUUID
[02:41] <menn0> waigani: that's probably less messy isn't it?
[02:41] <menn0> waigani: just write a comment explaining it and be done with it
[02:42] <waigani> menn0: sgtm!
[02:53] <waigani> menn0: http://reviews.vapour.ws/r/177/
[03:06] <waigani> menn0: http://reviews.vapour.ws/r/269/
[03:12]  * menn0 is looking
[03:16] <menn0> waigani: done
[03:17] <waigani> menn0: cheers
[03:17] <waigani> thumper: http://reviews.vapour.ws/r/269/
[03:17] <waigani> thumper: http://reviews.vapour.ws/r/177/
[03:18] <thumper> ack
[03:58] <wallyworld_> thumper: remind me, we're not meant to run apt-get update for local provider are we? i thought we'd check for dependencies like cpu-checker and complain if they weren't there, but not just install shit onto a user's host http://pastebin.ubuntu.com/8714033/
[04:03] <davecheney> thumper: menn0 https://github.com/juju/juju/pull/976
[04:03] <davecheney> i haven't created an rbt review for this yet
[04:03] <davecheney> i want to get a sense for how gross you think this is
[04:03] <davecheney> (I think it's 50% gross)
[04:03] <thumper> davecheney: rbt gets magically created for you now
[04:04] <thumper> wallyworld_: yes we are supposed to run update by default for local
[04:04] <thumper> wallyworld_: we should be able to configure it off for those who know what they are doing
[04:04] <wallyworld_> thumper: ok, ta
[04:04] <thumper> and have done an update recently themselves
[04:04] <wallyworld_> thumper: that was on my hst
[04:04] <wallyworld_> host
[04:04] <wallyworld_> not the lxc container
[04:04] <thumper> oh...
[04:05] <thumper> no, we install shit
[04:05] <wallyworld_> that log from when i bootstrapped
[04:05] <thumper> which is just another reason to fix the local provider
[04:05] <wallyworld_> yeah :-(
[04:05] <thumper> so that machine zero is a container too
[04:05] <wallyworld_> i thought we'd check but not install
[04:05] <thumper> I don't remember that
[04:05] <wallyworld_> maybe i'm making it up
[04:20] <wallyworld_> jam: i have an lxc question - you suggested in a bug comment setting a http/apt/whatever proxy to http://10.0.3.1:8000 instead of http://localhost:8000 where 10.0.3.1 is the lxcbr0 bridging address. i have set up a little http server on the host and can wget to it fine from the host, but inside the lxc container, wget can't get out due to connection refused. could be a firewall issue between container and host? do you
[04:20] <wallyworld_> have any suggestion?
[04:21] <jam> wallyworld_: so you put an http server on your host machine, and then are unable to access 10.0.3.1:PORT from lxc, is that correct?
[04:21] <jam> wallyworld_: did you start your HTTP server before starting LXC?
[04:22] <jam> Is it binding to "*" (0.0.0.0) or is it binding to 127.* or to an explicit IP address?
[04:22] <wallyworld_> jam: yes, but can access localhost:PORT from host
[04:22] <jam> wallyworld_: from the *host* can you access 10.0.3.1:PORT
[04:22] <wallyworld_> yes
[04:22] <wallyworld_> from my PC
[04:22] <wallyworld_> oh
[04:22] <wallyworld_> sec, let me check
[04:23] <jam> wallyworld_: my first thought is that your service isn't bound to all networks
[04:23] <wallyworld_> jam: yeah, can't connect to 10.0.3.1 from host. i was just trying to validate that replacing localhost with the bridge address actually worked
[04:24] <wallyworld_> maybe there's no need to do that
[04:24] <jam> wallyworld_: so are you just doing something like python SimpleHTTPServer ?
[04:24] <wallyworld_> the http proxy, apt proxy etc are all set to <bridge addr> as required
[04:24] <jam> Are you handing it an address to bind to ?
[04:24] <jam> Rather than "0.0.0.0" ?
[04:25] <jam> wallyworld_: it might certainly be a real life issue, as having your HTTP proxy set up may not bind it to the LXC bridge address if it comes up first.
[04:25] <wallyworld_> jam: ah, i see the issue, the little go prog is binding to localhost
[04:25] <wallyworld_> http.ListenAndServe("localhost:8000", nil)
[04:25] <jam> wallyworld_: right, I think you can change that to ":8000' or maybe "0.0.0.0:8000"
[04:26] <wallyworld_> jam: i assumed that would cover all connections into the physical host
[04:26] <jam> wallyworld_: nope, specifying localhost restricts it to 127.* as it is how you avoid making something public
[04:26] <wallyworld_> oh, i'll try that, ty :-)
[04:27] <wallyworld_> jam: you are awesome, that worked, ty
[04:28] <jam> wallyworld_: np
[04:38] <menn0> davecheney: looking now
[04:58] <menn0> davecheney: done
[05:13] <stokachu> anyone notice when doing a juju bootstrap with a maas provider that when it runs its apt-get update/upgrade it returns a non zero status?
[05:13] <stokachu> even though its chugging right along
[05:24] <axw> stokachu: apt-get update/upgrade fails, but bootstrap continues on? what version of juju is that?
[05:24] <stokachu> 1.20.10
[05:24] <stokachu> the upgrade doesn't fail though
[05:27] <axw> can't say I've seen that. we do "set -e" ... can't see any obvious reason why that'd happen
[05:27] <stokachu> it returns the same 124 status code
[05:29] <davecheney> stokachu: exit 124 is timeout
[05:29] <davecheney> that is coming from deep the bowels of apt
[05:30] <stokachu> ok
[05:30] <davecheney> stokachu: did you paste the bootstrap log ?
[05:30] <stokachu> nah im doing another deploy
[05:31] <stokachu> ill paste once it fails again
[05:37] <davecheney> kk
[06:14] <davecheney> The following packages have unmet dependencies: gdb : Depends: libpython3.4 (>= 3.4~b1) but it is not going to be installed
[06:14] <davecheney> kill me
[06:48] <tasdomas> morning
[06:49] <tasdomas> ashipika was trying to land his PR and got this response from the buildbot: http://juju-ci.vapour.ws:8080/job/github-merge-juju/1029/console
[06:49] <tasdomas> any ideas as to what may be causing this?
[06:51] <ashipika> and this one is interesting, as well http://juju-ci.vapour.ws:8080/job/github-merge-juju/1028/console
[06:59] <davecheney> tasdomas: mongodb shat itself on startup
[07:00] <davecheney> that is why it cannot complete the TLS handshake
[07:01] <tasdomas> davecheney, should we retry the merge?
[07:01] <davecheney> yes
[07:01] <davecheney> it'll pass eventually
[07:02] <tasdomas> davecheney, ok - ashipika's first attempt failed when godeps failed to retrieve launchpad.net/tomb
[07:02] <davecheney> yup
[07:02] <davecheney> in that case
[07:02] <davecheney> launchpad shat itself
[07:02] <ashipika> :D
[07:02] <ashipika> shit is definitely transitive..
[07:02] <davecheney> ashipika: we checkout all the build deps on each build
[07:03] <davecheney> makes the build susceptible to other systems that are weakly available
[07:03] <ashipika> davecheney: no problem, should i just keep retrying them until the build succeeds?
[07:03] <davecheney> ashipika: yup
[07:03] <davecheney> both of those failures are transient
[07:03] <ashipika> davecheney: rogerroger
[07:03] <davecheney> sorry for the inconvenience
[07:04] <ashipika> davecheney: no, no, i completely  understand.. thanks for the info
[07:05] <davecheney> ashipika: np
[07:26] <dimitern> morning all
[07:30] <jam> morning dimitern
[07:31] <dimitern> hey, jam!
[07:53] <mattyw> morning all
[07:53] <mattyw> dimitern, morning
[08:02] <TheMue> morning
[08:26] <voidspace> morning all
[09:58] <jam> dimitern: I'm not sure if you saw the calendar update, but we moved the standup time by 45 minutes to see if it would work better for everyone (you may not be back yet, I guess)
[09:59] <jam> TheMue: voidspace: just a poke that it is soon since I know the calendar is changed
[09:59]  * fwereade out for a bit
[09:59] <jam> hey fwereade
[09:59] <jam> have fun :)
[09:59] <voidspace> jam: omw
[10:00] <voidspace> jam: hmmm... firefox is in a redirect loop
[10:00] <voidspace> restarting browser time...
[10:00] <jam> voidspace: sounds exciting
[10:00] <jam> where will you end up next ? :)
[10:00] <voidspace> oh it is
[10:00] <voidspace> hah
[10:00] <voidspace> always an adventure
[10:05] <dimitern> voidspace, jam, TheMue, hey guys, I've just come back - omw for the standup
[10:05] <voidspace> dimitern: hey
[10:06] <perrito666> morning
[10:06] <voidspace> perrito666: morning
[10:09] <mattyw> dimitern, after the standup could you take a look at this? http://reviews.vapour.ws/r/265/
[10:11] <dimitern> mattyw, sure
[10:12] <mattyw> dimitern, thanks very much - it's the fix for the add-unit bug you found last week
[10:15] <dimitern> mattyw, sweet!
[10:40] <perrito666> is anyone using exuberant-ctags?
[10:46] <mattyw> perrito666, I am
[10:47] <perrito666> mattyw: and that with vi?
[10:47] <mattyw> perrito666, of course (are there other editors ;) )
[10:47] <perrito666> mattyw: I am getting an empty tagbar with go
[10:47] <perrito666> yet, if I ctags -x on a go file I get results
[10:48] <mattyw> perrito666, you use tagbar? wow - not used that in a loooong time.
[10:48] <mattyw> perrito666, which ctags are you using for go?
[10:48] <perrito666> mattyw: care to share what are you using?
[10:48] <perrito666> 5.9-svn
[10:49] <mattyw> perrito666, https://github.com/jstemmer/gotags
[10:51] <perrito666> mattyw: I am a bit wary of using go get as a package manager, since I tend to nuke my /bin in go
[10:52] <mattyw> perrito666, I don't update it enough to worry about package management - just build it then move it somewhere you don't nuke
[10:52] <mattyw> perrito666, do you use godef?
[10:52] <mattyw> perrito666, and by extension vim-godef?
[10:52] <perrito666> mattyw: I dont, what is it?
[10:54] <mattyw> perrito666, https://github.com/dgryski/vim-godef
[10:54] <perrito666> sweeeet
[13:09] <wwitzel3> fwereade: so I decided to go the explicit route and add a --no-remote-unit option
[13:10] <fwereade> wwitzel3, ok, that sounds sane
[13:11] <fwereade> wwitzel3, what do we do if nothing's specified and no units are in the relation? require --no-remote-unit, or just execute without one?
[13:14] <perrito666> ericsnow: ping
[13:14] <wwitzel3> fwereade: my thought was require --no-remote-unit .. easiest is to just ask what they'd prefer as the desired behavior and make sure it is documented accordingly
[13:18] <perrito666> rogpeppe: your timing is not the best :p
[13:19] <rogpeppe> perrito666: i'm sorry - i only just saw it
[13:20] <rogpeppe> perrito666: it shouldn't be hard to change though, should it?
[13:20] <perrito666> nope, it's just that I have to revert it :p it would be nice to have a $$revert$$
[13:20] <rogpeppe> perrito666: ha, yes
[13:21] <rogpeppe> perrito666: git makes it very easy to do that though
[13:24] <perrito666> rogpeppe: It does since the bot treats it as a merge
[13:25] <perrito666> rogpeppe: Ill make sure that you did not miss anything before making the change but you might be right
[13:25] <rogpeppe> perrito666: thanks
[13:26] <rogpeppe> perrito666: i don't like seeing nice small APIs made bigger because of testing issues :)
[13:28] <perrito666> rogpeppe: I like it no more than having smart dummy objects :p It was a fast trade off, anyway going for your way now BUT, you will have to review it :p
[13:28] <rogpeppe> perrito666: np
[13:37] <fwereade> wwitzel3, I'm inclined to think that not having a remote unit is actually perfectly valid
[13:37] <fwereade> wwitzel3, and I'm also starting to feel sure that specifying a remote unit that appears not to exist *may* be valid
[13:38] <fwereade> wwitzel3, so it shouldn't be an automatic error
[13:38] <fwereade> wwitzel3, but it should probably be some sort of warning..?
[13:39] <fwereade> sorry bbiab
[13:39] <wwitzel3> fwereade: ok, that sounds fine to me
[13:46] <voidspace> rebooting
[14:03] <voidspace> dimitern: ping
[14:18] <ericsnow> dimitern: could you take a look at the patches I have up?
[14:18] <ericsnow> dimitern: the most important are http://reviews.vapour.ws/r/67/ and http://reviews.vapour.ws/r/135/)
[14:19] <ericsnow> dimitern: however, I could really use eyes on http://reviews.vapour.ws/r/79/ and http://reviews.vapour.ws/r/126/ as well
[14:31]  * perrito666 looks at the 47" tv and wonders how comfortable its as a work screen
[14:36] <dimitern> voidspace, pong
[14:36] <dimitern> ericsnow, sure, looking
[14:36] <ericsnow> dimitern: thanks :)
[14:36] <dimitern> I was on a call, sorry for responding slowly
[14:37] <voidspace> dimitern: so we have stub implementation of AllocateAddress for all providers
[14:37] <voidspace> dimitern: that needs to change
[14:37] <voidspace> dimitern: I've mostly done that
[14:37] <dimitern> voidspace, great
[14:37] <voidspace> dimitern:  AllocateAddress(_ instance.Id, _ network.Id, _ network.Address) error
[14:37] <voidspace> dimitern: ?
[14:38] <dimitern> voidspace, that sounds good to me
[14:38] <voidspace> dimitern: cool, there's an existing test (dummy provider) that needs to change too
[14:38] <voidspace> just finishing that
[14:38] <dimitern> voidspace, possibly with the addition of a concrete error type returned for "conflict" or "address already reserved" sort of error
[14:39] <voidspace> dimitern: I figured we could sort that out later...
[14:40] <voidspace> dimitern: as we also need to handle "request failed - couldn't reach server" errors too
[14:40] <dimitern> voidspace, yeah, fair point -- just a reminder
[14:40] <voidspace> changing error type to a concrete type is trivial
[14:41] <dimitern> voidspace, "request failed", as in "i did 5 attempts so far and it failed all" ?
[14:41] <voidspace> dimitern: I guess
[14:41] <voidspace> whatever strategy we pick
[14:41] <voidspace> I did discuss that with John a bit yesterday but it morphed into a discussion about race conditions
[14:42] <dimitern> voidspace, I think ec2 deserves some retrying logic, being occasionally flaky
[14:42] <voidspace> dimitern: yep, ideally we'd abstract the retry machinery so it can just be reused
[14:43] <voidspace> rather than ad-hoc retrying of all api calls
[14:44] <dimitern> voidspace, there's *some* reusable bits already - around the AttemptStrategy type
[14:44] <voidspace> right
[14:44] <dimitern> voidspace, I even seem to recall a strategy that takes rate limiting into account for ec2, but I might be wrong
[14:45] <voidspace> dimitern: right, we did some rate limiting work
[14:45] <voidspace> man, that was one of the first things I worked on
[14:46] <dimitern> voidspace, yeah :) -- but wasn't that related to our apiserver only?
[14:47]  * TheMue listens
[14:48] <dimitern> ericsnow, hey
[14:48] <voidspace> dimitern: I don't recall the details...
[14:48] <dimitern> ericsnow, so wrt http://reviews.vapour.ws/r/126/ - did you reach an agreement with davecheney about the approach ?
[14:49] <voidspace> dimitern: http://reviews.vapour.ws/r/274/
[14:50] <dimitern> voidspace, cheers -- will have a look after I'm done with ericsnow's
[14:50] <voidspace> dimitern: cool, thanks
[14:55] <TheMue> voidspace: AllocateAddress looks fine, implementation is nice short so far :D
[14:55] <voidspace> TheMue: :-)
[14:56] <voidspace> TheMue: yeah, done...
[14:56] <TheMue> +1
[14:56] <voidspace> TheMue: thanks
[14:58] <ericsnow> dimitern: wrt 126, while I addressed all his comments, I never heard back from Dave
[15:00] <dimitern> ericsnow, right, I'd send him a mail and ask for a final look, considering his initial not lgtm, just to be on the nice side
[15:02] <ericsnow> dimitern: oh, that...we did sort that out.  I re-wrote it using direct HTTP requests rather than going through the websockets API
[15:02] <dimitern> ericsnow, ah, ok -- it's fine then :)
[15:38] <dimitern> voidspace, you've got a review
[15:38] <dimitern> ericsnow, 1 review done, 3 to go
[15:38] <ericsnow> dimitern: rock on! :)
[15:41] <voidspace> dimitern: thanks
[15:41] <voidspace> dimitern: cool, I'll remove the TODO and merge
[15:42] <dimitern> voidspace, +1
[15:59] <voidspace> how odd
[15:59] <voidspace> internet on my desktop just stopped working
[15:59] <voidspace> was still working for everyone else in the house
[15:59] <voidspace> anyway, a reboot fixed it
[16:07] <perrito666>  rogpeppe https://github.com/juju/juju/pull/978
[16:09] <rogpeppe> perrito666: one thing: please make sure that the call to MgoTestPackage is in peergrouper_test, not peergrouper
[16:10] <rogpeppe> perrito666: otherwise the tests in your workerJujuConnSuite will never be tested
[16:11] <rogpeppe> perrito666: (in Go 1.3 at any rate)
[16:13] <rogpeppe> perrito666: ah, i see that's already the case
[16:14] <perrito666> rogpeppe: it is already there; the tests are in peergrouper_test
[16:15] <rogpeppe> perrito666: BTW in the peergrouper version I've got, workerJujuConnSuite is already defined in peergrouper_test
[16:15] <perrito666> rogpeppe: ?
[16:16] <rogpeppe> perrito666: that was a week ago, commit 9301dcf0b882a1ab389297518a145b29668da51e
[16:16] <rogpeppe> perrito666: so i'd just revert the name to the same thing - worker_test.go
[16:17] <rogpeppe> perrito666: (there are other external tests too, such as initiate_test.go)
[16:17] <rogpeppe> perrito666: anyway, reviewed
[16:19] <perrito666> rogpeppe: I did not change the name, I have worker_external_test.go and worker_test.go is still there
[16:19] <perrito666> tx for the review btw
[16:19] <rogpeppe> perrito666: but worker_test.go is in peergrouper_test already, right?
[16:20] <perrito666> rogpeppe: no more, I also reverted that :)
[16:20] <rogpeppe> perrito666: ok
[16:22] <perrito666> my lunch was so diet that it actually took me less time to cook it than a full test run takes
[16:23] <perrito666> rogpeppe: this patch was sponsored by my new diet :p you can thank my wife
[16:24] <rogpeppe> perrito666: i highly recommend the 5-2 diet BTW
[16:25] <rogpeppe> perrito666: means you only have to try hard on two days in a week
[16:27] <perrito666> rogpeppe: there is no spec on wp on how "normal" can be a normal day
[16:27] <rogpeppe> perrito666: whatever you like, pretty much
[16:28] <perrito666> rogpeppe: according to the dietitian my problem is not that I eat caloric things, it's just that I eat a lot of it, for any definition of it
[16:28] <rogpeppe> perrito666: yeah, so the 5-2 diet is good for just that
[16:28] <rogpeppe> perrito666: you can still pig out :)
[16:28] <rogpeppe> perrito666: (all but those two days)
[16:28] <perrito666> :p
[16:28]  * perrito666 gives it a shot
[16:29] <perrito666> uff, we are blocked.. again?
[16:29] <rogpeppe> perrito666: you should watch the original documentary
[16:30] <rogpeppe> perrito666: google for [horizon 5-2 diet documentary]
[16:30] <sinzui> natefinch, can you arrange for a rollback to address bug 1386766? when it is removed, we can then test a fix the the upgrade regression that merged a few hours ago
[16:30] <mup> Bug #1386766: generate-tools breaks scripted and documented behaviour <ci> <metadata> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1386766>
[16:31] <TheMue> dimitern: thx
[16:32] <dimitern> TheMue, np :)
[16:42] <dimitern> ericsnow, all reviews done
[16:42] <ericsnow> dimitern: awesome! Thanks for doing that.
[16:44] <dimitern> ericsnow, np :) thanks for doing all that work
[17:00] <natefinch> sinzui: ok
[17:02] <voidspace> dammit, dimitern has left
[17:02] <voidspace> he always escapes before I can harrass him
[17:13] <wwitzel3> having this bag of pistachios by my desk was a mistake, I can't feel the tips of my fingers now
[17:13] <voidspace> wwitzel3: :-)
[17:14] <voidspace> wwitzel3: a bag of pistachios is never a mistake
[17:14] <voidspace> pretty much by definition
[17:15] <perrito666> wwitzel3: that sounds a lot like you are allergic to pistachios
[17:16] <voidspace> TheMue: have you made progress?
[17:16] <wwitzel3> perrito666: haha, my fingers are allergic to cracking open 100 of them
[17:17] <perrito666> wwitzel3: oh, you should learn to do it with your middle front tooth, it's a life changer
[17:17] <perrito666> :p
[17:17] <natefinch> wwitzel3: you should buy them shelled :)
[17:18] <natefinch> wwitzel3: I actually did the math and at least at my grocery store, they actually come out to almost the exact same price per pound of edible bits
[17:18] <perrito666> oh, that is an option too
[17:18] <perrito666> natefinch: for that you actually had to peel all of them and weigh them
[17:18] <perrito666> right?
[17:18] <wwitzel3> natefinch: yeah, but then, I eat too many and I'm complaining about a stomach ache .. either way I'd be complaining :P
[17:18] <natefinch> haha yep
[17:19] <perrito666> anyway I cannot eat pistachios while not having beer, so I eat peanuts which are easier to peel
[17:19] <natefinch> perrito666: yes, actually.  I bought a bag of the in-shell version, ate them, then weighed the shells
[17:19] <perrito666> natefinch: clever
[17:19] <natefinch> perrito666: I forget the percentage right now... maybe 30% of the weight was shell... something like that
[17:44] <perrito666> ericsnow: ping
[17:44] <ericsnow> perrito666: hey
[17:44] <perrito666> ericsnow: priv
[18:17] <wwitzel3> well that 14.10 upgrade did not go well .. had to boot a live CD and do a boot repair, and now things are a bit wonky
[18:17] <wwitzel3> all my dev stuff still works fine though, so that's good
[18:18] <perrito666> wwitzel3: ah I got an "interesting" behavior in network for a moment
[18:19] <perrito666> wwitzel3: and in the rest of my system too as you most likely saw in my attempt at a hangout yesterday
[18:24] <natefinch> sinzui: reverting that merge here: https://github.com/juju/juju/pull/979
[18:26] <sinzui> thank you very much natefinch .
[18:26] <perrito666> natefinch: you missed the fix tag
[18:35] <natefinch> dammit
[18:37] <natefinch> now remind me, because I always forget... do I put that in the $$ $$ or do I have to put it in the previous commit message?  Also does it have to be 'fixes-1386766' or is it supposed to be 'fixes 1386766'  (space vs. hyphen)?
[18:37] <natefinch> sinzui, perrito666 ^^
[18:45] <sinzui> natefinch, fixes-1386766
[18:46] <sinzui> you can optionally use __fixes-1386766__ to make it bold and show the world you mean it
[18:48] <natefinch> sinzui: yes, but does that work inside the $$stuff$$?  I feel like I've tried that in the past and it didn't work
[18:48] <natefinch> since I think it expects just alphanumeric inside the $$ $$
[18:48] <sinzui> natefinch, "$$merge$$ fixes-1386766" in the same comment is fine
[18:48] <natefinch> ok
[18:49] <sinzui> natefinch, and if you think CI is blocking you from merging a critical fix, add __JFDI__ to make it be tested and merged on success. __JFDI__ is our safety to ensure an automated system doesn't stop us from doing the right thing
[18:50] <natefinch> ok, I thought i remembered it as JFDI, but couldn't remember the exact syntax.  Is there documentation of this stuff somewhere?
[18:52] <sinzui> natefinch, in an email to canonical-juju.
[18:52] <natefinch> email is not documentation :)
[18:52] <sinzui> natefinch, I can add a page to reports.vapour.ws
[18:52] <natefinch> sinzui: that would be awesome, thank you :)
[18:53] <sinzui> natefinch, I am not saying it is, but I write a lot of official gdocs, and no one reads them
[18:54] <katco> sinzui: i am familiar with that problem; one solution is to have much fewer docs, but more sections so that you have a single starting point to give people
[18:55] <katco> sinzui: and when these situations arise, you just point and go: "go forth and learn"
[18:55] <natefinch> sinzui: I understand the frustration.  hopefully we can help people remember that documentation exists
[18:55] <natefinch> yep
[18:55] <katco> it doesn't help that docs is not very navigable imo
[18:56] <natefinch> yes, sorry, I thought gdocs was a typo.   Google docs is a terrible place to keep documentation
[18:57] <katco> natefinch: well, it's not _horrible_, but i feel like there needs to be some sort of indexing mechanism
[18:57] <natefinch> docs should be at a website with a url that means something
[18:57] <rick_h_> katco: howdy, I mentioned it to wallyworld last night. There was a small issue that came out of the work to not destroy envs when there's no jenv file that we hit. https://bugs.launchpad.net/juju-core/+bug/1386703
[18:57] <mup> Bug #1386703: New instance created when bootstrapping an existing env without the jenv file <juju-core:New> <https://launchpad.net/bugs/1386703>
[18:57] <natefinch> like juju.ubuntu.com/docs
[18:58] <katco> rick_h_: thanks, i'll have a look. are you all blocked on this or what's going on?
[18:58] <rick_h_> katco: as an fyi, not sure where it'll fit on the todo list but wanted to bring it front/center as it's an issue for quickstart
[18:58] <katco> rick_h_: yeah... i'll bring it up to wallyworld_ tonight in our standup
[18:58] <rick_h_> katco: not blocked, but would be great if the release didn't close one bug and open another right next to it.
[18:58] <rick_h_> katco: ty
[18:58] <katco> rick_h_: gotcha. thanks for the ping.
[18:59] <voidspace> g'night all
[19:12] <natefinch> I wish godeps didn't say "failed" when it's actually working as designed
[19:17] <natefinch> ah dammit.... google's winsvc package isn't importable in linux, because it's all // +build windows   ... and so godeps can't find it, so godeps -t doesn't include it in the output
[19:22] <perrito666> natefinch: fork it?
[19:23] <natefinch> ahh, you know what works?  GOOS=windows godeps -t ./... > dependencies.tsv
[19:24]  * natefinch feels pretty smart
[19:24] <natefinch> it works because windows dependencies are a strict superset of the linux dependencies
[19:30] <natefinch> perrito666: quick review? http://reviews.vapour.ws/r/277/diff/#
[19:31] <perrito666> natefinch: what is the 58 added at the end of service?
[19:33] <natefinch> perrito666: I forget.  some hg thing.... possibly an easy to remember incremental diff id.  It's optional, so people who insert dependencies in the file by hand often forget it.... but since I generate the file with godeps, it remembers and puts it in there
[19:33] <perrito666> good enough for me
[19:33] <perrito666> natefinch: lgtmd
[19:34] <natefinch> perrito666: thanks
[19:41] <thumper> morning folks
[19:41] <natefinch> morning thumper
[19:41] <alexisb> morning thumper
[19:49]  * thumper looks sadly at his very full inbox
[20:31] <natefinch> thumper or fwereade: got a second for a UX question?    charm level constraints include "container type blacklist"  what do we do if the user says to deploy a charm to a container that the charm says it doesn't support?  Warn the user?  prevent the deployment?  If we prevent it, is there a way to override?
[20:32] <natefinch> (or anyone else who has an opinion is free to speak up too :)
[20:32] <thumper> I think that we shouldn't stop them, but yes to warning
[20:32] <thumper> and say "the charm says this won't work"
[20:32] <perrito666> did the metadata fix land?
[20:33] <thumper> but if the user explicitly says "do it this way" we let them try
[20:33] <natefinch> thumper: sounds good
[20:33] <natefinch> thumper: do we still skip over newly created containers made with add-machine?
[20:34] <natefinch> thumper: actually... do we auto-deploy to an empty container?
[20:35] <natefinch> thinking that 'juju add-machine && juju deploy foo' will add a machine and then deploy foo to it... but not sure if that's true of 'juju add-machine lxc:4 && juju deploy foo'
[20:35] <thumper> maybe...
[20:35] <waigani> menn0_: was it relations or relationScopes that broke the world?
[20:36] <natefinch> thumper: easy enough to test.... still, I presume it would skip containers by default
[20:39] <natefinch> dammit
[20:39] <natefinch> stupid replicaset tests, who wrote that crap, anyway?
[20:41] <natefinch> sinzui: tests giving "No space left on device"
[20:41] <sinzui> hmm
[20:41] <sinzui> which one natefinch
[20:43] <natefinch> sinzui: http://juju-ci.vapour.ws:8080/job/github-merge-juju/1042/console
[20:45] <sinzui> natefinch, this might be tricky; I see 29G free.
[20:45]  * sinzui pokes around
[20:45] <sinzui> ah, this is the machine instance
[20:50] <waigani> menn0: remember those selects from yesterday? They just selected _id. But then the unmarshalled charm doc is expected to have a URL - which *was* the id but is now it's own field. So I've added "url" to the select and all works again.
[20:50] <sinzui> natefinch, this is perplexing. I don't see evidence of the instance size being changed. We can see about 6G available on the root disk
[20:51] <menn0> waigani: that makes sense
[20:51] <waigani> menn0: yep, cool
[20:51]  * sinzui ponders changing the working dir to one of the mounted disks
[20:52] <perrito666> sinzui: I am having an interesting issue, I have my script, that creates a stream structure with "juju metadata generate-tools -d ${BUILD_FOLDER}/juju-dist/" creating, among other things, juju-dist/tools/streams/v1/index.json but then I have juju failing because it cannot find juju-dist/tools/streams/v1/index.sjson
[20:52] <perrito666> sinzui: happened to you?
[20:52] <sinzui> perrito666, pull trunk, natefinch reverted wallyworld_'s commit that changed the default behaviour
[20:52]  * perrito666 pulls again
[20:52] <waigani> menn0: and with that state passes
[20:52] <waigani> \o/
[20:53] <menn0> waigani: awesome
[20:53] <natefinch> perrito666: it hasn't landed yet.... it keeps failing
[20:53] <perrito666> natefinch: ah
[20:54] <perrito666> a world of joy
[20:54] <natefinch> indeed
[20:54] <natefinch> sinzui: I'm happy to retry, but my EOD is in 5 minutes
[20:54]  * perrito666 drinks a beverage at 90°C to protest against the summer heat
[20:54] <sinzui> perrito666, maybe something else is broken or we aren't testing the reverted commit. CI still cannot deploy anything using streams
[20:55] <sinzui> natefinch, retry your branch
[20:57] <natefinch> sinzui: it's going
[21:00] <sinzui> thank you for your time natefinch
[21:00] <waigani> menn0/thumper: what do you think about creating an interface for environment specific collection documents with DocID and EnvUUID fields? One use would be testing for those fields in the upgrade steps. Would there be other value to make it worthwhile?
[21:01]  * thumper otp
[21:01] <menn0> waigani: what would the interface look like? remember that interfaces are only for methods, not for fields
[21:02] <waigani> menn0: ah right, it would have to be an embedded struct - messy
[21:13] <perrito666> ok I am taking a break while natefinch's changes land, bbl
[21:13] <perrito666> and as I said that, the change landed
[21:14] <natefinch> huzzah
[22:24] <thumper> perrito666: plz send some summer heat my way
[22:24] <thumper> bit chilly here still
[22:48] <perrito666> thumper: sure, I'll be glad to get rid of some summer heat, especially because it's still freaking spring
[22:58] <alexisb> wallyworld_, you around?
[22:59] <wallyworld_> alexisb: yes, I am, i have a few emails to respond to, did you want to chat? i'm in a meeting now but will be free soon
[22:59] <alexisb> yeah I do, if you are in a meeting now then I will ping you in about 20 minutes
[23:00] <wallyworld_> sure
[23:11] <perrito666> ericsnow: what implementation of tar are you using?
[23:12] <ericsnow> perrito666: pretty sure it's the one you put into the utils repo
[23:13] <perrito666> ericsnow: the symlinks implementation is broken and I recall fixing it
[23:32] <perrito666> ericsnow: 972 has a rb link?
[23:36] <waigani> what should happen if you upgrade juju while a unit agent-state is pending?
[23:50] <davecheney> thumper: the uniter tests don't pass for me at trunk
[23:50] <davecheney> have you seen that ?
[23:58] <anastasiamac> perrito666: thnx 4 z review!
[23:59] <perrito666> np, don't take me as an authority anyway
[23:59] <perrito666> most were suggestions