[01:03]  * perrito666 is waiting to land changes
[01:20] <axw> wwitzel3: worker.ErrRebootMachine
[01:27] <anastasiamac> ericsnow: thnx for review! it's amazing what ppl with fresh eyes can see! I really appreciate ur patience with such a huge PR too :D
[01:27] <wallyworld> perrito666: looks like trunk will be blocked for a while sadly
[01:27] <perrito666> wallyworld: it's ok, I don't need sleep
[01:28]  * perrito666 wants to be like dimitern 
[01:28] <ericsnow> anastasiamac: glad to do it; I try to be thorough and honest :)
[01:28] <ericsnow> anastasiamac: and I like to learn a thing or two in the process :)
[01:28] <wallyworld> perrito666: it won't be fixed till your SOD tomorrow because a fix landed but the tests are failing on CI even though they pass locally and there's no build artifacts
[01:28] <wallyworld> so we need QA team input
[01:28] <ericsnow> wallyworld: I made some broad changes on http://reviews.vapour.ws/r/1164/
[01:29] <ericsnow> wallyworld: when you have a few minutes could you take a look?
[01:29] <wallyworld> ok
[01:29] <ericsnow> wallyworld: ta
[01:29] <wallyworld> ericsnow: btw, pool config attributes may be string, int, bool, whatever
[01:29] <ericsnow> wallyworld: FYI, I'm manually testing while doing other stuff
[01:30] <wallyworld> ok
[01:30] <ericsnow> wallyworld: I was confused because at the CI the attrs are always strings
[01:30] <wallyworld> well of course!
[01:30] <ericsnow> :)
[01:30] <perrito666> wallyworld: well one of the things I wanted to land is for 1.23 :p
[01:30] <perrito666> so I might still be in luck
[01:31] <wallyworld> ericsnow:  they have to be parsed into the right type - we don't use a schema yet though
[01:31] <wallyworld> perrito666: yeah, 1.23 is open
[01:31] <ericsnow> wallyworld: np, just trying to make sense of a bunch of new-to-me code :)
[01:31] <perrito666> now I just need a review
[01:31]  * perrito666 looks at ericsnow  :)
[01:32] <ericsnow> wallyworld: to be honest, very little was hard to grok
[01:32] <ericsnow> wallyworld: that's certainly to anastasiamac's credit
[01:32] <perrito666> one of my favorite code lines --> // TODO(fwereade) GAAAAAAAAAAAAAAAAAH this is LUDICROUS.
[01:32] <wallyworld> ericsnow: indeed, i feel the same looking at other work myself
[01:33] <wallyworld> perrito666: that's hilarious
[01:33]  * wallyworld checks to see if it is his code
[01:33] <ericsnow> perrito666: I probably won't be able to give you a review on that until tomorrow, sorry
[01:33] <anastasiamac> ericsnow: i appreciate ur consideration :D
[01:33] <perrito666> wallyworld: it's a test that compiles jujud
[01:34] <wallyworld> yeah, not me. whew :-)
[01:34] <perrito666> wallyworld: lol
[01:35] <perrito666> :p lol it is the person that first taught me how to code in juju
[01:45] <wallyworld> ericsnow: that's done
[01:45] <ericsnow> wallyworld: thanks :)
[01:45] <wallyworld> but trunk is blocked anyway :-(
[02:05] <perrito666> natefinch: kids asleep driven development?
[02:05] <natefinch> perrito666: yep
[02:40] <natefinch> wwitzel3: you around?
[03:14] <natefinch> whelp, when things start randomly breaking that were working for the last two weeks, it's time for bed.
[03:25] <anastasiamac> axw: wallyworld: cleared PR \o/ PTAL http://reviews.vapour.ws/r/1213/
[03:26] <wallyworld> ok
[03:30] <anastasiamac> ericsnow: I was not just dropping issues... I just was not publishing all comments in hope to minimise volumes of email. plz let me know if there r still some issues that i have not addressed or answered
[03:30] <ericsnow> anastasiamac: will do, thanks for taking the time to work through all that :)
[03:31] <anastasiamac> ericsnow: thnx for review - really appreciated ur time and input
[03:52] <anastasiamac> wallyworld: so with respect to your last comment about "called".... r u happy with my amendments - i.e. removing the "called" from places where I *know* my implementation was called using other checks?...
[03:53] <wallyworld> anastasiamac: yes, so long as there are other side effects to ensure the code was called
[03:53] <anastasiamac> wallyworld: tyvm :D there are other means!
[01:53] <wallyworld> in the past, people have had blocks of code with c.Assert() that were not called
[03:55] <anastasiamac> wallyworld: good to know :) will keep in mind...
[04:19] <axw> wallyworld: rootfs attachment via storageprovisioner: http://paste.ubuntu.com/10631966/
[04:21] <axw> wallyworld: hooks firing: http://paste.ubuntu.com/10631970/
[04:24] <davecheney> oh gawd -- autotools!
[04:26] <axw> davecheney: ?
[04:26]  * axw enjoys not thinking about build configuration anymore
[04:27] <davecheney> ../gcc/trunk/configure --prefix=/opt/gccgo --enable-languages=c,c++,go
[04:28] <axw> fun times
[04:34] <davecheney> for increasingly small values of fun
[04:57] <davecheney> protip, building gcc requires more than 7gb of disk space
[05:05] <axw> davecheney: my LLVM/Clang/llgo build dir is 15GB :\    that is with full debug and all targets, but still...
[05:19] <wallyworld> axw: awesome on the storage provisioning :-)
[05:19] <axw> wallyworld: just reworking the tmpfs provider now, then I'll write some tests and propose
[05:20] <axw> wallyworld: I think I'll remove the FilesystemParams.Attachment field while I'm at it
[05:20] <axw> but leave volume alone for now
[05:20] <wallyworld> sgtm
[07:41] <axw> wallyworld: are you around?
[07:41] <wallyworld> yeah
[07:41] <axw> wallyworld: hey, I'm just thinking about how rootfs should work...
[07:42] <axw> wallyworld: I think it should be using "mount --bind" to put things in place properly
[07:42] <axw> but that won't work on LXC
[07:42] <axw> OOTB
[07:42] <wallyworld> we can create an lxc.conf that will allow that though
[07:43] <wallyworld> and it's a use case we want too, right - bind mount a host dir
[07:43] <axw> wallyworld: yeah, but not by default. it'd be nice if we could at least get rootfs to work OOTB
[07:43] <wallyworld> hmmm, yeah
[07:44] <axw> wallyworld: so, the alternative is we symlink on LXC
[07:44] <wallyworld> i wonder if we can do it differently just for lxc, sounds messy
[07:44] <axw> wallyworld: which would fail if the path already exists... but it's an option
[07:44] <wallyworld> what is the likelihood of the path existing - on boot, not very high
[07:44] <wallyworld> on storage add, we can just error
[07:45] <axw> wallyworld: the charm can specify any path, so it depends on what the charm specifies
[07:45] <axw> wallyworld: i.e. a charm could say mount the filesystem at "/var/lib/foo", and it *should* work even if that directory exists
[07:46] <wallyworld> true
[07:46] <wallyworld> would be easier if users just let juju decide and inform via the attached hook
[07:46] <axw> wallyworld: so I could have it try to "mount --bind", and if that fails then try to symlink if the dir doesn't exist
[07:46] <axw> indeed
[07:46] <wallyworld> and if that fails, then what's the fallback
[07:46] <axw> no fallback, can't do anything else
[07:47] <wallyworld> i *think* that's reasonable, maybe if the dir is empty we remove it
[07:47] <axw> wallyworld: actually, if the dir exists *and* it's on the same filesystem as /, then we could just carry on
[07:47] <wallyworld> and only fail if dir contains data
[07:48] <wallyworld> s/remove/use
[07:48] <wallyworld> yes, just carry on
[07:48] <axw> hrm, I feel a bit nervous about removing existing things
[07:48] <axw> even if they're empty
[07:48] <wallyworld> sorry, i mistyped
[07:48] <wallyworld> if the dir is empty, we just carry on
[07:48] <wallyworld> as to the charm it's the same thing
[07:49] <axw> yep
[07:49] <wallyworld> but fail if there's data there
[07:49] <wallyworld> for safety
[07:49] <axw> ok, that sounds good
[07:49] <wallyworld> great
[08:09] <dimitern> morning o/
[08:33] <mup> Bug #1434437 was opened: juju restore failed with "error: cannot update machines: machine update failed: ssh command failed: " <juju-core:New> <https://launchpad.net/bugs/1434437>
[08:56] <dimitern> fwereade, hey, are you around?
[09:00] <dimitern> fwereade, also jam if here - please have a look at this http://reviews.vapour.ws/r/1118/
[09:02] <fwereade> dimitern, heyhey
[09:02] <dimitern> fwereade, hey, that's the PR which will hopefully fix some flaky filter tests
[09:08] <fwereade> dimitern, did we have intermittent failures in all the ones you touched?
[09:09] <dimitern> fwereade, yes, esp. depending on the stress the machine running them is under
[09:09] <fwereade> dimitern, the thing is I'm a bit worried about the AssertReceiveBetween stuff
[09:10] <dimitern> fwereade, yeah?
[09:10] <fwereade> dimitern, particularly the (0, 2) case -- won't that pass *every time* whatever the behaviour is?
[09:10] <dimitern> fwereade, allowing for some flexibility is unavoidable
[09:10] <fwereade> dimitern, yeah, but... the flexibility is the problem
[09:10] <dimitern> fwereade, in the normal case it passes with the lower bound
[09:10] <fwereade> dimitern, right, but we have no way of knowing that
[09:11] <dimitern> fwereade, I've verified it by looking at the logs, but I guess it's not obvious
[09:12] <fwereade> dimitern, yeah, I believe it works, I just don't think it gives us protection going forward
[09:12] <fwereade> dimitern, I suspect the mocked-out-api approach is the one we need to take to actually fix this
[09:12] <dimitern> fwereade, the right way going forward is to make all the watchers in the filter mockable
[09:13] <dimitern> :) yeah
[09:13] <fwereade> dimitern, yeah, exactly
[09:15] <dimitern> fwereade, so you'd rather leave the tests flaky for now and not land my "fix" until a proper one can be done?
[09:18] <fwereade> dimitern, I think that false positives are worse than false negatives, yes
[09:18] <fwereade> dimitern, if you consider a test failure to be a positive, that is
[09:18] <fwereade> ;p
[09:19] <fwereade> er wait I think I have that the wrong way round
[09:20] <fwereade> dimitern, a test suite that sometimes fails for code that works is inconvenient; a test suite that always passes for code that doesn't work is deadly
[09:20] <dimitern> fwereade, I agree :)
[09:21] <dimitern> fwereade, ok, I'd ask you to comment on it at least, before I close it then
[09:21] <fwereade> dimitern, will do
[09:21] <dimitern> fwereade, cheers
[09:42] <natefinch> wwitzel3: are/were you up early, or late? :)
 * natefinch notes the email wayne sent him an hour and a half ago... at 4am
[10:05] <perrito666> natefinch: morning
[10:06] <natefinch> perrito666: morning
[10:27] <voidspace> natefinch: are you using nvidia-prime?
[10:27] <voidspace> natefinch: or are you using the intel graphics, or using something else to enable optimus?
[10:28] <natefinch> voidspace: I twiddled with it for a while to try to get nvidia to work, I think I'm just using intel graphics, but honestly don't remember where I ended up with that.  Linux + graphics = pain
[10:28] <voidspace> natefinch: ok, thanks
[10:28] <natefinch> sorry :)
[10:29] <voidspace> natefinch: I have an issue with a kvm image not booting (black screen) and I wonder if it's a graphics driver issue
[10:29] <voidspace> natefinch: it went through the install fine and then black screen on reboot
[10:29] <voidspace> natefinch: I'm trying again with installing trusty into kvm instead of utopic to see if that makes a difference
[10:29] <voidspace> natefinch: it *shouldn't*
[10:33] <natefinch> I know vivid containers on non-vivid hosts were having problems, but haven't heard of utopic being a problem
[10:33] <voidspace> utopic is the host, so really shouldn't be an issue
[10:33] <voidspace> I couldn't find anyone else with the same issue either - which is what makes me suspect driver issues
[10:33] <voidspace> anyway, trying trusty
[10:36] <voidspace> natefinch: these instructions make it seem simple to enable nvidia...
[10:36] <voidspace> natefinch: http://www.webupd8.org/2013/08/using-nvidia-graphics-drivers-with.html
[10:36] <voidspace> natefinch: for 14.04 but should be the same for utopic I guess...
[10:37] <natefinch> "Multiple monitors don't work out of the box"
[10:38] <voidspace> well, you can't have *everything*
[10:38] <natefinch> ...... see.. multiple monitors works for me now, no way am I going to risk spending 4 hours fiddling with it if I accidentally break that
[10:38] <voidspace> :-)
[10:38] <voidspace> coffee
[10:40] <natefinch> wget https://raw.githubusercontent.com/kovidgoyal/calibre/master/setup/linux-installer.py | sudo python   .....sure, why not! :/
[10:41] <voidspace> natefinch: dimitern: trusty install works
[10:41] <natefinch> voidspace: weird... well, good enough for now I guess
[10:41] <voidspace> natefinch: dimitern: so either a problem with utopic or just a problem with that particular install I did
[10:41] <dimitern> voidspace, awesome!
[10:41] <voidspace> natefinch: dimitern: yep, trusty fine for this
[10:42] <voidspace> hah, seems like the virsh network I configured isn't working though
[10:43] <voidspace> I'll look at it later
[10:50] <dimitern> voidspace, cool
[12:06] <voidspace> dimitern: as far as I can tell in workers the standard way to get an environ is to use WatchForEnvironConfigChanges
[12:06] <voidspace> dimitern: does that sound right?
[12:07] <voidspace> dimitern: it's what the provisioner and firewaller do
[12:07] <dimitern> voidspace, in general - yeah, but you should have it simpler as you have access to state directly, like the cleaner, right?
[12:07] <voidspace> dimitern: I didn't see the cleaner using envron
[12:08] <voidspace> cleaner  worker does next to nothing - it calls state.Cleanups
[12:08] <voidspace> st.Cleanup
[12:09] <dimitern> voidspace, yeah, but my point was - it takes a *state.State in its ctor
[12:09] <voidspace> dimitern: right, we have a state.State
[12:09] <dimitern> voidspace, from st, you could always construct and environ
[12:09] <voidspace> dimitern: so should I get the config from state and open a new environment
[12:09] <voidspace> right
[12:09] <dimitern> voidspace, get the EnvironConfig and use config.New
[12:09] <voidspace> yep
[12:10] <dimitern> voidspace, you'll be running on the state servers only, so it's ok
[12:11] <dimitern> voidspace, the difference for other (normal) workers running on other machines, is they need to use the api
[12:11] <voidspace> right
[12:11] <voidspace> understood
[12:12] <dimitern> voidspace, hence that WatchForEnvironConfigChanges (which is not even needed just to get the config for some time now IIRC)
[12:12] <dimitern> voidspace, ok :) cheers
[13:01] <voidspace> dimitern: worker is again "done", so back to testing
[13:02] <voidspace> dimitern: using an interface for the environ (a "releaser") for testing - and getting a NetworkingEnviron from the state
[13:02] <dimitern> voidspace, great! have you pushed anything?
[13:02] <voidspace> and properly releasing
[13:02] <voidspace> dimitern: https://github.com/juju/juju/compare/master...voidspace:address-life-worker
[13:02] <dimitern> voidspace, looking
[13:03] <voidspace> dimitern: addressWorker.removeIPAddresses is essentially the code copied from the provisioner api - with added logic to get the instance ID from the machine ID
[13:03] <sinzui> sorry perrito666 : your HA branch broke Windows and OS X builds bug 1434544
[13:03] <mup> Bug #1434544: backups broke non-linux builds <ci> <osx> <regression> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1434544>
[13:04] <perrito666> sinzui: WHAT?
[13:04] <perrito666> sinzui: hold, I have that change in my local machine :| wtf did happen there, sending a fix right away, apologies
[13:04] <voidspace> dimitern: damn, some calls to errors.Annotatef that should be logging instead
[13:05] <voidspace> dimitern: friend arrived, so breaking for lunch
[13:05] <sinzui> perrito666, I think a function signature changed
[13:05] <dimitern> voidspace, ok, I have some comments - ping me when back please
[13:06] <perrito666> sinzui: It did, I just stashed that change instead of committing it
[13:06] <mup> Bug #1434544 was opened: backups broke non-linux builds <ci> <osx> <regression> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1434544>
[13:06] <perrito666> yes, yes mup I know
[13:08] <sinzui> perrito666, we've all done that
[13:08] <perrito666> I should stop working on two branches at once
[13:10] <perrito666> can I get an amen? http://reviews.vapour.ws/r/1217/
[13:10] <dimitern> perrito666, ship it!
[13:11] <perrito666> merging
[13:11] <perrito666> btw sinzui we need to discuss new CI testing for new restore
[13:12] <perrito666> I am pretty sure that the next spot in your agenda is somewhere around 2018 :p so save it for me plz
[13:12] <sinzui> :)
[13:13] <perrito666> this patch, besides breaking windows and osx, fixes the issue we had when we first tried to deprecate old restore; it also changes the behavior of ha restore a bit, and we now support systemd
[13:18] <mup> Bug #1434544 changed: backups broke non-linux builds <ci> <osx> <regression> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1434544>
[13:24] <mup> Bug #1434544 was opened: backups broke non-linux builds <ci> <osx> <regression> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1434544>
[13:31]  * perrito666 looks at the pr to see if it merges faster
[13:31] <perrito666> wee, merged
[13:54] <mup> Bug #1434555 was opened: ppc64el unit test timeout <blocks-release> <ci> <ppc64el> <regression> <juju-core:Triaged> <juju-core 1.23:Triaged> <https://launchpad.net/bugs/1434555>
[14:03]  * fwereade out laura's school again, bbl
[14:03] <ericsnow> perrito666, wwitzel3: standup?
[14:03] <perrito666> ericsnow: going
[14:05] <voidspace> dimitern: back
[14:06] <dimitern> voidspace, hey, so a few comments for the worker implementation
[14:07] <voidspace> dimitern: shot
[14:07] <voidspace> *shoot
[14:07] <dimitern> voidspace, 1) you could do it much simpler if you implement it as a StringsWorker - no need to do most of the things for a full worker, e.g. implement the loop
[14:08] <voidspace> dimitern: ah, ok
[14:09] <dimitern> voidspace, e.g. return NewStringsWorker in the ctor and just do the SetUp and Handle implementation
[14:10] <voidspace> dimitern: yeah, I see - looking at StringsWorker now
[14:10] <voidspace> nice, thanks
[14:10] <dimitern> voidspace, another thing - it needs to be a singular worker as well
[14:11] <voidspace> dimitern: right - I thought that was done in the way we start it
[14:11] <voidspace> dimitern: don't we wrap singular workers?
[14:12] <dimitern> voidspace, hmm let me check
[14:13] <dimitern> voidspace, awesome - you're correct - the state worker is already a singular
[14:14] <dimitern> voidspace, and a final thing - I'd rather define an interface with all state methods you need and take that in the ctor
[14:14] <dimitern> voidspace, this way it's much easier to mock and test later
[14:14] <voidspace> dimitern: so an interface for state.State
[14:14] <voidspace> dimitern: ok, I was just following the other workers
[14:15] <voidspace> easy to do though, we don't use many methods
[14:15] <voidspace> dimitern: so no problem
[14:15] <dimitern> voidspace, yeah, only a subset of it - the methods you need
[14:15] <dimitern> voidspace, great, thanks!
[14:15] <voidspace> dimitern: thank you
[14:32] <voidspace> dimitern: so in my worker I want to kick off a goroutine to remove the initial dead addresses
[14:33] <voidspace> dimitern: and the way that's done currently is in a loop watching to see if the worker has died
[14:33] <voidspace> dimitern: selecting on the dying channel
[14:33] <voidspace> dimitern: which is really tomb.Dying()
[14:33] <dimitern> voidspace, yeah, that sounds correct
[14:33] <voidspace> dimitern: if I use a StringsWorker I don't have access to that
[14:34] <voidspace> dimitern: as tomb is private to the StringsWorker
[14:34] <voidspace> just looking to see if there's anything else
[14:34] <voidspace> in a handler I would do it in SetUp
[14:35] <voidspace> dimitern: not that I can see
[14:35] <voidspace> dimitern: we don't necessarily need to worry about the worker dying
[14:35] <voidspace> dimitern: if the *state* dies then a call will error out and we'll bail immediately
[14:35] <voidspace> dimitern: ditto on the connection to the environ
[14:36] <dimitern> voidspace, right, well you do have TearDown()
[14:36] <dimitern> voidspace, which could be used to stop that goroutine
[14:36] <voidspace> dimitern: so write to a channel in TearDown and select on that?
[14:36] <voidspace> cool, that'll do
[14:37] <voidspace> dimitern: thanks :-)
[14:37] <dimitern> voidspace, np, hope it looks nicer this way :)
[14:37] <voidspace> well I deleted a bunch of code which is always good
[14:38] <dimitern> voidspace, indeed!
[14:44] <voidspace> dimitern: hmm... except StringsWorker is designed to work with api watchers
[14:44] <voidspace> dimitern: SetUp must return an api watcher
[14:44] <voidspace> and we're using EnvironObserver
[14:45] <voidspace> well
[14:45] <voidspace> I'm not sure that we even need that
[14:45] <voidspace> and we are using a watcher
[14:45] <voidspace> let me look into it
[14:46] <dimitern> voidspace, hmm that's true - but api.StringsWatcher is just an interface
[14:47] <dimitern> voidspace, and there's the equivalent state.StringsWatcher interface which is the same
[14:48] <dimitern> voidspace, except the api one does not have Wait and Kill, only Stop
[14:49] <dimitern> voidspace, but Stop is calling Wait(Kill(nil)) internally, so that's fine
[14:50] <voidspace> dimitern: so I can call and use WatchIPAddresses
[14:50] <voidspace> dimitern: which is the point I guess...
[14:50] <dimitern> voidspace, yeah
[14:50] <voidspace> :-)
[14:50] <dimitern> voidspace, btw why did you need EnvironObserver?
[14:50] <voidspace> dimitern: pretty sure now that we didn't
[14:50] <voidspace> dimitern: it was code I "borrowed" and didn't trim correctly...
[14:50] <voidspace> it's gone
[14:53] <dimitern> voidspace, sweet :)
[15:00] <voidspace> dimitern: so I now have a dying channel that my goroutine is selecting on
[15:00] <voidspace> dimitern: in TearDown shall I just write something arbitrary to the channel?
[15:01] <voidspace> in the tomb package I can't see the dying channel ever written to (probably I just haven't found it)
[15:01] <voidspace> in Kill it is just closed
[15:01] <dimitern> voidspace, I think it's better to just close that channel in TearDown - no need to send (and potentially block)
[15:02] <voidspace> dimitern: but if we close it, does that trigger the select listening on it?
[15:02] <dimitern> voidspace, yes it does
[15:02] <voidspace> ah, that'll be how the code works then
[15:02] <voidspace> I tried googling but didn't find that particular fact
[15:02] <voidspace> thanks
[15:02] <dimitern> voidspace, e.g. case _, ok := <-someChan: if !ok { closed .. }
[15:03] <voidspace> dimitern: right, our particular select is just  case <-a.dying:
[15:03] <voidspace> dimitern: that won't be enough, we'll need the ok
[15:03] <dimitern> voidspace, giving a chan to a goroutine and closing it to signal something to it is pretty common :)
[15:04] <dimitern> voidspace, I'm not even sure you'll need to check for !ok if that's the only reason for the select case
[15:04] <dimitern> voidspace, try it out :)
[15:04] <voidspace> dimitern: yeah, I'll try it in the playground
[15:06] <voidspace> dimitern: yep, close triggers it without needing ok
[15:06] <voidspace> dimitern: http://play.golang.org/p/x8O2HdTgOm
[15:09] <dimitern> voidspace, nice :)
[15:43] <alexisb> wwitzel3, ericsnow ping
[15:43] <ericsnow> alexisb: hi
[15:43] <alexisb> and happy friday!
[15:43] <alexisb> heya ericsnow
[15:43] <alexisb> can you work with sinzui and xwwt to get release notes together for gce provider?
[15:44] <ericsnow> alexisb: I started to add an entry the other day and wasn't sure what more to say than "GCE is now supported as a provider" :)
[15:44] <ericsnow> alexisb: so what else should be there?
[15:45] <alexisb> I am sure that evilnickveitch and sinzui can give you better details then I
[15:45] <sinzui> ericsnow, I am editing https://docs.google.com/a/canonical.com/document/d/1V6AU2mEbTOXQygsn-9eZg-DHjxcVpMylyq017KS78mU/edit and writing everything based on what I think is needed reading juju init --show
[15:45] <alexisb> however at minimum we should have details on what is supported and how it is used
[15:46] <ericsnow> sinzui, alexisb: got it
[15:46] <ericsnow> alexisb, sinzui: I'll add that
[15:47] <alexisb> ericsnow, thanks
[15:55] <perrito666> sinzui: did my fix unlock 1.23?
[15:58] <sinzui> perrito666, CI is still testing the previous revision
[15:58]  * perrito666 headbutts the desk
[15:58] <jw4> perrito666: you've been learning from thumper
[15:59] <perrito666> jw4: I think he headbutts other people's hands
[15:59] <jw4> perrito666: it's okay - you'll get there too
[16:03] <alexisb> katco, ping
[16:04] <katco> alexisb: hi hi
[16:04] <alexisb> hey there katco are you in full force today?  or our you out?
[16:05] <alexisb> s/our/are
[16:05] <katco> alexisb: i took today off to celebrate the equinox (plus i happen to be sick :( ) you caught me checking email though. what can i do for you?
[16:05] <alexisb> nothing, you are off, go!
[16:05] <katco> alexisb: had i not been sick we would be at the butterfly house with our daughter =/
[16:06] <alexisb> :(
[16:06] <ericsnow> natefinch: ping
[16:06] <alexisb> ok ericsnow given you are already in the release notes and my internet sucks and google docs hates me I am volunteering you :)
[16:07] <alexisb> I need someone to add leader elections to the 1.23 release notes, noting that it is behide a feature flag
[16:07] <alexisb> william, jam and I can add details later
[16:07] <ericsnow> alexisb: yikes
[16:07] <ericsnow> alexisb: FWIW, I know nearly nothing about it
[16:08] <alexisb> just need a placeholder
[16:08] <ericsnow> alexisb: I'm glad to give it a go though
[16:08] <ericsnow> alexisb: k
[16:08] <alexisb> ericsnow, please don't spend any time on the details
[16:08] <ericsnow> alexisb: got it
[16:08] <dimitern> voidspace, dooferlad, as OCRs can you have a look at this huge, but mostly mechanical refactoring branch? http://reviews.vapour.ws/r/1219/
[16:15] <dooferlad> dimitern: on it
[16:15] <dimitern> dooferlad, ta
[16:24] <mattyw> natefinch, ping?
[16:25] <perrito666> mattyw: he is most likely OoO
[16:26] <mattyw> perrito666, ok thanks
[16:42] <natefinch> back everyone
[16:44] <natefinch> wwitzel3: you around?
[16:57] <sinzui> natefinch, dimitern do you have a minute to review http://reviews.vapour.ws/r/1220/
[16:57] <dimitern> sinzui, looking
[16:58] <dimitern> sinzui, ship it!
[16:59] <sinzui> thank you dimitern
[17:19] <mup> Bug #1431918 changed: gce minDiskSize incorrect <tech-debt> <juju-core:Fix Released by wwitzel3> <juju-core 1.23:Fix Released by wwitzel3> <https://launchpad.net/bugs/1431918>
[17:22] <mup> Bug #1431918 was opened: gce minDiskSize incorrect <tech-debt> <juju-core:Fix Released by wwitzel3> <juju-core 1.23:Fix Released by wwitzel3> <https://launchpad.net/bugs/1431918>
[17:25] <mup> Bug #1431918 changed: gce minDiskSize incorrect <tech-debt> <juju-core:Fix Released by wwitzel3> <juju-core 1.23:Fix Released by wwitzel3> <https://launchpad.net/bugs/1431918>
[17:38] <natefinch> what's the metadata to turn off apt-get upgrade, anyone remember?
[17:57] <hazmat> os-update: false?
[17:58] <hazmat> natefinch: per the latest cloud init docs package_upgrade: false
[17:58] <hazmat> http://cloudinit.readthedocs.org/en/latest/topics/examples.html#run-apt-or-yum-upgrade
[17:58] <hazmat> oh.. you mean the juju syntax
[17:58] <natefinch> hazmat: heh, yeah, I think it's apt_upgrade: false  ... at least that seems to be what the code is saying, if I'm in the right place
[17:59] <hazmat> natefinch: enable-os-upgrade: false
[17:59] <hazmat> https://github.com/juju/docs/blob/master/src/en/config-general.md
[18:00] <natefinch> hazmat: wow, I didn't know we had all that documented, that's awesome.  Thanks for the pointer
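Putting hazmat's two answers side by side: juju's setting is `enable-os-upgrade: false` (per the config-general docs he linked), which corresponds to cloud-init's `package_upgrade: false`. A sketch of where the juju setting would go (the environment name and provider type below are hypothetical placeholders):

```yaml
# environments.yaml - skip the initial apt-get upgrade on new machines
environments:
  myenv:                      # hypothetical environment name
    type: ec2                 # hypothetical provider
    enable-os-upgrade: false  # juju setting; maps to cloud-init's package_upgrade: false
```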
[18:42] <perrito666> it is amazing how noticeable it is when the mosquito repellent's effect has worn off
[18:42] <natefinch> haha
[18:43] <perrito666> it seems to work like a switch - from one moment to the next, everything itches
[18:43] <natefinch> yuup
[18:44]  * natefinch contemplates ignoring all this "return a special error to get jujud to restart" and just calling os.Exit(1)
[18:45] <perrito666> natefinch: It took me 6 months to get approval on a patch that does that same thing and has a very good excuse :p hope you have patience
[18:46] <natefinch> perrito666: I don't need patience, I have internal customers that need this in 1.23
[18:46] <natefinch> perrito666: the annoying thing is, there's a few different things that LOOK like they should do the right thing.... and don't.
[18:47] <perrito666> well as fwereade would certainly say, "have you read the doc about that?"
[19:16] <perrito666> and, me too - I just spent 10 mins looking at an error that arose from plural vs singular
[19:23] <natefinch> ug
[19:23] <natefinch> gah... now jujud isn't restarting at all. Sonofa
[19:23] <natefinch> like, it stays dead
[19:24] <natefinch> thanks a lot, upstart
[19:24] <perrito666> natefinch: what did you do?
[19:24] <natefinch> os.Exit(1)
[19:24] <perrito666> oh, that is odd, upstart should restart you on 1
[19:25] <natefinch> and somehow the logging statements right before that aren't being flushed to disk.  Thanks a lot, loggo.
[19:26] <perrito666> natefinch: well you are exiting :)
[19:26] <perrito666> which most likely prevents all other routines from finishing whatever they were doing
[19:26] <natefinch> perrito666: it should be flushing the log messages.. what if I log something important right before a crash?
[19:26] <jw4> OCR PTAL : http://reviews.vapour.ws/r/1221/ <--- reviewed previously and merged to 1.23 - this is to forward port it to master
[19:27] <natefinch> it's in the same goroutine, too
[19:27] <perrito666> can you not force loggo to flush?
[19:28] <natefinch> perrito666: I don't immediately see a way to... the problem is that it just takes writers
[19:28] <natefinch> so there's no flush, even if the underlying thing can flush
[19:29] <perrito666> you are having a classic friday problem
[19:31] <natefinch> been having friday problems for two weeks
[19:46] <sinzui> hi natefinch bug 1434680 is super critical
[19:47] <mup> Bug #1434680: 1.22.0 cannot upgrade to 1.23-beta1 or 1.24-alpha1 <ci> <regression> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1434680>
[19:47] <sinzui> natefinch, We are in the middle of a release of 1.23-beta1. we have a choice of aborting or continuing with the caveat that upgrades from 1.22.0 are broken
[19:49] <mup> Bug #1434070 changed: upgrades are broken in master 1.24-alpha1 <ci> <regression> <upgrade-juju> <juju-core:Fix Released by thumper> <https://launchpad.net/bugs/1434070>
[19:49] <mup> Bug #1434680 was opened: 1.22.0 cannot upgrade to 1.23-beta1 or 1.24-alpha1 <ci> <regression> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1434680>
[19:51] <jw4> sinzui, natefinch I don't have access to the upgrade logs for those bugs - can you verify whether they are related to the uniter stopped state upgrades I just merged into 1.23 ?
[19:55] <perrito666> I can see this happening a lot machine-0: 2015-03-20 15:19:51 ERROR juju.rpc server.go:554 error writing response: EOF
[19:57] <perrito666> that was from allmachines
[19:57] <perrito666> machine 0 is full of
[19:57] <perrito666> 2015-03-20 15:43:16 DEBUG juju.mongo open.go:122 TLS handshake failed: x509: certificate is valid for localhost, juju-apiserver, not juju-mongodb
[20:06] <sinzui>  natefinch has access to everything
[20:06] <jw4> thanks sinzui - I'll eagerly ^H^H^H^H anxiously await natefinch 's verdict
[20:07]  * natefinch crackles with power.
[20:07] <jw4> hehe
[20:07] <jw4> is that kragle?
[20:07] <natefinch> oh wait, no, I was just sitting on a bag of chips
[20:07] <jw4> haha
[20:08]  * jw4 assumes *everyone* has watched the LEGO movie
[20:09] <natefinch> I have not.... we got it, but it was too intense for our 3 year old
[20:09] <natefinch> someday I should throw it in after they go to bed
[20:09] <jw4> yeah; 3 is a bit young for it - my 9 year old liked it though
[20:10] <sinzui> natefinch, as a canonical employee you can see every log at http://reports.vapour.ws/releases/2466
[20:10] <jw4> although it RUINED watching The Hobbit with him - he kept associating Gandalf with the 'Gandalf' in LEGO movie
[20:10] <natefinch> sinzui: ima lookin
[20:10] <natefinch> jw4: haha... always watch the old movies first
[20:10] <sinzui> natefinch, and maybe jw4, as project members of juju-core, you must have permission to see hidden comments in Lp...otherwise there is a regression
[20:11] <jw4> kk
[20:11] <natefinch> sinzui: is there a trick to seeing the hidden ones?
[20:12] <sinzui> they should just be dark grey
[20:12]  * sinzui reviews project
[20:12] <natefinch> sinzui: btw, the comments are hidden, but I still see the attachments in the lower right
[20:13] <sinzui> ha ha. Launchpad really sucks
[20:13] <natefinch> lol
[20:13] <sinzui> natefinch, I need to review them now :(
[20:14] <jw4> perrito666: oh - you were talking about the upgrade regression when you mentioned the TLS handshake failures
[20:14] <sinzui> natefinch, I deleted the attachments because Juju still puts certs in DEBUG when juju is not run as DEBUG
[20:15] <natefinch> sinzui: sorry about that.... juju sucks, too
[20:15] <jw4> I'm seeing those too in my local upgrade test
[20:16] <sinzui> natefinch, you are an admin of ~juju. as the project maintainer you should be seeing every hidden message on the juju-core project? Are you logged in?
[20:16] <perrito666> jw4: I was, I thought that since natefinch was not answering I could throw you a line
[20:16] <natefinch> sinzui: it says i'm logged in
[20:17] <jw4> perrito666: thank you! :)  I was trying to figure out whether you were referring to your previous conversation with natefinch
[20:21] <sinzui> natefinch, jw4, I uploaded redacted logs to https://bugs.launchpad.net/juju-core/+bug/1434680
[20:21] <mup> Bug #1434680: 1.22.0 cannot upgrade to 1.23-beta1 or 1.24-alpha1 <ci> <regression> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1434680>
[20:21] <jw4> sinzui: thank you!!
[20:23] <jw4> okay, those seem to be the same errors I'm getting that perrito666 reported
[20:23] <jw4> it's like somehow the state server started presenting the wrong TLS certificate?
[20:23] <natefinch> sinzui: I can get to the logs on the vapour link
[20:24] <jw4> or more like the client is trying to connect using the wrong server name?
[20:25] <sinzui> jw4, Those might also be what dimitern reported on https://bugs.launchpad.net/juju-core/+bug/1434070
[20:25] <mup> Bug #1434070: upgrades are broken in master 1.24-alpha1 <ci> <regression> <upgrade-juju> <juju-core:Fix Released by thumper> <https://launchpad.net/bugs/1434070>
[20:26] <jw4> hmm; looks likely
[20:28] <ericsnow> sinzui: could it be a network issue?
[20:28] <ericsnow> sinzui: not the network but juju networking
[20:28] <sinzui> ericsnow, for every substrate?
[20:29] <jw4> hmm I take it back - the logs reported by dimitern don't include the TLS handshake error
[20:29] <perrito666> jw4: sure they do
[20:29] <ericsnow> sinzui: is there a machine-0.log somewhere from a successful run that we could compare?
[20:29] <perrito666> https://bugs.launchpad.net/juju-core/+bug/1434070/comments/8
[20:29] <mup> Bug #1434070: upgrades are broken in master 1.24-alpha1 <ci> <regression> <upgrade-juju> <juju-core:Fix Released by thumper> <https://launchpad.net/bugs/1434070>
[20:29] <sinzui> ericsnow, no upgrade passed so no
[20:31] <sinzui> ericsnow, I think a downgrade to 1.21.3 is required. that is what was stable until yesterday
[20:31] <jw4> perrito666: ah I see - the initial attachments didn't include machine-0
[20:33] <jw4> maybe this is normal, but when I try to do 'juju status' with my borked upgrade it looks like 'juju status' is starting a new mongod process under my uid ?
[20:36] <natefinch> jw4: that doesn't sound normal to me
[20:36] <jw4> it's running using my UID and the PPID is 1
[20:36] <jw4>  /usr/lib/juju/bin/mongod
[20:36] <natefinch> 0.o
[20:37] <perrito666> well it is local provider
[20:37] <perrito666> and your machine IS machine-0
[20:38] <perrito666> jw4: care to pastebin your ps faxu?
[20:38] <jw4> perrito666: sure
[20:39] <jw4> http://paste.ubuntu.com/10637523/
[20:40] <jw4> pid 33016 is the mongod that appears when I use 'juju status'
[20:41] <perrito666> so
[20:41] <perrito666> weldon   33016  0.5  0.5 250800 57240 pts/3    Sl   12:43   0:17 /usr/lib/juju/bin/mongod --dbpath /tmp/test-mgo229055436 --port 59450 --nssize 1 --noprealloc --smallfiles --nohttpinterface --oplogSize 10 --ipv6 --nounixsocket --nojournal --sslOnNormalPorts --sslPEMKeyFile /tmp/test-mgo229055436/server.pem --sslPEMKeyPassword xxxxxxx
[20:41] <perrito666> that is part of a testrun
[20:41] <jw4> gah!
[20:41] <perrito666> that is why it has your uid
[20:42] <jw4>  sudo redact-last-20-lines
[20:42] <jw4> :)
[20:42] <perrito666> look into your /tmp you might also have some garbage from old tests
[20:42] <perrito666> brb
[20:42] <jw4> tx perrito666
[20:43] <natefinch> dumb test mongo sticking around
[20:44] <jw4> yeah, sorry for the noise
[20:44] <natefinch> it's happened to all of us
[20:46] <perrito666> jw4: local provider happens
[20:46] <jw4> hehe
[20:49] <mup> Bug #1313016 changed: allow annotations to be set on charms <api> <charms> <improvement> <juju-core:Fix Released by anastasia-macmood> <https://launchpad.net/bugs/1313016>
[20:49] <mup> Bug #1389326 changed: juju-backup is not a valid plugin <backup-restore> <plugins> <juju-core:Fix Released by marcoceppi> <https://launchpad.net/bugs/1389326>
[20:49] <mup> Bug #1403955 changed: DHCP's "Option interface-mtu 9000" is being ignored on bridge interface br0 <cts> <kvm> <lxc> <network> <juju-core:Fix Released> <isc-dhcp (Ubuntu):Confirmed> <https://launchpad.net/bugs/1403955>
[20:49] <mup> Bug #1409639 changed: juju needs to support systemd for >= vivid <hs-arm64> <systemd-boot> <juju-core:Fix Released by ericsnowcurrently> <juju-core (Ubuntu):Triaged> <juju-core (Ubuntu Vivid):Triaged> <https://launchpad.net/bugs/1409639>
[20:49] <mup> Bug #1415671 changed: Joyent provider uploads user's private ssh key by default <joyent-provider> <juju-core:Fix Released by natefinch> <https://launchpad.net/bugs/1415671>
[20:49] <mup> Bug #1415693 changed: Unable to bootstrap on cn-north-1 <bootstrap> <ec2-provider> <online-services> <juju-core:Fix Released by cox-katherine-e> <https://launchpad.net/bugs/1415693>
[20:49] <mup> Bug #1421237 changed: DEBUG messages show when only INFO was asked for <ci> <security> <juju-core:Fix Released by thumper> <https://launchpad.net/bugs/1421237>
[20:49] <mup> Bug #1423454 changed: cloud-image-utils needs to be installed <tech-debt> <juju-core:Fix Released by wallyworld> <https://launchpad.net/bugs/1423454>
[20:49] <mup> Bug #1424069 changed: juju resolve doesn't recognize error state <regression> <resolved> <juju-core:Fix Released by wallyworld> <https://launchpad.net/bugs/1424069>
[20:50] <mup> Bug #1424590 changed: juju status --format=tabular <juju-core:Fix Released by anastasia-macmood> <https://launchpad.net/bugs/1424590>
[20:50] <mup> Bug #1427840 changed: ec2 provider unaware of c3 types in sa-east-1 <juju-core:Fix Released by anastasia-macmood> <https://launchpad.net/bugs/1427840>
[20:50] <mup> Bug #1428117 changed: EC2 eu-central-1 region not in provider <juju-core:Fix Released by anastasia-macmood> <https://launchpad.net/bugs/1428117>
[20:50] <mup> Bug #1428119 changed: EC2 provider does not include C4 instance family <juju-core:Fix Released by anastasia-macmood> <https://launchpad.net/bugs/1428119>
[20:50] <mup> Bug #1428430 changed: AllWatcher does not remove last closed port for a unit, last removed service config <api> <juju-core:Fix Released by themue> <https://launchpad.net/bugs/1428430>
[20:50] <mup> Bug #1431130 changed: make kvm containers addressable (esp. on MAAS) <addressability> <kvm> <maas-provider> <network> <juju-core:Fix Released by dooferlad> <https://launchpad.net/bugs/1431130>
[20:50] <mup> Bug #1431134 changed: fix container addressability issues with cloud-init, precise, when lxc-clone is true <addressability> <cloud-init> <ec2-provider> <lxc> <maas-provider> <network> <precise> <usability> <juju-core:Fix Released by dimitern> <https://launchpad.net/bugs/1431134>
[20:50] <perrito666> daf...
[20:51] <perrito666> sinzui: that is you or mup went bonkers again?
[20:51] <sinzui> perrito666, someone subscribed mup to useless bug information
[20:52] <jw4> perrito666: well I just got 15 emails too so I suspect it's not mup's fault
[20:52] <perrito666> jw4: i did not, or I have those filtered
[20:52] <sinzui> Team ~juju should be getting emails about bugs, but I don't think anyone cares about closing except the reporter
[20:53] <jw4> yeah, it was just the Fix Released notification
[20:55] <natefinch> perrito666: probably filtered.  Mine are...
[20:58] <natefinch> sinzui: I gotta run, have company and they're expecting me to get dinner.    Afraid I've not been much use anyway (other than pointing out limitations in Launchpad)
[20:58] <sinzui> natefinch, okay, I wasn't really expecting a fix today
[21:09] <ericsnow> sinzui:  I just bootstrapped local provider with a fresh-built 1.22 (tip), and then did the same upgrade command from the CI logs
[21:10] <ericsnow> sinzui: and it worked fine
[21:10] <ericsnow> sinzui: maybe
[21:11] <ericsnow> sinzui: the command finished successfully but it looks like it didn't do much, so I'm not sure I have much to add yet
[21:13] <sinzui> ericsnow, I bootstrapped with 1.22.0 locally, then upgraded to 1.23-beta1 and lost control
[21:15] <ericsnow> sinzui: which revision is 1.22.0?
[21:15] <sinzui> ericsnow, 1.22.0 the package we ask users to use
[21:16] <sinzui> ericsnow, per https://docs.google.com/a/canonical.com/document/d/1ILRWMChkqZ7YeXNCNsmaF89ewqAL_j0qVeF3E9BMwys/edit#heading=h.dp1wyrj1wujg, It was commit 44caaac
[21:21] <ericsnow> sinzui: if I build from that revision it also works
[21:21] <ericsnow> (I'm on trusty)
[21:24] <sinzui> ericsnow, I suspect you are tainted. We need more machines in more environments without developer tools installed.
[21:24] <sinzui> ericsnow, 14 tests means 14 substrates, machines, archs, and series. it isn't a single dirty machine
[21:24] <ericsnow> sinzui: sure
[21:24]  * sinzui is bootstrapping again
[21:24] <perrito666> sinzui: which might be the reason why no one broke this in its own machine
[21:25] <sinzui> ericsnow, I think I got a volunteer to test vivid with 1.23-beta1 on monday. maybe he can explain the lxc breakage
[21:27] <ericsnow> sinzui: sweet
[21:27] <ericsnow> sinzui: also I don't think the unit tests failures on vivid are me :)
[21:28] <sinzui> ericsnow, no, I don't think so either. something is very odd about two of them because I only see ping failures when running tests in lxc
[21:32] <sinzui> ericsnow, I cannot upgrade
[21:33] <sinzui> I will attach my personal log
[21:34] <ericsnow> oh how I wish local provider ran entirely in VMs
[21:36] <sinzui> ericsnow, I think we all do.
[21:37] <ericsnow> sinzui: I put it on the agenda for Nuremberg (and thumper agreed to own it)
[21:38] <perrito666> you might have the most attended session of all
[21:38] <perrito666> beware
[21:38] <sinzui> ericsnow, indeed thumper has ideas to make it work right
[21:39] <ericsnow> sinzui: we've had the ideas for a while, just not the resources
[21:39] <sinzui> yeah. I know the pain.
[21:42] <ericsnow> sinzui: speaking of vivid, how do I re-run local-deploy-vivid-amd64 with --DEBUG?
[21:43] <sinzui> ericsnow, we can add --debug to the command line arg of the test. then rerun the test with the current packages. CI only retests current packages
[21:44] <ericsnow> sinzui: that's fine, I just want DEBUG logging
[21:46] <sinzui> ericsnow, bugger --debug was lost this week in an upgrade...I will have it back in a few minutes
[21:47] <ericsnow> sinzui: no worries
[23:02] <jw4> in the TestPrune* dblogpruner tests I keep getting an error : "failed to retrieve log counts: no such cmd: scale"
[23:02] <jw4> running 'scale' has ubuntu advising me to install csound-utils which doesn't seem right
[23:04] <jw4> hmm interesting "scale" seems to refer to a key in a bson.M{} struct
[23:10] <jw4> holy moly it's a bug
[23:10] <jw4> :)
[23:26] <jw4> OCR PTAL : Fix for bug 1434741 http://reviews.vapour.ws/r/1222/
[23:26] <mup> Bug #1434741: PruneLogs suffers from an indeterminate map iteration order bug <juju-core:New> <https://launchpad.net/bugs/1434741>
[23:32] <mup> Bug #1434741 was opened: PruneLogs suffers from an indeterminate map iteration order bug <juju-core:New> <https://launchpad.net/bugs/1434741>