[00:25] <mup> Bug #1569097 opened: jujud fails to start with "could not find a suitable binary for "0.0/mmapv1"" <blocker> <mongodb> <juju-core:Triaged> <https://launchpad.net/bugs/1569097>
[00:30] <cherylj> wallyworld: could you or someone from your team take bug 1569097?
[00:30] <mup> Bug #1569097: jujud fails to start with "could not find a suitable binary for "0.0/mmapv1"" <blocker> <mongodb> <juju-core:Triaged> <https://launchpad.net/bugs/1569097>
[00:30] <wallyworld_> ok
[00:30] <cherylj> thanks!
[00:31] <wallyworld_> cherylj: part of the issue is the mongo stuff got merged too soon, so we'll need to look into how to deal with that. i'm still ramping up on the issues
[00:31] <wallyworld_> cherylj: i also added a bug to the board - HA doesn't use bootstrap constraints
[00:31] <cherylj> wallyworld_: I guess that was a miscommunication - we thought it was safe to merge because it had a fallback :/
[00:32] <wallyworld_> cherylj: my PR didn't have a fallback - it expected mongo 3.2 to be in xenial
[00:33] <cherylj> the above bug happened on trusty, if it makes a difference
[00:33] <wallyworld_> anyways, all good, we'll fix
[00:33] <wallyworld_> on trusty it was supposed to use mongo 2.4 stuff, hmmm, i'll need to check
[00:34] <wallyworld_> i bootstrapped yesterday without issues, but that may have been on xenial, i'll need to check
[00:34] <wallyworld_> i wonder if wily is also broken
[00:35] <wallyworld_> cherylj: good news though - it's in the queue, so progress :-) https://launchpad.net/ubuntu/xenial/+queue
[00:35] <cherylj> yay!
[00:35] <cherylj> brb
[00:44] <wallyworld_> cherylj: i reckon bug 1534627 should be high rather than medium, since it quite adversely affects stakeholder deployments
[00:44] <mup> Bug #1534627: Destroyed models still show up in list-models <2.0-count> <conjure> <juju-release-support> <juju-core:Triaged> <https://launchpad.net/bugs/1534627>
[00:46] <rick_h_> wallyworld_: +1 and the change on it is backward incompatible
[00:46] <wallyworld_> yep, that too
[00:46] <mup> Bug #1569106 opened: juju deploy  <service> --to lxd:0 does not work <conjure> <juju-core:New> <https://launchpad.net/bugs/1569106>
[01:07] <perrito666> wallyworld_: hey, ruthere?
[01:08] <wallyworld_> maybe
[01:08] <wallyworld_> depends who's asking
[01:08] <perrito666> I would make a taxes joke, but I have no idea what the Aussie IRS is called
[01:08] <wallyworld_> ATO
[01:09] <wallyworld_> australian tax office
[01:09] <perrito666> will it kill you? like everything in australia?
[01:09] <wallyworld_> it can
[01:09] <wallyworld_> feeding it money helps
[01:10] <perrito666> so lemme know when you can go
[01:11] <wallyworld_> anytime
[01:11] <perrito666> k standup room?
[01:11] <wallyworld_> ok
[01:25] <mup> Bug #1569109 opened: Juju makes wrong network configuration when adding physical machine <juju-core:New> <https://launchpad.net/bugs/1569109>
[01:36] <natefinch> evening folks
[01:43] <alexisb> good evening all, see you in the morning
[01:47] <thumper> bugger...
[01:48]  * thumper sighs
[01:48] <thumper> shelving all current work to pop the stack and fix other bits.
[01:55] <natefinch> cherylj: is there something I should be working on to help unblock master?
[01:56] <cherylj> natefinch: want to take a look at https://bugs.launchpad.net/juju-core/+bug/1564791 ?
[01:56] <mup> Bug #1564791: 2.0-beta3: LXD provider, jujud architecture mismatch <blocker> <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1564791>
[01:56] <cherylj> looks like an interesting one
[01:56] <natefinch> not really ;)  But I will :)
[01:57] <natefinch> cherylj: actually, it gets less bad toward the end of the bug :)
[01:59] <mup> Bug #1569120 opened: wrong lxc bridge still used in juju beta4 <conjure> <juju-core:New> <https://launchpad.net/bugs/1569120>
[02:07] <cherylj> axw: got a sec?
[02:07] <axw> cherylj: yup?
[02:07] <cherylj> axw: I'm looking at bug 1569024
[02:07] <mup> Bug #1569024: Region names for rackspace should accept caps and lowercase <blocker> <rackspace> <juju-core:In Progress by cherylj> <https://launchpad.net/bugs/1569024>
[02:07] <cherylj> and was thinking that for public clouds, we could strings.ToLower the region names
[02:07] <cherylj> that way we don't mess with any user defined cloud regions
[02:08] <cherylj> and maintain compatibility for rax
[02:08] <axw> cherylj: gah, yeah, we should and I meant to do that
[02:08] <axw> cherylj: on input, lower case
[02:08] <cherylj> but just for public clouds, yes?
[02:08] <cherylj> or for all?
[02:09] <axw> cherylj: hrm. well, maybe not lowercase when we pass through, just compare case insensitive
[02:09]  * axw looks at the code
[02:10] <cherylj> ah, that works too
[02:10] <cherylj> strings.EqualFold()
[02:10] <cherylj> neato
[02:11] <axw> cherylj: I *think* it's just a matter of changing "getRegion" in cmd/juju/commands/bootstrap.go
[02:11] <axw> where we check region.Name ==
[02:11] <axw> cherylj: also the set-default-region command
[02:12] <cherylj> axw: yeah, I had some changes in there already, just wanted to verify what we should do
[02:12] <cherylj> axw: so don't change the region, just do a case insensitive comparison?
[02:12] <axw> cherylj: I think that's safest, yeah
[02:12] <cherylj> axw: sounds good, thanks
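The fix axw and cherylj settle on above — compare region names case-insensitively with `strings.EqualFold`, but don't rewrite what the user passed through — can be sketched in Go. This is illustrative only: the `region` type and `findRegion` helper are stand-ins, not juju's actual code in `cmd/juju/commands/bootstrap.go`.

```go
package main

import (
	"fmt"
	"strings"
)

// region is a stand-in for juju's cloud region type.
type region struct {
	Name string
}

// findRegion matches the user-supplied name case-insensitively, but
// returns the canonical region as stored in the cloud definition --
// so what gets written out is the canonical name, not whatever
// casing the user typed (the concern behind set-default-region).
func findRegion(regions []region, name string) (region, bool) {
	for _, r := range regions {
		if strings.EqualFold(r.Name, name) {
			return r, true
		}
	}
	return region{}, false
}

func main() {
	regions := []region{{Name: "DFW"}, {Name: "ORD"}}
	r, ok := findRegion(regions, "dfw")
	fmt.Println(r.Name, ok)
}
```

Comparing with `EqualFold` instead of lowercasing both sides avoids mutating user-defined cloud regions while still accepting "DFW" and "dfw" for rackspace.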
[02:24] <cherylj> natefinch: I have access to the arm hardware for that lxd bug.  Need me to forward it your way?
[02:26] <natefinch> cherylj: yes please
[02:27] <natefinch> cherylj: though it probably will be a matter of looking at the code and then thinking real hard.
[02:27] <cherylj> break out the hamster
[02:27] <cherylj> hey rcj, slumming it with the juju devs?
[02:37] <natefinch> cherylj: my brain refuses to read arm64 ... every time it translates it into amd64, and I have to do a double take to make sure it says the right thing
[02:38] <cherylj> natefinch: oh me too
[03:19] <axw> menn0: when you're importing a model, will it be visible during import? will it be mutable while importing?
[03:19] <axw> (import as in migration)
[03:20] <menn0> axw: there's a migration-mode flag which will be set to "importing"
[03:20] <menn0> axw: that blocks critical txns as well as preventing API logins for it
[03:20] <menn0> axw: the former has been done but not the latter
[03:20] <axw> menn0: ok, cool. but you'll still be able to see it in list-models?
[03:21] <menn0> axw: I guess so, but we could make it so they didn't show up
[03:21] <axw> menn0: I'm thinking it might make sense to have a status entry for models
[03:21] <menn0> axw: that could be done
[03:21] <axw> available, importing, destroying, archived
[03:21] <axw> something like that
[03:21] <menn0> axw: sounds useful
[03:22] <axw> menn0: we need to be able to filter out Dead models in list-models, but I think we should show status of Alive vs. Dying
[03:22] <axw> but a more descriptive status would be better
[03:22] <axw> I'll look at adding that
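The model status values axw floats above (available, importing, destroying, archived) could be sketched like this in Go. The names and the `Listable` helper are purely hypothetical illustrations of the idea, not juju's actual model status API:

```go
package main

import "fmt"

// ModelStatus is a hypothetical enumeration of the per-model states
// discussed above; the values come from the chat, the type does not
// exist in juju under this name.
type ModelStatus string

const (
	StatusAvailable  ModelStatus = "available"
	StatusImporting  ModelStatus = "importing"
	StatusDestroying ModelStatus = "destroying"
	StatusArchived   ModelStatus = "archived"
)

// Listable reports whether a model in this state should appear in
// list-models -- mirroring the idea of filtering out Dead/archived
// models while still showing Alive vs. Dying ones.
func (s ModelStatus) Listable() bool {
	return s != StatusArchived
}

func main() {
	states := []ModelStatus{StatusAvailable, StatusImporting, StatusDestroying, StatusArchived}
	for _, s := range states {
		fmt.Printf("%s listable=%v\n", s, s.Listable())
	}
}
```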
[03:24] <natefinch> menn0: speaking of migrations... I added a field to charmDoc and tried to figure out if there was anything I needed to do for migration, but couldn't find code migrating charm stuff. What's up with charms and migration?
[03:26] <menn0> natefinch: migration of charms and tools is still in-progress... there is code but it needs reworking and isn't plugged in to the process yet
[03:27] <menn0> natefinch: for most collections there's tests that fail if fields are added
[03:27] <menn0> natefinch: but probably not for charms yet
[03:27] <menn0> natefinch: so just add your field for now and email thumper and me about it just to make sure
[03:27] <natefinch> menn0: ok, cool, will do
[03:28] <menn0> axw: you thinking this status would replace the migration-mode field?
[03:28] <menn0> axw: or is the status a virtual concept only for the status API?
[03:28] <axw> menn0: probably not, it's just for human consumption
[03:28] <menn0> axw: kk
[03:29] <menn0> axw: you know that there's already a environment-status (hopefully model-status) section which can optionally appear in the status output
[03:29] <menn0> axw: perrito666 added it to support reporting that there's a tool upgrade available
[03:30] <menn0> axw: model migration status will appear there too
[03:30] <axw> menn0: ah ok, I'll check that out - thanks
[03:37] <rcj> cherylj, what did I do?
[03:38] <rcj> cherylj, I'm just here to remind everyone to use the 'daily' stream when running with the xenial series until it ships, otherwise you have a very stale experience.
[03:39] <cherylj> heh
[03:39] <rcj> I mean, that's not why I'm here, but I'll make that public service address whenever the opportunity presents itself.
[03:39] <cherylj> this has been a CPC public service announcement
[03:39] <rcj> cloud images, best consumed fresh
[03:40] <rcj> also, I'm not in charge of any actual branding efforts
[03:45] <cherylj> axw:  Can you do a quick review?  http://reviews.vapour.ws/r/4528/
[03:45] <axw> cherylj: looking
[03:46] <cherylj> natefinch: how's that arm bug coming?  (I'm curious because it's such a weird bug)
[03:46] <cherylj> it's not a nag :)
[03:47] <axw> cherylj: looks good, but can you please do set-default-region while you're there?
[03:47] <cherylj> gah, I forgot you said that
[03:47] <cherylj> yes
[03:47] <axw> cherylj: thanks :)
[04:19] <cherylj> axw: can you take another look?  http://reviews.vapour.ws/r/4528/
[04:19] <cherylj> I had to do it a bit differently for set-default-region
[04:20] <cherylj> so that we wrote out what was in the cloud region list, not what the user specified
[04:30] <menn0> cherylj, thumper: what was the decision on where to land stuff while master is blocked?
[04:30] <cherylj> bleh, I haven't done that.
[04:30]  * thumper waits...
[04:30] <thumper> cherylj: what was the decision?
[04:31] <cherylj> it was a back and forth for a while, but the general consensus was "yeah, sure"
[04:32] <axw> cherylj: sorry was afk, looking
[04:33] <cherylj> menn0, thumper, since it's already tomorrow, I can go either way on a bug branch.
[04:33] <cherylj> if either one of you wants to create one, go for it
[04:33] <cherylj> I'm just waiting to land this rackspace fix so I can go to bed
[04:33] <axw> cherylj: LGTM, thank you
[04:33] <axw> sorry for keeping you from bed :(
[04:33] <cherylj> it happens :)
[04:33] <cherylj> thanks for the review!
[04:33] <thumper> cherylj: was it acceptable to have a release branch?
[04:34] <cherylj> thumper: I'd rather not do that at this point because I don't know if CI would run on it tonight (until the QA team wakes up)
[04:34] <thumper> ah... good point
[04:40] <menn0> thumper, cherylj: let's make a "next" branch
[04:41] <thumper> ack
[04:41] <thumper> next branch created
[04:41] <menn0> the compression ratio achieved by lrzip is amazing but geez it's slow
[04:42]  * menn0 has been waiting for almost 2 hours for a file to decompress
[04:43] <mwhudson> menn0: two *hours*?
[04:43] <mwhudson> menn0: seems unlikely the extra compression saved you two hours of download time...
[04:46] <menn0> mwhudson: I agree but that's how the file came
[04:46] <menn0> it's a 365MB file that's currently up to 11GB and climbing
[04:46] <menn0> lrzip is even using every core and it's still taking this long
[04:57] <thumper> wow
[05:05] <menn0> mwhudson, thumper: just finished... a little over 2 hours. 365 MB to 14GB
[05:08] <davecheney> what was in that giant file ?
[05:09] <mwhudson> menn0: that is quite a ratio
[05:11] <menn0> davecheney: DB dump from a broken system
[05:14] <wallyworld_> axw: if you get a chance, here's a small mongo ha fix for beta4 http://reviews.vapour.ws/r/4529/
[05:14] <axw> wallyworld_: ok, a little later, trying not to context switch right now
[05:15] <axw> (unless it's urgent)
[05:15] <wallyworld_> tis fine, whenever suits
[05:15] <wallyworld_> nah, can wait
[05:15] <wallyworld_> so long as it lands sometime today so CI can run
[05:16] <wallyworld_> i could bug menn0 :-) if he is waiting for lrzip
[05:17] <davecheney>         m.Server = httptest.NewServer(nil)
[05:17] <davecheney>         c.Assert(m.Server, gc.NotNil)
[05:17] <davecheney>         m.oldHandler = m.Server.Config.Handler
[05:17] <davecheney> create a new server, then save the value of it's handler ...
[05:17] <davecheney> then restore the handler in the tear down
[05:17] <davecheney> then the new test overwrites the value we just restored ...
[05:17] <davecheney> wat
[05:24] <menn0> wallyworld_: i'm waiting for a long mgopurge run
[05:24] <menn0> wallyworld_: i'll take a look
[05:24] <wallyworld_> ty
[05:25] <wallyworld_> is there a customer issue?
[05:38] <menn0> wallyworld_: ship it
[05:38] <wallyworld_> menn0: yay, tyvm
[05:39] <menn0> wallyworld_: even though you're deleting a lot of my turd polishing :)
[05:39] <wallyworld_> menn0: sorry :-)
[05:39] <wallyworld_> less turds left now
[05:41] <menn0> wallyworld_: actually hang on
[05:41]  * wallyworld_ hangs
[05:41] <menn0> wallyworld_: can't you be a bit more aggressive about test removal
[05:41] <wallyworld_> possibly
[05:41] <wallyworld_> i thought about removing the whole fakeensure stuff
[05:41] <menn0> wallyworld_: some of those asserts you've removed were the point of those tests so I suspect the whole test can go
[05:41] <menn0> that's what I was thinking too
[05:42] <wallyworld_> yeah, had the same thought
[05:42] <menn0> if it's not being used
[05:42] <wallyworld_> i was trying to be a bit conservative
[05:42] <wallyworld_> i'll take another look
[05:42] <menn0> wallyworld_: it's really just TestMachineAgentUpgradeMongo
[05:42] <menn0> and perhaps the fakeensuremongo
[05:42] <wallyworld_> yep, i convinced myself that test remained useful
[05:43] <wallyworld_> but seems not
[05:50] <wallyworld_> menn0: yeah, a lot of extra code can just be deleted
[05:50] <menn0> wallyworld_: excellent
[05:51] <wallyworld_> peergroup is having a big haircut
[05:51] <wallyworld_> peergrouper
[06:15] <axw> wallyworld_: before you delete all that...
[06:15] <wallyworld_> already gone :-)
[06:15] <axw> is it still possible to promote machines to controllers with your changes?
[06:16] <wallyworld_> axw: you mean ones which are not yet has-vote
[06:16] <axw> wallyworld_: I mean "enable-ha --to 0,1,2"
[06:16] <axw> where we transform a non-state-server into a state-server
[06:16] <wallyworld_> i'll double check, i didn't test that explicitly
[07:08] <dimitern> whew it finally worked !
[07:31] <mup> Bug #1569196 opened: enable-ha with placement fails due to invalid JobManageNetworking <juju-core:Triaged> <https://launchpad.net/bugs/1569196>
[08:04] <voidspace> morning everyone
[08:05] <voidspace> back to the routine of the school-run this morning
[08:05] <voidspace> *sigh*
[08:13] <dimitern> morning voidspace
[08:23] <voidspace> dimitern: so thumper broke my code *again* overnight :-)
[08:23] <dimitern> voidspace: oh yeah? :)
[08:24] <voidspace> dimitern: see here: https://docs.google.com/document/d/1YmbdGpP7Oy5uglOwqbRXf1k_7siaxfEpoWkshk5_oPo/edit?ts=56fb30ca#
[08:24] <voidspace> dimitern: basically you were right about not_networks so he changed the allocate machine args again
[08:24] <voidspace> dimitern: and I was just updating the code to work with master as it was yesterday :-)
[08:24] <voidspace> it's not a big change - so not difficult
[08:25] <dimitern> voidspace: cool :)
[08:36] <thumper> voidspace: it is my mission in life to make your mornings miserable
[08:37] <thumper> however, dimitern will like to hear that he was right
[08:37] <voidspace> thumper: ah, that explains why you joined our standups!
[08:37] <voidspace> thumper: :-)
[08:37] <voidspace> thumper: hey, so gomaasapi now has its own dependencies.tsv
[08:38] <thumper> voidspace: if you want to jump in the hangout now, we can chat that way I can not work later
[08:38] <thumper> voidspace: yeah, needed for the merge bot
[08:38] <voidspace> thumper: sure
[08:38] <voidspace> thumper: right, but the versions of its dependencies are different than the juju ones
[08:38] <voidspace> thumper: I'll join the hangout
[08:38] <voidspace> babbageclunk: you too?
[08:38] <thumper> voidspace: shouldn't be off by much
[08:38] <voidspace> babbageclunk: early hangout
[08:38] <thumper> voidspace: probably just testing
[08:38] <voidspace> all of them are now different I think
[08:38] <babbageclunk> voidspace: sure
[08:39] <voidspace> thumper: but everything still works
[08:39] <voidspace> we just need to be careful
[08:39]  * thumper nods
[08:49] <dimitern> thumper: I told you ;)
[09:10] <TheMue> morning
[09:10] <dimitern> TheMue: \o
[09:15] <TheMue> dimitern: heavy on fire for Juju 2 and also 16.04?
[09:17] <dimitern> TheMue: oh yeah :)
[09:18] <Alex____> Hi, I wonder if somebody could please point to the place for a quick question on BigData charms?
[09:18] <TheMue> dimitern: how is J2 different from the J1.*? so many incompatible changes to change the major release number?
[09:20] <dimitern> TheMue: a lot has changed, and some things in an incompatible way, check the release notes :)
[09:23] <TheMue> dimitern: will do. still very interested in juju and always trying to place it in projects or give interested people a hint. many don't know it.
[09:35] <axw> fwereade_: the branch I put up is for 2.0, in which compatibility breaks are many and varied
[09:44] <babbageclunk> Anyone know why building the next branch is failing?
[09:59] <fwereade_> axw, oops, fair enough, I do default to unthinkingly-maintain-compat
[09:59] <dimitern> frobware: managed to figure it out - erc-email-userid needs to match my nick for i.canonical.c to accept it along with the server password
[09:59] <axw> fwereade_: and I thank you for it :)
[09:59] <frobware> dimitern: no turning back now :)
[10:00] <dimitern> frobware: indeed :)
[10:00] <frobware> babbageclunk: guessing... did you run godeps -u ...
[10:01] <babbageclunk> frobware: not locally - in the github-merge-juju Jenkins job.
[10:01] <babbageclunk> frobware: http://juju-ci.vapour.ws:8080/job/github-merge-juju/7313/console
[10:02] <babbageclunk> frobware: looks like lots of provider/lxd tests
[10:02] <dimitern> babbageclunk: fwiw I see the same errors even after upgrading to xenial when running make check on master tip
[10:02] <axw> fwereade_: responded to your other questions on RB, will look again tomorrow. thanks for the review
[10:03] <dimitern> if anything it got worse - I only saw a couple of failures yesterday on wily
[10:03] <babbageclunk> frobware: some of the failing builds under that are against master, some against next.
[10:04] <babbageclunk> frobware, dimitern: I tried running provider/lxd tests for next locally and I don't see the failures (although I didn't run the full test suite).
[10:05] <babbageclunk> dimitern: I'll try running make check
[10:05] <dimitern> babbageclunk: I'll try next now to see if it's any better
[10:06] <dimitern> but first I need to reboot..
[10:18] <menn0> hi all
[10:23] <babbageclunk> menn0: hi!
[10:23] <menn0> babbageclunk: how's it?
[10:23] <babbageclunk> Hey, I saw a build of yours failed with lots of lxd provider failures.
[10:24] <babbageclunk> Did you work out why? A branch of mine had that just now too.
[10:24] <menn0> babbageclunk: everyone's builds seem to be failing like that. I wonder if there's a problem with the test runner hosts.
[10:24] <menn0> any QA people about?
[10:24] <dooferlad> frobware: launchpad seems to have gone read only, so I can't put this in the bridged bond bug right now. The pre-up/post-down thing is a red herring. Even if you include them cloudinit hangs. Rebooting always works and cloudinit seems to finish happily.
[10:25] <dooferlad> frobware: and I really need to get the proxy bug fix landed, so pausing on this for now.
[10:25] <frobware> dooferlad: ack
[10:26] <dooferlad> frobware: ah, bug just updated. Yay web services.
[10:26] <frobware> dooferlad: really need to conclude on an investigation of replace ENI and reboot...
[10:28] <babbageclunk> menn0: Running the full test suite locally (on juju/next) I get the same failures
[10:30] <menn0> babbageclunk: interesting... so not the build hosts then
[10:31] <menn0> babbageclunk: I'm just finishing something else up and then I'll try on my machine.
[10:31] <babbageclunk> menn0: takes ages though so I haven't run the tests against master as well yet - I saw that cherylj has some failing runs against master with the same errors.
[10:34] <frobware> dimitern: whoa! that's subtle...
[10:35] <frobware> dimitern: we currently have 00-juju.cfg and eth0.cfg
[10:35] <frobware> dimitern: which would/could/should give us 2 addresses on eth0
[10:36] <frobware> dimitern: but because we specify a mac addr, the ifup via DHCP on eth0.cfg gives us the same IP addr
[10:37] <frobware> dimitern: ok, that explains it (for me at least) :)
[10:37] <menn0> babbageclunk: if you run just one of the tests that's failing in CI does it fail then? (that shouldn't take too long)
[10:38] <babbageclunk> menn0: Yeah, it turns out running just ./provider/lxd that fails.
[10:38] <dimitern> frobware: interesting
[10:39] <dimitern> frobware: and lucky I guess :)
[10:39] <babbageclunk> menn0: But now I can't find a version where it doesn't fail.
[10:39]  * menn0 runs those tests
[10:39] <menn0> babbageclunk: they pass for me
[10:39] <frobware> dimitern: I was trying to understand the behaviour. If I try this outside of juju the ifup on another foo.cfg (which also specifies eth0) will just add another IP addr to eth0.
[10:39] <babbageclunk> menn0: is it safe to just do a checkout, godeps, then go test?
[10:40] <menn0> babbageclunk: yep that should do it (as long as you have mongodb installed)
[10:40] <menn0> babbageclunk: and I guess you probably need to have lxd installed for some tests too
[10:40] <frobware> menn0, babbageclunk: isn't the underlying problem related to the configuration of lxdbr0?
[10:40] <babbageclunk> menn0: ok, so it'll rebuild everything.
[10:40] <frobware> or lack of
[10:41] <menn0> frobware: sure... but why is it suddenly happening in CI and on babbageclunk's machine?
[10:41] <babbageclunk> menn0, frobware: ok - I installed lxd last friday.
[10:41] <menn0> babbageclunk: what does "lxc version" show?
[10:41] <babbageclunk> 0.20
[10:41] <babbageclunk> menn0: I'm on wily
[10:42] <menn0> babbageclunk: I'm on vivid but I'm running 2.0.0.rc1
[10:42] <dimitern> so I see exactly the same test failures on next as on master
[10:42] <babbageclunk> Maybe I should upgrade to that.
[10:42] <menn0> dimitern: yes, all recent merge attempts have had the same lxd/lxcbr0 problems
[10:43] <babbageclunk> menn0: potentially that's also the problem on the build machine(s)
[10:43] <menn0> babbageclunk: there's a PPA for the current lxd from the lxd/lxc team
[10:43] <frobware> menn0, babbageclunk: to repro this just 'cd provider/lxd; go test'?
[10:44] <menn0> frobware: I believe so
[10:44] <babbageclunk> frobware: yup - might need a godeps in there too
[10:44]  * menn0 prefers or "go test ./provider/lxd" but whatever
[10:45] <dimitern> frobware: same thing with running only provider/lxd tests
[10:45] <frobware> menn0, babbageclunk: ok && not terribly helpful but OK: 77 passed, 1 skipped
[10:45] <frobware> menn0, babbageclunk: however, I am _only_ at dd9828ec7003d1a6ec1fc4dbcb7e6d17467a21f0
[10:46] <babbageclunk> menn0, frobware - ok, I'm going to add the ppa and upgrade.
[10:46] <frobware> babbageclunk, menn0: or go backto dd9828ec and try there... it may be something more recent in master
[10:46] <babbageclunk> menn0, frobware: then I guess if that fixes it then it's an indication that someone should do the same on build hosts.
[10:47] <dimitern> frobware: I suspect you did run `sudo dpkg-reconfigure -p medium lxd` as suggested by the tests?
[10:47] <dimitern> otherwise how are you not seeing the failures..
[10:47] <frobware> dimitern: nope, not medium. but I did reconfigure some time last week
[10:47] <menn0> dimitern: I can't repro the problem, and I haven't run dpkg-reconfigure in a long time
[10:47] <babbageclunk> frobware: I tried going back to find a bisect start point, but got back to last Monday and the tests were still failing.
[10:48] <frobware> menn0: it was probably last tue/wed when I did the dpkg-reconfigure
[10:48] <menn0> I haven't since I installed lxd (about 2 months ago?)
[10:48]  * dimitern *facepalm*
[10:48] <frobware> babbageclunk: my /etc/default/lxd-bridge config: http://pastebin.ubuntu.com/15784102/
[10:49] <babbageclunk> anyone have the ppa handy?
[10:49] <dimitern> I remember what I did - changed /e/default/lxd-bridge to not have IPv4 addresses as it was messing up my lxd multi-nic testing
[10:50]  * frobware would like to kickstart/jumpstart all his machines every morning to avoid state...
[10:50] <menn0> dimitern: but why is this also happening on the build hosts?
[10:51] <frobware> menn0: which is why I was suggesting first go back to my current rev ^^ to see if it's just recent churn in master.
[10:53] <babbageclunk> frobware: I'm on that rev - it's upstream/next and upstream/master (since no one's been able to land anything)
[10:53] <frobware> oooohhh. I am at that rev. apologies...
[10:54] <frobware> babbageclunk: my lxd package is:
[10:54] <dimitern> menn0: not sure - perhaps when /e/d/lxd-bridge was introduced it did not have IPv4 config and CI machines haven't been updated since?
[10:54] <frobware> $ apt-cache madison lxd
[10:54] <frobware>        lxd | 2.0.0-0ubuntu2 | http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
[10:54] <dimitern> now all provider/lxd tests pass
[10:54] <frobware> dimitern: to confirm, you're running xenial?
[10:54] <dimitern> frobware:yep
[10:56] <dimitern> however, p/lxd tests should NOT fail if anything like that happens on the machine - they should be properly isolated
[10:58] <babbageclunk> Huh. lxc version still says 0.20, but the tests pass for me now.
[10:58] <dimitern> babbageclunk: you're on wily?
[10:59] <babbageclunk> dimitern: yup
[11:00] <dimitern> babbageclunk: so I needed to add `deb http://ppa.launchpad.net/ubuntu-lxc/lxd-stable/ubuntu wily main` to /e/a/srcs.list to get lxd to work on wily
[11:01] <babbageclunk> dimitern: yeah - I did the same, but via add-apt-repository for ppa:ubuntu-lxc/lxd-stable
[11:01] <dimitern> babbageclunk: yeah - same thing, and then a-g update && a-g dist-upgrade
[11:02] <dimitern> dist-upgrade if you already installed lxd I mean
[11:03] <babbageclunk> ok, so how do we get the tests passing on the build machines?
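The fix babbageclunk and dimitern converge on — adding the lxd-stable PPA and upgrading an already-installed lxd — amounts to a short provisioning fragment. This is a sketch of the steps from the chat, assuming a wily box with lxd already installed; run at your own risk:

```shell
# Add the lxc team's stable PPA (ppa:ubuntu-lxc/lxd-stable), as
# discussed above; equivalent to editing sources.list by hand.
sudo add-apt-repository -y ppa:ubuntu-lxc/lxd-stable
sudo apt-get update

# dist-upgrade picks up the newer lxd when it's already installed.
sudo apt-get dist-upgrade -y

# Confirm the upgrade took (0.20 was the broken version here).
lxc version
```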
[11:10] <voidspace> ooh, down to two failures
[11:13] <dimitern> mgz: ^^
[11:14] <dimitern> mgz: istm the dpkg configure for lxd might have been skipped with the noninteractive frontend
[11:14] <dimitern> mgz: it will be useful to keep the machine around when the merge job fails to see what's going on
[11:15] <babbageclunk> dimitern: I'm running apt-get under ansible, so maybe it's being run noninteractively as well?
[11:16] <dimitern> babbageclunk: well, what's in /etc/default/lxd-bridge ?
[11:17] <babbageclunk> dimitern: http://pastebin.ubuntu.com/15784982/
[11:17] <babbageclunk> dimitern: But the tests are passing now that I've upgraded.
[11:18] <babbageclunk> dimitern: so maybe it was just that I added the ppa.
[11:18] <dimitern> babbageclunk: yeah, it looks like the tests should still fail (?!) mine was very similar before I fixed it
[11:21] <frobware> babbageclunk: looks like most of the config is empty
[11:35] <voidspace> frobware: dimitern: babbageclunk http://reviews.vapour.ws/r/4535/
[11:45] <voidspace> frobware: dimitern: babbageclunk http://reviews.vapour.ws/r/4535/
[11:45]  * voidspace lurches to lunch
[11:48] <dimitern> voidspace: looking
[11:51] <voidspace> dimitern: thanks
[11:54] <babbageclunk> frobware, dimitern, voidspace: http://reviews.vapour.ws/r/4536/
[11:54] <babbageclunk> voidspace: looking at yours now.
[11:54] <voidspace> babbageclunk: dimitern just reviewed it, but thanks
[11:54] <babbageclunk> voidspace: yeah, I just saw that it's merging
[11:55] <voidspace> it will fail
[11:55] <voidspace> dammit
[11:55] <voidspace> I missed off some test fixes - didn't push them
[11:55] <babbageclunk> voidspace: probably should have a look anyway - I'm OCR tomorrow.
[11:55] <voidspace> ooh
[11:55] <babbageclunk> voidspace: Well, it was going to fail due to the lxd thing anyway, right? ;)
[11:55] <voidspace> hah
[11:56] <voidspace> babbageclunk: are all merges backed up on that
[11:57] <babbageclunk> voidspace: there are 9 failures in a row on github-merge-juju that I think are provider/lxd ones.
[11:59] <voidspace> babbageclunk: nice :-)
[11:59] <voidspace> right
[11:59]  * voidspace really goes on lunch
[12:55] <wallyworld_> cherylj: a small one for a ha bug i found testing ha http://reviews.vapour.ws/r/4537/
[12:59] <babbageclunk> voidspace: that AllocateMachine change is biting me too - I'll use a version that has Link.IPAddress() but not the AllocateMachine change until you've updated stuff.
[13:22] <babbageclunk> wallyworld_: we've been having merge jobs fail on Jenkins because of LXD provider tests - do you know about that?
[13:22] <wallyworld_> i saw my job fail, but don't know what's wrong with lxd
[13:23] <rick_h_> babbageclunk: wallyworld_ cherylj and QA at looking into I think
[13:23] <wallyworld_> but i did see a bug where lxd behaves differently on trusty vs xenial with the bridge
[13:23] <wallyworld_> i strongly suspect an upstream lxd issue
[13:23] <babbageclunk> rick_h_: ok, thanks
[13:24] <wallyworld_> babbageclunk: bug 1569120 may be related / relevant
[13:24] <mup> Bug #1569120: wrong lxc bridge still used in juju beta4 <conjure> <juju-core:Incomplete> <https://launchpad.net/bugs/1569120>
[13:25] <babbageclunk> rick_h_, wallyworld_: if it helps, I was getting the same failures on my machine (wily) until I added the PPA for lxd-stable and upgraded.
[13:25] <mgz> we're using daily xenial images for the merge bot (which we have to, as the last one has too old an lxc)
[13:26] <mgz> and there's a new lxd as of 2016-04-11 that's probably in today's image
[13:26] <mgz> with various changes, bug 1548489
[13:26] <mup> Bug #1548489: [FFe] Let's get LXD 2.0 final in Xenial <lxd (Ubuntu):Fix Released> <https://launchpad.net/bugs/1548489>
[13:26] <mgz> so it's likely we just got broken again
[13:26] <babbageclunk> mgz: ah, ok - thanks
[13:39] <frobware> bug #1569361 makes it hard to iterate on fixing container bugs...
[13:39] <mup> Bug #1569361: LXD containers fail to upgrade because the bridge config changes to a different IP address <network> <juju-core:New> <https://launchpad.net/bugs/1569361>
[13:43] <cherylj> frobware:  :(
[13:43]  * perrito666 gets budgeted for his next home internet... U$D450/5M
[13:43] <mup> Bug #1569361 opened: LXD containers fail to upgrade because the bridge config changes to a different IP address <network> <juju-core:New> <https://launchpad.net/bugs/1569361>
[13:57] <mgz> so... where are we actually at with lxd?
[13:57] <mgz> our master doesn't work with their 2.0 - plus various other bugs?
[14:00] <voidspace> babbageclunk: or you can merge my branch
[14:00] <voidspace> babbageclunk: https://github.com/juju/juju/pull/5094/files
[14:01] <babbageclunk> voidspace: yeah, but this was pretty easy and likely to require less explanation at review time.
[14:02] <voidspace> babbageclunk: cool, that branch is ready to land though
[14:02] <katco> morning all
[14:02] <babbageclunk> voidspace: true - I'll need to merge it in eventually.
[14:02] <babbageclunk> katco: o/
[14:10] <wallyworld_>                          
[14:10] <wallyworld_> \
[14:10] <mgz> wallyworld_: your arm is falling off
[14:11] <wallyworld_> pressed wrong key
[14:16] <ericsnow> katco: rogpeppe1 is proposing a small API change in csclient.Client which would require a likewise small (isolated) change in core
[14:16] <ericsnow> katco: any objections?
[14:16] <katco> ericsnow: yeah saw the email... cherylj what would a change to core look like at this point? would it still go into rc1?
[14:17] <cherylj> katco: it pulls in an updated dep, right?
[14:17]  * ericsnow ignores wallyworld_ since he can't possibly be coherent at this point
[14:17] <katco> cherylj: and a small change to core
[14:18] <cherylj> katco: I'm going to say that should go into rc1.  (not what we're trying to release this week)
[14:18] <katco> cherylj: that's fine
[14:18] <katco> ericsnow: ok, no objections
[14:18] <cherylj> so put it in the next branch that thumper created
[14:18] <ericsnow> cherylj: FYI, it *is* a bug
[14:18] <ericsnow> katco: k
[14:18] <cherylj> yes, I know
[14:18] <ericsnow> rogpeppe1: ^^^
[14:18] <cherylj> ericsnow:  is there a bug opened?  the email I saw didn't mention one?
[14:18] <ericsnow> cherylj: not yet, I expect
[14:19] <rogpeppe1> cherylj: no, i didn't file a bug yet. will do.
[14:19] <ericsnow> rogpeppe1: thanks
[14:19] <cherylj> thanks rogpeppe1
[14:19] <ericsnow> and thanks for noticing the bug :)
[14:19] <katco> ericsnow: fix lands here: https://github.com/juju/juju/tree/next
[14:19] <rogpeppe1> cherylj: not sure if i should file the bug against juju-core or charmrepo/csclient
[14:20] <cherylj> rogpeppe1:  you can target to both
[14:20] <rogpeppe1> cherylj: interesting. how would I do that?
[14:20] <ericsnow> katco: is master for 2.0.1 now?
[14:21] <ericsnow> rogpeppe1: "Also affects project"
[14:21] <cherylj> rogpeppe1: Use "also affects project"
[14:21] <katco> ericsnow: that is my understanding. cherylj, correct?
[14:21] <cherylj> ericsnow: no, master is for beta4.  We didn't branch for the release last night because I didn't know if the branch would've been picked up for testing overnight
[14:22] <cherylj> (but it didn't matter anyway because no merge jobs passed because of lxd)
[14:22] <ericsnow> cherylj: so the fix for rogpeppe1's bug should go in master or next?
[14:22] <katco> cherylj: what is the "next" branch for?
[14:22] <rogpeppe1> cherylj: do i have to do that after submitting the bug? i don't see that option in the "new bug" page.
[14:22] <cherylj> next is for rc1
[14:22] <cherylj> when we release beta4, we will merge next into master
[14:23] <ericsnow> cherylj: ah, okay
[14:23] <mgz> it's 100% that someone is going to screw up targeting here
[14:23] <katco> that seems... backwards
[14:23] <cherylj> rogpeppe1: yes, after you create the bug you can target to a different project
[14:24] <cherylj> katco: yes, I know, but we did it that way because we wanted to make sure master / whatever we're going to release got a CI run overnight and I didn't know if it would pick up a new branch
[14:24] <cherylj> and it was way past EOD for the qa team
[14:24] <katco> cherylj: our tooling T.T
[14:24] <rogpeppe1> cherylj: it doesn't like the fact that there's no launchpad project for charmrepo (it's in github)
[14:24] <cherylj> rogpeppe1: then just target to juju-core
[14:24] <rogpeppe1> cherylj: i've created the bug. https://bugs.launchpad.net/juju-core/+bug/1569386
[14:25] <mup> Bug #1569386: list resources will not work correctly <juju-core:New> <https://launchpad.net/bugs/1569386>
[14:25] <cherylj> thanks!
[14:25] <cherylj> guess I should create a 2.0 rc1 milestone
[14:29] <cherylj> hey natefinch, any luck with bug 1564791?
[14:29] <mup> Bug #1564791: 2.0-beta3: LXD provider, jujud architecture mismatch <blocker> <lxd> <juju-core:Triaged by natefinch> <https://launchpad.net/bugs/1564791>
[14:31] <natefinch> cherylj: it's kind of a twisty maze of code getting passed around, but I have some suspicious lines I'm looking at. e.g.  if result.Arch == "" {result.Arch = "amd64"}
[14:46] <mup> Bug #1569386 opened: list resources will not work correctly <juju-core:New> <https://launchpad.net/bugs/1569386>
[14:57] <natefinch> ericsnow: just had a good idea about the bug 3 lines up... I think this is another case of needing to make our "local" provider special.  LXD has to always default to the arch of the host machine, but we have provider code that says that if you don't specify the arch, we default to amd64, which obviously fails to run on other arches.  I think we never see this in development, because we always use --upload-tools
[14:57] <mup> Bug #3: Custom information for each translation team <feature> <iso-testing> <lp-translations> <Launchpad itself:Fix Released> <MTestZ:Invalid> <Ubuntu:Invalid> <mono (Ubuntu):Invalid> <https://launchpad.net/bugs/3>
[14:58] <ericsnow> natefinch: yep
[15:02] <katco> ericsnow: natefinch: standup time
[15:13] <natefinch> cherylj: is there a card for https://bugs.launchpad.net/juju-core/+bug/1564791
[15:13] <mup> Bug #1564791: 2.0-beta3: LXD provider, jujud architecture mismatch <blocker> <lxd> <juju-core:Triaged by natefinch> <https://launchpad.net/bugs/1564791>
[15:13] <cherylj> natefinch: not yet, I can make one for you
[15:13] <frobware> anybody else see bootstrap failures related to mongod not found in PATH?
[15:14] <frobware> I have bootstrapped quite a few times today but has failed twice in a row now
[15:14] <frobware> see bug #1569408
[15:14] <natefinch> cherylj: thanks
[15:14] <mup> Bug #1569408: Failed to bootstrap because exec: "mongod": executable file not found in $PATH <juju-core:New> <https://launchpad.net/bugs/1569408>
[15:18] <katco> cherylj: can redir land help text changes into the next branch?
[15:19] <cherylj> katco: yes
[15:19] <katco> cherylj: k ta
[15:20] <mup> Bug #1569408 opened: Failed to bootstrap because exec: "mongod": executable file not found in $PATH <juju-core:New> <https://launchpad.net/bugs/1569408>
[15:21] <redir> :)
[15:25] <katco> redir: what's your launchpad id?
[15:34] <redir> reedobrien
[15:34] <redir> katco: ^
[15:34] <katco> redir: ty
[15:50] <dimitern> now everything is broken
[15:53] <dimitern> maas cannot bootstrap due to missing mongod, aws can't add lxc containers as cloud-init sets a non-present locale en_US.UTF-8
[15:54] <dimitern> and the locale is missing because apt-get update & upgrade are apparently required for xenial now
[15:56] <frobware> dimitern, voidspace, tych0: PTAL @ https://github.com/juju/juju/pull/5099
[15:57] <frobware> dimitern: I went back to trusty and added backports to sources.list -- working there. \o/
[15:58] <dimitern> frobware: looking
[15:58] <dimitern> frobware: I managed to get xenial to work as well by doing a-g up & upg & a-g install language-pack-en-base
[15:59] <frobware> dimitern: I can no longer bootstrap with xenial...
[15:59] <dimitern> frobware: on maas, I have the same issue - but I'm using AWS now to verify dropping address-allocation ff does not break something there
[16:00] <frobware> dimitern: gotcha
[16:01] <natefinch> lol, I now have 3 unkillable lxd environments
[16:02] <dimitern> frobware: LGTM
[16:02] <frobware> dimitern: ty
[16:03] <natefinch> uh.... anyone know what this means?
[16:03] <natefinch> $ juju bootstrap local-Apr-12 lxd --upload-tools
[16:03] <natefinch> ERROR invalid config: no addresses match
[16:04] <perrito666> throw some debug there?
[16:04] <dimitern> natefinch: try --debug?
[16:04] <natefinch> oh, maybe this is the lxd problem everyone's been having, that I avoided by just not using lxd for a while :/
[16:04] <natefinch> 2016-04-12 16:04:14 DEBUG juju.cmd.juju.commands bootstrap.go:365 preparing controller with config: map[type:lxd name:admin uuid:0a58e9ef-099f-4cf8-8a48-2772cf8b5c05 controller-uuid:0a58e9ef-099f-4cf8-8a48-2772cf8b5c05]
[16:04] <natefinch> 2016-04-12 16:04:14 ERROR cmd supercommand.go:448 invalid config: no addresses match
[16:10] <dimitern> that's a new issue to me
[16:11] <katco> natefinch: there's a good thread on that with rogpeppe1 and redir
[16:12] <katco> natefinch: search email for that error message
[16:14] <rogpeppe1> natefinch: i think the underlying cause is this: https://bugs.launchpad.net/juju-core/+bug/1567952
[16:14] <mup> Bug #1567952: container/lxd: TestDetectSubnetLocal fails with link/none <juju-core:Triaged> <https://launchpad.net/bugs/1567952>
[16:16] <cherylj> natefinch: you need to do the dpkg-reconfigure to set up the bridge, then service lxd restart
[16:17] <cherylj> perrito666: do you have a minute?
[16:19] <natefinch> cherylj: is this something that'll get fixed?  Or is this something special because we ran old versions of lxd, or?
[16:19] <cherylj> natefinch: you should only have to do it once
[16:20] <cherylj> but it's something that right now, you have to do every time for newly provisioned instances
[16:20] <natefinch> cherylj: ew
[16:20] <cherylj> yeah
[16:23] <natefinch> cherylj: omg, this is so much worse than I expected
[16:23] <cherylj> hahaha
[16:23] <natefinch> seriously, an order of magnitude
[16:23] <cherylj> yeah
[16:23] <cherylj> it's *awesome*
[16:23] <natefinch> I hope the only thing I have to change the default for is the name of the bridge
[16:24] <frobware> cherylj: but at least I can run --upgrade-juju now with LXD containers... makes debugging a little quicker.
[16:25] <natefinch> and lol still fails with the same error message
[16:27]  * natefinch reboots just in case
[16:27] <alexisb> natefinch, for master I was able to get it working by running lxd init and configuring the bridge and network that way
[16:27] <natefinch> alexisb: ok
[16:28] <natefinch> alexisb: oh, it doesn't want me to to that since I have existing containers, let me dump those
[16:28] <bogdanteleaga> is there a way to replace the tools that are in state?
[16:28] <bogdanteleaga> to deploy a machine with freshly built tools?
[16:28] <alexisb> natefinch, yep you have to dump those, then I also removed my lxc bridge
[16:28] <alexisb> not sure if that step was necessary, but that was my process
[16:32] <natefinch> still thinks I have containers around, even though list says there aren't. Sigh.  Gotta run to lunch, will pick this up after.
[16:53] <dimitern> so the missing juju-mongodb3.2 package on xenial broke AWS bootstrap as well as MAAS (with update/upgrade enabled)
[16:53] <cherylj> perrito666: ping?
[16:53] <cherylj> dimitern: yeah, I'm working on it
[16:54] <cherylj> dimitern: well, the problem is now that it's there
[16:54] <cherylj> and we're not looking in the right place for it
[16:54] <dimitern> cherylj: oh, cheers then! :)
[16:54] <perrito666> dimitern: what?
[16:54] <cherylj> perrito666: hey, I've got mongo questions for you :)
[16:55] <perrito666> dimitern: current master wont fail if the package is not there
[16:55] <perrito666> cherylj: sure
[16:55] <cherylj> perrito666: it does :(
[16:55] <dimitern> perrito666: yeah? :)
[16:55] <cherylj> wait
[16:55] <dimitern> perrito666: sure
[16:55] <cherylj> sorry
[16:55] <cherylj> it fails if it *is* there
[16:55] <cherylj> heh
[16:55] <perrito666> cherylj: ok, ill need more details
[16:55] <cherylj> I can has english
[16:56] <cherylj> perrito666: can you HO?
[16:56] <perrito666> cherylj: gimme a sec
[16:56] <cherylj> perrito666: np, when you're ready:  https://plus.google.com/hangouts/_/canonical.com/mongo-fun?authuser=0
[16:59] <mgz> that sounds fun
[17:07] <perrito666> cherylj: mgz look its no fun adding bugs to stuff if you people are going around finding them
[17:07] <cherylj> ha
[17:07] <cherylj> so it *IS* sabotage?
[17:08] <cherylj> hey mgz - about functional-container-networking
[17:08] <mup> Bug #1569467 opened: backup-restore loses the hosted model <backup-restore> <ci> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1569467>
[17:08] <cherylj> oh yeah, I was going to look at that ^^
[17:08] <cherylj> good timing, mup
[17:08] <cherylj> mgz: there was a change in juju ssh to default to not using the proxy
[17:09] <cherylj> which breaks that test on AWS
[17:10] <mgz> oh fun
[17:10] <cherylj> mgz: but an easy fix.  Just use juju ssh --proxy=true
[17:10] <cherylj> and backwards compatible to boot
[17:10] <mgz> how long have we had the --proxy flag?
[17:11] <mgz> I guess it doesn't matter too much, can just supply it always for 2.0
[17:11] <cherylj> https://goo.gl/X0oQBt
[17:23] <perrito666> cloud "lxd" not found, trying as a provider name  <--- such is my luck
[17:27] <mgz> perrito666: that's an expected warning
[17:28] <mgz> perrito666: it should still continue fine from there
[17:31] <perrito666> mm I am getting same error as nate, I wonder if the upgrade did something to my conf
[17:47] <natefinch> so... lxc list returns an empty list, but lxd init says error: You have existing containers or images. lxd init requires an empty LXD.
[17:48]  * natefinch reboots just in case
[17:49] <natefinch> sigh
[17:50] <natefinch> hey, that's a different error message
[17:50] <natefinch> $ juju bootstrap local-apr-12 lxd --upload-tools
[17:50] <natefinch> ERROR cannot find network interface "lxcbr0": route ip+net: no such network interface
[17:50] <natefinch> ERROR invalid config: route ip+net: no such network interface
[17:50] <alexisb> natefinch, it should not be looking for lxcbr0
[17:51] <alexisb> lxdbr0
[17:51] <alexisb> are you working off master?
[17:51] <perrito666> natefinch: did you dpkg-reconfigure lxd ?
[17:52] <perrito666> be sure to say yes to the ipv4 config
[17:52] <natefinch> perrito666: yes, I did, but I changed lxdbr0 to lxcbr0... I guess that was not the right thing to do.
[17:52] <perrito666> natefinch: I did too and am working now with lxd
[17:52] <perrito666> bootstrapping a xenial as we speak
[17:53] <alexisb> natefinch, you have time for a hangout
[17:53] <alexisb> we should be able to work through this
[17:53] <natefinch> alexisb: definitely... I'd love to get past this
[17:53] <alexisb> k, our 1x1 HO
[17:54] <perrito666> lemme know if I can help you
[17:56] <cherylj> ugh, I can't even get restore to work
[17:56] <cherylj> the db just exits
[17:56] <cherylj> like "see ya, suckers"
[17:56] <cherylj> Apr 12 17:49:41 ubuntu mongod.37017[3194]: Tue Apr 12 17:49:41.356 [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[17:57] <natefinch> perrito666: thanks, alexis is helping me out
[17:57] <cherylj> I hit this during the last restore problem I was debugging
[17:58] <cherylj> perrito666: did you ever see this restore problem?
[17:58] <perrito666> cherylj: never, which is weird since I coded it
[18:00] <cherylj> I hit this every time.  am I doing something wrong?  argh!
[18:02] <natefinch> yay, alexisb fixed it for me :)
[18:03] <alexisb> :)
[18:03] <natefinch> ugh wow, I wonder if there's a network problem between me and wherever the images are hosted 'cause dayum this is a slow download
[18:03] <alexisb> natefinch, they have been very slow
[18:03] <alexisb> once the image is cached it is easy
[18:04] <natefinch> yeah
[18:04] <alexisb> you can always copy the image over and alias it with the tag
[18:04] <alexisb> lxd will look for the tag and use it
[18:07] <natefinch> alexisb: this will be done in 5 minutes or so, it's ok
[18:16] <natefinch> gah... is juju ssh supposed to work?
[18:17] <perrito666> natefinch: might fail in lxd
[18:17] <perrito666> natefinch: just lxc list and ssh to the machine
[18:17] <mup> Bug #1569490 opened: storage-get crashes on xenial (aws) <storage> <juju-core:New> <https://launchpad.net/bugs/1569490>
[18:17] <natefinch> you just need to put -m <model> before the machine number for some reason
[18:18] <natefinch> oh, I guess because if you put it after, it thinks that's the ssh command
[18:18] <natefinch> blech
[18:19] <natefinch> perrito666: works fine in lxd... just PEBCAK
[18:19] <perrito666> lol I just ssh I am lazy
[18:20] <natefinch> I juju ssh because I'm lazy :)
[18:31] <natefinch> whelp, figured out why I always call kill-controller and not destroy-controller... I don't have to type out the pesky --destroy-all-models
[18:32] <cherylj> yep
[18:33] <cherylj> natefinch: are you trying to ssh to machine 0 just after a bootstrap?
[18:34] <natefinch> cherylj: yes, but it was just a problem of spelling the command correctly, what with multiple models and stuff
[18:34] <cherylj> ah, ok
[18:35] <cherylj> ..... and now mongo3.2 has hit the mirrors for my region
[18:35] <cherylj> yay
[18:35] <natefinch> cherylj: like, it defaults to an empty model, but I wanted to ssh to the controller, so I had to specify the model, but if you put that after juju ssh 0 then it thinks it's a command....
[18:35] <cherylj> yeah, that's totes annoying
[18:36] <natefinch> and I happened to have already created a machine in the non-admin model, so juju ssh 0 still worked and tried to run -m admin as an ssh command, which gave a wacky error message
[18:41] <redir> if unrelated tests fail in CI, do I need to resubmit the PR?
[18:41] <natefinch> yes, if you think they're spurious and will go away
[18:42] <natefinch> ...which is fairly common, unfortunately.  But if you're not sure, send a link and we can help
[18:47] <redir> well the first failure is a termination worker timeout which is fine locally and I can't imagine that it would be related to helptext updates, so I'll resubmit in a bit.
[18:49] <redir> second failure is because it can't untar juju-core_2.0-beta4.tar.gz...
[18:49] <redir> which seems like a CI hiccup
[18:49] <natefinch> sinzui: ^
[18:49] <redir> I'll resubmit both after the queue shrinks.
[18:50] <natefinch> redir: I wouldn't count on the queue shrinking, just sayin' :)
[18:50] <cherylj> true dat
[18:50] <natefinch> redir: yes, sounds like one-off failures, though the failure to untar is concerning
[18:51] <redir> no such file/dir so prolly failed to DL in time.
[18:51] <sinzui> redir: looks like a hiccup, the tar file didn't arrive on the testing instance.
[18:51] <mgz> natefinch: looking at the log, we got ssh disconnected when scping the source to the ec2 test running machine
[18:52] <mgz> ...sinzui won ;_;
[18:52] <sinzui> oh, is this using the xenial ami?
[18:53] <sinzui> mgz: We are testing with the xenial from last week.
[18:53] <redir> yeah I see 'lost connection' above
[18:53] <redir> I know exactly what will fix this for me.
[18:53] <redir> Soup and/or sandwich
[18:54] <cherylj> if only that were the answer to all problems.  sigh...
[18:54] <mgz> mmm, soup
[18:55] <redir_lunch> I guess it is just a work-around
[19:17] <mup> Bug #1569529 opened: update-clouds strips "DO NOT EDIT" warning <ci> <update-clouds> <juju-core:Triaged> <https://launchpad.net/bugs/1569529>
[19:18] <natefinch> sinzui, mgz: is there a trick to compiling for arm64?  GOARCH=arm64 go build github.com/juju/juju/cmd/juju returns errors from lxd about undefined functions
[19:19] <sinzui> natefinch: You can compile on the actual host if you like. That is what we do
[19:20] <natefinch> sinzui: I guess... cross compile *should* work and lets me edit in my local environment... but I guess I can copy my code up
[19:21] <sinzui> natefinch: We cross compile windows. In the case of all builds, we use the release tarfile. The script that makes it double checks the deps and purges undocumented packages.
[19:23] <sinzui> natefinch: The installed lxd packages can differ between archs in ubuntu.
[19:23] <cherylj> sinzui: Okay, I actually got a restore to work.  Does the test kill the controller?  or use destroy-controller?
[19:23] <natefinch> sinzui: yes, not the code, though... and I'm getting a compile error
[19:23] <sinzui> cherylj: kill-controller.
[19:24] <cherylj> thanks.
[19:24] <cherylj> btw - the output makes it look like it's a status command that's failing:
[19:24] <cherylj> ERROR:root:Command '('juju', '--show-log', 'show-status', '-m', 'functional-backup-restore', '--format', 'yaml')' returned non-zero exit status 1
[19:24] <cherylj> it's just confusing for me
[19:24] <cherylj> but anyway
[19:25] <natefinch> sinzui: oh, it uses cgo, that's probably the problem
[19:25] <sinzui> natefinch: I don't think arm64 golang-1.6 is using cgo. only the osx is using cgo to my knowledge
[19:26] <natefinch> sinzui: no no, sorry, not being clear. The LXD code uses cgo, which complicates cross compilation
[19:26] <sinzui> ah
[19:26] <sinzui> yeah it does. natefinch . We had to setup a dedicated  OS X builder because it does need cgo to link to the native crypto libs
[19:28] <sinzui> natefinch: this long log shows the last build of arm64 for master http://reports.vapour.ws/releases/3881/job/build-binary-xenial-arm64/attempt/424
[19:31] <natefinch> looks like the reason that you can cross compile windows is because the cgo stuff is all linux only.... what a PITA.
[19:35] <natefinch> ..well, duh, of course the lxd stuff isn't compiled in Windows :)
[19:36] <natefinch> sinzui: the arm64 machine can't access github, can it? :/
[19:37] <sinzui> natefinch: I just sent you an email with the ssh rules I use. The machine is on Canonical's network. It cannot see much
[19:37] <natefinch> sinzui: yeah, I got the ssh config stuff from cherylj last night.  I guess tgz it is
[19:41] <perrito666> cherylj: I got it fixed, ill make a pr, this goes against master?
[19:45] <natefinch> What happens when I targz a brand new gopath with just juju in it: -rw-rw-r-- 1 nate nate 216M Apr 12 15:44 src.tar.gz
[19:47] <natefinch> oh well, ship it.  Take longer to fix it than just push it up.  Yay for a decent upload speed.
[19:47] <natefinch> 4.8 MB/s... I'll take it
[19:50] <alexisb> natefinch, I was wondering if you had a minute to repay the favor from earlier :)
[19:51] <alexisb> I am stuck on a test update that I am sure is a simple "how go works" type q
[19:54] <bogdanteleaga> can I actually forcibly kill a controller using current master?
[19:54] <bogdanteleaga> "kill-controller" seems to be stuck waiting
[19:55] <alexisb> bogdanteleaga, you should be able to with kill-controller
[19:55] <alexisb> if it is not working it is a bug
[19:56] <bogdanteleaga> seems to be very happily stuck on "Waiting on 1 model, 2 machines, 3 services"
[19:56] <bogdanteleaga> alexisb, but I might be able to help with the how go works thing :p
[20:06] <redir> sinzui: got a second?
[20:06] <sinzui> I do
[20:07] <cherylj> perrito666: yes, against master
[20:07] <cherylj> bogdanteleaga, alexisb if the model is not in a good state, kill controller can "hang"
[20:08] <natefinch> alexisb: sorry, yes, I can help
[20:08] <cherylj> bogdanteleaga: see bug 1566426
[20:08] <mup> Bug #1566426: kill-controller should always work to bring down a controller <juju-release-support> <kill-controller> <juju-core:Triaged> <https://launchpad.net/bugs/1566426>
[20:08] <bogdanteleaga> cherylj, yeah I turned of the controller, issued it again and it went straight to the provider
[20:08] <cherylj> there's a "workaround" in there
[20:08] <bogdanteleaga> s/of/off
[20:08] <cherylj> yeah, that's the workaround :)
[20:09] <alexisb> natefinch, back to the 1x1 hangout
[20:09] <cherylj> while I have you here, bogdanteleaga, is bug 1516668 addressed by your action changes?
[20:09] <mup> Bug #1516668: Switch juju-run to an API model (like actions) rather than SSH. <2.0> <2.0-count> <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1516668>
[20:10] <bogdanteleaga> cherylj, yup
[20:10] <cherylj> bogdanteleaga: and that landed, right?
[20:10] <bogdanteleaga> cherylj, correct
[20:11] <cherylj> yay, fix committed it is, then!
[20:11] <bogdanteleaga> I think sometime last week
[20:11] <redir> tx sinzui
[20:11] <cherylj> bogdanteleaga: also bug 1470820 - now that we're at go 1.6, should this be done?
[20:11] <mup> Bug #1470820: Remove github.com/gabriel-samfira/sys/windows once go 1.4 lands <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1470820>
[20:12] <bogdanteleaga> cherylj, https://bugs.launchpad.net/juju-core/+bug/1426729
[20:12] <mup> Bug #1426729: juju-run does not work on windows hosts <juju-agent> <run> <ssh> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1426729>
[20:12] <bogdanteleaga> this too probably
[20:13] <bogdanteleaga> cherylj, yeah I've talked with curtis about that one last week before the CI switch
[20:13] <bogdanteleaga> however I'm still unsure
[20:13] <bogdanteleaga> since the tests on windows get run using 1.2
[20:13] <cherylj> bogdanteleaga: maybe something to look at for 2.1 then?
[20:17] <mup> Bug #1545116 changed: When I run "juju resources <service>" after a service is destroyed, resources are still listed. <2.0-count> <juju-release-support> <resources> <juju-core:Fix Released by cox-katherine-e> <https://launchpad.net/bugs/1545116>
[20:20] <bogdanteleaga> cherylj, I was about to say it shouldn't be that hard to get the tests passing on 1.6 until I saw the last email with the job
[20:21] <bogdanteleaga> cherylj, we might have to push it further back I guess
[20:21] <bogdanteleaga> any idea what's up with all the "no tools" test errors?
[20:22] <cherylj> bogdanteleaga: do you have a job link you could send?
[20:22] <cherylj> mgz: you still around?
[20:22] <mgz> cherylj: yo
[20:23] <bogdanteleaga> cherylj, http://reports.vapour.ws/releases/3881/job/run-unit-tests-centos7-amd64-go1_6/attempt/1
[20:23] <bogdanteleaga> sorry
[20:23] <bogdanteleaga> http://reports.vapour.ws/releases/3881/job/run-unit-tests-win2012-amd64-go1_6/attempt/1
[20:23] <bogdanteleaga> this one
[20:24] <cherylj> hey mgz could you help me figure out the juju commands that are run as part of the functional-backup-restore test?
[20:24] <cherylj> I can't recreate using what I *think* is going on, and the job output is unhelpful
[20:24] <cherylj> bogdanteleaga: let me take a look
[20:25] <cherylj> bogdanteleaga: I *think* there is one place in the test suite I could change to fix a lot of those problems
[20:27] <mgz> cherylj: sure, also refer to assess_recovery.py for the details
[20:28] <perrito666> how can git not be able to fix a conflict where one commit has nothing and the other has something there....
[20:31] <perrito666> cherylj: did anyone just land anything in master?
[20:31] <mgz> cherylj: if you want, we can also rerun a CI job with --verbose for the explicit
[20:32] <cherylj> mgz: would you be able to do that for this job?  It would be most helpful to see the output of the reboostrap
[20:32] <bogdanteleaga> cherylj, sounds good, I don't understand how changing the go version can give that kind of error
[20:32] <perrito666> cherylj: well a change from wallyworld_ has just landed that broke my patch and pseudo fixed the issue
[20:32] <cherylj> hmm
[20:32] <mgz> cherylj: backup-restore exactly, not one of the other variants?
[20:32] <mup> Bug # changed: 1175580, 1235529, 1276403, 1279879, 1280949
[20:33] <cherylj> mgz: yeah functional-backup-restore
[20:33] <mgz> building
[20:34] <perrito666> I really need a punching bag in my office
[20:34] <cherylj> sounds like an idea for the next team sprint, perrito666
[20:34] <cherylj> instead of tshirts
[20:34] <cherylj> here's a punching bag!  (complete with juju logo)
[20:34] <perrito666> cherylj: oh no need, in the sprint I can use wallyworld_  :p
[20:34] <cherylj> lol
[20:35] <wallyworld_> perrito666: wot you talking about?
[20:36] <perrito666> wallyworld_: GO TO SLEEEEEEEEP
[20:36] <TheMue> hehe
[20:36] <perrito666> oh ts 6:30 tis ok
[20:36] <wallyworld_> perrito666: i just woke up
[20:36] <perrito666> wallyworld_: go breakfast?
[20:36] <TheMue> punching bags w/o sand, otherwise hard to take it as hand luggage in the plane
[20:36] <perrito666> anyway we just clashed on a fix
[20:37] <wallyworld_> perrito666: getMongoDumpPath still needs to be fixed
[20:37] <TheMue> perrito666: I'll go to bed instead of wallyworld_. here it is almost 11pm now, so time is getting closer.
[20:37] <cherylj> wallyworld_: no sts call today, btw
[20:37] <wallyworld_> cherylj: yeah, saw, ty
[20:38] <wallyworld_> i can actually have breakfast :-)
[20:39] <perrito666> this inability to actually finish destroying controllers is beginning to get on my nerves
[20:40] <perrito666> finally
[20:44] <mgz> oh, what the pants. assess_recovery.py is one of our few jobs that doesn't use common args yet
[20:47] <mup> Bug # changed: 1158187, 1280953, 1289619, 1374906
[20:51] <mgz> cherylj: really rebuilding this time
[20:52] <cherylj> I'm watching it now, mgz  :)
[20:53] <mgz> hm, I want to make our wait loops nicer with --verbose
[20:57] <natefinch> gah, I can't tell if I've fixed this bug, because --upload-tools hides it
[20:57] <cherylj> yeah, what a pain  :(
[20:59] <natefinch> cherylj: I gotta run to make dinner for the kids.  won't be back for a few hours until after they're in bed.
[21:00] <cherylj> natefinch: can you push your changes somewhere?  maybe we could make a branch and test?
[21:02] <natefinch> cherylj: here's a PR..I am honestly not super confident in the fix, since I was kind of running blind... and furthermore the tests in that package pass both before and after I made my change, which means they're not actually testing that
[21:02] <natefinch> cherylj: https://github.com/juju/juju/pull/5116
[21:02] <cherylj> :(
[21:02] <cherylj> thanks, natefinch, we'll see what we can do
[21:02] <natefinch> cherylj: I'll be back on likely in 3.5 hours.
[21:08] <mgz> cherylj: run finished, 'INFO juju --show-log' search should get you all the commands
[21:09] <mup> Bug #1554863 changed: juju bootstrap does not error on unknown or incorrect config values <2.0-count> <juju-release-support> <juju-core:Fix Released> <https://launchpad.net/bugs/1554863>
[21:10] <cherylj> thanks mgz
[21:12] <perrito666> cherylj: http://reviews.vapour.ws/r/4552/
[21:15] <cmars> does lxd placement work? as in, should i be able to juju deploy xyz --to lxd:<machine-number> ?
[21:15] <katco> cherylj: i need a new bug to work on. all of them seem to require a lot of context... any suggestions on what to pick up?
[21:15] <cherylj> let me look
[21:16] <cmars> does lxd placement work? as in, should i be able to juju deploy xyz --to lxd:<machine-number> ?
[21:16] <cherylj> katco:  you can review perrito666's PR while I do that?  ^^ :)
[21:16] <cmars> sorry, wrong window
[21:16] <katco> cherylj: sure
[21:16] <katco> perrito666: if you can review mine :) http://reviews.vapour.ws/r/4551/
[21:16] <cmars> was up-arrow,enter-ing in a term
[21:16] <perrito666> katco: just in case, check it in github too, I am not sure how well rb takes amends
[21:16] <perrito666> katco: sure
[21:18] <katco> perrito666: where's the test for this?
[21:20] <perrito666> katco: mm, you are right, that did not break a test, lemme check that again
[21:21] <perrito666> katco: ship it, but, I am curious, why this change?
[21:21] <perrito666> this is going to make development testing incredibly hard
[21:24] <katco> perrito666: just going on what the bug said. "the decision has been made"
[21:24] <katco> perrito666: i was not part of that conversation
[21:25] <perrito666> oh, ok, well It is time to resurrect my fake streams builder it seems
[21:26] <perrito666> well of course I did not break any tests... there arent tests for that, well, lets fix that
[21:26] <katco> perrito666: :)
[21:27] <perrito666> aaand of course, external tests
[21:27] <katco> perrito666: what do you mean external tests?
[21:30] <perrito666> package_test tests
[21:30] <katco> perrito666: i think that's devs discretion and i actively avoid doing that
[21:30] <katco> perrito666: because it just causes boilerplate churn
[21:31] <perrito666> I believe Ill do regular unit tests
[21:31]  * katco cheers
[21:31] <perrito666> I am on your side, I was protesting that the existing ones are externals
[21:31] <perrito666> I cant wait for this semester discussion about internal vs external tests
[21:32] <katco> perrito666: lol
[21:32]  * perrito666 has it pretty much like one of the sprint events
[21:40]  * redir is in a maze of twisty little passages, all alike
[21:40] <katco> redir: please beware of the grue, we've only begun to get to know you.
[21:41] <redir> :)
[21:47] <perrito666> oh dont worry, if you find it just throw the status tests to it, that should keep it occupied a good half an hour
[21:48] <mup> Bug #1565089 changed: create-model does not use the same config format as bootstrap <jujuqa> <juju-core:Fix Released> <https://launchpad.net/bugs/1565089>
[21:48] <mup> Bug #1566303 changed: uniterV0Suite.TearDownTest: The handle is invalid <blocker> <ci> <regression> <test-failure> <unit-tests> <windows> <juju-core:Invalid> <juju-core 1.25:Fix Released by dave-cheney> <https://launchpad.net/bugs/1566303>
[22:18] <mup> Bug #1339931 changed: Status panicks during juju-upgrade <panic> <status> <upgrade-juju> <juju-core:Fix Released> <https://launchpad.net/bugs/1339931>
[22:22] <wallyworld_> perrito666: quick chat?
[22:22] <perrito666> wallyworld_: sure
[22:22] <perrito666> where?
[22:22] <wallyworld_> standup
[22:30] <bogdanteleaga> this is impressive http://classicprogrammerpaintings.tumblr.com/
[22:31] <perrito666> wallyworld_: frozen
[22:32] <perrito666> wallyworld_: cannot hear you, you are frozen
[22:35] <perrito666> wallyworld_: you left me speaking alone
[22:37] <anastasiamac> perrito666: wow.. i think something exciting just happened on our side... i was kicked out from perrito666uassel at least... maybe ian experiences fun too..
[22:37] <anastasiamac> quassel that is..
[22:39] <wallyworld> perrito666: sorry, chrome ate all my memory :-(
[23:15] <mup> Bug #1567690 changed: Can't push charm to my new LP home <juju-core:Invalid> <https://launchpad.net/bugs/1567690>
[23:25] <thumper> ugh... struct equality again...
[23:25] <thumper> what's valid?
[23:33] <davecheney> struct equality ?
[23:34] <thumper> nm
[23:34] <thumper> interestingly...
[23:35] <thumper> if args == StructType{} {
[23:35] <thumper> return nil
[23:35] <thumper> doesn't work
[23:35] <thumper> but
[23:35] <thumper> var empty StructType
[23:35] <thumper> if args == empty {
[23:35] <thumper> does
[23:35] <thumper> hit this before, and no idea why Go doesn't like it
[23:37] <bogdanteleaga> thumper, have you tried args == (StructType{})?
[23:37] <thumper> no
[23:37] <thumper> but I find that less readable
[23:38] <thumper> so would probably go with empty var
[23:42] <anastasiamac> wallyworld: ? :/
[23:44] <bogdanteleaga> it breaks symmetry though :P
[23:50] <davecheney> thumper: http://play.golang.org/p/C16rPMEAlO
[23:51] <davecheney> it's a parsing ambiguity because the parser cannot tell where the struct literal ends and the block begins
[23:52] <davecheney> ironically it can with this even more verbose version
[23:52] <davecheney> http://play.golang.org/p/R_ui2oTlma
[23:52] <davecheney> but, what you're trying to do smells bad
[23:54] <wallyworld> katco: around?
[23:54] <mwhudson> can i get some juju usage help?
[23:55] <wallyworld> cherylj: katco: i think we got the PR for bug 1567170 backwards
[23:55] <mwhudson> i'm trying to test the juju-mongo-tools3.2 package i made
[23:55] <mwhudson> so i need to try to make a backup
[23:55] <mwhudson> i have a controller bootstrapped in ec2
[23:55] <wallyworld> mwhudson: bootstrap with mongo 3.2 is broken at the moment
[23:55] <mwhudson> but now i get
[23:55] <mwhudson> (master *)mwhudson@aeglos:juju-mongo-tools3.2$ juju backups create
[23:55] <mwhudson> ERROR backups are not supported for hosted models
[23:55] <mwhudson> wallyworld: i merged perrito666's PR
[23:56] <wallyworld> mwhudson: juju create-backup -m admin
[23:56] <wallyworld> or first switch to admin
[23:56] <wallyworld> juju switch admin
[23:56] <wallyworld> when you bootstrap, you are switched to the hosted model
[23:56] <mwhudson> ERROR while preparing for DB dump: mongodump not available: failed to get mongod path: exec: "mongod": executable file not found in $PATH
[23:56] <mwhudson> win, i think
[23:56] <mwhudson> wallyworld: now how do i log into the controller node?
[23:56] <cherylj> wallyworld, katco I think you're right
[23:57] <wallyworld> mwhudson: yeah, that's a bug i told horatio i found yesterday when doing a code read
[23:57] <mwhudson> oh juju ssh 0
[23:57] <mwhudson> oh right
[23:57] <wallyworld> mwhudson: the mongodump path needs to be fixed
[23:57] <wallyworld> i'll do a fix today
[23:57] <mwhudson> wallyworld: well mongodump is not even installed
[23:58] <wallyworld> mwhudson: that's because the mongotools package is not installed
[23:58] <wallyworld> juju should depend on it
[23:58] <mwhudson> wallyworld: because i haven't uploaded it yet :-)
[23:58] <wallyworld> right :-)
[23:58] <mwhudson> so i was going to install the package from the ppa
[23:58] <wallyworld> mwhudson: but even when it is uploaded, juju will look in the mongo2.4 path :-(
[23:58] <mwhudson> but you're saying that even that won't work, because the path is wrong?
[23:58] <mwhudson> excellent
[23:59] <wallyworld> i will do a fix this morning
[23:59] <wallyworld> i only just saw it yesterday doing a code read by accident
[23:59] <wallyworld> the backup code is not something i am 10000% familiar with