[01:30] <wallyworld> anastasiamac: can i please get a review on this which is a fix for one of the issues discovered last week during the outage analysis http://reviews.vapour.ws/r/1824/
[01:31] <anastasiamac> wallyworld: looking :D
[01:38] <anastasiamac> fun \o/
[03:19] <davecheney> ls
[06:08] <wallyworld> jam: hey, you around?
[06:08] <jam> hiya wallyworld
[06:08] <wallyworld> quiet today with everyone away
[06:08] <wallyworld> anyways
[06:08] <wallyworld> i have a MP for python-jujuclient which retries send requests if juju says it is upgrading
[06:09] <wallyworld> it should address some of the core issues deployer is having
[06:09] <wallyworld> could you take a look?
[06:09] <wallyworld> https://code.launchpad.net/~wallyworld/python-jujuclient/retry-on-upgrade/+merge/260658
[06:11] <jam> wallyworld: I do wish it was trivial to backoff retries
[06:11] <wallyworld> yeah
[06:11] <wallyworld> it *could* be implemented, but this i think is an ok first step
[06:12] <wallyworld> it covers the small window where juju machine agent needs to first check if upgrades are needed
[06:12] <wallyworld> during which time the api is limited and so the "upgrade error" is reported
[06:12] <wallyworld> which would be < 1 second normally
[06:13] <wallyworld> or thereabouts
[06:13] <wallyworld> 99% of the time (or pick your own stat), the api goes from limited -> open because no upgrade is required
[06:14] <wallyworld> this stops the case where people juju bootstrap && juju-deploy via a script
[06:14] <wallyworld> from going wrong
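(The narrow retry wallyworld describes, with the backoff jam wishes for, could be sketched like this in Python. All names here are illustrative, not the actual python-jujuclient code, which lives in the linked MP.)

```python
import time

class UpgradeInProgress(Exception):
    """Stand-in for the 'juju is upgrading' API error discussed above."""

def send_with_retry(send, request, retries=5, delay=1.0):
    # Retry ONLY the "upgrade in progress" case; any other error
    # propagates immediately, matching the deliberately narrow scope
    # of the change (disconnects are handled separately).
    for attempt in range(retries):
        try:
            return send(request)
        except UpgradeInProgress:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
            delay *= 2  # simple exponential backoff
```

In the common case the limited-API window is under a second, so the first retry already succeeds.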
[06:15] <jam> wallyworld: if its upgrading can't we also get disconnected completely during this time?
[06:16] <wallyworld> so, yes, but a recent juju change keeps the api in limited mode until the check to see if an upgrade is required has run. if an upgrade is required, it does the upgrade without giving the deployer a chance to connect only to be disconnected
[06:16] <wallyworld> but that change opened a small window where the deployer trying to connect initially got the "upgrading error"
[06:17] <wallyworld> because the upgrade worker needed to start
[06:17] <jam> wallyworld: so I see that you're retrying "upgrade in progress" which is fine, my concern is are we also retrying "I got disconnected completely". IIRC the latter is what broke OIL, etc.
[06:17] <wallyworld> i didn't intend to retry anything other than "we are upgrading"
[06:17] <wallyworld> for this change
[06:18] <wallyworld> the "we got disconnected" case is a bit separate
[06:18] <jam> wallyworld: Isn't the original bug about getting disconnected vs upgrading?
[06:18] <jam> (the problem with upgrading is that deployer got disconnected and then just died)
[06:18] <wallyworld> jam: hangout? a bit easier to explain
[06:19] <wallyworld> https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand
[06:19] <jam> sec, need to grab headphones
[06:19] <wallyworld> ok
[09:01] <dimitern> dooferlad, standup?
[09:31] <voidspace> dimitern: thanks to dooferlad it now works!
[09:32] <dimitern> voidspace, sweet!
[09:32] <dimitern> voidspace, omw to our 1:1
[09:33] <voidspace> dimitern: grabbing coffee first
[09:33] <dimitern> voidspace, sure
[09:40] <voidspace> dimitern: omw
[10:01] <perrito666> morning
[10:06] <dimitern> perrito666, o/
[13:13] <wallyworld> sinzui: hey, i did a python-jujuclient change to fix the issue of deployer complaining that juju is upgrading. but i need to talk to a maintainer to get that merged (it's approved), and then we need to figure out how to unblock landings
[13:31] <wallyworld> sinzui: ^^^^ - i can get the fix landed but am unsure what to do next to unblock things. can we add a 1 sec delay to the CI test until python-jujuclient gets rolled out
[13:32] <sinzui> wallyworld: We automatically test that package.
[13:33] <sinzui> wallyworld: all slaves use the juju ppa to get its packages. That is how we caused the quickstart regression last week
[13:33] <wallyworld> sinzui: so as soon as a python-jujuclient fix lands in source, CI will grab that copy?
[13:34] <sinzui> wallyworld: no, CI gets the built packages
[13:34] <wallyworld> how long from branch getting merged till CI using the changes?
[13:35] <sinzui> wallyworld: about 1 hour after the package is built by Lp
[13:35] <wallyworld> ok, great, i'm just trying to see how long we may still be blocked for
[14:04] <katco> ericsnow: standup
[14:15] <sinzui> wallyworld: katco: Do either of you have a minute to review http://reviews.vapour.ws/r/1829/
[14:15] <katco> sinzui: in a meeting, sec
[14:16] <wallyworld> sinzui: +1
[14:16] <sinzui> thank you wallyworld
[14:42] <dimitern> wallyworld, hey there
[14:42] <dimitern> wallyworld, if you can find some time, please have a look at http://reviews.vapour.ws/r/1830/ - instancepoller using the api
[14:42] <ericsnow> wallyworld: any help I can give on #1460171?
[14:42] <mup> Bug #1460171: Deployer fails because juju thinks it is upgrading <blocker> <ci> <deployer> <regression> <upgrade-juju> <juju-core:In Progress by wallyworld> <python-jujuclient:In Progress by wallyworld> <https://launchpad.net/bugs/1460171>
[14:42] <dimitern> dooferlad, voidspace, ^^
[14:43] <voidspace> dimitern: looking
[14:43] <dimitern> voidspace, thanks!
[14:44] <wallyworld> ericsnow: waiting for patch to land in python-jujuclient - no core changes
[14:44] <wallyworld> should be soon i hope
[14:44] <ericsnow> wallyworld: cool
[14:44] <wallyworld> thanks for asking
[14:44] <ericsnow> wallyworld: :)
[14:44] <wallyworld> dimitern: sorry, was talking to someone else
[14:45] <dimitern> wallyworld, no worries
[14:45] <voidspace> dimitern: why does facade version start at 1 whilst others start at 0
[14:45] <dimitern> voidspace, new facades should start at 1
[14:46] <dimitern> (there was some decision about this some time ago)
[14:46] <perrito666> voidspace: 0 is for facades previous to versioning iirc
[14:46] <voidspace> cool, thanks
[14:46] <dimitern> wallyworld, I'd appreciate if you can confirm the instancepoller should start once per apiserver (rather than per environment)
[14:47] <dimitern> fwereade, ^^\
[14:47] <wallyworld> dimitern: so long as it knows how to deal with multiple envs
[14:47] <wallyworld> machines are per env after all
[14:47] <fwereade> dimitern, wallyworld: yeah, it sounds like a per-env thing to me
[14:48] <wallyworld> +1
[14:48] <wallyworld> we could have just the one, but polling intervals get tricky
[14:48] <fwereade> dimitern, wallyworld: and including multi-env logic in the instancepoller, rather than just running N of them, would seem suboptimal
[14:48] <wallyworld> yeah
[14:49] <dimitern> fwereade, wallyworld, but each running instance should only work for a given env?
[14:49]  * dimitern wonders if requiring JobManagerEnviron will make this "just work", like for other "singleton" workers
[14:49] <wallyworld> dimitern: almost 1am here, my brain is dead sorry, i need sleep
[14:50] <dimitern> wallyworld, get some sleep then! :)
[14:50] <wallyworld> can talk more tomorrow unless fwereade sorts it out
[14:50] <dimitern> sure, no problem
[14:50] <wallyworld> see ya later
[14:51] <fwereade> dimitern, yes, each instance is part of one and only one env
[14:51] <dimitern> fwereade, so I guess starting one per env should work, as login will take care of which envs to use and subsequently what the watchers will report
[14:52] <fwereade> dimitern, I think you should just be starting the instancepoller alongside the firewaller and provisioner for each environment
[14:53] <dimitern> fwereade, right
[14:53] <dimitern> fwereade, so I'll change that, but the rest should be fine
[14:53] <dimitern> fwereade, thanks!
[14:55] <fwereade> dimitern, hey, has instancepoller just always been running non-singular?
[14:56] <fwereade> dimitern, I'm pretty sure we don't want one per state server per env
[14:56] <fwereade> dimitern, ...in fact
[14:56] <fwereade> dimitern, instance address-setting txns have been among the ones we've seen clogging up stuck environments, right?
[14:57] <dimitern> fwereade, so far it was started in the StateWorker() method of the MA
[14:57] <fwereade> dimitern, and the problems with mgo/txn absolutely centre around separate flushers racing to write the same doc
[14:57] <dimitern> fwereade, which means once per state server
[14:58] <fwereade> dimitern, it's also in startEnvWorkers
[14:58] <fwereade> dimitern, ...or only there
[14:58] <dimitern> fwereade, now it's only in startEnvWorkers (running tests still)
[14:59] <fwereade> ah ok
[14:59] <fwereade> dimitern, but I *do* see it non-singular in startEnvWorkers
[15:00] <dimitern> fwereade, where?
[15:00] <fwereade> dimitern, and as a worker that's yammering at the provider api we definitely want it to be singular, I think, not to mention my FUD about it causing the sort of workload that stresses mgo/txn
[15:00] <fwereade> dimitern, :1116 in master
[15:01] <fwereade> 	runner.StartWorker("instancepoller", func() (worker.Worker, error) {
[15:01] <fwereade> 		return instancepoller.NewWorker(st), nil
[15:01] <fwereade> 	})
[15:01] <dimitern> fwereade, right!
[15:02] <fwereade> dimitern, so s/runner/singularRunner/ and we get a little bit better in a couple of good ways too
[15:03] <fwereade> dimitern, (on top of passing in the api instead of the state :))
[15:04] <dimitern> fwereade, in a call, will get back to you
[15:13] <cherylj> sinzui: Should I backport bug 1442308 to 1.23?
[15:13] <mup> Bug #1442308: Juju cannot create vivid containers <ci> <cloud-installer> <local-provider> <lxc> <ubuntu-engineering> <vivid> <cloud-installer:Confirmed> <juju-core:In Progress by cherylj> <juju-core 1.24:Fix Committed by cherylj> <https://launchpad.net/bugs/1442308>
[15:14] <sinzui> cherylj: no, I don’t think we will make a 1.23.4 release since we will propose 1.24.0 on Thursday
[15:14] <cherylj> ok, thanks!
[15:15] <sinzui> cherylj: I will add a task to the bug as WONT FIX to be clear that we choose not to
[15:15] <cherylj> sinzui: awesome, thank you
[15:20] <voidspace> rebooting *sigh*
[15:29] <natefinch> abentley: you around?
[15:30] <abentley> natefinch: Yes, but I have standup now.  I'll ping you when done.
[15:30] <natefinch> abentley: thx
[15:54] <voidspace> dimitern: ping
[15:54] <voidspace> dimitern: if you're still around
[15:54] <voidspace> dimitern: I'm still doing your review by the way...
[15:54] <voidspace> it's big
[15:54] <voidspace> (the patch I mean)
[15:54] <voidspace> but also trying to bootstrap juju with MAAS
[15:54] <voidspace> and failing - hard to tell if current failure is a MAAS problem or a juju problem, or something else
[15:55] <voidspace> last problem was HP proprietary drivers causing deploy to fail
[15:55] <voidspace> current problem is this:
[15:55] <dimitern> voidspace, yeah, I'm here
[15:55] <dimitern> voidspace, sorry about the size - it's mostly tests though :)
[15:55] <voidspace> dimitern: http://pastebin.ubuntu.com/11499441/
[15:55] <voidspace> dimitern: heh, indeed
[15:55] <dimitern> voidspace, looking
[15:55] <voidspace> dimitern: so juju fails to contact MAAS (connection refused)
[15:55] <voidspace> a
[15:56] <voidspace> fetching that URL in the browser works
[15:56] <voidspace> and there's nothing useful in the MAAS logs
[15:57] <voidspace> the MAAS node is deployed
[15:58] <voidspace> dimitern: I updated MAAS version and am running juju latest master
[15:58] <dimitern> voidspace, why localhost?
[15:58] <voidspace> dimitern: because MAAS is running locally
[15:58] <dimitern> voidspace, on port 80?
[15:58] <voidspace> hmmm... apparently
[15:59] <voidspace> yes
[15:59] <voidspace> that's working fine
[15:59] <dimitern> voidspace, try bootstrapping with --debug to get more context
[15:59] <voidspace> dimitern: ok, will do
[16:00] <voidspace> dimitern: it takes about ten minutes or so because these proliants are *slow* to boot
[16:00] <voidspace> dimitern: the intelligent bios thing takes several minutes to do its thing
[16:00] <voidspace> I might try and disable it
[16:01] <voidspace> but it can run in the background whilst I continue the review
[16:03] <dimitern> voidspace, is MAAS itself configured with http://localhost/MAAS/ ?
[16:03] <dimitern> voidspace, dpkg-reconfigure maas (IIRC)
[16:04] <voidspace> dimitern: I'll check
[16:04] <voidspace> when I went to 127.0.0.1/MAAS instead of localhost I had to login again
[16:04] <voidspace> so there may be a difference
[16:04] <voidspace> I'll wait until this bootstrap completes
[16:04] <dimitern> voidspace, ok
[16:05] <dimitern> voidspace, I'm pretty sure the MAAS URL has to match exactly - both in maas config and in juju's
[16:06] <voidspace> dimitern: yep, good call
[16:08] <abentley> natefinch: I'm free now.
[16:12] <voidspace> dimitern: I think it needs a visible url and not a local url
[16:12] <voidspace> dimitern: trying with the machine IP address
[16:12] <dimitern> voidspace, that sounds good
[16:12] <voidspace> dimitern: i.e. a node can't use 127.0.0.1 to reach the MAAS API
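(The pitfall voidspace hit — a MAAS API URL on loopback that nodes can never reach — can be sanity-checked with a small helper. This is an editorial sketch, not part of juju; the function name and policy are illustrative.)

```python
import ipaddress
from urllib.parse import urlparse

def maas_url_visible_to_nodes(url):
    """Rough check: a MAAS API URL pointing at loopback can never be
    reached by deploying nodes, which need the server's LAN-visible
    address (e.g. a 192.168.50.x address) instead."""
    host = urlparse(url).hostname
    if host == 'localhost':
        return False
    try:
        return not ipaddress.ip_address(host).is_loopback
    except ValueError:
        # Not a literal IP; a real hostname is at least potentially
        # resolvable by the nodes.
        return True
```

Remember that MAAS also requires the URL in juju's config to match its own configured URL exactly.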
[16:13] <voidspace> taking a break
[16:13] <dimitern> voidspace, I have a similar setup locally, but I use a 192.168.50.X - .2 for maas, the rest for the nodes
[16:14] <dimitern> voidspace, ok, I'll need to go, but might be back later
[16:14] <voidspace> dimitern: thanks, see you later
[16:15] <natefinch> abentley: I was going to do something like this to add the actions feature flag to the CI tests... is this acceptable? http://pastebin.ubuntu.com/11499809/
[16:18] <abentley> natefinch: That won't work because EnvJujuClient24 is only used for juju 1.24.  I meant that you should add an EnvJujuClient22 that was used for juju 1.22, that supplied the 'actions' feature flag.
[16:20] <abentley> natefinch: A heads-up: jog is landing support for -e with "action do" and "action fetch" today.
[16:21] <abentley> natefinch: In this branch: https://code.launchpad.net/~jog/juju-ci-tools/start_chaos
[16:23] <voidspace> dooferlad: hah, and four days later I have a working juju bootstrapped to MAAS on an HP proliant
[16:23] <voidspace> dooferlad: the PDU seems to be working fine now too, both for switching machines on and off
[16:24] <voidspace> dooferlad: http://pastebin.ubuntu.com/11500002/
[16:26] <natefinch> abentley: I'
[16:27] <natefinch> abentley: I'm not really prepared to spend very much more time on this CI test.  It's already taken 3-4 times as long as I had anticipated & scheduled
[16:27] <natefinch> cc katco ^^
[16:28] <natefinch> abentley: but if I can just remove my action code and merge with what jog lands, that's fine with me, though it would make for a lot of wasted work on my part.  It's unfortunate both of us were working on the same functionality.
[16:29] <natefinch> abentley: or maybe I misunderstood what you were talking about.. do you mean he was landing code in the tests or juju-core
[16:30] <jog> natefinch, sorry I was working on another project and just discovered our juju-ci-tools lib needed to handle actions differently on Friday.
[16:31] <abentley> natefinch: He's just done an alternative implementation of the _full_args change, none of the rest.
[16:31] <natefinch> abentley: oh ok, that's good.  I'm glad we didn't overlap much
[16:35] <natefinch> abentley: do I have to do more in the EnvJujuClient22 than implement the _shell_environ, and add a new elif in EnvJujuClient.by_version?  Something like this? http://pastebin.ubuntu.com/11500166/
[16:36] <abentley> natefinch: That's all you need to do for that.
[16:36] <natefinch> abentley: thanks
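(The subclass abentley asks for could look roughly like this. `_shell_environ` and `by_version` are real juju-ci-tools names per the discussion, but the bodies below are a self-contained sketch, and the assumption that the flag is passed via `JUJU_DEV_FEATURE_FLAGS` is mine.)

```python
import os

class EnvJujuClient(object):
    """Minimal stand-in for the juju-ci-tools base client class."""

    def _shell_environ(self):
        # Base clients pass the environment through unchanged.
        return dict(os.environ)

    @classmethod
    def by_version(cls, version):
        # Dispatch on the juju version string, mirroring the elif
        # pattern abentley describes.
        if version.startswith('1.22'):
            return EnvJujuClient22()
        return EnvJujuClient()

class EnvJujuClient22(EnvJujuClient):
    """Client for juju 1.22, which kept actions behind a feature flag."""

    def _shell_environ(self):
        env = super(EnvJujuClient22, self)._shell_environ()
        env['JUJU_DEV_FEATURE_FLAGS'] = 'actions'
        return env
```

Per abentley, overriding `_shell_environ` and adding the `by_version` branch is all the subclass needs.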
[16:58] <katco> natefinch: abentley: hey... so these CI tests are being wrapped up then?
[16:59] <natefinch> katco: yeah
[17:03] <katco> yay :D
[17:05] <natefinch> why the heck do I have to log into ubuntu to "download as text" from pastebin.ubuntu.com?
[17:08] <perrito666> lol
[17:08] <perrito666> you can always report it as a bug
[17:57] <katco> wwitzel3: ping
[17:58] <wwitzel3> katco: pong
[17:59] <katco> wwitzel3: hey, on the rich status spec: who do you think from ecosystems/accounting would be good to ping?
[18:00] <katco> wwitzel3: it has to do with charm metadata, so charmers for sure. and i would think someone from accounts would want to give input on what information they'd like when doing installations
[18:00] <wwitzel3> katco: not 100% sure, so I'd ping arosales and ask him for some candidates that might have a strong interest/opinion
[18:01] <katco> wwitzel3: ty. arosales, any volunteers? https://docs.google.com/document/d/1JcWkE4SNxXuFClZGBcwnU3w13IpRU1yxMhddQG6mKyE/edit#
[18:03] <arosales> katco, /me looking . .
[18:04] <katco> arosales: ty sir
[18:04] <arosales> katco, I'll bring it up on our daily and send a mail out on it too
[18:04] <katco> arosales: ty... please let me know who you'd like to delegate so i can add them to the reviewers list
[18:06] <arosales> katco, will do
[18:06] <arosales> katco, thanks for seeking out the feedback
[18:06] <katco> arosales: ty again!
[18:07] <arosales> katco, np. I should have some more information this afternoon.
[18:08] <katco> arosales: i'm also pulling marcoceppi into https://docs.google.com/document/d/1LORhaYvk_A8yMHkAb9FR_cN9V0S55zEx-T6QXdmr3fU/edit#
[18:08] <katco> arosales: he expressed interest in nuremberg
[18:08] <arosales> katco, ah yes, he is a good one for min version
[18:10] <katco> arosales: juju min. version is the one we'll be focusing on next
[18:27] <abentley> natefinch_afk: jog's stuff has landed now.
[18:51] <natefinch_afk> abentley: thanks
[18:56] <natefinch_afk> abentley, sinzui:  I get this error on several of the tests, despite having run make install-deps
[18:56] <natefinch_afk> OSError: /usr/lib/python2.7/dist-packages/lookup3.so: cannot open shared object file: No such file or directory
[18:57] <sinzui> I wonder what that is
[18:59] <sinzui> natefinch_afk: It appears to relate to jenkins and I see several reports of it failing
[18:59] <natefinch> sinzui: yeah, just found some interesting things... I found it in /usr/local/lib/python2.7/dist-packages/
[19:00] <sinzui> natefinch_afk: my apt-cache policy python-jenkins says I have 0.2.1-0ubuntu1
[19:00] <natefinch> Installed: 0.2.1-0.1
[19:01] <sinzui> natefinch: how did you get that version? pip? easy_install?
[19:01]  * sinzui thinks we need the ubuntu version
[19:01] <natefinch> sinzui: quite possibly
[19:02] <natefinch> sinzui: I didn't know about make install-deps when I started, so I was just installing stuff however I could find it
[19:03] <sinzui> natefinch: understood. I have to do the same on the win and OS X machines. The issue I am reading implies the jenkins lib doesn't work on OS X, but it is working well enough for our tests
[19:04] <natefinch> I'm on ubuntu... just ran pip install (I think?) because I didn't know how else to get it
[19:04] <natefinch> and..... now pip is dumping a giant stack trace when I do pip uninstall jenkins.  Nice.
[19:05] <abentley> natefinch: If you ran make install-deps, you should have python-jenkins installed via apt.
[19:05] <sinzui> natefinch: you can run pip uninstall jenkins?
[19:05]  * sinzui isn’t sure of the pip package name
[19:05] <natefinch> sinzui: I can try and have it fail
[19:05] <natefinch> sinzui: it seemed to recognize the name
[19:05] <natefinch> abentley: yeah, apt seemed to think I had it installed via apt
[19:05] <sinzui> abentley: surely pip is installing in a path that takes precedence.
[19:06] <natefinch> I removed and reinstalled the apt version, it still gives me  0.2.1-0.1
[19:06] <abentley> I do not have lookup3 installed, and I don't seem to need it.
[19:08] <abentley> I have python-jenkins 0.2.1-0.1 installed.
[19:09] <natefinch> full stack trace from running tests (there are a handful of these): http://pastebin.ubuntu.com/11502857/
[19:11] <abentley> natefinch: Can you delete /usr/local/lib/python2.7/dist-packages/jenkins.py or at least move it aside so that the correct jenkins lib gets loaded?
[19:12] <natefinch> abentley: sure
[19:15] <natefinch> FYI, I don't have  /usr/lib/python2.7/dist-packages/jenkins.py
[19:15] <natefinch> (if I'm supposed to)
[19:17] <natefinch> It looks like all my jenkins stuff got installed to /usr/local/lib/python2.7/dist-packages/  instead of /usr/lib/python2.7/dist-packages/
[19:17] <natefinch> that sounds like "you installed something with or without sudo when you should have done it the other way"   but I have no idea what, being both a linux and python n00b
[19:24] <abentley> natefinch: No, you shouldn't have that, you should have /usr/lib/python2.7/dist-packages/jenkins/__init__.py
[19:25] <natefinch> abentley: ahh, ok, yes, I have that
[19:27] <natefinch> I guess get_python_lib()  must be returning the wrong thing
[19:40] <abentley> natefinch: There are at least two incompatible packages providing 'jenkins': https://pypi.python.org/pypi/jenkins https://pypi.python.org/pypi/python-jenkins and the one installed in /usr/local/lib is the wrong one.
[19:42] <natefinch> abentley: how am I supposed to install it?
[19:43] <abentley> natefinch: The right one is already installed.  You just have to get rid of the wrong one.
[19:45] <natefinch> abentley: ahh, ok, I figured it out. pip uninstall, instead of saying "Hey, this needs to be run with sudo" instead dumped a giant ugly stack trace.
[19:46] <natefinch> which I incorrectly interpreted as "jenkins wasn't installed with pip"
[19:46] <natefinch> that fixed it
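(The root cause here was a pip-installed `jenkins` in /usr/local shadowing the apt-installed `python-jenkins` in /usr/lib. On a modern Python 3, a quick way to see which file a module name would actually load, without importing it, is `importlib.util.find_spec`; the helper name below is just for illustration.)

```python
import importlib.util

def module_origin(name):
    """Return the file path Python would load for the given module
    name, or None if no module by that name can be found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

# e.g. module_origin('jenkins') would have shown the shadowing
# /usr/local/lib copy ahead of the apt-installed one in /usr/lib.
```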
[19:57] <rogpeppe> thumper: hiya
[20:09] <natefinch> is there a bzr plugin that'll let me run an external merge tool to fix conflicts?  I found bzr-extmerge, but it appears to be ancient (tries to run with python 2.4)
[20:10] <natefinch> thumper, sinzui, abentley: ^^
[20:13] <abentley> natefinch: No, extmerge is the only one I'm aware of.  But bzr dumps THIS, BASE and OTHER files that you can use an arbitrary tool with.
[20:16]  * natefinch closes his eyes and runs sudo python ./setup.py
[20:17] <natefinch> er setup.py install
[21:46] <sinzui> wallyworld: do you think the maas 1,7 test would pass if we added a 30s delay between bootstrap and deployer?
[21:46] <wallyworld> sinzui: yes
[21:46] <wallyworld> sinzui: not even 30s, more like 1 second
[21:46] <wallyworld> or 2
[21:46] <sinzui> let me try to solve the issue.
[21:46] <sinzui> wallyworld: I will start with 5 seconds
[21:47] <wallyworld> ok :-)
[21:57] <marcoceppi> katco: you still around?
[22:01] <sinzui> wallyworld: I am adding a call to status between bootstrap and deployer. Do you think that is enough time? Do you have a branch ready to merge to test my change? I don’t want to start a test of an old revision if you have work queued.
[22:06] <wallyworld> sinzui: everything you need to test should be in tip of 1.24
[22:06] <wallyworld> sinzui: the python-jujuclient work simply retries during the second or so you will be delaying
[22:06] <wallyworld> which would make the delay unnecessary
[22:08] <sinzui> wallyworld: I am pushing a change to all the slaves. I will retest 1.24 tip when I see the changes arrive
[22:08] <wallyworld> sinzui: tyvm, i will wait with bated breath
[22:09] <mup> Bug #1460184 changed: Bootstrapping fails with Maas on Ubuntu Vivid <maas-provider> <vivid> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1460184>
[22:14] <wallyworld> ericsnow: any chance of a trivial time display fix review? the code change is one line, the test changes are a search and replace http://reviews.vapour.ws/r/1823/
[22:15] <ericsnow> wallyworld: sure
[22:15] <wallyworld> ty
[22:16] <ericsnow> nice: "You Require More Vespene Gas" (in a test)
[22:17] <ericsnow> wallyworld: ship-it!
[22:17] <wallyworld> ericsnow: ty
[22:17] <ericsnow> wallyworld: any time
[22:24] <katco> marcoceppi: am now, what's up?
[23:45] <wallyworld> waigani_: heya, you working on bug 1376246 ?
[23:46] <mup> Bug #1376246: MAAS provider doesn't know about "Failed deployment" instance status <landscape> <maas-provider> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1376246>
[23:48] <davechen1y> great, build is blocked, again
[23:49] <waigani_> wallyworld: no, I should be able to start on that today though.
[23:49] <wallyworld> waigani_: great, because we want 1.24 work done so we can look to do a release overnight
[23:50] <waigani_> wallyworld: okay, let me get a bite to eat and I'll get into it
[23:51] <wallyworld> ty
[23:52] <axw> wallyworld: sorry I missed standup, been on the phone with iinet for 40 minutes trying to get my account unlocked :/
[23:52] <wallyworld> axw: gawd, i hate isps. all fixed?
[23:53] <axw> wallyworld: yeah, silly error while setting up my new modem. OTOH, seems I got swapped to the new port and now I'm syncing at 16Mb as opposed to 4Mb I was getting for the last few months
[23:53] <wallyworld> oh good :-)
[23:54] <wallyworld> axw: you free now for a chat?
[23:54] <axw> sure, just a quick one tho
[23:54] <axw> see you in standup