[00:45] <mup> Bug #1544796 opened: Backup restore fails: upgrade in progress <backup-restore> <blocker> <ci> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1544796>
[00:53] <perrito666> sinzui: when did that start?
[00:53] <sinzui> perrito666: in the past hour, and neither master nor maas-spaces ever exhibited this
[00:55] <perrito666> sinzui: tried another provider?
[00:55] <sinzui> perrito666: I haven't yet
[00:56] <perrito666> I believe juju declares "upgrading" while checking for upgrades; if the machine is slow this could be the cause
[01:02] <sinzui> perrito666: cherylj: think we lost commits in master!!  Our last success on master, earlier today, points to a commit I don't see in https://github.com/juju/juju/commits/master?page=1, but http://reports.vapour.ws/releases/3595 does link to the last good version of master.
[01:03] <sinzui> nm, found the failure
[01:04] <davecheney> cherylj: https://bugs.launchpad.net/juju-core/+bug/1544796 is another manifestation of the issue that you and thumper were talking about yesterday
[01:04] <mup> Bug #1544796: Backup restore fails: upgrade in progress <backup-restore> <blocker> <ci> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1544796>
[01:05] <davecheney> almost certainly the same underlying cause as https://bugs.launchpad.net/juju-core/+bug/1543362
[01:05] <mup> Bug #1543362: juju debug-log returns error if run too early in bootstrap <juju-core:Triaged> <https://launchpad.net/bugs/1543362>
[01:06] <perrito666> sinzui: eod, if there is anything urgent mail me
[01:08] <sinzui> perrito666: have a good evening
[01:25] <menn0> thumper: I've replied to your comments for http://reviews.vapour.ws/r/3839/. Can you PTAL?
[01:26] <thumper> just done
[01:26] <menn0> thumper: thanks
[02:03] <menn0> thumper: another look at http://reviews.vapour.ws/r/3839/ please
[02:03] <thumper> looking
[02:20] <davecheney> juju bootstrap -> https://likelockedrooms.files.wordpress.com/2013/03/mad-as-hell.jpg
[02:21] <thumper> :)
[02:21] <thumper> davecheney: take a deep breath, and think to yourself "it is Friday..."
[02:23] <natefinch> ug, I think my displayport cable died... using HDMI->DVI instead, but it can't push the native res, so everything's fuzzy.  and of course it's my center monitor, so I can't just stop using it.  Feh.
[02:26] <axw> anastasiamac: would you be so kind to review http://reviews.vapour.ws/r/3841/ for me?
[02:26] <anastasiamac> axw: of course!
[02:27] <anastasiamac> axw: anything to have a break from fighting import cycle :D
[02:35] <anastasiamac> axw: LGTM
[02:36] <axw> anastasiamac: thanks
[02:36] <axw> anastasiamac: that range-over-map thing is only ok if there's one element in the map
[02:36] <anastasiamac> oh :(
[02:36] <axw> anastasiamac: the order is non-deterministic
[02:36] <anastasiamac> but i can have a break in there \o/
[02:36] <axw> anastasiamac: (unless there's only one element, obviously)
[02:37] <axw> anastasiamac: sure, just as long as you don't expect a particular order to the range
[02:37] <anastasiamac> axw: awesome \o/
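[Editor's note] The point axw makes above — that ranging over a Go map yields a non-deterministic order whenever the map has more than one element — can be sketched like this (a standalone illustration, not juju code; `sortedKeys` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns the keys of m in a deterministic (sorted) order.
// Ranging over the map directly gives an unspecified order whenever
// the map has more than one element.
func sortedKeys(m map[string]int) []string {
	keys := make([]string, 0, len(m))
	for k := range m { // iteration order here is non-deterministic
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	m := map[string]int{"b": 2, "a": 1, "c": 3}
	// Deterministic iteration: sort the keys first.
	for _, k := range sortedKeys(m) {
		fmt.Println(k, m[k])
	}
}
```

With a single-element map, ranging directly is fine (as axw says); with more, sorting the keys is the usual way to get a stable order.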
[02:42] <davecheney> thumper: it takes 33 minutes to bootstrap
[02:42] <davecheney> i'm using the sydney ec2 datacenter
[02:43] <thumper> wow
[02:43] <thumper> that is slow
[02:45] <axw> unusually slow, I use ap-southeast-2, never that slow
[02:48] <davecheney> it sits there for minutes at a time squeezing data up to the client
[02:48] <davecheney> s/client/bootstrap
[02:49] <davecheney> lucky(~) % ls -lah $(!!)
[02:49] <davecheney> ls -lah $(which jujud)
[02:49] <davecheney> -rwxrwxr-x 1 dfc dfc 82M Feb 12 13:43 /home/dfc/bin/jujud
[02:49] <davecheney> this probably isn't helping
[02:49] <axw> I've got a PR up for the ssh thing, might help?
[02:49] <davecheney> https://www.youtube.com/watch?v=q_qgVn-Op7Q
[02:49] <davecheney> https://www.youtube.com/watch?v=q_qgVn-Op7Q
[02:49] <davecheney> danit
[02:49] <davecheney> https://youtu.be/q_qgVn-Op7Q?t=1m9s
[03:00] <davecheney> oh, this time it was WAY faster
[03:01] <thumper> heh
[03:01] <thumper> watched that mad as hell thing
[03:03] <davecheney> that was 1976, the same year I was born
[03:04] <davecheney> can you tell me it couldn't apply to today ?
[03:31] <thumper> heh
[04:40] <thumper> ok... time to go
[04:40] <thumper> laters
[04:40] <thumper> have a good weekend folks
[04:58] <axw> anastasiamac: another small one please: http://reviews.vapour.ws/r/3843/
[06:09] <mup> Bug #1544838 opened: suite.TestPprofStartWithExistingSocketFile unit test failure <ci> <test-failure> <unit-tests> <juju-core:New> <https://launchpad.net/bugs/1544838>
[06:25] <anastasiamac> axw: looking now
[06:28] <mup> Bug #1544846 opened: restore fails with could not exit restoring status <nil> <backup-restore> <ci> <test-failure> <juju-core:New> <https://launchpad.net/bugs/1544846>
[07:37] <mup> Bug #1544853 opened: unit-test failure MachineSuite.TestManageModelRunsInstancePoller <ci> <intermittent-failure> <test-failure> <unit-tests> <juju-core:New> <https://launchpad.net/bugs/1544853>
[07:37] <mup> Bug #1544855 opened: unit-test failure ClientOperationSuite.TestCannotExpireUnheldLease <ci> <intermittent-failure> <test-failure> <unit-tests> <juju-core:New> <https://launchpad.net/bugs/1544855>
[09:43] <mup> Bug #1544890 opened: "ERROR the name of the model must be specified" when 'juju init' required <juju-core:New> <https://launchpad.net/bugs/1544890>
[09:49] <frobware> voidspace, dooferlad: might miss standup/planning. helping out with customer issue and otp atm
[09:51] <voidspace> frobware: ok
[09:57] <voidspace> dooferlad: my parents are paying a surprise visit (driving past on their way up north)
[09:58] <voidspace> dooferlad: so we can have a short standup
[10:02] <dooferlad> voidspace: am in the hangout now
[10:51] <voidspace> cherylj: ping
[13:08] <voidspace> dooferlad: gah, gomaasapi test server *does* support spaces endpoint
[13:08] <voidspace> dooferlad: so long as you've registered some spaces...
[13:08] <voidspace> dooferlad: if there aren't any spaces registered it pretends not to support it to mimic older versions of maas
[13:08] <voidspace> dooferlad: and my tests don't register any spaces
[13:08] <voidspace> dooferlad: well, the preexisting tests don't - because they didn't need to
[13:09] <voidspace> ah well, less work
[13:11] <voidspace> dooferlad: cherylj merged maas-spaces yesterday by the way
[13:15] <frobware> dooferlad, ping
[13:20] <voidspace> dooferlad: hmm... not convinced there is a way to populate spaces now, may still need to implement it
[13:20] <dooferlad> frobware: hi
[13:23] <frobware> dooferlad, see email. sorry! :)
[13:24] <voidspace> dooferlad: looks like the spaces stuff is untested and there's no way to add a space! Probably because you started implementing it and then we decided we didn't need that endpoint after all
[13:24] <dooferlad> voidspace: sounds about right
[13:24] <voidspace> We were doing everything space related through the subnets endpoints
[13:25] <voidspace> Ah well, so I need to add a new test server method for adding spaces and some tests.
[13:26] <dooferlad> voidspace: I hope it is at least clear code and won't take long to do.
[13:30] <cherylj> voidspace: pong
[13:30] <voidspace> cherylj: I wanted to ask about maas-spaces
[13:30] <voidspace> cherylj: but I see you merged it last night
[13:30] <voidspace> cherylj: \o/
[13:31] <cherylj> ah, yes :)
[13:31] <voidspace> thanks
[13:31] <dooferlad> frobware: bug #1543770 is a bit light on details. Doesn't help that the network tests are spitting out 'The old local provider is not supported by 2.0-alpha2' and saying they passed!
[13:31] <mup> Bug #1543770: Juju 2.0alpha1 does not assign a proper netmask for LXC containers <cpe-critsit> <cpe-sa> <dhcp> <lxc> <maas> <network> <juju-core:Triaged> <https://launchpad.net/bugs/1543770>
[13:31] <frobware> dooferlad, you read what I read. :)
[13:32] <frobware> dooferlad, this is in alpha1 - not sure whether we should say use beta1.
[13:32] <frobware> dooferlad, but /32 seems the right netmask to me
[13:32] <dooferlad> frobware: Yea, it is fine.
[13:32] <frobware> dooferlad, otp
[13:32] <dooferlad> frobware: sure.
[13:33] <cherylj> frobware, dooferlad, if it's not clear what the issue is in the bug, definitely press for more details
[13:41] <frobware> voidspace, http://pastebin.ubuntu.com/15024080/
[13:44] <voidspace> frobware: thanks
[13:47] <dooferlad> frobware, cherylj: Just so you know, I would expect container networking to work in the beta if it was this build: http://reports.vapour.ws/releases/3595/job/functional-container-networking/attempt/170
[13:48] <dooferlad> i.e. the maas-spaces branch being merged in
[13:51] <axw> wallyworld
[13:51] <axw> oops
[13:57]  * voidspace lunches
[14:20] <cherylj> dooferlad: so, it's expected that it doesn't work in alpha1?
[14:20] <cherylj> brb
[14:41] <frobware> cherylj, not sure about that. I'm sure I tried alpha1 this morning and it was OK. but I got sidetracked with the other issue...
[14:48] <dooferlad> cherylj: http://reports.vapour.ws/releases/3525 didn't run the container networking tests, so I don't know if it would have worked or not.
[14:49] <cherylj> ok, thanks
[15:23] <katco> cherylj: sinzui: what do i need to do to bootstrap a beta1 controller? i'm setting the agent-metadata-url to "https://streams.canonical.com/juju/tools" and agent-stream to "devel". no bueno
[15:24] <sinzui> katco: beta1 is not released. use --upload-tools or publish your own streams
[15:25] <katco> k ty
[15:32] <mup> Bug #1544847 opened: unit-test failure: configSuite.TestNewModelConfig test failure <ci> <regression> <test-failure> <unit-tests> <juju-core:Incomplete> <juju-core lxd-container-type:Triaged> <https://launchpad.net/bugs/1544847>
[15:32] <mup> Bug #1544849 opened: unit-test loop with juju.worker.uniter.remotestate retry hook timer triggered <ci> <intermittent-failure> <ppc64el> <unit-tests> <juju-core:Incomplete> <juju-core maas-spaces:Triaged> <https://launchpad.net/bugs/1544849>
[15:32] <mup> Bug #1544850 opened: unit-test failure: cloudImageMetadataSuite.TestFindMetadata <ci> <intermittent-failure> <test-failure> <unit-tests> <juju-core:Incomplete> <juju-core upgrade-mongodb3:Triaged> <https://launchpad.net/bugs/1544850>
[15:50] <mup> Bug #1545040 opened: TestLoginsDuringUpgrade fails on go1.5/6 <ci> <gccgo> <go1.5> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1545040>
[15:50] <mup> Bug #1545045 opened: TestNotAllContainersAreDeleted: can't get info for image <ci> <lxd> <regression> <juju-core:Incomplete> <juju-core lxd-container-type:Triaged> <https://launchpad.net/bugs/1545045>
[15:50] <mup> Bug #1545046 opened: TestNewContainerManage executed but not supported by gccgo-go <ci> <gccgo> <ppc64el> <regression> <test-failure> <juju-core:Incomplete> <juju-core lxd-container-type:Triaged> <https://launchpad.net/bugs/1545046>
[16:05] <mup> Bug #1545050 opened: TestSubordinateDying is not dying <ci> <intermittent-failure> <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1545050>
[16:05] <mup> Bug #1545055 opened: TestManageModelRunsUndertaker timed out <ci> <intermittent-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1545055>
[16:05] <mup> Bug #1545057 opened: TestWorkerDiscoversSpaces no subnets found <ci> <go1.5> <intermittent-failure> <race-condition> <juju-core:Triaged> <https://launchpad.net/bugs/1545057>
[16:12] <cherylj> hey tych0!  Looks like we got a run on your branch and the results are in!  The bugs opened are here: https://bugs.launchpad.net/juju-core/lxd-container-type
[16:24] <natefinch> katco, ericsnow: tests added, tweaks made, rebase successful with minimal problems... code is merging
[16:24] <katco> natefinch: rock!
[16:24] <ericsnow> natefinch: great!
[16:24] <katco> natefinch: we're still in moonstone
[16:25] <natefinch> katco: ok, coming
[16:27] <dooferlad> frobware, voidspace: calling it a day. It has been a long week and thinking straight has become an issue!
[16:28] <frobware> dooferlad, did you come to any conclusion on the netmask bug?
[16:28] <dooferlad> frobware: only that I need more information.
[16:29] <frobware> dooferlad, did you ask in the bug?
[16:29] <dooferlad> frobware: yes
[16:29] <frobware> dooferlad, ah...sorry I see it now.
[16:29] <voidspace> natefinch: question - if I want to test a synchronous function that may deadlock (block) what's the right way to test it without blocking the test
[16:30] <voidspace> natefinch: just fire it off on a goroutine with a signal method and wait for a LongAttempt for the signal?
[16:30] <voidspace> ShortAttempt would do actually
[16:30] <voidspace> given that my current synchronous test - without the fix in place - hasn't returned after about five minutes
[16:31] <voidspace> I think I can safely say I've found the deadlock...
[16:32] <natefinch> voidspace: leaving dead goroutines hanging around in your tests is bad, but as long as that's the failure condition, I guess that's ok
[16:32] <voidspace> natefinch: yeah, it shouldn't fail!
[16:33] <natefinch> voidspace: yeah, a simple goroutine that sends on a channel when it's finished and then you can do a select on that channel and time.After(shortwait)
[16:48] <katco> sinzui: cherylj: hey, i aborted a merge job w/o thinking... will that leave cruft around?
[16:48] <sinzui> katco: thanks for pinging me I will check for an instance
[16:48] <katco> sinzui: sorry =|
[16:50] <sinzui> katco: I have done the same
[16:59] <katco> sinzui: can natefinch do another $$merge$$ in the meantime?
[17:01] <voidspace> nope, no way to test without a failing deadlock - the call that actually deadlocks does asserts, so it can't be on a different goroutine
[17:01] <voidspace> dammit
[17:01] <voidspace> at least a deadlocking test will tell us something is wrong
[17:03] <voidspace> hmmm... we shouldn't be deadlocked, it should timeout after LongAttempt
[17:04] <voidspace> weird
[17:05] <voidspace> dammit, the test is nonsense
[17:05] <voidspace> but it shouldn't pass
[17:05] <voidspace> and it does
[17:13] <cherylj> katco: are there any other commits coming for your resources branch?
[17:14] <katco> cherylj: we're just trying to land the 1 more now
[17:14] <katco> cherylj: but we received a time-out and that's when i aborted the job
[17:14] <katco> cherylj: trying to re-$$merge$$ but natefinch says the bot isn't picking it up yet
[17:18] <sinzui> perrito666: Your upgrade-mongo3 branch needs the latest master for CI to test. There is no rush. CI will be starting a test of the cloud-credentials branch first
[17:28] <frobware> interesting! - https://medium.com/vijay-pandurangan/linux-kernel-bug-delivers-corrupt-tcp-ip-data-to-mesos-kubernetes-docker-containers-4986f88f7a19#.ojo3f9ww3
[17:29] <natefinch> sinzui: can you help my branch? I think katco aborting the merge meant that it won't pick up me re-$$merge$$ing it
[17:29] <natefinch> sinzui: https://github.com/juju/juju/pull/4374
[17:30] <sinzui> natefinch: oh. I will attempt a delete of the aborted job
[17:30] <sinzui> natefinch: I will check back in 5 minutes to see if it is accepted
[17:32] <sinzui> katco: natefinch: the only reason feature-resources didn't get a bless from CI is that it has a regression from master.
[17:35] <sinzui> natefinch: I have a cunning plan to hand type the aborted build into jenkins to run the tests and hopefully convince the lander to send the results back to the PR.
[17:37] <voidspace> frobware: http://reviews.vapour.ws/r/3848/
[17:38] <katco> sinzui: nice! we still need to run ci 1 more time with the branch natefinch is trying to land
[17:39] <katco> sinzui: tbh i'd be fine merging w/o a bless because it's just a new command-flag
[17:39] <frobware> voidspace, otp
[17:39] <voidspace> frobware: ok
[17:41] <frobware> voidspace, so the first part of that diff is what we were trying yesterday?
[17:47] <voidspace> frobware: yes, I got bogged down trying to construct a test
[17:47] <voidspace> frobware: in the end I concluded it was already tested and the existing tests pass
[18:06] <katco> cherylj: ok, the feature-resources branch is ready for another ci run
[18:07] <katco> cherylj: or if you're ok with it, we can just merge into master. only patch not tested is a new command flag
[18:09]  * cherylj looks
[18:10] <sinzui> katco: natefinch: the aborted merge is now merged. It is okay to abort, but if we do want to merge, we can use the rebuild link on the aborted job to try again.
[18:10] <katco> sinzui: tyvm
[18:11] <katco> cherylj: here's the diff to help you make the call: https://github.com/juju/juju/commit/f0e1848aca595b35d6483f75a909c38b204eee5a
[18:15] <cherylj> sinzui: did you want to prepare the PR to merge their branch?
[18:16] <katco> cherylj: sinzui: i can do it if you like; it's just a few clicks
[18:16] <sinzui> cherylj: I will do it and explain the two failures are from master.
[18:17] <katco> sinzui: cherylj: great, cheers to both of you :)
[18:17] <cherylj> katco: thanks to you and your team for pulling off this amazing feat!
[18:17] <cherylj> katco: now can someone look at the restore failures being seen on master?  :)
[18:17] <katco> cherylj: surely you jest! ;)
[18:18] <katco> cherylj: this doesn't mean the feature is done lol
[18:19] <cherylj> heh, well I can hope right?  I mean, we do need it fixed to ship beta1
[18:25] <perrito666> sinzui: ok, ill merge it
[18:26] <sinzui> cherylj: http://reviews.vapour.ws/r/3849/
[18:26] <cherylj> shipit!
[18:48] <lazyPower> cherylj - question for you regarding https://bugs.launchpad.net/juju-core/+bug/1535165 - is this fix present in 2.0-alpha1?
[18:48] <mup> Bug #1535165: Unable to create hosted models with MAAS provider <juju-release-support> <maas-provider> <juju-core:Fix Released by waigani> <https://launchpad.net/bugs/1535165>
[18:49] <cherylj> lazyPower: no, just alpha2
[18:49] <lazyPower> its tagged alpha2 - but i've got fingers crossed
[18:49] <lazyPower> wah wahhhh
[18:49] <lazyPower> ok :D thanks for confirmation
[18:49] <cherylj> sorry :(
[18:49] <cherylj> np :)
[18:49] <lazyPower> not even upset :D i'll be patient
[18:49] <lazyPower> or i'll build a container w/ nightly
[18:49] <lazyPower> either way
[18:54] <rick_h__> lazyPower: alpha2 is out though?
[18:54] <rick_h__> lazyPower: /me is missing the wah wahhhhh
[18:54] <lazyPower> rick_h__ - wait it is?
[18:54]  * lazyPower pulls latest charmbox:devel
[18:55] <rick_h__> lazyPower: https://launchpad.net/~juju/+archive/ubuntu/devel
[18:55] <lazyPower> woo hot diggity dog, i'm behind the times
[18:56] <rick_h__> lazyPower: turn that frown upside down
[18:56] <rick_h__> and run along the razor's edge
[18:56] <katco> ericsnow: natefinch: babam https://github.com/juju/juju/pull/4408
[18:56] <katco> .
[18:58] <ericsnow> katco: sweet!
[18:58] <lazyPower> katco great success!
[18:58] <lazyPower> devops borat approves of this PR
[18:58] <katco> lol
[18:58] <natefinch> dayum
[18:58] <natefinch> nice
[18:58] <rick_h__> katco: natefinch ericsnow <3
[18:59] <rick_h__> katco: natefinch ericsnow looking forward to demo time
[18:59] <lazyPower> rick_h__ you beautiful man you - that release snuck by me, and was available in the latest devel image. man i love automation
[18:59] <katco> rick_h__: demo/status update to follow
[18:59] <katco> ericsnow: natefinch: i'm just in moonstone w/e you're ready
[19:00] <lazyPower> katco can i be a fly on the wall?
[19:00] <katco> lazyPower: for?
[19:00] <lazyPower> demo / status
[19:00] <katco> lazyPower: you want to watch the demo you mean?
[19:01] <lazyPower> yes :D
[19:01] <katco> lazyPower: sure i'll ping you when we're recording
[19:01] <lazyPower> huzzahhhh
[19:18] <mup> Bug #1545116 opened: When I run "juju resources <service>" after a service is destroyed, resources are still listed. <resources> <juju-core:Confirmed> <https://launchpad.net/bugs/1545116>
[19:51] <mup> Bug #1545126 opened: juju/maas do not create ptr reccords for bare metal servers with multiple networks <juju-core:New> <https://launchpad.net/bugs/1545126>
[20:24] <lazyPower> omg relations in tabular status? you really DO love us! <3
[20:28] <perrito666> lazyPower: :)
[20:28] <lazyPower> perrito666 best valentines day gift ever
[20:28] <perrito666> lazyPower: I am romantic like that
[20:37] <katco> for anyone who's interested, going to be demoing resources here in a bit: http://youtu.be/SS3AQO3ZN9Y
[20:38] <TheMue>  did I miss something?
[21:21] <marcoceppi> ericsnow:  list-resources --details is awesome
[21:21] <marcoceppi> I like that you all took into consideration resource delivery mismatch
[21:22] <ericsnow> marcoceppi: glad you noticed :)
[21:22] <marcoceppi> ericsnow: is there a resource-updated hook?
[21:22] <ericsnow> marcoceppi: definitely something we could have easily missed
[21:22] <ericsnow> marcoceppi: nope
[21:22] <marcoceppi> so we have to use update-status to check if a new resources is available?
[21:22] <ericsnow> marcoceppi: we're using upgrade-charm
[21:22] <marcoceppi> oh, okay
[21:22] <ericsnow> marcoceppi: (the hook)
[21:23] <marcoceppi> so an upload-resource call will invoke an upgrade-charm event?
[21:24] <ericsnow> marcoceppi: yeah, both "juju push-resource" and "juju upgrade-charm"
[21:24] <marcoceppi> interesting. is there a way to tell what version of a resource I have in a charm without running resource-get?
[21:24] <rick_h__> marcoceppi: e.g. if you publish a new combo of charm revision, resource revision, etc in the charmstore. Upgrade charm will get triggered for all of them since they're published as one working set
[21:25] <marcoceppi> I guess upgrade-charm would basically just trigger an "upgrade"
[21:25] <ericsnow> marcoceppi: juju charm list-resources <charm>
[21:25] <marcoceppi> ericsnow: from within the charm
[21:25] <ericsnow> marcoceppi: not really
[21:26] <ericsnow> marcoceppi: you either have the one on the controller or you don't (and resource-get fixes that)
[21:26] <rick_h__> ericsnow: it is interesting in that I don't want to rerun the code that comes after a resource-get if that resource hasn't been updated
[21:26] <ericsnow> marcoceppi: there is no concept of multiple revisions of the same resource (for a given service) on the controller
[21:27] <rick_h__> ericsnow: we'll have to think of how to tell resource-get it's not changed ?
[21:27] <ericsnow> marcoceppi: it's binary
[21:27] <marcoceppi> ericsnow: well, there is implicitly, if a unit needs to know that it needs to do something because it has a new resource
[21:27] <ericsnow> rick_h__: good point
[21:27] <ericsnow> rick_h__: I suppose you could compare checksums
[21:28] <marcoceppi> something like resource-list inside a hook
[21:28] <marcoceppi> or resource-get --checksum
[21:28] <rick_h__> marcoceppi: I think as it stands it reruns and expects the charms to be idempotent, but you're right we should enable an efficiency bump there
[21:28] <ericsnow> rick_h__: but having juju do that for the charm may be good
[21:28] <rick_h__> ericsnow: right, the charm author may want to rerun, we should enable them to handle it how they wish
[21:28] <marcoceppi> that way a charm could just track the checksum it's got
[21:28] <rick_h__> yep
[21:29] <rick_h__> ericsnow: can you take that feedback back to the team please?
[21:29] <ericsnow> rick_h__: will do
[21:30] <rick_h__> ty and ty for the feedback marcoceppi, good stuff
[21:30] <ericsnow> rick_h__, marcoceppi: +100
[21:49] <marcoceppi> love it so far, compiling from master now to update a few charms
[22:12] <perrito666> weeee I did it, I did it, I finally removed the status duplication :D
[22:12] <perrito666> just took me 3 days :p
[22:12]  * perrito666 dances
[22:34] <marcoceppi> hey party people
[22:34] <marcoceppi> anyone around to answer a quick question about the new juju deploy in 2.0?
[22:39] <marcoceppi> I guess it is pretty late
[22:52] <perrito666> what is juju romulus??
[22:53] <rick_h__> marcoceppi: maybe?
[23:33] <mup> Bug #1545196 opened: Juju claims AWS ap-northeast-2 not found <ec2-provider> <juju-core:Incomplete> <juju-core cloud-credentials:Triaged> <https://launchpad.net/bugs/1545196>