[00:17] * thumper just pushed a mega-branch [00:18] http://reviews.vapour.ws/r/4937/ [00:18] even though it touches 59 files [00:18] the diffstat is just: +541 −387 [00:18] it needs some cleanup in instance/namespace.go and instance/namespace_test.go but the rest is good for review [00:19] * thumper goes to walk the dog before she climbs the walls [00:20] thnx thumper \o/.. i *think* i'm meant to be OCR today :-P [00:21] wallyworld: this is the shorter lxd name branch [00:21] wallyworld: it also fixes the maas container dns issue [00:21] and makes some things more consistent [00:22] thumper: awesome [00:22] thumper: my branch a couple of days ago was 840 files [00:22] :) [00:23] someone has to do the big hunks [00:23] see you in a bit [00:23] hunks or chunks? [00:59] thumper: OH MY GOD [00:59] i just found a set of state tests that duplicate TearDownTest [01:11] davecheney: doing test archeology? [01:13] perrito666: last i've heard, all these were under davecheney's couch or something :) [01:15] perrito666: no, i hit my toe on this [01:16] davecheney: If you are to believe Indiana Jones movies, that is how most archeology is done [01:17] https://github.com/juju/juju/pull/5493 [01:18] perrito666: i have not excavated deep enough to get to the real issue yet [01:18] i'm still digging up the past [01:20] ah, allwatcher [01:46] perrito666: hide yo' kids, hide yo' wife, all watcher coming [02:30] wallyworld: http://reviews.vapour.ws/r/4937/ updated. Nothing really surprising. Heading home to test maas now. Car wheel alignment now done. [02:30] * thumper heads offline briefly === natefinch-afk is now known as natefinch [02:47] * thumper tries to remember how to bootstrap a maas thing again [02:48] thumper: I have a fix for https://bugs.launchpad.net/juju-core/+bug/1586244 [02:48] Bug #1586244: state: DATA RACE in watcher <2.0-count> [02:48] it's not perfect [02:48] but it will unblock things for beta8 [02:49] the "we can just zero shit out to make the test pass" logic was woven through all those tests [02:49] a custom equality function would have been a _lot_ of work [02:49] and would have taken days to test [02:49] days of wall time [02:49] because the state tests are such an utter cluster fuck [02:57] wallyworld: what is the expected way to bootstrap maas? [02:57] I don't need to add-cloud, do i? [02:57] just creds? [02:57] * thumper is confused [02:58] thumper: unless it has changed in the last week, you do need add-cloud [02:58] what is supposed to go into the cloud config for a maas cloud? [02:58] juju help maas doesn't do anything [02:59] I feel it should tell you how to set up juju for maas [03:00] thumper: yeah.... I am hoping whoever did the maas work will fix that [03:00] it is the credential and cloud definitions [03:01] not the maas provider [03:01] what auth-type should be defined for maas? [03:01] it uses an oauth token [03:01] thumper: http://pastebin.ubuntu.com/16858151/ [03:02] thumper: your yaml file for add-cloud should look like that... fix the cloud name and endpoint of course. [03:02] right [03:02] and creds? [03:02] thumper: I think when you add it, it'll ask for the oauth key [03:02] maybe during bootstrap? I forget when it asked me [03:04] I really wish juju bootstrap maas/https://myhostname.com/MAAS worked [03:04] the release notes say that works, but it doesn't [03:04] mine works and creds are in the format http://pastebin.ubuntu.com/16858205/
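(Editor's note: the pastebins above have since expired. As a rough reconstruction of the two files being discussed - the cloud name, endpoint, and placeholder values here are illustrative, though the oauth1 auth-type and the maas-oauth credential key reflect how the MAAS provider was documented to behave - the clouds.yaml passed to add-cloud looked something like:

    clouds:
      vmaas:
        type: maas
        auth-types: [oauth1]
        endpoint: http://my-maas-host:5240/MAAS

and the matching credentials.yaml:

    credentials:
      vmaas:
        admin:
          auth-type: oauth1
          maas-oauth: <consumer-key>:<token-key>:<token-secret>

added with something like "juju add-cloud vmaas clouds.yaml" and "juju add-credential vmaas", or crafted by hand under ~/.local/share/juju/ as dimitern describes below.)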
[03:05] I add creds for vmaas [03:05] then go "juju list-credentials" [03:05] and it errors [03:05] fantastic [03:05] saying "removing secrets from credentials for cloud maas: cloud maas not valid" [03:06] works if you say "juju list-credentials vmaas" [03:06] uhh... that sounds like a bad idea.... deleting secrets because we encountered an error? [03:06] i did mine by hand... and here is my clouds... http://pastebin.ubuntu.com/16858228/ [03:07] or do they just mean they're eliding them from the output? [03:07] because that error message is scary [03:07] also... maybe do not name your cloud 'maas'? maybe there is a confusion with type 'maas' [03:07] got it now [03:07] meh.... if they're one and the same, what's the difference? [03:07] it is bootstrapping [03:07] huzzah [03:08] I name all my environments after the provider type. I only have one of each... why reinvent the wheel [03:08] anastasiamac: I didn't name it maas [03:08] I called mine "vmaas" [03:08] so no idea where it is falling down there [03:08] :/ [03:09] i've crafted files by hand and could bootstrap... the commands gave me grief :( [03:15] maas tests running [03:15] * thumper packing up and taking laptop while Maia does BJJ [03:15] emails and stuff [03:30] all := newStore() [03:30] why ... [03:43] github.com/juju/juju/payload/api/private [03:43] private is a terrible name for a package [03:44] davecheney: naming is hard. It's the api for agents, as opposed to the client api [03:46] * thumper isn't going to start on that one [03:46] * thumper does hr bollocks [04:22] Bug #1587236 opened: no 1.25.5 tools for vivid? [04:33] yes, the client api is called this: github.com/juju/juju/payload/api/private/client [04:33] ffs [04:37] thumper: did some looking into the api bifurcation [04:37] lots and lots of refactoring will be needed before it's possible [04:38] the api depends directly on watchers [04:38] i don't even know how that's possible [04:38] oh and directly on the state/multiwatcher [04:39] lucky(~/src/github.com/juju/juju/api) % pt multiwatcher [04:39] allwatcher.go: [04:39] 9: "github.com/juju/juju/state/multiwatcher" [04:39] 51:func (watcher *AllWatcher) Next() ([]multiwatcher.Delta, error) { [04:40] right, so we expose the state types directly inside the api, even though they get pooped into and out of json [04:40] this should be reasonably straightforward to fix [04:50] davecheney: as fun as it is, now is not the time to be looking into this [04:50] we need to keep focused on the 2.0 beta 8 bugs [04:50] o/ balloons [04:50] thumper: understood, I'm not making any changes [04:50] but I have some time to think during test runs [04:50] doc is fine to start [04:50] :) [04:50] indeed [04:51] thumper: there's 2 docs in state which have EnvUUID - endpointBindingsDoc and resourceDoc - these should be deleted, right? [04:51] wallyworld: 'juju deploy ubuntu --to lxd:' doesn't work [04:51] correct [04:51] it's on the todo list [04:51] why not? [04:51] oh, bug? [04:51] known issue, just haven't got to it yet [04:51] um... not sure about those docs [04:51] we should double check with the original authors [04:51] ack [04:51] won't file another bug then
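(Editor's note: on the api-bifurcation point at [04:38]-[04:40] above - the api package importing state/multiwatcher directly - the "reasonably straightforward" fix davecheney alludes to is to give the API its own wire-level types and convert at the boundary. A minimal Go sketch of the idea only, with entirely hypothetical names; this is not the refactoring that actually landed:

    // Package params would own the wire-level representation, so that
    // github.com/juju/juju/api no longer needs to import
    // github.com/juju/juju/state/multiwatcher.
    package params

    import "encoding/json"

    // Delta mirrors the information in multiwatcher.Delta, but as an
    // API-owned type that is marshalled to and from JSON.
    type Delta struct {
        Removed bool            `json:"removed"`
        Entity  json.RawMessage `json:"entity"`
    }

    // MarshalDelta converts the server's internal representation into
    // the wire type at the API boundary; the client decodes Entity
    // into its own concrete types rather than sharing state's.
    func MarshalDelta(removed bool, entity interface{}) (Delta, error) {
        raw, err := json.Marshal(entity)
        if err != nil {
            return Delta{}, err
        }
        return Delta{Removed: removed, Entity: raw}, nil
    }

Since the deltas already cross the wire as JSON, nothing in the payload changes; only the import graph does.)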
[04:52] wallyworld: btw, just finished testing with maas, changes look good [04:52] thumper: nate is looking at that plus other lxc to lxd things this week [04:52] awesome [04:52] wallyworld: juju add-machine talks *a lot* about lxc [04:52] we should change to lxd [04:52] yes [04:52] there's a lot there [04:52] just a sed script [04:52] sigh [04:52] * thumper nods [04:52] sed is magic like that [04:53] wallyworld: I thought you'd find my typo amusing on the PR [04:53] fuxes-nnnn [04:53] u and i are so close together [04:53] lol [04:54] thumper: with the EnvUUID thing, these fields are tagged "env-uuid" so even if we keep them, sure we want to use "model-uuid" [04:54] surely [04:54] yes [04:54] surely [04:54] we probably can just remove them [04:54] I have been cleaning up docs as I go [04:54] most model-uuid fields are implicit and aren't needed [04:54] the framework ensures they are there [04:54] and valid [04:55] ok, i'll delete them [04:57] * thumper looks sadly at tomorrow [04:57] meetings solid from 8am to noon [04:58] * thumper sighs [04:58] yay [04:58] later peeps [04:58] I'll check in on that branch to see if it lands - should only be intermittent failures if it doesn't === thumper is now known as thumper-bjj [05:21] oh no [05:21] we have tests which accidentally mutate data stored in a cache [05:21] then expect to match that accidentally mutated data [06:06] menn0: thumper-bjj http://reviews.vapour.ws/r/4941/ [06:06] ^ fix for beta8 blocker [07:07] axw: rb doesn't like the latest pr, you can eyeball the changes here https://github.com/juju/juju/pull/5497/files?diff=split [07:08] if you get time, could you take a look? not really urgent [07:28] wallyworld: ok [07:28] only 380 files changed this time [07:28] yeah :-( [07:29] axw: hey, do you know if any of the openstack charms use the enhanced storage support? [07:29] dimitern: AFAIK only ceph is using it [07:29] dimitern: why do you ask? [07:29] axw: I've been deploying the openstack-base bundle on my hardware maas for the past few days [07:30] axw: it mostly works :) [07:30] dimitern: cool :) are you looking to test storage? [07:30] axw: yeah, I was looking at the various ceph-related charms, and ISTM none of them define a "storage" section in their metadata [07:31] dimitern: hrm, possibly in staging still [07:31] axw: and it appears cinder is required in order to later use juju on the deployed openstack [07:32] axw: ok, just checking whether I missed something obvious.. === frankban|afk is now known as frankban [07:32] nova-lxd is quite cool! [07:33] dimitern: it's in here: https://api.jujucharms.com/charmstore/v5/ceph/archive/metadata.yaml [07:33] dimitern: apparently it still hasn't made its way over to ceph-osd yet [07:34] axw: what's the charm url for the above? cs:~?/ceph .. [07:34] dimitern: cs:xenial/ceph-1 (or just "ceph") [07:35] axw: ah, I see - so ceph has it but ceph-osd not yet [07:35] dimitern: if you want to use juju storage, don't set the osd-devices config. instead, deploy with --storage osd-devices=<...> [07:35] dimitern: seems so. I thought it had been copied over [07:35] axw: but for that to work the charm needs a storage section in the metadata, right? [07:36] dimitern: yep, so that only works for ceph, and not ceph-osd atm [07:36] axw: do you know the difference? is ceph-osd + ceph-mon = ceph? [07:37] dimitern: not entirely sure, but I think so [07:37] axw: ok, I'll ask jamespage for details [07:38] dimitern: Chris Holcombe is in charge of the ceph charms, FYI [07:38] jamespage: can I use 3x ceph instead of 3x ceph-osd + 3x ceph-mon for an openstack base deployment? [07:39] axw: oh, good to know, thanks!
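(Editor's note: to make axw's advice at [07:35] concrete - a hedged example, since pool names and sizes depend on the environment and the exact storage-constraint grammar should be checked against "juju help storage" - instead of setting the charm's osd-devices config option, the storage is requested at deploy time:

    juju deploy ceph --storage osd-devices=loop,1G,3

This only works because the ceph charm's metadata.yaml linked above declares a storage section for osd-devices of type "block"; as the chat notes, ceph-osd lacked that section at the time, so the same flag would be rejected for it.)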
[07:39] dimitern: totally unrelated, I'm curious to know what you're doing with concourse.ci. I saw it a while ago, but didn't dig too deep. got anything to show off at the sprint perhaps? :) [07:40] axw: we have a call later today with the QA guys to discuss concourse ci [07:41] dimitern: okey dokey. I shall watch this space [07:41] axw: I'm charming concourse so it can be evaluated easily, first attempt in bash, now "properly", i.e. with charm layers and unit tests [07:41] * axw nods [07:43] wallyworld: I can't open the full diff on GitHub either, so reviewing this is going to be difficult... [07:43] faaark [07:44] wallyworld: I've got the raw diff, I guess I'll just email comments :/ [07:44] damn, sorry [07:44] axw, dimitern: ceph-osd has storage support in the master branch - not yet released [07:44] wallyworld: oh wow it's happening! service -> application [07:44] cs:~openstack-charmers-next/xenial/ceph-osd [07:44] dimitern: yeah, it is. omfg it's been a big job [07:44] jamespage: righto, thanks for clarifying [07:45] jamespage: nice, I'll use that then - did your rabbitmq-server fix for bug 1574844 land on cs:xenial/rabbitmq-server? [07:45] Bug #1574844: juju2 gives ipv6 address for one lxd, rabbit doesn't appreciate it. [07:45] wallyworld: it will be worth enduring it now rather than later :) [07:46] indeed. we need to get this all done for beta8 [07:46] since after beta8, we need to support upgrades [07:47] jamespage: another question - since all of my NUCs have 1 disk only, I decided to try emulating 2 disks by using a volume group with 3 volumes - root, ceph, and lxd (for nova-lxd) - seems to work [07:48] jamespage: well, the question is - should it work equally well like this, I guess? [07:54] axw: I've found an issue with storage/looputil/ tests failing if you have a loop device attached locally (e.g. losetup /dev/loop0 /var/lib/lxc-btrfs.img) [07:54] dimitern: eep, sorry. they're meant to be isolated [07:55] dimitern: which test fails? [07:55] or is it all of them? :) [07:55] axw: attempted a fix here: http://reviews.vapour.ws/r/4871/diff/# (that was discarded, but I'm thinking about extracting and proposing a fix like the one in that diff in storage/) [07:55] axw: in parseLoopDeviceInfo [07:56] with the fix the tests no longer fail, but I guess the isolation issue is still present.. [07:56] dimitern, not yet - working on that today [07:57] dimitern, you can always test with the ones from ~openstack-charmers-next - that's an upload of the master branch of each charm as it lands [07:58] jamespage: I used -next initially, but a few deployments failed due to incompatible changes across revisions (e.g. lxd started using 'block-devices' vs 'block-device') [07:58] dimitern: seems like a reasonable change [07:58] axw: cheers, I'll propose it as a separate fix then [07:59] dimitern: could you please also file a bug about isolation? it should be fixed also, but it's a separate issue [07:59] axw: will do [07:59] thanks [07:59] dimitern, well those will always happen at some point in time - lxd was never actually released until 16.04 so we broke it before then... [08:00] jamespage: that's ok - it's under development still [08:01] jamespage: I really should've tried that earlier (full openstack deployment with lxc or lxd).. now I see how flaky our multi-nic approach is :/ [08:02] lxc is *even* worse with the default lxc-clone: true .. any lxc always comes up with 2 IPs per NIC due to cloning ;(
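(Editor's note: the looputil problem above is the classic one of tests shelling out to the real losetup and seeing whatever loop devices the host happens to have. One conventional fix - a sketch only, with hypothetical names; the actual proposal is in the reviewboard diff dimitern links - is to inject the command runner so tests can feed canned output:

    package looputil

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runFunc abstracts command execution so tests can substitute canned
    // losetup output instead of inspecting the host's real loop devices.
    type runFunc func(cmd string, args ...string) (string, error)

    func realRun(cmd string, args ...string) (string, error) {
        out, err := exec.Command(cmd, args...).CombinedOutput()
        return string(out), err
    }

    // Manager lists loop devices via an injectable runner: production
    // code uses realRun, tests pass a stub returning fixed output.
    type Manager struct {
        run runFunc
    }

    func NewManager() *Manager { return &Manager{run: realRun} }

    // BackingFiles parses `losetup -a` output, whose lines look like:
    //   /dev/loop0: [0021]:12 (/var/lib/lxc-btrfs.img)
    // and returns a device -> backing-file map.
    func (m *Manager) BackingFiles() (map[string]string, error) {
        out, err := m.run("losetup", "-a")
        if err != nil {
            return nil, err
        }
        files := make(map[string]string)
        for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
            if line == "" {
                continue
            }
            colon := strings.Index(line, ":")
            open, end := strings.LastIndex(line, "("), strings.LastIndex(line, ")")
            if colon < 0 || open < 0 || end < open {
                return nil, fmt.Errorf("unexpected losetup line: %q", line)
            }
            files[line[:colon]] = line[open+1 : end]
        }
        return files, nil
    }

A test then constructs Manager{run: stub} with a fixed losetup line and never touches the host, which is exactly the isolation axw says the tests were meant to have.)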
[08:02] frobware: are you around? [08:03] dimitern: yep [08:03] frobware: wanna sync? [08:03] dimitern: oh yes [08:03] frobware: ok, omw [08:11] wallyworld: sorry, this is just too immense for me to review === thumper-bjj is now known as thumper-eod [08:12] ok, i wonder wtf is up with github :-( [08:12] maybe i'll try reproposing [08:12] wallyworld: regardless of that, it's too big. the other one was big, and was about 1/4 the size diff [08:13] the trouble is when you rename even one thing, the corresponding changes are huge [08:13] and that one was basically the same change repeated in most of the files, this one is all over the place [08:13] wallyworld: I don't know how to fix it, but I can't review it as is [08:13] ok, i'll see if it behaves a second time [08:13] wallyworld: one thing I did pick up was an inappropriate rename of the juju/service package path [08:14] that's about systemd/upstart services, not juju services [08:14] i didn't mean to do that [08:15] axw: are you sure? [08:16] it's not renamed as far as i can see [08:29] wallyworld: I just replied to your question via email. I misremembered, there's just an invalid comment change [08:29] ah, rightio [08:30] axw: i'll try and get the diff up using rbtool or something. after soccer [08:30] wallyworld: ok, but I don't think it's going to help much. it really needs to be broken up I think [08:31] damn, that will be very difficult :-( [08:31] since even an error message change percolates through several packages and tests [08:34] wallyworld: you could do all messages/strings in one branch, types in another, functions in another... I don't know. all I know is I can't perform any kind of useful review on a 60K line diff [08:58] dimitern: https://bugs.launchpad.net/juju-core/+bug/1576674 [08:58] Bug #1576674: 2.0 beta6: only able to access LXD containers (on maas deployed host) from the maas network [09:03] frobware: ta! [09:06] axw: fyi, bug 1587345 [09:06] Bug #1587345: worker/provisioner: Storage-related tests not isolated from the host machine [09:31] Bug #1587345 opened: worker/provisioner: Storage-related tests not isolated from the host machine [10:45] dimitern, fwereade: I'm going to skip the meeting in 15 - need to catch up with other stuff and have other meetings later in the day too. [10:45] frobware: that's ok - marked you as optional anyway :) [11:01] fwereade: omw [11:34] dooferlad: standup [11:52] dimitern, fwereade, voidspace, frobware: review please? http://reviews.vapour.ws/r/4944/ It's the other side of the Mongo3.2 slowness fixes. [12:10] anastasiamac: i am on holiday until Thursday :-) [12:11] dooferlad: ooh, lucky you \o/ did not see it in the team's calendar - sorry :) have fun [12:28] babbageclunk: sorry, was otp till now - looking [12:35] dimitern: further thoughts: nursery workers probably want a StringsWorker that notifies only on enter-relevant-state (or watcher-created-state-already-relevant) [12:35] dimitern, failures kinda need to be the nursery's responsibility to reschedule [12:36] dimitern, I'm not sure whether that will be better with more controller-side infra, or not [12:38] fwereade: a stringsworker as coordinator?
[12:38] dimitern, if we do have more infra we should probably go all the way with it: so when we create a machine we add a schedule-whatever doc for "now", and if the worker fails it just sends a reschedule-because-error message that writes a new time on the ticket, updates the status(?) and moves on, confident that the watcher will deliver it when required [12:39] dimitern, well, the worker wants to be based on some watcher? I don't know for sure that strings > notify [12:40] fwereade: strings sounds better, as the nursery entities should be few and short-lived anyway [12:42] fwereade: I've taken notes for those [12:51] fwereade: that makes total sense [12:52] fwereade: it's already taking shape in my mind.. tyvm!
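(Editor's note: a minimal Go sketch of the "ticket" scheme fwereade describes at [12:38] - every type and method here is hypothetical, included only to pin down the shape of the idea: work is recorded as schedule docs, a watcher delivers the ids that are due, and a failure merely writes a new time on the ticket rather than retrying in-process:

    package nursery

    import "time"

    // Ticket is a schedule doc: which entity to process and when.
    type Ticket struct {
        ID    string
        After time.Time
    }

    // Store abstracts the controller-side infrastructure: a watcher
    // that delivers due ticket ids, and a way to push a ticket's time
    // into the future ("write a new time on the ticket").
    type Store interface {
        Due() <-chan string
        Get(id string) (Ticket, error)
        Reschedule(id string, at time.Time) error
    }

    // Run is the worker loop. It never retries locally; on failure it
    // reschedules and trusts the watcher to redeliver the ticket.
    func Run(store Store, handle func(Ticket) error, backoff time.Duration, stop <-chan struct{}) error {
        due := store.Due()
        for {
            select {
            case <-stop:
                return nil
            case id := <-due:
                t, err := store.Get(id)
                if err != nil {
                    return err
                }
                if err := handle(t); err != nil {
                    if err := store.Reschedule(id, time.Now().Add(backoff)); err != nil {
                        return err
                    }
                }
            }
        }
    }

The appeal of the design is that retry state lives in the schedule doc, not in worker memory, so a restarted worker picks up exactly where the tickets say it should.)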
[12:52] babbageclunk: ping [12:52] dimitern: pong [12:53] babbageclunk: how can I test your PR locally? run the tests before and after apt install mongodb-3.2? [12:55] dimitern: Yup - although it's juju-mongodb3.2 [12:55] babbageclunk: ok, I'll try it now [12:56] babbageclunk: cheers [12:56] dimitern: Then build, then run tests with JUJU_MONGOD=/usr/lib/juju/mongo3.2/bin ... [12:57] dimitern: Thanks for looking! [12:57] babbageclunk: oh, I see, ta! (was just wondering..) [12:58] dimitern: You might see the occasional failure in subnets_test - that's the one that seems to be a problem with txns between mgo and mongo3.2 [12:59] babbageclunk: with your patch and both mongodb versions? or only with 3.2? [13:01] dimitern: yes, with my patch, only with mongo3.2. [13:01] babbageclunk: ok [13:01] dimitern: Actually you see the same without my patch. [13:01] babbageclunk: how much longer is "too long" with 3.2? [13:01] dimitern: ...but with mongo3.2 [13:03] dimitern: Well, running the state tests with 3.2 for master takes longer than a couple of hours, so longer than I could bear to wait while doing something else. Running a small slice of them showed they were about 100x slower. [13:04] dimitern: Is the maas-juju meeting I have in my calendar still current? [13:04] babbageclunk: oh! I better not wait more than a couple of minutes over the time it took with 2.4 then :) [13:04] babbageclunk: anybody else around from maas? [13:04] dimitern: Not unless you're feeling very bored. [13:04] dimitern: No, I'm the only one here! [13:05] babbageclunk: I don't have anything new so I guess we should skip it [13:05] dimitern: Cool cool [13:08] Bug #1585836 changed: Race in github.com/juju/juju/provider/azure [13:25] babbageclunk: all tests pass with 3.2, the time difference is more than double here though [13:26] 233.236 s vs 477.708 s [13:26] dimitern: :( [13:26] dimitern: Not sure there's much I can do about it unfortunately. [13:28] babbageclunk: I'll run it a couple of times to get better stats [13:28] dimitern: Thanks [13:28] hi there [13:28] im seeing a lot of this output: http://paste.ubuntu.com/16863930/ in debug-log [13:29] juju version 1.25.5 [13:29] it also happened in 1.24.x before upgrading to 1.25 [13:29] Bug #1585300 opened: environSuite invalid character \"\\\\\" in host name [13:29] redelmann: on what cloud are you seeing this? [13:30] dimitern: aws [13:30] redelmann: have you changed firewall-mode for the environment? [13:31] dimitern: no, everything is at the default [13:31] redelmann: and those machines showing the error - can you access exposed workloads on them regardless of the error? [13:32] dimitern: here is a larger output: http://paste.ubuntu.com/16863964/ [13:32] dimitern: yes [13:32] Bug #1585300 changed: environSuite invalid character \"\\\\\" in host name [13:33] dimitern: ok, this is new: exited "firewaller": AWS was not able to validate the provided access credentials (AuthFailure) [13:33] redelmann: it looks like something is odd about your AWS credentials? [13:33] dimitern, babbageclunk, voidspace: testing meeting? [13:33] dimitern: good point, i will research a little [13:33] redelmann: uh, sorry I need to take this - back in ~1/2h [13:34] dimitern: thank you [13:34] frobware: I don't have an invite? [13:35] babbageclunk: you should have now [13:35] dimitern: ta [13:38] Bug #1585300 opened: environSuite invalid character \"\\\\\" in host name [13:59] Bug # changed: 1518807, 1518809, 1518810, 1518820, 1519141, 1576266, 1583772 [14:01] babbageclunk: I suspect the longer run-time was bogus as the second run finished in 482.571 s; running a third now [14:02] dimitern: oh, good - yeah, it's definitely pretty variable. [14:03] redelmann: I'd suggest installing the official AWS CLI tools and using the same AWS credentials you give to juju to try e.g. starting and stopping an instance with --dry-run [14:15] babbageclunk: you have a review [14:15] dimitern: cheers! [14:15] it seems the tests actually take less time the more you run them :D [14:22] if anyone wanted to know what the Juju API looked like: http://rogpeppe-scratch.s3.amazonaws.com/juju-api-doc.html [14:22] dimitern: Sorry! I think I misled you about the value for JUJU_MONGOD - it needs to be the full path to mongod, including the filename, so for 3.2 you need to set it to /usr/lib/juju/mongo3.2/bin/mongod [14:23] dimitern: Otherwise it'll silently fall back to using the 2.4 binary. [14:23] rogpeppe, :) [14:23] alexisb: i don't think there are any API docs anywhere, right? [14:23] rogpeppe, nope [14:24] just what is on github [14:24] babbageclunk: that's what I did, yeah [14:24] alexisb: i guess some people might find this useful then [14:24] babbageclunk: it was using 3.2 [14:24] rogpeppe, you bet! [14:24] thank you [14:24] dimitern: Huh. Ok, awesome! [14:25] alexisb: my pleasure :) [14:25] babbageclunk: I double checked by 'ps -ef | grep mongod' while the test was running [14:25] good morning [14:25] dimitern: Sweet, that would definitely be better. [14:28] perrito666: o/ [14:28] frobware, I am going to be a few minutes late [14:29] alexisb: ack [14:31] babbageclunk: hey! you know what? [14:31] dimitern: what? [14:32] babbageclunk: running the tests on 2.4 with your patch (which I've just thought to do) actually cuts the run-time in *half*! [14:33] dimitern: heh, nice [14:33] dimitern: Wow, that's a bigger change than I saw. But I didn't want to get excited about the tests going faster under 2.4, because that will just make the pain of 3.2 more intense. [14:33] babbageclunk: running again to get a better sample === rodlogic is now known as Guest89868 [14:35] babbageclunk: well, it's quite reasonable to expect this, especially with not doing things like recreating dbs for *every* test case [14:35] dimitern: Yeah, definitely - it's doing a lot less. [14:36] babbageclunk: I can confirm - ~276 s on 2.4 vs ~476 on 3.2 [14:37] babbageclunk: great job! please land this soon! :) [14:44] dimitern: ok! [14:45] babbageclunk: I did a final run with 3.2 with comparable system load levels - it's still ~476 [14:48] dimitern: Cool.
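(Editor's note: pulling the scattered instructions above into one place - package and binary paths are exactly as given in the chat, so treat this as a sketch of the workflow rather than gospel:

    # install the 3.2 server alongside the default juju-mongodb (2.4)
    sudo apt-get install juju-mongodb3.2

    # run the state tests against it; note the full path *including*
    # the mongod filename ([14:22]), or the suite silently falls back
    # to the 2.4 binary
    JUJU_MONGOD=/usr/lib/juju/mongo3.2/bin/mongod go test github.com/juju/juju/state/...

    # sanity-check which server the tests actually launched ([14:25])
    ps -ef | grep mongod

)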
[14:50] dimitern: If someone's commented on a change, and I think I've addressed those comments, I should probably wait for them to put a Ship It on the review before merging it, right? [14:51] babbageclunk: if it was trivial, then just go ahead and merge. If it was something complicated, I usually wait to make sure they agree with my change [14:52] babbageclunk: that's assuming you got a "fix it, then ship it"... if you didn't get the ship it, then definitely ask if they intended to give you a ship it or if they think it will need a re-review [14:52] natefinch: Makes sense - thanks! [14:54] Bug #1587503 opened: LXD provider fails to set hostname [14:54] natefinch: (I got a bit excited and didn't check with someone for a previous change; when I realised, I figured it was probably a bit rude.) [15:03] babbageclunk: dimitern: frobware: sorry I missed the testing meeting - I was at the dentist. I have the invite now. [15:04] hmm... pretty sure I'm a team of 1 for my standup [15:14] voidspace: that's ok - we'll have one every other week [15:14] dimitern: yeah, I saw. Great. [15:15] voidspace: the gist of it is we need to come up with a list of relevant networking tests that could be added to the current CI [15:15] dimitern: right, sounds like a good start [15:36] frobware: Are you ok with me merging http://reviews.vapour.ws/r/4944 [15:36] frobware: ? [15:55] natefinch: or katco or anyone familiar with resources, could we get your ack on https://github.com/juju/docs/pull/1122 [15:55] need this to land in docs asap so we can have charmers start using terms [15:56] arosales: will look [15:56] natefinch: thanks! [16:24] fwereade, dimitern: you might like this: http://rogpeppe-scratch.s3.amazonaws.com/juju-api-doc.html [16:25] rogpeppe: awesome! tyvm [16:26] rogpeppe: what did you use for generating this? [16:26] dimitern: some code :) [16:26] dimitern: one mo and i'll push it to github [16:27] dimitern: it does rely on a function i implemented in apiserver/common to get all the facades [16:29] rogpeppe: nice! cheers [16:30] Bug #1587552 opened: GCE Invalid value for field 'resource.tags.items [16:30] babbageclunk: sorry, was otp [16:31] frobware: no worries - not blocking me, just thought I'd check === frankban is now known as frankban|afk [16:49] rogpeppe, nice [16:51] fwereade: obviously a lot of the comments are Go-implementation-oriented, but it's better than nothing :) [16:51] rogpeppe, absolutely [17:07] bbl [17:09] dimitern: i've pushed the code to github.com/rogpeppe/misc/cmd/jujuapidochtml and github.com/rogpeppe/misc/cmd/jujuapidoc [17:09] dimitern: the former just generates the HTML; the latter generates the computer-readable form that jujuapidochtml works from [17:10] rogpeppe: thanks! starred and bookmarked :) [17:29] natefinch, is the PR in this bug merged?: https://bugs.launchpad.net/juju-core/+bug/1581885 [17:29] Bug #1581885: Rename 'admin' model to 'controller' [17:29] seems to be?? [17:40] alexisb: yes, sorry, it's in [17:40] alexisb: I marked it as such [17:41] natefinch, awesome, thank you === redir_afk is now known as redir [19:56] * thumper looks sadly at the calendar for this morning and sighs [19:59] thumper: look at all those opportunities! [20:08] * perrito666 wonders why the code is working [20:09] it's never good when you have to wonder why something works [20:19] * thumper slaps rick_h_ [20:20] uh, did you just ....rickslap him? [20:20] * rick_h_ says "Thank you sir may I have another?"
=== natefinch is now known as natefinch-afk [20:49] brb reboot [20:53] trivial review for someone http://reviews.vapour.ws/r/4948/ [20:55] thumper: ship it [20:55] perrito666: ta [21:07] Bug #1587644 opened: jujud and mongo cpu/ram usage spike [21:52] Bug #1587236 changed: no 1.25.5 tools for vivid? [21:52] Bug #1587653 opened: juju enable-ha accepts the --series= option [22:01] Bug #1587653 changed: juju enable-ha accepts the --series= option [22:01] Bug #1587236 opened: no 1.25.5 tools for vivid? [22:07] Bug #1587236 changed: no 1.25.5 tools for vivid? [22:07] Bug #1587653 opened: juju enable-ha accepts the --series= option [22:14] alexisb: did you need anything? [22:14] * perrito666 notices he is answering old messages because of lag [22:16] perrito666, nope, I got an update from Ian, thanks! [22:16] oh, I get my updates from the internet, I just apt-get update :p [22:17] :) [22:18] * perrito666 tries to unfreeze [22:22] alexisb, wallyworld: http://reviews.vapour.ws/r/4949/diff/# [22:22] I've not done other packages because I think we actually need some of them [22:22] ok [22:22] potentially we could do the apiserver too [22:22] but haven't done that yet [22:22] state takes ages [22:23] thumper, well that was a simple fix to remove a bunch of pain [22:23] thank you [22:23] alexisb: I told you it wouldn't be big [22:45] * perrito666 runs tests for state and wonders if he could make dinner while he waits [23:04] thumper: in http://reviews.vapour.ws/r/4925/, you say "business object layer". would it make sense to have it in the core package maybe? I still think it belongs in "names" at the moment for consistency, but maybe if we were to move core outside, and fold names into it? [23:05] axw: have it in names for now [23:06] okey dokey [23:13] Bug #1463420 changed: Zip archived tools needed for bootstrap on windows [23:30] menn0, it seems our 1x1 has fallen off the calendar [23:30] alexisb: yeah, we haven't had one for a while [23:30] I will put some time for us tomorrow [23:31] alexisb: sounds good