* thumper just pushed a mega-branch | 00:17 | |
thumper | http://reviews.vapour.ws/r/4937/ | 00:18 |
thumper | even though it touches 59 files | 00:18 |
thumper | the diffstat is just: +541 −387 | 00:18 |
thumper | it needs some cleanup in instance/namespace.go and instance/namespace_test.go but the rest is good for review | 00:18 |
* thumper goes to walk the dog before she climbs the walls | 00:19 | |
anastasiamac | thnx thumper \o/.. i *think* i'm meant to be OCR today :-P | 00:20
thumper | wallyworld: this is the shorter lxd name branch | 00:21
thumper | wallyworld: it also fixes the maas container dns issue | 00:21 |
thumper | and makes some things more consistent | 00:21 |
wallyworld | thumper: awesome | 00:22 |
wallyworld | thumper: my branch a couple of days ago was 840 files | 00:22 |
thumper | :) | 00:22 |
thumper | someone has to do the big hunks | 00:23 |
thumper | see you in a bit | 00:23 |
anastasiamac | hunks or chunks? | 00:23 |
davecheney | thumper: OH MY GOD | 00:59 |
davecheney | i just found a set of state tests that duplicate TearDownTest | 00:59 |
perrito666 | davecheney: doing test archeology? | 01:11 |
anastasiamac | perrito666: last i've heard, all these were under davecheney's couch or something :) | 01:13
davecheney | perrito666: no, i hit my toe on this | 01:15 |
perrito666 | davecheney: If you are to believe Indiana Jones movies, that is how most archeology is done | 01:16 |
davecheney | https://github.com/juju/juju/pull/5493 | 01:17
davecheney | perrito666: i have not excavated deep enough to get to the real issue yet | 01:18 |
davecheney | i'm still digging up the past | 01:18
perrito666 | ah, allwatcher | 01:20 |
davecheney | perrito666: hide yo' kids, hide yo' wife, all watcher coming | 01:46 |
thumper | wallyworld: http://reviews.vapour.ws/r/4937/ updated. Nothing really surprising. Heading home to test maas now. Car wheel alignment now done. | 02:30 |
* thumper heads off line briefly | 02:30 | |
=== natefinch-afk is now known as natefinch | ||
* thumper tries to remember how to bootstrap a maas thing again | 02:47 | |
davecheney | thumper: I have a fix for https://bugs.launchpad.net/juju-core/+bug/1586244 | 02:48 |
mup | Bug #1586244: state: DATA RACE in watcher <2.0-count> <blocker> <race-condition> <juju-core:In Progress by dave-cheney> <https://launchpad.net/bugs/1586244> | 02:48 |
davecheney | it's not perfect | 02:48 |
davecheney | but it will unblock things for beta8 | 02:48 |
davecheney | the "we can just zero shit out to make the test pass" logic was woven through all those tests | 02:49 |
davecheney | a custom equality function would have been a _lot_ of work | 02:49 |
davecheney | and would have taken days to test | 02:49 |
davecheney | days of wall time | 02:49 |
davecheney | because the state tests are such an utter cluster fuck | 02:49 |
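The contrast being drawn here, in a minimal Go sketch - the type and field names are hypothetical, not the actual state test code:

```go
package statecompare

import (
	"reflect"
	"time"
)

// machineDoc is a stand-in for a state document under test; the volatile
// fields are the ones the existing tests "zero out" before comparing.
type machineDoc struct {
	ID       string
	Series   string
	TxnRevno int64     // volatile: bumped on every transaction
	Updated  time.Time // volatile: set asynchronously by a watcher
}

// equalIgnoringVolatile is the "zero it out to make the test pass" approach:
// blank the volatile fields on both copies, then DeepEqual the rest.
func equalIgnoringVolatile(a, b machineDoc) bool {
	a.TxnRevno, b.TxnRevno = 0, 0
	a.Updated, b.Updated = time.Time{}, time.Time{}
	return reflect.DeepEqual(a, b)
}

// machineDocEqual is the custom-equality alternative: compare only the
// fields that matter. Clearer, but one such function is needed per compared
// type, which is the "lot of work" referred to above.
func machineDocEqual(a, b machineDoc) bool {
	return a.ID == b.ID && a.Series == b.Series
}
```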
thumper | wallyworld: what is the expected way to be able to bootstrap maas? | 02:57 |
thumper | I don't need to add-cloud do i? | 02:57 |
thumper | just creds? | 02:57 |
* thumper is confused | 02:57 | |
natefinch | thumper: unless it has changed in the last week, you do need add cloud | 02:58 |
thumper | what is supposed to go into the cloud config for a maas cloud? | 02:58 |
thumper | juju help maas doesn't do anything | 02:58 |
thumper | I feel it should tell you how to set up juju for maas | 02:59 |
natefinch | thumper: yeah.... I am hoping whoever did the maas work will fix that | 03:00 |
thumper | it is the credential and cloud definitions | 03:00 |
thumper | not the maas provider | 03:01 |
thumper | what auth-type should be defined for maas? | 03:01 |
thumper | it uses an oauth token | 03:01 |
natefinch | thumper: http://pastebin.ubuntu.com/16858151/ | 03:01 |
natefinch | thumper: your yaml file for add-cloud should look like that... fix the cloud name and endpoint of course. | 03:02 |
thumper | right | 03:02 |
thumper | and creds? | 03:02 |
natefinch | thumper: I think when you add it, it'll ask for the oauth key | 03:02
natefinch | maybe during bootstrap? I forget when it asked me | 03:02 |
natefinch | I really wish juju bootstrap maas/https://myhostname.com/MAAS worked | 03:04 |
natefinch | the release notes say that works, but it doesn't | 03:04 |
anastasiamac | mine works and creds are in format http://pastebin.ubuntu.com/16858205/ | 03:04
thumper | I add creds for vmaas | 03:05 |
thumper | then go "juju list-credentials" | 03:05 |
thumper | and it errors | 03:05 |
natefinch | fantastic | 03:05 |
thumper | saying "removing secrets from credentials for cloud maas: cloud maas not valid" | 03:05 |
thumper | works if you say "juju list-credentials vmaas" | 03:06
natefinch | uhh... that sounds like a bad idea.... deleting secrets because we encountered an error? | 03:06 |
anastasiamac | i did mine by hand... and here is my clouds... http://pastebin.ubuntu.com/16858228/ | 03:06 |
natefinch | or do they just mean they're eliding them from the output? | 03:07 |
natefinch | because that error message is scary | 03:07 |
anastasiamac | also...maybe do not name your cloud 'maas'? maybe there is a confusion with type 'maas' | 03:07 |
thumper | got it now | 03:07 |
natefinch | meh.... if they're one and the same, what's the difference? | 03:07 |
thumper | it is bootstrapping | 03:07 |
natefinch | huzzah | 03:07 |
natefinch | I name all my environments after the provider type. I only have one of each... why reinvent the wheel | 03:08 |
thumper | anastasiamac: I didn't name it maas | 03:08 |
thumper | I called mine "vmaas" | 03:08 |
thumper | so no idea where it is falling down there | 03:08 |
anastasiamac | :/ | 03:08 |
anastasiamac | i've crafted files by hand and could bootstrap... the commands gave me grief :( | 03:09
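The pastebins above have long since expired; for a MAAS cloud under the 2.0-era CLI, the two files being compared look roughly like this. The cloud name, endpoint and key are placeholders, not the values from this conversation:

```yaml
# the yaml file fed to add-cloud (or written by hand into clouds.yaml)
clouds:
  vmaas:
    type: maas
    auth-types: [oauth1]
    endpoint: http://192.168.1.10/MAAS

# credentials.yaml (~/.local/share/juju/credentials.yaml)
credentials:
  vmaas:
    admin:
      auth-type: oauth1
      maas-oauth: <API key from the MAAS user preferences page>
```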
thumper | maas tests running | 03:15 |
* thumper packing up and taking laptop while Maia does BJJ | 03:15 | |
thumper | emails and stuff | 03:15 |
davecheney | all := newStore() | 03:30 |
davecheney | why ... | 03:30 |
davecheney | github.com/juju/juju/payload/api/private | 03:43 |
davecheney | private is a terrible name for a package | 03:43 |
natefinch | davecheney: naming is hard. It's the api for agents, as opposed to the client api | 03:44 |
* thumper isn't going to start on that one | 03:46 | |
* thumper does hr bollocks | 03:46 | |
mup | Bug #1587236 opened: no 1.25.5 tools for vivid? <juju-core:New> <https://launchpad.net/bugs/1587236> | 04:22 |
davecheney | yes, the client api is called this github.com/juju/juju/payload/api/private/client | 04:33 |
davecheney | ffs | 04:33 |
davecheney | thumper: did some looking into the api bifurcation | 04:37 |
davecheney | lots and lots of refactoring will be needed before it's possible | 04:37
davecheney | the api depends directly on watchers | 04:38 |
davecheney | i don't even know how that's possible | 04:38 |
davecheney | oh and directly on the state/multiwatcher | 04:38 |
davecheney | lucky(~/src/github.com/juju/juju/api) % pt multiwatcher | 04:39 |
davecheney | allwatcher.go: | 04:39 |
davecheney | 9: "github.com/juju/juju/state/multiwatcher" | 04:39 |
davecheney | 51:func (watcher *AllWatcher) Next() ([]multiwatcher.Delta, error) { | 04:39 |
davecheney | right, so we expose the state types directly inside the api, even though they get pooped into and out of json | 04:40 |
davecheney | this should be reasonably straight forward to fix | 04:40 |
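One plausible shape for that fix, sketched with illustrative names only (not the change that was actually made): give the api package its own wire-level delta type and convert at the boundary, so state/multiwatcher never leaks to api callers.

```go
package apisketch

import "encoding/json"

// Delta is a hypothetical api-owned representation of one allwatcher change;
// this is what callers of AllWatcher.Next would see instead of a state type.
type Delta struct {
	Removed bool            `json:"removed"`
	Entity  json.RawMessage `json:"entity"`
}

// stateDelta stands in for the state/multiwatcher type currently exposed
// straight through the api.
type stateDelta struct {
	Removed bool
	Entity  interface{}
}

// fromStateDelta converts at the api boundary, which is also where the
// entity already gets marshalled to JSON for the wire.
func fromStateDelta(d stateDelta) (Delta, error) {
	raw, err := json.Marshal(d.Entity)
	if err != nil {
		return Delta{}, err
	}
	return Delta{Removed: d.Removed, Entity: raw}, nil
}
```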
thumper | davecheney: as fun as it is, now is not the time to be looking into this | 04:50 |
thumper | we need to keep focused on the 2.0 beta 8 bugs | 04:50 |
thumper | o/ balloons | 04:50 |
davecheney | thumper: understood, I'm not making any changes | 04:50 |
davecheney | but I have some time to think during test runs | 04:50 |
thumper | doc is fine to start | 04:50 |
thumper | :) | 04:50 |
davecheney | indeed | 04:50 |
wallyworld | thumper: there's 2 docs in state which have EnvUUID - endpointBindingsDoc and resourceDoc - these should be deleted, right | 04:51
thumper | wallyworld: "juju deploy ubuntu --to lxd:" doesn't work | 04:51
wallyworld | correct | 04:51 |
wallyworld | it's on the todo list | 04:51 |
thumper | why not? | 04:51 |
thumper | oh, bug? | 04:51 |
wallyworld | known issue, just haven't got to it yet | 04:51
thumper | um... not sure about those docs | 04:51 |
thumper | we should double check with the original authors | 04:51 |
thumper | ack | 04:51 |
thumper | won't file another bug then | 04:51 |
thumper | wallyworld: btw, just finished testing with maas, changes look good | 04:52 |
wallyworld | thumper: nate is looking at that plus other lxc to lxd things this week | 04:52
wallyworld | awesome | 04:52 |
thumper | wallyworld: juju add-machine talks *a lot* about lxc | 04:52 |
thumper | we should change to lxd | 04:52 |
wallyworld | yes | 04:52 |
wallyworld | there's a lot there | 04:52 |
wallyworld | just a sed script | 04:52 |
wallyworld | sigh | 04:52 |
* thumper nods | 04:52 | |
thumper | sed is magic like that | 04:52 |
thumper | wallyworld: I thought you'd find my typo amusing on the PR | 04:53 |
thumper | fuxes-nnnn | 04:53 |
thumper | u and i are so close together | 04:53 |
wallyworld | lol | 04:53 |
wallyworld | thumper: with the EnvUUID thing, these fields are tagged "env-uuid" so even if we keep them, sure we want to use "model-uuid" | 04:54 |
wallyworld | surely | 04:54 |
thumper | yes | 04:54 |
thumper | surely | 04:54 |
thumper | we probably can just remove them | 04:54 |
thumper | I have been cleaning up docs as I go | 04:54 |
thumper | most model-uuid fields are implicit and aren't needed | 04:54 |
thumper | the framework ensures they are there | 04:54 |
thumper | and valid | 04:54 |
wallyworld | ok, i'll delete them | 04:55 |
* thumper looks sadly at tomorrow | 04:57 | |
thumper | meetings solid from 8am to noon | 04:57 |
* thumper sighs | 04:58 | |
thumper | yay | 04:58 |
thumper | later peeps | 04:58 |
thumper | I'll check in on that branch to see if it lands - should only be intermittent failures if it doesn't | 04:58 |
=== thumper is now known as thumper-bjj | ||
davecheney | oh no | 05:21 |
davecheney | we have tests which accidentally mutate data stored in a cache | 05:21 |
davecheney | then expect to match that accidentally mutated data | 05:21 |
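A minimal sketch of the anti-pattern (hypothetical code, not the actual tests): a cache that hands out its internal map by reference, a test that mutates it, and an assertion that then compares against the same mutated object, so it can never fail.

```go
package cachesketch

// cache hands out its internal maps by reference rather than copying them.
type cache struct {
	entries map[string]map[string]string
}

func (c *cache) Get(key string) map[string]string {
	return c.entries[key] // the caller gets the live map, not a copy
}

// broken shows what such a test effectively does: "expected" and the cached
// value are the same map, so the mutation leaks into the cache and the
// final comparison is vacuously true.
func broken() bool {
	c := &cache{entries: map[string]map[string]string{
		"machine-0": {"series": "trusty"},
	}}

	expected := c.Get("machine-0")
	expected["series"] = "xenial" // accidentally mutates the cached data too

	got := c.Get("machine-0")
	return got["series"] == expected["series"] // always true, proves nothing
}
```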
davecheney | menn0: thumper-bjj http://reviews.vapour.ws/r/4941/ | 06:06 |
davecheney | ^ fix for beta8 blocker | 06:06 |
wallyworld | axw: rb doesn't like the latest pr, you can eyeball the changes here https://github.com/juju/juju/pull/5497/files?diff=split | 07:07
wallyworld | if you get time, could you take a look? not really urgent | 07:08 |
axw | wallyworld: ok | 07:28 |
axw | only 380 files changed this time | 07:28 |
wallyworld | yeah :-( | 07:28 |
dimitern | axw: hey, do you know if any of the openstack charms use the enhanced storage support? | 07:29 |
axw | dimitern: AFAIK only ceph is using it | 07:29 |
axw | dimitern: why do you ask? | 07:29 |
dimitern | axw: I've been deploying openstack-base bundle on my hardware maas for the past few days | 07:29 |
dimitern | axw: it mostly works :) | 07:30 |
axw | dimitern: cool :) are you looking to test storage? | 07:30 |
dimitern | axw: yeah, I was looking at the various ceph-related charms, and ISTM none of them define a "storage" section in their metadata | 07:30
axw | dimitern: hrm, possibly in staging still | 07:31 |
dimitern | axw: and it appears cinder is required in order to later use juju on the deployed openstack | 07:31 |
dimitern | axw: ok, just checking whether I missed something obvious.. | 07:32 |
=== frankban|afk is now known as frankban | ||
dimitern | nova-lxd is quite cool! | 07:32 |
axw | dimitern: it's in here: https://api.jujucharms.com/charmstore/v5/ceph/archive/metadata.yaml | 07:33 |
axw | dimitern: apparently it still hasn't made its way over to ceph-osd yet | 07:33 |
dimitern | axw: what's the charm url for the above? cs:~?/ceph .. | 07:34 |
axw | dimitern: cs:xenial/ceph-1 (or just "ceph") | 07:34 |
dimitern | axw: ah, I see - so ceph has it but ceph-osd not yet | 07:35 |
axw | dimitern: if you want to use juju storage, don't set the osd-devices config. instead, deploy with --storage osd-devices=<...> | 07:35 |
axw | dimitern: seems so. I thought it had been copied over | 07:35 |
dimitern | axw: but for that to work the charm needs a storage section in the metadata, right? | 07:35 |
axw | dimitern: yep, so that only works for ceph, and not ceph-osd atm | 07:36 |
dimitern | axw: do you know the difference ? is ceph-osd+ceph-mon = ceph ? | 07:36 |
axw | dimitern: not entirely sure, but I think so | 07:37 |
dimitern | axw: ok, I'll ask jamespage for details | 07:37 |
axw | dimitern: Chris Holcombe is in charge of the ceph charms, FYI | 07:38 |
dimitern | jamespage: can I use 3x ceph instead of 3x ceph-osd + 3x ceph-mon for an openstack base deployment? | 07:38
dimitern | axw: oh, good to know, thanks! | 07:39 |
axw | dimitern: totally unrelated, I'm curious to know what you're doing with concourse.ci. I saw it a while ago, but didn't dig too deep. got anything to show off at the sprint perhaps? :) | 07:39 |
dimitern | axw: we have a call later today with the QA guys to discuss concourse ci | 07:40 |
axw | dimitern: okey dokey. I shall watch this space | 07:41 |
dimitern | axw: I'm charming concourse so it can be evaluated easily, first attempt in bash, now "properly", i.e. with charm layers and unit tests | 07:41
* axw nods | 07:41 | |
axw | wallyworld: I can't open the full diff on GitHub either, so reviewing this is going to be difficult... | 07:43 |
wallyworld | faaark | 07:43 |
axw | wallyworld: I've got the raw diff, I guess I'll just email comments :/ | 07:44 |
wallyworld | damn, sorry | 07:44 |
jamespage | axw, dimitern: ceph-osd has storage support in master branch - not yet released | 07:44 |
dimitern | wallyworld: oh wow it's happening! service -> application | 07:44 |
jamespage | cs:~openstack-charmers-next/xenial/ceph-osd | 07:44 |
wallyworld | dimitern: yeah, it is. omfg it's been a big job | 07:44 |
axw | jamespage: righto, thanks for clarifying | 07:44 |
dimitern | jamespage: nice, I'll use that then - did your rabbitmq-server fix for bug 1574844 land on cs:xenial/rabbitmq-server ? | 07:45 |
mup | Bug #1574844: juju2 gives ipv6 address for one lxd, rabbit doesn't appreciate it. <conjure> <juju-release-support> <landscape> <lxd-provider> <juju-core:Won't Fix> <rabbitmq-server (Juju Charms Collection):Fix Committed by james-page> <https://launchpad.net/bugs/1574844> | 07:45 |
dimitern | wallyworld: it will be worth enduring it now rather than later :) | 07:45 |
wallyworld | indeed. we need to get this all done for beta8 | 07:46 |
wallyworld | since after beta8, we need to support upgrades | 07:46 |
dimitern | jamespage: another question - since all of my NUCs have 1 disk only, I decided to try emulating 2 disks by using a volume group with 3 volumes - root, ceph, and lxd (for nova-lxd) - seems to work | 07:47 |
dimitern | jamespage: well, the question is - should it work equally well like this I guess? | 07:48 |
dimitern | axw: I've found an issue with storage/looputil/ tests failing if you have a loop device attached locally (e.g. losetup /dev/loop0 /var/lib/lxc-btrfs.img) | 07:54 |
axw | dimitern: eep, sorry. they're meant to be isolated | 07:54 |
axw | dimitern: which test fails? | 07:55 |
axw | or is it all of them? :) | 07:55 |
dimitern | axw: attempted a fix here: http://reviews.vapour.ws/r/4871/diff/# (that was discarded, but I'm thinking about extracting the storage/ part of that diff and proposing it as a fix) | 07:55
dimitern | axw: in parseLoopDeviceInfo | 07:55 |
dimitern | with the fix the tests no longer fail, but I guess the isolation issue is still present.. | 07:56
jamespage | dimitern, not yet - working that today | 07:56 |
jamespage | dimitern, you can always test with the ones from ~openstack-charmers-next - that's an upload of the master branch of each charm as it lands | 07:57 |
dimitern | jamespage: I used -next initially, but a few deployments failed due to incompatible changes across revisions (e.g. lxd started using 'block-devices' vs 'block-device') | 07:58 |
axw | dimitern: seems like a reasonable change | 07:58 |
dimitern | axw: cheers, I'll propose it as a separate fix then | 07:58 |
axw | dimitern: could you please also file a bug about isolation? it should be fixed also, but it's a separate issue | 07:59 |
dimitern | axw: will do | 07:59 |
axw | thanks | 07:59 |
jamespage | dimitern, well those will always happen at some point in time - lxd was never actually released until 16.04 so we broke it before then... | 07:59 |
dimitern | jamespage: that's ok - it's under development still | 08:00 |
dimitern | jamespage: I really should've tried that earlier (full openstack deployment with lxc or lxd).. now I see how flaky our multi-nic approach is :/ | 08:01 |
dimitern | lxc is *even* worse with the default lxc-clone: true .. any lxc always comes up with 2 IPs per NIC due to cloning ;( | 08:02 |
dimitern | frobware: are you around? | 08:02 |
frobware | dimitern: yep | 08:03 |
dimitern | frobware: wanna sync? | 08:03 |
frobware | dimitern: oh yes | 08:03 |
dimitern | frobware: ok, omw | 08:03 |
axw | wallyworld: sorry, this is just too immense for me to review | 08:11 |
=== thumper-bjj is now known as thumper-eod | ||
wallyworld | ok, i wonder wtf is up with github :-( | 08:12 |
wallyworld | maybe i'll try reproposing | 08:12 |
axw | wallyworld: regardless of that, it's too big. the other one was big, and was about 1/4 the size diff | 08:12 |
wallyworld | the trouble is when you rename even one thing, the corresponding changes are huge | 08:13 |
axw | and that one was basically the same change repeated in most of the files, this one is all over the place | 08:13
axw | wallyworld: I don't know how to fix it, but I can't review as is | 08:13 |
wallyworld | ok, i'll see if it behaves a second time | 08:13 |
axw | wallyworld: one thing I did pick up was an inappropriate rename of the juju/service package path | 08:13
axw | that's about systemd/upstart services, not juju services | 08:14 |
wallyworld | i didn't mean to do that | 08:14 |
wallyworld | axw: are you sure? | 08:15 |
wallyworld | it's not renamed as far as i can see | 08:16 |
axw | wallyworld: I just replied to your question via email. I misremembered, there's just an invalid comment change | 08:29 |
wallyworld | ah, rightio | 08:29 |
wallyworld | axw: i'll try and get the diff up using rbtool or something. after soccer | 08:30
axw | wallyworld: ok, but I don't think it's going to help much. it really needs to be broken up I think | 08:30 |
wallyworld | damn, that will be very difficult :-( | 08:31
wallyworld | since even an error message change percolates through several packages and tests | 08:31
axw | wallyworld: you could do all messages/strings in one branch. types in another, functions in another... I don't know. all I know is I can't perform any kind of useful review on a 60K line diff | 08:34 |
frobware | dimitern: https://bugs.launchpad.net/juju-core/+bug/1576674 | 08:58 |
mup | Bug #1576674: 2.0 beta6: only able to access LXD containers (on maas deployed host) from the maas network <lxd> <maas-provider> <oil> <ssh> <juju-core:Triaged by dimitern> <https://launchpad.net/bugs/1576674> | 08:58 |
dimitern | frobware: ta! | 09:03 |
dimitern | axw: fyi, bug 1587345 | 09:06 |
mup | Bug #1587345: worker/provisioner: Storage-related tests not isolated from the host machine <tech-debt> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1587345> | 09:06 |
mup | Bug #1587345 opened: worker/provisioner: Storage-related tests not isolated from the host machine <tech-debt> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1587345> | 09:31 |
frobware | dimitern, fwereade: I'm going to skip the meeting in 15 - need to catch up with other stuff and other meetings later in the day too. | 10:45 |
dimitern | frobware: that's ok - marked you as optional anyway :) | 10:45 |
dimitern | fwereade: omw | 11:01 |
anastasiamac | dooferlad: standup | 11:34 |
babbageclunk | dimitern, fwereade, voidspace, frobware: review please? http://reviews.vapour.ws/r/4944/ It's the other side of the Mongo3.2 slowness fixes. | 11:52 |
dooferlad | anastasiamac: i am on holiday until Thursday :-) | 12:10 |
anastasiamac | dooferlad: ooh lucky some \o/ did not see it in team's calendar -sorry :) have fun | 12:11 |
dimitern | babbageclunk: sorry, was otp till now - looking | 12:28 |
fwereade | dimitern: further thoughts: nursery workers probably want a StringsWorker that notifies only on enter-relevant-state (or watcher-created-state-already-relevant) | 12:35 |
fwereade | dimitern, failures kinda need to be the nursery's responsibility to reschedule | 12:35 |
fwereade | dimitern, I'm not sure whether that will be better with more controller-side infra, or not | 12:36 |
dimitern | fwereade: a stringsworker as coordinator? | 12:38 |
fwereade | dimitern, if we do have more infra we should probably go all the way with it: so when we create a machine we add a schedule-whatever doc for "now", and if the worker fails it just sends a reschedule-because-error message that writes a new time on the ticket, updates the status(?) and moves on confident that the watcher will deliver it when required | 12:38 |
fwereade | dimitern, well, the worker wants to be based on some watcher? I don't know for sure that strings>notify | 12:39 |
dimitern | fwereade: strings sounds better, as the nursery entities should be few and short-lived anyway | 12:40
dimitern | fwereade: I've taken notes for those | 12:42 |
dimitern | fwereade: that makes total sense | 12:51 |
dimitern | fwereade: it's already taking shape in my mind.. tyvm! | 12:52 |
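A tiny sketch of the scheduling idea described above - a ticket written at machine-creation time, and failures handled by pushing the due time back and letting the watcher redeliver. Names and shapes are illustrative only, not juju's actual types:

```go
package nurserysketch

import "time"

// ticket is a hypothetical schedule document: one is written with Due set to
// "now" when a machine is created, and the watcher delivers every ticket
// whose due time has arrived.
type ticket struct {
	MachineID string
	Due       time.Time
	Status    string
}

// reschedule is what the worker does on failure: no in-process retry loop,
// just write a later due time and a status, then trust the watcher to hand
// the ticket back when it falls due.
func reschedule(t *ticket, cause error, backoff time.Duration) {
	t.Due = time.Now().Add(backoff)
	t.Status = "rescheduled: " + cause.Error()
}
```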
dimitern | babbageclunk: ping | 12:52 |
babbageclunk | dimitern: pong | 12:52 |
dimitern | babbageclunk: how can I test your PR locally? run tests before and after apt install mongodb-3.2 ? | 12:53 |
babbageclunk | dimitern: Yup - although it's juju-mongodb3.2 | 12:55 |
dimitern | babbageclunk: ok, I'll try it now | 12:55 |
dimitern | babbageclunk: cheers | 12:56 |
babbageclunk | dimitern: Then build, then run tests with JUJU_MONGOD=/usr/lib/juju/mongo3.2/bin ... | 12:56 |
babbageclunk | dimitern: Thanks for looking! | 12:57 |
dimitern | babbageclunk: oh, I see, ta! (was just wondering..) | 12:57 |
babbageclunk | dimitern: You might see the occasional failure in subnets_test - that's the one that seems to be a problem with txns between mgo and mongo3.2 | 12:58 |
dimitern | babbageclunk: with your patch and both mongodb versions? or only with 3.2? | 12:59 |
babbageclunk | dimitern: yes, with my patch, only with mongo3.2. | 13:01 |
dimitern | babbageclunk: ok | 13:01 |
babbageclunk | dimitern: Actually you see the same without my patch. | 13:01 |
dimitern | babbageclunk: how much longer is "too long" with 3.2? | 13:01 |
babbageclunk | dimitern: ...but with mongo3.2 | 13:01 |
babbageclunk | dimitern: Well, running the state tests with 3.2 for master takes longer than a couple of hours, so longer than I could bear to wait while doing something else. Running a small slice of them showed they were about 100x slower. | 13:03 |
babbageclunk | dimitern: Is the maas-juju meeting I have in my calendar still current? | 13:04 |
dimitern | babbageclunk: oh! I better not wait more than a couple of minutes over the time it took with 2.4 then :) | 13:04 |
dimitern | babbageclunk: anybody else around from maas? | 13:04 |
babbageclunk | dimitern: Not unless you're feeling very bored. | 13:04 |
babbageclunk | dimitern: No, I'm the only one here! | 13:04 |
dimitern | babbageclunk: I don't have anything new so I guess we should skip it | 13:05 |
babbageclunk | dimitern: Cool cool | 13:05 |
mup | Bug #1585836 changed: Race in github.com/juju/juju/provider/azure <azure-provider> <blocker> <ci> <race-condition> <regression> <unit-tests> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1585836> | 13:08 |
dimitern | babbageclunk: all tests pass with 3.2, the time difference is more than double here though | 13:25 |
dimitern | 233.236 s vs 477.708 s | 13:26 |
babbageclunk | dimitern: :( | 13:26 |
babbageclunk | dimitern: Not sure there's much I can do about it unfortunately. | 13:26 |
dimitern | babbageclunk: I'll run it a couple of times to get better stats | 13:28 |
babbageclunk | dimitern: Thanks | 13:28 |
redelmann | hi there | 13:28 |
redelmann | im seeing a lot of this output: http://paste.ubuntu.com/16863930/ in debug-log | 13:28 |
redelmann | juju version 1.25.5 | 13:29 |
redelmann | it also happen in 1.24.x before upgrade to 1.25 | 13:29 |
mup | Bug #1585300 opened: environSuite invalid character \"\\\\\" in host name <blocker> <ci> <regression> <test-failure> <unit-tests> <windows> <juju-core:Incomplete> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1585300> | 13:29 |
dimitern | redelmann: on what cloud are you seeing this? | 13:29 |
redelmann | dimitern: aws | 13:30 |
dimitern | redelmann: have you changed firewall-mode for the environment? | 13:30 |
redelmann | dimitern: no, everything is in default | 13:31 |
dimitern | redelmann: and those machines showing the error - can you access exposed workloads on them regardless of the error? | 13:31 |
redelmann | dimitern: here is a larger output: http://paste.ubuntu.com/16863964/ | 13:32 |
redelmann | dimitern: yes | 13:32 |
mup | Bug #1585300 changed: environSuite invalid character \"\\\\\" in host name <blocker> <ci> <regression> <test-failure> <unit-tests> <windows> <juju-core:Incomplete> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1585300> | 13:32 |
redelmann | dimitern: ok, this is new: exited "firewaller": AWS was not able to validate the provided access credentials (AuthFailure) | 13:33 |
dimitern | redelmann: it looks like something is odd about your AWS credentials? | 13:33 |
frobware | dimitern, babbageclunk, voidspace: testing meeting? | 13:33 |
redelmann | dimitern: good point, i will research a little | 13:33 |
dimitern | redelmann: uh, sorry I need to take this - back in ~1/2h | 13:33 |
redelmann | dimitern: thank you | 13:34 |
babbageclunk | frobware: I don't have an invite? | 13:34 |
dimitern | babbageclunk: you should have now | 13:35 |
babbageclunk | dimitern: ta | 13:35 |
mup | Bug #1585300 opened: environSuite invalid character \"\\\\\" in host name <blocker> <ci> <regression> <test-failure> <unit-tests> <windows> <juju-core:Incomplete> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1585300> | 13:38 |
mup | Bug # changed: 1518807, 1518809, 1518810, 1518820, 1519141, 1576266, 1583772 | 13:59 |
dimitern | babbageclunk: I suspect the longer run-time was bogus as the second run finished in 482.571 s; running a third now | 14:01 |
babbageclunk | dimitern: oh, good - yeah, it's definitely pretty variable. | 14:02 |
dimitern | redelmann: I'd suggest installing the official AWS CLI tools and using the same AWS credentials you give to juju to try e.g. starting and stopping an instance with --dry-run | 14:03
dimitern | babbageclunk: you have a review | 14:15 |
babbageclunk | dimitern: cheers! | 14:15 |
dimitern | it seems the tests actually take less time the more you run them :D | 14:15
rogpeppe | if anyone wanted to know what the Juju API looked like: http://rogpeppe-scratch.s3.amazonaws.com/juju-api-doc.html | 14:22 |
babbageclunk | dimitern: Sorry! I think I misled you about the value for JUJU_MONGOD - it needs to be the full path to mongod, including the filename, so for 3.2 you need to set it to /usr/lib/juju/mongo3.2/bin/mongod | 14:22
babbageclunk | dimitern: Otherwise it'll silently fall back to using the 2.4 binary. | 14:23 |
alexisb | rogpeppe, :) | 14:23 |
rogpeppe | alexisb: i don't think there are any API docs anywhere, right? | 14:23 |
alexisb | rogpeppe, nope | 14:23 |
alexisb | just what is on github | 14:24 |
dimitern | babbageclunk: that's what I did, yeah | 14:24 |
rogpeppe | alexisb: i guess some people might find this useful then | 14:24 |
dimitern | babbageclunk: it was using 3.2 | 14:24 |
alexisb | rogpeppe, you bet! | 14:24 |
alexisb | thank you | 14:24 |
babbageclunk | dimitern: Huh. Ok, awesome! | 14:24 |
rogpeppe | alexisb: my pleasure :) | 14:25 |
dimitern | babbageclunk: I double checked by 'ps -ef|grep mongod' while the test was running | 14:25 |
perrito666 | good morning | 14:25 |
babbageclunk | dimitern: Sweet, that would definitely be better. | 14:25 |
dimitern | perrito666: o/ | 14:28 |
alexisb | frobware, I am going to be a few minutes late | 14:28 |
frobware | alexisb: ack | 14:29 |
dimitern | babbageclunk: hey! you know what? | 14:31 |
babbageclunk | dimitern: what? | 14:31 |
dimitern | babbageclunk: running the tests on 2.4 with your patch (which I've just thought to do), actually cuts the run-time in *half* ! | 14:32 |
frobware | dimitern: heh, nice | 14:33 |
babbageclunk | dimitern: Wow, that's a bigger change than I saw. But I didn't want to get excited about the tests going faster under 2.4, because that will just make the pain of 3.2 more intense. | 14:33 |
dimitern | babbageclunk: running again to get a better sample | 14:33 |
=== rodlogic is now known as Guest89868 | ||
dimitern | babbageclunk: well, it's quite reasonable to expect this, especially with not doing things like recreating dbs for *every* test case | 14:35 |
babbageclunk | dimitern: Yeah, definitely - it's doing a lot less. | 14:35 |
dimitern | babbageclunk: I can confirm - ~276 s on 2.4 vs ~476 on 3.2 | 14:36 |
dimitern | babbageclunk: great job! please land this soon! :) | 14:37 |
babbageclunk | dimitern: ok! | 14:44 |
dimitern | babbageclunk: I did a final run with 3.2 with comparable system load levels - it's still ~476 | 14:45 |
babbageclunk | dimitern: Cool. | 14:48 |
babbageclunk | dimitern: If someone's commented on a change, and I think I've addressed those comments, I should probably wait for them to put a Ship It on the review before merging it, right? | 14:50 |
natefinch | babbageclunk: if it was trivial, then just go ahead and merge. If it was something complicated, I usually wait to make sure they agree with my change | 14:51 |
natefinch | babbageclunk: that's assuming you got a "fix it, then ship it"... if you didn't get the ship it, then definitely ask if they intended to give you a ship it or if they think it will need a re-review | 14:52 |
babbageclunk | natefinch: Makes sense - thanks! | 14:52 |
mup | Bug #1587503 opened: LXD provider fails to set hostname <juju-core:New> <https://launchpad.net/bugs/1587503> | 14:54 |
babbageclunk | natefinch: (I got a bit excited and didn't check with someone for a previous change, when I realised I figured it was probably a bit rude.) | 14:54 |
voidspace | babbageclunk: dimitern: frobware: sorry I missed the testing meeting - I was at the dentist. I have the invite now. | 15:03 |
natefinch | hmm... pretty sure I'm a team of 1 for my standup | 15:04 |
dimitern | voidspace: that's ok - we'll have one every other week | 15:14 |
voidspace | dimitern: yeah, I saw. Great. | 15:14 |
dimitern | voidspace: the gist of it is we need to come up with a list of relevant networking tests that could be added to the current CI | 15:15 |
voidspace | dimitern: right, sounds like a good start | 15:15 |
babbageclunk | frobware: Are you ok with me merging http://reviews.vapour.ws/r/4944 | 15:36 |
babbageclunk | frobware: ? | 15:36 |
arosales | natefinch: or katco or anyone familiar with resources, could we get your ack on https://github.com/juju/docs/pull/1122 | 15:55
arosales | need this to land in docs asap so we can have charmers start using terms | 15:55 |
natefinch | arosales: will look | 15:56 |
arosales | natefinch: thanks! | 15:56 |
rogpeppe | fwereade, dimitern: you might like this: http://rogpeppe-scratch.s3.amazonaws.com/juju-api-doc.html | 16:24 |
dimitern | rogpeppe: awesome! tyvm | 16:25 |
dimitern | rogpeppe: what did you use for generating this? | 16:26 |
rogpeppe | dimitern: some code :) | 16:26 |
rogpeppe | dimitern: one mo and i'll push it to github | 16:26 |
rogpeppe | dimitern: it does rely on a function i implemented in apiserver/common to get all the facades | 16:27 |
dimitern | rogpeppe: nice! cheers | 16:29 |
mup | Bug #1587552 opened: GCE Invalid value for field 'resource.tags.items <blocker> <ci> <gce-provider> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1587552> | 16:30 |
frobware | babbageclunk: sorry, was otp | 16:30 |
babbageclunk | frobware: no worries - not blocking me, just thought I'd check | 16:31 |
=== frankban is now known as frankban|afk | ||
fwereade | rogpeppe, nice | 16:49 |
rogpeppe | fwereade: obviously a lot of the comments are Go-implementation-oriented, but it's better than nothing :) | 16:51 |
fwereade | rogpeppe, absolutely | 16:51 |
perrito666 | bbl | 17:07 |
rogpeppe | dimitern: i've pushed the code to github.com/rogpeppe/misc/cmd/jujuapidochtml and github.com/rogpeppe/misc/cmd/jujuapidoc | 17:09 |
rogpeppe | dimitern: the former just generates the HTML; the latter generates the computer-readable form that jujuapidochtml works from | 17:09 |
dimitern | rogpeppe: thanks! starred and bookmarked :) | 17:10 |
alexisb | natefinch, is the PR in this bug merged?: https://bugs.launchpad.net/juju-core/+bug/1581885 | 17:29 |
mup | Bug #1581885: Rename 'admin' model to 'controller' <juju-release-support> <usability> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1581885> | 17:29 |
alexisb | seems to be?? | 17:29 |
natefinch | alexisb: yes, sorry, it's in | 17:40 |
natefinch | alexisb: I marked it as such | 17:40 |
alexisb | natefinch, awesome, thank you | 17:41 |
=== redir_afk is now known as redir | ||
* thumper looks sadly at the calendar for this morning and sighs | 19:56 | |
rick_h_ | thumper: look at all those opportunities! | 19:59
* perrito666 wonders why the code is working | 20:08 | |
natefinch | it's never good when you have to wonder why something works | 20:09 |
* thumper slaps rick_h_ | 20:19 | |
perrito666 | uh, did you just... rickslap him? <puts sun glasses> <cue csi miami music> | 20:20
* rick_h_ says "Thank you sir may I have another?" | 20:20 | |
=== natefinch is now known as natefinch-afk | ||
redir | brb reboot | 20:49 |
thumper | trivial review for someone http://reviews.vapour.ws/r/4948/ | 20:53 |
perrito666 | thumper: ship it | 20:55 |
thumper | perrito666: ta | 20:55 |
mup | Bug #1587644 opened: jujud and mongo cpu/ram usage spike <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1587644> | 21:07 |
mup | Bug #1587236 changed: no 1.25.5 tools for vivid? <juju-core:Won't Fix> <https://launchpad.net/bugs/1587236> | 21:52 |
mup | Bug #1587653 opened: juju enable-ha accepts the --series= option <cdo-qa> <ha> <juju-core:New> <https://launchpad.net/bugs/1587653> | 21:52 |
mup | Bug #1587653 changed: juju enable-ha accepts the --series= option <cdo-qa> <ha> <ui> <juju-core:New> <https://launchpad.net/bugs/1587653> | 22:01 |
mup | Bug #1587236 opened: no 1.25.5 tools for vivid? <juju-core:Won't Fix> <https://launchpad.net/bugs/1587236> | 22:01 |
mup | Bug #1587236 changed: no 1.25.5 tools for vivid? <juju-core:Won't Fix> <https://launchpad.net/bugs/1587236> | 22:07 |
mup | Bug #1587653 opened: juju enable-ha accepts the --series= option <cdo-qa> <ha> <ui> <juju-core:New> <https://launchpad.net/bugs/1587653> | 22:07 |
perrito666 | alexisb: did you need anything? | 22:14
* perrito666 notices he is answering old messages because of lag | 22:14 | |
alexisb | perrito666, nope, I got an update from Ian, thanks! | 22:16 |
perrito666 | oh, I get my updates from the internet, I just apt-get update :p | 22:16 |
alexisb | :) | 22:17 |
* perrito666 tries to unfreeze | 22:18 | |
thumper | alexisb, wallyworld: http://reviews.vapour.ws/r/4949/diff/# | 22:22 |
thumper | I've not done other packages because I think we actually need some of them | 22:22 |
wallyworld | ok | 22:22 |
thumper | potentially we could do the apiserver too | 22:22 |
thumper | but haven't done that yet | 22:22 |
thumper | state takes ages | 22:22 |
alexisb | thumper, well that was a simple fix to remove a bunch of pain | 22:23 |
alexisb | thank you | 22:23 |
thumper | alexisb: I told you it wouldn't be big | 22:23 |
* perrito666 runs tests for state and wonders if he could make dinner while he waits | 22:45 | |
axw | thumper: in http://reviews.vapour.ws/r/4925/, you say "business object layer". would it make sense to have it in the core package maybe? I still think it belongs in "names" at the moment for consistency, but maybe if we were to move core outside, and fold names into it? | 23:04 |
thumper | axw: have it in names for now | 23:05 |
axw | okey dokey | 23:06 |
mup | Bug #1463420 changed: Zip archived tools needed for bootstrap on windows <simplestreams> <tools> <windows> <juju-core:Fix Released by bteleaga> <https://launchpad.net/bugs/1463420> | 23:13 |
alexisb | menn0, it seems our 1x1 has fallen off the calendar | 23:30 |
menn0 | alexisb: yeah, we haven't had one for a while | 23:30 |
alexisb | I will put some time for us tomorrow | 23:30 |
menn0 | alexisb: sounds good | 23:31 |