thumper | wallyworld: quick call then | 00:00 |
---|---|---|
thumper | wallyworld: 1:1 ? | 00:00 |
wallyworld | thumper: didn't you notice the sarcasm? | 00:00 |
thumper | no | 00:00 |
wallyworld | ok | 00:00 |
thumper | wallyworld: I actually thought that you were looking forward to working | 00:00 |
rick_h_ | wallyworld: :p | 00:17 |
wallyworld | rick_h_: living the dream once again, wheeeeee | 00:18 |
rick_h_ | hey, you know you missed it! can't miss the finish line | 00:21 |
rick_h_ | wallyworld: ^ | 00:21 |
wallyworld | rick_h_: indeed, wasn't my first choice to disappear this last little while | 00:22 |
anastasiamac | isn't it one of the relay strategies to put the strongest sprinters last? :D | 00:25 |
rick_h_ | hah | 00:25 |
rick_h_ | good times while away wallyworld ? | 00:25 |
rick_h_ | i'm heading out next week. leave to the best of the best at the end :p | 00:26 |
wallyworld | rick_h_: i wish, spent the entire time working my fingers to the bone - ended up having to buy a chain saw to get everything done :-D | 00:26 |
rick_h_ | wallyworld: oooh new tools ftw! | 00:27 |
wallyworld | rick_h_: you camping or something next week? | 00:27 |
wallyworld | yeah, new tools :-) | 00:27 |
rick_h_ | wallyworld: i wish. snowing today | 00:27 |
wallyworld | i wish we had snow | 00:27 |
rick_h_ | wife and i celebrating 10yrs in hawaii | 00:28 |
wallyworld | rick_h_: oh congrats | 00:28 |
rick_h_ | so i'll be closer where i can keep an eye on you :) | 00:28 |
wallyworld | i'll wave from my balcony | 00:28 |
rick_h_ | new camper is done may 11 so after these may sprints i'll live from the woods for a while | 00:29 |
wallyworld | with no wifi :-( | 00:31 |
rick_h_ | nope have a booster antenna for the mifi device | 00:32 |
davecheney | fatal error: concurrent map read and map write | 00:32 |
davecheney | goroutine 1660 [running]: | 00:32 |
davecheney | runtime.throw(0x18945e0, 0x21) | 00:32 |
davecheney | bzzt, cannot land my 1.25 fix without fixing gomanta | 00:32 |
davecheney | fairy nuf | 00:32 |
rick_h_ | davecheney: that seems ungood | 00:32 |
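The panic davecheney pasted is Go's runtime detector for unsynchronized map access. A minimal sketch of the failure mode and the usual sync.RWMutex fix (illustrative only, not the gomanta code in question):

```go
package main

import "sync"

// counters is a hypothetical shared map; concurrent reads and writes on a
// plain map trigger "fatal error: concurrent map read and map write".
type counters struct {
	mu sync.RWMutex
	m  map[string]int
}

func (c *counters) Get(k string) int {
	c.mu.RLock() // readers share the lock
	defer c.mu.RUnlock()
	return c.m[k]
}

func (c *counters) Set(k string, v int) {
	c.mu.Lock() // writers take it exclusively
	defer c.mu.Unlock()
	c.m[k] = v
}

func main() {
	c := &counters{m: make(map[string]int)}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(2)
		go func() { defer wg.Done(); c.Set("x", 1) }()
		go func() { defer wg.Done(); _ = c.Get("x") }()
	}
	wg.Wait()
}
```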
davecheney | rick_h_: thumper cherylj menn0 http://reviews.vapour.ws/r/4507/ | 00:40 |
davecheney | fix is here | 00:40 |
davecheney | i need to land that before I can land that other fix | 00:40 |
wallyworld | davecheney: fwiw, in master joyent doesn't use manta anymore, still may need to clean up some old tests | 00:46 |
davecheney | wallyworld: thanks for that | 00:52 |
davecheney | i need to land this on master to backport it | 00:52 |
davecheney | then i'll raise a tech debt card to remove the manta library as a dep | 00:52 |
davecheney | +1, less things to depend on, wheee! | 00:52 |
wallyworld | davecheney: sure, np. the change to not use manta only landed recently, still scrambling to clean everything up | 00:52 |
wallyworld | but yes, less deps is good. and we now don't need provider storage so long as the provider supports tagging | 00:53 |
menn0 | cherylj, wallyworld, davecheney, sinzui: launchpad is down so the check blockers script is dying at the start of merge attempts | 00:55 |
wallyworld | really? | 00:55 |
davecheney | looks like i picked a bad day to quit sniffing glue | 00:55 |
wallyworld | lol | 00:55 |
blahdeblah | menn0 cherylj wallyworld sinzui: we're working on it | 00:56 |
wallyworld | ty | 00:56 |
wallyworld | did the hamster die | 00:56 |
menn0 | blahdeblah: ok great | 00:56 |
* menn0 starts following @launchpadstatus | 00:57 | |
davecheney | you must construct more pylons | 01:01 |
sinzui | menn0: I can comment out the check | 01:02 |
davecheney | sinzui: please | 01:03 |
menn0 | sinzui: that would be good but it might not be enough. juju has deps on packages hosted on launchpad. godeps might complain. | 01:03 |
menn0 | sinzui: it's worth a try though | 01:03 |
sinzui | menn0: bzr is working so disabling the check might be enough | 01:08 |
menn0 | sinzui: great | 01:09 |
sinzui | menn0: http://juju-ci.vapour.ws:8080/view/Juju%20Ecosystem/job/github-merge-juju/7283/console is retrying now | 01:09 |
menn0 | sinzui: looks good so far | 01:10 |
menn0 | spoke too soon | 01:10 |
sinzui | :/ | 01:10 |
sinzui | no I just got godeps | 01:10 |
* menn0 nods | 01:11 | |
menn0 | seems to be stuck at godeps | 01:11 |
menn0 | progress! :) | 01:11 |
menn0 | sinzui: it seems to be moving... i'll keep an eye on it | 01:11 |
menn0 | sinzui: thanks! | 01:11 |
sinzui | np menn0 | 01:12 |
menn0 | launchpad.net appears to be back now anyway | 01:13 |
mup | Bug #1568643 opened: RebootSuite times out if unrelated lxd containers exist <juju-core:New> <https://launchpad.net/bugs/1568643> | 01:21 |
alexisb | sinzui, I am pretty sure you are a machine that never sleeps | 01:25 |
alexisb | thanks for the 1.6 work | 01:25 |
sinzui | alexisb: I don't sleep well :) | 01:25 |
davecheney | wallyworld: https://github.com/juju/juju/pull/5065 | 01:26 |
wallyworld | looking | 01:26 |
davecheney | ^ backport to 1.25 which unblocks my other change landing | 01:26 |
wallyworld | lgtm | 01:27 |
davecheney | danku! | 01:30 |
axw | anastasiamac: reviewed your branch | 01:40 |
anastasiamac | axw: thnx \o/ | 01:41 |
menn0 | grrr... why has lxd suddenly stopped working? | 02:21 |
alexisb | menn0, we have lots of lxd bugs going around atm | 02:24 |
alexisb | there are about 3 criticals against juju-core that are impacting lxd/lxd provider | 02:25 |
cherylj | axw: did you want to take one more look: http://reviews.vapour.ws/r/4502/ ? | 02:26 |
menn0 | alexisb: yep I know... lxd just stopped working for me in the middle of a complex test reproduction - annoying | 02:26 |
cherylj | menn0: what happened? | 02:26 |
axw | cherylj: looks good, thanks | 02:27 |
cherylj | thanks, axw! | 02:27 |
menn0 | cherylj: I was deploying the dummy charm to various models under one controller | 02:27 |
cherylj | menn0: what were you seeing? | 02:27 |
menn0 | the first machine came up as normal | 02:27 |
menn0 | the others were stuck in pending | 02:27 |
menn0 | no activity | 02:27 |
menn0 | nothing in juju's logs about it | 02:28 |
menn0 | nothing in the lxd logs that I could find | 02:28 |
cherylj | oh fun, I've seen that too | 02:28 |
menn0 | the machines didn't even exist when I ran "lxc list" | 02:28 |
cherylj | menn0: do the machines exist in the database? | 02:28 |
cherylj | (mongo) | 02:28 |
menn0 | yes | 02:28 |
menn0 | they were showing in status so had to have been in the DB | 02:28 |
menn0 | it's like juju didn't ask lxd to create the instances, or lxd didn't create them for some reason | 02:29 |
cherylj | ah, I've hit an issue where the services show up, but no machines to host them were ever created | 02:29 |
menn0 | that sounds like what I just saw | 02:29 |
cherylj | I mean, they didn't even exist in mongo | 02:29 |
menn0 | oh right... in my case the machines did show up in status | 02:31 |
menn0 | so they must have been in the DB | 02:31 |
menn0 | cherylj: ^ | 02:32 |
davecheney | https://github.com/juju/juju/pull/5064#issuecomment-208119816 | 03:05 |
davecheney | it's been a while since that one failed | 03:05 |
davecheney | good to see it's as unreliable as ever | 03:05 |
davecheney | cherylj: are you happy with the response to https://bugs.launchpad.net/bugs/1568602 | 03:11 |
mup | Bug #1568602: Cannot build OS X client with stock Go 1.6 <ci> <go1.6> <osx> <packaging> <regression> <juju-ci-tools:Fix Released by sinzui> <juju-core:Invalid> <https://launchpad.net/bugs/1568602> | 03:11 |
davecheney | we see that report a lot | 03:11 |
davecheney | and it's always a crapped up go install | 03:11 |
davecheney | usually by not removing the old version before unpacking the new version | 03:11 |
cherylj | davecheney: thanks for looking at it. sinzui mentioned he got it working now | 03:12 |
mup | Bug #1568602 changed: Cannot build OS X client with stock Go 1.6 <ci> <go1.6> <osx> <packaging> <regression> <juju-ci-tools:Fix Released by sinzui> <juju-core:Invalid> <https://launchpad.net/bugs/1568602> | 03:13 |
mup | Bug #1568654 opened: ec2: AllInstances and Instances improvement <juju-core:New> <https://launchpad.net/bugs/1568654> | 03:13 |
davecheney | cool, let me know | 03:14 |
davecheney | I can provide some suggestions for how to use the upstream tarball concurrently | 03:15 |
davecheney | if needed | 03:15 |
davecheney | it's pretty straightforward | 03:15 |
cherylj | not sure if we'll need that. sinzui ? ^^ | 03:16 |
cherylj | wallyworld: can you take another look? http://reviews.vapour.ws/r/4504/ | 03:16 |
wallyworld_ | sure | 03:16 |
davecheney | build times go up, build times go down. you cannot explain that | 03:16 |
cherylj | wallyworld: I had to move things around to avoid circular dependencies | 03:16 |
wallyworld_ | cherylj: lgtm, ty | 03:18 |
cherylj | wallyworld_: thanks! | 03:18 |
sinzui | davecheney: getting it working was easy for me but not for CI. You were right about more than one toolchain on the host. CI, in an effort to purge the env I set up, discovered the wrong Go toolchain. It found the go I used to compile the Go 1.6 we place in a special dir away from the system. All that is resolved now | 03:19 |
davecheney | okie dokes | 03:20 |
davecheney | this might be the one time that i actually say "you should set GOROOT" | 03:20 |
davecheney | but please, don't tell anyone I said that | 03:21 |
cherylj | Can I get another review? http://reviews.vapour.ws/r/4510/ | 03:30 |
cherylj | (pretty easy) | 03:30 |
davecheney | cherylj: LGTM, ship it | 03:33 |
cherylj | thanks, davecheney! | 03:33 |
cherylj | I'll have to get in line... lots of merges lined up :) | 03:34 |
cherylj | menn0: model-migration merge completed! | 03:34 |
davecheney | boom! | 03:35 |
menn0 | cherylj: *\o/* | 03:36 |
davecheney | wallyworld_: provider/joyent/local_test.go: | 03:39 |
davecheney | 10: lm "github.com/joyent/gomanta/localservices/manta" | 03:39 |
davecheney | ^ j'accuse | 03:39 |
davecheney | still some code using gomanta | 03:39 |
davecheney | should I kill it with spite ? | 03:40 |
wallyworld_ | davecheney: yes, please, all the manta stuff needs to go away | 03:40 |
wallyworld_ | davecheney: 2.0 should not import gomanta at all | 03:40 |
davecheney | hulk smash! | 03:41 |
davecheney | so what does joyent use now ? | 03:42 |
menn0 | wallyworld_ or axw: can I get some help with a charm storage / GridFS related issue? | 03:44 |
axw | menn0: sure, what's up? | 03:44 |
wallyworld_ | depends what it is :-) | 03:44 |
menn0 | so I fixed this: https://bugs.launchpad.net/juju-core/+bug/1541482 | 03:45 |
mup | Bug #1541482: unable to download local: charm due to hash mismatch in multi-model deployment <2.0-count> <juju-release-support> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1541482> | 03:45 |
menn0 | it turns out the api server had a local cache of charms, which wasn't model specific | 03:45 |
menn0 | it was easy to fix | 03:45 |
menn0 | but fixing it has exposed another problem | 03:46 |
menn0 | if you deploy a local charm to one model | 03:46 |
menn0 | and then deploy the same charm to another model in the same controller, the unit in the 2nd model can't download the charm | 03:46 |
menn0 | if the local charm is the same but with a slight modification it works | 03:47 |
menn0 | but not if it's exactly the same charm | 03:47 |
wallyworld_ | menn0: was this cache added by something in charmrepo? | 03:47 |
menn0 | i've added lots of debug logging and everything looks fine | 03:47 |
menn0 | wallyworld_: no the cache is in the charm download API handler | 03:47 |
wallyworld_ | ok, there's also one in charmrepo from memory | 03:48 |
menn0 | wallyworld_: it uses a directory to download charms out of storage into | 03:48 |
menn0 | wallyworld_: the cache assumes that the contents of a charm with a given charm URL is always the same | 03:49 |
menn0 | wallyworld_: but that isn't true for local charms across models | 03:49 |
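A minimal sketch of the kind of fix described above, assuming a simple in-memory cache: key cached charm archives by model UUID as well as charm URL so identical local: URLs in different models don't collide. The types and names here (cacheKey, charmCache) are illustrative, not juju's actual apiserver cache:

```go
package main

import "sync"

// cacheKey scopes cached charm archives by model as well as charm URL,
// since a local: charm URL can map to different content in each model.
type cacheKey struct {
	modelUUID string
	charmURL  string
}

// charmCache is an illustrative in-memory cache of archive paths.
type charmCache struct {
	mu      sync.Mutex
	entries map[cacheKey]string // key -> path of the cached archive
}

func newCharmCache() *charmCache {
	return &charmCache{entries: make(map[cacheKey]string)}
}

func (c *charmCache) Lookup(modelUUID, curl string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	path, ok := c.entries[cacheKey{modelUUID, curl}]
	return path, ok
}

func (c *charmCache) Store(modelUUID, curl, path string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[cacheKey{modelUUID, curl}] = path
}

func main() {
	c := newCharmCache()
	c.Store("model-a-uuid", "local:trusty/dummy-1", "/tmp/a.charm")
	// The same URL in a different model is a different cache entry.
	if _, ok := c.Lookup("model-b-uuid", "local:trusty/dummy-1"); !ok {
		println("no cross-model collision")
	}
}
```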
axw | menn0 wallyworld_: I think the "binarystorage" index might not be model-specific, checking... | 03:49 |
wallyworld_ | menn0: i'm not familiar with the code - why does the api server need a cache of stuff from mongo? | 03:49 |
menn0 | wallyworld_: good question. the endpoint supports returning file listings and downloading particular files out of the charm archive | 03:50 |
menn0 | wallyworld_: I guess the expectation is that there will be many API calls for the one charm archive | 03:50 |
wallyworld_ | right, so a local cache adds very little that i can see | 03:50 |
wallyworld_ | i guess so | 03:51 |
menn0 | wallyworld_: at any rate, the cache isn't the problem here | 03:51 |
wallyworld_ | is the index model specific? | 03:51 |
menn0 | wallyworld_: the cache was hiding a deeper bug in charm storage | 03:51 |
axw | menn0: sorry had my wires crossed, that's just for tools... | 03:51 |
menn0 | axw, wallyworld_: the way charms are stored looks fine but there's obviously a problem | 03:52 |
* menn0 prepares a paste | 03:52 | |
wallyworld_ | menn0: the charms collection should be model aware | 03:54 |
menn0 | wallyworld_: yep it is | 03:54 |
menn0 | wallyworld_: the problem isn't there I don't think | 03:54 |
wallyworld_ | that's where the charm doc goes - the url is not model aware but that shouldn't matter | 03:55 |
menn0 | wallyworld_: it's in the GridFS put/get code I think | 03:55 |
wallyworld_ | there's a PutForBucket api | 03:55 |
wallyworld_ | so long as we set the bucket id to be the model uuid that should be enough | 03:55 |
menn0 | wallyworld_, axw : http://paste.ubuntu.com/15752687/ | 03:56 |
menn0 | wallyworld_, axw : that's the output from a bunch of debug logging I added | 03:56 |
menn0 | wallyworld_, axw : does that help? | 03:57 |
menn0 | so for the second charm post, the GridFS entry appears to get reused | 03:58 |
thumper | oh poo | 03:58 |
menn0 | but during the download attempt for the second model it can't be found | 03:58 |
thumper | my maas update seemed to lose all the power settings for the machines | 03:58 |
wallyworld_ | menn0: that looks like it's using the old blobstore | 03:58 |
wallyworld_ | should be blobstore.v2 | 03:58 |
wallyworld_ | with PutForBucket | 03:58 |
menn0 | this is blobstore.v2 | 03:59 |
wallyworld_ | not that it should matter i guess in terms of this bug | 03:59 |
wallyworld_ | ah | 03:59 |
wallyworld_ | some internal methods were not renamed | 03:59 |
wallyworld_ | sigh | 03:59 |
mup | Bug #1568666 opened: provider/lxd: non-juju resource tags are ignored <juju-core:Triaged> <https://launchpad.net/bugs/1568666> | 04:01 |
mup | Bug #1568668 opened: landing bot uses go 1.2 for pre build checkout and go 1.6 for tets <juju-core:New> <https://launchpad.net/bugs/1568668> | 04:01 |
mup | Bug #1568669 opened: juju 2.0 must not depend on gomanta <juju-core:New for dave-cheney> <https://launchpad.net/bugs/1568669> | 04:01 |
wallyworld_ | menn0: it doesn't make sense at first glance - the resource path, i think from memory, is the raw access to the blob - that should not be affected by what happens above with bucket paths etc | 04:04 |
wallyworld_ | so the resource path should be there - unless something is deleting it | 04:05 |
wallyworld_ | is it worth putting debug in the remove methods? | 04:05 |
menn0 | wallyworld_: ok, i'll try that. I have put some in some of the cleanup-on-error defers already but they're not firing | 04:06 |
menn0 | wallyworld_: i'm also checking how things look in the DB | 04:06 |
wallyworld_ | menn0: so to be sure i understand - the issue is that the second upload correctly creates resource metadata for the new model uuid etc, and it rightly shares a de-duped blob with the first upload, but the attempt to access that blob fails | 04:07 |
wallyworld_ | and it is failing at the point at which the blob itself is retrieved | 04:07 |
menn0 | wallyworld_: spot on. that's my best understanding at the moment | 04:07 |
menn0 | yes | 04:07 |
wallyworld_ | hmmm, so yeah, that blob should be there unless deleted | 04:07 |
wallyworld_ | this should all be orthogonal to any bucket uuid stuff | 04:08 |
axw | yeah, it seems to be passing the right path to gridfs .. | 04:08 |
axw | based on the error | 04:08 |
menn0 | looking at the storedResources collection and the blobstore db directly, everything looks ok | 04:09 |
wallyworld_ | jeez, well that sucks | 04:10 |
menn0 | i'll add more debug logging in the lookup side of things | 04:10 |
wallyworld_ | at least the storage side looks correct - it is using the model uuid correctly | 04:11 |
wallyworld_ | and de-duping across models | 04:11 |
menn0 | wallyworld_: it certainly *looks* like it's doing the right thing | 04:15 |
wallyworld_ | still could be a subtle issue i guess | 04:15 |
menn0 | yep | 04:16 |
menn0 | i've just added a lot more logging on the get side of things | 04:16 |
menn0 | wallyworld_, axw: I think I've found it | 04:25 |
menn0 | wallyworld_, axw: the GridFS instance is created with the model UUID as the prefix | 04:26 |
menn0 | wallyworld_, axw: so even though the charm is being requested with the correct de-duped UUID | 04:26 |
menn0 | wallyworld_, axw: the GridFS prefix prevents it being found | 04:27 |
wallyworld_ | the namespace? | 04:27 |
menn0 | wallyworld_: yes | 04:27 |
menn0 | does that seem right? | 04:27 |
wallyworld_ | so maybe the namespace should be the controller uuid | 04:27 |
wallyworld_ | i think so | 04:27 |
wallyworld_ | this was all done pre-multi model | 04:28 |
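The behaviour menn0 tracked down comes from how GridFS prefixes work in mgo: the prefix selects the underlying .files/.chunks collections, so a blob written through a handle prefixed with one model's UUID is invisible through a handle with another prefix, even when the catalogue entry is shared. A small sketch with plain mgo (not juju's blobstore wrappers), assuming a local mongod:

```go
package main

import (
	"fmt"
	"log"

	mgo "gopkg.in/mgo.v2"
)

func main() {
	session, err := mgo.Dial("localhost") // assumes a local mongod, for illustration only
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	db := session.DB("blobstore")

	// Write a blob through a GridFS handle prefixed with model A's UUID.
	gfsA := db.GridFS("model-a-uuid")
	f, err := gfsA.Create("charms/local-trusty-dummy-1")
	if err != nil {
		log.Fatal(err)
	}
	f.Write([]byte("charm archive bytes"))
	f.Close()

	// The same path is not visible through a handle with a different prefix.
	gfsB := db.GridFS("model-b-uuid")
	if _, err := gfsB.Open("charms/local-trusty-dummy-1"); err != nil {
		fmt.Println("not found under model B's prefix:", err)
	}

	// A prefix shared by all models (controller UUID or a constant such as
	// "juju") is what makes de-duped blobs reachable from every model.
	_ = db.GridFS("juju")
}
```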
arcticlight | Hi! Am I in the right place to ask about filing a bug against Juju? I'm kind of new to the whole FOSS community thing and I have *no idea* how to use Launchpad. | 04:28 |
wallyworld_ | arcticlight: sure, ask away | 04:28 |
wallyworld_ | and welcome | 04:29 |
wallyworld_ | menn0: does each controller get a different uuid in HA? | 04:30 |
menn0 | wallyworld_: naively fixing this by changing the charms code to use a statestorage with the controller's UUID will no doubt break existing deployments | 04:30 |
wallyworld_ | i think so right? | 04:30 |
wallyworld_ | menn0: it will break existing yes | 04:30 |
arcticlight | wallyworld_: It's actually really simple... Juju doesn't seem to depend on `curl` but goes ahead and uses it anyway. But on Ubuntu Server 14.04 LTS it's not installed by default, or at least it wasn't on my system (I have a fresh install) and `juju bootstrap` blew up. Thought I'd mention it | 04:30 |
menn0 | wallyworld_: in HA the controller is really the whole cluster | 04:31 |
menn0 | wallyworld_: there is only one controller model and one UUID | 04:32 |
menn0 | wallyworld_: so that's not a problem | 04:32 |
wallyworld_ | arcticlight: yes, it does use curl - to retrieve the tools binaries at bootstrap. i'm surprised 14.04 lts doesn't have it out of the box, but i'm not a packaging guy | 04:32 |
wallyworld_ | arcticlight: thanks for mentioning, i'll followup with someone who knows more about ubuntu default packaging and see if we need to fix anything | 04:32 |
arcticlight | wallyworld_: Yeah. I fixed it by installing curl, but it did blow up on a fresh install. I figured someone should know so they can add a dependency on curl. | 04:32 |
arcticlight | wallyworld_: Welcome! | 04:33 |
wallyworld_ | arcticlight: just ask if there's anything else you get stuck on. lots of folks here who can help | 04:33 |
wallyworld_ | menn0: so the "easy" option is to use the controller uuid with gridfs namespace. i'm ok with release notes telling people they need to re-bootstrap between betas, but that's just IMHO | 04:34 |
arcticlight | wallyworld_: OK! Good to know. I've been fooling around with MAAS/Juju since around 2013. I'm just really shy lol~ Figured this was simple enough to just pop on and ask about tho | 04:34 |
wallyworld_ | we don't bite | 04:35 |
wallyworld_ | unless really provoked :-) | 04:35 |
menn0 | wallyworld_: what about 1.25 systems? | 04:36 |
wallyworld_ | menn0: we have sooooooo many things to fix with upgrades there | 04:36 |
wallyworld_ | this is just one more | 04:36 |
wallyworld_ | menn0: upgrading from 1.25 will be really, really difficult | 04:37 |
wallyworld_ | already | 04:37 |
wallyworld_ | menn0: i suspect we'll be considering migrations :-) | 04:37 |
wallyworld_ | might be easier in the long run | 04:37 |
menn0 | wallyworld_: you realise that would mean backporting migrations to 1.25? | 04:38 |
wallyworld_ | yes, or a form thereof | 04:38 |
wallyworld_ | that will likely be so much easier than the alternative | 04:38 |
anastasiamac | jeez wallyworld_, u made thumper quit with all this upgrades talk | 04:38 |
menn0 | wallyworld_: I've realised for some time that this was on the cards :-/ | 04:38 |
wallyworld_ | na, i just bit him hard | 04:38 |
wallyworld_ | menn0: i think we all have :-) sort of a slowly increasing dread | 04:39 |
anastasiamac | dread != anticipation | 04:40 |
menn0 | dread is more accurate :) | 04:43 |
* menn0 tries the naive fix | 04:43 | |
anastasiamac | sure... but when u dread problems become harder and motivation disappears... when u anticipate, besides being prepared, there is always something to look forward to.. | 04:47 |
menn0 | wallyworld_: were you proposing we pass the controller UUID to charm related calls to state/storage.NewStorage() or were you proposing that we use the controller UUID in the call that stateStorage makes to blobstore.NewGridFS()? | 04:58 |
axw | menn0: I think the latter | 04:59 |
wallyworld_ | menn0: i think(?) just the latter is all we need | 04:59 |
wallyworld_ | that would be my preference | 04:59 |
axw | so they're all sharing a common gridfs namespace, and we have model-specific catalogues that point into it | 04:59 |
wallyworld_ | yup | 04:59 |
menn0 | wallyworld_: cool that makes sense (the latter, not the former). I misunderstood what you meant the first time around and then realised it didn't make sense as I started to change all the calls to NewStorage() | 05:01 |
wallyworld_ | oh sorry :-) | 05:01 |
wallyworld_ | i should have been more clear | 05:02 |
menn0 | wallyworld_: no it was my bad | 05:02 |
menn0 | wallyworld_: use the controller UUID over a fixed value? the controller UUID isn't readily available in stateStorage | 05:03 |
wallyworld_ | menn0: isn't there a helper method on state? you don't have access to that in stateStorage? | 05:03 |
menn0 | wallyworld_: no it gets a MongoSession, not a State | 05:03 |
menn0 | wallyworld_: I can rejig things | 05:04 |
wallyworld_ | where does it get the model uuid from then? | 05:04 |
menn0 | wallyworld_: or use a fixed value "juju" or "state" or "jujustate" | 05:04 |
menn0 | the model UUID is passed in with the MongoSession | 05:04 |
menn0 | 2 args | 05:04 |
wallyworld_ | so can't we change that to controller uuid? | 05:04 |
axw | menn0 wallyworld_: may as well just use a constant, it doesn't really need to be the controller UUID | 05:05 |
axw | it just needs to be the same for all views on the db | 05:06 |
wallyworld_ | except if we want to share a mongo instance between controllers | 05:06 |
blahdeblah | Anyone know if there are moves afoot to introduce DNS as a Service support into either juju itself or the charm store? | 05:25 |
menn0 | wallyworld: i've just tried using a fixed GridFS namespace ("juju") in state/storage.NewStorage() and that fixes the problem | 05:26 |
wallyworld_ | menn0: great. if it's easy to get controller uuid in there that would be good | 05:27 |
menn0 | wallyworld: your reasoning for that is to avoid any chance of collision in case of a mongodb instance being shared by multiple controllers? | 05:28 |
wallyworld_ | menn0: yeah | 05:28 |
wallyworld_ | we just don't know what might need to be supported in the future | 05:28 |
menn0 | wallyworld_: but that would break anyway... our main db is called "juju" | 05:28 |
menn0 | and the collection names in it are fixed | 05:29 |
wallyworld_ | well, we use model uuids though | 05:29 |
wallyworld_ | i guess "juju" is ok for gridfs | 05:29 |
menn0 | wallyworld_: I was about to allow my mind to be changed :) | 05:30 |
wallyworld_ | quick win for now | 05:30 |
wallyworld_ | menn0: i guess there's pros and cons. maybe if it's easy to do.... | 05:31 |
menn0 | wallyworld_: if we were to do it then it would probably make sense to have NewStorage just take a *State | 05:31 |
menn0 | it could then pull the model uuid, controller uuid and session off that | 05:31 |
menn0 | sound ok? | 05:31 |
menn0 | it looks like all the call sites have a *State | 05:32 |
wallyworld_ | menn0: let's use a bespoke interface that just declares the state methods it needs | 05:33 |
wallyworld_ | pass in the concrete state.State from the caller | 05:33 |
wallyworld_ | but declare the NewGridFS() method to use a smaller interface | 05:33 |
wallyworld_ | or just add a controller uuid param | 05:34 |
menn0 | wallyworld_: ok sounds good. i'll pass in state but via an interface | 05:34 |
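A sketch of the shape being agreed here, with illustrative names rather than juju's exact types: NewStorage keeps taking the concrete *state.State at call sites but declares only the methods it needs via a small local interface, and namespaces GridFS by controller UUID while the model UUID still scopes the catalogue:

```go
package storage

import (
	mgo "gopkg.in/mgo.v2"
)

// backingState is a hypothetical narrow view of *state.State, declaring
// only the methods this package needs; callers pass the concrete *State
// and tests can pass a small fake.
type backingState interface {
	ModelUUID() string
	ControllerUUID() string
	MongoSession() *mgo.Session
}

// stateStorage is an illustrative stand-in for juju's blob storage wrapper.
type stateStorage struct {
	modelUUID      string
	controllerUUID string
	session        *mgo.Session
}

// NewStorage namespaces GridFS by controller UUID (shared across models)
// while the per-model UUID is still used for the blob catalogue.
func NewStorage(st backingState) *stateStorage {
	return &stateStorage{
		modelUUID:      st.ModelUUID(),
		controllerUUID: st.ControllerUUID(),
		session:        st.MongoSession(),
	}
}
```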
wallyworld_ | blahdeblah: i don't think there's anything coming in core, but i thought the ecosystems/charm guys had something they use for juju itself | 05:35 |
menn0 | wallyworld_: state doesn't currently have a ControllerUUID method but i'll add one | 05:35 |
wallyworld_ | menn0: i thought it did? | 05:35 |
menn0 | wallyworld_: nope. there's ControllerModel and you can get it from there but that's messy | 05:35 |
menn0 | wallyworld_: there's an unexported controllerTag field so I'll just expose the UUID via that | 05:35 |
wallyworld_ | like we do for ModelUUID() | 05:36 |
* menn0 nods | 05:36 | |
blahdeblah | wallyworld_: thanks - will ping them | 05:44 |
menn0 | wallyworld_: I just noticed that state/imagesstore uses a fixed "osimages" prefix | 05:50 |
wallyworld_ | menn0: ah yes. i think the thinking there was that it is ok to share cached generic lxc images between controllers | 05:51 |
menn0 | wallyworld_: and state.ToolStorage uses the model UUID | 05:51 |
wallyworld_ | hmmm, so tools storage could break the same way maybe? | 05:52 |
* menn0 wonders if ToolsStorage could end up with the same problem where an entry is inaccessible | 05:52 | |
menn0 | haha | 05:52 |
menn0 | yes I think so | 05:52 |
menn0 | wallyworld_: it's probably hard to trigger at the moment | 05:52 |
wallyworld_ | menn0: do we even need a namespace? | 05:52 |
menn0 | wallyworld_: can you upload tools for a hosted machine | 05:52 |
menn0 | ? | 05:53 |
wallyworld_ | um | 05:53 |
wallyworld_ | i think the controller stores all the tools tarballs for the hosted models, would need to check | 05:53 |
menn0 | wallyworld_: I guess if you do juju upgrade-juju --upload-tools for 2 hosted models you might hit the same problem | 05:53 |
wallyworld_ | not sure, but sounds plausible doesn't it | 05:54 |
menn0 | if you used exactly the same tools | 05:54 |
wallyworld_ | yes, that's the key - need to be the same | 05:54 |
menn0 | wallyworld_: does upload-tools recompile the tools | 05:54 |
menn0 | ? | 05:54 |
wallyworld_ | depends - if it finds binaries in the path, then no | 05:54 |
wallyworld_ | is my recollection | 05:54 |
menn0 | wallyworld_: ok. if it did recompile then the odds of the binary being exactly the same are slim | 05:55 |
wallyworld_ | yes | 05:55 |
wallyworld_ | but let's not count on it | 05:55 |
menn0 | wallyworld_: regarding whether we need a namespace, I think you need to provide something. | 05:55 |
menn0 | wallyworld_: the docs say the convention when there's only one namespace required is to use "fs" | 05:56 |
menn0 | wallyworld_: I think it still makes sense to use a namespace for different uses of gridfs | 05:56 |
wallyworld_ | menn0: so maybe we just use a const "osimages", "tools", "charms" etc? | 05:56 |
menn0 | wallyworld_: yep. | 05:58 |
menn0 | wallyworld_: or we use the controller UUID... | 05:58 |
menn0 | wallyworld_: no, a prefix per use seems better | 05:59 |
wallyworld_ | menn0: i think not - looking at the different usages, namespaces for images, charms, tools etc seems ok | 05:59 |
wallyworld_ | we still differentiate by model uuid inside each namespace | 06:00 |
menn0 | wallyworld_, axw: why do ToolsStorage and GUIStorage not use state.NewStorage()? | 06:00 |
menn0 | what's different about binaryStorage? | 06:00 |
wallyworld_ | menn0: not sure, i know someone renamed binary storage recently, but can't recall the details | 06:01 |
wallyworld_ | i can't recall the specifics ottomh | 06:02 |
menn0 | wallyworld_: ok. well I'll avoid fixing the world. | 06:02 |
wallyworld_ | not in this pr :-) | 06:02 |
wallyworld_ | next one | 06:02 |
menn0 | wallyworld_: thanks for your help and input | 06:04 |
* menn0 is gone for now | 06:04 | |
wallyworld_ | menn0: np, didn't do much :-) | 06:04 |
wallyworld_ | thanks for fixing | 06:04 |
axw | menn0: I think they probably could. tools storage was written before that IIRC | 06:04 |
mup | Bug #1566345 changed: kill-controller leaves instances with storage behind <juju-release-support> <kill-controller> <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1566345> | 06:10 |
=== urulama__ is now known as urulama | ||
mup | Bug #1568715 opened: simplestreams needs to be updated for Azure Resource Manager <juju-core:Triaged> <https://launchpad.net/bugs/1568715> | 07:05 |
frobware | dimitern: ping; are you using/testing LXD containers on trusty? | 08:14 |
dimitern | frobware, hey, yes - I did, but mostly testing with xenial now | 08:19 |
frobware | dimitern: I saw you landed the /e/n/i change. It's only half the story though as we need to delete eth0.cfg. | 08:20 |
frobware | dimitern: my MAAS/LAN/OTHER seems borked at the moment as having a lot of problems bootstrapping. | 08:20 |
dimitern | frobware, ok, I'll propose a fix that deletes eth0.cfg | 08:23 |
frobware | dimitern: I was trying to verify on trusty but ran into deploy problems. | 08:23 |
frobware | dimitern: I also wanted to understand the order a little better. Who wins for 00-juju.cfg and eth0.cfg? | 08:24 |
dimitern | frobware, I'm also working on sketching the next steps to fix network config setting properly | 08:24 |
frobware | dimitern: I might still see the issue where the MA needs bouncing on xenial before a container gets its correct network profile | 08:25 |
dimitern | frobware, well, on trusty you need to add "trusty-backports" to /e/a/sources.list and a-g update | 08:25 |
frobware | dimitern: tried that... | 08:25 |
dimitern | frobware, and then a-g install trusty-backports/lxd ? | 08:25 |
frobware | dimitern: question is, should that have been done by juju for trusty? | 08:26 |
dimitern | frobware, yes, i think so | 08:28 |
dimitern | frobware, wasn't jam doing something about that? | 08:28 |
dimitern | frobware, hmmm or maybe, IIRC, trusty-backports is *supposedly enabled* by default, so those steps were unnecessary? | 08:30 |
dimitern | I haven't seen a cloud image of trusty where backports is on by default | 08:31 |
frobware | dimitern: right; just expected juju to do the business. Only switched to trusty because I had issues with xenial. | 08:31 |
frobware | dimitern: was going to do some testing before I land the `rm eth0.cfg' change. Wanted to understand the order of when 00-juju.cfg wins over eth0.cfg to see if we need to down/up all interfaces. | 08:39 |
dimitern | frobware, sure | 08:41 |
frobware | dimitern: I also wonder whether our rm should be a bit smarter. rm all but 00-juju.cfg. | 08:46 |
dimitern | frobware, alternatively, we can change /e/n/i/ to source interfaces.d/*.juju :) | 08:47 |
dimitern | or something like that | 08:47 |
* fwereade has a filthy cold, please excuse me dimitern voidspace et al | 09:04 | |
dimitern | fwereade, get well soon! :) | 09:05 |
voidspace | fwereade: yep, recover quickly | 09:16 |
* TheMue dccs wishes to get well soon to fwereade | 09:22 | |
jam | dimitern: frobware: you shouldn't need to directly add trusty-backports as it should be *available* by default (but not active), and "juju bootstrap" should be doing "apt-get -t trusty-backports install lxd" | 10:28 |
jam | if it isn't then we have a bug we should be aware of | 10:29 |
jam | dimitern: I've done several bootstraps on Trusty images on AWS and they work. | 10:29 |
jam | dimitern: now, we've had bugs with *trusty* and "go-1.2" with official releases (juju-2.0-beta3) because go-1.2 doesn't work with LXD | 10:29 |
jam | so it tells you "unknown" or somesuch. | 10:30 |
jam | but Master or "juju bootstrap --upload-tools" should all work, and next release should be built with 1.6 | 10:30 |
dimitern | jam, I've yet to see a trusty cloud image on maas that has trusty-backports in /e/apt/sources.list tbo | 10:30 |
jam | dimitern: so I'm not sure where it is enabled, but it does work. | 10:31 |
jam | have you tried just doing "apt-get -t trusty-backports install lxd" ? | 10:31 |
mgz | dimitern: it should be there, but not at a prio that installs packages unless explicitly selected | 10:31 |
jam | I'm bootstrapping now to confirm, but I have tested it in the past | 10:31 |
jam | hi mgz | 10:31 |
dimitern | jam, yes, and it says trusty-backports is unknown or something | 10:32 |
jam | dimitern: what cloud/what region/what version of juju? | 10:32 |
dimitern | jam, mgz, hmm ok I'm trying now again to confirm after updating all images | 10:32 |
dimitern | jam, maas/2.0 and juju from master tip | 10:32 |
dimitern | voidspace, I'm getting a lot of "Failed to power on node - Node could not be powered on: Failed talking to node's BMC: Unable to retrieve AMT version: 500 Can't connect to 10.14.0.11:16992 at /usr/bin/amttool line 126." errors with 2.0 maas | 10:33 |
dimitern | it works but unreliably | 10:33 |
dimitern | like 1-2 out of 5 power checks fail | 10:33 |
voidspace | dimitern: weird | 10:35 |
voidspace | dimitern: that really sounds like maas bug | 10:35 |
voidspace | dimitern: I don't hit that because I'm not setting the power type I guess | 10:35 |
voidspace | dimitern: or maybe it's a new version of amttool in xenial | 10:36 |
voidspace | or virsh | 10:36 |
voidspace | either way - maas problem | 10:36 |
dimitern | voidspace, indeed | 10:37 |
jam | dimitern: "sudo apt-get -t trusty-backports install lxd" works for me on a Trusty image created with 2.0.0-beta3 in AWS (even though Juju 2.0.0b3 wouldn't be able to talk to it with released tools) | 10:37 |
dimitern | jam, mgz here it is - fresh install of trusty - http://paste.ubuntu.com/15756105/ | 10:38 |
dimitern | MAAS Version 2.0.0 (beta1+bzr4873) | 10:38 |
dimitern | jam, mgz, is it possible AWS uses different cloud images than MAAS ? | 10:39 |
mgz | dimitern: it does | 10:39 |
jam | dimitern: I'm sure they are different builds, as the root filesystem is different | 10:39 |
jam | but I thought they were supposed to be as much the same as possible. | 10:39 |
jam | dimitern: we need a bug on MaaS and Juju that tracks that | 10:40 |
jam | thanks for noticing. | 10:40 |
dimitern | well that's kinda crappy ux :/ | 10:40 |
dimitern | jam, yeah | 10:40 |
dimitern | jam, looking at http://images.maas.io/query/released.latest.txt and http://images.maas.io/query/daily.latest.txt it looks like the trusty images MAAS is using are 2 years old | 10:50 |
jam | dimitern: that doesn't sound good | 10:50 |
dimitern | indeed | 10:51 |
perrito666 | morning | 11:13 |
voidspace | babbageclunk: are you working on the MAAS2 version of maasObjectNetworkInterfaces? | 11:45 |
voidspace | babbageclunk: pretty sure you are - in which case I'll leave it as a TODO in my branch | 11:45 |
babbageclunk | voidspace: yes | 11:45 |
voidspace | babbageclunk: ok | 11:45 |
voidspace | babbageclunk: it will need wiring into my branch when done | 11:45 |
babbageclunk | voidspace: just pushing and creating a PR for the storage - spent longer than expected bashing my head on golang features. | 11:46 |
voidspace | babbageclunk: heh, welcome to go :-) | 11:46 |
perrito666 | bbl (~1h) | 11:47 |
frobware | jam: trying again; got sidetracked by h/w | 11:48 |
babbageclunk | voidspace: https://github.com/juju/juju/pull/5070 | 11:54 |
babbageclunk | voidspace: would you prefer I work on finishing off stop-instances or interfaces next? | 11:56 |
voidspace | babbageclunk: https://github.com/juju/juju/pull/5071 | 11:56 |
voidspace | babbageclunk: well, StopInstances and then deploymentStatus get us closer to bootstrap | 11:57 |
voidspace | babbageclunk: bootstrap isn't actually blocked on interfaces yet | 11:57 |
voidspace | dimitern: frobware: two PRs for you | 11:57 |
voidspace | dimitern: frobware: http://reviews.vapour.ws/r/4513/ | 11:57 |
babbageclunk | voidspace: ok, stop instances next then | 11:57 |
voidspace | babbageclunk: cool | 11:58 |
voidspace | babbageclunk: thanks | 11:58 |
voidspace | dimitern: frobware: http://reviews.vapour.ws/r/4514/ | 11:58 |
dimitern | voidspace, looking | 11:58 |
dimitern | voidspace, both reviewed | 12:09 |
* dimitern steps out for ~1h | 12:09 | |
voidspace | dimitern: thanks | 12:26 |
voidspace | dimitern: that point about the comment is in a test that currently doesn't run (I changed the name to DONTTestAcquireNodeStorage...) | 12:29 |
voidspace | dimitern: because it needs the work that babbageclunk has done on storage | 12:29 |
voidspace | dimitern: so that will be updated next | 12:29 |
voidspace | dimitern: so effectively that whole test is commented out - and there *is* a TODO comment at the start of the test to explain why. | 12:30 |
* voidspace lunch | 12:30 | |
=== babbageclunk is now known as babbageclunch | ||
mup | Bug #1568845 opened: help text for juju autoload-credentials needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1568845> | 13:09 |
dimitern | voidspace, ok,sgtm | 13:12 |
mup | Bug #1560457 changed: help text for juju bootstrap needs improving <juju-core:Triaged> <https://launchpad.net/bugs/1560457> | 13:39 |
mup | Bug #1568848 opened: help text for juju bootstrap needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1568848> | 13:39 |
mup | Bug #1568854 opened: help text for juju create-model needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1568854> | 13:39 |
mup | Bug #1568862 opened: help text for juju needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1568862> | 13:39 |
=== babbageclunch is now known as babbageclunk | ||
katco | morning all | 14:16 |
mgz | wotcha katco | 14:18 |
voidspace | dimitern: did you see the reply from babbageclunk on his PR | 14:21 |
voidspace | dimitern: he doesn't understand your review comment, and nor do I | 14:21 |
voidspace | we so missed an opportunity | 14:21 |
voidspace | our maas feature branch should have been called maaster | 14:22 |
babbageclunk | voidspace: amazing | 14:22 |
dimitern | voidspace, babbageclunk, sorry - I've posted a reply | 14:22 |
babbageclunk | voidspace: I mean, amaazing | 14:22 |
dimitern | the diff misled me I guess | 14:22 |
voidspace | baabageclunk | 14:23 |
voidspace | babbageclunk: I like babbagelunch by the way - nice | 14:23 |
babbageclunk | voidspace: thanks | 14:23 |
voidspace | *clunch even | 14:23 |
babbageclunk | dimitern: no worries! | 14:24 |
babbageclunk | frobware: are you ok with that panic if a test sets up the fakeController without files? | 14:25 |
frobware | babbageclunk: panic seems bad | 14:25 |
babbageclunk | frobware: what other way is there of failing the test? | 14:26 |
frobware | babbageclunk: sorry, sidetracked again. | 14:31 |
natefinch | panic is ok if it indicates a programmer error, especially during tests. It's sort of a nice "don't do that!" as long as it happens right away with an obvious cause. | 14:32 |
frobware | babbageclunk: I replied in the review | 14:37 |
babbageclunk | I'll change it to return a generic error with a clear message indicating what the problem is. | 14:37 |
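natefinch's point in miniature, with a hypothetical fakeController rather than the real test double: either fail fast with a panic at the call site or return an error whose message makes the misuse obvious.

```go
package fakes

import "errors"

// fakeController is a hypothetical test double that must be primed with files.
type fakeController struct {
	files map[string][]byte
}

// newFakeController returns an error with a clear message when the test
// forgot to provide files, rather than failing obscurely later.
func newFakeController(files map[string][]byte) (*fakeController, error) {
	if len(files) == 0 {
		return nil, errors.New("fakeController: no files configured - tests must supply at least one file")
	}
	return &fakeController{files: files}, nil
}

// mustNewFakeController is the panic flavour: acceptable in test helpers
// because it fails immediately at the call site with an obvious cause.
func mustNewFakeController(files map[string][]byte) *fakeController {
	fc, err := newFakeController(files)
	if err != nil {
		panic(err)
	}
	return fc
}
```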
babbageclunk | frobware: cool, thanks! | 14:41 |
natefinch | cherylj: what's the color coding on the bug squad board? | 14:46 |
natefinch | cherylj: nvm... card type. I see | 14:46 |
cherylj | :) | 14:47 |
natefinch | cherylj: I was looking at priority for priority and they were all normal, which was confusing me | 14:47 |
cherylj | ah | 14:47 |
natefinch | cherylj: just grab a critical off the top somewhere, or do you have a suggestion? | 14:48 |
cherylj | natefinch: just pick one on the criticals that interest you :) | 14:48 |
katco | cherylj: ty :) all of moonstone will be on bug squad now | 14:49 |
frobware | jam: not sure if you're still about but I had no joy with add-machine lxd:0. See bug #1568895 | 14:56 |
mup | Bug #1568895: Cannot add LXD containers in 2.0beta4 on trusty <juju-core:New> <https://launchpad.net/bugs/1568895> | 14:56 |
cherylj | cool, thanks katco. If there are new bugs found in CI, I might need to redirect people :) | 14:56 |
katco | cherylj: we're at your direction | 14:56 |
cherylj | sinzui: is master running in CI now? | 14:56 |
cherylj | katco: I'm actually wondering if I need to block things as we're trying to get a release out tomorrow | 14:56 |
cherylj | :( | 14:56 |
cherylj | I'd rather not, but we really really need a good run | 14:57 |
katco | cherylj: if we need stability then i'd agree =/ | 14:57 |
sinzui | cherylj: 1.25 started about 45 minutes ago | 14:57 |
cherylj | sinzui: could you kill it and run master? | 14:57 |
sinzui | cherylj: I hesitate to. killing it means waiting for other things to complete and cleanup. | 14:58 |
* sinzui looks | 14:58 | |
cherylj | sinzui: understood | 14:58 |
cherylj | katco: and technically, master is blocked from a different bug :) | 14:58 |
katco | cherylj: what bug is that? | 14:58 |
katco | cherylj: nm i see it | 14:59 |
cherylj | katco: one of many that I opened over the weekend for failures | 14:59 |
katco | cherylj: looks like fix committed | 14:59 |
katco | cherylj: by you ;p | 15:00 |
cherylj | oh snap, I forgot | 15:00 |
sinzui | cherylj: Can we compromise? it will take an hour for some of the current jobs to complete, but I can see we are halfway through 1.25 tests. I can make some jobs not retry to get to master faster | 15:00 |
cherylj | sinzui: did you turn back on the block checker? | 15:00 |
cherylj | sinzui: yeah, just make them not retry | 15:00 |
sinzui | cherylj: I can enable it now | 15:00 |
cherylj | sinzui: thank you | 15:00 |
cherylj | katco: lp was having issues yesterday, so sinzui disabled the blocking bug check | 15:01 |
cherylj | so people should get bounced back again once sinzui re-enables if they're trying to merge | 15:01 |
cherylj | katco: did you guys have anything you needed to add to release notes? | 15:01 |
katco | cherylj: i don't think so, but i'll check with the team | 15:02 |
cherylj | thanks! | 15:02 |
katco | natefinch: standup time | 15:03 |
mup | Bug #1568895 opened: Cannot add LXD containers in 2.0beta4 on trusty <juju-core:New> <https://launchpad.net/bugs/1568895> | 15:03 |
natefinch | katco: oh, sorry, I expected it to be tonight... coming | 15:04 |
tych0 | frobware: any luck on the networking stuff for lxd? | 15:04 |
frobware | tych0: the /e/n/i is partially fixed. need to delete eth0.cfg, but also wanted to understand if I can just do this carte-blanche in cloud-init. | 15:06 |
frobware | tych0: sidetracked by the fact that juju/lxd is not installing on trusty (for me at least) | 15:06 |
tych0 | frobware: ah, i saw that bug. can you just use --series=xenial? | 15:07 |
frobware | tych0: yes, but had problems there so I thought... I'll just use trusty... and the rest is history | 15:07 |
frobware | tych0: so, yes. back to trusty... but also wary that deleting eth0.cfg might also break precise. | 15:08 |
frobware | tych0: correction, back to xenial... | 15:08 |
cherylj | alexisb: was the cached-images command one on your list to flatten? | 15:13 |
alexisb | cherylj, yes | 15:22 |
alexisb | cherylj, I am behind | 15:22 |
babbageclunk | voidspace: What does this mean? https://github.com/juju/juju/pull/5070 | 15:22 |
frobware | voidspace: AA - did we say disable this across the board, or just for MAAS? | 15:22 |
cherylj | alexisb: no problem. I'm asking for a friend ;) | 15:22 |
babbageclunk | voidspace: that I wasn't up to date with master? | 15:22 |
frobware | cherylj: is master blocked on bug #1568312 | 15:25 |
mup | Bug #1568312: Juju should fallback to juju-mongodb after first failure to find juju-mongodb3.2 <blocker> <ci> <mongodb> <juju-core:Fix Committed by cherylj> <https://launchpad.net/bugs/1568312> | 15:25 |
frobware | cherylj: nevermind, mup answered my question | 15:25 |
cherylj | frobware: master really should be blocked with a stabilization bug | 15:25 |
cherylj | since we're trying to get a release out tomorrow | 15:25 |
frobware | cherylj: right. was trying to help answer babbageclunk's question above | 15:26 |
cherylj | ah, ok | 15:26 |
babbageclunk | frobware: Ah, is that bug blocking my merge? | 15:26 |
frobware | babbageclunk: yes | 15:26 |
mgz | babbageclunk: we want to do a clean release before landing all the maas2 changes, with the current batch of breakage on master dealt with | 15:27 |
babbageclunk | mgz: makes sense - so should I hold off until a release is cut? | 15:28 |
frobware | babbageclunk: keep stacking those branches up. :) | 15:28 |
babbageclunk | frobware: :) I was so close! | 15:28 |
mgz | frobware: if you need to, create a fork under juju to land on | 15:29 |
mgz | frobware: whatever makes it easiest to coordinate your work | 15:29 |
frobware | voidspace, babbageclunk: I think we can keep stacking stuff for a day. thoughts? ^^ | 15:30 |
mup | Bug #1568925 opened: Address Allocation feature flag still enabled for MAAS provider in Juju 2.0 <juju-core:New> <https://launchpad.net/bugs/1568925> | 15:33 |
babbageclunk | frobware, voidspace: yeah, no big problem for me. | 15:34 |
katco | cherylj: hey, for bug 1567170 what does "hosted model" mean? aren't all models hosted? | 15:35 |
mup | Bug #1567170: Disallow upgrading with --upload-tools for hosted models <upgrade-juju> <juju-core:Triaged by cox-katherine-e> <https://launchpad.net/bugs/1567170> | 15:35 |
cherylj | katco: non-admin model | 15:35 |
cherylj | sorry, was using old-terminology | 15:35 |
katco | cherylj: ahhh ok | 15:35 |
frobware | cherylj, katco: how does one, in development, update a model that is not ":admin"? | 15:36 |
katco | cherylj: so that's interesting... the binaries are different for non-admin models? | 15:36 |
frobware | cherylj, katco: I got caught by this friday, just ended up doing my upgrade-juju in the admin model | 15:37 |
cherylj | frobware: the idea is that you can do an upgrade to a published (in devel or stable) release | 15:37 |
cherylj | frobware: a later feature request is "upgrade my model to match what the state server is at" | 15:38 |
frobware | cherylj: every time I tried this I ran into "version mismatch". And each time it tried it bumped the version number | 15:38 |
mup | Bug #1568925 changed: Address Allocation feature flag still enabled for MAAS provider in Juju 2.0 <juju-core:New> <https://launchpad.net/bugs/1568925> | 15:39 |
mup | Bug #1568925 opened: Address Allocation feature flag still enabled for MAAS provider in Juju 2.0 <juju-core:New> <https://launchpad.net/bugs/1568925> | 15:42 |
bogdanteleaga | is it possible to get the tools from the state machine through http, instead of trying insecure https? | 15:42 |
voidspace | frobware: babbageclunk: heh, I got mine in just before the block | 15:45 |
babbageclunk | frobware, voidspace: Just checking I understand - the bug number jujubot's quoting is a red herring (the fix was merged early this morning), it's just blocking merges until the release is done? | 15:45 |
babbageclunk | voidspace: grrr! :) | 15:45 |
frobware | babbageclunk: guessing so. cherylj mentioned that there really should be a stabilisation bug | 15:47 |
babbageclunk | frobware: ok, got it - cool | 15:47 |
voidspace | babbageclunk: the block isn't removed by the jujubot until the bug is changed by QA to "fix released" | 15:53 |
voidspace | babbageclunk: "fix committed" (i.e. merged) is not sufficient to unblock as it hasn't been QA verified yet | 15:53 |
babbageclunk | voidspace: ah, thanks | 15:54 |
voidspace | babbageclunk: this block will probably be left in place (as frobware said) until the release is done | 15:54 |
babbageclunk | ooh, networking meeting | 15:54 |
natefinch | cherylj: How firm is the suggested wording on #1564622? | 16:00 |
mup | Bug #1564622: Suggest juju1 upon first use of juju2 if there is an existing JUJU_HOME dir <juju-release-support> <juju-core:Triaged> <https://launchpad.net/bugs/1564622> | 16:00 |
natefinch | cherylj: (if you know) | 16:00 |
rick_h_ | natefinch: it's rough | 16:00 |
rick_h_ | natefinch: please feel free to suggest something better. It was on the fly at a sprint | 16:00 |
ericsnow | tasdomas: PTAL http://reviews.vapour.ws/r/4515/ | 16:01 |
natefinch | rick_h_: ok. Sometimes it's hard to know what's set in stone and what is not. I'll play with it and see what seems to work best. | 16:02 |
mup | Bug #1568943 opened: Juju 2.0-beta4 stabilization <blocker> <juju-core:Triaged> <https://launchpad.net/bugs/1568943> | 16:03 |
mup | Bug #1568944 opened: Failure when deploying on lxd models and model name contains space characters <juju-core:New> <https://launchpad.net/bugs/1568944> | 16:03 |
ericsnow | katco: that bug was more trivial than expected, so I'm going to pick up another (#1456916) | 16:03 |
natefinch | ericsnow: nice | 16:04 |
bogdanteleaga | can somebody not call HEAD on a state machine endpoint with tools | 16:13 |
natefinch | no idea if we support HEAD or not... possibly not | 16:13 |
natefinch | well, wait, which endpoints? It's basically all websocket | 16:14 |
ericsnow | natefinch: tools, charms, backups, and resources all have HTTP endpoints | 16:15 |
natefinch | ahh, thus the reference to tools... misunderstood :) | 16:16 |
ericsnow | (plus a few others) | 16:16 |
katco | ericsnow: sorry was otp... that's always a nice problem to have :) | 16:16 |
ericsnow | dooferlad: are you working on bug #1456916 | 16:17 |
mgz | bug 1456916 looks like a pain to fix well | 16:17 |
mgz | there are lots of ways to try and fix it badly | 16:18 |
ericsnow | dooferlad: just noticed it's assigned to you in LP but not on the card in leankit | 16:18 |
frobware | babbageclunk: I'm still in the call if you wanted to chat | 16:21 |
babbageclunk | frobware: can't get back in for some reason | 16:25 |
frobware | babbageclunk: want to drop into the sapphire standup HO instead? | 16:25 |
babbageclunk | frobware: can't get to there either. | 16:26 |
voidspace | babbageclunk: do you get a redirect loop? | 16:27 |
voidspace | babbageclunk: I've had that and had to force a log back in by going to something else canonical work related | 16:28 |
voidspace | something else on google I mean | 16:28 |
voidspace | like docs.google.com | 16:28 |
voidspace | which should then let you log back in without a redirect loop | 16:28 |
voidspace | but that may not be your problem at all | 16:29 |
babbageclunk | voidspace: yeah, might be something to do with SSO like Jay had. | 16:30 |
babbageclunk | Would make sense, | 16:30 |
ericsnow | natefinch: small patch: http://reviews.vapour.ws/r/4515/ | 16:31 |
ericsnow | redir: thanks for the review :) | 16:31 |
redir | :) | 16:31 |
jam | frobware: lxd had a packaging bug in trusty for 2.0.0~rc9 | 16:32 |
jam | frobware: that should be fixed as of 20 minutes ago Stephan uploaded a bugfix | 16:32 |
frobware | jam: great, will try again | 16:36 |
frobware | jam: I'm using releases for my trusty images. is that enough or do I need to switch to daily? | 16:37 |
natefinch | ericsnow: lol wow | 16:44 |
ericsnow | natefinch: yep | 16:44 |
natefinch | ericsnow: awesome. ship it. | 16:45 |
ericsnow | natefinch: alas, master is blocked now | 16:45 |
natefinch | doh | 16:45 |
ericsnow | rogpeppe: re: bug #1566431, are you talking about different AWS accounts or different Juju accounts? | 16:52 |
mup | Bug #1566431: cloud-init cannot always use private ip address to fetch tools (ec2 provider) <juju-core:In Progress by ericsnowcurrently> <https://launchpad.net/bugs/1566431> | 16:52 |
ericsnow | rogpeppe: I'm looking at how to repro | 16:52 |
rogpeppe | ericsnow: different AWS accounts | 16:52 |
ericsnow | rogpeppe: figured :) | 16:52 |
rogpeppe | ericsnow: problem doesn't manifest in us-east, for some reason | 16:54 |
jam | frobware: it should be an archive change. I don't know how long it will take to propagate | 17:07 |
jam | now dimiter's finding about no 'trusty-backports' in maas images is a different bug | 17:08 |
natefinch | rick_h_: for #1564622 do we actually execute the requested action, or no? like, juju bootstrap is conceivably a valid action at that point. | 17:26 |
mup | Bug #1564622: Suggest juju1 upon first use of juju2 if there is an existing JUJU_HOME dir <juju-release-support> <juju-core:Triaged by natefinch> <https://launchpad.net/bugs/1564622> | 17:26 |
redir | so if master is blocked does that mean I should do something differently? or simply that my commits won't make it to master until unblocked? | 17:30 |
mgz | redir: you don't do $$merge$$ till unblock | 17:31 |
redir | mgz: that's what I was looking for:) Thanks. | 17:32 |
=== natefinch is now known as natefinch-lunch | ||
rick_h_ | natefinch-lunch: sorry, was otp, looking | 18:05 |
rick_h_ | natefinch-lunch: interesting, I guess you're right there's a list of actions that are legit. | 18:07 |
rick_h_ | natefinch-lunch: so if you've got juju1 home dir stuff, and you run your very first juju2 command...I think it'd be ok to block and output it once | 18:10 |
rick_h_ | natefinch-lunch: and make them redo the command a second time if they knew they were looking for 2.0 | 18:11 |
rick_h_ | natefinch-lunch: hmm, but my suggested text falls over there doesn't it heh | 18:11 |
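A rough sketch of the check being discussed for bug #1564622, with paths and wording that are assumptions rather than agreed behaviour: on a juju 2.x invocation, if the old JUJU_HOME directory exists but the 2.x data dir does not, print a hint about the 1.x client before (or instead of) running the command.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// maybeWarnAboutJuju1 prints a one-time hint when an old JUJU_HOME
// (~/.juju by default) exists but the juju 2.x data dir does not.
// The exact paths and message here are illustrative assumptions.
func maybeWarnAboutJuju1() {
	home := os.Getenv("HOME")
	if home == "" {
		return
	}
	oldHome := os.Getenv("JUJU_HOME")
	if oldHome == "" {
		oldHome = filepath.Join(home, ".juju")
	}
	newHome := filepath.Join(home, ".local", "share", "juju")

	_, oldErr := os.Stat(oldHome)
	_, newErr := os.Stat(newHome)
	if oldErr == nil && os.IsNotExist(newErr) {
		fmt.Fprintln(os.Stderr, "Found an existing JUJU_HOME at", oldHome+";",
			"if you meant to use the 1.x juju client, run that instead.")
	}
}

func main() {
	maybeWarnAboutJuju1()
	// ... dispatch to the requested subcommand ...
}
```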
perrito666 | mwhudson: hey, are you around? | 18:16 |
perrito666 | mm, I would guess not yet | 18:17 |
ericsnow | rogpeppe: what did you do to get the provider to create instances under a different AWS account? | 18:21 |
rick_h_ | natefinch-lunch: updated the bug with another round of thoughts. Let me know what you think | 18:24 |
cherylj | arosales: with your update to bug 1566420 - do your machines show up in the machine section? | 18:31 |
mup | Bug #1566420: lxd doesn't provision instances on first bootstrap in new xenial image <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1566420> | 18:31 |
cherylj | arosales: or just the services? | 18:32 |
arosales | cherylj: replying in #juju as the reporter is there | 18:33 |
alexisb | cherylj, perrito666 is looking for a task, can he just take any bug off the board or do you have a particular one that needs eyes? | 18:38 |
cherylj | perrito666: here's a quick one for you - bug 1568944 | 18:39 |
mup | Bug #1568944: Failure when deploying on lxd models and model name contains space characters <juju-core:Triaged> <https://launchpad.net/bugs/1568944> | 18:39 |
perrito666 | cherylj: tx a lot | 18:39 |
cherylj | perrito666: but remember that master is blocked for now :( | 18:40 |
perrito666 | cherylj: I know | 18:40 |
cherylj | I feel like we should create a feature branch for people to merge into for now | 18:40 |
cherylj | until we unblock master | 18:40 |
perrito666 | cherylj: nah, the merge back will be a nightmare | 18:40 |
cherylj | any thoughts (anyone)? | 18:40 |
perrito666 | I do believe that we should do a branch for the release | 18:40 |
cherylj | perrito666: well, hopefully master won't change | 18:40 |
cherylj | ah, so the opposite :) | 18:41 |
cherylj | branch the other thing | 18:41 |
perrito666 | exactly, so master keeps being master | 18:41 |
perrito666 | and if fixes are required for the bless, you can merge just those | 18:41 |
cherylj | sinzui, mgz, abentley, any thoughts on creating a temporary branch for our 2.0-beta4 release? | 18:41 |
cherylj | will that cause problems when we have to release / tag? | 18:42 |
sinzui | cherylj: It wont affect the tag in git | 18:42 |
cherylj | it's just weird having people who can actually work bugs while we're trying to get ready for a release :) | 18:43 |
sinzui | cherylj: are people trying to merge post 2.0 features now? | 18:43 |
cherylj | perrito666: if you want to tackle some of the help text bugs too, I won't say no :) https://bugs.launchpad.net/juju-core/+bugs?field.tag=helpdocs | 18:44 |
mgz | cherylj: it seems a little unnessersary from what I've seen of the pending prs | 18:44 |
abentley | cherylj: I think it is a good idea. It's less friction for devs. There is a risk that bug fixes won't get applied to the 2.0-beta4 branch, but I think that can be managed. | 18:44 |
mgz | not much chance of mass conflicts between them | 18:44 |
perrito666 | cherylj: every time I write text for juju I end up sounding like a fool | 18:44 |
cherylj | perrito666: the text is there, you just need to copy it in :) | 18:45 |
cherylj | perrito666: the docs team has been busy writing up help text for all the commands | 18:45 |
perrito666 | oh, then I could :p | 18:45 |
perrito666 | I'll give the lxc one a try and then move on to the docs otherwise | 18:45 |
abentley | perrito666: I don't understand why you said the first case would be a nightmare. Should be exactly as much effort as merging 2.0-beta4 into master. | 18:45 |
cherylj | sinzui: post 2.0-beta4 bugfixes | 18:46 |
perrito666 | abentley: why would you merge beta4 into master? | 18:46 |
abentley | perrito666: Because it has bugfixes. | 18:46 |
perrito666 | abentley: ideally bugfixes should do bug->master->beta | 18:46 |
perrito666 | or the other way but just the one merge, not the whole branch | 18:47 |
abentley | perrito666: I am pretty sure that your policy is bug -> release -> master. | 18:47 |
perrito666 | abentley: yes | 18:47 |
perrito666 | sorry | 18:47 |
perrito666 | we are using github a bit poorly which causes a lot of merge conflicts | 18:47 |
abentley | perrito666: If you want to do one merge per bugfix, you can, but it seems inefficient to me. | 18:49 |
perrito666 | abentley: that is what we were doing for 1.24, 1.25, 1.x maintenance | 18:49 |
abentley | perrito666: Isn't that just to get bug fixes merged in a timely manner? | 18:51 |
rogpeppe | ericsnow: you need to create a model inside the controller that uses different creds | 18:51 |
rogpeppe | ericsnow: every model inside a controller has a different set of model attributes | 18:52 |
ericsnow | rogpeppe: I thought I tried that though | 18:52 |
rogpeppe | ericsnow: we could chat about this if you like | 18:52 |
ericsnow | rogpeppe: sure | 18:52 |
=== redir is now known as redir_lunch | ||
perrito666 | cherylj: I would like your input on the lxc bug, I added a question/suggestion; in the meantime I'll go add some help texts :) | 19:30 |
=== thomnico is now known as thomnico|Brussel | ||
mup | Bug #1563590 changed: 1.25.4, xenial, init script install error race <landscape> <juju-core:Invalid> <systemd (Ubuntu):In Progress by pitti> <https://launchpad.net/bugs/1563590> | 19:40 |
cherylj | perrito666: which lxc bug? (sorry, been in calls) | 19:58 |
cherylj | oh ffs | 19:59 |
cherylj | seriously? | 20:00 |
cherylj | 2016-04-11 19:38:30 ERROR cmd supercommand.go:448 region "DFW" in cloud "rackspace" not found (expected one of ["dfw" "ord" "iad" "lon" "syd" "hkg"]) | 20:00 |
rick_h_ | cherylj: case is important? | 20:01 |
cherylj | rick_h_: shouldn't be | 20:01 |
rick_h_ | cherylj: sorry, I missed the :/ on the end there | 20:01 |
cherylj | we changed it to be lower case because sabdfl asked us to, but to keep from regressing people, we should convert the region to lower case before trying to use it | 20:01 |
rick_h_ | cherylj: quit talking sense :P | 20:02 |
cherylj | Bug reporting activated | 20:02 |
cherylj | ses_2: I also see this error coming out of CI: Test recovery strategies.: error: unrecognized arguments: --charm-prefix=local:trusty/ | 20:04 |
cherylj | http://reports.vapour.ws/releases/3881/job/functional-ha-recovery-rackspace/attempt/472 | 20:04 |
frobware | cherylj, alexisb: the other known LXD issue - https://bugs.launchpad.net/juju-core/+bug/1568895 | 20:05 |
mup | Bug #1568895: Cannot add LXD containers in 2.0beta4 on trusty <juju-core:New> <https://launchpad.net/bugs/1568895> | 20:05 |
frobware | cherylj, alexisb: but may be resolved by "tomorrow" if all I'm waiting on is a package update | 20:06 |
cherylj | frobware: I remember a conversation not too long ago where someone said backports was enabled by default?? | 20:07 |
cherylj | frobware: maybe juju is overriding this somehow? | 20:07 |
frobware | cherylj: let me try just deploying trusty from my MAAS | 20:09 |
* cherylj too | 20:09 | |
frobware | cherylj: https://bugs.launchpad.net/juju-core/+bug/1568895/comments/6 | 20:13 |
mup | Bug #1568895: Cannot add LXD containers in 2.0beta4 on trusty <juju-core:New> <https://launchpad.net/bugs/1568895> | 20:14 |
mup | Bug #1569024 opened: Region names for rackspace should accept caps and lowercase <rackspace> <juju-core:Triaged> <https://launchpad.net/bugs/1569024> | 20:16 |
cherylj | frobware: looks like maybe we're overwriting it? | 20:16 |
cherylj | frobware: a fresh deploy through AWS shows it enabled: http://paste.ubuntu.com/15767200/ | 20:16 |
frobware | cherylj: so on MAAS only? | 20:17 |
cherylj | frobware: did you do that through a maas deploy or juju? | 20:17 |
cherylj | frobware: this was not a juju provisioned machine | 20:17 |
frobware | cherylj: what's your /etc/cloud/build.info | 20:17 |
frobware | cherylj: neither was mine, just MAAS deployed | 20:17 |
cherylj | serial: 20160406 | 20:17 |
cherylj | frobware: interesting | 20:18 |
frobware | cherylj: daily or releases image on AWS? | 20:18 |
cherylj | frobware: daily - I chose the latest from here: https://cloud-images.ubuntu.com/locator/daily/ | 20:19 |
frobware | cherylj: going to switch to daily, as I have two "releases" MAAS setups atm | 20:19 |
thumper | morning | 20:19 |
thumper | o/ frobware | 20:19 |
thumper | frobware: still working? | 20:19 |
frobware | thumper: only virtually | 20:20 |
frobware | thumper: haven't left standup yet :) | 20:20 |
frobware | thumper: still talking about.... CONTAINERS! | 20:20 |
mgz | frobware: contain yourself! | 20:21 |
frobware | Ctrl-D | 20:21 |
cherylj | ah fudge, should we allow people to specify regions in their cloud definitions that have caps? | 20:25 |
cherylj | or should we lowercase everything? | 20:25 |
cherylj | (like we do with users?) | 20:25 |
cherylj | thumper ^^ thoughts? | 20:26 |
mgz | I am confused by the bug in general | 20:26 |
thumper | ah... | 20:26 |
thumper | hmm | 20:26 |
thumper | personally I like lowercasing them | 20:27 |
thumper | but | 20:27 |
anastasiamac | cherylj: some regions must be caps because of the provider. axw made some changes in that area... | 20:27 |
thumper | I don't believe in forcing it if it could cause a problem | 20:27 |
thumper | anastasiamac: yeah... was thinking it might be something like that | 20:27 |
cherylj | so I guess people using rackspace need to just change to use lower case region names? | 20:28 |
cherylj | maybe we can handle that internally to the rackspace provider | 20:29 |
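The fix being floated here, whether done globally or inside the rackspace provider, boils down to normalising the user-supplied region name before matching it. A minimal Go sketch of that idea, assuming the provider holds a lower-case list of region names (illustrative code, not juju's actual provider implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// knownRegions mirrors the lower-case list from the error message above;
// it is illustrative only.
var knownRegions = []string{"dfw", "ord", "iad", "lon", "syd", "hkg"}

// matchRegion resolves a user-supplied region case-insensitively, so legacy
// configs that still say "DFW" keep working after the switch to lower case.
func matchRegion(requested string) (string, error) {
	req := strings.ToLower(requested)
	for _, r := range knownRegions {
		if r == req {
			return r, nil
		}
	}
	return "", fmt.Errorf("region %q in cloud %q not found (expected one of %v)",
		requested, "rackspace", knownRegions)
}

func main() {
	region, err := matchRegion("DFW")
	fmt.Println(region, err) // prints: dfw <nil>
}
```

Doing the lowercasing at the matching step (rather than rejecting mixed-case input) avoids regressing existing rackspace configs while keeping the canonical names lower case.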
* cherylj looks | 20:29 | |
ses_2 | cherylj, I will fix that | 20:32 |
cherylj | thanks, ses_2 | 20:32 |
=== natefinch-lunch is now known as natefinch | ||
frobware | cherylj: https://bugs.launchpad.net/maas/+bug/1554636 - interesting read if I'm just switching between daily and releases | 20:39 |
mup | Bug #1554636: maas serving old image to nodes <landscape> <MAAS:New> <MAAS 1.9:New> <https://launchpad.net/bugs/1554636> | 20:39 |
cherylj | frobware: interesting! I noticed that my trusty images were old too | 20:44 |
cherylj | (from like 3 weeks ago) | 20:44 |
frobware | cherylj: https://bugs.launchpad.net/juju-core/+bug/1568895/comments/7 | 20:56 |
mup | Bug #1568895: Cannot add LXD containers in 2.0beta4 on trusty <juju-core:New> <https://launchpad.net/bugs/1568895> | 20:56 |
frobware | cherylj: I don't see any difference with switching to daily images | 20:56 |
frobware | cherylj: neither have trusty-backports by default | 20:57 |
cherylj | I wonder if curtin screws with it | 20:57 |
frobware | cherylj: if I launch a container via lxc (locally on my desktop) then I do see backports listed; not all cloud images are created equally | 20:59 |
mwhudson | perrito666: hi! | 21:00 |
natefinch | katco: wallyworld: moonstone standup or tanzanite or ? | 21:01 |
wallyworld_ | natefinch: nothing on the calendar so i was assuming it hadn't been rescheduled yet | 21:02 |
katco | wallyworld_: natefinch: correct hasn't been rescheduled. natefinch, remember i was going to discuss with wallyworld? | 21:02 |
natefinch | katco: oh, right. ok | 21:03 |
frobware | cherylj: https://bugs.launchpad.net/juju-core/+bug/1568895/comments/8 | 21:03 |
mup | Bug #1568895: Cannot add LXD containers in 2.0beta4 on trusty <juju-core:New> <https://launchpad.net/bugs/1568895> | 21:03 |
* frobware is done... hopefully the MAAS fairies will point me towards the URL of truth... | 21:04 | |
mup | Bug #1569047 opened: juju2 beta 3: bootstrap warnings about interfaces <landscape> <juju-core:New> <https://launchpad.net/bugs/1569047> | 21:16 |
perrito666 | mwhudson: hi, so, are we having problems bootstrapping or just running the tests? | 21:17 |
mwhudson | perrito666: i haven't tried bootstrapping, i don't actually know how to use juju :-p | 21:20 |
menn0 | cherylj: https://bugs.launchpad.net/juju-core/+bug/1569054 | 21:36 |
mup | Bug #1569054: GridFS namespace breaks charm and tools deduping across models <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1569054> | 21:36 |
menn0 | cherylj: I've also created a card for it on the bug squad board | 21:38 |
perrito666 | mwhudson: mmm, ok, have you attached the logs to the bug? | 21:46 |
perrito666 | from the test I mean | 21:46 |
mup | Bug #1569054 opened: GridFS namespace breaks charm and tools deduping across models <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1569054> | 21:46 |
mwhudson | perrito666: not the most recent run probably, let me do that | 21:47 |
mwhudson | perrito666: https://bugs.launchpad.net/juju-core/+bug/1567708/comments/11 | 21:49 |
mup | Bug #1567708: unit tests fail with mongodb 3.2 <juju-core:Triaged by hduran-8> <https://launchpad.net/bugs/1567708> | 21:49 |
perrito666 | that was fast | 21:49 |
mup | Bug #1569054 changed: GridFS namespace breaks charm and tools deduping across models <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1569054> | 21:52 |
mwhudson | perrito666: that was from my run yesterday, i haven't run the tests today... | 21:54 |
perrito666 | tis ok, nothing changed | 21:55 |
mup | Bug #1569054 opened: GridFS namespace breaks charm and tools deduping across models <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1569054> | 21:58 |
alexisb | thumper, I am running late | 22:00 |
thumper | alexisb: ok, just on a call with mpontillo | 22:00 |
alexisb | thumper, on the HO when you are ready, no rush | 22:06 |
katco | ericsnow: i thought we fixed this? bug 1567170 | 22:10 |
mup | Bug #1567170: Disallow upgrading with --upload-tools for hosted models <upgrade-juju> <juju-core:In Progress by cox-katherine-e> <https://launchpad.net/bugs/1567170> | 22:10 |
katco | ericsnow: sorry wrong bug. bug 1545116 | 22:10 |
mup | Bug #1545116: When I run "juju resources <service>" after a service is destroyed, resources are still listed. <2.0-count> <juju-release-support> <resources> <juju-core:Confirmed> <https://launchpad.net/bugs/1545116> | 22:10 |
ericsnow | katco: pretty sure we did | 22:11 |
menn0 | wallyworld_, thumper, cherylj : I've gone for an even simpler fix for the gridfs namespace issue. it's ready now - just doing some manual testing. | 22:24 |
wallyworld_ | menn0: awesome, ty | 22:25 |
alexisb | thumper, I dropped off the standup, please ping when you are available | 22:27 |
thumper | alexisb: ok, here now | 22:27 |
alexisb | lol of course | 22:27 |
mup | Bug #1569072 opened: juju2 bundle deploy help text out of date <landscape> <juju-core:New> <https://launchpad.net/bugs/1569072> | 22:46 |
perrito666 | mwhudson: is there a way I can get my hands on this? | 22:47 |
mwhudson | perrito666: you mean test on s390x? | 22:47 |
mwhudson | perrito666: the failures occur on intel too though | 22:47 |
perrito666 | mwhudson: ok, so to reproduce this you do what exactly? | 22:48 |
perrito666 | sorry I asked you this already :) | 22:48 |
perrito666 | just making sure I am doing things right | 22:48 |
mwhudson | perrito666: install the juju-mongodb3.2 from ppa:juju/experimental | 22:49 |
mwhudson | perrito666: run JUJU_MONGOD=/usr/lib/juju/mongo3.2/bin/mongod go test github.com/juju/juju/... | 22:49 |
* perrito666 is pretty sure he is about to break something on his desktop | 22:51 |
menn0 | wallyworld_: http://reviews.vapour.ws/r/4523/ | 23:03 |
wallyworld_ | looking | 23:04 |
menn0 | wallyworld_: I went with the approach of making the gridfs namespace the same as the DB name (as per osimages) | 23:04 |
menn0 | wallyworld_: I came to the conclusion that anything more elaborate was just YAGNI | 23:04 |
wallyworld_ | yep | 23:04 |
menn0 | wallyworld_: tested with local and charm store deployments | 23:05 |
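The approach described above, using one GridFS namespace per database rather than per model, is what restores deduping: every model writes into the same underlying files/chunks collections, so an identical charm or tools blob is stored only once. A rough sketch of the idea with the mgo driver (names are illustrative, not juju's actual blobstore code):

```go
package main

import (
	"log"

	"gopkg.in/mgo.v2"
)

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	db := session.DB("blobstore")

	// One GridFS prefix tied to the database name, not to a model UUID:
	// all models share the same *.files/*.chunks collections, so a blob
	// uploaded for one model can be reused when another model needs it.
	gfs := db.GridFS("blobstore")

	f, err := gfs.Create("charm-example-archive")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := f.Write([]byte("charm archive bytes would go here")); err != nil {
		log.Fatal(err)
	}
	if err := f.Close(); err != nil {
		log.Fatal(err)
	}
}
```

With a per-model prefix, each model would get its own collections and the dedup check would never see blobs written by other models.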
perrito666 | mwhudson: I get juju-mongodb3 and juju-mongo3 which is a transitional package | 23:05 |
wallyworld_ | menn0: lgtm | 23:07 |
menn0 | wallyworld_: cheers | 23:07 |
mwhudson | perrito666: well you should check with sinzui i guess but i'm preeeeetttty sure juju is jumping from 2.6 to 3.2, not to 3 | 23:08 |
perrito666 | sinzui: care to shed some light on this? | 23:08 |
perrito666 | wallyworld_: wb btw | 23:08 |
wallyworld_ | it's 3.2 | 23:09 |
perrito666 | mm, there is juju-mongodb3 package in version 3.2.0-0ubuntu0~juju8~ubuntu15.10.1~15.10 | 23:09 |
mwhudson | perrito666: if you've been making juju compatible with 3.0 that would explain some things :-) | 23:09 |
wallyworld_ | juju-mongodb3.2 | 23:09 |
mwhudson | (presumably that work is a subset of the work to be compatible with 3.2 so not wasted effort entirely) | 23:10 |
perrito666 | wallyworld_: mwhudson https://pastebin.canonical.com/153956/ | 23:10 |
mwhudson | ah i bet my reply to this is "3.2 is only built for xenial" | 23:10 |
perrito666 | mwhudson: good to know :p | 23:10 |
perrito666 | ok, here goes my machine's stability | 23:11 |
* perrito666 upgrades to xenial | 23:11 | |
mwhudson | ah yes, probably | 23:11 |
mwhudson | hah uh, or use a lxd or ec2 instance or something? :) | 23:11 |
mwhudson | i do need to upgrade myself soon | 23:11 |
wallyworld_ | perrito666: juju-mongodb3.2 is not in the archives yet afaik | 23:12 |
wallyworld_ | that's the whole issue that has been hanging around for several weeks | 23:12 |
wallyworld_ | which is why my pr should not have been landed yet | 23:13 |
mwhudson | i am about to upload juju-mongodb3.2 RIGHT NOW!!!one | 23:13 |
perrito666 | wallyworld_: your pr issue has been dealt with | 23:13 |
mwhudson | unfortunately it will then sit in NEW for a while | 23:13 |
perrito666 | I am now trying to look into mwhudson's issue | 23:13 |
wallyworld_ | mwhudson: new or proposed for a while? | 23:14 |
wallyworld_ | should be out of proposed for xenial pretty quickly? | 23:14 |
mwhudson | wallyworld_: NEW as in https://launchpad.net/ubuntu/xenial/+queue | 23:14 |
* mwhudson afk for 5 | 23:14 | |
menn0 | thumper or wallyworld_: here's the fix for the original bug: http://reviews.vapour.ws/r/4524/ | 23:15 |
perrito666 | wallyworld_: axw anastasiamac i am trying to find my laptop for the standup, be right there | 23:15 |
anastasiamac | perrito666: k | 23:15 |
wallyworld_ | menn0: will look after standup | 23:15 |
menn0 | wallyworld_: np | 23:16 |
mup | Bug #1569086 opened: Juju controller CA & TLS server keys are weak <juju-core:New> <https://launchpad.net/bugs/1569086> | 23:31 |
mup | Bug #1569047 changed: juju2 beta 3: bootstrap warnings about interfaces <landscape> <juju-core:New> <https://launchpad.net/bugs/1569047> | 23:52 |
ericsnow | axw: re: bug #1566431, unfortunately there's more to it than fiddling with the provisioner-side...tools stuff has to be tweaked as well | 23:54 |
mup | Bug #1566431: cloud-init cannot always use private ip address to fetch tools (ec2 provider) <juju-core:In Progress by ericsnowcurrently> <https://launchpad.net/bugs/1566431> | 23:54 |
axw | ericsnow: yeah, doesn't surprise me :/ | 23:54 |
ericsnow | axw: looks like you had to deal with a related issue last year | 23:54 |
axw | ericsnow: oh? what was that? | 23:54 |
ericsnow | axw: https://github.com/juju/juju/blob/master/apiserver/common/tools.go#L359 | 23:55 |
axw | ericsnow: ah that comment's just about HA really | 23:55 |
ericsnow | axw: the comment applies regardless, I think | 23:56 |
axw | ericsnow: yup | 23:56 |
ericsnow | axw: we do the separate "tools URL" thing for the sake of possibly using alternate tools servers, no? | 23:57 |
axw | ericsnow: we've got a very small window to make a breaking change, if we can fix it... :) | 23:57 |
ericsnow | axw: yeah :) | 23:57 |
axw | ericsnow: can't remember why, sorry | 23:57 |
ericsnow | axw: np | 23:57 |
=== ses is now known as Guest97064 | ||
axw | ericsnow: I think that is the case, just to encapsulate that logic for the several places where we need to get tools URLs | 23:58 |
axw | it may be redundant if we're just returning all the addresses | 23:59 |
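If the eventual fix is indeed to "just return all the addresses", the consumer (cloud-init, the agent) can try one tools URL per controller address rather than betting on a single, possibly unreachable, private address. A hedged sketch of that shape; the function name and URL layout are assumptions for illustration, not the actual code in apiserver/common/tools.go:

```go
package main

import "fmt"

// toolsURLs builds one candidate tools URL per controller API address so a
// client can fall back to the next address if the first is unreachable.
// The path layout here is illustrative only.
func toolsURLs(apiAddrs []string, version string) []string {
	urls := make([]string, 0, len(apiAddrs))
	for _, addr := range apiAddrs {
		urls = append(urls, fmt.Sprintf("https://%s/tools/%s", addr, version))
	}
	return urls
}

func main() {
	fmt.Println(toolsURLs(
		[]string{"10.0.0.4:17070", "203.0.113.7:17070"},
		"2.0-beta4-xenial-amd64",
	))
}
```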