[00:00] wallyworld: quick call then [00:00] wallyworld: 1:1 ? [00:00] thumper: didn't you notice the sarcasm? [00:00] no [00:00] ok [00:00] wallyworld: I actually thought that you were looking forward to working [00:17] wallyworld: :p [00:18] rick_h_: living the dream once again, wheeeeee [00:21] hey, you know you missed it! can't miss the finish line [00:21] wallyworld: ^ [00:22] rick_h_: indeed, wasn't my first choice to disappear this last little while [00:25] isn't it one of the relay strategies to put the strongest sprinters last? :D [00:25] hah [00:25] good times while away wallyworld ? [00:26] i'm heading out next week. leave to the best of the best at the end :p [00:26] rick_h_: i wish, spent the entire time working my fingers to the bone - ended up having to buy a chain saw to get everything done :-D [00:27] wallyworld: oooh new tools ftw! [00:27] rick_h_: you camping or something next week? [00:27] yeah, new tools :-) [00:27] wallyworld: i wish. snowing today [00:27] i wish we had snow [00:28] wife and i celebrating 10yrs in hawaii [00:28] rick_h_: oh congrats [00:28] so i'll be closer where i can keep an eye on you :) [00:28] i'll wave from my balcony [00:29] new camper is done may 11 so after these may sprints i'll live from the woods for a while [00:31] with no wifi :-( [00:32] nope have a booster antenna for the mifi device [00:32] fatal error: concurrent map read and map write [00:32] goroutine 1660 [running]: [00:32] runtime.throw(0x18945e0, 0x21) [00:32] bzzt, cannot land my 1.25 fix without fixing gomanta [00:32] fairy nuf [00:32] davecheney: that seems ungood [00:40] rick_h_: thumper cherylj menn0 http://reviews.vapour.ws/r/4507/ [00:40] fix is here [00:40] i need to land that before I can land that other fix [00:46] davecheney: fwiw, in master joyent doesn't use manta anymore, still may need to clean up some old tests [00:52] wallyworld: thanks for that [00:52] i need to land this on master to backport it [00:52] then i'll raise a tech debt card to remove the manta library as a dep [00:52] +1, less things to depend on, wheee! [00:52] davecheney: sure, np. the change to not use manta only landed recently, still scrambling to clean everything up [00:53] but yes, less deps is good. and we now don't need provider storage so long as the provider supports tagging [00:55] cherylj, wallyworld, davecheney, sinzui: launchpad is down so the check blockers script is dying at the start of merge attempts [00:55] really? [00:55] looks like i picked a bad day to quit sniffing glue [00:55] lol [00:56] menn0 cherylj wallyworld sinzui: we're working on it [00:56] ty [00:56] did the hamster die [00:56] blahdeblah: ok great [00:57] * menn0 starts following @launchpadstatus [01:01] you must construct more pylons [01:02] menn0: I can comment out the check [01:03] sinzui: please [01:03] sinzui: that would be good but it might not be enough. juju has deps on packages hosted on launchpad. godeps might complain. [01:03] sinzui: it's worth a try though [01:08] menn0: bzr is working so disabling the check might be enough [01:09] sinzui: great [01:09] menn0: http://juju-ci.vapour.ws:8080/view/Juju%20Ecosystem/job/github-merge-juju/7283/console is retrying now [01:10] sinzui: looks good so far [01:10] spoke too soon [01:10] :/ [01:10] no I just got godeps [01:11] * menn0 nods [01:11] seems to be stuck at godeps [01:11] progress! :) [01:11] sinzui: it seems to be moving... i'll keep an eye on it [01:11] sinzui: thanks!
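The runtime.throw trace pasted above is Go's built-in detector for unsynchronized map access. A minimal sketch of that fault class and the conventional mutex guard - illustrative only, not the juju code that crashed:

```go
// Minimal sketch of the "concurrent map read and map write" fault and the
// usual fix; illustrative only, not the juju code that threw above.
package main

import "sync"

var (
	mu sync.Mutex
	m  = map[string]int{}
)

// unsafeAccess races a writer goroutine against a reader; with enough
// iterations the runtime throws
// "fatal error: concurrent map read and map write".
func unsafeAccess() {
	go func() {
		for i := 0; ; i++ {
			m["k"] = i // concurrent write
		}
	}()
	for {
		_ = m["k"] // concurrent read -> runtime.throw
	}
}

// safeAccess serializes all map access through a mutex (sync.RWMutex is
// the other common choice when reads dominate).
func safeAccess() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			mu.Lock()
			m["k"] = i
			mu.Unlock()
		}(i)
	}
	wg.Wait()
}

func main() {
	safeAccess() // calling unsafeAccess() instead would crash the process
}
```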
[01:12] np menn0 [01:13] launchpad.net appears to be back now anyway [01:21] Bug #1568643 opened: RebootSuite times out if unrelated lxd containers exist [01:25] sinzui, I am pretty sure you are a machine that never sleeps [01:25] thanks for the 1.6 work [01:25] alexisb: I don't sleep well :) [01:26] wallyworld: https://github.com/juju/juju/pull/5065 [01:26] looking [01:26] ^ backport to 1.25 which unblocks my other change landing [01:27] lgtm [01:30] danku! [01:40] anastasiamac: reviewed your branch [01:41] axw: thnx \o/ [02:21] grrr... why has lxd suddenly stopped working? [02:24] menn0, we have lots of lxd bugs going around atm [02:25] there are about 3 criticals against juju-core that are impacting lxd/lxd provider [02:26] axw: did you want to take one more look: http://reviews.vapour.ws/r/4502/ ? [02:26] alexisb: yep I know... lxd just stopped working for me in the middle of a complex test reproduction - annoying [02:26] menn0: what happened? [02:27] cherylj: looks good, thanks [02:27] thanks, axw! [02:27] cherylj: I was deploying the dummy charm to various models under one controller [02:27] menn0: what were you seeing? [02:27] the first machine came up as normal [02:27] the others were stuck in pending [02:27] no activity [02:28] nothing in juju's logs about it [02:28] nothing in the lxd logs that I could find [02:28] oh fun, I've seen that too [02:28] the machines didn't even exist when I ran "lxc list" [02:28] menn0: do the machines exist in the database? [02:28] (mongo) [02:28] yes [02:28] they were showing in status so had to have been in the DB [02:29] it's like juju didn't ask lxd to create the instances, or lxd didn't create them for some reason [02:29] ah, I've hit an issue where the services show up, but no machines to host them were ever created [02:29] that sounds like what I just saw [02:29] I mean, they didn't even exist in mongo [02:31] oh right... in my case the machines did show up in status [02:31] so they must have been in the DB [02:32] cherylj: ^ [03:05] https://github.com/juju/juju/pull/5064#issuecomment-208119816 [03:05] it's been a while since that one failed [03:05] good to see it's as unreliable as ever [03:11] cherylj: are you happy with the response to https://bugs.launchpad.net/bugs/1568602 [03:11] Bug #1568602: Cannot build OS X client with stock Go 1.6 [03:11] we see that report a lot [03:11] and it's always a crapped up go install [03:11] usually by not removing the old version before unpacking the new version [03:12] davecheney: thanks for looking at it. sinzui mentioned he got it working now [03:13] Bug #1568602 changed: Cannot build OS X client with stock Go 1.6 [03:13] Bug #1568654 opened: ec2: AllInstances and Instances improvement [03:14] cool, let me know [03:15] I can provide some suggestions for how to use the upstream tarball concurrently [03:15] if needed [03:15] it's pretty straightforward [03:16] not sure if we'll need that. sinzui ? ^^ [03:16] wallyworld: can you take another look? http://reviews.vapour.ws/r/4504/ [03:16] sure [03:16] build times go up, build times go down. you cannot explain that [03:16] wallyworld: I had to move things around to avoid circular dependencies [03:18] cherylj: lgtm, ty [03:18] wallyworld_: thanks! [03:19] davecheney: getting it working was easy for me but not CI. You were right about more than one tool chain on the host. CI, in an effort to purge the env I set up, discovered the wrong go tool chain. It found the go I used to compile the go 1.6 we place in a special dir away from the system.
All that is resolved now [03:20] okie dokes [03:20] this might be the one time that i actually say "you should set GOROOT" [03:21] but please, don't tell anyone I said that [03:30] Can I get another review? http://reviews.vapour.ws/r/4510/ [03:30] (pretty easy) [03:33] cherylj: LGTM, ship it [03:33] thanks, davecheney! [03:34] I'll have to get in line... lots of merges lined up :) [03:34] menn0: model-migration merge completed! [03:35] boom! [03:36] cherylj: *\o/* [03:39] wallyworld_: provider/joyent/local_test.go: [03:39] 10: lm "github.com/joyent/gomanta/localservices/manta" [03:39] ^ j'accuse [03:39] still some code using gomanta [03:40] should I kill it with spite ? [03:40] davecheney: yes, please, all the manta stuff needs to go away [03:40] davecheney: 2.0 should not import gomanta at all [03:41] hulk smash! [03:42] so what does joyent use now ? [03:44] wallyworld_ or axw: can I get some help with a charm storage / GridFS related issue? [03:44] menn0: sure, what's up? [03:44] depends what it is :-) [03:45] so I fixed this: https://bugs.launchpad.net/juju-core/+bug/1541482 [03:45] Bug #1541482: unable to download local: charm due to hash mismatch in multi-model deployment <2.0-count> [03:45] it turns out the api server had a local cache of charms, which wasn't model specific [03:45] it was easy to fix [03:46] but fixing it has exposed another problem [03:46] if you deploy a local charm to one model [03:46] and then deploy the same charm to another model in the same controller, the unit in the 2nd model can't download the charm [03:47] if the local charm is the same but with a slight modification it works [03:47] but not if it's exactly the same charm [03:47] menn0: was this cache added by something in charmrepo? [03:47] i've added lots of debug logging and everything looks fine [03:47] wallyworld_: no the cache is in the charm download API handler [03:48] ok, there's also one in charmrepo from memory [03:48] wallyworld_: it uses a directory to download charms out of storage into [03:49] wallyworld_: the cache assumes that the contents of a charm with a given charm URL is always the same [03:49] wallyworld_: but that isn't true for local charms across models [03:49] menn0 wallyworld_: I think the "binarystorage" index might not be model-specific, checking... [03:49] menn0: i'm not familiar with the code - why does the api server need a cache of stuff from mongo? [03:50] wallyworld_: good question. the endpoint supports returning file listings and downloading particular files out of the charm archive [03:50] wallyworld_: I guess the expectation is that there will be many API calls for the one charm archive [03:50] right, so a local cache adds very little that i can see [03:51] i guess so [03:51] wallyworld_: at any rate, the cache isn't the problem here [03:51] is the index model specific? [03:51] wallyworld_: the cache was hiding a deeper bug in charm storage [03:51] menn0: sorry had my wires crossed, that's just for tools...
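A hypothetical sketch (invented names, not the real juju handler) of the cache fix menn0 describes: the API server's on-disk charm cache has to be keyed by model UUID as well as charm URL, because the same local: charm URL can hold different content in different models.

```go
// Hypothetical sketch of a model-scoped charm cache key; none of these
// names are the actual juju declarations.
package charmcache

import "path/filepath"

// cachePath returns where a downloaded charm archive is cached on disk.
// Deriving the path from the charm URL alone is the bug described above:
// "local:trusty/mycharm-3" deployed in two models would collide. Adding
// the model UUID keeps each model's archives separate.
func cachePath(cacheDir, modelUUID, charmURL string) string {
	return filepath.Join(cacheDir, modelUUID, filepath.Base(charmURL))
}
```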
[03:52] axw, wallyworld_: the way charms are stored looks fine but there's obviously a problem [03:52] * menn0 prepares a paste [03:54] menn0: the charms collection should be model aware [03:54] wallyworld_: yep it is [03:54] wallyworld_: the problem isn't there I don't think [03:55] that's where the charm doc goes - the url is not model aware but that shouldn't matter [03:55] wallyworld_: it's in the GridFS put/get code I think [03:55] there's a PutForBucket api [03:55] so long as we set the bucket id to be the model uuid that should be enough [03:56] wallyworld_, axw : http://paste.ubuntu.com/15752687/ [03:56] wallyworld_, axw : that's the output from a bunch of debug logging I added [03:57] wallyworld_, axw : does that help? [03:58] so for the second charm post, the GridFS entry appears to get reused [03:58] oh poo [03:58] but during the download attempt for the second model it can't be found [03:58] my maas update seemed to lose all the power settings for the machines [03:58] menn0: that looks like it's using the old blobstore [03:58] should be blobstore.v2 [03:58] with PutForBucket [03:59] this is blobstore.v2 [03:59] not that it should matter i guess in terms of this bug [03:59] ah [03:59] some internal methods were not renamed [03:59] sigh [04:01] Bug #1568666 opened: provider/lxd: non-juju resource tags are ignored [04:01] Bug #1568668 opened: landing bot uses go 1.2 for pre build checkout and go 1.6 for tets [04:01] Bug #1568669 opened: juju 2.0 must not depend on gomanta [04:04] menn0: it doesn't make sense at first glance - the resource path i think from memory is the raw access to the blob - that should not be affected by what happens above with bucket paths etc [04:05] so the resource path should be there - unless something is deleting it [04:05] is it worth putting debug in the remove methods [04:06] wallyworld_: ok, i'll try that. I have put some in some of the cleanup-on-error defers already but they're not firing [04:06] wallyworld_: i'm also checking how things look in the DB [04:07] menn0: so to be sure i understand - the issue is that the second upload correctly creates resource metadata for the new model uuid etc, and it rightly shares a de-duped blob with the first upload, but the attempt to access that blob fails [04:07] and it is failing at the point at which the blob itself is retrieved [04:07] wallyworld_: spot on. that's my best understanding at the moment [04:07] yes [04:07] hmmm, so yeah, that blob should be there unless deleted [04:08] this should all be orthogonal to any bucket uuid stuff [04:08] yeah, it seems to be passing the right path to gridfs .. [04:08] based on the error [04:09] looking at the storedResources collection and the blobstore db directly, everything looks ok [04:10] jeez, well that sucks [04:10] i'll add more debug logging in the lookup side of things [04:11] at least the storage side looks correct - it is using the model uuid correctly [04:11] and de-duping across models [04:15] wallyworld_: it certainly *looks* like it's doing the right thing [04:15] still could be a subtle issue i guess [04:16] yep [04:16] i've just added a lot more logging on the get side of things [04:25] wallyworld_, axw: I think I've found it [04:26] wallyworld_, axw: the GridFS instance is created with the model UUID as the prefix [04:26] wallyworld_, axw: so even though the charm is being requested with the correct de-duped UUID [04:27] wallyworld_, axw: the GridFS prefix prevents it being found [04:27] the namespace?
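The prefix behaviour menn0 has just diagnosed can be shown in isolation with gopkg.in/mgo.v2 - a minimal sketch assuming a local mongod, not juju code. Each GridFS prefix is its own <prefix>.files/<prefix>.chunks namespace, so a blob written under one model's UUID is invisible through another model's handle even with the identical file name:

```go
// Minimal standalone demo of GridFS prefix isolation (assumes a mongod on
// localhost; illustrative only, not the juju blobstore code).
package main

import (
	"fmt"
	"log"

	mgo "gopkg.in/mgo.v2"
)

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	db := session.DB("demo")

	// Store a blob through a handle prefixed with one model's UUID.
	f, err := db.GridFS("model-a-uuid").Create("shared-charm-blob")
	if err != nil {
		log.Fatal(err)
	}
	f.Write([]byte("charm archive bytes"))
	f.Close()

	// A handle prefixed with another model's UUID cannot see it - this is
	// the failed download for the second model.
	_, err = db.GridFS("model-b-uuid").Open("shared-charm-blob")
	fmt.Println(err) // "not found"

	// A handle with the matching prefix finds it fine; hence the eventual
	// fix of one namespace shared by all models.
	_, err = db.GridFS("model-a-uuid").Open("shared-charm-blob")
	fmt.Println(err) // <nil>
}
```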
[04:27] wallyworld_: yes [04:27] does that seem right? [04:27] so maybe the namespace should be the controller uuid [04:27] i think so [04:28] this was all done pre-multi model [04:28] Hi! Am I in the right place to ask about filing a bug against Juju? I'm kind of new to the whole FOSS community thing and I have *no idea* how to use Launchpad. [04:28] arcticlight: sure, ask away [04:29] and welcome [04:30] menn0: does each controller get a different uuid in HA? [04:30] wallyworld_: naively fixing this by changing the charms code to use a statestorage with the controller's UUID will no doubt break existing deployments [04:30] i think so right? [04:30] menn0: it will break existing yes [04:30] wallyworld_: It's actually really simple... Juju doesn't seem to depend on `curl` but goes ahead and uses it anyway. But on Ubuntu Server 14.04 LTS it's not installed by default, or at least it wasn't on my system (I have a fresh install) and `juju bootstrap` blew up. Thought I'd mention it [04:31] wallyworld_: in HA the controller is really the whole cluster [04:32] wallyworld_: there is only one controller model and one UUID [04:32] wallyworld_: so that's not a problem [04:32] arcticlight: yes, it does use curl - to retrieve the tools binaries at bootstrap. i'm surprised 14.04 lts doesn't have it out of the box, but i'm not a packaging guy [04:32] arcticlight: thanks for mentioning, i'll follow up with someone who knows more about ubuntu default packaging and see if we need to fix anything [04:32] wallyworld_: Yeah. I fixed it by installing curl, but it did blow up on a fresh install. I figured someone should know so they can add a dependency on curl. [04:33] wallyworld_: Welcome! [04:33] arcticlight: just ask if there's anything else you get stuck on. lots of folks here who can help [04:34] menn0: so the "easy" option is to use the controller uuid as the gridfs namespace. i'm ok with release notes telling people they need to re-bootstrap between betas, but that's IMHO [04:34] wallyworld_: OK! Good to know. I've been fooling around with MAAS/Juju since around 2013. I'm just really shy lol~ Figured this was simple enough to just pop on and ask about tho [04:35] we don't bite [04:35] unless really provoked :-) [04:36] wallyworld_: what about 1.25 systems? [04:36] menn0: we have sooooooo many things to fix with upgrades there [04:36] this is just one more [04:37] menn0: upgrading from 1.25 will be really, really difficult [04:37] already [04:37] menn0: i suspect we'll be considering migrations :-) [04:37] might be easier in the long run [04:38] wallyworld_: you realise that would mean backporting migrations to 1.25? [04:38] yes, or a form thereof [04:38] that will likely be so much easier than the alternative [04:38] jeez wallyworld_, u made thumper quit with all this upgrades talk [04:38] wallyworld_: I've realised for some time that this was on the cards :-/ [04:38] na, i just bit him hard [04:39] menn0: i think we all have :-) sort of a slowly increasing dread [04:40] dread != anticipation [04:43] dread is more accurate :) [04:43] * menn0 tries the naive fix [04:47] sure... but when u dread problems become harder and motivation disappears... when u anticipate, besides being prepared, there is always something to look forward to.. [04:58] wallyworld_: were you proposing we pass the controller UUID to charm related calls to state/storage.NewStorage() or were you proposing that we use the controller UUID in the call that stateStorage makes to blobstore.NewGridFS()?
[04:59] menn0: I think the latter [04:59] menn0: i think(?) just the latter is all we need [04:59] that would be my preference [04:59] so they're all sharing a common gridfs namespace, and we have model-specific catalogues that point into it [04:59] yup [05:01] wallyworld_: cool that makes sense (the latter, not the former). I misunderstood what you meant the first time around and then realised it didn't make sense as I started to change all the calls to NewStorage() [05:01] oh sorry :-) [05:02] i should have been more clear [05:02] wallyworld_: no it was my bad [05:03] wallyworld_: use the controller UUID over a fixed value? the controller UUID isn't readily available in stateStorage [05:03] menn0: isn't there a helper method on state? you don't have access to that in stateStorage? [05:03] wallyworld_: no it gets a MongoSession, not a State [05:04] wallyworld_: I can rejig things [05:04] where does it get the model uuid from then? [05:04] wallyworld_: or use a fixed value "juju" or "state" or "jujustate" [05:04] the model UUID is passed in with the MongoSession [05:04] 2 args [05:04] so can't we change that to controller uuid? [05:05] menn0 wallyworld_: may as well just use a constant, it doesn't really need to be the controller UUID [05:06] it just needs to be the same for all views on the db [05:06] except if we want to share a mongo instance between controllers [05:25] Anyone know if there are moves afoot to introduce DNS as a Service support into either juju itself or the charm store? [05:26] wallyworld: i've just tried using a fixed GridFS namespace ("juju") in state/storage.NewStorage() and that fixes the problem [05:27] menn0: great. if it's easy to get controller uuid in there that would be good [05:28] wallyworld: your reasoning for that is to avoid any chance of collision in case of a mongodb instance being shared by multiple controllers? [05:28] menn0: yeah [05:28] we just don't know what might need to be supported in the future [05:28] wallyworld_: but that would break anyway... our main db is called "juju" [05:29] and the collection names in it are fixed [05:29] well, we use model uuids though [05:29] i guess "juju" is ok for gridfs [05:30] wallyworld_: I was about to allow my mind to be changed :) [05:30] quick win for now [05:31] menn0: i guess there's pros and cons. maybe if it's easy to do.... [05:31] wallyworld_: if we were to do it then it would probably make sense to have NewStorage just take a *State [05:31] it could then pull the model uuid, controller uuid and session off that [05:31] sound ok? [05:32] it looks like all the call sites have a *State [05:33] menn0: let's use a bespoke interface that just declares the state methods it needs [05:33] pass in the concrete state.State from the caller [05:33] but declare the NewGridFS() method to use a smaller interface [05:34] or just add a controller uuid param [05:34] wallyworld_: ok sounds good. i'll pass in state but via an interface [05:35] blahdeblah: i don't think there's anything coming in core, but i thought the ecosystems/charm guys had something for juju itself they use [05:35] wallyworld_: state doesn't currently have a ControllerUUID method but i'll add one [05:35] menn0: i thought it did? [05:35] wallyworld_: nope.
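The "bespoke interface" agreed above is the standard Go pattern of a small consumer-declared interface that the concrete *state.State satisfies implicitly. A hypothetical sketch - none of these are the real juju declarations:

```go
// Hypothetical sketch of a consumer-defined interface for NewStorage;
// invented names, not the actual juju signatures.
package storage

import mgo "gopkg.in/mgo.v2"

// backingState declares only the state methods the storage layer needs, so
// a *state.State satisfies it without this package importing state.
type backingState interface {
	ModelUUID() string
	ControllerUUID() string // the accessor menn0 plans to add
	MongoSession() *mgo.Session
}

type stateStorage struct {
	modelUUID      string
	controllerUUID string
	session        *mgo.Session
}

// NewStorage pulls the UUIDs and session off st itself, rather than taking
// a model UUID and session as two separate arguments.
func NewStorage(st backingState) *stateStorage {
	return &stateStorage{
		modelUUID:      st.ModelUUID(),
		controllerUUID: st.ControllerUUID(),
		session:        st.MongoSession(),
	}
}
```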
there's ControllerModel and you can get it from there but that's messy [05:35] wallyworld_: there's an unexported controllerTag field so I'll just expose the UUID via that [05:36] like we do for ModelUUID() [05:36] * menn0 bids [05:36] nods even [05:44] wallyworld_: thanks - will ping them [05:50] wallyworld_: I just noticed that state/imagesstore uses a fixed "osimages" prefix [05:51] menn0: ah yes. i think the thinking there was that it is ok to share cached generic lxc images between controllers [05:51] wallyworld_: and state.ToolsStorage uses the model UUID [05:52] hmmm, so tools storage could break the same way maybe? [05:52] * menn0 wonders if ToolsStorage could end up with the same problem where an entry is inaccessible [05:52] haha [05:52] yes I think so [05:52] wallyworld_: it's probably hard to trigger at the moment [05:52] menn0: do we even need a namespace? [05:52] wallyworld_: can you upload tools for a hosted machine [05:53] ? [05:53] um [05:53] i think the controller stores all the tools tarballs for the hosted models, would need to check [05:53] wallyworld_: I guess if you do juju upgrade-juju --upload-tools for 2 hosted models you might hit the same problem [05:54] not sure, but sounds plausible doesn't it [05:54] if you used exactly the same tools [05:54] yes, that's the key - need to be the same [05:54] wallyworld_: does upload-tools recompile the tools [05:54] ? [05:54] depends - if it finds binaries in the path, then no [05:54] is my recollection [05:55] wallyworld_: ok. if it did recompile then the odds of the binary being exactly the same are slim [05:55] yes [05:55] but let's not count on it [05:55] wallyworld_: regarding whether we need a namespace, I think you need to provide something. [05:56] wallyworld_: the docs say the convention when there's only one namespace required is to use "fs" [05:56] wallyworld_: I think it still makes sense to use a namespace for different uses of gridfs [05:56] menn0: so maybe we just use a const "osimages", "tools", "charms" etc? [05:58] wallyworld_: yep. [05:58] wallyworld_: or we use the controller UUID... [05:59] wallyworld_: no, a prefix per use seems better [05:59] menn0: i think not looking at the different usages - namespaces for images, charms, tools etc seems ok [06:00] we still differentiate by model uuid inside each namespace [06:00] wallyworld_, axw: why do ToolsStorage and GUIStorage not use state.NewStorage()? [06:00] what's different about binaryStorage? [06:01] menn0: not sure, i know someone renamed binary storage recently, but can't recall the details [06:02] i can't recall the specifics ottomh [06:02] wallyworld_: ok. well I'll avoid fixing the world. [06:02] not in this pr :-) [06:02] next one [06:04] wallyworld_: thanks for your help and input [06:04] * menn0 is gone for now [06:04] menn0: np, didn't do much :-) [06:04] thanks for fixing [06:04] menn0: I think they probably could. tools storage was written before that IIRC [06:10] Bug #1566345 changed: kill-controller leaves instances with storage behind === urulama__ is now known as urulama [07:05] Bug #1568715 opened: simplestreams needs to be updated for Azure Resource Manager [08:14] dimitern: ping; are you using/testing LXD containers on trusty? [08:19] frobware, hey, yes - I did, but mostly testing with xenial now [08:20] dimitern: I saw you landed the /e/n/i change. It's only half the story though as we need to delete eth0.cfg. [08:20] dimitern: my MAAS/LAN/OTHER seems borked at the moment - having a lot of problems bootstrapping.
[08:23] frobware, ok, I'll propose a fix that deletes eth0.cfg [08:23] dimitern: I was trying to verify on trusty but ran into deploy problems. [08:24] dimitern: I also wanted to understand the order a little better. Who wins for 00-juju.cfg and eth0.cfg? [08:24] frobware, I'm also working on sketching the next steps to fix network config setting properly [08:25] dimitern: I might still see the issue where the MA needs bouncing on xenial before a container gets its correct network profile [08:25] frobware, well, on trusty you need to add "trusty-backports" to /e/a/sources.list and a-g update [08:25] dimitern: tried that... [08:25] frobware, and then a-g install trusty-backports/lxd ? [08:26] dimitern: question is, should that have been done by juju for trusty? [08:28] frobware, yes, i think so [08:28] frobware, wasn't jam doing something about that? [08:30] frobware, hmmm, or maybe IIRC trusty-backports are *supposedly enabled* by default, so those steps were unnecessary? [08:31] I haven't seen a cloud image of trusty where backports is on by default [08:31] dimitern: right; just expected juju to do the business. Only switched to trusty because I had issues with xenial. [08:39] dimitern: was going to do some testing before I land the `rm eth0.cfg' change. Wanted to understand the order of when 00-juju.cfg wins over eth0.cfg to see if we need to down/up all interfaces. [08:41] frobware, sure [08:46] dimitern: I also wonder whether our rm should be a bit smarter. rm all but 00-juju.cfg. [08:47] frobware, alternatively, we can change /e/n/i/ to source interfaces.d/*.juju :) [08:47] or something like that [09:04] * fwereade has a filthy cold, please excuse me dimitern voidspace et al [09:05] fwereade, get well soon! :) [09:16] fwereade: yep, recover quickly [09:22] * TheMue dccs wishes to get well soon to fwereade [10:28] dimitern: frobware: you shouldn't need to directly add trusty-backports as it should be *available* by default (but not active), and "juju bootstrap" should be doing "apt-get -t trusty-backports install lxd" [10:29] if it isn't then we have a bug we should be aware of [10:29] dimitern: I've done several bootstraps on Trusty images on AWS and they work. [10:29] dimitern: now, we've had bugs with *trusty* and "go-1.2" with official releases (juju-2.0-beta3) because go-1.2 doesn't work with LXD [10:30] so it tells you "unknown" or somesuch. [10:30] but Master or "juju bootstrap --upload-tools" should all work, and next release should be built with 1.6 [10:30] jam, I've yet to see a trusty cloud image on maas that has trusty-backports in /e/apt/sources.list tbh [10:31] dimitern: so I'm not sure where it is enabled, but it does work. [10:31] have you tried just doing "apt-get -t trusty-backports install lxd" ? [10:31] dimitern: it should be there, but not at a prio that installs packages unless explicitly selected [10:31] I'm bootstrapping now to confirm, but I have tested it in the past [10:31] hi mgz [10:32] jam, yes, and it says trusty-backports is unknown or something [10:32] dimitern: what cloud/what region/what version of juju? [10:32] jam, mgz, hmm ok I'm trying now again to confirm after updating all images [10:32] jam, maas/2.0 and juju from master tip [10:33] voidspace, I'm getting a lot of "Failed to power on node - Node could not be powered on: Failed talking to node's BMC: Unable to retrieve AMT version: 500 Can't connect to 10.14.0.11:16992 at /usr/bin/amttool line 126."
errors with 2.0 maas [10:33] it works but unreliably [10:33] like 1-2 out of 5 power checks fail [10:35] dimitern: weird [10:35] dimitern: that really sounds like maas bug [10:35] dimitern: I don't hit that because I'm not setting the power type I guess [10:36] dimitern: or maybe it's a new version of amttool in xenial [10:36] or virsh [10:36] either way - maas problem [10:37] voidspace, indeed [10:37] dimitern: "sudo apt-get -t trusty-backports install lxd" works for me on a Trusty image created with 2.0.0-beta3 in AWS (even though Juju 2.0.0b3 wouldn't be able to talk to it with released tools) [10:38] jam, mgz here it is - fresh install of trusty - http://paste.ubuntu.com/15756105/ [10:38] MAAS Version 2.0.0 (beta1+bzr4873) [10:39] jam, mgz, is it possible AWS uses different cloud images than MAAS ? [10:39] dimitern: it does [10:39] dimitern: I'm sure they are different builds, as the root filesystem is different [10:39] but I thought they were supposed to be as much the same as possible. [10:40] dimitern: we need a bug on MaaS and Juju that tracks that [10:40] thanks for noticing. [10:40] well that's kinda crappy ux :/ [10:40] jam, yeah [10:50] jam, looking at http://images.maas.io/query/released.latest.txt and http://images.maas.io/query/daily.latest.txt it looks like the trusty images MAAS is using are 2 years old [10:50] dimitern: that doesn't sound good [10:51] indeed [11:13] morning [11:45] babbageclunk: are you working on the MAAS2 version of maasObjectNetworkInterfaces? [11:45] babbageclunk: pretty sure you are - in which case I'll leave it as a TODO in my branch [11:45] voidspace: yes [11:45] babbageclunk: ok [11:45] babbageclunk: it will need wiring into my branch when done [11:46] voidspace: just pushing and creating a PR for the storage - spent longer than expected bashing my head on golang features. [11:46] babbageclunk: heh, welcome to go :-) [11:47] bbl (~1h) [11:48] jam: trying again; got sidetracked by h/w [11:54] voidspace: https://github.com/juju/juju/pull/5070 [11:56] voidspace: would you prefer I work on finishing off stop-instances or interfaces next? [11:56] babbageclunk: https://github.com/juju/juju/pull/5071 [11:57] babbageclunk: well, StopInstances and then deploymentStatus get us closer to bootstrap [11:57] babbageclunk: bootstrap isn't actually blocked on interfaces yet [11:57] dimitern: frobware: two PRs for you [11:57] dimitern: frobware: http://reviews.vapour.ws/r/4513/ [11:57] voidspace: ok, stop instances next then [11:58] babbageclunk: cool [11:58] babbageclunk: thanks [11:58] dimitern: frobware: http://reviews.vapour.ws/r/4514/ [11:58] voidspace, looking [12:09] voidspace, both reviewed [12:09] * dimitern steps out for ~1h [12:26] dimitern: thanks [12:29] dimitern: that point about the comment is in a test that currently doesn't run (I changed the name to DONTTestAcquireNodeStorage...) [12:29] dimitern: because it needs the work that babbageclunk has done on storage [12:29] dimitern: so that will be updated next [12:30] dimitern: so effectively that whole test is commented out - and there *is* a TODO comment at the start of the test to explain why. 
[12:30] * voidspace lunch === babbageclunk is now known as babbageclunch [13:09] Bug #1568845 opened: help text for juju autoload-credentials needs improving [13:12] voidspace, ok, sgtm [13:39] Bug #1560457 changed: help text for juju bootstrap needs improving [13:39] Bug #1568848 opened: help text for juju bootstrap needs improving [13:39] Bug #1568854 opened: help text for juju create-model needs improving [13:39] Bug #1568862 opened: help text for juju needs improving === babbageclunch is now known as babbageclunk [14:16] morning all [14:18] wotcha katco [14:21] dimitern: did you see the reply from babbageclunk on his PR [14:21] dimitern: he doesn't understand your review comment, and nor do I [14:21] we so missed an opportunity [14:22] our maas feature branch should have been called maaster [14:22] voidspace: amazing [14:22] voidspace, babbageclunk, sorry - I've posted a reply [14:22] voidspace: I mean, amaazing [14:22] the diff misled me I guess [14:23] baabageclunk [14:23] babbageclunk: I like babbagelunch by the way - nice [14:23] voidspace: thanks [14:23] *clunch even [14:24] dimitern: no worries! [14:25] frobware: are you ok with that panic if a test sets up the fakeController without files? [14:25] babbageclunk: panic seems bad [14:26] frobware: what other way is there of failing the test? [14:31] babbageclunk: sorry, sidetracked again. [14:32] panic is ok if it indicates a programmer error, especially during tests. It's sort of a nice "don't do that!" as long as it happens right away with an obvious cause. [14:37] babbageclunk: I replied in the review [14:37] I'll change it to return a generic error with a clear message indicating what the problem is. [14:41] frobware: cool, thanks! [14:46] cherylj: what's the color coding on the bug squad board? [14:46] cherylj: nvm... card type. I see [14:47] :) [14:47] cherylj: I was looking at priority for priority and they were all normal, which was confusing me [14:47] ah [14:48] cherylj: just grab a critical off the top somewhere, or do you have a suggestion? [14:48] natefinch: just pick one of the criticals that interests you :) [14:49] cherylj: ty :) all of moonstone will be on bug squad now [14:56] jam: not sure if you're still about but I had no joy with add-machine lxd:0. See bug #1568895 [14:56] Bug #1568895: Cannot add LXD containers in 2.0beta4 on trusty [14:56] cool, thanks katco. If there are new bugs found in CI, I might need to redirect people :) [14:56] cherylj: we're at your direction [14:56] sinzui: is master running in CI now? [14:56] katco: I'm actually wondering if I need to block things as we're trying to get a release out tomorrow [14:56] :( [14:57] I'd rather not, but we really really need a good run [14:57] cherylj: if we need stability then i'd agree =/ [14:57] cherylj: 1.25 started about 45 minutes ago [14:57] sinzui: could you kill it and run master? [14:58] cherylj: I hesitate to. killing it means waiting for other things to complete and cleanup. [14:58] * sinzui looks [14:58] sinzui: understood [14:58] katco: and technically, master is blocked from a different bug :) [14:58] cherylj: what bug is that? [14:59] cherylj: nm i see it [14:59] katco: one of many that I opened over the weekend for failures [14:59] cherylj: looks like fix committed [15:00] cherylj: by you ;p [15:00] oh snap, I forgot [15:00] cherylj: Can we compromise? it will take an hour for some of the current jobs to complete, but I can see we are halfway through 1.25 tests.
I can make some jobs not retry to get to master faster [15:00] sinzui: did you turn back on the block checker? [15:00] sinzui: yeah, just make them not retry [15:00] cherylj: I can enable it now [15:00] sinzui: thank you [15:01] katco: lp was having issues yesterday, so sinzui disabled the blocking bug check [15:01] so people should get bounced back again once sinzui re-enables if they're trying to merge [15:01] katco: did you guys have anything you needed to add to release notes? [15:02] cherylj: i don't think so, but i'll check with the team [15:02] thanks! [15:03] natefinch: standup time [15:03] Bug #1568895 opened: Cannot add LXD containers in 2.0beta4 on trusty [15:04] katco: oh, sorry, I expected it to be tonight... coming [15:04] frobware: any luck on the networking stuff for lxd? [15:06] tych0: the /e/n/i is partially fixed. need to delete eth0.cfg, but also wanted to understand if I can just do this carte-blanche in cloud-init. [15:06] tych0: sidetracked by the fact that juju/lxd is not installing on trusty (for me at least) [15:07] frobware: ah, i saw that bug. can you just use --series=xenial? [15:07] tych0: yes, but had problems there so I thought... I'll just use trusty... and the rest is history [15:08] tych0: so, yes. back to trusty... but also wary that deleting eth0.cfg might also break precise. [15:08] tych0: correction, back to xenial... [15:13] alexisb: was the cached-images command one on your list to flatten? [15:22] cherylj, yes [15:22] cherylj, I am behind [15:22] voidspace: What does this mean? https://github.com/juju/juju/pull/5070 [15:22] voidspace: AA - did we say disable this across the board, or just for MAAS? [15:22] alexisb: no problem. I'm asking for a friend ;) [15:22] voidspace: that I wasn't up to date with master? [15:25] cherylj: is master blocked on bug #1568312 [15:25] Bug #1568312: Juju should fallback to juju-mongodb after first failure to find juju-mongodb3.2 [15:25] cherylj: nevermind, mup answered my question [15:25] frobware: master really should be blocked with a stabilization bug [15:25] since we're trying to get a release out tomorrow [15:26] cherylj: right. was trying to help answer babbageclunk's question above [15:26] ah, ok [15:26] frobware: Ah, is that bug blocking my merge? [15:27] babbageclunk: yes [15:27] babbageclunk: we want to do a clean release before landing all the maas2 changes, with the current batch of breakage on master dealt with [15:28] mgz: makes sense - so should I hold off until a release is cut? [15:28] babbageclunk: keep stacking those branches up. :) [15:28] frobware: if you need to, create a fork under juju to land on [15:29] frobware: whatever makes it easiest to coordinate your work [15:30] voidspace, babbageclunk: I think we can keep stacking stuff for a day. thoughts? ^^ [15:33] Bug #1568925 opened: Address Allocation feature flag still enabled for MAAS provider in Juju 2.0 [15:34] frobware, voidspace: yeah, no big problem for me. [15:35] cherylj: hey, for bug 1567170 what does "hosted model" mean? aren't all models hosted? [15:35] Bug #1567170: Disallow upgrading with --upload-tools for hosted models [15:35] katco: non-admin model [15:35] sorry, was using old terminology [15:35] cherylj: ahhh ok [15:36] cherylj, katco: how does one, in development, update a model that is not ":admin"? [15:36] cherylj: so that's interesting... the binaries are different for non-admin models?
[15:37] cherylj, katco: I got caught by this on friday, just ended up doing my upgrade-juju in the admin model [15:37] frobware: the idea is that you can do an upgrade to a published (in devel or stable) release [15:38] frobware: a later feature request is "upgrade my model to match what the state server is at" [15:38] cherylj: every time I tried this I ran into "version mismatch". And each time it tried it bumped the version number [15:39] Bug #1568925 changed: Address Allocation feature flag still enabled for MAAS provider in Juju 2.0 [15:42] Bug #1568925 opened: Address Allocation feature flag still enabled for MAAS provider in Juju 2.0 [15:42] is it possible to get the tools from the state machine through http, instead of trying insecure https? [15:45] frobware: babbageclunk: heh, I got mine in just before the block [15:45] frobware, voidspace: Just checking I understand - the bug number jujubot's quoting is a red herring (the fix was merged early this morning), it's just blocking merges until the release is done? [15:45] voidspace: grrr! :) [15:47] babbageclunk: guessing so. cherylj mentioned that there really should be a stabilisation bug [15:47] frobware: ok, got it - cool [15:53] babbageclunk: the block isn't removed by the jujubot until the bug is changed by QA to "fix released" [15:53] babbageclunk: "fix committed" (i.e. merged) is not sufficient to unblock as it hasn't been QA verified yet [15:54] voidspace: ah, thanks [15:54] babbageclunk: this block will probably be left in place (as frobware said) until the release is done [15:54] ooh, networking meeting [16:00] cherylj: How firm is the suggested wording on #1564622? [16:00] Bug #1564622: Suggest juju1 upon first use of juju2 if there is an existing JUJU_HOME dir [16:00] cherylj: (if you know) [16:00] natefinch: it's rough [16:00] natefinch: please feel free to suggest something better. It was on the fly at a sprint [16:01] tasdomas: PTAL http://reviews.vapour.ws/r/4515/ [16:02] rick_h_: ok. Sometimes it's hard to know what's set in stone and what is not. I'll play with it and see what seems to work best. [16:03] Bug #1568943 opened: Juju 2.0-beta4 stabilization [16:03] Bug #1568944 opened: Failure when deploying on lxd models and model name contains space characters [16:03] katco: that bug was more trivial than expected, so I'm going to pick up another (#1456916) [16:04] ericsnow: nice [16:13] can somebody not call HEAD on a state machine endpoint with tools [16:13] no idea if we support HEAD or not... possibly not [16:14] well, wait, which endpoints? It's basically all websocket [16:15] natefinch: tools, charms, backups, and resources all have HTTP endpoints [16:16] ahh, thus the reference to tools... misunderstood :) [16:16] (plus a few others) [16:16] ericsnow: sorry was otp... that's always a nice problem to have :) [16:17] dooferlad: are you working on bug #1456916 [16:17] bug 1456916 looks like a pain to fix well [16:18] there are lots of ways to try and fix it badly [16:18] dooferlad: just noticed it's assigned to you in LP but not on the card in leankit [16:21] babbageclunk: I'm still in the call if you wanted to chat [16:25] frobware: can't get back in for some reason [16:25] babbageclunk: want to drop into the sapphire standup HO instead? [16:26] frobware: can't get there either. [16:27] babbageclunk: do you get a redirect loop?
[16:28] babbageclunk: I've had that and had to force a log back in by going to something else canonical work related [16:28] something else on google I mean [16:28] like docs.google.com [16:28] which should then let you log back in without a redirect loop [16:29] but that may not be your problem at all [16:30] voidspace: yeah, might be something to do with SSO like Jay had. [16:30] Would make sense. [16:31] natefinch: small patch: http://reviews.vapour.ws/r/4515/ [16:31] redir: thanks for the review :) [16:31] :) [16:32] frobware: lxd had a packaging bug in trusty for 2.0.0~rc9 [16:32] frobware: that should be fixed as of 20 minutes ago - Stephan uploaded a bugfix [16:36] jam: great, will try again [16:37] jam: I'm using releases for my trusty images. is that enough or do I need to switch to daily? [16:44] ericsnow: lol wow [16:44] natefinch: yep [16:45] ericsnow: awesome. ship it. [16:45] natefinch: alas, master is blocked now [16:45] doh [16:52] rogpeppe: re: bug #1566431, are you talking about different AWS accounts or different Juju accounts? [16:52] Bug #1566431: cloud-init cannot always use private ip address to fetch tools (ec2 provider) [16:52] rogpeppe: I'm looking at how to repro [16:52] ericsnow: different AWS accounts [16:52] rogpeppe: figured :) [16:54] ericsnow: problem doesn't manifest in us-east, for some reason [17:07] frobware: it should be an archive change. I don't know how long it will take to propagate [17:08] now dimiter's finding about no 'trusty-backports' in maas images is a different bug [17:26] rick_h_: for #1564622 do we actually execute the requested action, or no? like, juju bootstrap is conceivably a valid action at that point. [17:26] Bug #1564622: Suggest juju1 upon first use of juju2 if there is an existing JUJU_HOME dir [17:30] so if master is blocked does that mean I should do something differently? or simply that my commits won't make it to master until unblocked? [17:31] redir: you don't do $$merge$$ till unblock [17:32] mgz: that's what I was looking for :) Thanks. === natefinch is now known as natefinch-lunch [18:05] natefinch-lunch: sorry, was otp, looking [18:07] natefinch-lunch: interesting, I guess you're right there's a list of actions that are legit. [18:10] natefinch-lunch: so if you've got juju1 home dir stuff, and you run your very first juju2 command...I think it'd be ok to block and output it once [18:11] natefinch-lunch: and make them redo the command a second time if they knew they were looking for 2.0 [18:11] natefinch-lunch: hmm, but my suggested text falls over there doesn't it heh [18:16] mwhudson: hey, are you around? [18:17] mm, I would guess not yet [18:21] rogpeppe: what did you do to get the provider to create instances under a different AWS account? [18:24] natefinch-lunch: updated the bug with another round of thoughts. Let me know what you think [18:31] arosales: with your update to bug 1566420 - do your machines show up in the machine section? [18:31] Bug #1566420: lxd doesn't provision instances on first bootstrap in new xenial image [18:32] arosales: or just the services? [18:33] cherylj: replying in #juju as the reporter is there [18:38] cherylj, perrito666 is looking for a task, can he just take any bug off the board or do you have a particular one that needs eyes?
[18:39] perrito666: here's a quick one for you - bug 1568944 [18:39] Bug #1568944: Failure when deploying on lxd models and model name contains space characters [18:39] cherylj: tx a lot [18:40] perrito666: but remember that master is blocked for now :( [18:40] cherylj: I know [18:40] I feel like we should create a feature branch for people to merge into for now [18:40] until we unblock master [18:40] cherylj: nah, the merge back will be a nightmare [18:40] any thoughts (anyone)? [18:40] I do believe that we should do a branch for the release [18:41] perrito666: well, hopefully master won't change [18:41] ah, so the opposite :) [18:41] branch the other thing [18:41] exactly, so master keeps being master [18:41] and if fixes are required for the bless, you can merge just those [18:41] sinzui, mgz, abentley, any thoughts on creating a temporary branch for our 2.0-beta4 release? [18:42] will that cause problems when we have to release / tag? [18:42] cherylj: It won't affect the tag in git [18:43] it's just weird having people who can actually work bugs while we're trying to get ready for a release :) [18:43] cherylj: are people trying to merge post 2.0 features now? [18:44] perrito666: if you want to tackle some of the help text bugs too, I won't say no :) https://bugs.launchpad.net/juju-core/+bugs?field.tag=helpdocs [18:44] cherylj: it seems a little unnecessary from what I've seen of the pending prs [18:44] cherylj: I think it is a good idea. It's less friction for devs. There is a risk that bug fixes won't get applied to the 2.0-beta4 branch, but I think that can be managed. [18:44] not much chance of mass conflicts between them [18:44] cherylj: every time I write text for juju I end up sounding like tonto [18:45] perrito666: the text is there, you just need to copy it in :) [18:45] perrito666: the docs team has been busy writing up help text for all the commands [18:45] oh, then I could :p [18:45] i'll give a try to the lxc one and then go to docs otherwise [18:45] perrito666: I don't understand why you said the first case would be a nightmare. Should be exactly as much effort as merging 2.0-beta4 into master. [18:46] sinzui: post 2.0-beta4 bugfixes [18:46] abentley: why would you merge beta4 into master? [18:46] perrito666: Because it has bugfixes. [18:46] abentley: ideally bugfixes should do bug->master->beta [18:47] or the other way but just the one merge, not the whole branch [18:47] perrito666: I am pretty sure that your policy is bug -> release -> master. [18:47] abentley: yes [18:47] sorry [18:47] we are using github a bit poorly which causes a lot of merge conflicts [18:49] perrito666: If you want to do one merge per bugfix, you can, but it seems inefficient to me. [18:49] abentley: that is what we were doing for 1.24, 1.25, 1.x maintenance [18:51] perrito666: Isn't that just to get bug fixes merged in a timely manner? [18:51] ericsnow: you need to create a model inside the controller that uses different creds [18:52] ericsnow: every model inside a controller has a different set of model attributes [18:52] rogpeppe: I thought I tried that though [18:52] ericsnow: we could chat about this if you like [18:52] rogpeppe: sure === redir is now known as redir_lunch [19:30] cherylj: I would like your input on the lxc bug, I added a question/suggestion, in the meantime i'll go add some help texts :) === thomnico is now known as thomnico|Brussel [19:40] Bug #1563590 changed: 1.25.4, xenial, init script install error race [19:58] perrito666: which lxc bug?
(sorry, been in calls) [19:59] oh ffs [20:00] seriously? [20:00] 2016-04-11 19:38:30 ERROR cmd supercommand.go:448 region "DFW" in cloud "rackspace" not found (expected one of ["dfw" "ord" "iad" "lon" "syd" "hkg"]) [20:01] cherylj: case is important? [20:01] rick_h_: shouldn't be [20:01] cherylj: sorry, I missed the :/ on the end there [20:01] we changed it to be lower case because sabdfl asked us to, but to keep from regressing people, we should convert the region to lower case before trying to use it [20:02] cherylj: quit talking sense :P [20:02] Bug reporting activated [20:04] ses_2: I also see this error coming out of CI: Test recovery strategies.: error: unrecognized arguments: --charm-prefix=local:trusty/ [20:04] http://reports.vapour.ws/releases/3881/job/functional-ha-recovery-rackspace/attempt/472 [20:05] cherylj, alexisb: the other known LXD issue - https://bugs.launchpad.net/juju-core/+bug/1568895 [20:05] Bug #1568895: Cannot add LXD containers in 2.0beta4 on trusty [20:06] cherylj, alexisb: but may be resolved by "tomorrow" if all I'm waiting on is a package update [20:07] frobware: I remember a conversation not too long ago where someone said backports was enabled by default?? [20:07] frobware: maybe juju is overriding this somehow? [20:09] cherylj: let me try just deploying trusty from my MAAS [20:09] * cherylj too [20:13] cherylj: https://bugs.launchpad.net/juju-core/+bug/1568895/comments/6 [20:14] Bug #1568895: Cannot add LXD containers in 2.0beta4 on trusty [20:16] Bug #1569024 opened: Region names for rackspace should accept caps and lowercase [20:16] frobware: looks like maybe we're overwriting it? [20:16] frobware: a fresh deploy through AWS shows it enabled: http://paste.ubuntu.com/15767200/ [20:17] cherylj: so on MAAS only? [20:17] frobware: did you do that through a maas deploy or juju? [20:17] frobware: this was not a juju provisioned machine [20:17] cherylj: what's your /etc/cloud/build.info [20:17] cherylj: neither was mine, just MAAS deployed [20:17] serial: 20160406 [20:18] frobware: interesting [20:18] cherylj: daily or releases image on AWS? [20:19] frobware: daily - I chose the latest from here: https://cloud-images.ubuntu.com/locator/daily/ [20:19] cherylj: going to switch to daily, as I have two "releases" MAAS setups atm [20:19] morning [20:19] o/ frobware [20:19] frobware: still working? [20:20] thumper: only virtually [20:20] thumper: haven't left standup yet :) [20:20] thumper: still talking about.... CONTAINERS! [20:21] frobware: contain yourself! [20:21] Ctrl-D [20:25] ah fudge, should we allow people to specify regions in their cloud definitions that have caps? [20:25] or should we lowercase everything? [20:25] (like we do with users?) [20:26] thumper ^^ thoughts? [20:26] I am confused by the bug in general [20:26] ah... [20:26] hmm [20:27] personally I like lowercasing them [20:27] but [20:27] cherylj: some regions must b caps coz of provider. axw made some changes in the area... [20:27] I don't believe in forcing it if it could cause a problem [20:27] anastasiamac: yeah... was thinking it might be something like that [20:28] so I guess people using rackspace need to just change to use lower case region names?
[20:29] maybe we can handle that internally to the rackspace provider [20:29] * cherylj looks [20:32] cherylj, I will fix that [20:32] thanks, ses_2 === natefinch-lunch is now known as natefinch [20:39] cherylj: https://bugs.launchpad.net/maas/+bug/1554636 - interesting read if I'm just switching between daily and releases [20:39] Bug #1554636: maas serving old image to nodes [20:44] frobware: interesting! I noticed that my trusty images were old too [20:44] (from like 3 weeks ago) [20:56] cherylj: https://bugs.launchpad.net/juju-core/+bug/1568895/comments/7 [20:56] Bug #1568895: Cannot add LXD containers in 2.0beta4 on trusty [20:56] cherylj: I don't see any difference with switching to daily images [20:57] cherylj: neither have trusty-backports by default [20:57] I wonder if curtin screws with it [20:59] cherylj: if I launch a container via lxc (locally on my desktop) then I do see backports listed; not all cloud images are created equally [21:00] perrito666: hi! [21:01] katco: wallyworld: moonstone standup or tanzanite or ? [21:02] natefinch: nothing on the calendar so i was assuming it hadn't been rescheduled yet [21:02] wallyworld_: natefinch: correct, hasn't been rescheduled. natefinch, remember i was going to discuss with wallyworld? [21:03] katco: oh, right. ok [21:03] cherylj: https://bugs.launchpad.net/juju-core/+bug/1568895/comments/8 [21:03] Bug #1568895: Cannot add LXD containers in 2.0beta4 on trusty [21:04] * frobware is done... hopefully the MAAS fairies will point me towards the URL of truth... [21:16] Bug #1569047 opened: juju2 beta 3: bootstrap warnings about interfaces [21:17] mwhudson: hi, so, are we having problems bootstrapping or just testing? [21:20] perrito666: i haven't tried bootstrapping, i don't actually know how to use juju :-p [21:36] cherylj: https://bugs.launchpad.net/juju-core/+bug/1569054 [21:36] Bug #1569054: GridFS namespace breaks charm and tools deduping across models [21:38] cherylj: I've also created a card for it on the bug squad board [21:46] mwhudson: mmm, ok, have you attached the logs to the bug? [21:46] from the test I mean [21:46] Bug #1569054 opened: GridFS namespace breaks charm and tools deduping across models [21:47] perrito666: not the most recent run probably, let me do that [21:49] perrito666: https://bugs.launchpad.net/juju-core/+bug/1567708/comments/11 [21:49] Bug #1567708: unit tests fail with mongodb 3.2 [21:49] that was fast [21:52] Bug #1569054 changed: GridFS namespace breaks charm and tools deduping across models [21:54] perrito666: that was from my run yesterday, i haven't run the tests today... [21:55] tis ok, nothing changed [21:58] Bug #1569054 opened: GridFS namespace breaks charm and tools deduping across models [22:00] thumper, I am running late [22:00] alexisb: ok, just on a call with mpontillo [22:06] thumper, on the HO when you are ready, no rush [22:10] ericsnow: i thought we fixed this? bug 1567170 [22:10] Bug #1567170: Disallow upgrading with --upload-tools for hosted models [22:10] ericsnow: sorry wrong bug. bug 1545116 [22:10] Bug #1545116: When I run "juju resources " after a service is destroyed, resources are still listed. <2.0-count> [22:11] katco: pretty sure we did [22:24] wallyworld_, thumper, cherylj : I've gone for an even simpler fix for the gridfs namespace issue. it's ready now - just doing some manual testing.
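Back on the rackspace region-case question from earlier ("DFW" vs "dfw"): converting the user-supplied region to lower case before matching, as cherylj suggests, would look something like this minimal sketch - not the actual provider code:

```go
// Minimal sketch of case-insensitive region resolution; illustrative only,
// not the rackspace provider's real implementation.
package main

import (
	"fmt"
	"strings"
)

var knownRegions = []string{"dfw", "ord", "iad", "lon", "syd", "hkg"}

// resolveRegion returns the canonical lower-case region name, or an error
// listing the valid regions, mirroring the CLI error quoted above.
func resolveRegion(input string) (string, error) {
	want := strings.ToLower(input)
	for _, r := range knownRegions {
		if r == want {
			return r, nil
		}
	}
	return "", fmt.Errorf("region %q not found (expected one of %q)", input, knownRegions)
}

func main() {
	r, err := resolveRegion("DFW")
	fmt.Println(r, err) // dfw <nil>
}
```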
[22:25] menn0: awesome, ty [22:27] thumper, I dropped off the standup, please ping when you are available [22:27] alexisb: ok, here now [22:27] lol of course [22:46] Bug #1569072 opened: juju2 bundle deploy help text out of date [22:47] mwhudson: is there a way I can get my hands on this? [22:47] perrito666: you mean test on s390x? [22:47] perrito666: the failures occur on intel too though [22:48] mwhudson: ok, so to reproduce this you do what exactly? [22:48] sorry I asked you this already :) [22:48] just making sure I am doing things right [22:49] perrito666: install the juju-mongodb3.2 from ppa:juju/experimental [22:49] perrito666: run JUJU_MONGOD=/usr/lib/juju/mongo3.2/bin/mongod go test github.com/juju/juju/... [22:51] * perrito666 is pretty sure he is about to break something in his desktop [23:03] wallyworld_: http://reviews.vapour.ws/r/4523/ [23:04] looking [23:04] wallyworld_: I went with the approach of making the gridfs namespace the same as the DB name (as per osimages) [23:04] wallyworld_: I came to the conclusion that anything more elaborate was just YAGNI [23:04] yep [23:05] wallyworld_: tested with local and charm store deployments [23:05] mwhudson: I get juju-mongodb3 and juju-mongo3 which is a transitional package [23:07] menn0: lgtm [23:07] wallyworld_: cheers [23:08] perrito666: well you should check with sinzui i guess but i'm preeeeetttty sure juju is jumping from 2.6 to 3.2, not to 3 [23:08] sinzui: care to shed some light on this? [23:08] wallyworld_: wb btw [23:09] it's 3.2 [23:09] mm, there is a juju-mongodb3 package in version 3.2.0-0ubuntu0~juju8~ubuntu15.10.1~15.10 [23:09] perrito666: if you've been making juju compatible with 3.0 that would explain some things :-) [23:09] juju-mongodb3.2 [23:10] (presumably that work is a subset of the work to be compatible with 3.2 so not wasted effort entirely) [23:10] wallyworld_: mwhudson https://pastebin.canonical.com/153956/ [23:10] ah i bet my reply to this is "3.2 is only built for xenial" [23:10] mwhudson: good to know :p [23:11] ok, here goes my machine's stability [23:11] * perrito666 upgrades to xenial [23:11] ah yes, probably [23:11] hah uh, or use a lxd or ec2 instance or something? :) [23:11] i do need to upgrade myself soon [23:12] perrito666: juju-mongodb3.2 is not in the archives yet afaik [23:12] that's the whole issue that has been hanging around for several weeks [23:13] which is why my pr should not have been landed yet [23:13] i am about to upload juju-mongodb3.2 RIGHT NOW!!!one [23:13] wallyworld_: your pr issue has been dealt with [23:13] unfortunately it will then sit in NEW for a while [23:13] I am now trying to see mwhudson's issue [23:14] mwhudson: new or proposed for a while? [23:14] should be out of proposed for xenial pretty quickly?
[23:14] wallyworld_: NEW as in https://launchpad.net/ubuntu/xenial/+queue [23:14] * mwhudson afk for 5 [23:15] thumper or wallyworld_: here's the fix for the original bug: http://reviews.vapour.ws/r/4524/ [23:15] wallyworld_: axw anastasiamac i am trying to find my laptop for the standup be right there [23:15] perrito666: k [23:15] menn0: will look after standup [23:16] wallyworld_: np [23:31] Bug #1569086 opened: Juju controller CA & TLS server keys are weak [23:52] Bug #1569047 changed: juju2 beta 3: bootstrap warnings about interfaces [23:54] axw: re: bug #1566431, unfortunately there's more to it than fiddling with the provisioner-side...tools stuff has to be tweaked as well [23:54] Bug #1566431: cloud-init cannot always use private ip address to fetch tools (ec2 provider) [23:54] ericsnow: yeah, doesn't surprise me :/ [23:54] axw: looks like you had to deal with a related issue last year [23:54] ericsnow: oh? what was that? [23:55] axw: https://github.com/juju/juju/blob/master/apiserver/common/tools.go#L359 [23:55] ericsnow: ah that comment's just about HA really [23:56] axw: the comment applies regardless, I think [23:56] ericsnow: yup [23:57] axw: we do the separate "tools URL" thing for the sake of possibly using alternate tools servers, no? [23:57] ericsnow: we've got a very small window to make a breaking change, if we can fix it... :) [23:57] axw: yeah :) [23:57] ericsnow: can't remember why, sorry [23:57] axw: np === ses is now known as Guest97064 [23:58] ericsnow: I think that is the case, just to encapsulate that logic for the several places where we need to get tools URLs [23:59] it may be redundant if we're just returning all the addresses