[00:04] perrito666: yt? [00:04] redir: always [00:04] great [00:05] so looking at bug 1577776 [00:05] Bug #1577776: 2.0b6: asks for domain-name, then doesn't know what it is [00:05] sorry menn0 wallyworld and I are running over [00:05] * perrito666 clicks [00:05] redir: ok [00:06] You added 'domain-name' to the credential config for openstack [00:06] yes? [00:08] I can add it to the openstack environ schema and provider configurator... but is there more that needs to happen? [00:08] redir: I believe so, for keystone 3 [00:09] Apparently I am getting kicked from the conf room. [00:09] redir: not sure really, the underlying code should be there (I am a bit worried that this bug exists) [00:09] OK. I'll find you later tonight or more likely tomorrow [00:10] perrito666: me too since it seems that something got nuked or was never tested [00:10] bbiab [00:10] or tomorrow [00:10] redir: k [00:16] menn0, are you still available to meet or did you want to skip this week? [00:17] alexisb_: I'm available now. I waited around for a while for you. [00:17] alexisb_: I don't have anything major to discuss except maybe the sprint review workshops. [00:23] menn0, I can join for a bit [00:23] I will hop on [00:23] k, eod, CUALL tomorrow [00:42] bugger [00:42] CreateMutex with a name doesn't give us the semantics we want on windows for the mutex [00:42] * thumper digs some more into windows world [00:48] hmm... [00:48] dog walk time, then I'll try something else [01:06] wallyworld: http://reviews.vapour.ws/r/5072/ -- I'll test with manual now [01:07] ta, looking [01:14] axw: lgtm. 
works on lxd i assume as well [01:15] wallyworld: will lxd test shortly, pretty certain it will [01:16] yeah, looks like it will [01:36] wallyworld: seems there's a different issue with manual, investigating/fixing in a follow up [01:36] ok [01:36] wallyworld: I'm getting "creating hosted model: model already exists" -- possibly the same thing the grnet folks are getting [01:37] axw: was that grnet issue preceding this work then? [01:37] wallyworld: the issue they found was before you landed my changes, if that's what you mean [01:37] yeah [01:38] so good to fix [01:38] well I don't know if it's what they're seeing, but need to fix it anyway [01:38] we'll see [01:41] axw: is there a bug number? [01:42] wallyworld: no, it was on the mailing list. about the synnefo provider [01:42] I'll create a bug for the issue I'm seeing now tho [01:42] ok, ta [01:43] wallyworld: actually I suspect it's different, I think it's to do with detecting regions [01:43] rightio. we should maybe test all providers we can to be sure [01:43] cloud says it has no regions, but if you bootstrap manual/ then that goes in as a region [01:44] altho hm I think we add that in.. anyway, I'll look into it and stop guessing [02:06] Bug #1593033 opened: manual: bootstrapping fails with "creating hosted model: model already exists" [02:33] thumper: simple bit of infrastructure I need for current MM work: http://reviews.vapour.ws/r/5074/ [02:45] wallyworld: http://reviews.vapour.ws/r/5075/ [02:45] looking [02:53] axw: lgtm too but it seems there's a bit of test coverage missing unless i missed it [02:57] wallyworld: yeah, because it's a PITA to inject/test with custom environ providers. I'll see if I can make an incremental improvement... 
[02:58] axw: we can land for the beta and do in a follow up, let's get the merge happening, so long as you've tested by hand [02:59] axw: also, could you add a few lines to the release notes about this and the cloud/cred/addmodel changes, eg for add model a user visible change is that the --credential argument doesn't take a cloud name prefix anymore [03:02] wallyworld: yep. I'll mention --region, but it's not fully baked yet because you still have to specify region in --config [03:02] wallyworld: on account of us still storing cloud/cred details in model config [03:03] no worries, just the minimal so folks know what to do and are given info on what's different. i have to do a bit on shared config also [03:03] axw: also, when you're free, i have pushed that controller uuid change, but i'll land after the beta revision is finalised [03:04] wallyworld: ok. still gotta look into the CreateModel compatibility [03:04] yep, that takes priority for sure [03:12] wallyworld: bleh, GUI uses ConfigSkeleton [03:12] fark [03:12] we'll have to put it back for now and let them know to remove it [03:12] I guess I'll put it back in, in some minimal form [03:26] fuck yeah!! [03:26] got windows working too [03:26] can't use a mutex because of stupid windows thread bollocks [03:27] if the current thread already owns the mutex, any call to wait for that mutex succeeds [03:27] so I'm using a named semaphore [03:27] with a max count of 1 [03:27] that works [03:27] phew [03:28] wow, that could have been a buzzkill [03:33] Bug #1593042 opened: Juju GUI cannot create new models [03:39] davecheney: but windows mutex now has exactly the same behaviour as iOS and linux [03:40] I'm busy adding docstrings [03:40] and a few extra tests [03:40] sgtm [03:45] wallyworld: http://reviews.vapour.ws/r/5076/ [03:45] looking [03:49] axw: was hoping it would be simple like that. 
i did something similar for deploy with adding vs not adding charms [03:55] wallyworld: the existing release notes don't say anything about a cloud prefix on --credential [03:55] well, looks like that omission is now fixed :-) [03:56] wallyworld: on second thoughts, I think we should just be silent about --region until it's fully supported [03:56] ok [04:15] thumper: migration minion now reporting to master: http://reviews.vapour.ws/r/5077/ [04:15] meetingology: ool [04:15] thumper: Error: "ool" is not a valid command. [04:15] oops [04:16] ugh [04:16] meetingology: cool [04:16] thumper: Error: "cool" is not a valid command. [04:16] menn0: cool [04:16] who asked meetingology along anyway [04:16] thumper: it should have been broken up into smaller PRs, but it's not actually that big overall [04:16] thumper: what is meetingology ? [04:16] a bot by the look of it [04:16] I got that :) [04:17] meetingology: help [04:17] menn0: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin. You may also want to use the 'list' command to list all available plugins and commands. [04:17] meetingology: list [04:17] menn0: Admin, Channel, Config, MeetBot, Misc, NickAuth, NickCapture, Owner, and User [04:17] meetingology: Owner [04:17] menn0: Error: "Owner" is not a valid command. [04:17] meetingology: help Owner [04:17] menn0: Error: There is no command "owner". However, "Owner" is the name of a loaded plugin, and you may be able to find its provided commands using 'list Owner'. [04:17] meetingology: list Owner [04:17] menn0: announce, defaultcapability, defaultplugin, disable, enable, flush, ircquote, load, logmark, quit, reload, reloadlocale, rename, unload, unrename, and upkeep [04:41] davecheney, axw, menn0: https://github.com/juju/mutex/pull/1 [04:42] thumper: will look later. i've got to pick up my daughter from drama class. 
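thumper's workaround above ([03:26]-[03:27]) replaces the re-entrant Windows named mutex with a named semaphore capped at a count of 1: a semaphore does not let its current holder acquire it a second time. A minimal, portable Go sketch of that semaphore-as-mutex pattern, using a buffered channel — a hypothetical illustration, not the actual juju/mutex code:

```go
package main

import "fmt"

// binSem is a semaphore with a maximum count of 1. Unlike a Windows
// named mutex, acquiring it is never re-entrant: a second acquire
// fails (or would block) even from the current holder, which is the
// behaviour the discussion above wants.
type binSem chan struct{}

func newBinSem() binSem { return make(binSem, 1) }

// TryAcquire attempts to take the single slot without blocking.
func (s binSem) TryAcquire() bool {
	select {
	case s <- struct{}{}:
		return true
	default:
		return false
	}
}

// Release frees the slot for the next acquirer.
func (s binSem) Release() { <-s }

func main() {
	s := newBinSem()
	fmt.Println(s.TryAcquire()) // first acquire succeeds: true
	fmt.Println(s.TryAcquire()) // second fails even from the same holder: false
	s.Release()
	fmt.Println(s.TryAcquire()) // true again after release
}
```

The real implementation would use a *named* OS-level semaphore so the exclusion works across processes; the channel version only shows the max-count-1 semantics.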
[04:42] kk [04:43] * davecheney looks [04:43] hey axw - got a sec for a bug question? [04:43] cherylj: yup [04:44] I'm looking at bug 1592887, and the behavior the reporter is describing is what I would expect to happen [04:44] Bug #1592887: juju destroy-service deletes openstack volumes [04:44] wanted to sanity check [04:45] thumper: https://github.com/juju/mutex/blob/192dc9c6d06c8353820d24757b3ed3150ccbae6c/impl_flock.go <-- it's probably a good idea to say in the docs that a name is scoped to user ... but really I don't think we should do that. that's different behaviour to the other implementations [04:45] thumper: the caller can always put $USER in the name if they want to [04:45] cherylj: looking [04:45] I think that could be fine [04:45] thx [04:46] axw: there is a PR there [04:46] it can have comments :) [04:46] cherylj: expected behaviour. there was always an intention of adding support for disassociating volumes from the lifetime of a service, but never fully implemented [04:47] thanks, axw! [04:51] does anyone know if there's an equivalent json tag for yaml's 'inline'? 
[04:53] axw, davecheney, menn0: I'll let you argue for a bit about the minor details, but I'm -1 on adding panics [04:53] * thumper is heading to BJJ [05:06] Bug #1592832 changed: enable-ha embeds ModelCommand but should be controller-specific [05:06] Bug #1592887 changed: juju destroy-service deletes openstack volumes === rharper` is now known as rharper [05:15] axw: found another problem if i have a clouds.yaml with a custom lxd cloud, validating initialization args: validating cloud credentials: credential "" with auth-type "empty" is not supported (expected one of []) [05:15] le sigh [05:16] wallyworld: we should check for empty AuthTypes list as well as one that includes "empty" [05:16] yeah, that sounds ok doesn't it [05:16] wallyworld: you can have auth-types: ["empty"] [05:16] but we should relax that [05:16] sgtm, cover all cases [05:18] wallyworld: currently knee deep in adding cloud tests, can you please file a bug if you're not looking to fix it [05:18] sure, will probs just chuck up a quick fix [05:19] wallyworld: state/cloudcredentials.go is one place to fix, not sure if there are others [05:19] yup, i'll test manually [06:02] wallyworld: http://reviews.vapour.ws/r/5073/diff/# -- why has bundlechanges changed? [06:03] axw: they merged the service->app rename PR and so we now pull from master and not the feature branch [06:03] same code [06:03] ok [06:07] axw: this fixed immediate issue but now i get a "model already exists" error, need to investigate that http://reviews.vapour.ws/r/5081/ [06:08] wallyworld: failed txn assertion when adding a model [06:08] wallyworld: the error is misleading, basically means the assertions are invalid [06:08] rightio, so fix not perfect :-) [06:08] wallyworld: did you pull master? 
[06:08] yup [06:10] wallyworld: you need to not add empty to the requiredAuthTypes below [06:10] wallyworld: well really it should be an assertion of len(auth-types)==0 || includes "empty" === cmars` is now known as cmars [06:11] ok, wasn't sure about adding or not [06:11] wallyworld: perhaps it would be simpler if we just require that all clouds have a non-empty AuthTypes [06:11] wallyworld: and when we bootstrap we add ["empty"] if there's nothing [06:12] that sounds like a bit of a pita when defining clouds [06:12] i'll make this quick fix now and we can take a view [06:12] wallyworld: no, we'll just add one on the way in [06:12] wallyworld: rather than complicating the assertions everywhere else [06:12] ah, right, i see [06:13] i'll try that [06:47] axw: pushed fix to add auth types at bootstrap, seems to work [06:47] wallyworld: thanks, still going through your other one [06:48] axw: np, sorry, it's large in scope. this other one though is a few lines - i'd like to get it landed asap if possible [06:48] so we can get CI done for release [06:51] wallyworld: reviewed [06:51] (the small one) [06:51] ta [07:05] wallyworld: reviewed the other one too [08:17] axw: thanks for reviews, was otp so missed them. about to head to soccer. the latest commit to master is in CI now (the empty creds one). but the one before which should have had all manual / lxd fixes looks to be failing on manual still http://reports.vapour.ws/releases/4061 [08:18] axw: can you take a look and fix any issue, i'll check back in after soccer; needless to say if there is an issue we need a fix asap [08:36] Hello [08:36] I'm facing a problem. 
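The auth-types rule axw and wallyworld settle on above has two halves: accept a cloud when its auth-types list is empty or contains "empty" (len(auth-types)==0 || includes "empty"), and normalise at bootstrap by adding "empty" on the way in rather than complicating assertions everywhere. A rough Go sketch of both halves — the helper names are hypothetical, not juju's actual code:

```go
package main

import "fmt"

// allowsEmptyAuth reports whether a cloud may be used with no
// credentials: either no auth-types are declared at all, or "empty"
// is one of them (len(authTypes)==0 || includes "empty").
func allowsEmptyAuth(authTypes []string) bool {
	if len(authTypes) == 0 {
		return true
	}
	for _, t := range authTypes {
		if t == "empty" {
			return true
		}
	}
	return false
}

// normaliseAuthTypes applies the simpler approach discussed above:
// add "empty" on the way in when nothing is declared, so downstream
// assertions only ever see a non-empty list.
func normaliseAuthTypes(authTypes []string) []string {
	if len(authTypes) == 0 {
		return []string{"empty"}
	}
	return authTypes
}

func main() {
	fmt.Println(allowsEmptyAuth(nil))                  // true
	fmt.Println(allowsEmptyAuth([]string{"userpass"})) // false
	fmt.Println(normaliseAuthTypes(nil))               // [empty]
}
```

Normalising once at bootstrap keeps the mongo txn assertions simple, which is exactly the trade-off wallyworld opts for at [06:13].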
[08:36] 2016-06-16 08:36:27 DEBUG juju.api apiclient.go:500 error dialing "wss://[fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070/model/3923551f-dcf9-4ca8-8b32-dc010722721b/api", will retry: websocket.Dial wss://[fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070/model/3923551f-dcf9-4ca8-8b32-dc010722721b/api: dial tcp [fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070: getsockopt: connection refused [08:37] How to solve this? [08:37] I rebooted my machine many times with no luck [09:07] frobware: dimitern: standup [09:08] Yash: it looks like your machine is trying to reach that host on ipv6 but either the juju server isn't listening on ipv6 or it's blocked [09:14] ok [09:14] I disabled ufw [09:15] Is that a problem? [09:15] I ping ipv6 address but not that port [09:15] nmap failed to connect [09:18] davecheney : Please suggest how to solve? [09:23] wallyworld: the issue is that https://bazaar.launchpad.net/~juju-qa/+junk/cloud-city/view/head:/clouds.yaml contains an entry called "manual" [09:23] which does not have any regions [09:23] wallyworld: it was working before by luck, not by design [09:24] babbageclunk: buy this and I'll buy one (or two) off you http://www.ebay.co.uk/itm/252426716669?_trksid=p2060353.m1438.l2649&ssPageName=STRK%3AMEBIDX%3AIT [09:24] wallyworld: we only do the auto-detect thing if there's no cloud with the given name [09:30] voidspace: They just seem like the ground effect lights that boy-racers put on their car so they can drive around the square in Palmerston North. [09:31] babbageclunk: But for your living room! [09:31] Gah, dumb brain. [09:31] pimping up your living room? :) [09:31] dimitern: exactly [09:32] Yash: What does sudo lxc list show? [09:32] Yash: Does that container have an ipv4 address as well, or just an ipv6 one? [09:36] Something I said? [09:36] maybe he's from palmerston north [09:36] :) [09:37] * babbageclunk lols [09:40] babbageclunk: thanks for that [09:41] admcleod: Well, I don't think I was much help. Hopefully they come back. 
[09:41] mgz, sinzui, balloons: can one of you please remove the "manual" entry in https://bazaar.launchpad.net/~juju-qa/+junk/cloud-city/view/head:/clouds.yaml -- it's interfering with the ability to use the manual/ syntax in bootstrap [09:45] dimitern, voidspace: Go testing question - when I run go test ./... at the top level, am I right to think that each of the packages' tests are run as separate processes? [09:46] If I did something in juju/testing/mgo.go:init(), it would still run multiple times across the whole test suite. [09:48] babbageclunk: go test ./... just recurses into each subdir, doing the same it does otherwise - i.e. build a .test binary in a subdir then run it [09:48] babbageclunk: how did you get your canonical hostname irc cloak? [09:49] dimitern: Ok, thanks - that's what I thought. [09:50] or, anyone [09:50] admcleod: ? They asked me what irc name I wanted when I was joining. [09:50] hrm ok [09:50] admcleod: Hang on - what do you mean irc cloak? [09:50] admcleod: he's in bluefin, but otherwise do a full-text search on wiki.c.c for "cloak" [09:50] admcleod: there's a separate process for freenode and canonical IIRC [09:51] dimitern: yeah nothing on the wiki that i can find [09:51] babbageclunk: when you whois someone, what should be their hostname/ip address is cloaked/hidden [09:51] https://wiki.canonical.com/StayingInTouch/IRC/FreeNodeRegistration?highlight=%28cloak%29 [09:51] clearly i dont know how to use the search box [09:51] thanks :) [09:51] ;) [09:53] babbageclunk: a cloak can mask your source address for /whois on IRC [09:53] dimitern: oh right. enter does a title search. [09:53] admcleod: I don't think I have one. [09:54] babbageclunk: you do :) all bluefin folk have one by default [09:54] babbageclunk: [babbageclunk] (user@nat/canonical/x-zxxacbyqsikrawst) [09:54] admcleod, dimitern: ah. cool! [10:00] babbageclunk: i believe kjackal may have a question for you also [10:00] hi babbageclunk [10:00] kjackal: hi! 
[10:00] I am on juju beta8 [10:01] http://pastebin.ubuntu.com/17393462/ [10:01] Deployed a machine (with kafka but that does not matter) and the fqdn is not resolvable/pingable [10:02] is this expected? [10:02] kjackal: What are you deploying to? [10:02] solly yes, lxd [10:02] sorry [10:02] kjackal: realised that after looking at the pastebin, sorry. [10:03] kjackal: nslookup is dns server specific, it doesn't look at /etc/hosts [10:03] admcleod: yes true! [10:04] kjackal: I'm not sure whether we'd expect the container to be resolvable. dimitern? [10:04] Actually! After some time it gets resolvable and pingable! [10:04] kjackal: What's the broader problem? [10:04] http://pastebin.ubuntu.com/17393499/ [10:05] Yash: welcome back! Did you see my question above? [10:05] So, the fqdn is not resolvable immediately after the container comes up [10:05] babbageclunk: ^ [10:06] kjackal: Sure - but is that actually causing a problem? [10:07] babbageclunk: Yes because if i try to start immediately after I get the machine a service (in my case kafka) it will fail because all the handshakes with other services will fail and in the case of kafka the fqdn is used internally for figuring out the interfaces to listen to [10:07] no [10:07] kjackal: (I mean, it might be, but I just want to know what the context is - what's the problem you're trying to solve?) [10:08] babbageclunk: Now I'm cleaning the machine. :( [10:08] 10:32 Yash: What does sudo lxc list show? 
[10:08] > 10:32 Yash: Does that container have an ipv4 address [10:08] Yash: :( [10:09] 10 machines with ip4 and ipv6 ips [10:09] all are running [10:09] kjackal: Ah, ok - thanks [10:09] my exact problem: starting kafka without its fqdn resolvable fails throwing an unknown host exception [10:09] I can't ssh ubuntu@publicip [10:09] connection refused [10:10] also I think that if I have two services (eg kafka and zookeeper) trying to communicate using their fqdns will fail [10:10] babbageclunk: ^ [10:11] kjackal: Sorry, I don't think I'm the right person to answer this (I'm a newbie here). dimitern, any ideas? [10:12] babbageclunk: I appreciate your help [10:13] kjackal: Wish I could be more helpful! [10:20] Yash: so you get connection refused from ssh, rather than permission denied? [10:26] kjackal, babbageclunk: sorry, was afk for a bit [10:27] kjackal, babbageclunk: it looks like either a lxd issue or juju-using-lxd-incorrectly I guess [10:27] kjackal: what's inside /etc/resolv.conf inside the container? [10:28] just a sec [10:28] kjackal: it might be relevant whether this is a trusty or xenial container.. [10:28] Yash: Can you get into the container using "sudo lxc exec -- /bin/bash"? [10:29] dimitern: http://pastebin.ubuntu.com/17393776/ [10:30] kjackal: ok, that looks good - do you have another lxd that cannot resolve itself? [10:30] I have to spin up a new one [10:30] what do you want me to check there? [10:30] the resolv.conf? [10:31] kjackal: yeah, and also pinging e.g. google.com [10:31] dimitern: ^ [10:31] Ok doing it now, will ping you in a moment [10:31] kjackal: btw the /etc/hosts does not have lxd's own hostname.. grr looks like bug 1513165 but on lxd rather than maas [10:31] Bug #1513165: Containers registered with MAAS use wrong name [10:32] kjackal: just to clarify - what version of juju are you using? [10:32] 2.0-beta8-xenial-amd64 [10:33] kjackal: can you please file a bug like the above, so we can track it separately? 
[10:34] yes I will do that [10:34] kjackal: please include a paste with /etc/hosts, /etc/resolv.conf, and /etc/network/interfaces, ideally for a container that works ok (like that one above) and a not working one [10:35] kjackal: thank you! [10:35] the ubuntu lxd container came up fine [10:36] kjackal: :/ yeah.. it looks like one of those lxd race conditions.. [10:36] dimitern: here is what I got this time http://pastebin.ubuntu.com/17393827/ [10:36] and is resolvable immediately [10:37] cool, let me file this bug and i will try again to repro, to see what the resolv.conf is [10:37] kjackal: ok, try deploying the charm that had the issue I guess? [10:37] kjackal: awesome! thanks, and sorry for the issues :/ [10:37] yes, that is the plan. Do not mention it, this is why we are here for [10:39] kjackal: hey, I've just "groked" your nick btw :) - how's the provider going? [10:39] :) "how is it going" in what sense [10:39] ? [10:40] kjackal: we should try and help you guys with any issues along the way.. it's a bit of a rough path, but you're trailblazers :) [10:40] kjackal: well, I've been looking at the ML for progress [10:40] just curious [10:42] dimitern: I am looking at bug https://bugs.launchpad.net/juju-core/+bug/1513165 [10:42] Bug #1513165: Containers registered with MAAS use wrong name [10:42] in the lxd case both juju-095f51-2.lxd and juju-095f51-2 are resolvable [10:43] dimitern: Should I still open a bug? [10:43] so the dnsmasq instance listening on lxdbr0 is handling DNS requests [10:44] kjackal: yes please, esp. if you find a way to reproduce the issue [10:45] they are both pingable http://pastebin.ubuntu.com/17393893/ but we also need the /etc/hosts to include the fqdn [10:48] kjackal: yeah, sounds like https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1574844 [10:48] Bug #1574844: juju2 gives ipv6 address for one lxd, rabbit doesn't appreciate it. 
[10:49] kjackal: not quite, but the same issue - not having the lxd's address and hostname in /etc/hosts [10:50] kjackal: so filing a bug about this will be appreciated! [10:54] here you go dimitern https://bugs.launchpad.net/juju-core/+bug/1593185 [10:54] Bug #1593185: In lxd the containers own fqdn is not inclused in /etc/hosts [10:55] kjackal: thank you! [10:58] dimitern: how is it different from deploying a local charm from fetching it from the store in terms of name resolutions, could this be related [10:58] I deployed my test kafka charm from local and is not resolved... [10:58] kjackal: it shouldn't matter where the charm comes from [10:58] ah wait, I am deploying in trusty series! [10:59] let me try deploying ubuntu from the store trusty series [11:02] I think we have found something dimitern, ubuntu trusty series not resolvable [11:03] kjackal: with a local charm though, you *could* use a similar workaround jamespage did for rabbitmq-server - i.e. update /etc/hosts to include a line with the IP address (returned by `unit-get private-address` in the charm) and the `hostname` (short and fqdn) [11:03] nameserver 10.173.130.1 [11:03] kjackal: no search? [11:03] search lxd [11:04] dimitern: this kind of workaround is not good, because the hostname has to be resolvable by both itself and by others! [11:04] kjackal: what does `nslookup juju-xxx.lxd 10.173.130.1` return? [11:05] http://pastebin.ubuntu.com/17394126/ [11:05] kjackal: and with the .lxd suffix? [11:06] same [11:06] kjackal: odd.. can you ping the dns ip? [11:07] yes 10.173.130.1 is pingable [11:07] kjackal: also, can you paste the output of `ip -d route show` ? [11:08] http://pastebin.ubuntu.com/17394152/ [11:09] kjackal: ok, I suspect there might be a lingering `dhclient` process on eth0 due to /etc/network/interfaces.d/eth0.cfg being present.. [11:09] kjackal: can you please paste /etc/network/interfaces on the container? 
[11:10] Bug #1593185 opened: In lxd the containers own fqdn is not inclused in /etc/hosts [11:10] Bug #1593188 opened: Include complete information in Client.CharmInfo API call [11:10] Yes there is a process http://pastebin.ubuntu.com/17394175/ [11:10] dimitern: ^ [11:10] http://pastebin.ubuntu.com/17394184/ [11:11] dimitern: the interfaces ^ [11:11] kjackal: yeah :/ I suspect you'll have better outcome with a more recent version than beta8, e.g. from the daily ppa [11:12] awesome! there is an issue already resolved? [11:13] kjackal: that one with eth0.cfg messing up things should be, not the missing /etc/hosts line [11:13] kjackal: but unfortunately the daily ppa seems out of date - last build was on June, 8, so it won't have the fix [11:14] Beta 9 should have the fix right? [11:14] kjackal: would you mind trying to build juju from source? [11:14] yeah, once it's out - which, I'm told should be by end of this week [11:15] dimitern: Ah I always wanted to do that!!! (compile from sources) [11:15] going to grab something to eat and will start after that, ok? [11:15] kjackal: ok! always good to have early feedback :) [11:16] kjackal: ok, try to follow the steps in the README.md on https://github.com/juju/juju [11:16] I'll help if needed [11:24] babbageclunk: can you open LKK now? [11:24] babbageclunk: ah, sorry - it's back up apparently [11:25] axw: thanks for investigating, i'll send an email [12:11] morning [12:15] perrito666: morning [12:15] perrito666: what country are you in? [12:15] voidspace: argentina, why? [12:16] perrito666: ah, there's a chance I might be visiting Brasil later this year and I couldn't remember if you were Brasil or Argentina [12:16] sorry :-) [12:16] I am actually on a bus traveling from my city to a much smaller city [12:17] sounds like fun [12:17] why? 
[12:18] well some family issues (and also an excellent chance to stress test my lte modem) [12:19] perrito666: ah, sorry about the family issues [12:22] perrito666, wallyworld, if you're both around: can you talk me through the two collections it looks like we'll be using? [12:24] perrito666: Want to test your modem with a review? (It's the Mongo version detection.) http://reviews.vapour.ws/r/5084/ [12:24] perrito666: (Hope the family is ok) [12:25] babbageclunk: going, family is ok, just annoying :p [12:26] perrito666: Oh good. Mine too! [12:28] fwereade: modeluser collection is from the initial multi-model work - it is like a sql join table between models and users. the permissions collection records an access privilege for a user on a target (model/controller) [12:34] wallyworld, how does the latter not encompass the former? [12:34] wallyworld, I'm all for the perms collection, but I think it renders modelusers redundant [12:35] the former is who can access a model at some level. the permissions collection is generic and used to record permissions on various targets (controller, model) for various source entities (users, groups) [12:36] modelusers records things like last access time etc (from memory) [12:36] not permission related [12:39] wallyworld, ok, so we're dropping Access from modeluser? and requiring that we create a perms doc for each modeluser? 
[12:39] yep [12:39] wallyworld, ok, thanks, makes sense now, sorry [12:39] perrito666, ^^ [12:40] separation of concerns and all that [12:40] and separate mongo docs to avoid write locks [12:40] wallyworld, yeah, absolutely [12:40] wallyworld, tyvm [12:40] ok, you saved me the explanation tx :) [12:40] np, glad we got the design right :-) [12:40] wallyworld, I had it in my mind that last-connection and stuff were elsewhere [12:40] cheers [12:41] thanks for asking the question, always good to be sure [12:43] Hey dimitern, it seems the problem still exists in juju beta 9 [12:44] Compiling juju from source still results in unresolvable fqdns in trusty containers [12:46] kjackal: how about dhclient - is it still there? [12:47] yes it is still here :( [12:47] kjackal: and can you paste the /etc/network/interfaces? [12:49] dimitern: http://pastebin.ubuntu.com/17395368/ at the very bottom [12:51] you were suspecting this would be fixed right? do we have a ticket for me to read so that I understand what is happening? If I understand this correctly the dhcp handshake hangs and therefore the name resolution service on the host is not updating its entries [12:51] kjackal: ok, that's not good - did you bootstrap with --upload-tools ? [12:52] dimitern: yes http://pastebin.ubuntu.com/17395425/ [12:53] axw wallyworld Do we still need to remove manual from clouds.yaml to mark bug 1593033 fix released? 
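Returning to the modeluser/permissions design wallyworld and fwereade settle at [12:28]-[12:41]: modelusers keeps per-user-per-model bookkeeping (last access time and the like) while a generic permissions doc records an access level for a source entity on a target, as separate mongo docs to avoid write contention. A hypothetical sketch of the two doc shapes — field names are illustrative, not juju's actual schema:

```go
package main

import "fmt"

// modelUserDoc: the "sql join table" between models and users, keeping
// non-permission bookkeeping such as last access time. Illustrative only.
type modelUserDoc struct {
	ModelUUID  string
	User       string
	LastAccess string
}

// permissionDoc: a generic access record for a source entity (user,
// eventually group) on a target (model or controller). Keeping it in
// its own doc separates concerns and avoids write locks on modelusers.
type permissionDoc struct {
	Subject string // e.g. a user name
	Target  string // e.g. "model-<uuid>" or "controller-<uuid>"
	Access  string // e.g. "read", "write", "admin"
}

func main() {
	p := permissionDoc{Subject: "bob", Target: "model-deadbeef", Access: "write"}
	fmt.Printf("%s has %s on %s\n", p.Subject, p.Access, p.Target)
}
```

Dropping Access from modeluser then means every modeluser is paired with exactly one permission doc, which is the invariant agreed at [12:39].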
[12:53] Bug #1593033: manual: bootstrapping fails with "creating hosted model: model already exists" [12:53] kjackal: the stanza 'source /etc/network/interfaces.d/*.cfg' pulls in that eth0.cfg causing the dhclient to come up and acquire a DHCP lease, even if there's already a static address for eth0 [12:54] sinzui: that bug is a different root cause, the manual in clouds causes CI failures [12:54] the root cause for that bug should be fixed [12:54] wallyworld: okay, I'll report a separate bug for clouds.yaml [12:54] dimitern: babbageclunk: actually turned out to be easy to test [12:54] dimitern: babbageclunk: http://reviews.vapour.ws/r/5085/ [12:55] sinzui: ok, and if we retest the manual deployments, it should work with manual removed [12:55] kjackal: hmm, can you check what's the commit hash in $GOPATH/src/github.com/juju/juju/ ? [12:55] git log -n1 in there.. [12:56] voidspace: cheers, will have a look shortly [12:57] dimitern: http://pastebin.ubuntu.com/17395483/ [12:58] kjackal: if you try running `go install -v github.com/juju/juju/...` does it rebuild anything? [12:59] kjackal: it's also worth noting the first jujud binary in $PATH (`which jujud`) will be uploaded with --upload-tools, and it might not be the one you built but the system-wide one from the package [13:02] dimitern: the jujud binary: /home/jackal/workspace/gogogo/bin/jujud [13:02] dimitern: go install -v github.com/juju/juju/... did not build anything [13:03] kjackal: ok, just wanted to double check the binary is correct.. [13:04] * dimitern *facepalms* [13:04] kjackal: sorry, I forgot the issue I was thinking about is fixed, but for containers on maas provider and others, not the lxd provider [13:05] kjackal: so, it's not fixed, but we'll get to it soon [13:06] dimitern: do we have a ticket we can monitor. 
We have a couple of charms that we will need to review as soon as this issue is resolved [13:08] kjackal: yeah - that one you filed is already triaged as high and affecting 1.25 and master [13:10] bbl [13:11] So you are saying if I add the hostname in the hosts file all will be well? [13:16] dimitern: I do not understand the mechanism, but I trust you [13:16] thanks [13:18] kjackal: well, assuming inside the container eth0 has IP 10.20.30.42, if you add a line `10.20.30.42 juju-767941-0 juju-767941-0.lxd` in /etc/hosts after the line about 127.0.0.1, resolving should work inside the container [13:19] Bug #1593221 opened: remove manual from clouds.yaml [13:19] yes, from within the container resolving the juju-767941-0.lxd would work. However will that fqdn be resolvable from others? [13:22] dimitern: Kafka talks to Zookeeper and says "Hey zookeeper, I am a new kafka unit so that you know I am juju-767941-0.lxd", then Zookeeper talks to Kafka and says "Nah, you are nobody, I cannot find you on DNS" [13:24] kjackal: so long as both kafka and zookeeper use the same dns server, and the server works, it should be fine [13:25] kjackal: the issue is the dnsmasq seems to not recognize the hostnames we set inside the container [13:25] dimitern: ok, thank you! [13:25] hmm... that's the real problem [13:26] kjackal: can you compare what you see as hostname in /etc/hostname and in /var/log/cloud-init-output.log during the initial boot in the container? [13:29] what am I searching for in cloud-init-output? [13:29] I can see The key fingerprint is: [13:29] 35:24:ce:24:69:5b:89:d5:76:35:51:ab:67:1b:88:57 root@juju-767941-2 [13:29] kjackal: search for 'hostname' - from the top I guess [13:30] the "juju-767941-2" is the /etc/hostname [13:30] kjackal: either in /v/l/cloud-init-output.log, or in /v/l/cloud-init.log (where it says as it's about to configure each bit) [13:31] kjackal: ah, maybe easier - check /var/lib/cloud/ - rgrep for "hostname:" ? 
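The /etc/hosts workaround dimitern suggests at [13:18] (and the jamespage rabbitmq-server variant at [11:03], where the charm uses `unit-get private-address`) amounts to composing one extra hosts line per container. A trivial Go sketch of composing that line — helper name and argument order are hypothetical, mirroring the example IP and hostname from the chat:

```go
package main

import "fmt"

// hostsLine builds the /etc/hosts entry described above: the
// container's eth0 address followed by its short hostname and fqdn.
// It would be appended after the 127.0.0.1 line inside the container.
func hostsLine(ip, hostname, domain string) string {
	return fmt.Sprintf("%s %s %s.%s", ip, hostname, hostname, domain)
}

func main() {
	fmt.Println(hostsLine("10.20.30.42", "juju-767941-0", "lxd"))
	// 10.20.30.42 juju-767941-0 juju-767941-0.lxd
}
```

As the discussion notes, this only fixes resolution *inside* the container; other units still depend on the dnsmasq on lxdbr0 serving the name.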
[13:32] dimitern: cc_set_hostname.py[DEBUG]: Setting the hostname to juju-767941-2.localdomain (juju-767941-2) [13:32] kjackal: that's it! [13:33] kjackal: but I bet `nslookup juju-767941-2.localdomain` doesn't resolve? [13:34] dimitern: nope, it does not resolve [13:35] kjackal: ok, thanks - added a comment about that to the bug as it looks like the real cause [13:37] thanks [13:48] voidspace: reviewed [14:23] dimitern: ta [14:28] Bug #1593221 changed: remove manual from clouds.yaml [14:37] Bug #1593221 opened: remove manual from clouds.yaml [14:41] whoa, why are we erasing status history for removed units? [14:41] perrito666, ^^ [14:42] not even for *removed* units, merely *destroyed* units [14:43] fwereade: we.. good question, I think we deemed it proper cleanup [14:43] but I am sure you disagree [14:44] perrito666, I do think it's a misleading use of the word "history" [14:45] perrito666, and apart from anything else the unit will still be setting statuses and recording history as it shuts down, won't it? [14:51] fwereade: agree, open a bug pls [14:51] Hello dimitern [14:52] I'm facing a problem. 14 pending 10.100.100.131 juju-4667c953-1492-4846-8123-071efb515fe4-machine-14 xenial [14:52] 14 pending 10.100.100.131 juju-4667c953-1492-4846-8123-071efb515fe4-machine-14 xenial [14:52] Machine is in pending state [14:52] Bug #1593221 changed: remove manual from clouds.yaml [14:52] can we start it? [14:58] Bug #1593263 opened: juju deploy openstack-base results in error with ceph-mon [15:02] Yash: hey, I'd check the maas ui to see what's keeping it up? or is this lxd? [15:28] Yash: terminal only? [15:29] Yash: It's a way of getting onto the machine that doesn't rely on ssh (handy if the network isn't coming up). [15:29] I mean is it I'm inside running machine [15:29] virtual machine [15:29] ok [15:30] server is down ..I can see no server running [15:30] ss -t -a [15:31] so to remind myself - you've bootstrapped to LXD? And the controller is stuck in pending? 
[15:31] Or another machine? [15:31] Hey babbageclunk restarting machine 0 brings server back [15:31] I can see status now [15:31] Yash: yay - when you say server do you mean the juju controller? [15:31] but one unit is showing agent is lost, sorry! See 'juju status-history neutron-api [15:32] Yash: sometimes after rebooting it takes a while for things to get restarted and reconnected. [15:32] Problem is when I try to deploy openstack components one by one.. (openstack bundle never works for me) [15:33] Suddenly one machine never starts and remains in pending state [15:33] so I restart / sometimes remove server with force machine removal [15:34] Now can you please help me - what is this error [15:34] agent is lost, sorry! See 'juju status-history neutron-a [15:34] -api [15:34] Yash: in juju status, does that machine show up with an ipv6 address as its public address? [15:34] because we had a bug that sounds like this. [15:34] This time I'm not using ipv6 only ipv4 [15:35] in lxd bridge configuration..I ignored ipv6 [15:35] Can you paste juju status to http://pastebin.ubuntu.com/ [15:35] Ipv6 surely some problem there [15:36] juju status --debug [15:36] or ? [15:36] how [15:36] Yash: yeah, we definitely had a bug with lxd containers and ipv6 [15:36] Yash: Is juju status working for you now? [15:36] yes [15:37] restarting worked.. [15:37] So could you cut and paste it into a pastebin? [15:37] Sure..is there any shortcut of doing that...I mean a command line tool or the like [15:38] katco: did those patches for OSM/manual provider cleanup ever get released in a 1.25 release? [15:39] marcoceppi: not to my knowledge. only in a special branch/binary. [15:39] http://pastebin.ubuntu.com/17399592/ [15:39] This time it's clean [15:39] only agent lost [15:39] How can I fix that error? [15:39] katco: cool, I'm about to reply to an email about it, testing seems positive, so I'm sure the stakeholders will want to see that in a point release sooner (rather than later).
I'll email some folks about it when I get home [15:40] marcoceppi: cool. [15:40] Yash: awesome, thanks [15:40] Bug #1593274 opened: remove-unit deletes (some) status history [15:42] Ok, so can you ssh onto machine 9? [15:42] http://pastebin.ubuntu.com/17399701/ [15:42] Here we can see 106 ip machine [15:42] but not with juju status [15:42] yea..sorry we see [15:43] Hmm - you can see the .106 machine in the machines list. [15:43] juju ssh 9 : failed [15:43] Bug #1593263 changed: juju deploy openstack-base results in error with ceph-mon [15:44] ssh ubuntu@ip : connection refused [15:44] same error [15:45] This command rocks [15:45] sudo lxc exec juju-4667c953-1492-4846-8123-071efb515fe4-machine-9 -- /bin/bash [15:45] Please include somewhere in docs [15:45] it will help everyone [15:45] I restarted machine..let's c [15:45] No! [15:45] I mean, you can do that. [15:46] But then we can't work out what's gone wrong. [15:46] ok how? [15:46] Ah well [15:46] now machine is back..agent lost problem solved [15:46] you guys rocked :) [15:46] :) [15:47] Don't know if we helped much - it's better if the machine comes up properly! [15:47] That will be awesome [15:48] one bug is it's really slow and I have to wait until one machine comes up, then try another [15:48] two or more deploys create problems [15:49] I think it's not a bug, or I'm missing something [15:49] ceph-osd/0 blocked idle 2.0-beta7 6 10.100.100.130 No block devices detected using current configuration [15:49] No block devices detected using current configuration [15:49] What is this? [15:50] Do I need to configure something inside machine which should be automatic ? not sure though [15:51] Not sure, but I think ceph needs block devices to provide its storage.
https://jujucharms.com/docs/devel/charms-storage [15:52] Actually, I don't know how it relates to cinder (openstack block storage) [15:55] ok some charms are still for trusty like [15:55] juju deploy cs:~axwalk/postgresql --storage data=cinder,10G [15:55] on the devel doc which is supposed to be for xenial ..Am I right? [15:56] Bug #1593263 opened: juju deploy openstack-base results in error with ceph-mon [16:01] I think that doc was written before xenial was out. [16:02] ok [16:04] How to access http://10.100.100.217/gui/uuid from laptop ......when lxd is in desktop within same network [16:04] I mean 10.100.100.217 is private ip inside desktop [16:05] It's required for Juju gui and openstack dashboard [16:05] I can't find doc on this topic? [16:05] Please suggest? [16:05] I use sshuttle: https://github.com/apenwarr/sshuttle [16:06] on the laptop, run sshuttle -r user@desktop 10.0.0.0/8 [16:06] and from web browser [16:07] That's a cool tool [16:07] Yeah, it's really neat. [16:07] Once it's running you should be able to see 10.100.100.217 in the browser.
[16:08] Bug #1593263 changed: juju deploy openstack-base results in error with ceph-mon [16:08] let me try :) [16:08] (dimitern only put me on to it a few days ago) [16:17] Bug #1591225 changed: Generated image stream is not considered in bootstrap on private cloud [16:17] Bug #1592887 opened: juju destroy-service deletes openstack volumes [16:17] Bug #1593299 opened: HA recovery fails in azure [16:17] Bug #1593303 opened: Google Compute Engine provider often reports wrong IP address as the public address [16:35] dimitern, frobware: do you have an update on https://bugs.launchpad.net/maas/+bug/1590689 - just being asked myself [16:35] Bug #1590689: MAAS 1.9.3 + Juju 1.25.5 - on the Juju controller node eth0 and juju-br0 interfaces have the same IP address at the same time [16:36] dooferlad: I have a working fix, but not done as it causes issues for lxcs still, but on the up side is confirmed to work with bonds ok [16:38] dimitern: thanks for the update. If you could put a quick update in the bug then anastasiamac will be very happy. Do you have an ETA? [16:40] dooferlad: later tonight, will update the bug as well - need to be afk for a while though.. [16:43] dimitern: please ping me for the review. [16:43] dooferlad: ok [16:44] Bug # changed: 1577945, 1589353, 1592221, 1592582, 1592981, 1592987 [16:52] dooferlad: here it is, for review (might need to tweak it here and there after I do more testing): http://reviews.vapour.ws/r/5087/ [16:52] frobware, voidspace: ^^ if you can have a look as well [16:53] Bug # changed: 1482634, 1504637, 1537585, 1553059, 1571545, 1576318, 1576674, 1577614, 1579633, 1581627, 1581893, 1583412, 1585005, 1586298, 1587788, 1588095, 1588446, 1588559, 1588924, 1589061, 1589066, 1589748, 1590095, 1590205, 1590689, 1590960, 1592210, 1592733, 1593033, 1593042, 1593188 [16:57] dimitern: so, not a small change then *sigh* [17:04] fwereade: format 2.0 PR is pretty much just a rename of existing file. 
I'll remove further refs to 1.18 u've identified but do not feel like fixing yaml output/tags is concern of this PR :D [17:04] fwereade: m happy to be corrected :) [17:07] dooferlad: it's not huge - the important parts are in add-juju-bridge.py mostly; the other stuff is mostly backported from master (except for the state changes, which turned out to be needed) [17:10] dimitern: looking [17:15] dimitern: I seem to have reviewed this before... [17:15] voidspace: it's mostly backported [17:32] Now I can see dashboard login...hurray :) [17:32] http://10.100.100.197/horizon/auth/login/ [17:33] what is password and username? [17:33] anastasiamac, I would really appreciate it if you would add the tags, I feel we should be eliminating implicit serializations as we touch them [17:39] fwereade: only because u ask so nicely and will really appreciate it \o/ [17:39] fwereade: tyvm for review :D [17:44] Bug #1593350 opened: Juju register allows users to register alternatively named controllers of the same UUID [17:48] dimitern: LGTM [18:13] perrito666: yt? [18:13] katco: yt? [18:13] redir: otp, ttyl [18:13] redir: yup [18:13] perrito666: I'm back re: keystone3 [18:14] redir: tell me [18:15] perrito666: https://goo.gl/d5S5mF says If "domain-name" is present (and [18:15] user/password too) juju will use V3 authentication by default. [18:15] so is that present as a key or present as a non empty string? [18:15] perrito666: ^ [18:15] happy to HO if that is easier [18:16] as a non empty string [18:16] redir: I am in a place where I could not ho properly, apologies [18:16] tx [18:16] np perrito666 [18:16] redir: I did the original of that and katco did an amazing job fixing it because I had done some dumb things [18:20] Bug #1592155 changed: restore-backup fails when attempting to 'replay oplog'. [18:23] Bug #1592155 opened: restore-backup fails when attempting to 'replay oplog'. [18:27] voidspace: thanks!
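perrito666's answer above pins down the keystone selection rule redir asked about: V3 authentication is chosen only when `domain-name` is present as a non-empty string, alongside user/password. That predicate can be sketched as below; `usesKeystoneV3` is an illustrative name, not juju's actual openstack provider code.

```go
package main

import "fmt"

// usesKeystoneV3 mirrors the rule discussed above: keystone V3 is
// selected only if "domain-name" is present with a non-empty value,
// together with username and password. Illustrative sketch only;
// juju's real provider logic is more involved.
func usesKeystoneV3(attrs map[string]string) bool {
	return attrs["username"] != "" &&
		attrs["password"] != "" &&
		attrs["domain-name"] != ""
}

func main() {
	v2 := map[string]string{"username": "admin", "password": "s3cret"}
	v3 := map[string]string{"username": "admin", "password": "s3cret", "domain-name": "default"}
	fmt.Println(usesKeystoneV3(v2), usesKeystoneV3(v3)) // v2 stays on V2, v3 selects V3
}
```

Note that under this rule a `domain-name` key present with an empty value still selects V2 - "present" means present as a non-empty string, which was the point of redir's question.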
[18:32] Bug #1592155 changed: restore-backup fails when attempting to 'replay oplog'. [18:50] Bug #1579750 changed: Wrong hostname added to dnsmasq using LXD [19:05] cherylj, anastasiamac, macgreag1ir, redir: http://paste.ubuntu.com/17408402/ === TheRealMue is now known as TheMue [19:19] perrito666: http://reviews.vapour.ws/r/5088/ [19:20] katco: perrito666 ^ can you have a look? That makes the warning go away, but unsure if other bits were lost along the way [19:50] Bug #1593394 opened: model already exists but can't be destroyed because it's not found [19:50] Bug #1593395 opened: model already exists but can't be destroyed because it's not found [20:21] Bug #1593395 changed: model already exists but can't be destroyed because it's not found === macgreag1ir is now known as macgreagoir [22:05] this makes no sense, the whole trip coming to this city I had LTE and now returning I only have 3g [22:05] oh there it is [22:06] wallyworld: wanna? [22:06] sure [22:06] * katco has been forced to retreat to her headless to run the full suite of juju tests [22:06] wallyworld: loading [22:21] wallyworld, available whenever you are free [22:21] alexisb_: be there in a minute, just talking to horatio [22:22] perrito666 I love all the different variations of your name wallyworld comes up with [22:24] wallyworld: sorry went from LTE to Edge [22:24] perrito666: np, i think we had finished [22:25] enjoy the rest of the bus trip back home :-) [22:25] alexisb_: he usually goes very close to hora[a-z]*o [22:25] wallyworld: yes, people seem rather unhappy that I awoke them by speaking english loudly :p [22:26] :-) [22:26] I still can't understand how this same trip this morning had LTE coverage and now it hasn't [22:26] and honestly, who sleeps on a 6pm trip? [23:56] axw, davechen1y, menn0: I have updated the juju/mutex PR, keeping the Acquirer interface, but removing the Acquire function to have only one way.
Renamed Spec to Mutex, removed error return from Release [23:56] tested on linux, mac and windows [23:56] also added a few more tests [23:58] davechen1y: axw asked for the interface [23:58] can you two duke it out [23:58] I don't particularly care one way or the other [23:58] but I'd like to start removing fslock asap [23:58] menn0: do you have an opinion on Acquirer interface vs. package function?
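The two API shapes under debate above - a package-level Acquire function versus an injectable Acquirer interface - can be sketched as follows, using the names from the discussion (Mutex spec, Release with no error return). The fake implementation is a toy in-process stand-in to show why the interface form helps in tests; the real juju/mutex package provides an OS-level, cross-process lock and its exact signatures may differ.

```go
package main

import "fmt"

// Mutex describes the lock to take, loosely following the
// Spec -> Mutex rename mentioned above. Fields here are guesses.
type Mutex struct {
	Name string
}

// Releaser releases a held mutex; note there is no error return,
// matching the change described above.
type Releaser interface {
	Release()
}

// Acquirer is the interface style: callers take an Acquirer as a
// dependency, so unit tests can inject a fake instead of touching
// a real OS-level mutex. The alternative is a package function
// `func Acquire(spec Mutex) (Releaser, error)`, which is simpler
// but harder to substitute in tests.
type Acquirer interface {
	Acquire(spec Mutex) (Releaser, error)
}

// fakeAcquirer records acquisitions; purely illustrative.
type fakeAcquirer struct {
	acquired []string
}

type noopReleaser struct{}

func (noopReleaser) Release() {}

func (a *fakeAcquirer) Acquire(spec Mutex) (Releaser, error) {
	a.acquired = append(a.acquired, spec.Name)
	return noopReleaser{}, nil
}

func main() {
	var a Acquirer = &fakeAcquirer{}
	r, err := a.Acquire(Mutex{Name: "machine-lock"})
	if err != nil {
		panic(err)
	}
	defer r.Release()
	fmt.Println("acquired and released machine-lock")
}
```

Keeping only one of the two entry points (as the PR above does, dropping the package function in favour of the interface) avoids having two ways to do the same thing while preserving testability.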