[00:23] hallyn - environments.yaml is a juju 1.x convention
[00:24] i'm assuming you're using 1.25.6 after re-reading that statement.... as the kvm local provider only exists on the 1.x series of juju atm
[01:59] heh, then i suppose it's a good thing i'm using that. i was thinking of trying 2.0 as surely it must be better, but...
[02:17] ok let's look through the source
[02:22] though finding the src package can be a challenge. wow.
[02:27] ugh, having the source doesn't always help :)
[02:27] rogpeppe: hey!
[02:31] oh here we go, maybe in ./src/github.com/juju/juju/container/kvm/kvm.go
[02:34] *sigh* https://juju.ubuntu.com/docs/reference-constraints.html redirecting to fluff is helpful
=== verterok` is now known as verterok
=== thumper is now known as thumper-cooking
[05:15] all right, i guess i'll stick to manually doing set-constraints on every deploy
=== thumper-cooking is now known as thumper-dogwalk
[05:42] hallyn, bonjour, where did you get that link?
[05:44] hallyn, try inserting /1.25/ or /2.0/ after /docs/
[06:44] hallyn: hiya
[06:45] Hi all, does anybody know how to bootstrap a juju controller into an openstack cloud with a self-signed certificate?
[06:46] --debug tells me that the certificate has an unknown authority. Looking for something similar to nova's --insecure flag
[06:47] herb64: i have no problems using self-signed certificates
[06:47] I'm using Juju 2.0 beta
[06:48] 2.0-beta15-xenial-amd64 exactly
[06:48] ok, i have rc2
[06:50] Good to hear that it basically should work and that there's no general problem with it. I'll go for an update. Thank you
[06:52] Hmm, but after looking at my novarc file, the keystone url is only http
=== thumper-dogwalk is now known as thumper
[06:53] Well, updating might be a good idea anyway
=== frankban|afk is now known as frankban
[07:24] Good morning juju world!
[08:10] god help us
[08:20] I now upgraded to juju 2.0 rc3 - but still the same when bootstrapping into openstack
[08:21] auth fails, because "x509: certificate signed by unknown authority"
[08:22] any ideas how to bootstrap into openstack with self-signed certs, some flag similar to --insecure with nova?
[08:35] herb64: dunno, kjackal_ appears to be awake though so might know someone who knows
[08:36] or jamespage might be around and might have a clue if you've not spoken with him about it
[08:36] Hi magicaltrout herb64
[08:37] i've not used openstack, but i'd imagine most certs are self-signed, aren't they?
[08:37] considering how many people use it to test rather than in production
[08:38] awww
[08:38] as if
[08:38] oh well
[08:39] in other news.... it turns out my goldfish likes to be stroked......
=== bpierre is now known as 6A4AAA1WT
=== med_ is now known as Guest38941
=== zeus is now known as Guest10825
[10:50] Hi, I don't know much about non-reactive charms (or that much about reactive ones either, but I'm getting there), but I have a problem getting a reactive charm to talk to one. The relation doesn't seem to be triggering anything on the non-reactive side. Looking in the hooks folder, I can see lots of symlinks for the relations with other charms, with names such as "xxx-relation-joined" or "yyy-relation-changed".
[10:50] The one I'm trying to trigger is @hooks.hook('oildashboard-relation-joined') in hooks.py. The symlink files all just seem to be symlinks to hooks.py - could this be the cause? Do I just make a new symlink called oildashboard-relation-joined? Seems a bit random...?
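For anyone finding this later: in a classic (non-reactive) charm, juju executes hooks/<hook-name> directly, and the usual pattern is exactly one symlink per hook pointing at a single dispatcher script. A minimal sketch using the hook name from the question above (whether this fixes that particular charm is untested):

    # inside the non-reactive charm's hooks/ directory
    cd hooks
    ln -s hooks.py oildashboard-relation-joined   # juju runs hooks/<name> when that event fires
    chmod +x hooks.py                             # the dispatcher itself must be executable

The @hooks.hook('oildashboard-relation-joined') decorator only tells hooks.py which function to run once it is invoked under that name; without the symlink, juju never invokes it at all.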
=== freyes__ is now known as freyes
[13:37] pmatulis: https://juju.ubuntu.com/docs/1.25/reference-constraints.html doesn't work either :( the link came from a blog post iirc
[13:37] but blaming the blog post would be wrong. this is the internet.
[13:59] rogpeppe: i was looking for someone who could point me to docs about the keys available in environments.yaml
[14:00] hallyn: there is none. https://bugs.launchpad.net/juju/+bug/1628865
[14:00] Bug #1628865: bootstrap command help does not document possible configuration values
[14:01] hallyn: unfortunately the source code seems to be the only reliable place, and the relevant code is now all over the place since the key space has been split up
[14:01] rogpeppe: d'oh
[14:01] right, the key space split is what i ran into last night looking for the answer in the src :) ok thx
[14:02] hallyn: good places to look are: controller/config.go environs/bootstrap/config.go environs/config/config.go
[14:02] hallyn: there are probably others that i don't know about
[14:05] rogpeppe: i was assuming there was some structure, i.e. "container: kvm\nconstraints.mem: 2G"
[14:06] hallyn: what version of juju are you using?
[14:13] 1.25.6-xenial-amd64
[14:20] Should I expect a unit that's error/idle with a failed update-status hook to ignore any attempts to remove it?
[14:26] even with a --force switch, remove-unit seems to have zero effect on the hung unit
[14:26] the 16.10 release announcement mentions juju 2.0 GA, is that going out today? http://insights.ubuntu.com/2016/10/13/canonical-releases-ubuntu-16-10/
[14:27] vmorris - which substrate is this?
[14:27] lazyPower lxd/local
[14:27] vmorris juju remove-machine # --force (assuming only one application/charm is deployed there) should remove that stuck unit
[14:28] that's kind of a big-hammer approach to removing a stuck unit, but it does work.
[14:28] yeah, i suppose that would be fine for lxd
[14:29] icey we can hope :D
[14:30] icey: yes
[14:35] lazyPower: did you see the reply here? http://askubuntu.com/questions/835522/kubectl-cluster-info-get-502-bad-gateway-error
[14:36] marcoceppi did just now :( and it makes me sad
[14:36] default lxd-test localhost/localhost 2.0-rc3 -- we don't support lxd deployments yet
[14:36] lazyPower: yeah, is there not a profile we can add to LXD to at least get it around
[14:36] * lazyPower will update the question and edit it to be appropriate
[14:36] nope, lxd constraints will keep flannel from working so it'll never fully turn up
[14:37] you'd have to run the entire thing as privileged containers, and i'm not certain that works
[14:37] Cynerva - have we tried that? stuffing k8s in priv. lxd containers?
[14:39] lazyPower: tried master, haven't tried worker. With master in privileged LXD I got pretty far, but nginx-ingress-controller seemed to have trouble deploying. That may have been a sporadic issue though. I didn't look into it further.
[14:39] that's odd, it's just an nginx container + ssl certs
[14:46] but ok, we've taken a prelim look at it.
[14:54] marcoceppi i'm not sure what i can add here to the conversation. I could reasonably rewrite the question/answer to be more specific to the problem and go into technical detail about the different components and what we think needs to happen
[14:54] is that overkill?
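For the record, the 1.x stanza hallyn was guessing at earlier (14:05) looks roughly like this. This is a sketch from memory of the 1.25 local provider; the authoritative key list only lives in the source files rogpeppe names above:

    environments:
      local-kvm:
        type: local
        container: kvm     # use KVM instead of LXC for local machines
        # there is no per-environment constraints key here; memory etc.
        # still go through `juju set-constraints mem=2G` after bootstrap

which matches hallyn's earlier conclusion about sticking with set-constraints on every deploy.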
[14:55] marcoceppi: hi, around?
[14:55] I got a very surprising update-status hook failure in the ubuntu charm
[14:55] it was running ok every 5min, and now it's failing on every run with ImportError: No module named 'charmhelpers'
[14:55] there was no code or charm upgrade triggered by me
[14:56] I filed #1633106
[14:56] Bug #1633106: update-status hook failure: cannot import charmhelpers
[15:01] marcoceppi: n/m, forgot that this machine was mid release-upgrade :(
[15:19] rick_h_: we continue to hit the lxd issues with rabbitmq
[15:19] rick_h_: specifically https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271 and https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1584902
[15:19] Bug #1563271: update-status hook errors when unable to connect
[15:19] Bug #1584902: Setting RabbitMQ NODENAME to non-FQDN breaks on MaaS 2.0
[15:19] rick_h_: I think vmorris is seeing this with lxd/local with openstack charms on s390x
[15:20] rick_h_: any suggestion on how we should proceed? In 1584902 we were unable to work around the issue in the charms, and thought it may need to be resolved at the lxd or juju level, thus wanted to get your thoughts
[15:21] yep -- adding calls to configure_nodename() in the update-status and amqp-relation-changed hooks seems to help
[15:25] arosales: looking
[15:27] dooferlad: ping, we should be in a place where all lxd containers have hostnames now as of rc3, right?
[15:29] I believe so
[15:31] dooferlad: created a card for the rabbitmq hostname issue there, if you can please look into that as a next line of work
[15:31] dooferlad: looks like the hostname turns into ubuntu or something there
[15:32] vmorris: I think kwmonroe was looking at using the hostname in the big data charms to work around a similar issue
[15:32] *think*
[15:33] rick_h_ dooferlad : the hostname is set to ubuntu at initial deploy, then is changed to juju-##### - however the rmq-server configuration in /etc never gets updated
[15:36] So - juju 2.0 does not have the local/kvm provider; what will be the proposed alternative?
[15:37] hallyn: the lxd provider is the only alternative. Manual provider if you want to create kvm machines and add-machine them to a juju model
[15:39] yeah rick_h_ arosales vmorris dooferlad, the hostnames are legit after the initial deployment (i think because the containers are rebooted). unfortunately, that doesn't help containers talk to each other.
[15:39] http://paste.ubuntu.com/23318452/
[15:40] seems like the lxd containers should have '.lxd' as their domainname instead of '.localdomain'. dooferlad, does that sound right to you?
[15:42] kwmonroe: I don't know the specifics of whether .localdomain, .lxd or something else is right.
[15:43] rick_h_, vmorris, kwmonroe: the hostname is set by cloud-init - Juju just asks it to write the files. I didn't see hostname = ubuntu during my testing.
[15:45] none of this provides DNS though.
[15:45] kwmonroe: this is on the manual provider?
[15:45] dooferlad et al. https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271/comments/5
[15:45] Bug #1563271: update-status hook errors when unable to connect
[15:46] kwmonroe: vmorris that is ? ^
[15:46] e.g. that cloud-init might not be in play here?
[15:46] rick_h_: cloud-init only runs at the first boot.
[15:46] rick_h_: so no, that has nothing to do with it.
[15:47] dooferlad: right, my point is to see if an original hostname is set when the machine is created, and when it's added to juju with add-machine does it change/not change in some way that's causing an issue?
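A quick way to collect the data rick_h_ is asking about, assuming shell access to the LXD host (the container names below are placeholders):

    lxc list                                              # note the container names
    lxc exec juju-a1b2c3-0 -- hostname -f                 # what the unit thinks its FQDN is
    lxc exec juju-a1b2c3-0 -- getent hosts juju-a1b2c3-1  # can it resolve a peer?

Comparing the output before and after the charm's install hook runs would show whether the hostname is changing underneath rabbitmq.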
[15:48] rick_h_: OK, that won't have anything to do with cloud-init. I can't get to that today, but I will look at it first thing tomorrow.
[15:48] dooferlad: k
[15:49] rick_h_: in my case, it's fine that 'ubuntu' is used when the machine is created so long as it has a real FQDN when the application is installed. for me, it's a problem that 2 containers can't resolve the FQDNs in the same deployment.. so i opened https://bugs.launchpad.net/juju/+bug/1633126.
[15:49] Bug #1633126: can't resolve lxd containers by fqdn
[15:49] rick_h_: (because Juju doesn't run until cloud-init has finished)
[15:49] rick_h_: I believe kwmonroe is using local/lxd on 2.0 and big data charms, not rabbit, but similar issues
[15:49] arosales: k, so the s390 will be different from local/lxd, so trying to narrow down where we're looking atm.
[15:51] rick_h_: gotcha, and in the bigdata charms we see the issue everywhere - not just s390x, but also on x and p, aiui
[15:51] kwmonroe, arosales: do those LXDs have a DNS server that will resolve the Juju names? If not, how do we expect it to work?
[15:51] rick_h_: for s390x we are seeing https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271 and https://bugs.launchpad.net/juju/+bug/1632030
[15:51] Bug #1563271: update-status hook errors when unable to connect
[15:51] Bug #1632030: juju-db fails to start -- WiredTiger reports Input/output error
[15:51] arosales: right, and the second one we've got eyes/work going into
[15:52] sure dooferlad.. those LXDs use lxdbr0 as their nameserver... and they can all resolve each other as "juju-foo-X.lxd". just not "juju-foo-X.localdomain"
[15:52] arosales: so I'd like to take one issue at a time and ask dooferlad to look into hostname issues specifically wherever we're seeing those
[15:52] rick_h_: thanks, and the rabbit issue sounds related to the one the openstack folks opened, https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1584902
[15:52] Bug #1584902: Setting RabbitMQ NODENAME to non-FQDN breaks on MaaS 2.0
[15:52] rick_h_: thanks, and a very reasonable approach.
[15:52] :-)
[15:52] kwmonroe: thanks - that certainly points to a fix!
[15:52] thanks to vmorris for testing and providing feedback
[15:53] thanks arosales & all for the attention
[15:53] rick_h_: admcleod was also going to be setting up openstack on s390x, manual on lpar, and with lxd (I think) to see if he can reproduce https://bugs.launchpad.net/juju/+bug/1632030
[15:53] Bug #1632030: juju-db fails to start -- WiredTiger reports Input/output error
[15:54] vmorris: I think LXD + Ubuntu on LPAR is a really solid use case, so I would like to make sure it is working smoothly
[15:56] dooferlad: fwiw, this feels eerily familiar to https://bugs.launchpad.net/juju/+bug/1623480, but that bug was about a single container not being able to address itself. it's just a baby step farther to make sure multiple containers can address each other (by ensuring domainname = '.lxd').. i think :)
[15:56] Bug #1623480: Cannot resolve own hostname in LXD container
[15:56] kwmonroe: agreed.
[16:09] rick_h_: thanks. too bad.
[16:10] arosales: agreed, the platform is great for packing in containers
[16:50] hi arosales, rick_h_ - heads up. ceilometer and aodh are also about to become more dns-sensitive, not due to changes in the charms, but changes in upstream code where they really really want sane A/PTR resolution all the way around. this will be a growing theme, not specific to openstack /methinks.
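A sanity check for the "sane A/PTR resolution" beisner mentions: forward and reverse lookups should round-trip. The name, address, and lxdbr0 dnsmasq IP below are placeholders:

    dig +short juju-foo-0.lxd @10.0.8.1    # A record via the bridge's dnsmasq
    dig +short -x 10.0.8.15 @10.0.8.1      # PTR for the address it returned

If the PTR comes back with a different name (or .localdomain instead of .lxd), software like ceilometer that insists on matching forward/reverse records will break.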
[16:59] beisner yep, K8s has gone that way as well on our side of the infra wall. Nice to see we're converging around some of the same ideas in the deployment.
[16:59] beisner: right, we have a long-term plan that we've tested works well that'll be coming soon
[17:01] fresh bug for reference: https://bugs.launchpad.net/juju-core/+bug/1632909 - slightly different set of tools and providers than the other bugs though, so new bug.
[17:01] Bug #1632909: ceilometer-api fails to start on xenial-newton (maas + lxd)
[17:05] rick_h_ lazyPower arosales - ultimately if this new canary bundle fails, then all sorts of workloads should be expected to fail. i'd go as far as to suggest TDD on juju-core, gated on this passing on providers: http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/other/magpie-lxc.yaml
[17:06] if that^ passes, so will rabbitmq, ceilometer, nova-cloud-controller and others who need to know names and numbers :-)
[17:06] beisner: that'd be good to send to tbauman and try to get into the stack of things they test there
[17:06] beisner: once we know it works - guessing it doesn't at all currently?
[17:07] rick_h_, yep, we talked about it at the sprint and sinzui is planning something i believe.
[17:07] ok
[17:10] rick_h_, for ex., that is known to fail on the openstack provider and manual provider, since lxc units go to an island behind nat. if you need a measure of fixing that, then this bundle is your baby.
[17:13] beisner: it feels like the problem is larger than rabbit; thanks for testing your bundle to confirm that with data.
=== frankban is now known as frankban|afk
[17:19] arosales, lazyPower - yep, glad we're either all crazy or all right :-)
[17:20] beisner the one difference is k8s is shipping with its own dns provider to do the mapping. it's not relying on env-specific dns
[17:20] it's kind of boggling how it all works; there's a lot of moving componentry there that can and might break.
[17:22] can i specify a unit in the "to: X" bundle placement directive?
[17:22] Forgot to say happy Yakkety release day
[17:22] and OpenStack 16.10 charm release day
[17:23] reading this: https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives, it seems like i should be able to say "to: ubuntu/0" if an ubuntu was defined earlier.. but that doesn't work:
[17:23] E: placement ubuntu/0 refers to non-existent service ubuntu/0
[17:23] perhaps that documentation is *only* for lxd placement on a unit?
[17:23] woohoo yes arosales !
[17:23] kwmonroe: you have to target machines, not applications
[17:24] kwmonroe: we don't map the application/unit to the machine; it's to a new machine, a new container on an existing machine, etc
[17:24] kwmonroe, it'd be --to "ubuntu=0" in juju-deployer speak.
[17:25] kwmonroe: I seriously think that doc section is a lie :/
[17:25] beisner: say what? what is this juju-deployer --to stuff you speak of?
[17:26] ha! anyway, this has worked for us for all of time: http://pastebin.ubuntu.com/23319065/
[17:26] my stars beisner! i'm fixin to send you an ecard if this works.
[17:26] beisner: ok, but only with the deployer though? or does that work with juju itself?
[17:27] rick_h_, i believe so. juju dash deployer ! juju space deploy ;-)
[17:28] beisner: looking at the code, it checks that the directive is a machine
[17:28] i think not beisner.. invalid placement syntax "ganglia=0"
[17:28] yea
[17:28] using "juju deploy", not "juju-deployer"
[17:29] right.
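For anyone hitting the same error: in the native juju 2.0 bundle format, placement targets machines declared in a 0-based machines: section (or containers on them), never units. A rough sketch - the charm choices are just illustrative, and 2.0-era bundles still used a services: top-level key:

    machines:
      "0":
        series: xenial
    services:
      ubuntu:
        charm: cs:ubuntu
        num_units: 1
        to: ["0"]        # machine 0 from the machines: section above
      ganglia:
        charm: cs:ganglia
        num_units: 1
        to: ["lxd:0"]    # a new lxd container on that same machine

The "ubuntu=0"-style unit targeting only ever existed in juju-deployer, which is exactly the parity gap discussed above.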
so, rick_h_, as we continue to ramp up 2.0 in osci, you'll likely see a load of feature-parity wishlist items from us. such as this.
[17:30] beisner: I look forward to getting the list of requests
[17:30] so rick_h_, riddle me this. my bundle defines 1 machine. in juju1, that needs to be 'machine: 1' because machine 0 is taken by the bootstrap node. in juju2, juju will create the machine as machine-0, so subsequent placement fails (there is no machine-1 in juju2).
[17:30] oh i've been trying to solve for that equation too. /me stands by
[17:30] kwmonroe: so bundles all start at 0 and include only machines defined in the bundle
[17:30] i'll whip up a proper bug report, but i think that's what's happening
[17:30] kwmonroe: so you're saying that bundle fails on the deployer side? I thought it was updated to accept that in the v4 format
[17:31] rick_h_: if i have a bundle define machine 0 and deploy it on juju1, stuff gets colocated on my bootstrap node
[17:31] kwmonroe: so the machine number cannot and does not relate to a machine in your model; they're all new machines
[17:31] kwmonroe: not with the gui, I'd have to test the deployer then
[17:31] rick_h_, that's exactly why addressing them by name is valuable ^
[17:31] yeah rick_h_ -- probably a j-deployer thing.. like i said, i'll write it up more betterer.
[17:32] +1 beisner
[17:32] beisner: understand, but async and order and ... so name is hard
[17:32] beisner: but yea, it's cool to get it as a request and look at it
[17:32] beisner: just saying that it's been this way for xxxxx months and to get it the day of release is a can of "sorry"
[17:32] * rick_h_ is having too much fun today, take all that with a giant :)
[17:32] ha! :-) no, i'm not asking for that.
[17:34] native deploy ftw; i've just got both paths to regression test for the lifetime of Trusty, so it's tricky to craft bundles in a way that we don't have to maintain two sets of bundles.
[17:34] wait, we're not doing double-digit RCs?
[17:35] beisner: right, but where's the bug on this as part of native deploy?
[17:35] back to arosales - happy Yakkety day!
[17:39] lazyPower, kjackal_, petevg, kwmonroe: Can I get a re-review on https://github.com/juju-solutions/layer-apache-kafka/pull/13
[17:40] cory_fu: +1 (I agree that it's okay to fail on a weirdly composed tarball.)
[17:41] rene4jazz: glad you made it!
[17:42] lazyPower: I want to introduce you to a colleague of mine, Rene Lopez (rene4jazz)
[17:43] Greetings rene4jazz o/
[17:43] lazyPower: Rene is interested in deploying your kub bundle, I thought I would put you guys in touch
[17:43] rick_h_, ha, had to dig deep as the one i had been tracking is invalid. https://bugs.launchpad.net/juju-core/+bug/1583443
[17:43] Bug #1583443: juju 2.0 doesn't support bundles with 'to: ["service=0"]' placement syntax
[17:44] beisner: ah yea, not on our radar looking at invalid bugs
[17:44] Greetings lazyPower
[17:45] rene4jazz I'm happy to help get you started with Kubes. Do you have any initial questions?
[17:45] rene4jazz: lazyPower is the maintainer of the kub bundle, I wanted to introduce you two, so hopefully you can bounce ideas off each other as you run through kub deploys
[17:45] co-maintainer*
[17:45] thanks bdx
[17:45] my bad *^
[17:45] np
[17:45] mbruzek does a lot of the heavy lifting too :) pedantic i know, but he deserves some credit too
[17:46] def
[17:46] mbruzek: props!
[17:46] he's out to lunch and getting props, haha
[17:46] he's gonna be bummed he's missing it
[17:47] i'll relay though.
So back to kubernetes
[17:47] rene4jazz - tell me a little bit about your wants/needs here. I have it on good authority you were pioneering on LXD - which is unfortunately not supported at this time for most of the charms.
[17:48] lazyPower, I'm curious about the kub bundle and started to mess with it. My first approach was to try with the localhost (lxc-based) provider
[17:48] yeah, we really need to move that warning up in the README... it's buried at the bottom under the caveats
[17:49] My goal is to first deploy the bundle, then start adding pods for several app-related services
[17:50] Ok. Is localhost your primary option for exploration? The cost of using clouds being the prohibitive factor here....
[17:50] lazyPower, correct... cost is a factor
[17:51] rene4jazz - so this limits options, but we can work with this. Are you familiar with KVM?
[17:52] lazyPower, the hypervisor?
[17:53] correct. What i'm going to propose is using the manual provider to enlist a few KVM VMs, and test there. We can trim the bundle down to only the K8s charms, which will save you some effort in how many VMs to provision.
[17:55] lazyPower: what about developer.juju.solutions .. is that not a thing anymore?
[17:55] bdx - oh yeah! i forgot all about it
[17:55] bdx nice save
[17:55] lazyPower, bundle size is fine, I can deal with the VM numbers
[17:55] rick_h_: i'm game to make a juju doc PR. to be clear, the placement directive section should talk about machine placement only, and those should be defined in a 0-based "machines:" section. right?
=== adam_g` is now known as adam_g
[17:56] rick_h_: and to be doubly clear, i'm talking about updating this page: https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives
[17:56] rene4jazz - alternatively, if you're up for it, we can get you on the charm developer program, which will give you some AWS runtime while you explore the bundle in its entirety, on aws.
[17:56] kwmonroe: yes
[17:57] got it
[17:57] rene4jazz if you're interested in the charm developer program - head over to http://developer.juju.solutions and sign up for the CDP
[17:58] lazyPower: great to know. right now i'll keep exploring local options, thanks for the help
[17:58] rene4jazz - OK. I'm here to lend a hand if you have any questions. feel free to ping
[18:02] ah crap.. #thatfeelingwhen you mistype 'branch -d' instead of 'checkout -b' :/
[18:02] hope you pushed it remotely so you can re-fetch
[18:03] don't be silly lazyPower. i'm too agile for all this "pushing" and "remote" nonsense
[18:03] and to top it off, my time machine backup disk died 2 days ago, and of course i turned off mobilebackups because #dumb.
[18:03] Whoops
[18:04] should have gotten a thinkpad
[18:04] my thinkpad has hw failure scrolling in dmesg :(
[18:04] lol.. maybe a dell then.
[18:04] well i also purchased it like fifth or sixth hand
[18:04] assuming anyway
=== Guest38941 is now known as med_
[19:49] does anyone have experience using nginx or haproxy to reverse proxy access to juju applications?
[19:51] a link to a tutorial or some guidance would be appreciated!
[20:18] Is james page in this channel?
[20:19] x58: yeah
[20:22] What's his nick?
[20:23] jamespage
[20:23] Nevermind, found it :P
[20:23] ><
[20:23] Just didn't tab enough. Too many james's
[20:23] hello
[20:23] vmorris, there is a haproxy charm that does that
[20:23] jamespage thanks, i'm looking into it now
[20:23] basically any juju app that provides an http interface can be load-balanced
[20:24] vmorris, context?
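For the thread above, the minimal shape of jamespage's suggestion - assuming an already-deployed application (my-web-app is a placeholder) whose charm provides an http interface; explicit endpoint names may be needed if the relation is ambiguous:

    juju deploy haproxy
    juju add-relation haproxy my-web-app   # haproxy reverse-proxies / balances the app's units
    juju expose haproxy                    # make the proxy reachable from outside

The app units themselves can stay unexposed, with haproxy as the only public entry point.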
[20:24] jamespage: https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1584902 - this being reverted is worrying to me...
[20:24] Bug #1584902: Setting RabbitMQ NODENAME to non-FQDN breaks on MaaS 2.0
[20:24] x58, well it broke a lot more things than it fixed - what's your specific concern?
[20:25] jamespage: It will break our deployment on 2.0, where it will use the wrong NODENAME and won't cluster/do anything.
[20:26] The fix that went in was specifically to support deployments on 2.0. I understand that it apparently breaks 1.25.x, but reverting it doesn't seem like the right thing to do either.
[20:26] x58, no, it still needs fixing
[20:27] x58, tbh a lot of this relates to the fact that we need a consistent, resolvable hostname-based environment from juju
[20:27] We have had 40+ deployments in our lab with 2.0 with the charm as it stands now, and everything clusters and works without issues.
[20:27] Yeah...
[20:27] x58, a charm should never have to mess with /etc/hosts
[20:27] and that was the fix we tried
[20:28] basically, unless every rabbit unit had the same view as every other one, units failed to cluster
[20:29] As I mentioned, currently it works for us. It may not be ideal, but it works. Reverting it will break it.
[20:29] A better solution should be found...
[20:30] x58, pin your charm version for the time being
[20:31] x58, we will have a better fix for 2.0 users
[20:31] if a particular version works now, continue to use that and don't upgrade
[20:31] Ok. Will do.
[20:31] x58, actually wait - which charm version are you using? I reverted this in the stable charm as well as the dev branch
[20:31] Let me check my bundle file.
=== rmcall_ is now known as rmcall
[20:34] prod: cs:~openstack-charmers-next/xenial/rabbitmq-server lab: rabbitmq-server
[20:35] I just noticed the comment on the bug. we haven't had to redeploy the lab yet, but I have a feeling that as soon as we grab the reverted version, we are going to hit the original bug.
[20:35] rick_h_ beisner: if you'd like to double-check the language, i merged the "machine specifications" and "bundle placement" sections into 1 as per our earlier discussion: https://github.com/juju/docs/pull/1448
[20:35] (that is, apps can specify machine placement, not service/X placement)
[20:37] jamespage: ^^
[20:37] x58, yeah, if you deploy clustered I think you will
[20:38] What was the previous version so we can pin that?
[20:39] this is a dumb question, but how do i get the credentials for the percona db post-deploy?
[20:40] vmorris, i believe you have to set it via the charm config options
[20:40] deployed via the openstack-base bundle, it wasn't set
[20:41] looking at the juju config output...
[20:41] root-password:
[20:41] default: true
[20:41] ?
[20:44] x58, I think it's https://jujucharms.com/rabbitmq-server/xenial/5
[20:44] rabbitmq-server-5
[20:45] vmorris, ok so there is a gotcha here - if you want to deploy multiple pxc units, you must provide the password via configuration
[20:45] vmorris, I need to spend some time on pxc overhauling the bootstrap process and password management stuff to use leader election
[20:46] it pre-dates juju providing anything helpful for generating passwords for clustered services from within a unit
[20:47] hi kwmonroe, rick_h_ - i'd prefer to defer to someone on juju-core who is code-familiar with the native deployer re: docs.
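For reference, pinning to the known-good revision jamespage dug up would look something like this. The revision number is from the discussion above; the upgrade-charm form is only a sketch, and whether it applies cleanly (especially as a downgrade) to an already-running cluster is a separate question:

    juju deploy cs:xenial/rabbitmq-server-5          # fresh deploy at an exact charm-store revision
    juju upgrade-charm rabbitmq-server --revision 5  # sketch: move an existing app to that revision

Keeping the pinned URL in the bundle file also keeps future redeploys from silently picking up the reverted charm.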
[20:48] x58, just confirming that now - I have a MAAS 2.0 / Juju 2.0 env I'm doing some other testing in
[20:48] it's nippy, so spinning up a few more containers is OK
[20:52] x58, yup, that version lgtm
[20:52] http://paste.ubuntu.com/23320046/
[20:53] Excellent. Will pin it.
[20:57] Thanks jamespage!
[20:57] Looking forward to seeing the issue fixed properly.
[20:59] Is there something logging the channel to HTTP? Something like botbot.me would be awesome.
[21:00] https://botbot.me/request/
[21:02] so if i have a workload running in, say, environment gce, can i juju switch amazon, start a different workload, and switch back and forth? /me is afraid to try and ruin the current install :)
[21:05] x58, you can find the logs at https://irclogs.ubuntu.com/2016/10/13/%23juju.html
[21:05] Excellent.
[21:05] Would be nice to drop a link to that in the topic!
[21:06] Doesn't have the latest logs :-(
[21:07] I spoke too soon ;-)
[21:27] jamespage: thanks for that
[22:06] hallyn: sure, you can switch back and forth between controllers/models. the only thing i would be wary of is doing something like "juju bootstrap foo" in a tmux session and then "juju bootstrap bar" in another.
[22:07] hallyn: that said, once the controller has received your request and you're returned to a command prompt, you can switch to whatever your heart desires.
[22:10] kwmonroe: and switch back and forth?
[22:11] sure hallyn.. juju doesn't forget when you switch to something else :)
[22:11] hallyn: kwmonroe and most commands take a -m for a model or controller:model combo
[22:12] so you can status w/o a switch and such
[22:12] hallyn: 'juju controllers' and 'juju models' are a lifesaver for me to remember where my stuff is deployed
[22:12] kwmonroe: awesome! thx :)
[22:12] hallyn: let me know when you try it so i can sign off
[22:13] (totally kiddin)
[22:13] :)
[22:17] drat, i guess models are purely a 2.0 thing
[22:18] hallyn: yes, whole new world
[22:30] hallyn: it's true models are new in 2.0, but you can still switch in juju 1.25. i go from aws to azure all the time.
[22:31] hallyn: again, the only thing i would be concerned about is if you did an operation (like bootstrapping one env) and switched before it was completed.
[22:31] but you should really just go to juju2. like rick_h_ said, it's a whole new (better) world :)
[22:31] go to yakkety while you're at it ;)
[23:56] how do i influence the contents of /etc/neutron/plugins/ml2/ml2_conf.ini ? i'm using a bundle to set up openstack
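And for the archive, the 2.0 commands kwmonroe and rick_h_ walked hallyn through above, with placeholder controller and model names:

    juju controllers                # list known controllers
    juju models                     # list models on the current controller
    juju switch gce-ctrl:default    # change the default context
    juju status -m aws-ctrl:prod    # or target a model directly, no switch needed

Switching never tears anything down - each controller keeps running its workloads, and you are only changing which one your CLI talks to by default.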