[00:23] <lazyPower> hallyn - environments.yaml is a juju 1.x convention
[00:24] <lazyPower> i'm assuming you're using 1.25.6 after re-reading that statement.... as kvm local provider only exists on the 1.x series of juju atm
[01:59] <hallyn> heh, then i suppose it's a good thing i'm using that.  i was thinking of trying 2.0 as surely it must be better, but...
[02:17] <hallyn> ok let's look through the source
[02:22] <hallyn> though finding the src package can be a challenge.  wow.
[02:27] <hallyn> ugh, having the source doesn't always help :)
[02:27] <hallyn> rogpeppe: hey!
[02:31] <hallyn> oh here we go, maybe in ./src/github.com/juju/juju/container/kvm/kvm.go
[02:34] <hallyn> *sigh*  https://juju.ubuntu.com/docs/reference-constraints.html redirecting to fluff is helpful
[05:15] <hallyn> all right i guess i'll stick to manually doing set-constraints on every deploy
[05:42] <pmatulis> hallyn, bonjour, where did you get that link?
[05:44] <pmatulis> hallyn, try inserting /1.25/ or /2.0/ after /docs/
[06:44] <rogpeppe> hallyn: hiya
[06:45] <herb64> Hi all, does anybody know how to bootstrap juju controller into openstack cloud with self signed certificate?
[06:46] <herb64> --debug tells me that the certificate has an unknown authority. Searching for something similar to using nova --insecure
[06:47] <anrah> herb64: i have no problems using self signed certificates
[06:47] <herb64> I'm using Juju 2.0 beta
[06:48] <herb64> 2.0-beta15-xenial-amd64 exactly
[06:48] <anrah> ok, i have rc2
[06:50] <herb64> Good to hear that it basically should work and that there's no general problem with it. I'll go for an update. Thank you
[06:52] <anrah> Hmm, but now after looking my novarc file the keystone url is only http
[06:53] <herb64> Well, but updating might be a good idea, anyway
[07:24] <kjackal_> Good morning juju world!
[08:10] <magicaltrout> god help us
[08:20] <herb64> I now upgraded to juju 2.0 rc3 - but still the same, when bootstrapping into openstack
[08:21] <herb64> auth fails, because "x509: certificate signed by unknown authority"
[08:22] <herb64> any ideas, how to bootstrap into openstack with self signed certs, some flag similar to --insecure with nova?
[08:35] <magicaltrout> herb64: dunno, kjackal_ appears to be awake though so might know someone who knows
[08:36] <magicaltrout> or jamespage might be around and might have a clue if you've not spoken with him about it
[08:36] <kjackal_> Hi magicaltrout herb64
[08:37] <magicaltrout> i've not used openstack, but i'd imagine most certs are self signed aren't they?
[08:37] <magicaltrout> considering how many people use it to test rather than in production
[08:38] <magicaltrout> awww
[08:38] <magicaltrout> as if
[08:38] <magicaltrout> oh well
[08:39] <magicaltrout> in other news.... it turns out my goldfish likes to be stroked......
[10:50] <autonomouse> Hi, I don't know much about the non-reactive charms (or that much about reactive ones either, but I'm getting there) but I have a problem getting a reactive charm to talk to one. The relation doesn't seem to be triggering anything on the non-reactive side. When I look in the hooks folder, I can see lots of symlinks for the relations with other charms, with names such as "xxx-relation-joined" or "yyy-relation-changed".
[10:50] <autonomouse> The one I'm trying to trigger is @hooks.hook('oildashboard-relation-joined') in hooks.py. Looking at the symlink files, they all just seem to be symlinks to the hooks.py - could this be the cause? Do I just make a new symlink called oildashboard-relation-joined? Seems a bit random...?
[13:37] <hallyn> pmatulis: https://juju.ubuntu.com/docs/1.25/reference-constraints.html doesn't work either :(   link came from a blog post iirc
[13:37] <hallyn> but blaming the blog post would be wrong.  this is the internet.
[13:59] <hallyn> rogpeppe: i was looking for someone who could point me to docs about the keys available in environments.yaml
[14:00] <rogpeppe> hallyn: there is none. https://bugs.launchpad.net/juju/+bug/1628865
[14:00] <mup> Bug #1628865: bootstrap command help does not document possible configuration values <juju:Triaged> <https://launchpad.net/bugs/1628865>
[14:01] <rogpeppe> hallyn: unfortunately the source code seems to be the only reliable place and the relevant code is now all over the place since the key space has been split up
[14:01] <hallyn> rogpeppe: d'oh
[14:01] <hallyn> right, key space split up is what i ran into last night looking for the answer in the src :)  ok thx
[14:02] <rogpeppe> hallyn: good places to look are: controller/config.go environs/bootstrap/config.go environs/config/config.go
[14:02] <rogpeppe> hallyn: there are probably others that i don't know about
[14:05] <hallyn> rogpeppe: i was assuming there was some structure, i.e. "container: kvm\nconstraints.mem: 2G"
[14:06] <rogpeppe> hallyn: what version of juju are you using?
[14:13] <hallyn> 1.25.6-xenial-amd64
[14:20] <vmorris> Should I expect a unit that's error/idle with a failed update-status hook to ignore any attempts to remove it?
[14:26] <vmorris> even with a --force switch, remove-unit seems to have zero effect on the hung unit
[14:26] <icey> the 16.10 release announce mentions juju 2.0 GA, is that going out today? http://insights.ubuntu.com/2016/10/13/canonical-releases-ubuntu-16-10/
[14:27] <lazyPower> vmorris - which substrate is this?
[14:27] <vmorris> lazyPower lxd/local
[14:27] <lazyPower> vmorris juju remove-machine # --force (assuming only one application/charm is deployed there) should remove that stuck unit
[14:28] <lazyPower> thats kind of a big hammer approach to removing a stuck unit, but it does work.
[14:28] <vmorris> yeah, i suppose that would be fine for lxd
[14:29] <lazyPower> icey we can hope :D
[14:30] <rick_h_> icey: yes
[14:35] <marcoceppi> lazyPower: did you see the reply here? http://askubuntu.com/questions/835522/kubectl-cluster-info-get-502-bad-gateway-error
[14:36] <lazyPower> marcoceppi did just now :( and it makes me sad
[14:36] <lazyPower> default  lxd-test    localhost/localhost  2.0-rc3  -- we don't support lxd deployments yet
[14:36] <marcoceppi> lazyPower: yeah, is there not a profile we can add to LXD to at least get it around
[14:36]  * lazyPower will update the question and edit it to be appropriate
[14:36] <lazyPower> nope, lxd constraints will keep flannel from working so it'll never fully turn up
[14:37] <lazyPower> you'd have to run the entire thing as privileged containers and i'm not certain that works
[14:37] <lazyPower> Cynerva - have we tried that? stuffing k8s in priv. lxd containers?
[14:39] <Cynerva> lazyPower: tried master, haven't tried worker. With master in privileged LXD I got pretty far, but nginx-ingress-controller seemed to have trouble deploying. That may have been a sporadic issue though. I didn't look into it further.
[14:39] <lazyPower> that's odd, it's just an nginx container + ssl certs
[14:46] <lazyPower> but ok, we've taken a prelim look at it.
[14:54] <lazyPower> marcoceppi i'm not sure what i can add here to the conversation. I could reasonably rewrite the question/answer to be more specific to the problem and go into technical detail about the different components and what we think needs to happen
[14:54] <lazyPower> is that overkill?
[14:55] <ahasenack> marcoceppi: hi, around? I got a very surprising update-status hook failure in the ubuntu charm
[14:55] <ahasenack> it was running ok every 5min, and now it's failing in every run with ImportError: No module named 'charmhelpers'
[14:55] <ahasenack> there was no code or charm upgrade triggered by me
[14:56] <ahasenack> I filed #1633106
[14:56] <mup> Bug #1633106: update-status hook failure: cannot import charmhelpers <kanban-cross-team> <landscape> <ubuntu (Juju Charms Collection):New> <https://launchpad.net/bugs/1633106>
[15:01] <ahasenack> marcoceppi: n/m, forgot that this machine was mid release-upgrade :(
[15:19] <arosales> rick_h_:  we continue to hit the lxd issues with rabbitmq
[15:19] <arosales> rick_h_: specifically https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271 and https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1584902
[15:19] <mup> Bug #1563271: update-status hook errors when unable to connect <landscape> <openstack> <rabbitmq-server (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1563271>
[15:19] <mup> Bug #1584902: Setting RabbitMQ NODENAME to non-FQDN breaks on MaaS 2.0 <backport-potential> <canonical-bootstack> <conjure> <cpec> <juju2> <maas2> <sts> <rabbitmq-server (Juju Charms Collection):New for james-page> <https://launchpad.net/bugs/1584902>
[15:19] <arosales> rick_h_: I think vmorris is seeing this with lxd/local with openstack charms on s390x
[15:20] <arosales> rick_h_: suggestion how we should proceed? In 1584902 we were unable to work around the issue in charms, and thought it may need to be resolved at the lxd or juju level, thus wanted to get your thoughts
[15:21] <vmorris> yep -- adding calls to configure_nodename() in the update-status and amqp-relation-changed hooks seems to help
[15:25] <rick_h_> arosales: looking
[15:27] <rick_h_> dooferlad: ping, we should be in a place where all lxd containers have hostnames now as of rc3 right?
[15:29] <dooferlad> I believe so
[15:31] <rick_h_> dooferlad: created a card for the rabbitmq hostname issue there if you can please look into that as a next line of work
[15:31] <rick_h_> dooferlad: looks like the hostname turns into ubuntu or something there
[15:32] <arosales> vmorris: I think kwmonroe was looking at using the hostname in the big data charms to work around a similar issue
[15:32] <arosales> *think*
[15:33] <vmorris> rick_h_ dooferlad : the hostname is set to ubuntu at initial deploy, then is changed to juju-##### however the rmq-server configuration in /etc never gets updated
[15:36] <hallyn> So - juju 2.0 does not have the local/kvm provider, what will be the proposed alternative?
[15:37] <rick_h_> hallyn: lxd provider is the only alternative. Manual provider if you want to create kvm machines and add-machine them to a juju model
[15:39] <kwmonroe> yeah rick_h_ arosales vmorris dooferlad, the hostnames are legit after the initial deployment (i think because the containers are rebooted).  unfortunately, that doesn't help containers talk to each other.
[15:39] <kwmonroe> http://paste.ubuntu.com/23318452/
[15:40] <kwmonroe> seems like the lxd containers should have '.lxd' as their domainname instead of '.localdomain'.  dooferlad, does that sound right to you?
[15:42] <dooferlad> kwmonroe: I don't know the specifics of whether .localdomain, .lxd or something else is right.
[15:43] <dooferlad> rick_h_, vmorris, kwmonroe: the hostname is set by Cloud Init - Juju just asks it to write the files. I didn't see hostname = ubuntu during my testing.
[15:45] <dooferlad> none of this provides DNS though.
[15:45] <rick_h_> kwmonroe: this is on the manual provider?
[15:45] <vmorris> dooferlad et al. https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271/comments/5
[15:45] <mup> Bug #1563271: update-status hook errors when unable to connect <landscape> <openstack> <rabbitmq-server (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1563271>
[15:46] <rick_h_> kwmonroe: vmorris that is ? ^
[15:46] <rick_h_> e.g. that cloud-init might not be in play here?
[15:46] <dooferlad> rick_h_: cloud-init only runs at the first boot.
[15:46] <dooferlad> rick_h_: so no, that is nothing to do with that.
[15:47] <rick_h_> dooferlad: right, my point is to see if an original hostname is set when the machine is created, and when it's added to juju with add-machine does it change/not change in some way that's causing an issue?
[15:48] <dooferlad> rick_h_: OK, that won't have anything to do with cloud-init. I can't get to that today, but I will look at it first thing tomorrow.
[15:48] <rick_h_> dooferlad: k
[15:49] <kwmonroe> rick_h_: in my case, it's fine that 'ubuntu' is used when the machine is created so long as it has a real FQDN when the application is installed.  for me, it's a problem that 2 containers can't resolve the FQDNs in the same deployment.. so i opened https://bugs.launchpad.net/juju/+bug/1633126.
[15:49] <mup> Bug #1633126: can't resolve lxd containers by fqdn <juju:New> <https://launchpad.net/bugs/1633126>
[15:49] <dooferlad> rick_h_: (because Juju doesn't run until cloud-init has finished)
[15:49] <arosales> rick_h_: I believe kwmonroe is using local/lxd on 2.0 and big data charms, not rabbit, but similar issues
[15:49] <rick_h_> arosales: k, so the s390 will be different than local/lxd so trying to narrow down where we're looking atm.
[15:51] <arosales> rick_h_: gotcha and in bigdata charm we see the issue everywhere, not just s390x, but also on x and p, aiui
[15:51] <dooferlad> kwmonroe, arosales: do those LXDs have a DNS server that will resolve the Juju names? If not, how do we expect it to work?
[15:51] <arosales> rick_h_: for s390x we are seeing https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271 and https://bugs.launchpad.net/juju/+bug/1632030
[15:51] <mup> Bug #1563271: update-status hook errors when unable to connect <landscape> <openstack> <rabbitmq-server (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1563271>
[15:51] <mup> Bug #1632030: juju-db fails to start -- WiredTiger reports Input/output error <juju> <juju-db> <mongodb> <s390x> <juju:Incomplete> <Ubuntu on IBM z Systems:Incomplete> <https://launchpad.net/bugs/1632030>
[15:51] <rick_h_> arosales: right, and the second one we've got eyes/work going into
[15:52] <kwmonroe> sure dooferlad.. those LXDs use the lxdbr0 as their nameserver... and they can all resolve each other as "juju-foo-X.lxd".  just not "juju-foo-X.localdomain"
[15:52] <rick_h_> arosales: so I'd like to take one issue at a time and ask dooferlad to look into hostname issues specifically wherever we're seeing those
[15:52] <arosales> rick_h_: thanks, and the rabbit issue sounds related to the one the openstack folks opened, https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1584902
[15:52] <mup> Bug #1584902: Setting RabbitMQ NODENAME to non-FQDN breaks on MaaS 2.0 <backport-potential> <canonical-bootstack> <conjure> <cpec> <juju2> <maas2> <sts> <rabbitmq-server (Juju Charms Collection):New for james-page> <https://launchpad.net/bugs/1584902>
[15:52] <arosales> rick_h_: thanks and very reasonable approach.
[15:52] <arosales> :-)
[15:52] <dooferlad> kwmonroe: thanks - that certainly points to a fix!
[15:52] <arosales> thanks to vmorris for testing and providing feedback
[15:53] <vmorris> thanks arosales & all for the attention
[15:53] <arosales> rick_h_: admcleod was also going to be setting up openstack on s390x, manual on lpar, and with lxd (I think) and see if he can reproduce https://bugs.launchpad.net/juju/+bug/1632030
[15:53] <mup> Bug #1632030: juju-db fails to start -- WiredTiger reports Input/output error <juju> <juju-db> <mongodb> <s390x> <juju:Incomplete> <Ubuntu on IBM z Systems:Incomplete> <https://launchpad.net/bugs/1632030>
[15:54] <arosales> vmorris: I think LXD + Ubuntu on LPAR is a really solid use case, so would like to make sure it is working smoothly
[15:56] <kwmonroe> dooferlad: fwiw, this feels eerily familiar to https://bugs.launchpad.net/juju/+bug/1623480, but that bug was about a single container not being able to address itself.  it's like just a baby step farther to make sure multiple containers can address each other (by ensuring domainname = '.lxd').. i think :)
[15:56] <mup> Bug #1623480: Cannot resolve own hostname in LXD container <lxd> <network> <juju:Fix Released by dooferlad> <https://launchpad.net/bugs/1623480>
[15:56] <dooferlad> kwmonroe: agreed.
[16:09] <hallyn> rick_h_: thanks.  too bad.
[16:10] <vmorris> arosales: agreed, the platform is great for packing in containers
[16:50] <beisner> hi arosales, rick_h_ - heads up.  ceilometer and aodh are also about to become more dns-sensitive, not due to changes in the charms, but changes in upstream code where they really really want sane A/PTR resolution all the way around.  this will be a growing theme, not specific to openstack /methinks.
[16:59] <lazyPower> beisner yep, K8s has gone that way as well on our side of the infra wall. Nice to see we're converging around some of the same ideas in the deployment.
[16:59] <rick_h_> beisner: right, we have a long term plan that we've tested works well that'll be coming soon
[17:01] <beisner> fresh bug for reference:  https://bugs.launchpad.net/juju-core/+bug/1632909   slightly different set of tools and providers than the other bugs though, so new bug.
[17:01] <mup> Bug #1632909: ceilometer-api fails to start on xenial-newton (maas + lxd) <maas-provider> <uosci> <OpenStack AODH Charm:New> <juju-core:New> <ceilometer (Juju Charms Collection):New> <https://launchpad.net/bugs/1632909>
[17:05] <beisner> rick_h_ lazyPower arosales - ultimately if this new canary bundle fails, then all sorts of workloads should be expected to fail.  i'd go as far as to suggest TDD on juju-core, gated on this passing on providers:  http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/other/magpie-lxc.yaml
[17:06] <beisner> if that^ passes, so will rabbitmq, ceilometer, nova-cloud-controller and others who need to know names and numbers :-)
[17:06] <rick_h_> beisner: that'd be good to send to tbauman and try to get into the stack of things they test there
[17:06] <rick_h_> beisner: once we know it works, guessing it doesn't at all currently?
[17:07] <beisner> rick_h_, yep we talked about it at the sprint and sinzui is planning something i believe.
[17:07] <rick_h_> ok
[17:10] <beisner> rick_h_, for ex., that is known to fail on the openstack provider and manual provider, since lxc units go to an island behind nat.  if you need a measure of fixing that, then this bundle is your baby.
[17:13] <arosales> beisner: it feels like the problem is larger than rabbit, thanks for testing your bundle to confirm that with data.
[17:19] <beisner> arosales, lazyPower - yep, glad we're either all crazy or right :-)
[17:20] <lazyPower> beisner the one difference is k8s is shipping with its own dns provider to do the mapping. its not relying on env specific dns
[17:20] <lazyPower> it's kind of boggling how it all works, there's a lot of moving componentry there that can and might break.
[17:22] <kwmonroe> can i specify a unit in the "to: X" bundle placement directive?
[17:22] <arosales> Forgot to say happy Yakkety release day
[17:22] <arosales> and OpenStack 16.10 charm release day
[17:23] <kwmonroe> reading this:  https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives, it seems like i should be able to say "to: ubuntu/0" if an ubuntu was defined earlier.. but that doesn't work:
[17:23] <kwmonroe> E: placement ubuntu/0 refers to non-existent service ubuntu/0
[17:23] <kwmonroe> perhaps that documentation is *only* for lxd placement on a unit?
[17:23] <beisner> woohoo yes arosales !
[17:23] <rick_h_> kwmonroe: you have to target machines not applications
[17:24] <rick_h_> kwmonroe: we don't map the application/unit to the machine, it's to a new machine, new container on an existing machine, etc
[17:24] <beisner> kwmonroe, it'd be --to "ubuntu=0" in juju-deployer speak.
[17:25] <rick_h_> kwmonroe: I seriously think that doc section is a lie :/
[17:25] <kwmonroe> beisner: say what?  what is this juju-deployer --to stuff you speak of?
[17:26] <beisner> ha!  anyway, this has worked for us for all of time:  http://pastebin.ubuntu.com/23319065/
[17:26] <kwmonroe> my stars beisner!  i'm fixin to send you an ecard if this works.
[17:26] <rick_h_> beisner: ok, but only with the deployer though? or does that work with juju itself?
[17:27] <beisner> rick_h_, i believe so.  juju dash deployer ! juju space deploy ;-)
[17:28] <rick_h_> beisner: looking at the code it checks the directive is a machine
[17:28] <kwmonroe> i think not beisner.. invalid placement syntax "ganglia=0"
[17:28] <rick_h_> yea
[17:28] <kwmonroe> using "juju deploy", not "juju-deployer"
[17:29] <beisner> right.  so, rick_h_ as we continue to ramp up 2.0 in osci, you'll likely see a load of feature parity wishlist items from us.  such as this.
[17:30] <rick_h_> beisner: I look forward to getting the list of requests
[17:30] <kwmonroe> so rick_h_, riddle me this.  my bundle defines 1 machine.  in juju1, that needs to be 'machine: 1' because machine 0 is taken by the bootstrap node.  in juju2, juju will create the machine as machine-0, so subsequent placement fails (there is no machine-1 in juju2).
[17:30] <beisner> oh i've been trying to solve for that equation too.  /me stands by
[17:30] <rick_h_> kwmonroe: so bundles all start at 0 and include only machines defined in the bundle
[17:30] <kwmonroe> i'll whip up a proper bug report, but i think that's what's happening
[17:30] <rick_h_> kwmonroe: so you're saying that bundle fails on the deployer side? I thought it was updated to accept that in the v4 format
[17:31] <kwmonroe> rick_h_: if i have a bundle define machine 0 and deploy it on juju1, stuff gets colocated on my bootstrap node
[17:31] <rick_h_> kwmonroe: so the machine number cannot and does not relate to a machine in your model, it's all new machines
[17:31] <rick_h_> kwmonroe: not with the gui, I'd have to test deployer then
[17:31] <beisner> rick_h_, that's exactly why addressing them by name is valuable ^
[17:31] <kwmonroe> yeah rick_h_ -- probably a j-deployer thing.. like i said, i'll write it up more betterer.
[17:32] <kwmonroe> +1 beisner
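A minimal bundle sketch of what rick_h_ describes above: "to" targets machines declared in the bundle's own 0-based machines section, and those numbers never refer to machines already in the model (charm name and series here are illustrative):

```yaml
# Illustrative juju 2.0 bundle: placement targets the bundle's own
# machines, numbered from 0, not existing model machines or other units.
machines:
  "0":
    series: xenial
services:
  ubuntu:
    charm: cs:xenial/ubuntu
    num_units: 1
    to: ["0"]
```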
[17:32] <rick_h_> beisner: understand, but async and order and ... so name is hard
[17:32] <rick_h_> beisner: but yea, it's cool to get it as a request and look at it
[17:32] <rick_h_> beisner: just saying that it's been this way for xxxxx months and to get it the day of release is a can of "sorry"
[17:32]  * rick_h_ is having too much fun today, take all that with a giant :)
[17:32] <beisner> ha! :-) no i'm not asking for that.
[17:34] <beisner> native deploy ftw;  i've just got both paths to regression test for the lifetime of Trusty, so it's tricky to craft bundles in a way that we don't have to maintain two sets of bundles.
[17:34] <kwmonroe> wait, we're not doing double digit RCs?
[17:35] <rick_h_> beisner: right, but where's the bug on this as part of native deploy?
[17:35] <beisner> back to arosales - happy Yakkety day!
[17:39] <cory_fu> lazyPower, kjackal_, petevg, kwmonroe: Can I get a re-review on https://github.com/juju-solutions/layer-apache-kafka/pull/13
[17:40] <petevg> cory_fu: +1 (I agree that it's okay to fail on a weirdly composed tarball.)
[17:41] <bdx> rene4jazz: glad you made it!
[17:42] <bdx> lazyPower: I want to introduce you to a colleague of mine, Rene Lopez (rene4jazz)
[17:43] <lazyPower> Greetings rene4jazz o/
[17:43] <bdx> lazyPower: Rene is interested in deploying your kub bundle, I thought I would put you guys in touch
[17:43] <beisner> rick_h_, ha, had to dig deep as the one i had been tracking is invalid.  https://bugs.launchpad.net/juju-core/+bug/1583443
[17:43] <mup> Bug #1583443: juju 2.0 doesn't support bundles with 'to: ["service=0"]' placement syntax <juju-core:Invalid> <https://launchpad.net/bugs/1583443>
[17:44] <rick_h_> beisner: ah yea not on our radar looking at invalid bugs
[17:44] <rene4jazz> Greetings lazyPower
[17:45] <lazyPower> rene4jazz I'm happy to help get you started with Kubes. Do you have any initial questions?
[17:45] <bdx> rene4jazz: lazyPower is the maintainer of the kub bundle, I wanted to introduce you two, so hopefully you can bounce ideas off each other as you run through kub deploys
[17:45] <lazyPower> co-maintainer*
[17:45] <rene4jazz> thanks bdx
[17:45] <bdx> mybad *^
[17:45] <bdx> np
[17:45] <lazyPower> mbruzek does a lot of the heavy lifting too :)  pedantic i know, but he deserves some credit too
[17:46] <bdx> def
[17:46] <bdx> mbruzek: props!
[17:46] <lazyPower> he's out to lunch and getting props, haha
[17:46] <lazyPower> he's gonna be bummed he's missing it
[17:47] <lazyPower> i'll relay though. So back to kubernetes
[17:47] <lazyPower> rene4jazz - tell me a little bit about your wants/needs here. I have it on good authority you were pioneering on LXD - which is unfortunately not supported at this time for most of the charms.
[17:48] <rene4jazz> lazyPower, I'm curious about the kub bundle and started to mess with it. My first approach was to try with the localhost (lxc based) provider
[17:48] <lazyPower> yeah, we really need to move that warning up in the README... its buried at the bottom under the caveats
[17:49] <rene4jazz> My goal is first deploy the bundle then start adding pods for several app related services
[17:50] <lazyPower> Ok. Is localhost your primary option for exploration? The cost of using clouds being the prohibitive factor here....
[17:50] <rene4jazz> lazyPower, correct... cost is a factor
[17:51] <lazyPower> rene4jazz - so this limits options, but we can work with this. Are you familiar with KVM?
[17:52] <rene4jazz> lazyPower, the Hypervisor?
[17:53] <lazyPower> correct. What i'm going to propose is using the manual provider to enlist a few KVM vm's, and test there. We can trim the bundle down to only the K8s charms which will save you some effort in how many VM's to provision.
[17:55] <bdx> lazyPower: what about developer.juju.solutions .. is that not a thing anymore?
[17:55] <lazyPower> bdx - oh yeah! i forgot all about it
[17:55] <lazyPower> bdx nice save
[17:55] <rene4jazz> lazyPower, bundle size is fine, I can deal with the VM numbers
[17:55] <kwmonroe> rick_h_: i'm game to make a juju doc PR.  to be clear, the placement directive section should talk about machine placement only, and those should be defined in a 0-based "machines:" section.  right?
[17:56] <kwmonroe> rick_h_: and to be doubly clear, i'm talking about updating this page: https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives
[17:56] <lazyPower> rene4jazz - alternatively, if you're up for it we can get you on the charm developer program which will give you some AWS runtime while you explore the bundle in its entirety, on aws.
[17:56] <rick_h_> kwmonroe: yes
[17:56] <kwmonroe> got it
[17:57] <lazyPower> rene4jazz if you're interested in the charm developer program - head over to http://developer.juju.solutions and sign up for the CDP
[17:58] <rene4jazz> lazyPower: great to know, right now will keep exploring local options, thanks for the help
[17:58] <lazyPower> rene4jazz - OK. I'm here to lend a hand if you have any questions. feel free to ping
[18:02] <kwmonroe> ah crap.. #thatfeelingwhen you mistype 'branch -d' instead of 'checkout -b' :/
[18:02] <lazyPower> hope you pushed it remotely so you can re-fetch
[18:03] <kwmonroe> don't be silly lazyPower.  i'm too agile for all this "pushing" and "remote" nonsense
[18:03] <kwmonroe> and to top it off, my time machine backup disk died 2 days ago, and of course i turned off mobilebackups because #dumb.
[18:03] <lazyPower> Whoops
[18:04] <kwmonroe> should have gotten a thinkpad
[18:04] <lazyPower> my thinkpad has hw failure scrolling in dmesg :(
[18:04] <kwmonroe> lol.. maybe a dell then.
[18:04] <lazyPower> well i also purchased it like fifth or sixth hand
[18:04] <lazyPower> assuming anyway
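As an aside on kwmonroe's mishap above: commits on a branch deleted with git branch -D can usually be recovered from the reflog even without a remote. A minimal sketch (throwaway repo; branch and commit names invented):

```shell
# Demonstrate recovering a deleted branch from the reflog.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m base
git checkout -q -b feature
git commit -q --allow-empty -m work
git checkout -q -                 # back to the initial branch
git branch -D feature             # oops: branch gone, commit still in reflog
sha=$(git reflog | awk '/: commit: work/{print $1; exit}')
git branch feature "$sha"         # branch restored at the lost commit
git log -1 --format=%s feature    # prints "work"
```

The reflog only lives locally and is eventually pruned, so this is a safety net, not a substitute for pushing.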
[19:49] <vmorris> does anyone have experience using nginx or haproxy to reverse proxy access to juju applications?
[19:51] <vmorris> link to a tutorial or some guidance would be appreciated!
[20:18] <x58> Is james page in this channel?
[20:19] <vmorris> x58: yeah
[20:22] <x58> What's his nick?
[20:23] <vmorris> jamespage
[20:23] <x58> Nevermind, found it :P
[20:23] <vmorris> ><
[20:23] <x58> Just didn't tab enough. Too many james's
[20:23] <jamespage> hello
[20:23] <jamespage> vmorris, there is a haproxy charm that does that
[20:23] <vmorris> jamespage thanks, i'm looking into it now
[20:23] <jamespage> basically any juju app that provides an http interface can be load balanced
[20:24] <jamespage> vmorris, context?
[20:24] <x58> jamespage: https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1584902 this being reverted is worrying to me...
[20:24] <mup> Bug #1584902: Setting RabbitMQ NODENAME to non-FQDN breaks on MaaS 2.0 <backport-potential> <canonical-bootstack> <conjure> <cpec> <juju2> <maas2> <sts> <rabbitmq-server (Juju Charms Collection):New for james-page> <https://launchpad.net/bugs/1584902>
[20:24] <jamespage> x58, well it broke a lot more things than it fixed - what's your specific concern?
[20:25] <x58> jamespage: It will break our deployment on 2.0, where it will use the wrong NODENAME and won't cluster/do anything.
[20:26] <x58> The fix that went in was specifically to support deployments on 2.0, I understand that it apparently breaks 1.25.x but reverting it doesn't seem like the right thing to do either.
[20:26] <jamespage> x58, no it still needs fixing
[20:27] <jamespage> x58, tbh a lot of this relates to the fact that we need a consistent, resolvable hostname based environment from juju
[20:27] <x58> We have had 40+ deployments in our lab with 2.0 with the charm as it stands now, and everything clusters and works without issues.
[20:27] <x58> Yeah...
[20:27] <jamespage> x58, a charm should never have to mess with /etc/hosts
[20:27] <jamespage> and that was the fix we tried
[20:28] <jamespage> basically unless every rabbit unit had the same view as every other one, units failed to cluster
[20:29] <x58> As I mentioned, currently it works for us. It may not be ideal, but it works. Reverting it will break it.
[20:29] <x58> A better solution should be found...
[20:30] <jamespage> x58, fix your charm version for the time being
[20:31] <jamespage> x58, we will have a better fix for 2.0 users
[20:31] <jamespage> if a particular version works now, continue to use that and don't upgrade
[20:31] <x58> Ok. Will do.
[20:31] <jamespage> x58, actually wait - which charm version are you using? I reverted this in the stable charm as well as the dev branch
[20:31] <x58> Let me check my bundle file.
[20:34] <x58> prod: cs:~openstack-charmers-next/xenial/rabbitmq-server lab: rabbitmq-server
[20:35] <x58> I just noticed the comment on the bug, we haven't had to redeploy lab yet, but I have a feeling that as soon as we grab the reverted version, we are going to hit the original bug.
[20:35] <kwmonroe> rick_h_ beisner: if you'd like to double check the language, i merged the "machine specifications" and "bundle placement" sections into 1 as per our earlier discussion:  https://github.com/juju/docs/pull/1448
[20:35] <kwmonroe> (that is, apps can specify machine placement, not service/X placement)
[20:37] <x58> jamespage: ^^
[20:37] <jamespage> x58, yeah if you deploy clustered I think you will
[20:38] <x58> What was the previous version so we can pin that?
[20:39] <vmorris> this is a dumb question, but how do i get the credentials for the percona db post-deploy?
[20:40] <beisner> vmorris, i believe you have to set it via the charm config options
[20:40] <vmorris> deployed via the openstack-base bundle, it wasn't set
[20:41] <vmorris> looking at the juju config output...
[20:41] <vmorris>   root-password:
[20:41] <vmorris>     default: true
[20:41] <vmorris> ?
[20:44] <jamespage> x58, I think its https://jujucharms.com/rabbitmq-server/xenial/5
[20:44] <jamespage> rabbitmq-server-5
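To pin that in a bundle, the stanza might look like this (revision 5 per jamespage's pointer; treat the exact revision and unit count as things to verify, not a recommendation):

```yaml
# Illustrative pin to a specific charm revision so later deploys
# don't pull in the reverted behaviour.
rabbitmq-server:
  charm: cs:xenial/rabbitmq-server-5
  num_units: 3
```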
[20:45] <jamespage> vmorris, ok so there is a gotcha here - if you want to deploy multiple pxc units, you must provide via configuration
[20:45] <jamespage> vmorris, I need to spend some time on pxc overhauling the bootstrap process and password management stuff to use leader election
[20:46] <jamespage> it pre-dates juju providing anything helpful for generating passwords for clustered services from within a unit
[20:47] <beisner> hi kwmonroe, rick_h_ i'd prefer to defer to someone on juju-core who is code-familiar with native deployer re: docs.
[20:48] <jamespage> x58, just confirming that now - have a MAAS 2.0 /Juju 2.0 env I'm doing some other testing in
[20:48] <jamespage> it's nippy so spinning up a few more containers is OK
[20:52] <jamespage> x58, yup that version lgtm
[20:52] <jamespage> http://paste.ubuntu.com/23320046/
[20:53] <x58> Excellent. Will pin it.
[20:57] <x58> Thanks jamespage!
[20:57] <x58> Looking forward to seeing the issue fixed properly.
[20:59] <x58> Is there something logging the channel to HTTP? Something like botbot.me would be awesome.
[21:00] <x58> https://botbot.me/request/
[21:02] <hallyn> so if i have a workload running in, say, environment gce, can i juju switch amazon, start a different workload, and switch back and forth?  /me is afraid to try and ruin the current install :)
[21:05] <freyes> x58, you can find the logs at https://irclogs.ubuntu.com/2016/10/13/%23juju.html
[21:05] <x58> Excellent.
[21:05] <x58> Would be nice to drop a link to that in the topic!
[21:06] <x58> Doesn't have latest logs :-(
[21:07] <x58> I spoke too soon ;-)
[21:27] <vmorris> jamespage: thanks for that
[22:06] <kwmonroe> hallyn: sure, you can switch back and forth between controllers/models.  the only thing i would be wary of is doing something like "juju bootstrap foo" in a tmux session and then "juju bootstrap bar" in another.
[22:07] <kwmonroe> hallyn: that said, once the controller has received your request and you're returned to a command prompt, you can switch to whatever your heart desires.
[22:10] <hallyn> kwmonroe: and switch back and forth?
[22:11] <kwmonroe> sure hallyn.. juju doesn't forget when you switch to something else :)
[22:11] <rick_h_> hallyn: kwmonroe and most commands take a -m for a model or controller:model combo
[22:12] <rick_h_> so you can status w/o a switch and such
[22:12] <kwmonroe> hallyn: 'juju controllers' and 'juju models' are a lifesaver for me to remember where my stuff is deployed
[22:12] <hallyn> kwmonroe: awesome!  thx :)
[22:12] <kwmonroe> hallyn: let me know when you try it so i can sign off
[22:13] <kwmonroe> (totally kiddin)
[22:13] <hallyn> :)
[22:17] <hallyn> drat, i guess models are purely a 2.0 thing
[22:18] <rick_h_> hallyn: yes, whole new world
[22:30] <kwmonroe> hallyn: it's true models are new in 2.0, but you can still switch in juju 1.25.  i go from aws to azure all the time.
[22:31] <kwmonroe> hallyn: again, the only thing i would be concerned about is if you did an operation (like bootstrapping one env) and switched before it was completed.
[22:31] <kwmonroe> but you should really just go to juju2. like rick_h_ said, it's a whole new (better) world :)
[22:31] <kwmonroe> go to yakkety while you're at it ;)
[23:56] <pmatulis> how do i influence the contents of /etc/neutron/plugins/ml2/ml2_conf.ini ? i'm using a bundle to set up openstack