[02:16] <Budgie^Smore> well it has been one of those days!
[03:01] <lazyPower> Budgie^Smore how so?
[03:02] <Budgie^Smore> ended up reinstalling the work laptop ... windows to ubuntu
[03:03] <lazyPower> ah the great nuking and repaving day
[03:03] <Budgie^Smore> working on a Vagrantfile to create the different VMs that I need... came across an old git repo of someone doing that for MaaS
[03:07] <Budgie^Smore> oh and the windows to ubuntu change was because I couldn't get ssh to work nicely so I could ssh in and start a VM headless from the CLI
[07:34] <kjackal> Good morning juju world!
[08:17] <kklimonda> when a hook is failing, I can run juju debug-hooks unit to see what's happening
[08:17] <kklimonda> how can I change options being passed to the hook?
[08:29] <kjackal> hi kklimonda I am not sure you can change the parameters passed to the hook. However, since you are in debugging mode you can alter the values of the parameters you already get, right?
[08:46] <kklimonda> yes, I can definitely do that
[09:13] <kklimonda> or I could if this wasn't used in a dozen different places.. I have to figure out where to set it in hooks
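As a sketch of the workflow kjackal describes (unit, app, and relation names here are hypothetical): inside the tmux session that `juju debug-hooks` opens, the hook tools are live, so values can be inspected and the hook re-run by hand:

```shell
juju debug-hooks my-app/0           # attach; a window opens when a hook fires
# inside the debug session, at the point the failing hook would run:
config-get                          # see the charm config as the hook sees it
relation-ids db                     # list relation ids for a hypothetical 'db' relation
relation-get -r db:0 - other-app/0  # dump the remote unit's relation data
hooks/config-changed                # run the hook by hand, tweak, repeat
exit 0                              # exiting with status 0 marks the hook resolved
```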
[09:16] <Zic> lazyPower: ping back
[09:16] <Zic> (but you're probably sleeping :p)
[09:44] <kklimonda> if bundle deployment fails on one machine, can I retry just this one machine?
[10:24] <BlackDex> i'm working on a shared server with someone, and we both use juju but with different environments
[10:25] <BlackDex> i mean models
[10:25] <BlackDex> are there environment variables i can use so that juju selects that model instead?
[10:26] <BlackDex> because if i, or someone else, does juju switch, it's selecting that model
[10:27] <BlackDex> JUJU_ENV isn't working
[10:28] <anrah> would JUJU_MODEL work?
[10:29] <BlackDex> no idea
[10:29] <BlackDex> lets check
[10:30] <BlackDex> yes
[10:30] <BlackDex> that sounds like a nice feature
[10:30] <BlackDex> :)
[10:30] <BlackDex> A pity it isn't documented
[10:32] <BlackDex> hmm it is on github i see
[10:32] <anrah> BlackDex: https://jujucharms.com/docs/stable/reference-environment-variables
[10:33] <BlackDex> how could i have missed that
[10:33] <BlackDex> i searched for environment in the docs
[10:33] <BlackDex> :S
[10:33] <BlackDex> probably some typo
[10:33] <BlackDex> thx!
[10:33] <anrah> np! :)
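For reference, a minimal sketch of the per-shell model pinning discussed above (the model name is hypothetical):

```shell
# each user exports JUJU_MODEL in their own shell; it takes precedence
# over the current model set by `juju switch`
export JUJU_MODEL=alice-dev
juju status          # operates on alice-dev
juju deploy mysql    # also targets alice-dev, regardless of `juju switch`
```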
[10:42] <anrah> Does bundle deployment work with manual cloud?
[10:42] <anrah> I have manually added couple servers to juju and I would like to use bundle-file to deploy my apps
[10:43] <anrah> I get errors for each machine:
[10:43] <anrah> placement "0" refers to a machine not defined in this bundle
[11:12] <anrah> when setting the machines part of the bundle, i get an error:
[11:12] <anrah> ERROR cannot deploy bundle: cannot create machine for holding my-charm unit: cannot add a new machine: use "juju add-machine ssh:[user@]<host>" to provision machines
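For reference, a sketch of targeting pre-added manual machines from a bundle (host names hypothetical). On a manual cloud Juju cannot create machines itself, which is what the error above is saying; later Juju releases added `--map-machines` so bundle placements can be mapped onto machines you have already enlisted:

```shell
# enlist the servers by hand first (hypothetical hosts)
juju add-machine ssh:ubuntu@server1.example.com   # becomes machine 0
juju add-machine ssh:ubuntu@server2.example.com   # becomes machine 1

# then map the bundle's machine IDs onto the existing machines
# (--map-machines is available in newer Juju releases, not in 2.1)
juju deploy ./bundle.yaml --map-machines=existing
```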
[12:16] <alexlist> @jcastro: in LP#1662172 I mentioned that conjure-up didn't copy .kube/config and kubectl to the controlling host, and just noticed the workaround is documented here: https://github.com/juju-solutions/bundle-canonical-kubernetes/tree/master/fragments/k8s/core - however to streamline things, I suggest amending the docs to copy kubectl to ~/bin, which should be in people's path if they use the default .profile from /etc/skel
[12:29] <jcastro> I did this a bunch of times yesterday and at the end it prompted and copied the binary over
[12:30] <jcastro> oh, and also, if not obvious, for 1.5.x we stopped bundling the elastic bundle by default, though you can still deploy after the fact
[12:31] <jcastro> oh ok, I see we put that workaround in the docs anyways: https://kubernetes.io/docs/getting-started-guides/ubuntu/installation/
[12:34] <jcastro> alexlist: any other feedback on that page before I start a PR?
[12:35] <alexlist> jcastro: Not yet, will probably redo the whole thing once more in a VM to verify...
[12:35] <jcastro> did you deploy the bundle manually or via conjure?
[12:36] <alexlist> conjure
[12:36] <jcastro> ok, at the end conjure should prompt you to copy the creds and binary over automatically
[12:36] <alexlist> Lemme try this once more...
[12:36] <jcastro> if that doesn't happen lmk the version of conjure and we can have stokachu take a look
[12:36] <alexlist> ok
[12:45] <jcastro> https://github.com/kubernetes/kubernetes.github.io/pull/2556
[12:45] <jcastro> a review here would be lovely!
[12:51] <stokachu> alexlist, yea lemme know as it's supposed to copy to ~/bin automatically in the steps view
[12:51] <jcastro> oh ok so conjure already copies to ~/bin?
[12:51] <jcastro> ok so really the manual steps were the ones that needed to be fixed then
[12:53] <jcastro> then when rye has the readmes use the upstream markdown instead of the bundle markdown, it should all generate from one source of truth instead of the two we have now
[13:13] <ahasenack> marcoceppi: hi, around?
[13:13] <ahasenack> does anybody know if there is a python3-libcharmstore package for trusty somewhere? It used to be in ppa:juju/devel, but the latest build there failed
[13:14] <marcoceppi> ahasenack: I'm fixing that today
[13:14] <ahasenack> the trusty build?
[13:14] <ahasenack> I actually don't know what is pulling libcharmstore in
[13:14] <marcoceppi> charm-tools
[13:15] <ahasenack> where will it be uploaded to?
[13:15] <marcoceppi> juju/stable
[13:24] <SimonKLB> hey marcoceppi! could you tell me what the current best practice is for exposing charms deployed in lxd containers on aws? when im running locally with nested lxd containers i create some simple NAT rules in iptables, but when im running in a public cloud i also need to get the proper security rules in place and those don't seem to be created when executing `juju expose X` on the containerized application
[13:25] <SimonKLB> (sorry, that was a long question) :D
[13:26] <rick_h> SimonKLB: since you can't get at the containers from the outside you need something to help proxy things.
[13:26] <rick_h> SimonKLB: usually i setup a HAProxy on the root of the host
[13:28] <marcoceppi> rick_h: does the network setting changes in 2.1 help address this?
[13:28] <rick_h> marcoceppi: not atm. You still can't get multiple addresses/mac addresses, so the containers aren't internet addressable
[13:28] <SimonKLB> rick_h: right! it would be useful to have some mechanism to expose containerized applications though, for example in the openstack bundle where lots of components are put in lxd containers instead of directly on the host
[13:29] <rick_h> SimonKLB: yes, agreed. So the team is working to enable that when there's something that allows it. The network changes in 2.1 are a move in that direction, and I know that by 2.2 the idea is to have that in places like the manual provider and openstack, places where you might be able to get dhcp to the containers for root-level ip addresses
[13:30] <rick_h> SimonKLB: but AWS does some things in their SDN that only allows the one mac address on hosts so it's harder to get containers exposed
[13:30] <SimonKLB> rick_h: even without dhcp access, NAT:ing could be an option, you just need to first open the ports in the security rules
[13:31] <rick_h> SimonKLB: right, but because it has to be one mac the host can't NAT multiple things inside and tell who it goes to is my understanding. It could only do something based on one container per port perhaps.
[13:32] <rick_h> SimonKLB: but yea, atm you have to handle that via a proxy and config and such, but the team's actively working on it across many of the providers
[13:33] <SimonKLB> rick_h: yea, that is what im doing right now, adding NAT rules with destination port as the match, so if you expose an application that is running a service on port 80 you add the iptables NAT rule and then expose the application like you would if it was running on the host
[13:35] <SimonKLB> for example exposing keystone in the openstack-base bundle: iptables -t nat -A PREROUTING -p tcp --dport 5000 -j DNAT --to [private ip]:5000
[13:36] <SimonKLB> this works great when you're running on localhost
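SimonKLB's approach, spelled out as a sketch (the container IP and interface are hypothetical; on AWS the instance's security group still has to allow the port, which is the part `juju expose` doesn't handle for units inside containers):

```shell
# forward incoming traffic on the host to the LXD container (hypothetical IP)
CONTAINER_IP=10.0.8.12
iptables -t nat -A PREROUTING -p tcp --dport 5000 \
  -j DNAT --to-destination ${CONTAINER_IP}:5000
# rewrite the source address on the way out so replies come back via the host
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```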
[13:44] <alexlist> stokachu: it did indeed copy the files to ~/.kube/config and ~/bin/kubectl, but the last steps in conjure-up still threw errors
[13:44] <stokachu> alexlist, what error?
[13:45] <alexlist> stokachu: http://pastebin.ubuntu.com/24007303/
[13:45] <Zic> I had the same error today alexlist / stokachu
[13:45] <Zic> conjure-up tries to run kubectl get nodes / get pods before the cluster is ready during the deployment
[13:45] <alexlist> stokachu: most likely a race condition - now it tried to copy the files even though the deploy isn't finished yet
[13:46] <alexlist> what Zic said...
[13:46] <Zic> (I just ignored the error, and the installation finished correctly btw)
[13:46] <stokachu> what rev are you guys on?
[13:46] <Zic> let me check
[13:46] <stokachu> snap list conjure-up
[13:46] <Zic> I don't use the snap
[13:46] <Zic> it's the PPA version
[13:47] <stokachu> Zic, ah, should migrate to the snap when you can
[13:47] <Zic> 2.1.0-0~201701041302~ubuntu
[13:47] <Zic> stokachu: noted :)
[13:47] <alexlist> 2.1.0-0~201701041302~ubuntu16.10.1
[13:47] <Zic> I'm on Ubuntu 16.04
[13:47] <stokachu> yea you guys should be using the snap version
[13:47] <Zic> ok
[13:48] <stokachu> my test runners haven't seen this error yet; try `sudo snap install conjure-up --classic --candidate` then `sudo apt-get remove conjure-up juju-2.0`
[13:48] <alexlist> stokachu: I just followed https://jujucharms.com/canonical-kubernetes/ which tells me to use the PPA...
[13:48] <stokachu> conjure-up provides everything you need
[13:48] <stokachu> alexlist, yea ive got a PR to get that changed
[13:48] <stokachu> alexlist, hasn't landed yet
[13:48] <alexlist> \o/
[13:49] <stokachu> with snaps you can deploy on trusty now too
[13:49] <stokachu> latest juju etc
[13:49] <Zic> stokachu: I don't have any preference between deb or snap, but https://jujucharms.com/canonical-kubernetes/ should be upgraded with the snap package installation I guess
[13:50] <Zic> it's the only reason of why I used the PPA :)
[13:50] <stokachu> Zic, yep soon as they land the PR and bump charm revs
[13:50] <Zic> cool
[13:51] <BlackDex> Is there someone here who can help me with some problems with the nrpe charm? It doesn't install/announce the disk/mem/cpu checks? stub, blahdeblah, hloeung, pjdc_ or someone else?
[13:51] <magicaltrout> there's a bug and a fix open for that I think BlackDex
[13:52] <stokachu> Zic, alexlist once Juju 2.1 GA ive got deb updates that will point you to the snap install version
[13:52] <alexlist> stokachu: ok.
[13:52] <magicaltrout> cause i used it a bunch of times
[13:52] <BlackDex> magicaltrout: if you mean bug 1605733, i couldn't find that file anymore in the new charms
[13:52] <mup> Bug #1605733: Nagios charm does not add default host checks to nagios <canonical-bootstack> <family> <nagios> <nrpe> <unknown> <nagios (Juju Charms Collection):New> <https://launchpad.net/bugs/1605733>
[13:53] <alexlist> stokachu: Do you think this will work on a plain Debian as well, or are there too many things missing? Just asking, as I have to deal with a managed hosting provider who prefers Debian ...
[13:53] <stokachu> alexlist, i think there are efforts to get snapd running on debian
[13:53] <magicaltrout> yeah BlackDex well that was my issue 3 weeks ago
[13:53] <stokachu> i think it already does on the latest
[13:53] <magicaltrout> i've not checked it since
[13:53] <magicaltrout> but it was around like that for ages
[13:54] <stokachu> alexlist, you should join #snappy and see if one of those guys know more
[13:54] <stokachu> alexlist, but yea, the theory is wherever snappy can run you'll be able to use conjure-up
[13:54] <stokachu> fedora, arch etc
[13:54] <BlackDex> i will check it again by using the charm-tools to download the charm
[13:55] <magicaltrout> i just juju ssh into the unit and hack around the code, but whatever floats your boat :)
[13:55] <BlackDex> that is also an option, but not if you want to deploy it to a lot of instances
[13:58] <magicaltrout> http://bazaar.launchpad.net/~charmers/charms/precise/nagios/trunk/view/head:/hooks/common.py
[13:58] <magicaltrout> i don't think that has been updated BlackDex
[13:58] <magicaltrout> so I think that bug is still valid
[14:00] <BlackDex> man, i think i'm looking in the wrong charm now :p
[14:00] <BlackDex> i need to check the nagios charm, and not the nrpe
[14:01] <magicaltrout> yup
[14:01] <BlackDex> doh
[14:01] <magicaltrout> thats why I just hacked the code :)
[14:01] <BlackDex> lets see if that is the case for the xenial version also
[14:01] <BlackDex> that makes a bit more sense
[14:03] <BlackDex> oke
[14:03] <BlackDex> lets see what that does :)
[14:10] <alexlist> stokachu: ok, with the snap versions everything works as it should
[14:11] <stokachu> alexlist, great! thanks for testing
[14:19] <BlackDex> magicaltrout: Thx for clearing that up, i now installed the latest nrpe charm with the manually patched nagios, that seems to work
[14:33] <magicaltrout> no problem BlackDex
[14:33] <magicaltrout> in other news.... someone just brought a parrot into our kitchen at work....
[14:38] <BlackDex> parrot wants a cookie
[14:53] <lazyPower> Zic  :D
[14:57] <Zic> lazyPower: the production cluster is all fine, I have just one little trouble: the kube-dns pod restarts sometimes with no reason other than: 21h 3m 24 {kubelet mth-k8svitess-02} spec.containers{dnsmasq} Warning Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 503
[14:57] <Zic> I mitigated this with scaling the kube-dns deployment to 5 replicas instead of 1
[14:58] <Zic> (it keeps restarting sometimes, ~1 every two hours, but at least there are other kube-dns pods not restarting at the same time that can handle requests)
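The mitigation Zic describes, as a sketch (the deployment name and namespace are the stock kube-dns ones; the replica count is his choice):

```shell
# run several kube-dns replicas so one pod failing its liveness probe
# doesn't take cluster DNS down
kubectl --namespace=kube-system scale deployment kube-dns --replicas=5
kubectl --namespace=kube-system get pods -l k8s-app=kube-dns
```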
[14:58] <magicaltrout> when in doubt.... replicate!
[14:58] <lazyPower> Zic - fantastic, we were discussing whether we should scale that... i think there's a bug for this actually
[14:58] <Zic> magicaltrout: already done :p
[14:59] <lazyPower> Zic - https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/181
[14:59] <Zic> thx
[14:59] <lazyPower> not 1:1 the same, but if you could add your comments there it would add some weight
[14:59] <lazyPower> and we can probably get that scheduled for post 1.6 release
[14:59] <Zic> other than that, all is fine, I'm running on 1 master for now but I will switch to nominal-3 when you're patch will be officially released regarding my bug :)
[15:01] <lazyPower> Zic - well i think we want to use the autoscaler addon tbh
[15:01] <lazyPower> as its using metrics to drive the scale
[15:01] <lazyPower> surely it by defalt wants HA
[15:01] <lazyPower> *default
[15:02] <Zic> s/you're/your/
[15:02] <magicaltrout> Im pushing for CDK to be used on a Darpa project here lazyPower, dunno if it'll win, but i'm tryin'
[15:02] <lazyPower> magicaltrout - <3 <3 <3
[15:02] <lazyPower> magicaltrout - let me know if there's anything we can do to help support this effort
[15:03] <magicaltrout> slap some sysadmins?
[15:03] <Zic> I have just a landing page with a logo publicly available through this K8s cluster :D
[15:03] <lazyPower> Zic - no fancy mobile game servers? ;)
[15:03] <Zic> the real start is planned for the end of the month
[15:03] <Zic> lazyPower: it's about video streaming with paid access :)
[15:04] <Zic> (no, it's not p*rn !!)
[15:04] <magicaltrout> i'll have to get SaMnCo 's GPU stuff in there we've got like 100+ GPUS at launch
[15:04] <magicaltrout> Zic: you're streaming porn with CDK?!!! ;_
[15:04] <Zic> nope :p
[15:04] <SaMnCo> magicaltrout: yooohoooo
[15:05]  * magicaltrout tweets Zic 's revelation....
[15:05] <SaMnCo> What GPU?
[15:06] <lazyPower> Zic - schenanigans, sounds like p*rn to me
[15:10] <magicaltrout> i think most are titan X's SaMnCo but there are a range of different ones being brought in from other projects
[15:10] <magicaltrout> plus we got told off by Nvidia for using off the shelf hardware instead of the stupidly marked up "data processing GPUs"
[15:11] <magicaltrout> "how dare you use our chips for anything other than Games without us applying a 100% markup!"
[15:11] <lazyPower> O_o
[15:11] <lazyPower> obvious marketing effort is obvious
[15:11] <magicaltrout> yup
[15:12] <Zic> lazyPower: when I wrote "it's about video streaming with paid access" I immediately realized that it will be acknowledged as p*rn :p
[15:12] <Zic> so I clarified it :D
[15:12] <lazyPower> Zic - or netflix, people like to conflate the two
[15:12] <magicaltrout> i like the fact no one can write porn
[15:12] <Zic> magicaltrout: don't know if it's a banned word :p
[15:12] <magicaltrout> well lazyPower has yet to tell me off
[15:12] <Zic> seems there is no bots in this chan
[15:12] <lazyPower> i think its a fine word in the current context
[15:12] <magicaltrout> so i assume its acceptable, we are mostly adults after all :)
[15:12] <jrwren> I'm going to imagine it is sports.
[15:13] <magicaltrout> hehe
[15:13] <lazyPower> if you were being explicit i might have to remind you and me of the Code of Conduct
[15:13] <Zic> :D
[15:13] <lazyPower> however, we're all being sensible
[15:13] <magicaltrout> http://www.darpa.mil/program/data-driven-discovery-of-models this is what we're working on lazyPower
[15:13] <magicaltrout> 4 year programme about discovering data
[15:13] <lazyPower> magicaltrout - wow this is a complex etl stack, its not really fully decomposed
[15:13] <lazyPower> this is the 10k foot diagram right?
[15:13] <Zic> plus, I talked about lazyPower's body some days ago, so I'm limiting myself about prohibited word :D
[15:13]  * Zic left
[15:13] <lazyPower> oh my
[15:14] <lazyPower> you had to bring that back up didnt you
[15:14] <lazyPower> hawkwarddddd
[15:14] <Zic> :p
[15:14] <magicaltrout> http://www.darpa.mil/news-events/2016-06-17
[15:14] <lazyPower> yeahhh ml for ml
[15:14] <lazyPower> your recursion is neat :D
[15:14] <magicaltrout> yeah lazyPower lots of crazy GPU powered machine learning to run over public datasets to try and automatically detect the models without users having to write the code
[15:15] <lazyPower> and phase 1 of skynet will have been delivered
[15:15] <lazyPower> the fact CDK might possibly be empowering skynet, is kinda neat
[15:15] <magicaltrout> Zic: https://irclogs.ubuntu.com/2017/02/16/%23juju.html you mean this publically accessible log ? ;)
[15:16] <Zic> lazyPower: I kept the meme's image you gave me as goodies :p
[15:16] <magicaltrout> Zic 's love of lazyPower is forever preserved
[15:16] <Zic> xD
[15:16] <lazyPower> <3
[15:20] <Zic> it's why I gave the Juju OpenStack project to one of my colleagues, I only want to use Juju if there are "lazyPower parts"
[15:20] <magicaltrout> i wouldn't recommend that as a life choice ;)
[15:20] <lazyPower> ^
[15:20] <Zic> fun fact: Canonical's commercial support answered us with a "About your OpenStack Kubernetes stack"
[15:20] <Zic> I only mentioned Kubernetes in my contact mail...
[15:20] <Zic> :>
[15:21] <Zic> (we're going to buy commercial support eventually, when it really goes to prod)
[15:22] <mbruzek> awesome Zic
[15:25] <lazyPower> Zic - i appreciate your contributions to my pizza budget
[15:26] <Zic> :D
[15:26] <magicaltrout> that is the largest budget in Canonical
[15:27] <magicaltrout> more than Marks private jet costs...
[15:27] <Zic> the next Ubucon EU is at Paris
[15:27] <Zic> iirc
[15:27] <Zic> if you want your pizzas :p
[15:27] <magicaltrout> sod ubucon
[15:27] <lazyPower> i'm a firm believer in paying it forward
[15:27] <magicaltrout> get lazyPower to sponsor you to juju charmer summit
[15:27] <Zic> http://ubucon.org/en/
[15:27] <lazyPower> Zic - if you can enrich someone elses life with a pizza, pay it forward to them and i'll be happy
[15:28] <Zic> :D
[15:28] <Zic> lazyPower came to the EU one time, he's scared now
[15:28] <Zic> (because of me)
[15:29] <magicaltrout> na he got taken away by a weird british guy and his belgian friend.... he's not been the same since
[15:29] <lazyPower> ^ true story
[15:29] <lazyPower> WWI re-enactment actors
[15:29] <lazyPower> i got a heck of a history lesson that night magicaltrout
[15:31] <magicaltrout> thats what you tell us lazyPower
[15:31] <magicaltrout> no one else was there to witness it
[15:31] <lazyPower> a gentleman never tells
[15:31] <Zic> xD
[15:32] <lazyPower> but what happened does *not* rhyme with zic's business model
[15:32] <magicaltrout> hehe
[15:50] <SaMnCo> Zix who is your sales rep?N
[15:51] <SaMnCo> @Zic who is your sales rep?
[15:51] <SaMnCo> missed the key :/
[15:51] <SaMnCo> @magicaltrout : so it is bare metal stuff?
[15:52] <magicaltrout> SaMnCo: we've got some baremetal stuff some openstack stuff
[15:52] <magicaltrout> depends where stuff gets deployed
[15:56] <Zic> SaMnCo: we don't have any contact for now, we're just preparing it, but our contact is Mac Belonwu
[15:57] <Zic> (I haven't answered him yet, as I need to have some COMEX discussion at our company first :/)
[15:58] <Zic> SaMnCo: or maybe I misunderstand your question: nope, I'm just a sysadmin at our company :)
[15:58] <SaMnCo> OK. I usually cover EU for pre-sales stuff around k8s, so any question or issue don't hesitate to involve me
[16:00] <magicaltrout> SaMnCo is trying to steal commission ! ;)
[16:00] <SaMnCo> ahahah :D
[16:00] <SaMnCo> you know sales / pre-sales, it's like Jehovah people
[16:00] <SaMnCo> always go by 2
[16:01] <Zic> we're Paris-based, so maybe you're more involved in my request than your colleague
[16:01] <magicaltrout> that then try and extor money from you with tales of woe! you're correct its identical
[16:02] <magicaltrout> silly keyboard -that s/extor/extort
[16:06] <SaMnCo> Zic: no no, I just wanted to understand where you were in the process to see if there was a need for tech support on your end. I contacted Mac, so we're all set.
[16:18] <Zic> SaMnCo: we just sent a request via ubuntu.com for now; we didn't really answer him (he just had our sales rep on the phone) as we don't have all the elements
[16:41] <SaMnCo> ok
[17:29] <Cynerva> Hey folks, if I git clone juju and run `snapcraft`, does it build from the local repo?
[17:29] <Cynerva> I want to try something that's in the 2.1 branch but not in a release candidate yet
[17:31] <Cynerva> I'm not familiar with golang or the juju repo, so I just want to make sure the resulting snap has whatever's in the tree :)
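A sketch of what Cynerva is asking about, assuming snapcraft builds from the working tree it is invoked in (the branch name comes from the discussion; the install flags are the usual ones for a locally built classic snap):

```shell
git clone https://github.com/juju/juju.git
cd juju
git checkout 2.1     # the branch with the unreleased change
snapcraft            # builds a snap from this local checkout
sudo snap install juju_*.snap --classic --dangerous
```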
[18:24] <neiljerram> Is there a way that I can have all units in a bundle deployed to machines in the same GCE zone?
[18:31] <marcoceppi> neiljerram: you can, but it's typically not advised
[18:31] <neiljerram> marcoceppi, I'm wondering if it might help with a problem I'm seeing with 'juju ssh'
[18:32] <lazyPower> neiljerram - i can confirm i've been experiencing networking issues in google/us-central1 region
[18:32] <lazyPower> and it started this morning, it was fine lastnight.
[18:32] <neiljerram> lazyPower, yes, that's where I've been seeing issues too.
[18:33] <lazyPower> neiljerram - i moved to us-east1, its a bit slower, but it doesn't have the same connectivity issues
[18:33] <neiljerram> lazyPower, But I believe my issues are much more longstanding than just the last day or two.
[18:33] <lazyPower> and more to the point of being obnoxious, it's intermittent
[18:34] <neiljerram> What is the symptom that you see?
[18:35] <lazyPower> neiljerram - i have issues connecting between units in different az's and i have some external connectivity failures, specifically with their apt mirror.
[18:35] <lazyPower> i had an etcd cluster tank during testing because AZ-a wasn't able to talk to AZ-c for whatever reason
[18:36] <neiljerram> Interesting.  The thing I notice first, in my case, is 'juju ssh ...' failing.  But it could also be that there are connectivity issues between the deployed units.
[18:37] <lazyPower> that would be consistent actually if your controller is in a different AZ than your unit
[18:37] <lazyPower> i do believe that default behavior is to proxy through the controller to establish ssh, but that is configurable
[18:38] <neiljerram> Ah, that sounds good, where is the switch for that?
[18:38] <lazyPower> proxy-ssh                   default  false
[18:38] <lazyPower> i see its defaulted to false here, so looks like i may be wrong
[18:39] <lazyPower> neiljerram - fyi - juju model-config  or juju model-defaults
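The setting mentioned above can be checked or flipped with `juju model-config` (a sketch; `proxy-ssh` is the key named in the discussion):

```shell
juju model-config proxy-ssh         # show the current value (default: false)
juju model-config proxy-ssh=true    # route `juju ssh` through the controller
```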
[18:40] <neiljerram> I have false as well, already - so guess that's good.
[18:41] <neiljerram> So for getting all unit machines in the same zone, I think I just discovered the method for that:
[18:41] <neiljerram> for n in `seq 1 10`; do juju add-machine zone=us-central1-c; done
[18:42] <neiljerram> But I still have my controller in a different zone (us-central1-a)...
[18:43] <neiljerram> I guess that setting the zone for the controller would need to be something on the 'juju bootstrap' invocation. Any ideas?
[18:45] <lazyPower> neiljerram - there are bootstrap-constraints
[18:45] <lazyPower> juju bootstrap --help has an overview
[18:48] <neiljerram> I already had that help in my terminal, but hadn't seen --bootstrap-constraints.
[18:50] <neiljerram> So would it be: juju bootstrap google/us-central1 bundle --config image-stream=daily --bootstrap-constraints zone=us-central1-c
[18:50] <neiljerram> (Just waiting for my existing controller to die so I can try myself.)
[18:51] <neiljerram> No, ERROR unknown constraint "zone"
[19:02] <lazyPower> hmmm
[19:03] <lazyPower> i would have thought that would have mirrored the constraints you can pass to --constraints
[19:03] <lazyPower> i admittedly have not attempted to pass a zone constraint on those constraints.
[19:09] <neiljerram> Ah, it's --to instead of --bootstrap-constraints.
[19:09] <neiljerram> (Discovered from code reading!)
[19:09] <neiljerram> So: juju bootstrap google/us-central1 bundle2 --config image-stream=daily --to zone=us-central1-c
[19:12] <lazyPower> oh neat
[19:12] <lazyPower> #TIL
[20:37] <freyes> hi marcoceppi , I noticed that ceph-proxy doesn't exist under https://bugs.launchpad.net/charms/ , so it's not possible to file a bug against it ( https://jujucharms.com/ceph-proxy/xenial/0 ), could you add it? or do you know who could do it?
[20:44] <stormmore> finally gotten around to installing an IRC client
[20:44] <stormmore> o/ juju world!
[20:54] <marcoceppi> o/ stormmore welcome back :)
[20:59] <stormmore> hate doing workstation reinstalls but sometimes ya gotta do what ya gotta do!
[21:00] <lazyPower> stormmore - :) :) w
[21:00] <lazyPower> *wb
[22:45] <Budgie^Smore> hmm this is weird
[23:21] <stormmore> OK this is annoying, didn't use to get ping timeouts :-/