[00:05] <Guest20118> @tavansteenburgh, thanks
[06:11] <kklimonda> when is 2.1.2 being released? launchpad suggests last friday, but there are no packages I can find
[06:42] <kklimonda> zeestrat:  looks like I can't reference variables in other variables? I guess that would be cheating ;)
[07:27] <kjackal> Good morning Juju world!
[08:22] <zeestrat> kklimonda: Yeah, I tried that too ;)
[09:56] <cnf> can you register multiple users on one machine with juju?
[09:57] <cnf> and switch between them?
[09:59] <kjackal> cnf you can register multiple users on the same controller
[10:01] <cnf> kjackal: yes, but can I log in to different users on the same local machine
[10:01] <cnf> using the juju command
[10:01] <cnf> i don't want to be admin all the time
[10:02] <kjackal> cnf: I have never seen that
[10:02] <cnf> hmm
[10:02] <cnf> and no ACL's either, it seems
[10:03] <kjackal> you mean do a "juju ssh myapplication/0" and you get a shell that is not a sudoer user
[10:03] <kjackal> That is an interesting feature. What is the use case you want to serve with this?
[10:05] <cnf> no, i mean juju register
[10:05] <cnf> multiple users
[10:06] <cnf> right now, everything i do with the juju command is as admin
[10:07] <cnf> have to pick up some guests at reception, brb
[10:13] <cnf> back
[10:14] <cnf> kjackal: i basically want to restrict myself when i don't need to be admin
[10:14] <kjackal> cnf: your juju client can be a non-privileged user
[10:15] <kklimonda> I just had juju fail to bring a couple of containers on two machines, with error cannot start instance for machine "0/lxd/15": unable to setup network: no obvious space for container "0/lxd/15", host machine has spaces: "management", "two-ceph-private", "two-ceph-public"
[10:15] <kklimonda> but that actually worked fine for other units of the same application, and for other containers on the same machine
[10:15] <kklimonda> (I'm testing 2.1.2 from git right now)
[10:16] <cnf> kjackal: right, but do i need to delete the admin credentials?
[10:16] <cnf> or can i swap between them?
[10:17] <kjackal> cnf: what kind of credentials?
[10:26] <cnf> kjackal: the admin user you are authenticated with
[10:27] <cnf> kjackal: now, my juju command is registered as full admin, right?
[10:27] <cnf> i can destroy models, delete users, etc etc
[10:28] <kjackal> you can create a juju user (juju add-user) and then grant permissions (eg juju grant myuser add-model)
[10:28] <cnf> right, i did that
[10:28] <cnf> but when i want to register it, it says i can't because i am already registered
[10:29] <cnf> so is the only way to unregister the admin user, and register the new user
[10:29] <cnf> and when i need admin the other way around?
[10:31] <kjackal> I haven't tried to register a juju user with a local user that is already registered
[10:31] <kjackal> I guess it is expected for juju to complain
[10:32] <kjackal> can you have a second local user to register with the controller with limited permissions?
[10:32] <cnf> you mean add a new local user?
[10:33] <cnf> that's a lot of extra work for this :P
[10:35] <cnf> maybe i'll set up a docker container or something
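The flow discussed above can be sketched roughly as follows (a Juju 2.x sketch; the user name, the permission, and the separate JUJU_DATA directory trick are illustrative assumptions, since a single client store holds only one registration per controller):

```shell
# on the admin client: create a restricted user; this prints a
# `juju register <token>` string to hand to the new identity
juju add-user deployer
juju grant deployer add-model

# keep a second client identity on the same machine by pointing
# JUJU_DATA at a separate directory (illustrative workaround)
export JUJU_DATA=$HOME/.local/share/juju-deployer
juju register <token-from-add-user>
```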
[11:32] <hoenir> can anyone recommend a charm that supports adding storage?
[11:34] <kklimonda> ceph and ceph-osd
[11:44] <tvansteenburgh> hoenir: the postgres charm does
[12:33] <kklimonda> when juju is deploying containers on MAAS, what is responsible for assigning IPs? I have a reservation for IP range, but Juju still deployed LXD containers with IPs from this range
[13:10] <cnf> kklimonda: did you have free ip's?
[13:34] <kklimonda> cnf: I should have 100+ free IPs in that network
[13:34] <kklimonda> (based on quick math and what MAAS tells me)
[13:37] <cnf> k
[13:37] <cnf> i had all ip's in either dynamic or static pool
[14:04] <skayskay> resources question. I have a snap charm and made a bad assumption about resources as optional things. I thought I could remove a resource. The charm is for a public snap, so I want to grab it from the store.
[14:04] <skayskay> but when I want to test a change in a fully deployed environment, I'd like to attach a snap resource
[14:05] <skayskay> that is a bad assumption since I can't remove a resource.
[14:05] <skayskay> so, how do people normally test things in that situation? I'd rather not push a candidate snap if I'm doing something like adding debugging code to get more info, etc etc etc
[14:14] <stub> skayskay: If you are using the snap layer, attaching a 0 byte resource is effectively the same as removing the resource.
[14:15] <stub> skayskay: You should be able to have your edge charms with a snap attached as a resource, and your stable channel charms with no snap or 0 byte resource.
[14:15] <skayskay> stub: thanks, I didn't think of attaching a new zero byte resource. I feel sheepish now.
[14:16] <skayskay> :)
[14:16] <skayskay> (I am using the snap layer)
[14:16] <stub> skayskay: It's a hack, but it works until we can drop this resources-are-required nonsense.
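stub's workaround, spelled out (charm and resource names are made up; `juju attach` is the Juju 2.x spelling and needs a live model, so it is left commented):

```shell
# a zero-byte file stands in for "no resource"; with the snap layer
# the charm then falls back to installing the snap from the store
touch empty.snap
ls -ln empty.snap     # a 0-byte file

# attach it to the deployed application:
# juju attach mycharm mysnap=./empty.snap
```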
[15:24] <Zic> lazyPwr: hey, it's been a while since we talked! I have a question for you (or for marcoceppi too), can I update the nginx-ingress-controller deployed by CDK manually?
[15:24] <Zic> I need this functionality recently merged: https://github.com/kubernetes/ingress/pull/246/commits/5cc5669938108ab7429bc7eee40c18a6ba18150a
[15:25] <lazyPwr> Zic: hey, 1 sec let me see what's in the link
[15:26] <lazyPwr> Zic: so this looks like k8s golang bin code... do you know if this is landing in the addon container?
[15:26] <lazyPwr> Zic: if so, you should be able to just update the image reference in /etc/kubernetes/addons/ingress-*.yml
[15:26] <lazyPwr> Zic: however, you might need to set it in the Template dir of the charm, so it doesn't get regenerated right back out of the template before it's re-scheduled
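The addon-manifest edit lazyPwr describes amounts to bumping one image tag, roughly like this (a hypothetical excerpt; the actual file and container names in CDK may differ):

```yaml
# /etc/kubernetes/addons/ingress-replication-controller.yaml (excerpt)
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-lb
          # bump this tag to pick up the newer controller
          image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2
```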
[15:27] <lazyPwr> stub: hey, a little bit of an update on that btw, i spoke to rick about optional resources @ the last sprint. It's now acknowledged that it's a potential issue, but no resolution has been offered as of yet.
[15:28] <Zic> lazyPwr: the last message is what I feared :( can I do this simply without breaking future updates of the charm ?
[15:28] <lazyPwr> skayskay: also, one workflow tip that mbruzek and I have done is resource-revision 0 is always a zero byte resource (by convention, we touch and push) so you can always re-publish with a zero byte resource simply.
[15:29] <Zic> lazyPwr: or do you know if this PR is already in the image used by the kubernetes charm in 1.5.3? I'm still on 1.5.2 currently
[15:29] <lazyPwr> Zic: certainly if you patch the template and toggle ingress=true it will get your update. on the next charm upgrade it will overwrite the changes to that template and you should be back in alignment with our upstream releases
[15:29] <lazyPwr> Zic: i'm not positive on if its in the 1.5.3 release, it depends on when it was cut and added.
[15:30] <Zic> yeah, I saw this PR is 26 days old, but I don't know if it was merged for the 1.5.3 release
[15:30] <lazyPwr> i'm going to err and say it wasn't
[15:30] <Zic> https://github.com/kubernetes/ingress/pull/246 <= the PR in question
[15:31] <lazyPwr> Zic: the trouble here is the issue wasn't attached to a release https://github.com/kubernetes/ingress/issues/180
[15:31] <lazyPwr> so its hard to discern when it was actually pulled into a release or if its just sitting in trunk
[15:31] <lazyPwr> Zic: i would probably ping one of the authors of this pr and ask when it was/will-be released.
[15:35] <Zic> lazyPwr: thanks. also, just seeing the tag "Coverage 46%" on the issue, it seems pretty certain it's not released then :(
[15:36] <Zic> I will try to ping the author anyway
[15:39] <mbruzek> Zic: I can get you a resource if you can examine the binary files to see if the fix is in there.
[15:40] <mbruzek> Zic: Alternately if you check out the tag branch "v1.5.3"  You could check for the code in there
[15:42] <Zic> mbruzek: good idea, it seems to be this one: https://github.com/kubernetes/ingress/releases/tag/nginx-0.9.0-beta.2
[15:42] <Zic> I will check
[15:44] <Zic> ok so
[15:44] <Zic> 1.5.2 and 1.5.3 of Kubernetes in CDK's charm both use 0.8.3 of this nginx-controller
[15:45] <Zic> the proxy-set-headers I need appeared in 0.9.0-beta2
[15:45] <Zic> (but it's beta as you can see...)
[15:45] <Zic> but a later stable version is 0.9.2
[15:46] <Zic> are you planning to switch to 0.9.2 in the next release then? :D
[15:49] <Zic> https://github.com/kubernetes/ingress/releases <= oh, I made a mistake, it's not nginx which is at 0.9.2, it's the GCE ingress...
[15:49] <Zic> the latest version of nginx-ingress-controller is still 0.9.0-beta.2, which contains the feature I wanted
[15:55] <Zic> lazyPwr / marcoceppi : in the meantime, as I really need to customize the nginx headers for the launch of this customer, can I simply "kubectl edit rc nginx-ingress-controller" and swap the image to "gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2" instead of "gcr.io/google_containers/nginx-ingress-controller:0.8.3" (which CDK deployed)?
[15:55] <Zic> as I understand it, the next upgrade will overwrite this change
[15:56] <Zic> but I just need to remind myself to re-apply it, no?
[15:56] <Zic> I don't feel confident in hacking the CDK template and deploy it for a production cluster :/
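A non-interactive version of the swap Zic proposes would look something like the following (a sketch; the container name and label selector are guesses, so check `kubectl get rc nginx-ingress-controller -o yaml` first, and it requires a live cluster):

```shell
kubectl set image rc/nginx-ingress-controller \
  nginx-ingress-lb=gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2

# a replication controller doesn't roll its pods automatically;
# delete the old pod so the RC recreates it with the new image
kubectl delete pod -l name=nginx-ingress-lb
```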
[16:07] <marcoceppi> Zic: good question, Cynerva ryebot ^ ?
[16:08] <Cynerva> catching up, gimme a minute :)
[16:09] <Zic> :)
[16:11] <Zic> haha, I just realized that I confused marcoceppi & mbruzek in my earlier highlighting of CDK's maintainers :)
[16:11] <Zic> sorry :]
[16:13] <Cynerva> Zic: it looks like kubernetes-worker won't override the ingress RC until the charm is upgraded
[16:15] <Zic> and do I need to rollback to the original version of CDK kubernetes-worker charm before the next upgrade ?
[16:15] <Cynerva> Zic: so I think as a temporary workaround that should work. but i'm not 100% sure
[16:17] <Cynerva> Zic: hmm good question. all the charm does is `kubectl apply -f ingress-replication-controller.yaml` so i would think you won't have to roll it back before charm upgrade
[16:22] <Zic> ok thanks
[16:22] <Zic> I think I will upgrade from 1.5.2 to 1.5.3 before testing this
[16:22] <Zic> planned for tomorrow, I will let you know if it works :)
[16:23] <Zic> (the ingress upgrade part, I'm confident for 1.5.3 upgrade part)
[16:24] <Cynerva> cool :D
[16:24]  * Zic fix lazyPwr's eyes for the second part of his message
[16:24] <Zic> :}
[16:25] <lazyPwr> O_o
[16:25] <lazyPwr> o_O
[16:26]  * Zic begins to search "juju rollback command"
[16:26] <Zic> xD
[16:26] <lazyPwr> wait for snaps my friend, it'll make your roll forward/backwords life a bit easier.
[16:26] <lazyPwr> Zic: have you seen the etcd 3.x migration bits using snaps?
[16:27] <Zic> nope, the last time we talked about this, it was not officially released
[16:27] <Zic> is it now?
[16:28] <stormmore> I still have the thought about using snaps on switches / routers to load the maas and juju controllers onto them instead of servers
[16:33] <lazyPwr> stormmore: you're not alone in that thought :)
[16:34] <lazyPwr> stormmore: however storage on a switch is going to be hairy at best... you'll likely want to run your rack controller on a unit with some storage
[16:34] <lazyPwr> Zic: its pending a merge against layer-etcd, i have it published in my namespace though
[16:34] <Zic> oh, so I can test it in the lab then :)
[16:34] <lazyPwr> Zic: it supports moving between channels, and if you dont attach external storage (read: ebs/gce volumes) it'll version your data
[16:35] <lazyPwr> so say you upgrade to 3.0/stable channel and things get funky, NO PROBLEM!
[16:35] <Zic> currently, we have deployed our 2nd CDK cluster, full-AWS this time
[16:35] <lazyPwr> revert back to 2.3/stable
[16:35] <lazyPwr> and it'll reload your data from the version of the snap that was installed using 2.3, you may incur some minor data loss, but this is expected with a rollback no?
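If the snap-based etcd charm works as lazyPwr describes, the roll-forward/rollback would presumably be a channel flip (assuming the charm exposes a `channel` config option; this is a sketch against an unreleased branch, not the published charm, and needs a live model):

```shell
juju config etcd channel=3.0/stable   # upgrade
# things get funky? roll back:
juju config etcd channel=2.3/stable   # reloads the data from the 2.3 install
```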
[16:35] <Zic> and I convert the LXD labs into a real lab
[16:35] <lazyPwr> yeah?
[16:35] <stormmore> lazyPwr, yeah I realize that but storage is only a problem when you are bootstrapping the data center. once the servers are up, the db, etc. can be moved into the cluster
[16:35] <lazyPwr> nice man :) You're getting your hands all kinds of dirty with cdk
[16:36] <Zic> lazyPwr: yeah, I'm just using NFS & ISCSI volume
[16:37] <stormmore> lazyPwr, that just leaves the core services running on the switches / routers, i.e. pxe, etc. from maas and the juju controller agent
[16:37] <Zic> (for the first CDK cluster actually)
[16:37] <lazyPwr> Zic: i was talking to cholcombe and it appears that gluster+heketi is going to be problematic moving forward
[16:37] <lazyPwr> some mailing list drama popped up about this :/
[16:37] <Zic> for the second one, we're using AWS-EBS
[16:37] <lazyPwr> i read into it a bit, and they are wanting to not support existing glusterfs deployments, which baffles me
[16:38] <cholcombe> i find it odd also
[16:38] <Zic> (the first-one is the one I gave you the Architecture Plan, hybrid between baremetal servers / on-premise VMs & EC2@AWS)
[16:38] <Zic> (the second-one is full-AWS)
[16:38] <lazyPwr> cholcombe: <3 can we send them a gentle reminder to not stab their users in the face with a hot branding iron?
[16:38] <cholcombe> i tried on the irc channel and they shot me down
[16:38] <lazyPwr> because $reasons again?
[16:39] <cholcombe> yeah because deploying onto an existing server that they didn't setup is hard
[16:39] <cholcombe> i get it but at least try
[16:39] <lazyPwr> so what if we dont support existing clusters and instead make a charm-deployed glusterfs endpoint the only one we support in cdk?
[16:39] <lazyPwr> or am i missing something
[16:40] <lazyPwr> i dont think its out of the question to have a storage admin provision some bricks and enlist them in a gluster charm deployment, then xmodel relate them to cdk workers
[16:40] <magicaltrout> lazyPwr: http://pastebin.com/E4ZrqzNN
[16:40] <magicaltrout> http://pastebin.com/ewUELFwT
[16:40] <magicaltrout> got any suggestions?
[16:40] <cholcombe> lazyPwr: suggesting we just take over for heketi?
[16:41] <lazyPwr> magicaltrout: how did you deploy this?
[16:41] <magicaltrout> manual
[16:41] <Zic> lazyPwr: basic question, does "juju upgrade-charm <app>" have a special recommended order? because between the insights.ubuntu.com news and https://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/, it's not the same
[16:41] <lazyPwr> Zic: follow the kubernetes.io docs
[16:41] <magicaltrout> just walked through the bundle
[16:41] <Zic> ok :)
[16:41] <lazyPwr> we've kept that up to date
[16:41] <lazyPwr> magicaltrout: are these local copies of the charms?
[16:41] <magicaltrout> nope
[16:41] <magicaltrout> i've got 2 running workers and 2 stalled
[16:42] <lazyPwr> magicaltrout: well at the end of the day it appears the resources didn't make it out of the charm store. You'll want to fetch the resources from jujucharms.com/u/containers/kubernetes-worker and juju attach them
[16:42] <Zic> At this time rolling back etcd is unsupported. / At this time rolling back Kubernetes is unsupported. <= you already prepared this section for snaps :D
[16:42] <lazyPwr> Zic: not until it lands as GA
[16:42] <lazyPwr> but yeah, i'll be adding subsections there about rollbacks using the snaps and what to expect
[16:43] <Zic> the only snap-in-production that I used today is Rocketchat
[16:43] <lazyPwr> Zic: i'm expected to have this fully landed and r2g by the time we cut the 1.6.x release
[16:43] <lazyPwr> i've got some really good early tests in, i'm hacking on fixing the whole "i killed the master and my cluster just pooped the bed" bug
[16:43] <Zic> :}
[16:43] <lazyPwr> which is really obnoxious if you're not paying attention to what you're doing... juju remove-unit etcd-master, and suddenly things brick
[16:43] <lazyPwr> *sadtrombone.wav*
[16:43] <Zic> I need to switch to multimaster again some day also
[16:43] <Zic> I'm also on monomaster in production for now
[16:44] <Zic> the lab is in multimaster with your patch for token
[16:44] <lazyPwr> Zic: we have incoming work rn to hack in place an haproxy load balancer instead of that nginx based one
[16:44] <lazyPwr> my colleague is hacking on that rn
[16:44] <lazyPwr> magicaltrout: lmk if that doesn't resolve it, and i'm going to point a finger at a juju bug that I haven't found yet, but it seems like it just bailed while grabbing the resource, which is unfortunate :( i've seen this once before but it was back in the 2.0 beta days
[16:45] <lazyPwr> magicaltrout: actually, are you still using a beta or did you finally upgrade to GA?
[16:47] <magicaltrout> thanks lazyPwr testing
[16:48] <magicaltrout> this is a new deploy for the  JPL folks so nothing beta here :)
[16:48]  * lazyPwr snaps
[16:48] <magicaltrout> 2.0.2
[16:48] <lazyPwr> i was kind of hoping you would say beta-18 again, so i could easily point to why it happened
[16:49] <magicaltrout> it might have been connectivity between the openstack units and the charm store, but it would be nice for juju to recover if that were actually the case
[16:59] <ybaumy> huhu juju
[17:00] <ybaumy> can somebody tell me how to filter hosts to use for a cloud by zones in maas
[17:00] <ybaumy> i tried adding zone = foobar to the yaml file for the cloud
[17:00] <ybaumy> but that didnt work
[17:17] <andrew-ii> If a MAAS node is released while it is a Juju controller... can I get that controller removed? It seems hung on `juju destroy-controller someController`
[17:19] <ybaumy> have you tried unregister
[17:19] <andrew-ii> Not yet... one moment
[17:19] <andrew-ii> oh
[17:20] <andrew-ii> Well, that was fast.
[17:20] <ybaumy> did it work
[17:20] <andrew-ii> Thanks! I think I can rebootstrap it now
[17:20] <ybaumy> yes you can
[17:20] <ybaumy> had a similar problem and that helped me too
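The sequence that unblocked andrew-ii, for the record (cloud and controller names are illustrative; needs a configured MAAS cloud):

```shell
# drop the stale client-side registration of the dead controller...
juju unregister someController
# ...then bootstrap a fresh one on the MAAS cloud
juju bootstrap mymaas someController
```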
[17:21] <ybaumy> do you happen to know how to filter maas nodes for a cloud by zones?
[17:22] <ybaumy> i need to know if that is possible
[17:22] <andrew-ii> oh man, I am barely getting this thing to download images from ubuntu's cloud repo
[17:22] <andrew-ii> I vaguely remember that... one sec
[17:23] <andrew-ii> Nuts, that's next on my list of things to try
[17:23] <andrew-ii> That's what works with the HA zones, right?
[17:24] <ybaumy> well i want to create nodes for customers and add them to zones. then create a cloud for that particular zone
[17:24] <Zic> lazyPwr: I have a lot of apt updates also on the 1.5.2 cluster I'm planning to upgrade tomorrow, do you advise me to run juju upgrade-charm <app> *before* apt update/upgrade?
[17:24] <Zic> (to bypass problems like the time apt upgraded etcd without the charms :/)
[17:25] <andrew-ii> ybaumy: I was going about that upsidedown and was going to use Openstack to partition customers. But you make an interesting point; if I had my old pile of blade servers that sounds pretty good
[17:25] <magicaltrout> okay i think we can safely say Juju doesn't like deploying stuff to low power vms
[17:26] <ybaumy> andrew-ii: a customer should be able to create his own cloud via a web interface with vmware and maas. that's the target.
[17:27] <andrew-ii> I don't have vmware, but my customers really only need containerization, so I probably have a different use case
[17:28] <ybaumy> andrew-ii: true. thats different. i have thought about that too. and will look into it once i am able to do that ..
[17:28] <ybaumy> im creating POC's for my company
[17:29] <ybaumy> so its just try and play at the moment
[17:29] <andrew-ii> ybaumy: I get the feeling it's jumping right into the deep end to go straight-up containerization. But I only have a few machines for maas, so I can't afford to dedicate to each customer. But man, that sounds like a nice idea.
[17:30] <andrew-ii> ybaumy: same here; if I can get the cloud bootstrapped and somehow lock it down _and_ get it to be usable, then it'll be a nice playground.
[17:31] <ybaumy> andrew-ii: we have vsphere vrealize scripted to create the vm templates and distribute them across the esx farms. then i use maas to commission them and juju to create the cloud
[17:31] <ybaumy> but now i need something to filter those vm's
[17:32] <andrew-ii> ybaumy: slick. It does sound like zones are exactly what you'd want (if I understand why they were added).
[17:33] <ybaumy> andrew-ii: yes it's really working now. it's one last piece to put it together and close the POC
[17:33] <andrew-ii> Best I can figure is the `juju deploy someCharm --to zone=maasZone1`, but I bet you're not really looking to use the --to command
[17:33] <ybaumy> andrew-ii: does that work?
[17:33] <ybaumy> i have to try
[17:34] <andrew-ii> (For my setup I'm using maas tags to filter machines, sadly, so I may be a terrible example)
[17:35] <andrew-ii> ybaumy: note that I just sorta stole that command and assumed it worked! It may be aws specific (or maybe maas emulates that too) - sorry if it fails utterly
[17:36] <ybaumy> andrew-ii: great, this seems to work. at least it's now used for bootstrapping the foobar zone i specified
[17:37] <andrew-ii> ybaumy: holy mackerel, welp, I'm more optimistic now (I actually thought it'd fail...).
[17:37] <ybaumy> maybe its luck but i will try that a few times
[17:38] <andrew-ii> That's the spirit!
[17:56] <lazyPwr> magicaltrout: yeah it tends to yield a pretty crummy experience when you starve the unit resources
[17:57] <lazyPwr> Zic: probably do the upgrade charm then run the apt update/upgrades
[17:57] <lazyPwr> Zic: keep your change set to the least viable change in order to validate things are functional after making the change
[18:01] <ybaumy> andrew-ii: sadly this --to zone= seems to be ignored. i bootstrapped it, then i wanted to enable-ha --to zone=foobar and one node was created in default and one in foobar
[18:01] <magicaltrout> indeed lazyPwr i was hoping the units would at least start though :)
[18:01] <ybaumy> so i need something else
[18:01] <andrew-ii> ybaumy: sorry - I was afraid it wouldn't work with maas yet
[18:02] <ybaumy> andrew-ii: thanks anyway .. better to die trying than not to have tried at all
[18:02] <andrew-ii> Might be worth checking if a newer version manages to use it (though I think I got that command from 2.1.1)
[18:03] <ybaumy> im using devel
[18:03] <andrew-ii> ouch, nevermind then
[18:03] <andrew-ii> Did you try `--constraints zone=foobar` ?
[18:03] <ybaumy> hmm no :D
[18:03] <ybaumy> will do
[18:03] <ybaumy> hehe
[18:05] <andrew-ii> I have some real goofy machines to play with, and some are only good at certain things (like one's basically a bank of hard drives), so I use `--constraints "tags=storage"` for that one
[18:05] <ybaumy> what is your goal what are you trying to accomplish?
[18:06] <andrew-ii> Though just in case, make sure when bootstrapping a controller you use `--bootstrap-constraints` instead (otherwise the controller sets ALL nodes to use that constraint)
[18:06] <ybaumy> k
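andrew-ii's two commands side by side (tag and cloud names are examples; both need a configured MAAS cloud):

```shell
# constrain only the controller machine at bootstrap time
juju bootstrap mymaas --bootstrap-constraints "tags=controller"

# constrain machines for a single application at deploy time
juju deploy ubuntu --constraints "tags=storage"
```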
[18:06] <andrew-ii> I have a hodgepodge of random servers to play with, and I need a test rig that can simulate a network; plus host some in-house tools that will get replaced often
[18:07] <andrew-ii> Literally random machines collected off craigslist for a few months
[18:07] <ybaumy> is that your homelab?
[18:08] <andrew-ii> It's my equipment, but it's hosted in the office for its fancy 220V line, space, and dedicated network connection
[18:10] <ybaumy> i also started with a few blades but now the project has management attention
[18:11] <ybaumy> so i got everything i requested
[18:11] <ybaumy> that's neat
[18:12] <andrew-ii> haha always the best when people with checkbooks take notice
[18:14] <ybaumy> true
[18:28] <ybaumy>  juju deploy --to zone=test cs:bundle/openstack-base-49
[18:28] <ybaumy> ERROR Flags provided but not supported when deploying a bundle: --to.
[18:28] <ybaumy> also something that should work
[18:29] <andrew-ii> hmmm, yeah, that's what I thought to try
[18:30] <ybaumy> i will try single charms to see how that goes
[18:36] <ybaumy> nope ignored as well
[18:37] <ybaumy> juju deploy --to zone=test -n4 cs:ubuntu-10
[18:37] <ybaumy> creates random machines in all zones
[18:48] <ybaumy> i will try tags
[18:49] <andrew-ii> That should work, though it's a pain to tag all the machines
[18:49] <ybaumy> yep
[18:50] <ybaumy> seems like you cannot assign tags to all machines in a zone
[18:50] <andrew-ii> Nuts. Need to do something complicated like a script to assign them all? (Overkill maybe?)
[18:52] <ybaumy> you are right. i will have to script it but that's not the problem. it would be nice if something like that were supported out of the box
[18:59] <andrew-ii> I think there is a goal for it, it's just not added in yet?
[19:00] <ybaumy> we have to ask the devs
[19:00] <ybaumy> i signed up on the mailing list maybe i get an answer there
[19:02] <andrew-ii> ybaumy: I don't know too much about navigating launchpad, but check https://launchpad.net/juju to see if that feature is slated for addition
[19:08] <ybaumy> juju deploy --constraints tags=test  cs:bundle/openstack-base-49
[19:08] <ybaumy> ERROR Flags provided but not supported when deploying a bundle: --constraints.
[19:08] <ybaumy> ;)
[19:09] <zeestrat> ybaumy: 99% sure --constraints/--to don't work with bundles
[19:10] <zeestrat> ybaumy: same with --config
[19:10] <ybaumy> zeestrat: thats too bad. shouldnt they?
[19:10] <zeestrat> --config is on the roadmap
[19:11] <ybaumy> single charms now work with constraints with tags, that's nice, so i have to script the whole bundle setup as single charms
[19:11] <zeestrat> constraints and to are (or at least can be) defined in the bundle so the idea is that you set them there
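A minimal example of constraints set inside the bundle itself, as zeestrat suggests (application and tag names are illustrative; older bundles use the top-level key `services` instead of `applications`):

```yaml
applications:
  ubuntu:
    charm: cs:ubuntu-10
    num_units: 2
    constraints: tags=test
```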
[19:12] <ybaumy> ok so i can generate a yaml file with --constraints?
[19:13] <ybaumy> the format with all the : at the end is really hard to script
[19:13] <ybaumy> hmm
[19:13] <ybaumy> i will try and see
[19:13] <zeestrat> have you checked out https://jujucharms.com/docs/stable/charms-bundles ?
[19:16] <ybaumy> before, no. but i can just generate it once and then use perl or sed to change tags=test to tags=somethingelse
[19:17] <ybaumy> nice
[19:17] <ybaumy> :)
[19:17] <ybaumy> thanks zeestrat
[19:17] <ybaumy> that will do
[19:18] <zeestrat> ybaumy: glad to help :) There are no examples of using variables in the docs at the moment, but here's a openstack HA bundle you can look at: https://launchpadlibrarian.net/298175262/bundle.yaml
[19:20] <ybaumy> no need for variables. i just run sed -i 's/tags=test/tags=somethingelse/g' on the file for each process before i start juju deploy bundle.yaml
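ybaumy's per-customer templating, spelled out end to end (the bundle content here is a made-up minimal example; the final deploy needs a bootstrapped controller, so it is left commented):

```shell
# template bundle targeting machines tagged "test"
cat > bundle.yaml <<'EOF'
applications:
  ubuntu:
    charm: cs:ubuntu-10
    num_units: 1
    constraints: tags=test
EOF

# re-target it per customer before deploying
sed -i 's/tags=test/tags=customer-a/g' bundle.yaml
grep constraints bundle.yaml   # prints the updated constraints line

# juju deploy ./bundle.yaml
```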
[19:21] <ybaumy> and thanks for the HA link
[19:38] <ybaumy> will try tomorrow. now football
[19:48] <stormmore> lazyPwr, trying to put together my budget ask, is there any conference, etc. that would be good for me to try and attend? any suggestions?
[19:50] <magicaltrout> all of them \o
[19:53] <ybaumy> zeestrat andrew-ii adding constraints to the bundle works. thanks for helping me.
[19:55] <stormmore> magicaltrout, yeah I wish I could get the budget for that :-/ I have to be more realistic than that and target kubernetes, juju, maas related ones as well as the automotive ones since that is the industry we are actually in
[19:56] <magicaltrout> hmm wtaf, my deployment is green but the ingress proxy just gives me a 504 on  absolutely everything =/
[19:56] <magicaltrout> what country you in stormmore ?
[19:57] <hatch> can a 'requires' relation be related to two 'provides' at the same time?
[19:57] <lazyPwr> stormmore: well, the charmer summit is great if you want facetime with us to hack on projects :)
[19:57] <hatch> ex) kibana1 to elasticsearch1 and kibana1 to elasticsearch2
[19:58] <lazyPwr> stormmore: however if you want k8s focus, you're more than likely going to have success attending kubecons, but our presence there isn't very big. Mostly what I would call guerrilla community ops, where we roam hallway tracks and find people to engage with
[19:58] <lazyPwr> hatch: depends on how the charm is coded to use those relationships, but yeah.
[19:58] <hatch> lazyPwr ok so there is no Juju restriction to that effect?
[19:59] <lazyPwr> with interfaces as the abstraction, you should be able to interface with both es clusters, but it's up to the app to implement that, and up to the author to correctly implement the data coming from the interfaces.
[19:59] <lazyPwr> afaik, nope
[19:59] <hatch> the GUI explicitly prohibits that interaction
[19:59] <lazyPwr> why?
[19:59] <lazyPwr> seems....arbitrary
[19:59] <hatch> my guess, a limitation in Juju 1?
[19:59] <lazyPwr> kinda like not letting me unrelate a subordinate :P
[19:59] <lazyPwr> yeah, you're probably right
[19:59] <hatch> or from PyJuju even?
[19:59] <lazyPwr> juju1 had fun quirks like that
[19:59] <hatch> yeah
[19:59] <hatch> ok thanks
[20:00]  * hatch creates a model on jujucharms.com to test for sure
[20:00] <stormmore> lazyPwr, yeah I would love to go to a "charmer summit" but I don't see another one setup yet
[20:01] <lazyPwr> stormmore: i dont think we'll get it scheduled until mid spring.  jcastro might have more details as to when the next summit will be however
[20:02] <jcastro> we haven't really talked about it
[20:02] <magicaltrout> somewhere hot
[20:05] <stormmore> and of course finance want to know like yesterday :-/
[20:11] <stormmore> lazyPwr, fyi I think the better question is whether you guys want to suffer the pain of meeting me ;-)
[20:12] <magicaltrout> any idea how to debug a 504 on the ingress router, no kubernetes dashboard etc lazyPwr ?
[20:22] <stormmore> @jcastro, I will look forward to hearing about when you do have those discussions
[20:35] <jcastro> \o/
[21:04] <skayskay> hey, trying to remove a unit that is in an error state for the config-changed hook doesn't work, as far as I can tell
[21:05] <skayskay> there's a bug about a failing upgrade state that already exists, maybe this is related
[21:07] <skayskay> oh, this may be something else. lp:1671476 is about destroying a model
[21:09] <skayskay> I'm unable to remove a unit or application
[21:10] <magicaltrout> you need to mark the unit resolved first skayskay
[21:10] <magicaltrout> juju resolved unit/0 --no-retry
[21:11] <skayskay> magicaltrout: thanks! that's it. I didn't know about that --no-retry option. handy
[21:18] <firl> hello all; anyone have an example bundle of openstack using network spaces ?
[21:22] <magicaltrout> oooh shiugar
[21:22] <magicaltrout> sooo lazyPwr i'm not convinced that each k8s worker should be on its own flannel subnet, am I right?
[21:25] <stormmore> this is really odd, having it hang during fetching juju agent
[21:26] <stormmore> I suspect a network issue but I can't log into the instance using the key that I added to the MaaS user! hmmmm
[21:28] <lazyPwr> magicaltrout: why not?
[21:29] <magicaltrout> okay in that case i'm just wrong :)
[21:29] <magicaltrout> i assumed they had to share a subnet
[21:30]  * magicaltrout returns to wondering why they aren't working
[21:30] <stormmore> magicaltrout, no, each worker needs its own subnet to manage its containers with
[21:30] <magicaltrout> fair enough
[21:30] <magicaltrout> i'm spinning one up on aws for comparison, but i assumed they'd be on the same subnet. Obviously not :)
[21:33] <stormmore> anyone have an idea where to look when I can enlist and commission nodes in maas but I can't bootstrap juju?
[21:42] <rick_h> stormmore: juju bootstrap --debug and then check the maas logs during bootstrap.
[21:44] <stormmore> rick_h, thanks, I keep forgetting about that. building a "clean" environment to try again
[21:49] <zeestrat> firl: The #openstack-charm people have a bundle in their dev repo: https://github.com/openstack-charmers/openstack-bundles/blob/master/development/openstack-base-spaces/bundle.yaml
[21:49] <firl> zeestrat thanks!
[21:50] <zeestrat> firl: There's a OpenStack HA bundle laying around using networks defined in config (not the new juju network bindings): https://launchpadlibrarian.net/298175262/bundle.yaml
[21:50] <firl> interesting
[21:51] <firl> I am trying to figure out the best way to do the networking I need to, and I just finished configuring all of the physical net, so trying to make sure I have MAAS setup the way I Want to deploy the bundle
[21:52] <firl> it seems like “spaces” is where the bundles are going
[21:52] <firl> charms rather
[21:53] <zeestrat> firl: I recommend checking out #openstack-charm channel as well. I'm working on a HA bundle with spaces if you're interested. Send me a pm and I can get back to you
[21:53] <firl> I will jump on that channel too
[21:54] <firl> hrmm no one in that one.
[21:54] <zeestrat> firl: sorry, #openstack-charms
[21:54] <firl> perfect
[21:55] <zeestrat> there are also some docs on https://docs.openstack.org/developer/charm-guide/ with release notes for the latest stable releases of the openstack charms
[21:56] <zeestrat> I'm off for now, but pm me and I can get back to you
[21:56] <firl> thanks
[22:07] <stormmore> rick_h, http://paste.ubuntu.com/24179043/ shows the output from --debug, I am not seeing anything obvious in the regiond.log, rackd.log or the instance messages log to show why it isn't bootstrapping :-/
[22:10] <stormmore> rick_h, but this time I was able to log in, apparently I am missing a default gateway
[22:11] <stormmore> rick_h, which seem odd considering it did a apt update / apt dist-upgrade
[22:22] <stormmore> one problem down, no telling how many to go :-/
[22:29] <stormmore> gotta love weird networking issues