[00:22] <Budgie^Smore> miss me?
[03:27] <xavpaice> anyone here using juju-deployer with 2.1 and spaces?
[07:18] <kklimonda> how can I list bindings for the deployed application?
[07:25] <kjackal> Good morning Juju world!
[07:25] <kjackal> kklimonda: by bindings you mean relations?
[07:36] <kklimonda> kjackal: with 2.1, I have to bind all my containers to spaces, and it's failing randomly - I'm trying to understand which spaces the unit requested and failed to get
[08:42] <zeestrat> kklimonda: Yeah, there's not a lot of visibility at the moment. I created #1672997.
[08:42] <mup> Bug #1672997: Missing overview over charm bindings <juju:New> <https://launchpad.net/bugs/1672997>
[10:20] <cnf> hmm
[10:21] <cnf> how do i get juju to reprovision a machine?
[10:22] <cnf> one machine fails to come up, and juju just stops :/
[10:23] <cnf> retry-provisioning maybe...
[10:24] <cnf> no, that doesn't seem to do anything
[10:27] <cnf> hmz
[10:27] <cnf> juju really doesn't seem to like machines not doing the right thing
[10:27] <cnf> :/
[10:42] <disposable2> cnf: hmmm... i see you still haven't given up on your juju powered openstack dream.
[10:42] <cnf> disposable2: i'm close to giving up
[10:44] <cnf> it really should not be this difficult
[10:44] <disposable2> cnf: same here. while i'm still allowed to continue testing, my management will not allow me to use this in production. primarily because of all the guesswork involved. the absence of proper documentation is the showstopper. this is the 3rd time over the last 5-6 years i've tried using maas/juju for something and it just isn't improving. there are still no books, there are still no useful manpages, no useful examples/howtos.
[10:45] <cnf> yep
[10:45] <cnf> while i'm the one deciding, not management :P
[10:45] <cnf> i'm very much advising NOT to use it, at this point
[10:47] <cnf> i'm trying to find out how to replace a failed node
[10:47] <cnf> and i can't find out how to do this
[10:47] <cnf> o,O
[10:51] <disposable2> cnf: well, i wish i could help you.. did you ever find out whether MAAS needed to be aware of all the networks juju was going to use?
[10:51] <cnf> no
[10:52] <jamespage> cnf: disposable2: reading backscroll a bit then I'll try to answer some of your questions
[10:53] <jamespage> cnf: re your failed provisioning of a machine - as a workaround try juju remove-machine --force <ID of machine>
[10:53] <jamespage> and then re-add the application units that it removed using juju add-unit
[10:54] <cnf> i didn't, i added a bundle
[10:54] <jamespage> that's a bit of a workaround but I agree that retry-provisioning should dtrt there
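The workaround jamespage describes can be sketched as a short shell sequence (the machine ID and unit name below are hypothetical examples, not taken from the channel):

```shell
# Force-remove the machine that failed provisioning, then restore the
# units it was carrying. "3" and "nova-compute" are placeholders.
juju status                        # find the failed machine's ID and its units
juju remove-machine 3 --force      # tear down the stuck machine
juju add-unit nova-compute         # re-add the unit(s) that lived on it
```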
[10:55] <jamespage> disposable2: cnf: on the query re MAAS needing to be aware of networks juju is going to use - yes that is the case; MAAS holds the underlying map of actual servers, network fabrics, vlans etc...
[10:55] <cnf> how do i see what units are running on a machine?
[10:56] <jamespage> cnf: lemme check that
[10:59] <cnf> also, juju should have a migrate function, or something similar
[11:01] <jamespage> cnf: hmm other than 'grep' I can't see a nice way to figure out the services -> machine mapping for a specific machine
[11:01] <cnf> hmz
[11:02] <jamespage> cnf: that said juju deploy <bundle> should figure out what's missing and re-add it if you run it again
[11:02] <cnf> i'm trying to get cs:bundle/openstack-base-49 working
[11:03] <jamespage> cnf: ok so juju remove-machine --force <ID> the failed machine
[11:03] <jamespage> cnf: and then juju deploy cs:bundle/openstack-base-49 again
[11:04] <disposable2> jamespage: does that maintain state? i.e. bring up the missing machine with the configuration the previous machine had (if the configuration was done via juju)
[11:04] <cnf> and now to wait 15 minutes
[11:04] <cnf> (HP servers are slow to boot)
[11:06] <jamespage> cnf: I can relate to that... esp if they have lotza different cards in them
[11:06] <cnf> i also need to figure out how to force the use of specific machines or maas tags for specific services
[11:06] <cnf> ugh, and now MAAS is being difficult
[11:06] <cnf> >,<
[11:09] <cnf> (retry-provisioning  seems to do nothing at all, ever, btw)
[11:10] <cnf> jamespage: i do apologise for the frustration leaking through
[11:10] <cnf> i have been at this for a while
[11:25] <cnf> wow, juju doesn't even see the machine coming up, now...
[11:25] <cnf> o,O
[11:26] <cnf> maybe i'm doing something wrong, but it seems impossible to actually get anything working reliably with juju :(
[11:28] <cnf> "message: agent is not communicating with the server" ...
[11:33] <disposable2> cnf: well, at every single FOSDEM, marco ceppi demonstrates deploying and scaling up wordpress. so i'd guess at least that has had all its bugs ironed out.
[11:33] <cnf> wordpress is the last thing i care about :P
[11:34] <disposable2> then again, on my computer even that fails setting up the mysql server.
[11:37] <junaidali> cnf: I'm also a juju user and a lot of improvements came up recently in juju 2.X. Are you facing this issue with a specific machine?
[11:38] <cnf> no, in general
[11:38] <cnf> a machine failed because i did something silly
[11:38] <cnf> but getting juju to recover has been a pain
[11:38] <cnf> (among all the other issues)
[11:39] <junaidali> is that machine now in 'deployed' state in MAAS?
[11:40] <junaidali> which has the status in juju "message: agent is not communicating with the server"
[11:40] <cnf> deploying
[11:40] <cnf> HP machines take a LONG time to boot
[11:40] <cnf> oh, it was deployed when it showed that
[11:40] <cnf> i removed it (again) and ran juju deploy cs:bundle/openstack-base-49
[11:42] <junaidali> btw when the status is 'deployed' in MAAS, it means the machine is now provisioned. Then juju will install some packages and deploy the charm(s) that we specified.
[11:44] <cnf> uhu
[11:45] <cnf> i expect it to go to "pending" shortly after maas says "deployed" though
[11:45] <cnf> not sit at down for 10 minutes
[11:46] <cnf> at least it is at pending, now
[11:46] <junaidali> machine status in juju goes to pending as soon as we run the bundle, which eventually changes to 'started' after the 'deployed' status in MAAS
[11:46] <cnf> we'll see how that goes
[11:46] <cnf> junaidali: yes, except it wasn't :P
[11:46] <cnf> so i had to remove the machine, again, and deploy, again
[11:46] <cnf> which takes 15+ minutes, again
[11:48] <junaidali> what are the specs of these hp machines, for me it usually takes <10-12 mins even with a slow internet
[11:48] <cnf> this is one of the slowest ones
[11:48] <cnf> 32 cores, 96G ram
[11:50] <cnf> hw boot always takes a long time on HP servers
[11:50] <cnf> i'll wait until juju debug-log quiets down
[11:51] <cnf> i do find it troubling how hard it seems to replace hardware with juju
[11:51] <cnf> well, "hardware", "a machine instance"
[11:54] <junaidali> Getting started with juju is not easy due to the docs, but once we spend some time, imo it turns out to be a great tool
[11:55] <cnf> plausible, but i'm struggling to figure out how to use it properly
[11:55] <disposable2> junaidali: there won't be much adoption if there's no good documentation.
[11:55] <junaidali> I second ya disposable2
[11:56] <cnf> and if this gets deployed in production will be largely based on my recommendation ^^;
[11:56] <cnf> just finding the right juju command is hard o,O
[11:57] <junaidali> yes, it's not easy for a newbie
[11:57] <junaidali> and this is due to the documentation
[11:59] <cnf> i'll admit i also need(ed?) to figure out MaaS at the same time
[11:59] <cnf> and some of my problems are me doing silly stuff with maas
[12:00] <cnf> hmz
[12:00] <cnf> k, i think everything came up?
[12:00] <cnf> but openstack is in full error mode
[12:00] <cnf> but that will have to be for after lunch
[12:00] <junaidali> cnf: what is the output of juju status ?
[12:01] <cnf> ceph-osd blocked, neutron-gateway in error
[12:01] <cnf> http://termbin.com/uk1p
[12:03] <cnf> k, i need a short break, and some food :P
[12:03] <cnf> bbl, thanks for the help so far
[12:03] <junaidali> cnf: ok, ssh to neutron gateway (juju ssh neutron-gateway/0) and share output of /var/log/juju/unit-neutron-gateway-0.log
[12:03] <junaidali> when you are back :)
[12:05] <junaidali> i think the issue is most probably due to the neutron-gateway config "bridge-mappings" which you should update as per your environment
[12:08] <stub> Mmike: I think we already have your mongodb changes in the git branch at https://launchpad.net/mongodb-charm. Its got everything up until March 6th, including your patches from January and February
[12:09] <stub> Mmike: (I've responded to your email)
[12:13] <jamespage> junaidali, cnf: yup, due to slot-based naming we can't write a bundle atm that just works everywhere - you'll need to set the data-port value according to your server wiring
[12:36] <stokachu> junaidali: and there is always http://conjure-up.io
[12:43] <zeestrat> Anyone got an example of a yaml file formatted as a string so it can be passed in as config for a charm?
[12:48] <tvansteenburgh> zeestrat: http://pastebin.ubuntu.com/24171951/
[12:49] <tvansteenburgh> zeestrat: lines 11-26 are string-formatted yaml
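As a rough illustration of the pattern being discussed (the application name, option name, and values below are invented, not taken from the paste): a charm config option can carry a whole YAML document as one string by using a YAML block scalar, so the outer file stays valid YAML while the inner document is passed through verbatim.

```shell
# Write a config file whose single option value is itself a YAML document.
# "myapp" and "rules" are hypothetical names for illustration only.
cat > myapp-config.yaml <<'EOF'
myapp:
  rules: |
    - name: cpu-high
      threshold: 80
    - name: mem-high
      threshold: 90
EOF
# then, hypothetically: juju deploy myapp --config myapp-config.yaml
```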
[12:58] <rick_h> Reminder Juju Show #8 this afternoon: https://twitter.com/mitechie/status/841997038808125441
[13:08] <zeestrat> tvansteenburgh: Thanks, managed to sort it out I think.
[13:09] <zeestrat> tvansteenburgh: P.S. The syntax for the ssl_keys has me intrigued. Where does that include-base64:// come from, and is it native juju?
[13:11] <cnf> ok, back
[13:11] <cnf> junaidali: and juju ssh neutron-gateway/4, it seems...
[13:11] <tvansteenburgh> zeestrat: No, I think that's a juju-deployer thing
[13:12] <cnf> and it can't find eth0
[13:12] <cnf> makes sense
[13:13] <cnf> jamespage: that was my next question, the bundle seems to not take care of networking / disk storage well
[13:13] <cnf> how do i deal with this?
[13:15] <junaidali> cnf: sorry, you need to update data-port instead of bridge-mappings in the bundle
[13:15] <cnf> uhm
[13:16] <cnf> how do i do that?
[13:17] <junaidali> now as the charm is deployed, run $juju config neutron-gateway data-port="br-ex:<external-interface-name>"
[13:18] <junaidali> external network interface*
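junaidali's fix, sketched with a placeholder interface name ("eno2" is an assumption; substitute whatever NIC is wired to your external network):

```shell
# Point neutron-gateway's external bridge at the right physical NIC.
juju config neutron-gateway data-port="br-ex:eno2"
# read the option back to confirm it was set:
juju config neutron-gateway data-port
```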
[13:18] <junaidali> stokachu: nice, I looked at it a few days back. I will surely check it
[13:19] <cnf> hmm, i should sort out the networking for openstack, and how it relates to juju, i guess
[13:19] <cnf> (and maas)
[13:20] <cnf> as a side note, can I create links between models in juju?
[13:21] <rick_h> cnf: that's in development atm
[13:21] <cnf> hmm, ok
[13:21] <cnf> i'm not very comfortable putting all of ceph and all of openstack on the same model
[13:36] <cnf> k, adding some vlan's on the qfabric
[13:39] <cnf> junaidali, jamespage so I need to configure the openstack network in MaaS before i deploy the juju components?
[13:39] <cnf> no way to add it afterwards?
[14:01] <rick_h> cnf: no, MAAS is kind of the 'state of existence' and MAAS only ingests data when the machine comes up. So Juju can't rely on changes made afterwards in MAAS
[14:03] <cnf> right
[14:03] <cnf> and juju can't set ip
[14:03] <cnf> and mount disks either, right?
[14:04] <cnf> so, then how can I get juju to pick certain machines when i deploy things?
[14:04] <cnf> because not all machines should have ip's in all networks, for example
[14:04] <cnf> or not all machines have big storage for ceph etc
[14:05] <zeestrat> cnf: Check out machine constraints: https://jujucharms.com/docs/stable/reference-constraints
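A minimal sketch of what such constraints look like (the application name and values are illustrative only):

```shell
# Ask Juju/MAAS for machines meeting minimum hardware specs.
juju deploy ceph-osd --constraints "cores=16 mem=64G root-disk=500G"
# constraints can also be changed for future units of a deployed app:
juju set-constraints ceph-osd "mem=96G"
```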
[14:06] <cnf> zeestrat: yeah, so that's on cpu and ram etc, but network spaces only work on ec2?
[14:06] <rick_h> cnf: the networks are meant to be handled by defining spaces and then using the endpoint binding in charms so that you can tell ceph to get a management network interface on network X, a data transmission interface on network Y, etc.
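rick_h's point about endpoint bindings might look like this in practice (the space names are made up; public/cluster are the endpoints the ceph charms conventionally expose, but check your charm's metadata):

```shell
# Bind charm endpoints to MAAS spaces at deploy time.
juju deploy ceph-osd --bind "public=management-space cluster=storage-space"
# a bare space name sets the default for any endpoint not listed:
juju deploy ceph-mon --bind "management-space public=management-space"
```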
[14:06] <cnf> and i don't see a way to use raw disk space?
[14:06] <rick_h> cnf: heh, network spaces work in maas better than ec2
[14:06] <cnf> oh, ok
[14:06] <rick_h> cnf: what do you mean by "raw disk space" ?
[14:06] <cnf> docs say "EC2 is the only provider supporting spaces constraints. Support for other providers is planned for future releases."
[14:07] <cnf> ok, so i'll have a look at spaces
[14:07] <rick_h> cnf: you can constrain based on disk space available and then do some stuff with https://jujucharms.com/docs/2.0/charms-storage
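For the ceph case being discussed, the two mechanisms might look like this (device paths, sizes, and the postgresql example are illustrative assumptions):

```shell
# ceph-osd of this era took raw disks via a config option:
juju config ceph-osd osd-devices="/dev/sdb /dev/sdc /dev/sdd"
# Juju's generic storage feature, for charms that declare storage:
juju deploy postgresql --storage pgdata=10G
```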
[14:07] <cnf> rick_h: so ceph doesn't want a raid5 partition, it wants just raw disks
[14:07] <cnf> well, ideally
[14:08] <rick_h> oic, hmm. You can specify size and such, but I'm not sure if there's a way to read that level of data about a disk to decide if the machine is ideal or not.
[14:08] <cnf> so you'd want ceph to deploy to the machine that has 10 x 2T of disks
[14:08] <rick_h> cnf: I think folks tend to tag their machines they want for storage, as you mention, they tend to be phyically different and setup specifically for that purpose
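The tagging approach rick_h mentions, sketched with a hypothetical tag name (the tag itself is applied to machines on the MAAS side first):

```shell
# Only MAAS machines carrying the tag "storage" will satisfy this.
juju deploy ceph-osd --constraints "tags=storage"
```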
[14:08] <cnf> yeah, indeed
[14:08] <cnf> ok, i'll focus on networking first
[14:09] <cnf> so atm all my networking in maas is in space-0
[14:09] <cnf> because i didn't get what they were for
[14:09] <rick_h> cnf: yea, they take a second to get your head around
[14:10] <cnf> hmm, especially the vlan , fabric , spaces thing is a bit weird
[14:10] <cnf> i still don't quite get the distinctions
[14:10] <rick_h> cnf: https://jujucharms.com/docs/2.1/network-spaces hopefully helps
[14:11] <cnf> yeah, i have that open together with https://docs.ubuntu.com/maas/2.1/en/intro-concepts
[14:11] <rick_h> so spaces are any group of subnets that are routable and have similar ingress/egress rules. e.g. juju can help spread workloads across subnets in this space and it'll work out.
[14:11] <cnf> "that are routable" ?
[14:11] <cnf> among themselves, you mean?
[14:11] <rick_h> cnf: yes, within that space
[14:11] <cnf> ok
[14:12] <rick_h> cnf: so if I deploy 10 of something and they get on different subnets it's important to know they'll still be able to behave in the same way
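With MAAS as the provider, spaces are defined on the MAAS side and Juju discovers them; these read-only commands show what Juju currently sees:

```shell
# List the spaces and subnets Juju knows about in the current model.
juju spaces
juju subnets
```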
[14:12] <cnf> right
[14:13] <cnf> and how do spaces and fabric differ?
[14:24] <Zic> lazyPwr / mbruzek : hi, just to let you know, my upgrade of CDK in production from 1.5.2 to 1.5.3 was successful, I just encountered a little "bug" on the "juju status" side, master was stuck at: kubernetes-master/0*      waiting   idle   2        mth-k8smaster-01           6443/tcp        Waiting for kube-system pods to start
[14:24] <Zic> but the pods of the kube-system namespace were in fact Running
[14:25] <Zic> I waited some minutes with no evolution, so I just restarted the juju controller VM, and when it was back online, all was simply green/idle
[14:25] <mbruzek> great
[14:25] <mbruzek> Zic: We are interested in feedback if you have anything we can improve
[14:32] <cnf> rick_h: also, can an ipv6 and an ipv4 subnet be in the same space?
[14:32] <cnf> or would juju / maas expect 2 spaces for them?
[16:54] <stormmore> o/ juju world
[17:01] <ybaumy> maybe somebody here can help
[17:01] <ybaumy> why is it that if I disable proxy ARP on an interface, inter-VLAN routing doesn't work anymore?
[17:01] <ybaumy> disabled on the ASA firewall, which is also a router
[17:11] <lazyPwr> \o stormmore
[17:15] <stormmore> how's it going today lazyPwr
[17:16] <lazyPwr> stormmore: still feeling poorly so I'm trying to keep on trucking
[17:16] <stormmore> lazyPwr, I feel you there... took me a couple of hours yesterday to figure out that I was having routing issues
[17:18] <lazyPwr> ahh networking, so fun :)
[17:19] <stormmore> lazyPwr, ain't that the truth! :P hence why I have asked if we can hire a network engineer ;-)
[17:21] <stormmore> oh and typing MASS instead of MAAS definitely doesn't help
[17:23] <stormmore> lazyPwr, was troubleshooting why juju bootstrap was hanging at fetching the juju agent even though it could do apt update / apt dist-upgrade
[17:23] <lazyPwr> stormmore: ah that seems...fun?
[17:23] <lazyPwr> what was the trouble?
[17:26] <stormmore> lazyPwr, my MaaS server isn't masquerading the traffic right
[17:27] <stormmore> lazyPwr, it is basically to do with the fact that the maas server has multiple NICs and I chose the "wrong" one to be the outbound
[17:28] <lazyPwr> aaahhh that'll do it
[17:28] <lazyPwr> wrong gateway and all that fun business
[17:29] <lazyPwr> i would have thought that you'd have seen that much earlier though like when doing a single unit validation on just the maas setup
[17:29] <stormmore> lazyPwr, yeah I would have too
[17:30] <stormmore> lazyPwr, but I could do enlistment and commissioning... even most of the initial deploy install before it failed
[17:30] <lazyPwr> heh
[17:30] <lazyPwr> gremlins man
[17:30] <lazyPwr> i hate it when it's intermittent like that, because it's only 10k times harder to debug
[17:31] <stormmore> yup true dat!
[17:31] <lazyPwr> glad you got it sorted though, i dont know that i would have been much help in that scenario
[17:31] <lazyPwr> "did you try turning it off and on again?"
[17:31] <stormmore> it isn't quite sorted, I know what the issue is but I am trying to decide which path to use to fix
[17:32] <stormmore> the problem is if I change the NIC then the traffic is going to be double-NATed, so I am currently attempting to change the default gateway to go out the non-NAT NIC
[17:37] <stormmore> I think people seem to forget that there is sometimes reason to set a gateway address on each interface!
[17:46] <rick_h> juju show hangout url:  https://hangouts.google.com/hangouts/_/75g7b4wrhvgfff66e6howu2dlqe
[17:46] <rick_h> juju show viewing url: http://youtu.be/tjp_JHSZCyA
[17:46] <rick_h> marcoceppi: lazyPwr arosales bdx and anyone that wants to join ^
[17:48] <stormmore> patiently waiting rick_h
[17:54] <rick_h> stormmore: wheeee
[17:54] <stormmore> now if I could come up with a non-"hackish" way to solve my gateway problems
[17:55] <stormmore> oh rick_h btw I don't believe in best practices per se ;-)
[17:56] <rick_h> stormmore: fine, "somewhat potentially nice to have practices" :P
[17:56] <rick_h> externalreality: you can join as well ^
[17:56] <rick_h> perrito666: ^
[17:56] <rick_h> if any core folks want to join in
[17:57] <stormmore> rick_h, yeah that is a bit better phrasing, I just like to push the limits of the tools to the max
[17:57] <rick_h> stormmore: I'll update it for you
[18:00] <rick_h> ok, going once ... before we start
[18:01] <jrwren> ooh, i'm back JUST in time for the show
[18:04] <zeestrat> Any date on 2.1.2?
[18:05] <zeestrat> Hit some binding bugs in 2.1.1
[18:05] <stormmore> I really should look at snaps
[18:11] <jrwren> lazyPwr: link?
[18:11] <lazyPwr> https://jujucharms.com/charmscaler/
[18:12] <jrwren> lazyPwr: WOW!!!!
[18:14] <lazyPwr> jrwren: ikr? :)
[18:23] <stormmore> lazyPwr, CDK ftw on that :)
[18:24] <lazyPwr> stormmore: interesting times indeed :) we're getting more features thanks to our great community
[18:24] <stormmore> lazyPwr, oh and that is what is awesome :)
[18:28] <stormmore> lazyPwr, I wouldn't mind having the juju data in the same grafana as the k8s stuff
[18:28] <lazyPwr> i'm pretty sure you could do that
[18:29] <lazyPwr> multiple Prometheus instances with a single grafana
[18:29] <lazyPwr> i'd want to pilot that before i commit though
[18:29] <stormmore> lazyPwr, yeah adding it on my list of things to look at
[18:31] <lazyPwr> there ya go stormmore :)
[18:31] <stormmore> :)
[18:32] <stormmore> oh I can think of 2 other things that would be nice to merge into grafana
[18:33] <zeestrat> rick_h: Any ETA on 2.1.2?
[18:34] <rick_h> zeestrat: sorry, not sure. perrito666 any hints I should be aware of? ^
[18:34] <bdx> rick_h: the controller monitoring setup is really sweet
[18:34] <bdx> rick_h: I almost want my controllers back
[18:35] <rick_h> bdx: cool, yea it's a road paved by our folks internally running controllers for JAAS
[18:35] <rick_h> bdx: :P always good to keep a couple controllers over on the side to play with
[18:35] <rick_h> it's not really cheating...
[18:36] <zeestrat> rick_h: No worries. Dropped out for a bit. Links for Prometheus stuff coming in the show notes?
[18:36] <lazyPwr> famous last words
[18:36] <lazyPwr> :D
[18:36] <lazyPwr> "it's not really cheating if you only have one side controller... and i only use it once in a while"
[18:36] <rick_h> zeestrat: yes, I'll give you the first look: https://github.com/juju/stressjuju/tree/master/prometheus-config
[18:37] <zeestrat> Cool. Thanks!
[18:42] <stormmore> that was a really cool discussion, thanks guys
[19:37] <stormmore> man I feel sorry for the ops team to have a dev that believes they aren't paid enough to be on call for their application!
[21:29] <stormmore> lazyPwr, so since I have upgraded to 1.5.3 I am no longer getting logs into the kubernetes-dashboard
[21:39] <stormmore> "an error on the server ("unknown") has prevented the request from succeeding (get pods" is the error I am getting both from the UI and kubectl
[21:46] <andrew-ii> I am getting an `Incomplete relations: identity` though keystone is active, ready, and idle. Is there a good way for me to troubleshoot it?
[21:57] <stormmore> lazyPwr, I suspect a DNS issue
[21:58] <andrew-ii> May as well rebuild. I'm going to try again from the bundle instead
[22:42] <lazyPwr> stormmore: yep
[22:42] <lazyPwr> stormmore: if your units do not have FQDNs, kubectl logs and kubectl exec are broken for you atm
[22:43] <lazyPwr> stormmore: that other issue however, get po was giving you an "unknown" error? that's new... that typically happens in an HA control plane scenario and only on specific commands. get po is not one of those...
[23:18] <stormmore> lazyPwr, the one that is broken right now kubectl logs
[23:20] <stormmore> lazyPwr, https://paste.ubuntu.com/24185553/ is what I get just trying to get the logs from the default http backend
[23:37] <stormmore> lazyPwr, collecting an output from kubectl --v=8
[23:40] <stormmore> lazyPwr, https://paste.ubuntu.com/24185633/