[05:44] <ybaumy> please make remove-unit really work
[05:49] <ybaumy> also does set-model-constraints tags=test work for all machines and unit deployments?
[05:50] <ybaumy> the problem im having is that constraints do not work when add-unit is used
[05:51] <ybaumy> so to scale out there is no way to use a specific machine set or is there?
[05:55] <ybaumy> this is really a showstopper if i cant scale out to machine sets that share the same tags
[06:51] <pranav_> Hi Folks. I am facing an issue publishing our charm for review
[06:52] <pranav_> charm list resources shows
[06:52] <pranav_> [Service] RESOURCE REVISION install  -1
[06:52] <pranav_> but running "charm release cs:~vtas-hyperscale-ci/hyperscale-0 --resource install-1 --channel edge" Gives error
[06:53] <pranav_> ERROR cannot release charm or bundle: bad request: charm published with incorrect resources: cs:~vtas-hyperscale-ci/hyperscale-0 resource "install/1" not found
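A revision of -1 in the listing usually means no revision of the resource has been uploaded yet, so there is nothing to pin at release time. A hedged sketch of the usual flow (the resource name `install` is taken from the listing; the local file path is a placeholder):

```shell
# Upload a revision of the resource first; the store assigns revision 0.
charm attach cs:~vtas-hyperscale-ci/hyperscale-0 install=./install-payload
# Confirm the revision is now >= 0.
charm list-resources cs:~vtas-hyperscale-ci/hyperscale-0
# Release, pinning the resource as name-revision.
charm release cs:~vtas-hyperscale-ci/hyperscale-0 --resource install-0 --channel edge
```

These commands need a charm-store login and a real payload file, so they are shown as a fragment rather than a runnable script.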
[08:25] <kjackal> Good morning Juju world!
[08:53] <Guest50267> Hey all. Anyone have an idea on charm push-term?
[08:53] <Guest50267> getting error ERROR unrecognized command: charm push-term
[08:54] <Guest50267> charm version is 2.2.0-0ubuntu1~ubuntu16.
[09:00] <tejaswi> hi team, I am trying to install all OpenStack services using Juju. However, due to this bug - https://bugs.launchpad.net/charms/+bug/1417407, the haproxy cfg files are getting overridden by new services.
[09:00] <mup> Bug #1417407: haproxy enabled by default for keystone, cinder, openstack-dashboard, nova-cloud-controller and glance breaks deploying multiple light services to the same node. <Juju Charms Collection:New> <https://launchpad.net/bugs/1417407>
[09:01] <tejaswi> How to fix this issue ?
[09:01] <tejaswi> Is there a way to install OpenStack without haproxy ?
[09:10] <ybaumy> hmm
[09:10] <ybaumy>   '3':
[09:10] <ybaumy>     series: xenial
[09:10] <ybaumy>     contraints: tags=devcloud,node
[09:10] <ybaumy> ah forget it
[09:11] <ybaumy> i should learn how to write constraints
[09:25] <cnf> ybaumy: what is wrong with it?
[09:26] <ybaumy> i wrote contraints instead of constraints
[09:26] <ybaumy> -s
[09:26] <ybaumy> in the first
[09:26] <cnf> ah :P
[09:26] <cnf> dem morning typos
[09:27] <ybaumy> im up since 5 so its forgiven
[09:28] <ybaumy> cnf have you ever had the problem to add a unit with constraints?
[09:28] <cnf> uhm, i learned of constraints yesterday
[09:28] <cnf> i have yet to successfully use them
[09:29] <ybaumy> i wonder if i have to add-machine first then add-unit --to
[09:29] <cnf> ask me again in 20 minutes, i'll tell you if my stuff came up :P
[10:14] <kjackal> tejaswi: You could ask at #openstack-charms
[10:15] <kjackal> ybaumy: you do not need to first add a machine and then deploy the app. You just do a juju deploy .... --constraints "mem=4G ...."
[10:16] <ybaumy> kjackal: no its about scale out with add-unit
[10:17] <ybaumy> how do i make constraints inherited
[10:17] <ybaumy> if that is the correct term
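In Juju 2.x, constraints recorded on an application are reused when add-unit needs a new machine, so "inheriting" them amounts to setting them at the application (or model) level before scaling. A hedged sketch; the application name `myapp` is a placeholder:

```shell
# Application-level constraints apply to machines created by later add-unit calls.
juju set-constraints myapp tags=devcloud,node
juju add-unit myapp                                    # new machine must satisfy the tags

# Alternatively, provision a machine explicitly and target it:
juju add-machine --constraints "tags=devcloud,node"    # prints a machine id, e.g. 5
juju add-unit myapp --to 5
```

Shown as a fragment: these commands need a bootstrapped controller (and a MAAS-style cloud for tag constraints to mean anything).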
[10:21] <tejaswi> kjackal: ok
[14:23] <magicaltrout> need a hack, how do i manually unset a state?
[14:32] <magicaltrout> lazyPower: ping
[14:35] <lazyPower> magicaltrout: charms.reactive remove_state
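In hook code the call is `charms.reactive.remove_state('some.state')`. A minimal stand-in sketch of the reactive flag model, using a plain set instead of the unit's real state database (the function names mirror the charms.reactive API; everything else here is illustrative):

```python
# Minimal stand-in for charms.reactive's state ("flag") store; in a real
# charm you'd import set_state / remove_state from charms.reactive.
states = set()

def set_state(name):
    states.add(name)

def remove_state(name):
    # Clearing a state makes @when(name) handlers eligible to run again
    # on the next reactive dispatch -- the "manual unset" asked about above.
    states.discard(name)

set_state('authentication.setup')
remove_state('authentication.setup')
print('authentication.setup' in states)  # → False
```

On a live unit the same effect is usually achieved from a `juju debug-hooks` session, so the handler guarded by that state re-runs.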
[14:35] <magicaltrout> I actually had another question, but that will hopefully help resolve it
[14:35] <magicaltrout> ever seen this
[14:36] <magicaltrout> 2017-03-17 14:30:26 INFO install Generating RSA private key, 2048 bit long modulus
[14:36] <magicaltrout> 2017-03-17 14:30:27 INFO install unable to write 'random state'
[14:36] <magicaltrout> 2017-03-17 14:30:27 INFO install e is 65537 (0x10001)
[14:36] <lazyPower> O_o
[14:36] <lazyPower> unable to write random state?
[14:36] <magicaltrout> the weird thing is if i actually run the command it runs fine
[14:36] <lazyPower> thats a new one on me
[14:36] <lazyPower> marcoceppi: ^ have you seen this?
[14:36] <lazyPower> you've seen like every nit noid thing we've come up with
[14:37] <magicaltrout> its in your code, thats why i'm asking :P
[14:37]  * lazyPower crosses fingers
[14:37] <lazyPower> magicaltrout: well lets not point fingers
[14:37] <lazyPower> what code?
[14:37] <magicaltrout> hehe, k8s master
[14:37] <magicaltrout> but this is on my weird openstack deployment platform, so i'm not blaming you, its probably entropy rubbish
[14:38] <lazyPower> yeah thats what i was about to say is that it sounds like the unit hasn't generated enough entropy
[14:38] <magicaltrout> but my openstack cluster and canonical k8s really hate each other :)
[14:38] <lazyPower> and we host entropy.ubuntu.com
[14:38] <lazyPower> which should have given you enough of a random seed for entropy
[14:38] <magicaltrout> I'm gonna remove authentication.setup
[14:38] <magicaltrout> in the hope a rerun will kick it enough
[14:39] <magicaltrout> cause it just stops, but weirdly, i wouldn't have thought it got set
[14:39] <magicaltrout> so I don't know why it isn't run a second time
[14:41] <magicaltrout> bombed again
[14:41] <magicaltrout> what the
[14:42] <magicaltrout> weird and interesting
[14:43] <marcoceppi> magicaltrout: not enough /dev/random bits?
[14:44] <magicaltrout> http://pastebin.com/5WUhAr7Y
[14:44] <magicaltrout> well
[14:44] <magicaltrout> if I run the hook
[14:44] <magicaltrout> it fails
[14:44] <magicaltrout> if I run the command via juju run
[14:44] <magicaltrout> it fails
[14:44] <magicaltrout> if I ssh into the box and run it manually
[14:44] <magicaltrout> it runs
[14:45]  * magicaltrout gets out the big # based crowbar
[14:47] <marcoceppi> magicaltrout: is there a .rnd file in /root ?
[14:47] <magicaltrout> yup
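OpenSSL's "unable to write 'random state'" usually points at the seed file it tries to update, or at a starved entropy pool. A quick check (Linux paths; the low-entropy threshold is a rule of thumb, not a hard limit):

```shell
# How much entropy the kernel pool currently holds; very low values can
# stall blocking key generation on older kernels.
cat /proc/sys/kernel/random/entropy_avail

# The seed file OpenSSL updates after keygen; an unwritable ~/.rnd
# (e.g. root-owned in another user's home) triggers exactly this error.
ls -l "${HOME}/.rnd" 2>/dev/null || echo "no .rnd file"
```

That would also fit the symptom below of hooks failing while an interactive shell succeeds, since the two run as different users with different `$HOME`s.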
[14:56] <magicaltrout> sorry lazyPower not trying to grumble, just trying to figure out juju/k8s weirdness on a random openstack cluster I have zero control over but have been asked to deploy kubernetes to
[14:56] <magicaltrout> :)
[14:56] <magicaltrout> anyway, i've commented it out, we'll see how we go
[15:10] <lazyPower> magicaltrout: oh not even upset man, just trying to keep my head above water atm. I'm sick, heavily medicated, and in/out of meetings for the last hour.
[15:11] <magicaltrout> join the club, my nose was streaming on the way back from NYC
[15:11] <magicaltrout> i'm not sure my fellow passengers were overly impressed
[15:12] <lazyPower> yeah, must be going around.
[15:12]  * lazyPower reads more of the backscroll
[15:12] <lazyPower> magicaltrout: this does seem like an issue with entropy
[15:13] <lazyPower> i'm not sure why its working when you manually run it vs script execution though.
[15:13] <magicaltrout> yeah its a bit odd
[15:13] <lazyPower> perhaps the fact you've logged in and issued the command gave it the bump it needed in terms of entropy
[15:13] <lazyPower> this is outside my scope of knowledge in crypto
[15:13] <magicaltrout> but also fails in juju run as well
[15:13] <magicaltrout> so its happy when you have a shell open
[15:13] <magicaltrout> oh well
[15:13] <lazyPower> i would probably be poking dustin about this and asking him to teach me to fish, or links so i can read about fishing.
[15:14] <magicaltrout> this is possibly the slowest openstack cluster the US Govt could possibly own
[15:14] <lazyPower> pfft
[15:14] <lazyPower> you had to challenge them didnt you?
[15:14] <lazyPower> your next stack will be a bunch of minnowboards
[15:14] <magicaltrout> well i got more ram
[15:14] <magicaltrout> but apt update takes ~25 minutes per server :)
[15:14] <magicaltrout> I think i might be a bit IO bound
[15:15] <jrwren> magicaltrout: welcome to my slow-server case. There are some apt tunings you can do.
[15:15] <Budgie^Smore> o/ juju world!
[15:15] <Budgie^Smore> hope you're feeling better today lazyPower
[15:15] <jrwren> magicaltrout: I've always felt that charms should be optimized to run apt-update minimally, but instead they run it rather often. It really makes for slow charm deploys when things are on slow IO :(
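Two of the commonly-cited apt tunings for slow links and slow IO, staged as a config fragment (hedged: measure before relying on them; the file name is arbitrary, and it is written to /tmp here rather than straight into /etc/apt/apt.conf.d/):

```shell
# Skip downloading translation indexes, and fetch whole index files
# instead of many small pdiffs (fewer round trips on high-latency links).
cat > /tmp/99slow-io <<'EOF'
Acquire::Languages "none";
Acquire::PDiffs "false";
EOF
cat /tmp/99slow-io   # review, then move into /etc/apt/apt.conf.d/ to enable
```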
[15:15] <lazyPower> Budgie^Smore: i'm mostly dead inside because of all the medication, but i'm present and accounted for.
[15:16] <magicaltrout> tell me about it jrwren, slow or completely time out
[15:16]  * rick_h ships lazyPower some OJ
[15:16] <lazyPower> Budgie^Smore: how are things going for you and your maas k8s deploy?
[15:16] <magicaltrout> i spin up units manually and run apt update before dumping juju workloads on them
[15:16] <lazyPower> rick_h: <3
[15:16] <lazyPower> magicaltrout: are you using the daily image stream?
[15:17] <Budgie^Smore> lazyPower oh I keep hitting hurdle after hurdle there and all of them of my own making probably
[15:17] <lazyPower> magicaltrout: if you're using daily images your deltas should be pretty minimal
[15:17] <magicaltrout> there aren't that many lazyPower, but it takes forever installing a new kernel
[15:17] <lazyPower> magicaltrout: i know if you're using the "stable" image stream you could be somewhere near 3 months of package updates, depending on patch release and which image.
[15:18] <Budgie^Smore> lazyPower the latest one is having me rethink using VBox in favor of kvm due to not being able to change the power types in MaaS 2.1
[15:18] <jrwren> magicaltrout: do you find that dumping juju workloads on them is also slow because the juju workloads run apt update / upgrade as well?
[15:19] <magicaltrout> na thats not as bad jrwren cause the delta is 0 by this time
[15:19] <Budgie^Smore> lazyPower still need to figure out how to solve the cert issue on my cloud k8s cluster too
[15:19] <jrwren> magicaltrout: ah, well that is good.
[15:19] <lazyPower> Budgie^Smore: - cert issue? i think i've missed context here... sounds like you've told me about this one and i forgot about it....
[15:19] <lazyPower> Budgie^Smore: do you mind re-iterating and i can try to help?
[15:20] <Budgie^Smore> lazyPower but that one is getting some larger instances as dev created a tool that will swallow memory like it is going out of fashion
[15:20] <lazyPower> hah
[15:20] <magicaltrout> I spent about 6 hours on tuesday trying to get the k8s routing working and it kept failing, i suspect because the update times were so rubbish juju failed or hadn't updated some services
[15:20] <lazyPower> is it java based? :D
[15:20] <magicaltrout> I have more ram now, just slow disks, so hopefully I'll get further :)
[15:21] <lazyPower> magicaltrout: k8s routing... meaning ingress?
[15:21] <magicaltrout> yeah, and dashboard
[15:21] <magicaltrout> they all failed to route out
[15:21] <Budgie^Smore> lazyPower sure, give me a few mins to pull the actual error but basically it looks like one of the tools is complaining about an unknown CA which is causing it not to display logs either in the dashboard or from kubectl logs
[15:21] <lazyPower> ok, you do know that we dont enable external dashboard access outside of proxy by default right?
[15:21] <lazyPower> you can easily default that, but we dont make it easy because its a huge security hole
[15:21] <magicaltrout> yeah i run the kubectl proxy
[15:21] <magicaltrout> just get 504's
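A quick way to sanity-check the proxied dashboard path in a 1.5-era cluster (hedged: `/ui/` was the dashboard redirect of that era; 504s at this point typically mean the apiserver cannot reach the dashboard pod over the overlay network, which matches the blocked-UDP finding later in the log):

```shell
# Local proxy to the apiserver, backgrounded for the check.
kubectl proxy --port=8001 &
sleep 2
# Print just the HTTP status; 504 = apiserver timed out reaching the pod.
curl -sS -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8001/ui/
kill %1
```

Shown as a fragment since it needs a live cluster and configured kubectl.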
[15:22] <lazyPower> Budgie^Smore: ok, that sounds like 1 of 2 things. I'll wait for the error before i diagnose
[15:22] <magicaltrout> anyway, i'm not there yet today, when I am, we'll see :)
[15:22] <lazyPower> magicaltrout: ok, when you do get there, make sure you have my attention and i'll help
[15:22] <lazyPower> we have a bug open from another user that ran into unexpected behavior
[15:22] <lazyPower> stuff that i haven't been able to reproduce
[15:22] <magicaltrout> i mean i've only been spinning up openstack nodes for it for like 3 hours! :P
[15:22] <lazyPower> so i'm highly interested in finding out wtf is going on
[15:22] <magicaltrout> yeah i saw that ticket
[15:22] <magicaltrout> it looked similar
[15:22] <lazyPower> yeah man, if its the same, lets find it and nail it to the wall
[15:22] <lazyPower> i dont like bugs i cant reproduce
[15:26] <Budgie^Smore> I have one of those with the MaaS team right now lazyPower ;-)
[15:32] <Budgie^Smore> oh I just noticed that I am using IPv6 at home!
[15:38] <lazyPower> Budgie^Smore: you mean a non-reproducible bug by the maintainers?
[15:38] <lazyPower> Budgie^Smore: kill it with fire
[15:39] <Budgie^Smore> lazyPower yeah, install related issue where the rack controller doesn't register to the region controller
[15:39] <lazyPower> :S
[15:39] <lazyPower> fun times, that one
[15:40] <lazyPower> Budgie^Smore: i'm guessing nothing in the logs of the region controller? Nothing indicative that the network is not the culprit?
[15:41] <Budgie^Smore> lazyPower I saved a copy of the virtual disk one of the times it happened but haven't heard from the maintainers if they would like it... there is an error in there but didn't point to the culprit, I suspect network latency issues though
[15:47] <stormmore> ok so lazyPower, here is the output from kubectl --v=8 logs - http://paste.ubuntu.com/24195846/
[15:49] <lazyPower> ok, i see the x509 errors in here. This is consistent with a couple problems that might have arisen...
[15:50] <stormmore> lazyPower, it is worth noting that I only started getting this error once I upgraded my kubectl client to 1.5.3, with 1.5.1 it was an annoying generic catch-all failure
[15:50] <lazyPower> 1 sec, let me finish getting this multi-cloud test run of the etcd3 stuff going. Can you get me the output of juju run-action kubernetes-master/0 debug
[15:50] <lazyPower> post that somewhere secure, it'll have logs + config files that you dont want to leak
[15:51] <lazyPower> stormmore: that sounds like some of the extra debugging that landed in 1.5.2, so it makes sense that was expanded when you upgraded.
[15:51] <lazyPower> but i'll want the output of the debug action on your master so i can verify the tls components were installed correctly
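The debug-action flow being asked for, as a hedged sketch (the action id and archive name below are placeholders; the output bundles logs and config files, hence the warning not to paste it publicly):

```shell
# Queue the action; juju prints an action id.
juju run-action kubernetes-master/0 debug
# Fetch the result once it completes (id is a placeholder):
juju show-action-output <action-id>
# The result names an archive on the unit; copy it down with juju scp, e.g.:
# juju scp kubernetes-master/0:<archive-path> .
```

Fragment only: it requires a deployed kubernetes-master unit.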
[15:52] <stormmore> I am wondering if this is an effect of upgrading the cluster
[15:53] <stormmore> (and messing it up when I just did a juju deploy over the top of the current one)
[15:54] <stormmore> lazyPower, OK to DCC it to you?
[15:56] <arosales> hatch: I added a comment to https://github.com/juju/docs/issues/1651
[15:56] <arosales> hatch: I also opened up https://github.com/juju/docs/issues/1723
[15:56] <hatch> kewl thanks
[15:58] <lazyPower> stormmore: hang on and i'll see about deploying a secure upload
[15:58] <lazyPower> i'm on znc + weechat and haven't used DCC since the 90's
[15:59] <lazyPower> that combination means i'm probably going to have tears at the end of it :P
[15:59] <stormmore> right! yeah it has been awhile for me too... not even sure how well Hexchat handles it
[16:02] <magicaltrout> DCC! jesus, i'd forgotten that was a thing
[16:02] <magicaltrout> those were the days
[16:02] <magicaltrout> it didn't work then
[16:02] <magicaltrout> it won't work now :)
[16:05] <psarwate> Hey folks. I am unable to get juju terms to work with my charm.
[16:06] <psarwate> Even though the terms are not agreed upon, juju still doesn't prompt me
[16:13] <stormmore> OK I have the debug log :) reminds me of sosreport back in the day
[16:22] <magicaltrout> I figured out my dashboard faux pas lazyPower , no UDP routes between the nodes
[16:22] <magicaltrout> oops :)
[16:30] <arosales> hatch: one more https://github.com/juju/docs/issues/1724
[16:30] <lazyPower> magicaltrout: ah interesting
[16:31] <arosales> hmm seems I missed psarwate
[16:31] <arosales> I wonder if Juju was actually deploying his charm though . .  .
[16:33] <hatch> thanks arosales
[16:34] <arosales> also if psarwate was deploying locally then terms will be bypassed
[17:39] <lazyPower> cs:~containers/etcd-25  (edge channel) was just released, introducing deb=>snap migration path and roll forwards to etcd 3.1.3  (juju config etcd channel=3.1/stable)  -- you'll need to follow the migration path in the readme however. All early feedback appreciated
[17:42] <lazyPower> plus sending a notice to the list with a bit more detail than this blurb ^
[17:43] <rick_h> lazyPower: congrats!
[17:44] <lazyPower> rick_h: its pretty cool to see the capability of rolling forward and rolling backwards using a single config option. (potential data mismatch if you're actively receiving data during the forward jump, rollback will restore to the state just prior to the upgrade). but its nice to know we support this now in the event of troubled clusters.
[17:44] <rick_h> lazyPower: yea, cool how things are moving
[20:19] <petevg> cory_fu: you did some work on making rebuilding the python-libjuju _client easier, right?
[20:26] <stormmore> btw lazyPower I am looking at using SDN in this cluster
[20:47] <cory_fu> petevg: Yes.  Are you planning to rebuild it?
[20:48] <petevg> cory_fu: yes. Though we may have to tackle the multiple versions thing sooner rather than later.
[20:48] <petevg> cory_fu: filing a bug and pushing a repro right now ...
[20:50] <petevg> cory_fu: issue at https://github.com/juju/python-libjuju/issues/90, and test reproducing it at https://github.com/juju/python-libjuju/pull/91
[20:50] <petevg> cory_fu: do the docs on building live anywhere?
[20:51] <cory_fu> petevg: Yes, but my Makefile changes seem to have disappeared
[20:51] <petevg> cory_fu: that's why I couldn't find them :-/
[20:53] <cory_fu> wth
[20:56] <petevg> cory_fu: git ate your homework!
[20:59] <cory_fu> petevg: Well, I can't for the life of me figure out what happened to the Makefile changes I made, but they're not strictly necessary and were relatively minor.  The important bit is at https://github.com/juju/schemagen
[21:00] <cory_fu> petevg: Build and run that, directing the output to juju/client/schemas.json, and then run the "client" target: https://github.com/juju/python-libjuju/blob/master/Makefile#L13
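The regeneration flow described above, as a hedged sketch (2017-era, pre-modules Go tooling; the output path and `client` target are taken from the linked Makefile):

```shell
# Fetch and build the schema generator into $GOPATH/bin.
go get github.com/juju/schemagen
# Dump the facade schemas where the Makefile expects them...
schemagen > juju/client/schemas.json
# ...then regenerate juju/client/_client.py from the schema.
make client
```

Fragment only: it assumes a Go toolchain, a Juju source tree for schemagen to introspect, and a python-libjuju checkout.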
[21:00] <petevg> cory_fu: cool. thx.
[21:01] <petevg> cory_fu: since the alpha has breaking changes, this is also the time to worry about different versions of the _client.py in python-libjuju, right? As in: I shouldn't just slam these changes in ...
[21:03] <cory_fu> petevg: Well, it's just one API, and we could work around it in the wrapper around FullStatus that we definitely should have.  But we do need to support facade versions eventually so if you can come up with a good way to handle that, I'm down
[21:04] <magicaltrout> so lazyPower if I run the microbot service and then want to check it
[21:04] <petevg> cory_fu: cool. That sounds more fun than downgrading juju and having to deal with halfway destroyed models cluttering my env :-)
[21:04] <magicaltrout> do I have to munge my hosts file to make it resolve?
[21:04] <cory_fu> petevg: It would be good to have _client.py split up anyway, and splitting it by facade version shouldn't be much harder.
[21:05] <cory_fu> petevg: The hard bit would be figuring out which facade versions to use based on the controller.  I don't know if there's a way to query that
[21:05] <cory_fu> petevg: Also, I would imagine that the object layer code would need to know how to work with the different facade versions as well.
[21:05] <petevg> Ayup.
[21:07] <cory_fu> petevg: Aren't the facades supposed to keep us from having version-specific logic in the code?  I'm a little unclear on how they're supposed to help
[21:08]  * magicaltrout is getting 502's and nothing else \o/
[21:08] <petevg> cory_fu: My expectation would be that if you do nothing, you are operating against stable, and if you pass in a flag somewhere, you operate against some other version.
[21:08] <petevg> cory_fu: it would be cool to do some magic with imports, but I don't think that import time is the right time.
[21:08] <petevg> connection time probably isn't right, either.
[21:10] <cory_fu> petevg: Maybe tvansteenburgh has some insight for us here?
[21:12]  * tvansteenburgh wakes up
[21:13]  * tvansteenburgh reads scrollback
[21:15] <tvansteenburgh> petevg: cory_fu: https://github.com/juju/python-libjuju/issues/49. we can dynamically use the right facade version pretty easily. the trick will be handling changing api signatures in the object layer
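The "dynamically use the right facade version" idea can be sketched in a few lines: the controller's Login response advertises which versions of each facade it supports, and the client picks the highest version both sides implement. The function name and shape here are illustrative, not python-libjuju's actual API:

```python
# Pick the highest facade version common to controller and client.
def best_facade_version(offered, implemented):
    """offered: versions the controller advertises for a facade;
    implemented: versions this client ships. Returns the highest
    common version, or None if there is no overlap."""
    common = set(offered) & set(implemented)
    return max(common) if common else None

print(best_facade_version([1, 2, 3], [2, 3, 4]))  # → 3
print(best_facade_version([1], [2, 3]))           # → None
```

The hard part flagged above remains: the object layer still has to cope with differing request/response shapes per version, which version selection alone doesn't solve.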
[21:16] <tvansteenburgh> happy to discuss that at some point, but that point is not today :)
[21:16] <magicaltrout> okay... lazyPower my microbot containers are all paused... no idea why or what that is, answers on a postcard when you get bored
[21:16] <petevg> tvansteenburgh: Cool. I'm going to head out to dinner soon, anyway. Thank you for the linky :-)
[21:16] <tvansteenburgh> sure thing
[21:29] <lazyPower> magicaltrout Budgie^Smore - i think i found the issue with your x509 validations - https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/238
[21:31] <lazyPower> magicaltrout Budgie^Smore - if you could check your /etc/default/kubelet files for the presence of the tls flags + disabling anonymous auth on kubelet, that would be choice to validate its the root cause of the validation issues post 1.5.3 update
[21:33] <magicaltrout> i don't have cert issues :)
[21:33] <magicaltrout> not yet anyway
[21:34] <magicaltrout> http://pastebin.com/raw/gSWGEk1E
[21:34] <magicaltrout> any idea lazyPower \o/ :)
[21:34] <magicaltrout> even docker run -d dontrebootme/microbot:v1
[21:34] <magicaltrout> is failing on me
[21:34] <magicaltrout> which is weird cause I have other containers running fine
[21:35] <lazyPower> whats this about ipv6
[21:36] <lazyPower> and it cant seem to find any of the image layers
[21:36] <lazyPower> i'm not sure how you got into this state but this is certainly fun magicaltrout
[21:36] <magicaltrout> lol
[21:36] <magicaltrout> this is just post install
[21:36] <lazyPower> not without some effort it isnt
[21:37] <lazyPower> how do you get docker to know about the meta of the image but have none of the data to back it up?
[21:37] <magicaltrout> i swear gov'ner, other than installing the charms i've not touched it from base xenial
[21:40] <lazyPower> yeah i'm not sure what to recommend here magicaltrout
[21:40] <lazyPower> i've not seen this before. I suspect i'm missing pieces to the puzzle
[21:40] <lazyPower> magicaltrout best i can suggest at this juncture would be to capture the model with a juju-crashdump report and post that for post analysis
[21:44] <magicaltrout> reboot managed to make things worse \o/
[21:44] <magicaltrout> ho ho, nothing better on a friday night
[21:45] <stormmore> oh nice lazyPower! let me look, sorry I have taken today as a chance to write some documentation
[21:48] <stormmore> you still there lazyPower? http://paste.ubuntu.com/24197670/ is the /etc/default/kubelet from one of the workers
[21:50] <lazyPower> stormmore yeah looks like the tls flags + auth flags didn't make it in the upgrade
[21:50] <lazyPower> Cynerva not sure if you're still here, its pretty late in the day. ^
[21:50] <lazyPower> i filed https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/238 in reference to this.
[21:51] <lazyPower> stormmore thanks for validating, we should have a patch or working fix for this soon
[21:51] <lazyPower> the manual fix is pretty simple too, but i'd rather the charms take care of themselves and update that defaults file.
[21:51] <stormmore> so the "workaround" would be to add those flags?
[21:51] <lazyPower> yeah
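The manual workaround confirmed above, as a hedged sketch: the flag names are upstream kubelet options of that era, but the CA path depends on where the charm installed its certs on the worker, so treat it as a placeholder:

```shell
# Edit /etc/default/kubelet and append to the kubelet args
# (cert path below is a placeholder -- use the CA the charm installed):
#   --anonymous-auth=false --client-ca-file=/path/to/ca.crt
# Then restart the kubelet so the flags take effect:
sudo systemctl restart kubelet.service
```

Fragment only; it assumes a CDK worker where the charm manages /etc/default/kubelet, and the charm fix in issue #238 supersedes it.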
[21:52] <stormmore> yeah no problem, I think I have bought a little bit of time on that anyway. need to add 3 larger instances and remove 3
[21:57] <magicaltrout> lazyPower: the ipv6 stuff is cause flannel has an ipv6 address and dishes them out to the container eth adapters
[21:58] <magicaltrout> i removed /var/lib/docker which seems to have resolved the layer issues
[21:58] <magicaltrout> what is amazing though is
[21:59] <magicaltrout> nginx-ingress-controller runs fine on the same machine?!
[21:59]  * magicaltrout needs hard liquor
[22:02] <petevg> cory_fu: my first (probably naive) attempt at "make client" failed with this error: https://pastebin.canonical.com/182956/  (This is using a schema.json that I generated with schemagen).
[22:02] <petevg> cory_fu: it's dinner time, though, so I'm going to do that, and then worry about things more on Monday.
[22:02] <petevg> Have a happy weekend!
[22:02] <magicaltrout> oooh fsck
[22:02] <magicaltrout> you can't docker exec into microbot
[22:03] <magicaltrout> that explains that then
[22:03] <cory_fu> petevg: Damnit!  That was the error I fixed in the same change where I modified the Makefile target
[22:03] <cory_fu> petevg: I really don't want to have to re-do that.  It was a bit of a PITA
[22:04] <petevg> cory_fu: darn!  I will try doing some git Magic on Monday. Maybe I can find your change ...
[22:06] <magicaltrout> oooh blimey microbot is running
[22:06] <magicaltrout> i take it all back lazyPower
[22:06] <magicaltrout> its an amazing platform
[22:06] <magicaltrout> never questioned it!
[22:10] <magicaltrout> lazyPower: other question, trying to understand ingress stuff.  Is the microbot supposed to route through the loadbalancer?
[22:11] <magicaltrout> or mbruzek you're on the hook as well :P
[22:24] <magicaltrout> ooh the loadbalancer is only for the masters?
[22:42] <mbruzek> magicaltrout: no microbot is not going through the kubeapi-load-balancer, it goes through a kubernetes load balancer
[22:42] <mbruzek> when you create ingress rules
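The point above: the kubeapi-load-balancer only fronts the masters; workload traffic like microbot's enters via ingress rules on the workers. A quick way to inspect what got created (hedged: the ingress name is illustrative, whatever the microbot action actually created may differ):

```shell
# List ingress resources and the hosts/paths they route.
kubectl get ingress
# Inspect one in detail (name is a placeholder):
kubectl describe ingress microbot-ingress
```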
[22:49] <cory_fu> petevg: ah ha!  It's in the branch bug/77-deploy-resources!  You can easily see the changes I made in this commit: https://github.com/juju/python-libjuju/commit/ef59035b2596a1998615cb0ad3f73fe539531898
[22:49] <petevg> cory_fu: yay! Thx.
[22:50] <cory_fu> petevg: If you want, I could submit that commit by itself as a PR
[22:50] <cory_fu> Probably worth doing
[22:51] <petevg> cory_fu: sure. Thx.
[22:53] <cory_fu> petevg: https://github.com/juju/python-libjuju/pull/92
[23:02] <stormmore> lazyPower, you don't have any toys to help devs assess the resource usage - cpu, mem - of their containers do you?