[12:00] <Dwellr> query.. what does juju expose actually do from an lxc/lxd perspective ?
[12:22] <rick_h> Dwellr: nothing. expose is meant to update firewall rules to open the ports the charm has declared. On LXD and other providers without a built-in firewall mechanism (security groups or the like) there's nothing there to work against
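The behaviour rick_h describes can be sketched with a hypothetical deployment; the application name `mysql` is illustrative, not from the conversation:

```shell
# On a provider with security groups (e.g. AWS), expose updates the
# instances' security groups to open the ports the charm has declared.
juju deploy mysql
juju expose mysql      # firewall rules added for the charm's open ports
juju status mysql      # the application now shows as exposed

# On the LXD provider the same commands succeed, but there is no
# security-group equivalent for Juju to update, so expose is a no-op.
```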
[13:01] <SimonKLB> is there any reason why the ceph charm does not implement the ceph-admin interface?
[13:03] <SimonKLB> i would like to try out ceph with kubernetes with a minimal setup as a poc, not having to deploy ceph-mon x3 and ceph-osd x3 if possible
[13:04] <rick_h> SimonKLB: not sure. Have to check in with the OS folks. Cool experiment to try out.
[13:07] <SimonKLB> jamespage icey cholcombe ^
[13:08] <jamespage> SimonKLB: the ceph charm is officially deprecated; as such it does not always grow the same features as ceph-mon and ceph-osd do over time
[13:08] <SimonKLB> jamespage: got it, is there any way to have a ceph deployment smaller than 6 machines?
[13:08] <jamespage> only reason it's still in the charm store is we've not found a satisfactory migration approach for existing ceph deployments.
[13:09] <jamespage> SimonKLB: what provider are you using?
[13:09] <SimonKLB> jamespage: aws
[13:09] <jamespage> SimonKLB: hmm no not really
[13:09] <SimonKLB> jamespage: alright, then I'll go with that!
[13:09] <jamespage> with MAAS you can of course place ceph-mon on LXD containers, and ceph-osd alongside k8s
[13:10] <SimonKLB> ah, would that not work on aws though?
[13:11] <SimonKLB> ceph-mon on 3 LXDs and ceph-osd on one worker each would be pretty neat
[13:11] <SimonKLB> just to try it out
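The MAAS-style placement jamespage mentions could look roughly like this; the machine numbers are placeholders and assume three machines are already enlisted:

```shell
# Hedged sketch: three ceph-mon units in LXD containers, with ceph-osd
# colocated on the same machines as the Kubernetes workers.
juju deploy ceph-mon -n 3 --to lxd:0,lxd:1,lxd:2
juju deploy ceph-osd -n 3 --to 0,1,2
juju add-relation ceph-mon ceph-osd
```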
[13:19] <icey> SimonKLB: AWS doesn't have the overlay networking required to get packets into the ceph-mon units in containers
[13:19] <SimonKLB> icey: ah too bad!
[13:24] <wpk> (that problem will be solved in 2.3)
[13:27] <SimonKLB> wpk: I'm running 2.3-alpha1.1, that fix is not there yet?
[13:29] <wpk> SimonKLB: no
[14:19] <bdx> wpk: you are my hero
[14:20] <bdx> great news
[14:42] <tvanhove> does anybody know what is up with the bigtop repos?
[14:42] <tvanhove> kafka charm is currently failing because of 403 forbidden
[14:42] <tvanhove> http://bigtop-repos.s3.amazonaws.com/releases/1.2.0/ubuntu/16.04/x86_64
[14:50] <rick_h> kwmonroe: ^
[14:51] <kwmonroe> hm - not sure tvanhove, but i'll send a "wat?" to the dev list
[14:57] <Dwellr> wondering if (after installing conjure-up kubernetes-core) I should be using iptables to route traffic to the node InternalIP .. or if I should be figuring out how to add an 'ExternalIP' to the node
[15:07] <kwmonroe> tvanhove: not that you needed it, but i verified the 403 on a different arch as well.  looks like something is awry with all 1.2.0 repos:
[15:07] <kwmonroe> E: Failed to fetch http://bigtop-repos.s3.amazonaws.com/releases/1.2.0/ubuntu/16.04/ppc64le/pool/contrib/k/kafka/kafka_0.10.1.1-1_all.deb  403  Forbidden
[15:13] <tvanhove> yeah we were setting up for a demo next week and noticed the failure in our juju storm deployments with kafka
[15:26] <kwmonroe> tvanhove: mail sent to dev@bigtop.apache.org.  i'll keep you in the loop as soon as we figure out what's up.
[15:27] <kwmonroe> tvanhove: until then, one possible workaround would be to manually set the repo on each affected unit to the CI builders.  to do that, you'd edit your apt sources like this:  http://paste.ubuntu.com/25445303/
[15:29] <kwmonroe> i've verified an apt-get update / install works from those repos.
[15:30] <kwmonroe> buuuuut, that's upstream vs the official 1.2.0 release.  so don't go to production with that ;)
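The paste linked above isn't reproduced here, but the workaround kwmonroe describes amounts to pointing apt at a different bigtop repo on each affected unit. The sources-file path and CI repo URL below are placeholders, not the real values from the paste:

```shell
# Placeholder path and replacement URL -- adjust to the actual paste.
SOURCES=/etc/apt/sources.list.d/bigtop.list
sudo sed -i.bak \
  's|http://bigtop-repos.s3.amazonaws.com/releases/1.2.0|http://ci.example.org/bigtop/upstream|' \
  "$SOURCES"
sudo apt-get update
```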
[15:34] <tvanhove> it's just for demos right now
[15:34] <tvanhove> no production
[15:34] <tvanhove> thanks :)
[15:34] <kwmonroe> "it's just for demos" <-- that's what they all say ;)
[15:34] <tvanhove> ;)
[16:03] <stormmore> o/ juju world
[16:07] <kwmonroe> \o stormmore
[16:13] <kwmonroe> tvansteenburgh: do you recall if stub's "hookenv.principal_unit" fix was the only thing in ch-0.18.1 (vs 0.18.0)?  wanna make sure i'm reading this right: https://code.launchpad.net/~charm-helpers/charm-helpers/devel
[16:15] <tvansteenburgh> kwmonroe: yeah it was just that one commit
[16:15] <kwmonroe> ack, thx tvansteenburgh
[17:56] <xarses> hml: is there a way to remove/stop the auto subnet scanning neutron subnets for network-space mapping? it keeps finding embarrassing duplicates and crashes the whole install
[17:57] <hml> xarses: i’m not sure, let me see what I can find out.
[17:58] <hml> rick_h: ^^^ any ideas?
[17:58]  * rick_h reads backlog
[17:58] <xarses> https://gist.github.com/xarses/307a07d290fcc9f48008b3ae1d192f05#file-duplicate-neutron-subnets
[17:59] <rick_h> hml: xarses is that in the neutron charm itself? Or in juju trying to figure out what's up?
[17:59] <hml> rick_h: i think it’s juju investigating subnets
[17:59] <xarses> juju bootstrap on an OpenStack cloud fails with a duplicate subnet in neutron
[17:59] <rick_h> yea, gotcha
[18:00] <hml> xarses: oh, try bootstrapping with the network uuid instead?
[18:00] <rick_h> normally that's the bootstrap path ^
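hml's suggestion, sketched with placeholder names (`network` is the OpenStack-provider model config key; the cloud name, controller name, and UUID are illustrative):

```shell
# Find the UUID of the network Juju should boot instances on, then pin
# it at bootstrap so Juju doesn't have to pick among ambiguous subnets.
openstack network list
juju bootstrap mystack mycontroller --config network=<network-uuid>
```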
[18:00] <hml> xarses: i thought it was a different issue
[18:00] <xarses> no, it's scanning networks in the output of `openstack subnet list`
[18:01] <xarses> the selection of which network to boot the instance on is not the problem here
[18:01] <xarses> it's found 2 duplicates on me so far, and I'm guessing I have another 4 based on what happened here
[18:02] <hml> xarses: sounds like bootstrap is failing when discovering subnets for juju - where juju list-subnets etc would be used
[18:02] <xarses> but the bootstrap halts on reading the duplicate
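A quick way to spot the duplicates xarses is hitting, assuming the openstack CLI is configured for the target cloud: list the subnet CIDRs and print any that occur more than once.

```shell
# Print CIDRs that appear more than once in neutron.
openstack subnet list -f value -c Subnet | sort | uniq -d
```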
[18:08] <hml> xarses: while we look for a workaround on this… can you file a bug on this please?
[18:08] <xarses> sure, on lp?
[18:09] <hml> xarses: yes: https://bugs.launchpad.net/juju
[18:09] <hml> xarses:  I have an idea, but not sure if it’ll work - let me test out first
[18:10] <xarses> cool
[18:13] <xarses> yay, it only took 3 days, but I finally have a controller installed
[18:13] <hml> xarses: w00t!  sorry it took so long.  OpenStack is one of the harder clouds to bootstrap, unfortunately.
[18:13] <kwmonroe> tvanhove: bad news!  see the [IMPORTANT] thread here:  http://mail-archives.apache.org/mod_mbox/bigtop-user/201708.mbox/browser.  tl;dr, kafka was removed from the repos due to licensing.
[18:13] <xarses> hml: don't worry about the workaround, I was able to remove the duplicates more easily than expected
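For anyone following along, removing a duplicate likely amounts to something like this; the subnet ID is a placeholder, and you should confirm nothing is attached before deleting:

```shell
openstack subnet show <duplicate-subnet-id>    # verify it's unused
openstack subnet delete <duplicate-subnet-id>
```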
[18:14] <hml> xarses: good news -
[18:17] <xarses> ya, I wish it wasn't so stubborn; the main issues were with the image and tools metadata process, which is overly weird to the uninitiated
[18:19] <hml> xarses: usually the tools aren’t an issue - the images are, because they are specific to your openstack.  we can’t just download the info - it’s not an issue with other clouds.  :-/
[18:19] <hml> xarses: that said, we’re looking to improve the documentation on this
[18:19] <xarses> it was the offline mirroring I had to do for the tools, to get them past whatever was wrong with the network that was causing the downloads to time out
[18:20] <xarses> The network guys still haven't come back to me on that
[18:20] <zeestrat> xarses: Somewhat related is this bug (though that is concerning the generic subnet that is created for HA routers): https://bugs.launchpad.net/juju/+bug/1710848
[18:20] <mup> Bug #1710848: Bootstrapping Juju 2.2.x fails on a Openstack cloud with Neutron running in HA. <network> <openstack-provider> <juju:Incomplete> <https://launchpad.net/bugs/1710848>
[18:21] <xarses> oh, well that's about what I was about to report
[18:22] <xarses> close enough anyway
[18:27] <xarses> I whacked it back to New; while Confirmed might be just as valid, it's not my project
[18:28] <xarses> so it doesn't languish in some incomplete filter
[18:28] <zeestrat> xarses: Great. Feel free to hit the "affects me too" button too.
[18:28] <zeestrat> Duplicate subnets are a pretty common scenario in OpenStack, so Juju will need to handle that anyway
[18:33] <xarses> yep
[18:39] <hml> xarses: good thing the subnets were easy to remove - my idea didn’t work. :-(
[18:50] <xarses> happens
[20:53] <xarses> thanks for your help hml, wouldn't have gotten through this w/out it
[20:53] <hml> xarses: glad I could help!
[20:54] <xarses> now I can continue with the getting started videos
[20:54] <xarses> =)
[20:54] <hml> :-)
[20:54] <hml> there’s a juju show on youtube also with different topics - maybe that’s what you’ve found?
[20:56] <xarses> https://www.youtube.com/watch?v=ovsBVZsQqtg
[20:56] <xarses> for  1.25
[20:56] <xarses> which means a bunch of these commands aren't around anymore
[20:56] <hml> 1.25 is a bit different than 2.0
[20:56] <hml> the concepts have changed too
[20:56] <xarses> ya, finding useful videos is hard
[20:57] <xarses> most are a billion years old
[20:57] <xarses> I've run into a few that appear to be 0.x
[20:58] <hml> here’s one of the bi-weekly juju show: https://www.youtube.com/watch?v=YoZsP7TDyZI
[20:58] <hml> let me look for more
[21:00] <xarses> ya, I've watched most of that and felt lacking from it
[21:01] <hml> ah
[21:03] <hml> that’s more on-going juju news rather than getting started
[21:07] <xarses> ya
[21:58] <xarses> zeestrat: hmm, a possible workaround is to hide the duplicate networks from the user bootstrapping the controller
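One hedged way to do that hiding, assuming the duplicates live in other projects and are only visible because they are shared: make them non-shared so the bootstrapping user's project no longer sees them. The network ID is a placeholder.

```shell
# Stop sharing the duplicate network across projects (placeholder ID).
openstack network set --no-share <duplicate-network-id>
```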