[00:00] <bdx> I just need to add zks to the deployment matrix
[00:00] <kwmonroe> yup
[00:00] <kwmonroe> then, if hbase or zks change, install_hive will run through the actual hive.install(**args) method
[00:02]  * kwmonroe wanders off to dinner
[00:03] <magicaltrout> dinner?!
[00:31] <bdx> kwmonroe: https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hive/layer-hive/reactive/hive.py - still giving the same result (not getting the hive zk config in the hive-site.xml)
[00:32] <bdx> possibly I've missed adding something somewhere to get those picked up all the way
[00:33] <bdx> I feel it's close
[00:36] <bdx> oooo I think I see it
[00:36] <bdx> well nm
[00:40] <bdx> willing to bet that https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hbase/layer-hbase/reactive/hbase.py#L70 matters, that it's named 'zookeepers' somewhere outside my peripheral vision
[00:40] <bdx> I've decided to name mine zks https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hive/layer-hive/reactive/hive.py#L95
[00:41] <bdx> bet if I s/zks/zookeepers/ it will be gold
[00:44] <bdx> oh man .... these layers don't have a "series" tag in the metadata.yaml
[00:45] <bdx> I've been deploying the wrong thing all this time GRRRR
[00:46] <bdx> it defaults to repo/builds/trusty lol oh man
[00:47] <bdx> at least hive will be super consistent with hbase :)
[01:17] <bdx> oooooh still not working, but I bet I need to add entries here https://github.com/apache/bigtop/blob/be9a183b4db8f183c14cc9a4ed853cf7bbbab2e5/bigtop-deploy/puppet/hieradata/bigtop/cluster.yaml#L191
[05:51] <bdx> kwmonroe: I found the source of the issue
[05:51] <bdx> cat /etc/hive/conf/hive-site.xml -> https://paste.ubuntu.com/p/fm7mMsyvJS/
[05:52] <bdx> as you can see I'm not getting the vars that I want in there, even though the template has them https://github.com/jamesbeedy/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop_hive/templates/hive-site.xml
[05:53] <bdx> and the facts are there on the filesystem - cat /home/ubuntu/bigtop.release/bigtop-1.2.1/bigtop-deploy/puppet/hieradata/site.yaml -> http://paste.ubuntu.com/p/dK9mhgmYvc/
[05:53] <bdx> so I figured puppet must be using some template it pulls from upstream
[05:54] <bdx> instead of what's in the bigtop repo
[05:54] <bdx> which is correct
[05:54] <bdx> and here is the template that is used https://paste.ubuntu.com/p/T5dBbw8hzm/
[05:55] <bdx>  /home/ubuntu/bigtop.release/bigtop-1.2.1/bigtop-deploy/puppet/modules/hadoop_hive/templates/hive-site.xml
[05:55] <bdx> lol
[05:55] <bdx> you could have just told me
[05:55] <bdx> either way
[05:56] <bdx> I think what I have in my hive_config_add_zookeeper branch will work just fine
[05:56] <bdx> once we have the correct template there
[05:57] <bdx> alright I can stop now
[05:57] <bdx> thanks for your help today
[05:58] <bdx> here's the PR btw https://github.com/apache/bigtop/pull/344/files
[06:11] <bdx> setting 'bigtop_version: master' was all I needed
[06:26] <bdx> geh ... that did not allow me to do what I thought it would... still no way to get what's in my branch onto the instance
[06:40] <bdx> the bigtop-repo resource is the way!!
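A sketch of what the bigtop-repo resource route looks like. The resource name comes from the chat; the archive filename is an assumption, and on juju 2.x the subcommand is `juju attach` (later renamed `juju attach-resource`):

```shell
# zip up the patched bigtop tree and attach it as the charm's resource
zip -r bigtop-repo.zip bigtop/
juju attach hive bigtop-repo=./bigtop-repo.zip
```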
[08:47] <gsimondo1> Hi. Trying to juju bootstrap with a LXC container as controller. Fails because of publickey. What's the conventional way of going about manual clouds and keys with juju?
[11:59] <petevg> gsimondo1: do you mean that you're doing "juju boostrap localhost"? Or are you doing something else?
[12:00] <petevg> gsimondo1: With some of the versions of juju 2, you needed to add the credential manually, with "juju add-credential localhost".
[12:00] <petevg> I think that's been fixed with the latest releases, though.
[12:02] <petevg> gsimondo1: you might want to try "snap install juju" to get the latest juju, if you don't want to mess around with adding the credential.
[12:03] <magicaltrout> petevg ... he lives
[12:03] <petevg> magicaltrout: I'm even in your time zone for a little while.
[12:04] <magicaltrout> jesus
[12:04] <petevg> Enjoying lovely snowy Dublin.
[12:04] <magicaltrout> yeah its pretty bleak
[12:04] <magicaltrout> i'm stuck at home today
[12:04] <magicaltrout> kids are pissing me off
[12:04] <petevg> Snow day!
[12:05] <magicaltrout> kill me
[12:07] <petevg> magicaltrout: it'll melt eventually. It's supposed to warm up and rain by the weekend :-)
[12:43] <gsimondo1> petevg: doing juju bootstrap manual/ip.address where ip.address is the address of an LXC container
[12:44] <gsimondo1> petevg: juju uses that default ubuntu user. I was looking into configuration options to mess with that.
[12:44] <gsimondo1> petevg: temporary solution is copying ssh keys in the container before running juju bootstrap
[12:44] <gsimondo1> petevg: currently thinking and looking if there are some conf management tool integrations for this stack - can't find any. not sure if it's a good direction either
[12:50] <gsimondo1> petevg: also running juju 2.3.4-xenial-amd64. why would I reinstall?
[12:54] <petevg> gsimondo1: if you're up to date, no need to reinstall.
[12:55] <petevg> I'm afraid that I don't know manual providers well.
[12:55] <petevg> Anyone else have a suggestion?
[12:56] <magicaltrout> i use manual clouds all the time, but i usually have a precanned image kicking around with a key in it that becomes the base image
[12:56] <gsimondo1> magicaltrout: that makes a lot of sense
[12:57] <gsimondo1> magicaltrout: do you do cross-host lxc networking? using flannel by any chance?
[12:57] <magicaltrout> afraid not gsimondo1
[12:58] <petevg> gsimondo1: according to my lunch companions, you are doing the correct thing by copying the key into the image.
[12:58] <magicaltrout> ircing on your lunch break
[12:58] <magicaltrout> what a geek
[12:58] <gsimondo1> haha
[12:59] <petevg> Since juju doesn't create the machine, it can't drop the key on the machine by itself
[12:59] <petevg> A geek is what I am :-)
[12:59] <gsimondo1> petevg: any way to make it use a different user than ubuntu?
[12:59] <magicaltrout> yeah
[12:59] <magicaltrout> you just alter the ssh command
[12:59] <magicaltrout> when adding the machine
[13:00] <magicaltrout> user@
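Putting the pieces of this thread together, a manual-cloud sketch. The IP addresses, container name, and user are placeholders; `juju bootstrap manual/user@host` and `juju add-machine ssh:user@host` are the stock invocations:

```shell
# get your public key into the container first, e.g. with LXD
# (this overwrites authorized_keys; append instead if the file exists)
lxc file push ~/.ssh/id_rsa.pub mycontainer/home/myuser/.ssh/authorized_keys

# then bootstrap / add machines as that user instead of the default 'ubuntu'
juju bootstrap manual/myuser@10.0.3.5
juju add-machine ssh:myuser@10.0.3.6
```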
[13:00] <gsimondo1> gotcha thanks
[13:00] <gsimondo1> that's what I did to set it up but just making sure I'm not hacking around it too much
[13:00] <magicaltrout> na what you're doing sounds pretty standard
[13:01] <petevg> It sounds like you've got stuff figured out. We're always glad to verify, though :-)
[13:01] <gsimondo1> I'm more than grateful! :)
[13:01] <magicaltrout> don't ask petevg he only knows openstack
[13:01] <magicaltrout> he's been corrupted
[13:02] <gsimondo1> spoiled lovechild?
[13:02] <magicaltrout> those were the days... when kevin used to have team members....
[13:02] <petevg> My mind has been expanded :-p
[13:03] <magicaltrout> as I told beisner last night
[13:03] <magicaltrout> openstack is so 2010
[13:03] <gsimondo1> do I feel a pitch for MaaS coming up?
[13:03] <magicaltrout> ha
[13:04] <magicaltrout> i'm only bitter because i have to run CDK on openstack
[13:04] <magicaltrout> Kubernetes on VMs.....
[13:04] <magicaltrout> cause thats a great idea
[13:05] <gsimondo1> I suddenly feel lucky using this LXC cross host networking with questionable amount of testing that went into it
[13:07] <petevg> Exciting!
[13:08] <magicaltrout> everyone loves software networks with questionable testing
[13:08] <magicaltrout> its what petevg lives for
[13:08] <petevg> That just means I get to have fun writing tests :-)
[13:09] <petevg> And making test frameworks...
[13:09] <petevg> Sometimes, I write non-test code, too...
[13:10] <magicaltrout> I'll refer you at this point to my point about you IRCing at lunch
[15:56] <magicaltrout> alright folks
[15:56] <magicaltrout> need a hand here
[15:56] <magicaltrout> kjackal: or someone
[15:57] <magicaltrout> i need to set these GC variables
[15:57] <magicaltrout> and if I update the snaps args file by hand
[15:57] <magicaltrout> your stuff zaps it
[15:57] <magicaltrout> anyone got any bright ideas?
[15:59] <kwmonroe> magicaltrout: what "stuff" zaps it?  you talking about the k8s snaps?
[15:59] <kwmonroe> if so, ryebot loves the k8s snaps...
[15:59] <magicaltrout> i dunno if its the snap
[16:00] <magicaltrout> i suspect its the charm
[16:00] <stub> cory_fu: https://github.com/stub42/juju-relation-pgsql/pull/1/files if you are interested. The new Endpoint version of the pgsql interface is better in all respects I think. It's more confusing than it needs to be due to backwards compatibility.
[16:00] <ryebot> magicaltrout: which snap/config are we talking about?
[16:01] <magicaltrout> kubelet ryebot
[16:01] <magicaltrout> the charm config doesn't let me set GC attributes
[16:01] <kwmonroe> bdx: about the series stuff... we don't set that in the individual charms since all the bigtop charms are xenial-only.  that's defined in https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/metadata.yaml.  but yeah, you have to build with charm build --series xenial :)
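For reference, the series declaration kwmonroe mentions lives in the base layer's metadata.yaml and looks roughly like this (field name from the charm metadata schema; excerpt assumed):

```yaml
# layer-apache-bigtop-base metadata.yaml (excerpt)
series:
  - xenial
```

Charms built on the layer inherit it, but a local build still wants the series spelled out: `charm build --series xenial`.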
[16:01] <magicaltrout> but then when I append the args file for kubelet at some point they get removed
[16:01] <ryebot> magicaltrout: okay cool - there's a kubelet-extra-args option on kubernetes-worker
[16:01] <magicaltrout> aah
[16:02] <kwmonroe> k8s: almost as easy as big data.
[16:02] <ryebot> magicaltrout: if you have any trouble with it hit me up and I'll help out
[16:02] <magicaltrout> oh
[16:02] <magicaltrout> hold up
[16:02] <kwmonroe> run ryebot
[16:02] <magicaltrout> thats the bug we filed last week
[16:02] <magicaltrout> which impacts this
[16:02] <ryebot> welp, that sounds like trouble
[16:02] <ryebot> let me take a look
[16:02] <magicaltrout> ha
[16:03] <magicaltrout> you guys put some logic into the charm and filter out flags you don't grep
[16:03] <magicaltrout> but the GC settings aren't picked up
[16:03] <magicaltrout> or so the greek told me
[16:03] <magicaltrout> i take most things he says with a pinch of salt
[16:03] <magicaltrout> but this seemed to ring true
[16:04] <ryebot> magicaltrout: looks like this one? https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/498
[16:04] <magicaltrout> yeah thats me
[16:04] <ryebot> magicaltrout: give me a few minutes to dig into this
[16:04] <magicaltrout> thanks
[16:12] <ryebot> magicaltrout: Yeah this is a bit of a pickle. Have you looked into the --eviction-hard and --eviction-soft cli options for kubelet?
[16:13] <ryebot> They're supposed to be the replacements for the deprecated gc flags.
[16:13] <magicaltrout> well
[16:13] <magicaltrout> no
[16:13] <magicaltrout> can i set them? :P
[16:14] <ryebot> You should be able to, since they're in the --help output :)
[16:14] <ryebot> Also look at --eviction-minimum-reclaim
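The replacement flags ryebot suggests can go through the charm option mentioned earlier in the thread, e.g. (option and flag names as discussed; the threshold values here are invented examples, not recommendations):

```shell
juju config kubernetes-worker \
  kubelet-extra-args="eviction-hard=memory.available<100Mi,nodefs.available<10% eviction-minimum-reclaim=memory.available=0Mi,nodefs.available=500Mi"
```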
[16:15] <magicaltrout> alright, i'll give it a go and see if it stops the NIST folks complaining
[16:15] <magicaltrout> they want max-dead-containers set and container-ttl set
[16:15] <magicaltrout> but i'll see if i can get away with it
[16:17] <ryebot> magicaltrout: If it helps, there's a deprecation/replacement table here: https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/
[16:17] <ryebot> hopefully that's convincing
[16:17] <magicaltrout> ha
[16:17] <magicaltrout> yeah i know that inside out these days :P
[16:17] <magicaltrout> alright thanks ryebot i'll translate what we have and see how i get on
[16:17] <ryebot> haha sorry to hear that xD
[16:17] <ryebot> alright good luck!
[21:31] <kwmonroe> hey cory_fu, does charm build still require layers to be at the top level of a repo?